Can computers – and people – learn to think bottom-up?

Tufts University biologist Michael Levin and Columbia University neuroscientist Rafael Yuste have an ambitious project in hand: explaining how evolution "'hacked' its way to bottom-up intelligence" – that is, starting from nothing. They base their thesis on computer science:

This is intelligence in action: the ability to achieve a particular goal or solve a problem by taking new steps in the face of changing circumstances. It is evident not only in intelligent people, mammals, birds and cephalopods, but also in cells and tissues, individual neurons and neural networks, viruses, ribosomes and RNA fragments, all the way down to motor proteins and molecular networks. At all of these scales, living things solve problems and achieve goals by flexibly navigating different spaces – metabolic, physiological, genetic, cognitive, behavioral.

But how did intelligence appear in biology? The question has preoccupied scientists since Charles Darwin, but it remains unanswered. The processes of intelligence are so complex, so multi-layered and so baroque that it is no wonder some people are tempted by stories of a top-down Creator. But we know that evolution had to be able to find intelligence on its own, from the bottom up.

Michael Levin and Rafael Yuste, "Modular cognition" at Aeon (March 8, 2022)

Can it really work? The big problem for evolution is assembling a large number of components into a particular pattern. The probability of correct assembly decreases exponentially as the pattern grows. Richard Dawkins suggested in his book The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe Without Design (1986) that perhaps evolution can produce things that seem intelligently designed if, instead of putting all the components together at once, it assembles them piecemeal.

Piecemeal assembly reduces the problem to a series of linear choices. Ever since the early pre-Socratic philosophers proposed materialism and the origin of organisms through variation and selection, thinkers have wondered how variation and selection alone could produce highly complex and specified organisms that include independent problem-solving intelligence.

The one area where we see complex specified artifacts such as the James Webb Space Telescope being created regularly is intelligent design by humans. This has led many thinkers over the millennia to conclude that organisms are also the product of intelligent design.

But in "Modular cognition," Levin and Yuste disagree. They take the same approach as Dawkins. They argue that if evolution can proceed by piecemeal variation on individual modules, the plasticity seen in stem cells, tadpoles, and animal cognition will emerge.

They go further and propose that higher-order modules can emerge from lower-level variations of modules. The process envisioned is similar to how words can vary and form sentences which can in turn vary to form paragraphs, and so on. They call this process “modular cognition”.

In doing so, they make a very important implicit assumption. They assume that as we move to higher and higher levels of modularity, intermediate steps do not become considerably more difficult, if not impossible, to find. This is a key assumption to keep in mind as we move forward.

Let's try the authors' idea with word-ladder puzzles. In a word ladder, one word is transformed into another by changing one letter at a time. The catch is that each intermediate step must also be a valid word. This rule is analogous to the common-sense assumption in evolutionary biology that if one type of creature is to evolve into another type of creature, each intermediate type must survive and reproduce.

So let's try to turn a "cat" into a "dog" by modular variation.

  1. CAT
  2. COT
  3. DOT
  4. DOG

Pretty easy, huh? This makes plausible the idea that modular cognition can explain the origin of creative intelligence from monad to human.
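The word-ladder game is easy to mechanize. Here is a minimal sketch (the function name and the tiny word list are my own, for illustration) that finds a ladder by breadth-first search over a dictionary:

```python
from collections import deque

def word_ladder(start, goal, dictionary):
    """Breadth-first search for a ladder of one-letter changes,
    where every intermediate step must be a word in the dictionary."""
    words = set(dictionary) | {start, goal}
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        word = path[-1]
        if word == goal:
            return path
        # Try every one-letter variant of the current word.
        for i in range(len(word)):
            for c in "abcdefghijklmnopqrstuvwxyz":
                nxt = word[:i] + c + word[i + 1:]
                if nxt in words and nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
    return None  # no ladder exists within this dictionary

# A toy dictionary sufficient for cat -> dog.
print(word_ladder("cat", "dog", ["cot", "cog", "dot"]))
```

Because the search is exhaustive, it also certifies failure: with no suitable intermediate words in the dictionary, it returns `None`, which is exactly the situation the longer examples below run into.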

But things get wonky when we have to tackle longer words – equivalent, perhaps, to more complex organisms. For example, there is no word ladder between "electric" and "transcend".

We encounter the same problem with sentences. Try turning "The cat chases the dog." into "The dog chases the cat." Changing one letter at a time, while maintaining a meaningful sentence (equivalent, in biology, to keeping the organism alive), becomes much more difficult. What does "the cradle drives out the cog" mean? Beats me.

What if we could swap whole words in a sentence? This translates to a direct path:

  1. the cat chases the dog
  2. the dog chases the dog
  3. the dog chases the cat

Unfortunately, the solution of swapping words creates new problems. One problem is that, to swap words, we now need a variation mechanism that uses a dictionary to store and look up words that actually mean something in context, as opposed to just strings of letters.

Another problem is that the number of options for swapping each word grows exponentially with the length of the sentence, so the probability of landing on a coherent sentence drops exponentially. We have solved one problem at the expense of introducing two new and much more difficult problems.
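The blow-up is easy to quantify. With illustrative numbers of my own choosing (not from the article): a three-letter word has only a few dozen one-letter variants, but a five-word sentence drawn from a modest 10,000-word vocabulary lives in a space of 10,000⁵ candidate sentences, almost all of them meaningless:

```python
# Illustrative combinatorics; the specific numbers are assumptions.
alphabet, word_len = 26, 3
vocab, sentence_len = 10_000, 5

# One-letter variants of a 3-letter word like "cat": 25 choices per position.
letter_neighbors = (alphabet - 1) * word_len

# All possible 5-word sentences over a 10,000-word vocabulary.
sentence_space = vocab ** sentence_len

print(letter_neighbors)  # 75
print(sentence_space)    # 100000000000000000000 (10 to the 20th)
```

The letter-level search space is small enough to enumerate by hand; the word-level space is not, which is why a blind variation mechanism fares so much worse at the higher level.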

What we just saw – that new problems are introduced by trying to solve the original problem at a higher level – is known as the vertical no free lunch theorem (VNFLT). The VNFLT was first proved by Dr. William Dembski and Dr. Robert J. Marks.

They prove in "The Search for a Search" that as we try to solve a problem at higher and higher levels – as the authors of "Modular cognition" propose – the difficulty increases exponentially instead of decreasing. So we see that the key assumption made by the authors is wrong.
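A toy calculation (my own construction, meant only to convey the flavor of a no-free-lunch result, not Dembski and Marks's formal argument) shows why searching for a better search buys nothing on average: over all possible orderings in which a search could probe a space of N items, the chance that the first k probes hit a single hidden target is exactly k/N – the same as blind search:

```python
from itertools import permutations

# Search space of N items with one hidden target (element 0).
# Enumerate every possible search ordering and count how often
# the first k probes happen to hit the target.
N, k = 6, 2
orders = list(permutations(range(N)))
hits = sum(1 for order in orders if 0 in order[:k])

print(hits / len(orders))  # 0.3333... which equals k / N
```

Averaged over the space of search strategies, no strategy outperforms blind luck, so pushing the problem up a level – choosing among strategies – just reproduces the original search problem in a bigger space.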

Modular cognition will not work as a substitute for intelligent design theory. It is yet another proposal that is unable to climb the steep ladder of the VNFLT.

You can also read:

To what extent does life simply invent itself as it goes along? The evidence may surprise us. It does not seem that all life originated simply by common descent. But maybe it can't invent itself without an inventor either. Human inventions illustrate this point. (Eric Holloway)


Can AI really evolve into superintelligence on its own? We can't just hand a big computer over to evolution and expect big things. Perpetual innovation machines tend to falter because there is no universally good search. Computers are powerful because they have limitations.