Algorithm vs. Agent: Creating Information with iPhone Autofill

Here we illustrate the difference between algorithms and conscious beings by playing a simple game on the iPhone: if you ONLY get to use autofill, how long can you keep the message going? Pretty long, so long as you make choices. On the other hand, if you pick choices at random, or let the phone make the choice for you, the text immediately devolves into repetitive gibberish.

This illustrates the fundamental difference between humans and algorithms. Humans anticipate the future, whereas algorithms can only respond to the present based on patterns from the past. The difference is huge, and it persists no matter how sophisticated the algorithm. This simple game illustrates one of the most critical limitations of AI as it currently exists.

Transcript: I want to illustrate a key property of information and language, using Apple’s autofill feature on the iPhone, so I’m going to create a little game here and we’re going to create a text message three different ways.

First, we’re going to create a text message where I can only use the autofill suggestions and I pick the best word that I can. Then I’m going to do it again, always picking the first choice. Finally, I’m going to pick random choices. This is going to make clear a major problem with artificial intelligence, as well as with biology.

I’m going to get this started just by typing one word, and then after this I’m only allowed to choose from what’s already on the screen.

Hi there, are you guys coming over today? Or are you going to be home today? I have some stuff to do that I need to get done with the kids, and tomorrow night I want to do…

At this point I’ve probably run out of good choices that are going to make a sensible sentence, but so far I actually managed to do it.

Let’s start over with a different set of rules.

The new rule is that I’m just going to pick the first choice every time. You’ll see something very interesting here.

I can do not get to see you tomorrow night and I can do it tomorrow night and I can do it tomorrow night and I can do it…

It gets stuck. All genetic algorithms, which are evolutionary programs, have this problem. The way that evolutionary algorithms typically get around the problem is that when they get stuck, they’re programmed to do something random, which might get them out of the loop.

I want to illustrate that too, so I’m going to start over. Now instead of picking the first choice, I’m just going to bounce around and pick random choices. What do I get then?

Yes, I can ask them for a dinner🍴Was a good night for you guys to do dinner?🍴Is that the one that you sent him to you get a….

As you see, all this gets you is spam.
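
To make the mechanics concrete, here is a minimal sketch of the kind of model a phone’s word suggestions boil down to: count which words have followed which in text the model has already seen, then suggest continuations. The tiny corpus, function names, and selection rules below are invented for illustration (Apple’s real autofill is far more sophisticated), but the principle of predicting from past patterns is the same.

```python
import random
from collections import Counter, defaultdict

# Invented stand-in corpus, echoing phrases from the experiment above.
corpus = ("i can do it tomorrow night and i can do it tomorrow night "
          "and i can not get to see you").split()

# "Training": count which word has followed which word in the past.
next_words = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    next_words[a][b] += 1

def first_choice(word):
    """Rule #2 in the game: always take the most frequent continuation."""
    return next_words[word].most_common(1)[0][0]

def random_choice(word):
    """Rule #3 in the game: pick any previously seen continuation at random."""
    return random.choice(list(next_words[word]))

def generate(start, pick, length=15):
    out = [start]
    for _ in range(length):
        if out[-1] not in next_words:  # dead end: nothing has ever followed this word
            break
        out.append(pick(out[-1]))
    return " ".join(out)

print(generate("i", first_choice))   # loops: "i can do it tomorrow night and i can do it ..."
print(generate("i", random_choice))  # no fixed loop, but the output is gibberish
```

The first-choice rule is deterministic, so as soon as the model revisits a word it has already continued from, it is trapped in the same cycle forever. Injecting a random pick is the same escape hatch the transcript describes evolutionary algorithms using, and all it buys you here is noise.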

What this illustrates is that in order to create language, you have to have intentionality or what philosophers call “agency.”

I am a human agent. I am self-aware. I know what I’m trying to do, so I can think forward into the future. The problem with an algorithm is that all an algorithm can do is look at the present, which is whatever you just typed in, and then make a calculation based on the statistical patterns of what has happened in the past. So an algorithm is always looking in the rear-view mirror, if you will.

This is a fundamental problem with all computer programs. None of them have any kind of self-awareness. They can only learn from what has already happened. They don’t actually anticipate the future the way humans or even dogs and cats do.
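
The rear-view-mirror point can be stated very compactly. A predictor built purely from observed counts gives zero weight to anything it has never seen, so it can never propose a continuation that is not already in its history. The counts and words below are made up for illustration:

```python
from collections import Counter

# All the "knowledge" the predictor has: counts of word pairs observed in the past.
history = Counter({("are", "you"): 5, ("you", "coming"): 3, ("you", "going"): 2})

def predict(word):
    """Suggest the most likely next word using nothing but past counts.
    A continuation never observed after `word` has count 0 and can never be chosen."""
    seen = {b: n for (a, b), n in history.items() if a == word}
    return max(seen, key=seen.get) if seen else None

print(predict("you"))       # 'coming' -- the most common past continuation
print(predict("tomorrow"))  # None    -- never observed, so nothing to suggest
```

However sophisticated the model built on top of those counts, its raw material is still a record of what has already happened.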

In information theory, this is the dotted line between mathematical analysis, which works at one level, and creativity, which mathematical analysis is unable to address.

This is the most fundamental problem in biology and evolution. When Barbara McClintock damaged the chromosomes of her corn plants, she found that the plants literally rearranged their chromosomes and evolved in real time. What a lot of people don’t realize is that the plant was actually making a choice based on what might work in the future, and it did something no corn plant had ever done, because it was in a situation no corn plant had ever been in.

What the plant did wasn’t random. It obeyed some kind of linguistic rules, and the plant was anticipating what might work in the future. This is a property of all biological systems that does not exist in human-engineered systems, and it is arguably one of the biggest unanswered scientific questions of all time. It’s the essential motivation behind the Evolution 2.0 Prize.

Join the conversation by commenting below!

Please “Like” This Video & Subscribe To Our YouTube Channel So You Never Miss A Future Video From Evolution 2.0.

Download The First 3 Chapters of Evolution 2.0 For Free, Here – https://evo2.org/evolution/

Where Did Life And The Genetic Code Come From? Can The Answer Build Superior AI? The #1 Mystery In Science Now Has A $10 Million Prize. Learn More About It, Here – https://www.herox.com/evolution2.0

7 Responses

  1. giancarlo reali says:

    Great illustration, could you please make the transcript available online?

  2. Richard Kay says:

    Humans are undoubtedly _currently_ better at predicting the future than algorithms within highly complex multivariate problem domains. I was able to beat a computer at Go about 15 years ago – but not now. All an algorithm needs to predict the future is a good enough model of how the future works, and for some purposes these models are available very far ahead in time. For example, I remember well, as a schoolboy during a partial solar eclipse in the sixties, being told on TV the precise trajectory, date and time of the next total solar eclipse to be visible in the UK (in Cornwall), on August 11, 1999. I worked out as a child that I would be in my mid-forties then, and also predicted that I would probably have children and would take them to see it – about three and a half decades later. This all happened as predicted, though we saw it better where we were in eastern France, thanks to a local eclipse-induced anticyclone that day. It seemed miraculous how, after being completely overcast just 20 minutes earlier, a hole of clear sky appeared in just the right place for us to see the whole event very clearly, but the phenomenon did in fact have an explanation: I read about these eclipse-induced local anticyclones in New Scientist a year or two later. In Cornwall it was cloudy, so the same eclipse wasn’t visible there.

    Algorithmic weather forecasts are really only good a week or two ahead, and by then people have usually already made their travel plans. But weather forecasting algorithms, within the limitations of the models they work with – and of the supercomputers available to run those models on – still do a much better job of forecasting the weather a week ahead or further than any human could, which is quite an achievement considering the complexity and inherent chaos of the weather system (i.e. tiny variations having the ability to cause amplified effects that grow with lead time).

    Wishing all who read this safety for themselves and those they love during the current epidemic, and hoping you and yours are keeping well in this lockdown. Despite plenty of historical evidence for the repeatability of epidemics from previously unknown infective agents, the precise date and timing of this one wasn’t predicted either by algorithms or by humans, because neither had a good enough model.

    • All true. Still, keep in mind that machines are categorically unable to do inductive reasoning. ALL reasoning that machines do is deductive – meaning that machines only match patterns they have already seen. They cannot anticipate patterns they have never seen. Humans can. A case in point is religion: all religious narratives deal in intangible possibilities, and humans instinctively do this all the time.

      It is this inability to do inductive reasoning that explains why machines are so terrible at language. Language nearly always requires inference, and this is why nobody has come close to passing the Turing Test after decades of trying.

      We know that even cells can do inductive reasoning – this is an implication of Barbara McClintock’s Nobel Prize paper. No one knows how to build a mechanical process that can do this.

  3. Tom Peeler says:

    Breathtakingly simple and immediately comprehensible. Thanks.

    • Tom,

      Thank you. This experiment also shows something else – the power you have even when given a very limited range of choices.

      If we ascribe any agency to bacteria, animals, etc., we probably don’t ascribe much. We’d all probably agree our dogs and cats have choices, but we also know they live within the narrow constraints of animal instinct. But this experiment shows that, even within a narrow range, the choices are still real.
