Tuesday, October 25, 2016

Learning about AI by watching Family Feud


Last week I was in Bogota, Colombia, interviewing experts in the food industry to gather video stories for a program being built for that industry. At least two of the interviewees mentioned that they were using AI. They knew nothing about me, so they weren’t mentioning AI for my benefit. They were simply saying that they were using AI in their food companies. This struck me as very strange; I had no idea what it could mean. What could they know about AI? AI has become the kind of thing that people (and especially the media) mention at the drop of a hat these days. But very few of them know what AI really is, or at least should be.

Also last week, Stephen Hawking yet again made a pronouncement about AI, making clear he has no idea what AI is either.

So, I have come to the conclusion that AI is a word that many people and businesses say a lot, while very few of the people who say it have a clue as to what it means.

With that thought on my mind, and as someone who has thought about AI and human intelligence for about 50 years, I want to make some remarks here about human intelligence based on some rather mundane observations.

I happened to be watching a silly U.S. TV show, Family Feud, that my wife likes. She likes it because the contestants are often quite stupid and she likes to make fun of them. The show pits two families against each other to see who can best guess answers from a survey of 100 people.

On one particular episode we were watching, the following question was asked:

Name something someone might hide in their freezer. 

The first contestant’s answer was:

a dead body

That response matched an answer that 5% of the people surveyed had given. The response was written on the game board as: body/lover’s head.

So, I started to think about AI. “Why?” you might ask. Because my approach to AI has always been to observe humans and see what they do and wonder how we could get a computer to do that. Based on such observations, my team and I would try to build programs to capture people’s intelligence by mimicking the cognitive processes that we recognized must be occurring.

So, the AI question here is: how does “lover’s head” or “dead body” come to mind when asked this question? You can be sure it doesn’t come to Google’s “mind,” nor to Alexa’s, nor to Watson’s, because it is not a matter of text-based search or statistical calculation.

In order for that answer to come to mind, one has to ask oneself about weird things one may have seen in a freezer. Human search of human memory depends on many things, none of which are text. What probably happened here is what often happens: one tries to form an image in one’s mind based upon prior images one has seen. Now I have never seen a human body in a freezer. No wait. Of course I have. I have seen one in the movies. I can see that image now as I write this. I don’t recall the name of the movie, but I can visualize a large freezer in a garage someplace.

So, here we have one principle of human search of human minds:

People have the ability to recall images by trying to imagine something and then connecting what they are imagining to something they have actually seen.
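
If I were to caricature that principle in code, it might look something like the toy Python sketch below. Everything in it is invented for illustration (the memory entries, the tags, the function name); it is a sketch of the shape of the process, not a claim about how memory actually works.

    # A toy "episodic memory": scenes actually seen, in life or in the movies,
    # each tagged with the things that appeared in it. (All entries invented.)
    episodic_memory = [
        {"source": "a movie", "scene": "a large freezer in a garage", "tags": {"freezer", "body"}},
        {"source": "life", "scene": "grandmother's kitchen", "tags": {"freezer", "ice cream"}},
    ]

    def recall_by_imagining(imagined_tags):
        """Return remembered scenes that overlap with what one is imagining."""
        return [m for m in episodic_memory if imagined_tags & m["tags"]]

    # Imagining something weird in a freezer surfaces the movie scene,
    # which in turn suggests the answer "a dead body".
    for memory in recall_by_imagining({"freezer", "body"}):
        print(memory["source"], "->", memory["scene"])

Of course, a tag intersection trivializes the hard part, which is generating the imagined scene in the first place.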

Other answers to this question on the show included these:

2nd answer: jewelry (this matched an answer that 4% of those surveyed had given)

3rd answer: moolah, money (this matched the survey answer “money”)

The next three probably were again arrived at by imaging. But they might have been arrived at by contestants asking themselves the question: what would I want to hide from someone?

ice cream
booze
jewelry

I am not sure why one would want to hide ice cream, but the others seem normal enough. People hide valuables. If you Google this question you will find texts that say exactly that, and therefore Google could come up with jewelry.

You can view this episode of Family Feud here starting at the next answer:


The first contestant in the video (which starts in the middle of this question) said:

I am gonna go with a little bit of that ganja weed.

This matched drugs/fat blunt, an answer given by 8% of those surveyed. Someone who smokes marijuana would have to hide it regularly, but the human search mechanism here is a little different from what we have seen so far.

To answer this, a person would have to ask themselves a question like: “What am I afraid that someone might see in my home?” (where “someone” typically means the police or one’s parents).

Now, this is a question about one’s personal experience. The person who answered it had to have transformed the initial question into: what do I have that I often hide?

The AI issue here is what we might call question transformation. We hear questions, and in order to answer them we ask ourselves about our own personal experiences or fears or desires.

People typically transform general questions into personal questions in order to answer them.

Would “an AI” have to do that? It would indeed. People try to find memories by asking themselves things and then coming up with new thoughts and old memories. They don’t search texts, since they don’t have texts in their minds. But no one in AI is thinking about that sort of thing anymore. They are too busy with search and machine learning based on tangible data.
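
To make the idea of question transformation concrete, here is a toy Python sketch. The rewrite rules are hand-written and invented by me purely for illustration; the hard, unsolved part is producing such transformations for arbitrary questions.

    # Toy rewrite rules: each general question maps to personal questions
    # one might ask oneself in order to answer it. (Rules invented for illustration.)
    TRANSFORMS = {
        "name something someone might hide in their freezer": [
            "What weird thing have I seen in a freezer?",
            "What do I have that I often hide?",
            "What am I afraid that someone might see in my home?",
        ],
    }

    def transform(question):
        """Turn a general question into personal probes of one's own experience."""
        key = question.lower().rstrip("?. ")
        return TRANSFORMS.get(key, [question])

    for probe in transform("Name something someone might hide in their freezer."):
        print(probe)

A lookup table like this only restates the problem, of course; the point is the shape of the process, not the mechanism.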

The next answer (which is also on the video) was this:

well if you’ve got drugs and you've got booze you got to have something to protect yourself so I am gonna say a weapon  

This did not match any of the answers from the survey, but it caused the host of the show (Steve Harvey) to make the following comment:

“see, the thought process, the beauty of this family, is seeing how they arrive at the answer”


Oddly enough, Steve Harvey is making an AI observation here. He is noting that in order to arrive at an answer, the contestant had to actually think. Further, the contestant was thinking about the answers that had already been given and was picturing a scene in which the things already mentioned had actually happened.

Thinking involves imagining events that one may have never witnessed in any way. It involves drawing conclusions from the events that one has imagined. 

People can imagine events and draw conclusions about what would happen next in the imagined circumstance.

Funny how no one is working on that in AI. It is hard to imagine, Mr. Hawking, that a machine that could not do that would be very smart, much less be something to be frightened of.

Now I would like to examine another question from a different episode of Family Feud. This one is a little off color, which is not uncommon for this show. I am sorry about that, but the example gives us more food for thought about AI. The question to the families was:


Name something done to nuts that Mr. Peanut’s wife would likely do to him for cheating on him.


1st answer: crack him (this was correct and the top answer from the survey)

The fact that this was the top answer is very interesting, because coming up with it is very difficult. It involves recognizing that a word that has two very different meanings can be used as an answer. One has to ask oneself what kinds of things one does with peanuts and then see how the answer one came up with might connect to a different meaning of the word.

“What can I do with peanuts?” is a question that again requires one to imagine a circumstance. Then, one has to take the word that might be used to describe the action and see if that word can be applied to expressing anger in some way. To do that, one has to infer that the wife is angry and that she might want to retaliate in some way. Then, after finding the word “crack,” one has to recognize that “crack someone over the head” is an expression that exists in English and is a way of expressing anger. No computer today can do this. People do it easily.
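
One could caricature that chain of reasoning in code, as in the toy Python sketch below. The word lists are tiny and made up by me; finding and connecting them is exactly the part no computer today can do.

    # Verbs for things one does to nuts (a made-up toy list).
    things_done_to_nuts = {"crack", "roast", "shell", "salt", "chop", "boil"}

    # Verbs that also appear in English expressions of anger or retaliation
    # (also a made-up toy list).
    anger_expressions = {
        "crack": "crack someone over the head",
        "chop": "chop someone to bits",
        "roast": "roast someone",
    }

    # The pun answers are the verbs that live in both worlds.
    for verb in sorted(things_done_to_nuts & anger_expressions.keys()):
        print(verb, "->", anger_expressions[verb])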

Here are more answers that rely on similar processes:

2nd answer: hide them (not on survey)

3rd answer: throw them out (not on survey)

4th answer: eat them (6th answer on survey)

5th answer: switch foods (not on survey)

6th answer: roast them (#2 answer on survey)

7th answer: chop him to bits

8th answer: make peanut butter

9th answer: boil him

In order to answer questions, we must frequently imagine circumstances, draw inferences about the feelings and actions of the participants in those circumstances, and then think about words that can be used to describe those feelings and actions.


I will look at one more question from that same show:

Name something the ladies might do if a male stripper performs at the nursing home

1st answer: faint (this was #7 on survey)

2nd answer: scream (this was considered to have matched the #1 answer which was laugh/cheer)

3rd answer: dance (this matched the #6 answer)

4th answer: get a lap dance (This was deemed to have matched handle goods/spank.)

5th answer: pull out the cell phone and take pictures (This was not on the survey.)


These answers involve putting oneself in someone else’s situation. One has to imagine oneself as an old lady in a nursing home. Since the contestants on this show are not old, and since half of them are men, this requires a great deal of imagination. You must ask yourself how someone feels when you may have only minimal knowledge of what that kind of person might feel.

“What does a sick old woman feel when faced with male sexuality?” is a complex question. How do you think Watson would do with it? My point is that human intelligence is complex indeed, and no computer can do very much of it.

I included this question because of the next answer. I had no idea whatsoever what the answer meant:

6th answer: make it rain (this matched $$$/make it rain which was the #2 answer)

So, not only could a computer not come up with this; neither could I. And I could not even comprehend the answer. But the audience did. (My wife explained it to me.)

This leads me to my key point. Why did the audience (and my wife) know what this meant? Probably the simple answer is that this is a new expression and I don’t pay that much attention to pop culture. But, and this is the issue, intelligent entities are constantly changing and growing. They learn new things constantly by listening to people, watching movies and television, texting on the phone, and maybe even by reading. (I am not sure where exactly one might read this expression.)

Intelligence involves the ability to constantly change oneself by updating one’s world view according to new inputs.

I learned this lesson many years ago, when I was running my AI lab at Yale. DARPA (our sponsor) was coming to see what we had done. We had built a program that took the UPI wire as input and read stories, summarized them, and answered questions about them. To prepare for the demo, we picked stories we knew our program had read successfully and showed the visitors from DARPA our program reading them.

The program did well and our visitors were impressed. But suddenly I found myself getting upset. One of the stories was about an earthquake in Iran. I knew that our program had read that story many times. I realized that it should have responded: Enough with the earthquake in Iran story. Or is this a new one? There sure have been a lot of earthquakes in Iran lately.

Of course, it hadn’t done that because it hadn’t learned anything from that story.

People learn from every experience they have. An AI program would have to change after every experience as well, in order to be considered intelligent.
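
Here, as a minimal sketch only (our Yale program was nothing like this, and the names are mine), is the kind of memory that would have let a program complain about yet another Iran earthquake story: a reader whose state changes after every story it takes in.

    from collections import Counter

    class StoryReader:
        """A toy reader that changes a little after every story it reads."""

        def __init__(self):
            self.topics_seen = Counter()

        def read(self, topic):
            self.topics_seen[topic] += 1  # the "learning": memory updates on every input
            n = self.topics_seen[topic]
            if n == 1:
                return f"New story about {topic}."
            return (f"Enough with the {topic} story. Or is this a new one? "
                    f"That makes {n} lately.")

    reader = StoryReader()
    for _ in range(3):
        print(reader.read("an earthquake in Iran"))

The counter is trivial; the point is only that the program’s response to the same input is different each time, because reading it changed the program.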


It was at that point in my career that I switched my interests to learning. I got interested in how to get computers to learn, and, with every discovery, I became more aware that what I was learning about learning bore very little relation to how learning was taking place in the schools.


So, Mr. Hawking, and food industry people from Bogota, and Watson, listen up: if your “AI” isn’t changing as a result of every interaction it has, it isn’t “an AI” at all. People, even not very bright ones, learn something new from every experience.
