TED Conversations

Farrukh Yakubov

Student, Purdue University


This conversation is closed.

What question would you ask to identify whether you were chatting with a well-developed piece of software or with a person?

Imagine an experiment where you are asked to chat with one hundred people online: no sound or image, just text. Three of them are not real people; they are extremely good automated response systems. Your task is to identify those three. You are allowed to ask everyone only one question, and it must be the same question for all of them. The people on the other end are specifically chosen so that no two of them have similar personalities. Each program is also given a unique personality. The only trick is that, while you ask your questions, the programs observe everyone else's responses and may or may not change their behavior based on what they see. What would your question be?

P.S. If you would like to know just how good 'extremely good' means for the automated response systems in the thought experiment above, you may consider them to be the best such systems you think are possible.


Closing Statement from Farrukh Yakubov

Now that the conversation is over, I would like to leave you with a few more thoughts.

Imagine this experiment took place: you asked your question and identified three of the participants as programs. What if the experiment was not what you thought it was, and afterwards you were told that the 100 participants were all humans, or all programs, or even a single person answering in 100 different ways? What if the purpose of the experiment was not about the capabilities of programs, but about the people, to see how people perceive intelligent software? Did you think about this possibility?

On the other hand, if the experiment was meant to test the programs, how effective do you think it would be to use this conversation's own question in the experiment, that is, to ask each of the 100 participants: "What question would you ask to identify whether you were chatting with a well-developed piece of software or with a person?"

It is up to you to choose the post-experiment scenario, and either way you would be correct, because the experiment works both ways: whether you decide to look at it as an attempt to test programs, or as a way of understanding people's understanding of programs.


    Jan 23 2014: When Alan Turing devised this test in 1950, he opened his paper with the line: "I propose to consider the question, 'Can machines think?'" After further reflection, he replaced that question with: "Are there imaginable digital computers which would do well in the imitation game?" These are related questions.

    The key part of the Turing test is that it forces us to work at an information-only level. You are not permitted to physically inspect the other person or computer, see images of them, hear them, or do anything else at the physical level (like asking them to mail you a picture of themselves).

    Software like IBM's Watson is the undisputed world champion of the game Jeopardy, which until its victory seemed a uniquely human endeavor (see http://www.ted.com/talks/ken_jennings_watson_jeopardy_and_me_the_obsolete_know_it_all.html). Here we are in 2014, and we are on the verge of producing quantum computers, which will allow us to magnify the power of Watson billions of times. Nanotech and 3D printing will give such software real, physical "legs" of whatever sort they or we would like to create. We're continuing to map out the human brain at a neural level, at an information-theoretic level, and so on, all of which is synergistic with Watson-type software, quantum computing, and the ever-expanding reach of our collective human intelligence, to which we (and our software!) all have greater and greater access via the Internet. Oh my, that's a mouthful...

    I guess what I mean to say is that Turing Test-type questions will soon become moot. "Soon" is up for some debate, but I think that's another debate. As is the question of whether we (or you!) could ever, in principle, consider such software, sufficiently advanced, to be "alive and self-aware" in the same sense we consider ourselves to be. If software were ever conferred such status, should it have the same moral or legal rights we humans have manufactured for ourselves?
