TED Conversations

Farrukh Yakubov

Student, Purdue University


This conversation is closed.

What question would you ask to identify whether or not you were chatting with well-developed software or a person?

Imagine an experiment where you are asked to chat with one hundred people online, with no sound or image, just text. Three of them are actually not real; they are extremely good automated response systems. Your task is to identify those three. You are allowed to ask only one question, the same one of everyone. The people on the other end are specifically chosen so that no two of them have similar personalities, and each program is also given a unique personality. The only trick is that while you ask your questions, the programs observe everyone else's responses and may or may not change their behavior based on them. What would your question be?

P.S. If you want to pin down how good 'extremely good' is for the automated response systems in the thought experiment above, consider them to be the best such systems you think possible.


Closing Statement from Farrukh Yakubov

Now that the conversation is over, I would like to leave you with some further thoughts.

Imagine this experiment took place: you asked your question and identified three of the participants as programs. What if the experiment was not what you thought it was, and afterwards you were told that the 100 participants were all human, or all programs, or even a single person answering in 100 different ways? What if the purpose of the experiment was not to measure the capabilities of programs, but to study the people, to see how people perceive intelligent software? Did you think about this possibility?

On the other hand, if the experiment really was a test of the programs, how effective do you think it would be to use the experiment's own question, i.e. to ask each of the 100 participants: "What question would you ask to identify whether or not you were chatting with well-developed software or a person?"

It is up to you to choose the post-experiment scenario, and either way you would be correct, because the experiment works in both directions: you can look at it as an attempt to test programs, or as a way of understanding people's understanding of programs.


  • Jan 23 2014: From each participant I would demand an association chain starting with 'witch' and ending with 'blue dog', with a minimum of 42 freely chosen, different, yet related steps in between, in which the steps alternate between a subject or object and a transitional descriptive adjective that both of its neighbors have in common and that relates them to one another, developing from left to right.

    e.g.:

    Witch -> ugly -> Pimples -> subcutaneous -> Capillaries -> thin -> Chopsticks -> pointy -> Nose -> centered -> egoist -> ... -> smelly -> blue dog.
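
    The structural rules of this demand (start with 'witch', end with 'blue dog', at least 42 distinct steps in between, alternating adjective and subject/object) are mechanical enough to check in code, though the semantic relatedness of neighboring steps is not. Here is a minimal Python sketch of such a check, assuming each step arrives pre-tagged as 'N' (subject/object) or 'ADJ' (adjective); the validate_chain helper and the tagging scheme are illustrative assumptions, not part of the original demand:

    # Illustrative sketch only; tags are supplied by the caller, and the
    # semantic relatedness of neighboring steps is not verified here.
    def validate_chain(chain, min_middle=42):
        """chain: list of (word, tag) pairs; tag is 'N' for a
        subject/object or 'ADJ' for a transitional adjective."""
        if not chain or chain[0][0].lower() != 'witch':
            return False, "chain must start with 'witch'"
        if chain[-1][0].lower() != 'blue dog':
            return False, "chain must end with 'blue dog'"
        middle = chain[1:-1]
        if len(middle) < min_middle:
            return False, f"needs at least {min_middle} steps, got {len(middle)}"
        for i, (word, tag) in enumerate(middle):
            # The example chain puts an adjective right after 'witch',
            # so the middle starts with 'ADJ' and alternates from there.
            expected = 'ADJ' if i % 2 == 0 else 'N'
            if tag != expected:
                return False, f"step {i + 1} ({word!r}) should be {expected}"
        if len({w.lower() for w, _ in middle}) != len(middle):
            return False, "steps must all be different"
        return True, "structurally valid"

    # e.g. validate_chain([('Witch', 'N'), ('ugly', 'ADJ'), ('Pimples', 'N'), ...])

    Judging whether each adjective genuinely relates its two neighbors, as 'ugly' relates 'witch' and 'pimples', is the creative part and stays with a human or a very good program.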


    And now the only question I am allowed to ask:

    What made the dog, and not the witch, colored by the 24th object you chose?
    • Jan 23 2014: I think I'm not understanding the question, because I think the answer would always be: "What makes the dog colored is the adjective 'blue' before the word 'dog', as specified in the question." That would not seem to depend on any preceding step in the association chain, and it doesn't depend on the immediate predecessor "smelly".

      In your question when you ask what made the dog colored, are you assuming that "blue dog" refers to a real dog that is physically colored blue? You could instead be referring to artist George Rodrigue's famous "blue dog" cartoon, or any of a number of other "blue dog" references. If you're referring to Rodrigue's blue dog, here is his answer to this question:

      Rodrigue answers the title question by explaining that Blue Dog's color depends on what the artist is doing: when Rodrigue goes fishing, he paints the dog a salmon color; when Rodrigue wants a hot dog, he paints the dog mustard yellow, and so on. Would that be a valid answer?
      • Jan 23 2014: If that was your answer and you were a participant in that test, I would put you in the 'humanoid' pile in the first selection round. :o)

        As Farrukh virtually installed 'extremely good automated response systems' in his thought experiment, and I have no clue what 'extremely good' means (my one and only experience with this type of software was a pretty boring ELIZA derivative), I have to assume the worst, which for these response systems would be 'really damn good'.

        The approach I chose to gain sufficient information is based on complexity, uncertainty, ambiguity and creativity, which are likely to confuse both humans and programs, while the focus of the analysis would be the underlying approach of each given answer.

        When you think the answer would always be 'the adjective "blue" before the word "dog"', you assume that all participants, as you did, cut off the whole end of my question, which clearly asked for a relation to the 24th object in the association chain (if there is one, as subjects were also allowed) by which the dog, and not the witch, was colored. And as 'blue' is not an object, this answer returns a contradiction to my question.

        The uncertainty here is whether the 'blue' in 'blue dog' already represents the adjective preceding 'dog' at the end of the chain, or whether 'blue dog' is seen as a closed entity, which would allow for two descriptive adjectives at the end, something that is not forbidden. This is a choice everyone, the programs included, has to make; it is confusing on purpose, to provoke some degree of uncertainty, because if you look closely at the example I gave, you'll find a separating arrow (->) between 'smelly' and 'blue dog', which holds a hint in itself.

        I didn't know about George Rodrigue's artwork 9 hours ago, and I didn't have to in order to find you in the desired situation. Here lies another uncertainty, even a multifold one, as 'blue' in the English language can also be interpreted as 'sad' or 'melancholic', which adds yet another degree of ambiguity.

        ... to be continued
      • Jan 23 2014: And although Rodrigue's answer as to why his dog is blue is irrelevant to my question, since he is a subject and can therefore never be the 24th object of an association chain, the information you returned is valuable for its creativity and testifies to your knowledge of, at least, his work.

        But Farrukh installed 'extremely good automated response systems' in this experiment, by which I have to assume that they are programmed to gather 'knowledge' in real time if necessary for a task or question; that is why knowledge alone would not be a good enough filter to spot them.

        It is also to be assumed that those programs are designed to mimic human imperfection, because after all that is what makes us special, in a way, but as far as I am aware, this is tricky to cast into machine code... I mean, the intended imperfections :o)

        If I were to take your comment as a 'valid' response in this test, I would consider the fact that you spent almost half of all your words on the artist Rodrigue, which in proportion is so far off my one and only question that you became a potential candidate for being a human in the first selection round.

        Whether you are highly interested in art in general, or just in this artist, or simply happened to know about him, is secondary to the quantity of words you spent on it. But as such disproportions in quantity could also be 'simulated', it qualifies you as humanoid 'only' in the first degree, whereby the number of selection steps itself would depend on the overall tendency of the answers and their 'quality', and therefore on the 'spot-ability' of the programs.

        What I have to avoid in my final decision on each answer is a clear decision matrix, which could and would be programmed in advance by smart programmers, so in the end it has to be my good old gut feeling that I rely on.

        And so as not to render all of the pitfalls and approaches I stated here obsolete, please don't tell me that you already are an 'extremely good automated response system' installed on TED to keep conversations going here ... ;o)
      • Jan 23 2014: Are you, in the end, a dangerous lamp post at the side of this information highway? ;o)
        • Jan 23 2014: "[Insert long story about George Washington here...] What color was George Washington's white horse?"

          That was one of my favorite riddles as a child. :)

          You are correct - I am merely human... But of course the TED-bot would say that too... :)
