Farrukh Yakubov

Student, Purdue University

This conversation is closed.

What question would you ask to identify whether you were chatting with a well-developed piece of software or a person?

Imagine an experiment where you are asked to chat with one hundred people online: no sound or image, just text. Three of them are actually not real; they are extremely good automated response systems. Your task is to identify those three. You are allowed to ask only one question, and it must be the same question for everyone. The people on the other end are specifically chosen such that no two of them have similar personalities. The programs are also each given a unique personality. The only trick is, while you ask questions, the programs observe the responses of everybody else and may or may not change their behavior based on that. What would your question be?

P.S. If you would like to know just how good 'extremely good' is for the automated response systems in the thought experiment above, you may consider them to be the best such systems you think possible.

Closing Statement from Farrukh Yakubov

Now that the conversation is over, I would like to leave you with some more thoughts.

Imagine this experiment took place, you asked your question, and you indicated three of the participants as programs. What if the experiment was not what you thought it was, and afterwards you were told that the 100 participants were all human, or all programs, or even a single person answering in 100 different ways? What if the purpose of the experiment was not about the capabilities of programs, but about the people: to see how people perceive an intelligent software? Did you think about this possibility?

On the other hand, if the experiment was to test the programs, how effective do you think it would be to use the question of the experiment itself? That is, asking "What question would you ask to identify whether you were chatting with a well-developed piece of software or a person?" of each of the 100 participants.

It is up to you to choose the post-experiment scenario, and you would be correct either way. The experiment works both ways, whether you decide to look at it as an attempt to test programs or as a way of understanding people's understanding of programs.

  • Jan 27 2014: Nowhere in your conditions is the machine constrained to tell the truth. Therefore, no such magical question can exist, since the machine could simply lie.
    • Jan 27 2014: That is profound, I must admit.
    • Jan 27 2014: Yes, the machines in this experiment are not constrained. They could lie, but telling a lie while making it sound like the truth is no simpler than telling the truth. Unless it's a yes/no answer, it does not matter whether it's the truth or a lie; any answer has some logic incorporated into it. Human participants could also lie, and it would have the same effect as if they told the truth, because no prior information on the participants is provided.
      • Jan 27 2014: No such magical question can exist. There is no single question that can determine whether or not a "conversational partner" is human. There is no perfectly logical way to determine this. One must, instead, rely upon the illogical presumption that there is and must be a flaw in the masquerade that is never, under any circumstances, to be replicated by human-to-human misunderstanding or human variation. If we're dealing with ca. 1985 "AI", maybe, but we are not limiting ourselves to 1985, 2014, or the limits of any year. Thus, we can posit near-infinite databases, decision trees, and expert systems for the machine participants. The puzzle is posed in a way that cannot be solved.
        • Jan 28 2014: This is a thought experiment, and the question asks the audience to imagine this scenario. The question is not about whether a single question can differentiate software from a human; it is about what question one would ask to try to see the difference, given that they are limited to a single question. For this experiment it does not matter if such a question exists or not; it's about what people come up with as such a question. :)
      • Jan 28 2014: If no such question exists, then there is no point in coming up with any such question, which has been my contention from the start. It's as nonsensical a task as saying "What incantation will take you to the moon without any means other than the power of magic engendered from your voice?"
  • Jan 26 2014: Send them a CAPTCHA picture and ask them to solve it.
  • Lejan .
    Jan 23 2014: From each participant I would demand an association chain starting with 'witch' and ending with 'blue dog', with a minimum of 42 freely chosen, different, yet related steps in between, where each step alternates between a subject or an object and a transitional descriptive adjective which both have in common and which relates them to one another, in the development direction from left to right.

    e.g.:

    Witch -> ugly -> Pimples -> subcutaneous -> Capillaries -> thin -> Chopsticks -> pointy -> Nose -> centered -> egoist -> ... -> smelly -> blue dog.


    And now the only question I am allowed to ask:

    What made the dog and not the witch colored by the 24th object you choose?
    • Jan 23 2014: I think I'm not understanding the question, because I think the answer would always be: "What makes the dog colored is the adjective 'blue' before the word 'dog', as specified in the question." That would not seem to depend on any precedents in the association chain, and it doesn't depend on the immediate precedent "smelly".

      In your question when you ask what made the dog colored, are you assuming that "blue dog" refers to a real dog that is physically colored blue? You could instead be referring to artist George Rodrigue's famous "blue dog" cartoon, or any of a number of other "blue dog" references. If you're referring to Rodrigue's blue dog, here is his answer to this question:

      Rodrigue answers the title question by explaining that Blue Dog's color depends on what the artist is doing: when Rodrigue goes fishing, he paints the dog a salmon color; when Rodrigue wants a hot dog, he paints the dog mustard yellow, and so on. Would that be a valid answer?
      • Lejan .
        Jan 23 2014: If that was your answer and you were a participant in that test, I would have you on the 'humanoid' pile in the first selection. :o)

        As Farrukh virtually installed 'extremely good automated response systems' in his thought experiment, about which I have no clue what 'extremely good' means, because my one and only experience with this type of software was a pretty boring ELIZA derivative, I have to assume the worst, which for these response systems would be 'really damn good'.

        The approach I chose to gain sufficient information is based on complexity, uncertainty, ambiguity and creativity, which are likely to confuse both humans and programs, whereas the focus of the analysis would be the underlying approach of each given answer.

        When you think the answer would always be 'the adjective 'blue' before the word 'dog'', you assume that all participants cut off the whole end of my question, as you did, which clearly asked for a relation to the 24th object in the association chain (if there is any, as subjects were also allowed) by which the dog and not the witch was colored. And as 'blue' is not an object, this answer returns a contradiction to my question.

        The uncertainty here is whether the 'blue' in 'blue dog' already represents the preceding adjective to 'dog' at the end of the chain, or is seen as a closed entity, which would allow for two descriptive adjectives at the end, which is not forbidden. This is a choice everyone, including the programs, has to make. It is confusing on purpose, to provoke uncertainty to some degree, because if you look closely at the example I gave, you'll find a separation arrow (->) between 'smelly' and 'blue dog', which holds some hint in itself.

        I didn't know about George Rodrigue's artwork 9 hours ago, and I didn't have to in order to find you in the situation which was desired. Here is another uncertainty, even a multifold one, as 'blue' in the English language can also be interpreted as 'sad' or 'melancholic', which adds another ambiguous degree.

        ... to be continued
      • Jan 23 2014: And although Rodrigue's answer as to why his dog is blue is irrelevant to my question, as he is a subject and can therefore never be the 24th object of an association chain, the information you returned is valuable for its creativity and testifies to your knowledge of, at least, his work.

        But Farrukh installed 'extremely good automated response systems' in this experiment, by which I have to assume that they are programmed in a way to gather 'knowledge' in real time if necessary for a task or question, which is why knowledge alone would not be a good enough filter to spot them.

        It is also to be assumed that those programs are designed to mimic human imperfection, because after all that is what makes us special, in a way, but as far as I am aware, this is tricky to cast into machine code... I mean, the intended imperfections :o)

        If I took your comment as a 'valid' response in this test, I would consider the fact that you spent almost half of all your words on the artist Rodrigue, which in proportion would be so far off my only and initial question that you would become a potential candidate for being a human in the first selection.

        Whether you are highly interested in arts in general, or just in this artist, or randomly just knew about him, is secondary to the quantity of words you spent on it. But as quantity disproportions could also be 'simulated', it qualifies you as humanoid 'only' in the first degree, by which the number of selection steps itself would depend on the overall tendency of answers and their 'quality' and therefore the 'spot-ability' of the programs.

        What I have to avoid in my final decision for each answer is a clear decision matrix, which could and would be programmed in advance by smart programmers, so in the end it has to be my good old gut feeling I rely on.

        And so as not to render all of my pitfalls and approaches stated here obsolete, please don't tell me that you already are 'an extremely good automated response system' installed on TED to keep conversations going here ... ;o)
      • Jan 23 2014: Are you, at the end, a dangerous lamp post at the side of this information highway? ;o)
        • Jan 23 2014: "[Insert long story about George Washington here...] What color was George Washington's White Horse?"

          That was one of my favorite riddles as a child. :)

          You are correct - I am merely human... But of course the TED-bot would say that too... :)
  • Jan 23 2014: The Turing test, which this question is related to, is one of my favorite thought experiments. I believe that we are nearing a time when there will be no way to tell the difference with one question. My question would be a bit recursive:

    What one question could I ask you to determine whether you're a human or software?
  • Jan 23 2014: Where is Mars?
  • Jan 22 2014: You are in a prison with two other people; one always lies and the other always tells the truth. There are two doors in the prison: one leads to sudden death, the other to freedom. Both people know what is behind each door. You may ask one question to either person and walk out to freedom. What is the question?
    • Jan 22 2014: How can we get out of here? :)
      • Jan 22 2014: sorry.. wrong
        • Jan 23 2014: Compared with what is the answer wrong? Or is it wrong just because you say it's wrong?
    • Jan 23 2014: "What door would the other person point to, if I asked which one leads to freedom?" The door that was not pointed to is the one to go through. It does not depend on which one of them answers.
      • Jan 23 2014: So in your philosophy, those who answer the question in the "wrong" way (as you defined it) are not real persons? Or are they just chatting bots?
        • Jan 23 2014: The comment above was just how I would answer Keith's question if I were one of the 100 on the other end of the network. But if you mean how I would judge if I were the one asking questions, then I would not expect everyone to answer the right way, because there might not be a single correct way to answer. Instead, one way I could judge is to get all 100 answers, then compare them.
      • Jan 23 2014: You and I have very similar interests and knowledge backgrounds, I see; my guess is fewer than 1 in a billion can solve that problem without looking it up on the internet. That one was easy; would you like to try this one? Can you tell me how to sort data without moving it? That took IBM's best over thirty years; I did it over a weekend 46 years ago. If you get that one, I will give you a really hard one about quantum physics. I am curious to see if you have any limitations.
      • Jan 23 2014: Farrukh, why is your answer right and Yoka's answer wrong? I think you may explain it better than me...
        • Jan 23 2014: Why do you think a liar never says a single truth? What if they play tricks on you because they know you don't trust them? Do you think an intelligence test like this can help you find a real person you like or who has much in common with you in your real life?
        • Jan 23 2014: A concise way of explaining this is that those two (the people in the question) behave like quantum entangled particles. A longer verbal explanation is below:

          Hi Yoka, I think Keith said it's wrong because just asking either of them about the safe way out does not provide sufficient information to identify where the doors lead. You may just get lucky and ask the person that tells the truth, or it may turn out otherwise. The trick is to ask a question that works no matter which of them you are talking to. If you asked either one of them which way the other one would point to as safe, they could either lie or tell the truth. But if they lie, then the other person would tell the truth, and vice versa. With only two possible answer choices, the opposite of a lie is the truth, and of the truth, a lie.

          Therefore, no matter whom you ask, you either get the truth about a lie, or a lie about the truth. Thus you always get a lie, and now you can be sure about where the doors lead.
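
          Purely as an illustration (mine, not part of the riddle or the thread): the four cases above can be brute-force checked in a few lines of C. The door numbers, variable names, and the yes/no encoding of the question are all invented for this sketch, which assumes the standard two-guard setup:

          #include <stdio.h>

          int main(void)
          {
              /* freedom: which door (0 or 1) actually leads to freedom.
                 asked_is_liar: whether the guard we question is the liar. */
              for (int freedom = 0; freedom <= 1; freedom++) {
                  for (int asked_is_liar = 0; asked_is_liar <= 1; asked_is_liar++) {
                      /* Question: "Would the other guard say door 0 leads to freedom?" */
                      int other_says_yes = asked_is_liar ? (freedom == 0)   /* other guard is honest */
                                                         : !(freedom == 0); /* other guard lies */
                      int answer = asked_is_liar ? !other_says_yes : other_says_yes;
                      /* The compound answer always misreports door 0,
                         so take the door NOT indicated by a "yes". */
                      int chosen = answer ? 1 : 0;
                      printf("freedom=%d asked_is_liar=%d -> chosen=%d %s\n",
                             freedom, asked_is_liar, chosen,
                             chosen == freedom ? "OK" : "FAIL");
                  }
              }
              return 0;
          }

          All four combinations print "OK": the strategy works regardless of which guard is asked.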
        • Jan 23 2014: Thank you for your encouragement. I hope my assertive words didn't hurt you two. If any of my comments were inappropriate, please ignore them.
      • Jan 23 2014: “"What door would the other person point to, if I asked which one leads to freedom?" The door that was not pointed to is the one to go through. It does not depend on which one of them answers.”
        Why couldn't this answer be programmed into a smart robot?
      • Jan 23 2014: Thank you for your elaboration. I think I understand it. But I meant to take all three of us out of the prison. I can just let them open the door and follow them out. So if the liar wants to survive, he has to tell the truth.

        And actually, I don't think this kind of question can help me judge a person on the internet in real life. I'd be too lazy to answer it and would pass my attention to chatting with other people.
        • Jan 23 2014: Thanks for your patience, Yoka. I figured Farrukh could help better than I could; he is a smart and gentle guy. The question is a brain teaser and not at all easy to solve, but you just plowed into it anyway, and I give you a thumbs up for trying. I enjoy your comments and think you have a lot to offer, so hang in there and fire away anytime you like.
      • Jan 23 2014: The heap sort is a comparative sort, still an incredibly slow sort compared to mine. I'll give you a hint: my sort does not sort anything. It operates as fast as the records can be read, with no data movement and near zero CPU time. It was ingenious 46 years ago, and as far as I know it is still the fastest sort in the world. Some professors at Stanford challenged me to beat their sort because theirs is the fastest sort ever published. My sort has never been published, and aside from my professor, a retired Air Force mathematician, no one has ever seen my code. It was my first program, a simple assignment for class, and it was supposed to be written in COBOL; however, I wrote it in Fortran, which I taught myself, and he did not understand the code.
        By the way, I had a good laugh about your "quantum entangled particles" explanation. Also, if you have not seen The Princess Bride, by all means watch it some time. 3 min. part on logic: https://www.youtube.com/watch?v=0sPVEBAtwmg
        • Jan 24 2014: The first time I thought about this, I assumed that 'no data movement' meant using any memory (other than where the data already resides) for structures holding sorting information is not allowed. Also, I'm assuming linear complexity when you say "zero CPU time". Please let me know if you meant something other than the above. Also, does your design work with any type of data with the same efficiency? From what you describe, it sounds as if it is a method of accessing data as if it were sorted, while the order of the data entries remains unchanged.
          If the purpose is just to provide the sorted index of a requested entry, the selection algorithm for finding the kth smallest item in a set has linear complexity. But it is not ideal if random kth items are being continuously accessed.
          Thus I have a solution in mind that modifies, reuses and combines existing methods to create a generic non-comparative sort that works on a set of data (let the size of the set be 'n'), where each item has arbitrary length, and does so in linear time.

          Edit: I don't expect it to be the same as or similar to what you have in mind; it's just another way of doing things.

          The algorithm is explained in the next comment. It is going to be divided into a few chunks due to the limits of this conversation platform.
        • Jan 24 2014: Continuation of my previous comment:

          It does not modify the original data set, but produces an array of pointers (referred to as the map) of length n. The only other memory used is 256 integers (referred to as the workspace), which is no longer required after completion of the algorithm. I'm going to describe it from the lowest component to the highest. Also, I'll use C notation to avoid wordy sentences.

          The first component takes advantage of pointer manipulation and the underlying architecture. It is a partial counting sort. This stage takes in only a set of bytes.
          1. Reset the workspace to zeros.
          2. For each item e in the input set, perform workspace[e]++ // the offset of each entry in the workspace represents an item value; the value at that offset represents the number of items in the set that are equal to that value.
          3. For i=1 to 'size of workspace', perform workspace[i] += workspace[i-1] // the value of each entry in the workspace now represents the number of items in the set that are less than or equal to the value 'offset'.
          4. The first component does not proceed with constructing a sorted array; instead it provides a way to find the index of each item as if the set were sorted. The index of 'someItem' from the input set in a sorted set would be workspace[someItem]. The higher-level component will obtain the index for each item exactly once.

          The second component is a radix sort, but bytes are used for grouping instead of bits. The map is initialized such that map[i] contains the address of set[i]. At each iteration, the first component is used to divide each subsequent set into up to 256 groups, until the groups no longer need sorting, i.e., are of length 1. The actual items in the set are not moved around; instead, only the pointers in the map are modified, such that map[i] points to the ith item of the "sorted" set.

          The complexity is explained in the next comment.
        • Jan 24 2014: Continuation of my previous comment:

          Counting sort (the first component) has complexity O(n+k), where k is the maximum possible value of each integer item (256 in this case) and n is the length of the current subset. This is a stable non-comparative sort.

          Radix sort using a stable non-comparative sort has execution time Θ(d(n+k)), where d is the length of the items in the set and n is the size of the set. For arbitrary-length items, the upper bound should be O(p(n+k)), where p is the average length of the items in the set. P.S. Items of length less than p will no longer be in subsets of size larger than 1 after p iterations.

          I may not use the same method if the nature of the input is known beforehand.

          A final comment: in the process I discovered that this platform does not display anything after a 'less than' symbol.
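
          (An illustrative aside, mine rather than the thread's: a minimal C sketch of the byte-wise counting pass driving a pointer map, in the spirit of the description above. For brevity it uses the simpler least-significant-byte-first (LSD) variant over fixed 4-byte keys instead of the recursive 256-way grouping described, and all names are invented.)

          #include <stdio.h>
          #include <string.h>

          #define K 256  /* workspace size: one counter per possible byte value */

          /* One counting pass: order n items by the byte at offset pos, permuting
             the pointer map into out instead of moving the items themselves. */
          static void counting_pass(const unsigned char **map, const unsigned char **out,
                                    size_t n, size_t pos)
          {
              size_t workspace[K] = {0};           /* step 1: reset to zeros   */
              for (size_t i = 0; i < n; i++)       /* step 2: count each byte  */
                  workspace[map[i][pos]]++;
              for (size_t v = 1; v < K; v++)       /* step 3: prefix sums      */
                  workspace[v] += workspace[v - 1];
              for (size_t i = n; i-- > 0; )        /* step 4: stable placement */
                  out[--workspace[map[i][pos]]] = map[i];
          }

          int main(void)
          {
              const unsigned char *items[] = { (const unsigned char *)"dcba",
                                               (const unsigned char *)"abcd",
                                               (const unsigned char *)"bbbb" };
              size_t n = sizeof items / sizeof items[0];
              const unsigned char *map[3], *tmp[3];
              for (size_t i = 0; i < n; i++) map[i] = items[i];

              /* One pass per byte position, least significant first; the items
                 never move, only the pointers in the map are rearranged. */
              for (size_t pos = 4; pos-- > 0; ) {
                  counting_pass(map, tmp, n, pos);
                  memcpy(map, tmp, n * sizeof map[0]);
              }
              for (size_t i = 0; i < n; i++)
                  printf("%s\n", (const char *)map[i]);   /* abcd bbbb dcba */
              return 0;
          }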
        • Jan 25 2014: Farrukh, can you tell me if I'm not understanding your algorithm properly? I believe your algorithm is n-log-n and not linear, because the original question placed no constraints on the size of each input element.

          If each input element were allowed to be a random 64-bit integer, the size of your work space would be 16 quintillion bytes, which would be an issue.

          Am I missing something?
    • Jan 23 2014: I'm assuming you don't know which is which. It took me some time, but I think I figured it out. You ask either of them: what will the other guy say is the door to sudden death? The person will indicate a door; that's the one you want to take. Alternatively, you can ask which door the other guy will say is the door to freedom, and take the door not indicated.
      • Jan 23 2014: Very good, but Farrukh posted the answer 25 minutes earlier. Did you look it up or figure it out?
        Another way to phrase it is: Which door will the other guy tell me to go through? Then go through the other one.
        Good work, Farrukh, Yoka and Timo... remember it is the journey that is most important, and all of you took the same journey. That you got different answers should in no way spoil your journey, because there "is" no destination; the destination is an illusion. Buddha put it this way:

        "Nirvana is this moment seen directly. There is no where else than here. The only gate is now. The only doorway is your own body and mind. There’s nowhere to go. There’s nothing else to be. There’s no destination. It’s not something to aim for in the afterlife. It’s simply the quality of this moment."
    • Jan 23 2014: The question doesn't matter.
      If I choose life, I'll go to the door that the liar will show me, no matter what I asked.


      Freedom is a lie; the liar will show me the door to freedom if I ask for it. If I ask for the door to sudden death, he will show me the same door, to freedom, because he is a liar.
      If I ask the person who always tells the truth where the door to freedom is, he will show me the door to sudden death, because that is real freedom. If I ask him where the door to sudden death is, he will show me the door to sudden death, because he always tells the truth.
      • Jan 23 2014: This is truly a remarkable explanation. What do you think, Farrukh; can you see the beauty in her logic?
        • Jan 23 2014: Maybe you've pushed the wrong reply button? :)
          It's me, not Farrukh.
          I've experienced the beauty of logic on the way to..., but now I see the flaws.
          Frankly, I don't see any version of the explanation that can eliminate the uncertainty.
          In case there is such a version and you know it, please share!
        • Jan 24 2014: Interesting way of putting things together. If I were to further analyze this: under the conditions explained above, choosing a random door would be as good as talking to anyone. However, under this concept of the world, there is still a solution that leads to certainty. It's actually more efficient than the one in the standard concept: no post-thinking needs to be done, as in the other case. If you ask either of them which door the other one would point to as freedom, they will always point to freedom; it's guaranteed by the design of the preconditions. Since a liar always points to freedom, the truthful person would not alter the liar's decision. On the other hand, the liar would point to any door other than the one the truthful person believes to be freedom. What I find amazing is that formulating different preconditions allows formulating a logic that does not contradict those of different setups.
        • Jan 25 2014: "What do you think is the biggest problem in the world today...?"

          This must be one of the hardest questions to answer, since there are many problems and not all of them can be compared. Perhaps this would be a good question of choice for the above thought experiment. :)

          Science is key to moving everything forward, and computer science seems to be the beating heart of the current era. I am not sure what I would want to tackle first, but I would let my interests lead the way.
      • Jan 24 2014: It's not a question of whether I read it, but of when I read it.
        -this is a response to your response to Keith's response to your main comment. :)
        • Jan 24 2014: I see... you and Keith have a 'do-not-disturb-us' kind of conversation :)
          OK then, enjoy it!
        • Jan 24 2014: Farrukh, I copied your last response to a Word file and will go over it next week; I'm not as sharp as I used to be, so it will take me a while to figure out your method. I tried to email you, but the link did not work for me. Here is my email (keithwhenline@gmail.com); drop me a line and I will tell you as best I can remember how my sort works, for your information, and you can do whatever you like with it. I am curious about your background; I assume you spent time in or were raised in the Kazakhstan area and moved to the US to further your education. Also wondering what kind of impact you want to have in the world; with your knowledge you obviously have a wide range of possibilities. What do you think is the biggest problem in the world today, and are you willing to tackle it?
      • Jan 25 2014: Natasha, you are right of course; I have no right to give anyone more attention than someone else, and I apologize for offending you. It was totally my fault. The riddle I proposed was my way of telling whether I was speaking to a bot or a very smart person, and was another version of his original Turing-type suggestions. Upon reading Farrukh's background, which is very similar to mine, I wanted to see how deep the rabbit hole goes, and I found it has no bottom, to my delight. I got caught up in that, as you witnessed, and forgot my manners, and you have every right to call me on it, thank you. I hope you can forgive me, and I will try not to ever do that again.
        • Jan 25 2014: No worries, you don't have any chance to offend me!
          I mean, my ego is thin enough :)
          Your riddle and that episode from "The Princess Bride" gave me an aha moment, and I am grateful for that. Actually, those two are in perfect congruence. Probably I was a bit upset that there seemed to be nobody who was interested, but on the other hand, it's not easy to put into language what I've got, so it's OK anyway.

          Thank you!
    • Jan 23 2014: The question about a 100% liar and a 100% truth-teller assumes that you can find two such people, one who always tells the truth and one who always lies. I don't think that ever happens in reality, and it's not a premise of the original question, so I can't see how this classic logic puzzle is a solution to this Turing test.

      Am I missing something?
      • Jan 25 2014: I simply answered a question with another question... an old politician's trick, I guess.
      • Jan 25 2014: I agree, if the two persons can lie at the same time (not one lying while the other must be honest)... I think making them go through the door first could help get all the people free from the prison in reality. But if they're terrorists who want to kill you at any cost... :)
    • Jan 23 2014: "Would the other person tell me that the left-hand door leads to freedom?"

      If the person says 'Yes' and is lying, then the other person would truthfully say 'No,' so the right-hand door leads to freedom.

      If the person says 'Yes' and is truthful, then the other person would deceitfully say 'Yes,' so the right-hand door leads to freedom.

      If the person says 'No' and is truthful, then the other person would deceitfully say 'No,' and the left-hand door leads to freedom.

      If the person says 'No' and is lying, then the other person would truthfully say 'Yes,' and the left-hand door leads to freedom.

      So, if the person says 'Yes,' then the right-hand door leads to freedom. And if the person says 'No,' then the left-hand door leads to freedom.

      This is a logic question which would be much easier for an advanced computer to figure out than for a person.
      • Jan 23 2014: It was a riddle, and both Farrukh and Timo gave good logical answers.
      • Jan 25 2014: You do not know which person is which, and if you ask the question right, it does not matter. Farrukh and Timo both gave answers that would work.
    • Jan 27 2014: "Could you come over here?" Then I grab either one and throw him through a random doorway. If he's obliterated, that's the door not to go through.
      • Jan 27 2014: Now that is a solution without a question... :) Even more efficient.
  • Feb 2 2014: Cpmeurtos are not as good wtih tsaks taht ionlvve ataiooxrpimpn or coenttaxul gseuinsg. Hmunas lkie to mkae ssene of the wlord and wlil hvae an eiesar tmie maknig sesne of tihs pgaaarprh. Aslo, iedircnt qtsueions and culoonevtd gramamr are tircky for cpueortms to iperntret. If you foellwod tihs ieda so far, tehn tihs paaprargh may rnemid you of smoe ohter lgguaane or wrdloapy taht you hvae laerned smoe ohter tmie in yuor lfie. Waht is it?

    ¿ɹɐǝʎ ʎɹǝʌǝ sᴉɥʇ ɹoɟ uoᴉʇᴉʇǝdɯoɔ ɐ ǝʌɐɥ ʎǝɥʇ ʇ,uop ¿ʇsǝʇ ƃuᴉɹnʇ ɐ ʇsnɾ sᴉɥʇ ʇ,usᴉ 'ʎɐʍ ǝɥʇ ʎq pu∀
  • Jan 29 2014: Most, if not all, of the proposed answers in this thread have ignored the following clause:

    "Only trick is, while you ask questions, programs observe responses of everybody else and may or may not change behavior based on that. "

    It seems to me that, unless the computer application is the first to provide an answer, a natural language processing (NLP) facility based on machine learning and/or statistical methods could be devised which could infer a convincing response based on those previously observed. Obviously, if the computer is the 100th to answer, it is much more likely to be convincing than if it is first.

    Given that we're limited to text only, that rules out any sort of perception test of the CAPTCHA form. If it weren't for the facility to observe previous responses, you could try some sort of text-based emoticon, but the program would only have to repeat a previous answer to pass that particular test.

    The nearest I can get is some sort of internally referenced query constructed entirely on the mechanics of the test itself. This would depend on people answering sequentially and knowing which position they held in the sequence.

    Something like this:

    Using the numeric characters which represent your position in the sequence of responses (from 1 to 100) and additional punctuation marks found on a standard keyboard, create an original emoticon and explain what you intend it to mean. E.g., if you are the eighth person to answer, you could respond with "8=) means happy face" (this example may not be replicated in the test).

    Something like that...
    • Jan 29 2014: It's an interesting coincidence that this morning I was in a discussion about SMTP allowing only transfer of plain text, yet we use it to transfer anything in an email.
      • Jan 30 2014: Hi Farrukh,

        The truth, though, is that while SMTP is a text-based protocol, it has been extended by MIME and other RFC standards to allow encoding of the inline multimedia and attachments that we see today.

        For these to work it is necessary that server and client software understand the encoding standards and provide functionality by which the encoded data is presented in the intended format.

        I guess our assumption when responding to your question was that you were talking about the resulting/formatted communicated content (what a recipient might actually see and/or perceive) rather than the underlying character set processed by the communications protocol.
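
        (An illustration of my own, not from the thread: MIME survives a text-only channel by re-encoding arbitrary bytes as printable characters, most commonly base64. A minimal sketch of that encoding in C; the names are invented, and real mail software also wraps the output at 76 columns and adds the MIME headers.)

        #include <stdio.h>

        static const char tbl[] =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

        /* Encode n bytes as base64: every 3 input bytes become 4 characters
           drawn from a 64-letter alphabet, padded with '=' at the end. */
        static void b64_encode(const unsigned char *in, size_t n)
        {
            for (size_t i = 0; i < n; i += 3) {
                unsigned v = (unsigned)in[i] << 16;
                if (i + 1 < n) v |= (unsigned)in[i + 1] << 8;
                if (i + 2 < n) v |= in[i + 2];
                putchar(tbl[(v >> 18) & 63]);
                putchar(tbl[(v >> 12) & 63]);
                putchar(i + 1 < n ? tbl[(v >> 6) & 63] : '=');
                putchar(i + 2 < n ? tbl[v & 63] : '=');
            }
            putchar('\n');
        }

        int main(void)
        {
            const unsigned char bytes[] = { 0x89, 'P', 'N', 'G' }; /* PNG magic */
            b64_encode(bytes, sizeof bytes);                       /* iVBORw== */
            return 0;
        }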
        • Jan 30 2014: Yes, in the thought experiment above the representation is text.

          About SMTP: since MIME is also text and is part of the content, I would assume it's the same protocol.
  • Jan 28 2014: What does the color "red" feel like?
  • Jan 27 2014: I have concluded that this is a loaded question. It is meant to illustrate the point that we are no different from the ideal synthetic brain!
    The ideal robotic replica of a human brain can logically only exist if it mirrors/replicates the human mind to perfection.
    This means that there is no question that can exist to differentiate between the two. Even though Farrukh does not specifically say it must be ideal software, he does say it must be highly developed, which implies perfect or near-perfect replication of human thinking.
    Or maybe Farrukh is a TEDbot and we are being played like lab rats!
    Lol.
  • Jan 26 2014: I would ask: "Considering the rumor that your mother sleeps around a lot, how many sweaty gardeners did she sleep with yesterday if she slept with 3 at midday and 2 at 5 pm?"
    ...
    If your answer is 5, you're speaking to a robot.
    • Jan 27 2014: Wow... trick him with the math questions nicely done...
      • Jan 27 2014: Lol.
        Yes. The promiscuous mother taunt is clearly a diversion.
        While the robot is processing a 'human' response to the emotive question, it will conclude it does not have a mother and will be so overcome with programmed emotion that a calculation error is bound to occur.
    • Jan 27 2014: Unless the robot has been programmed to recognize that people dislike the idea of their mothers being promiscuous and is programmed to respond accordingly.
  • Jan 25 2014: This is a terrific question. I would ask: whom do you respect most, and why?
  • Jan 23 2014: I would ask the "person" to create something for me, like a story.
  • Jan 22 2014: Are you Binary?
  • Jan 22 2014: None. I would demand a face-to-face meeting.
  • Jan 22 2014: Hi, why would you only chat with others on the internet? I think sometimes you can meet them to find out who they are. Not everything can be judged by language; behaviors and manners are also important~. Some people's talents cannot be described by language either; when you are with them, you may get to know them.
    • Jan 22 2014: By limiting communication methods to chatting, people would not have their decision affected by visual and sound input. To judge someone, a simple chat is insufficient, but it may be just enough to identify whether someone is a person or a really smart chat bot.
      • Jan 23 2014: Sorry, I have to disagree; communications are not job interviews. Humans' feelings sometimes can only be experienced.
    • Jan 22 2014: "Why would you only chat with others on the internet?"


      What Farrukh is proposing is a variation of the Turing test.

      http://en.wikipedia.org/wiki/Turing_test
      • Jan 23 2014: Thank you for your explanation.
  • Jan 22 2014: Do you want to go for coffee?
  • Feb 3 2014: Believe it or not, I have designed literally hundreds of questions aimed at testing humanity in my little language parlor. Such questions must first of all aim at the creative center of each human being. In other words, they shall not test knowledge, only wisdom. Testing wisdom safeguards against random answers, as they must bear a lot of sense in order to be accepted. Having said that, I would type into the chat window the following question, for example: how do you see the difference between being loyal and being faithful?
  • Jan 30 2014: Generally speaking, machines are purposed to increase the capacity of human beings to exert our will upon the world around us; whether that will is purposed towards destruction or creation is irrelevant. From a more evolutionary perspective, there are many other species which perform the same behavioral tendency of using technology as a means of exercising their will; in such cases, technology is best understood as a naturally occurring material structure which has been re-purposed to perform an unexpected function. This may come as a surprise, but hermit crabs are not born with a shell, nor does that shell which they are not born with grow in tandem with their body. Hence, they manipulate a naturally occurring material structure into performing a function unique to the user-organism's survival needs/wants.

    Specifically regarding software, we must first understand that computers are designed in such a way as to provide us with accurate information based on whatever inquiry we provide. In order for software to serve as an effective tool in that regard, it must first be able to interpret that information accurately, which requires a standardized code by which to interpret the inquiries and information presented by the user-organism. This standardized code is written language, which the artificial intelligence will be hardwired to interpret; it will not interpret misspelled words as intentionally misspelled, but rather as errors in input, which will prompt an error message, or an alternative suggestion which the software interprets as a more likely intended message.

    It's easy. Misspell the words repeatedly. If you're conversing with a human, they will eventually catch on that you're a terrible speller and stop prompting you to clarify your meaning (at least for most kind/respectful people, this is true). A machine will not.

    At least not for many years.

    Now if you please, define "human".
    • Jan 30 2014: Hi Kyle,

      Some very interesting points, and I agree that we're quite a way off (but perhaps not that far off) from software which is able to fool most of the people most of the time.

      Before Bryan jumps in, however, I should point out that there are some failings in your argument specifically related to software capabilities and design.

      Particularly in relation to your ideas about misspelling: as a suggestion, try misspelling a search term in Google and see what happens.
      • Feb 2 2014: Hi Stephen,

        Google's spelling correction functionality is precisely what I was referring to as the "alternative suggestion". If you misspell a word in the search bar and Google tries to correct you, just delete the last letter and re-enter it; then Google will let you enter the incorrect spelling.
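
        (A side note of my own, not Kyle's: the classic building block behind "did you mean" suggestions is the Levenshtein edit distance; a correction is plausible when a dictionary word sits within distance 1 or 2 of the typo. A sketch in C, with invented names; real spell-checkers layer word-frequency and keyboard-distance models on top.)

        #include <stdio.h>
        #include <string.h>

        /* Minimum number of single-character insertions, deletions and
           substitutions needed to turn string a into string b, computed
           with a single rolling row of the DP table. */
        static size_t edit_distance(const char *a, const char *b)
        {
            size_t la = strlen(a), lb = strlen(b);
            size_t row[64];                  /* assumes words under 64 chars */
            for (size_t j = 0; j <= lb; j++) row[j] = j;
            for (size_t i = 1; i <= la; i++) {
                size_t prev = row[0];        /* diagonal value from last row */
                row[0] = i;
                for (size_t j = 1; j <= lb; j++) {
                    size_t cur = row[j];
                    size_t best = prev + (a[i - 1] == b[j - 1] ? 0 : 1);
                    if (row[j] + 1 < best) best = row[j] + 1;         /* delete */
                    if (row[j - 1] + 1 < best) best = row[j - 1] + 1; /* insert */
                    row[j] = best;
                    prev = cur;
                }
            }
            return row[lb];
        }

        int main(void)
        {
            printf("%zu\n", edit_distance("mispell", "misspell")); /* prints 1 */
            return 0;
        }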

        You could also potentially use multiple languages within the same sentence, making it more difficult for the software to interpret your meaning. Unfortunately, this also eliminates the same ability in most humans (myself included).

        Another possibility would be to use words that sound the same but have different meanings (butt eye four get the litter airy term re-guarding these types of wards). Granted, phonetic interpretive software could decode the meaning by choosing the most logical - or grammatically correct - sentence from a list of all possible intended messages.

        If, however, you layer the use of multiple languages with phonetic encoding, you'd be able to fool just about any machine. Unfortunately, that's because you'd also be able to fool just about any human as well. Because let's be honest: the only thing which limits the ability of software to mimic humans is the ability of humans to first understand our own programming, and then for programmers to codify the laws of our programming into a language the computer can understand.

        That's why I ended with "define 'human'": because the more accurate our definition of our individual selves becomes, the more inclusive it will be of the external world. It's paradoxical in nature, because we as individuals only exist because of everything else that is and everything else that was. So whenever someone asks to differentiate between man and machine, they are fundamentally asking to differentiate between a clone of our own thought. Thus the degree of differentiation is solely determined by the cloner's capacity to reproduce themselves within the machine. Which one is more human? The clone? Or the cloner?
      • Feb 2 2014: Who's to say that machine and man are even separate beings? Is it not that machines are a direct extension of ourselves? An extension which in turn alters the physical world around us, and thus alters our mental representation of that world from which we construct our perception of reality and of ourselves? Or more profoundly, an extension which has enabled us to formulate those conceptions based on the lives of billions of individuals as opposed to those select few within ears'-range?

        Artificial intelligence is simply our mirror image. I wonder, were you alluding to that very same notion in your narrative below? If so, then I salute you sir.

        Just don't go and google "butt eye"
        • Feb 2 2014: Really interesting ideas, Kyle.

          So interesting in fact that it's difficult to work out the scope/boundaries of the topic we're now discussing.

          From my perspective, I'm now thinking about the future of AI and whether Strong AI or AGI (Artificial General Intelligence) will ultimately converge with or diverge from human consciousness in terms of a shared set of goals and values (assuming we can agree on a fundamental core set that humanity may share, which sometimes seems difficult to do!).

          Certainly as far back as Asimov's three laws, it seems that this convergence or ultimate "singularity" has not been seen as a given, to the extent that humanity may ultimately need protection from an artificial/emergent intelligence or consciousness which diverges from ours in a way which is alien, and which may ultimately see humanity as a hurdle to attaining an intrinsic set of goals which we can't foresee.

          I've read a bit about the ideas expounded by Bostrom and Omohundro, but truthfully haven't got my brain around most of it; however, it seems that the assumption of a "friendly" AGI whose morality and values converge with ours is still pretty speculative.

          I accept that this is all a bit HAL/Matrix/Terminator type stuff, and as an idea it did influence my selection of passage a little.

          “Sam was brushing her hair when the girl in the mirror put down the hairbrush, smiled & said, ‘We don’t love you anymore.’”

          Really reaching now: it might be that while Sam's reflection is dependent on a man-made mirror for its existence, the reflection is an extension of Sam. There may be a point at which Sam's reflection works out how to step out of the mirror and walk around (or pulls Sam through!). At that point, who's to say what happens?
        • Feb 2 2014: Actually, what you are saying, correct me if I am wrong, is that it is not quite clear who is whose extension.
          Is technology an extension of man, or vice versa?
          Tricky question! :)
          We tend to externalise our tools: this is me, that is it. But it's naive. Both are involved in both; we shape our tools and they shape us. It's one process.
          And if "Artificial intelligence is simply our mirror image" is true, and I think it is, then we can define or at least recognize ourselves in the image: what we like in it, what we don't, and most importantly, what scares us. Why do we feel threatened by AI?
          - Because they may become like us.
          Biblical stuff, isn't it?
          In other words, they may use their intellect (which we think is ours) to serve their own interests.
          Who are we then?
  • Jan 30 2014: Over and over, I see people proposing questions that are far, far, far less clever than the composers think they are. Given the conditions of the test, NONE of the questions proposed would be a strong method to determine the identity of the "conversation partner" as human or not. They all make some extreme and unnecessary assumptions. In other words, "the best of such systems" is nothing better than early 1980s technology as far as most people here are concerned. IT IS NO LONGER 1980!
    • Jan 30 2014: Hello Bryan,

      I've had a wee look through the thread and noticed you are pretty passionate about this topic, and perhaps just the teensy-weensiest bit frustrated.

      Fair enough. I'm someone who has worked in the software development game a while, albeit not particularly at the coal face as a developer these days. I therefore understand that many of the contributions would be pretty trivial to deal with for even a straightforward algorithm which simply retrieved responses from a database based on parsed keywords. Having said that, the database would still have to be populated. Given your opinion that most answers could be "pre-programmed", that suggests a very prescient programmer with very sore fingers!

      I also accept that my little contributions may be trivial in 2014; however, I'm obviously a bit behind the curve, because I'm not aware of any story-generating algorithm which could deal convincingly with the test I set out below.

      Accepting therefore the highly likely situation that I am woefully out of date, could you at least point me in the direction of the algorithms you have in mind? If you don't have anything to hand, perhaps just jot down the theoretical basis, and I can follow up on my own.

      Either way, I just assumed this was a bit of fun, and therefore have no axe to grind. I'm very happy to be educated, so please feel free to update me.

      Finally, given how easy it all would appear to be for someone sufficiently informed of technology post-1980, you might think about trying for the Loebner Gold Award ($100k) this year. Apparently the best and the brightest have not deigned to pick that up quite yet.
    • Jan 30 2014: Agreed. We would all benefit from your wisdom, if you would but share something clever and innovative. Or perhaps the origin of your fury is that you find yourself among those without a clever solution; or perhaps not.

      But I most certainly agree with you on one thing: the year is not 1980.
  • Jan 30 2014: Another route would be to try some sort of crowd-sourced narrative/story approach.

    The interrogator starts a short story (not available in the public domain) by providing the first paragraph, then asks the respondents to provide the following two paragraphs.

    In my experience it is quite difficult for a computer application to generate this sort of output while maintaining the flow and context of the narrative, particularly when the volume requested is non-trivial (say, a couple of paragraphs).

    Where it would become particularly interesting is if each respondent took turns following on from (and with knowledge of) all previous contributions. Combinatorial explosion, even after only a few contributions, would mean that a simple matching/data-retrieval procedure is likely to fail.

    This isn't entirely reliable, I admit, because it might be argued that a non sequitur is as likely to be generated by a human as by a computer. Either way, it would be an interesting experiment.

    Quite a famous example of this approach was undertaken via Twitter a few years ago (probably lots of more recent examples exist too). The narrative was started as follows:

    “Sam was brushing her hair when the girl in the mirror put down the hairbrush, smiled & said, ‘We don’t love you anymore.’”

    Asking any respondent to provide the next two paragraphs would stretch the comprehension and creativity of many humans, but it would be interesting to see what a software algorithm would come up with.

    Could work?
    • Feb 2 2014: I like this test. The only loophole I can find would be the doubt of the human doing the testing. Is it not possible that the most nonsensical machine could be interpreted as the most creative human? After all, we humans do enjoy a good metaphor or turn of phrase. Unless the syntax is entirely nonsensical, I don't see why this couldn't be the case. Some of the greatest authors and directors are notorious for breaking up the story line so much that you have to really think in order to understand the narrative. Think of Cloud Atlas, for example: a phenomenal movie, confusing as hell, until you reach the end and see the full story revealed.
  • Jan 30 2014: I think this is a good idea...

    http://napolesdenniscunanan.com/
  • Jan 30 2014: I would present a number of jokes to the system and note the response. Would the incongruities that seem funny to humans just be puzzling to a machine?
  • Jan 29 2014: How long do you spend on the loo?
  • Jan 29 2014: I would ask it: "What color is the text I'm typing?"
    This question asks the person to identify something that I can verify straight away, but I think the question might be too complex for an AI to understand or give a sufficient answer to.
    • Jan 30 2014: "I don't know, because I don't know your browser settings."--true answer, very human answer, but could be pre-programmed.
  • Jan 28 2014: (In Italian:) Would the machine be able to describe the different operations that must be performed in order to say the two following sentences:
    I) "Bottle with cork",
    II) "Bottle and cork". Thanks
    • Jan 30 2014: "I don't know that language."
      • Jan 30 2014: Dear Bryan, excuse me, I speak only a little English. This is the problem: the machine does not know the difference between the following sentences:
        I) "bottle with cork";
        II) "bottle and cork". Thanks
  • Jan 28 2014: I would not ask a question. I would provide an insult. The computer's response is apt to be a request for clarification, whereas the human will know just what you meant and reply in kind.
    • Jan 28 2014: Unless the computer has been programmed to react appropriately to insults.
  • Jan 28 2014: "Tell me how you feel about your childhood." or "Tell me a story about your childhood." etc...

    Few programmers/AI would have the foresight to have a pre-constructed story, let alone how they "feel" about it.
    • Jan 30 2014: So you would only sift out the crudely-done simulations. Wouldn't work in all cases, though.
  • Jan 28 2014: I am an automated software , my creator has not programmed me with the content to reply to your question.
  • Jan 28 2014: "Are you human?"
  • Jan 28 2014: What is your sex?
    • Jan 28 2014: And if it is programmed with a built-in answer?
        • Jan 28 2014: I would ask what its preference is.
        • Jan 30 2014: And if it's got a built-in answer for that? Anyway, you are allowed only ONE question, and you already expended it.
  • Jan 27 2014: I would think that humour would be a good way to suss it out, but it wouldn't take the form of a single question.

    Considering the possibility of the program lying, it's impossible to garner any useful information with a single question.
  • Jan 26 2014: Give them a very difficult math problem that can't be solved by humans.
    • Jan 27 2014: And if it is programmed to be able to lie about being able to solve difficult math problems?
  • Jan 26 2014: Are you well, my friend?
    • Jan 27 2014: I could program a machine to answer a whole host of questions about well-being with "I'm doing okay, thanks."--I'm not even a programmer.
  • Jan 26 2014: Can we meet for coffee?
    • Jan 27 2014: That answers nothing. The machine could lie and say "Yes."
      • Jan 27 2014: Ah, but when it turns up for the coffee, I'll know instantly. You gotta think outside the square, man. It's the only thing that separates us from the whales...
        • Jan 27 2014: But if it's lying it won't turn up. Instead, you'll get a text message saying that something came up, sorry. So, then it's either a machine or a human jerk.
      • Jan 27 2014: Will that be the defining parameter for intelligent machines in the future? Well, it's either a robot or a jerk. Love it!
  • Jan 26 2014: Ask them how many toes they have .....
  • Jan 25 2014: I say my name in three different languages, and say a different name every time, and eventually it tells me that it's a robot.
  • Jan 24 2014: If you eat large amounts of pinto beans, do you pass gas?
    • Jan 25 2014: There is a genius behind this question! The question is a yes/no question. A "Yes" answer implies a digestive system, and no artificial intelligence has a digestive system. Genius!

      On the other hand, my grandma taught me that if you use lots of water in the pot and boil the beans without stirring them, two things happen. First, the beans swell and float in the water. Second, they cook completely as they float in the boiling water. Stir the beans (season them) AFTER they are fully cooked. Cook beans that way and there will be no gas!

      If you stir the beans, that breaks the delicate capsule on the bean. Water soaks into the internals of the bean and they don't float anymore. When that happens it changes the chemistry inside the bean and that creates conditions where gas develops inside the digestive system.

      I haven't attempted this in about 30 years or more. But if memory serves, there is no digestive gas produced for beans prepared the way that Grandma taught. So your proposed question might not work with a really clever AI (clever - like me!).

      Has anyone out there seen the movie "Her" yet? I'm a big fan of Joaquin Phoenix. How would an AI like "Samantha" spend dollar bills? We need Bitcoin.
      • Jan 25 2014: Your Grandma sounds pretty wise. Of course there are tricks to reduce the flatulent effects of the beans, though you can only minimize them.
        I guess I should maybe have added to the statement: If you eat large amounts of pinto beans and a side order of collard greens & ham hocks, do you get gassy?
        That would have covered all bases .
      • Jan 27 2014: What if the machine can lie?
    • Jan 27 2014: What if it's programmed to lie?
      • Jan 27 2014: That wasn't part of the criteria of the question forum.
  • Jan 24 2014: I think it's not realistic to identify the perfect machine using only one question.
    Maybe if I could ask a lot of questions, I would try to ask the same questions, but in different words. So, if the answers differ - profit :)
  • Jan 24 2014: Have you heard of the Turing test?
  • Jan 23 2014: The question is not well posed.

    Is it one question and one answer, or is it a chat? If there is just one question, then what is the meaning of "while you ask questions"?

    What is the point of people/programs observing responses from other people/programs if they have already answered the question? Who answers first? How long must we wait for an answer?

    Your own question seems to have identified a lot of humans, because they have put more effort into making sense of the question than into analysing it. Ironically, it is a computer's speed and ability with numbers that sets it apart, but asking a question to reveal such an ability is pointless, because the program is not required to show its abilities.

    I would suggest that any question that can identify a computer program can lead to a new program that will successfully imitate a human answer, i.e., there is no universal identifying question.
    • Jan 24 2014: I didn't write this question, but I'll answer anyway, based on my assumptions:

      One question. The "chat" means you're limited to text only, no images or graphics. The point of observing the responses is the ability to learn from previous responses. It does not matter who answers first. One has a limited time to answer, although the exact amount of time doesn't really matter as long as it's long enough - say, an hour per person.

      In terms of putting more effort into making sense of the question, in my experience, that can be the mark of a good question. Zen koans exemplify this.
    • thumb
      Jan 24 2014: When I ask a question, I try not to add any variables to the equation that are not required for the question to achieve its goal, or that could limit the types of answers the question could get. Those variables are free to be defined by anyone answering the question however they want. Fewer things set in stone means fewer things to misunderstand, and therefore less deviation from the purpose of the question. Factors left out are either non-critical, redundant, or intentional.

      Sometimes you have to leave terms undefined when they are common knowledge. "Chat" in this very question is just any form of information exchange between two parties in which the parties take turns, each message is either a request or a response, there is at least one request and one response, and each party has participated at least once. That alone does not pin down the experiment, which is why there are limitations such as the type of information (quote: "no sound or image, just text"), the interaction environment (a network), and the actions of one party (the participant). I did not define what or how the other 100 people can use chat. As far as the experiment is concerned, they are limited only by the information exchange format (text) and the chatting environment. Everything else allowed in chat is open to them: remaining silent, providing an unending stream of answers, tricks, misinformation, even responding with a question. The important thing is that none of those matter in the experiment. (A small sketch of this minimal notion of chat follows.)
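      As a sketch (the encoding is mine, inferred from the definition above), that minimal notion of chat can be written as a check over a transcript:

      # Each turn is a (party, kind) pair; kind is "request" or "response".
      def is_chat(transcript):
          parties = {party for party, _ in transcript}
          kinds = {kind for _, kind in transcript}
          takes_turns = all(a[0] != b[0]
                            for a, b in zip(transcript, transcript[1:]))
          return (len(parties) == 2                      # two parties, each participating
                  and takes_turns                        # the parties take turns
                  and {"request", "response"} <= kinds)  # at least one of each

      print(is_chat([("judge", "request"), ("subject", "response")]))  # True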

      If you think otherwise, perhaps the intended purpose of the thought experiment above is different from what you think it should be. Maybe it's to test the machines, or the other 97 people, or something else, or all of the above. I don't want to give out spoilers on how I am going to conclude this conversation thread when it ends; I think the information in the question is sufficient to an extent.

      I really appreciate the criticism, thank you.
  • thumb
    Jan 23 2014: When Alan Turing devised this test in 1950, he opened his paper with: "I propose to consider the question, 'Can machines think?'" After further reflection, he replaced that question with: "Are there imaginable digital computers which would do well in the imitation game?" These are related questions.

    The key part of the Turing test is that it forces us to work on an information-only level. You are not permitted to physically inspect the other person/computer, see images of them, or hear them, or do anything else at the physical level (like ask them to mail you a picture of themselves).
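    That constraint is easy to state in code. A minimal sketch (all names are mine, nothing standard): the judge's entire interface to a participant is a single text-in, text-out method.

    from abc import ABC, abstractmethod

    class Participant(ABC):
        # A human or a machine; the judge cannot tell from the type alone.
        @abstractmethod
        def respond(self, message: str) -> str:
            """The only channel the imitation game permits."""

    class CannedBot(Participant):
        def respond(self, message: str) -> str:
            return "That's an interesting question."

    def interrogate(p: Participant, question: str) -> str:
        # The judge sees only this returned string: no images, audio,
        # handwriting, or physical inspection.
        return p.respond(question)

    print(interrogate(CannedBot(), "Can machines think?"))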

    Software like IBM's Watson is the undisputed world champion of the game Jeopardy, which seemed until its victory a uniquely human endeavor (see http://www.ted.com/talks/ken_jennings_watson_jeopardy_and_me_the_obsolete_know_it_all.html). Here we are in the year 2014, and we are on the verge of producing quantum computers, which may allow us to magnify the power of Watson billions of times. Nanotech and 3D printing will give such software real, physical "legs" of whatever sort they or we would like to create. We're continuing to map out the human brain at a neural level, at an information-theoretical level, and so on - all of which is synergistic with Watson-type software, quantum computing, and the ever-expanding reach of our collective human intelligence, to which we (and our software!) all have greater and greater access via the Internet. Oh my, that's a mouthful...

    I guess what I mean to say is that Turing Test type questions will soon become a moot point. How "soon" is up for some debate, but I think that's another debate. As is the question of whether we (or you!) could ever in principle consider such software, sufficiently advanced, as "alive and self-aware" in the same sense we consider ourselves that way. If software were ever conferred such status, should it have the same moral and legal rights we humans have manufactured for ourselves?
  • thumb
    Jan 23 2014: Maybe I would ask: would you send me a postcard? Or would you mail me a copy of your I.D.? And I'd tell them the reason - that I'm trying to identify who is the person and who is the machine - because this fellow on TED asked me to.
    • Jan 26 2014: That's unique!
      • thumb
        Jan 26 2014: Well, I think it's cheating on what the conversation host is asking, Elizabeth, because it's not just asking a question but requesting an action?
        • Jan 29 2014: Kind of, it is. Then would you care to ask different ones instead, Greg?
          Would there be peculiar ways - ones that certainly aren't, quote, "cheating" - to tell that "it", the thing you're talking to, is not some sort of robot?
      • thumb
        Jan 29 2014: Well, there might not be, Elizabeth, because I think he's describing a robot that is programmed to answer any question, and to lie. For instance, I could ask it how many times a day it goes to the bathroom, but if it can lie, it could say five or whatever. You or I should ask the guy whether the machine is allowed to lie; if it is allowed to lie, there may be no question that can differentiate it. If it's not allowed to lie, then it would be easy to differentiate?
  • Jan 23 2014: People are just physical manifestations of nature's algorithms (i.e. we are electric beings). If you doubt this, think of what happens when the minds of Alzheimer's patients malfunction: we "watch them die" (that is, watch their hardware fail).

    The only way to tell a person from a computer is to have evidence; if the model is sufficiently advanced, it should be able to replicate human behavior well enough that we cannot tell the difference.
    • thumb
      Jan 23 2014: I agree 99%. I'm not sure I would require that we be electric (or electromagnetic) beings, as I believe at the core we are information which can be expressed through non-electric means.

      This raises the question: at some point, does a sufficiently advanced replication of human behavior actually become "alive" in some sense?
  • thumb
    Jan 23 2014: Hello Farrukh....nice to connect with you again....interesting question!

    I might ask...how do you feel about living the human life and why do you feel that way?

    I may not be able to distinguish the three extremely good automated responders if they were changing their responses based on others' responses - whether those responses came from real people or from programs simply mimicking the responses of others. That being said, however, I probably could identify those who were genuinely living life with gusto. Sometimes the responses of real people read like a very good software program, so I would try my best to "feel" the responses:>)
  • thumb
    Jan 23 2014: I think I would ask - what is your fondest memory?
  • thumb
    Jan 23 2014: Can you give me your Skype contact? ...maybe... :)
  • Jan 23 2014: How do you know that you are not talking to yourself ?
  • thumb
    Jan 22 2014: Hmm...What do you smell like?
  • Jan 22 2014: What would you do if you saw a toddler on the edge of a pool, with no adults around, and the toddler suddenly fell in?

    A computer would call for help, get someone, or say something. A human would act: jump in, grab, pull out, etc.
  • Jan 22 2014: Hi Farrukh, I think as long as you are true to yourself, then surely you can recognize it immediately.
  • thumb
    Jan 21 2014: The Turing test. There is a vast literature out there on this topic, and the study of it is a worthy endeavor, especially for computer scientists. You can conduct similar tests between men and women. In my years as a CS professor, I have conducted many such experiments. I would summarize the results as follows: being able to deceive an interrogator does not make the conclusion a fact. That an advanced algorithm can convince a wily interrogator it is human does not make it so. Nor does it mean the program is alive and sentient.

    In the future I am sure we will converse with machines just as we do with people. But I hope the machines have a lot more capability to analyze, summarize, and interpret than most humans. Otherwise, what's the point? No matter how complex and sophisticated our machines become, they will always be tools which enhance our humanity. The other possibility is far too frightening to conceive.