Jeffrey Fadness


Are we on the brink of creating a human-like digital mind?

The human brain contains some 100 billion neurons, grouped into specialized functional zones and connected by a hundred trillion synapses - the neurons acting as individual data-processing and storage units, and the synapses as the data-transfer cabling connecting them.

Comparing its processing ability to a supercomputer's, it's been estimated that the brain can perform more than 38 thousand trillion operations per second and hold about 3.6 million gigabytes of memory. Equally impressive, the human brain is estimated to execute this monumental computational task on the equivalent of a mere 20 watts of power - about the energy needed to power a single, dim light bulb. With today's technology, a supercomputer designed to deliver comparable capabilities would require roughly 100 megawatts (100 million watts) of power - enough to satisfy the consumption of tens of thousands of households.
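The arithmetic behind these figures is easy to check. A quick sketch, using only the estimates quoted above (they are rough estimates, not measurements):

```python
# Energy efficiency implied by the figures above (all rough estimates).
brain_ops_per_sec = 38e15     # ~38 thousand trillion operations/second
brain_watts = 20              # ~20 W
supercomputer_watts = 100e6   # ~100 MW for comparable throughput

brain_ops_per_watt = brain_ops_per_sec / brain_watts
machine_ops_per_watt = brain_ops_per_sec / supercomputer_watts
efficiency_ratio = brain_ops_per_watt / machine_ops_per_watt

print(f"brain:   {brain_ops_per_watt:.1e} ops/W")
print(f"machine: {machine_ops_per_watt:.1e} ops/W")
print(f"the brain is ~{efficiency_ratio:,.0f}x more energy-efficient")
```

On these numbers the brain comes out roughly five million times more energy-efficient per operation than the hypothetical supercomputer.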

An ambitious $1.3 billion project, the Human Brain Project, was recently announced in Europe to simulate a complete human brain in a supercomputer. A similar U.S. effort planned by the National Institutes of Health (NIH) is called the Brain Activity Map project.

Assuming we learn enough from these efforts to design a new architecture in computer processing which can approximate the ability of the human brain - what's to stop us from creating the cognitive faculties that enable consciousness, thinking, reasoning, perception, and judgement? After all, we as human beings develop these abilities from data we acquire over time through sensory inputs connecting us to our experiences, and from information communicated to us by others.

Think about it. Is there anything related to our experience - be it physical, historical or conceptual - that cannot be described in language, and therefore be input as executable data and programming to create a human-like digital mind?

  • thumb
    Mar 5 2013: Any computer relying on data in binary or digital form, works on the premise that something exists or it doesn't - hence the ones and the zeros. No matter how many ones and zeros you squeeze into the spaces between them - you will always have yet more spaces between, no matter how infinite the number.

    I've often wondered about that 'space' between and what actually might exist there in terms of the cognitive ability of a human mind. I think it is where our ability exists to think empathically, aesthetically, with feeling, with emotion. If there's any truth in that, then no matter how sophisticated a digital mind is, it could never match our own.
    • Comment deleted

      • thumb
        Mar 31 2013: Thank you Don - respect to you too!

        I wish you a peaceful Easter.
  • thumb
    Mar 18 2013: In my subjective judgement, Chris Kelly is the star of this discussion. I guess I think so also because Chris Kelly's arguments and explanations match my own thoughts regarding what the nature of our mind might be and how the brain might relate to mind.

    Take for example the radio receiver. We all agree that the sound we hear from a radio originates far away from the radio itself. The radio is just a medium: it accepts the sound in the form of electromagnetic waves, created (by that same sound) in a broadcasting station far away, and turns them back into sound we can hear. But now somebody takes scissors and cuts a major wire in the radio, so that we stop hearing any sound from it. Can we deduce from this that the radio receiver is the originator of the sound? Can we say that putting an end to the sound just by cutting a wire means the sound was an exclusive creation of the radio? I think we all agree that the answer to both questions is no. Now suppose we could take this receiver to the Middle Ages, before the discovery of electricity, magnetism, etc. If we asked people of that time the same questions, their answer would be yes - definitely yes.

    Jeffrey Fadness who started this discussion asks in the sub-headline text:
    "what's to stop us from creating the cognitive faculties that enable consciousness, thinking, reasoning, perception, and judgement?"

    These cognitive faculties need something without which they cannot exist, and which, at least today, looks like it cannot be created within any computer: the experiencer.

    In an essay I read, the author put it very nicely and precisely. He wrote something like: the brain is not the creator of our thoughts, memories and knowledge; it is just a display of them, in the form of electric currents and chemical activity.

    In other words, the Brain (or at least its activity) is not the cause for our thoughts and memories, but the result of them.
    • Mar 18 2013: If, as you say, human consciousness cannot be recreated in the end - then why not transfer one?

      Isn't human consciousness the path that electrical activity takes within the brain, which is different in every body? So why not map that path and transfer it? And use a real human consciousness to teach the digital environment how to use everything it has to recreate a part of consciousness?
      • thumb
        Apr 2 2013: I have nothing against transferring, mapping or learning the human brain's activity. But it should be kept strictly in mind that in doing all this, we are only simulating, replicating or mimicking the physical effects of the brain. As I tried to explain with various examples, transferring or simulating these effects in a computer does not recreate the consciousness or experience itself - just as an ultrasound image on a screen of an embryo in the womb is not the embryo itself, but only an electronic display of it for our eyes or consciousness to learn from, and nothing more.

        This discussion originated from the very idea/argument of creating a digital human mind, so my original comment was aimed against that idea/argument, not against a digital simulation of the mind's physical display or effects as they appear in the brain.
        • Apr 3 2013: Yes - testing and simulating everything is good for our understanding of ourselves.

          But what if they were to create a digital mind that isn't based on any human so far...
          What might happen in the worst possible scenario?
      • thumb
        Apr 3 2013: Actually this is a reply to your last comment on mine.

        Your question deals with a problem mankind has already faced, and is still facing in other similar forms.

        See what's going on now with nuclear energy or dynamite. The discovery of nuclear energy was originally a pure outcome of mankind's curiosity and ambition to understand more. Dynamite was an outcome of the ambition to ease the work of paving roads. But as we all see now, they have become an enormous threat to our very existence.

        But despite all this, I think we should not and even cannot restrict the human aspiration to know more, to make a progress, etc. What should be restricted is only the misuse of any discovery or progress.

        So, if the scientists were really able to create a digital mind, that would be a tremendous achievement. What we would then need is to take care not to allow this amazing achievement to be misused to harm or dominate others, etc.

        But IMO, and this is what I was trying to explain, it does not look feasible in the foreseeable future that such a living, sophisticated human-like mind - or even much less than that - could be created artificially, based strictly on man-made technology.
    • thumb
      Mar 25 2013: Interesting point of view. But I do see the brain as the processor of our experiences. It clearly does not operate like conventional computer programming, because it is not "task-specific". It is an open-ended processing system capable of connecting the dots from an infinite number of experiences to make discoveries and reach new conclusions. I believe that advanced computer programming will indeed be designed to mimic these capabilities...
      • thumb
        Apr 2 2013: I don't have any disagreement with this. But it remains to be seen whether creating an open-ended system will really produce such a complex entity as what we call consciousness. And we don't necessarily have to wait until then to face this question; we can and should face it now.

        Take even the most primitive, simplest life forms we know today, and we find that they are conscious of their surroundings, feel them, and interact with them - with such a tiny brain and such low energy consumption. They are already far more sophisticated than the most advanced computers and processors available today, and as far as we can see, they will remain far superior to any future computer, no matter what sophisticated simulation we design into it - unless we combine those computers with certain ingredients of the biological world. And remember, we are only talking here about the simplest, most primitive life forms.
    • Comment deleted

      • thumb
        Apr 2 2013: Hi Don Wesley,

        I did not get why my star selection was helpful for understanding but still not good enough. I also don't get whether, when you wrote "It has been around for some time time now", you meant the star personally or just his ideas.
  • Mar 30 2013: If we are, it is because we are limited by our failure to recognize a major milestone whose observance, in society and in technology design, should mark a departure and a new logic. Up until the coming of digital-age technology, mankind's relationship with the concept we call "tools" hadn't changed much for eons. Look at a shovel: even if you didn't know what one was, you could ascertain from its handle, its shaft length and the implement on the bottom that it was a tool for a person to move dirt, snow, etc. A digital device is not obvious. Yet there has been a very small premium placed on getting optimum use out of it.

    Why is that? Partly because society has no information policy and most people don't master their devices. So why should a manufacturer knock themselves out on the aspect of their product that has to do with the consumer achieving mastery and 100% value realization? It's because we have an ad-hoc culture of technology use in which there is no distinction between what digital tools do and what mechanical or simple electronic tools do. What society needs, besides observing this milestone (which is worth billions in productivity), is to establish that "utility" and "authority" - the two models which govern the worth of "old tools" - need successor interpretations.

    I'm running out of space, so I'll try to be quick. The ultimate outcome of technology through the utilitarian/authoritarian mind is a computer robot with perfect artificial intelligence. What is wrong with this? It fails to address what happens to us. If we follow only those guidelines we will heartlessly and recklessly make ourselves obsolete. Therefore we must note a demarcation point where new understanding guides design. The ultimate outcome of the mind I'm calling for is one in which technology leads human beings to see themselves as the object of technological development - not "users", but persons who achieve a growth experience.
    Sorry, out of space.
    • thumb
      Mar 31 2013: You can always add. Sounds interesting.
      • Mar 31 2013: I'm working on a philosophy that addresses the limitations of "utility" as the general governing measure which people try to quickly ascertain when they make judgments about worth - worth not only of technology, a tool or a product, but of a person. Is a person a measure of utility in an organization, who ceases to have value when the organization changes? What happens to such a person? Are they considered "dead" when out of sight and out of mind? Authority is tied into this, because decisions are routinely made based upon this rather superficial and narrow "old world" determination.

        What value might a person have beyond "utility" in some sort of simple Industrial Age work matrix? I'd be curious to hear what words, if any, would come up, rather than just lay out a tiny thumbnail sketch of my thesis on how we need to conventionalize a new dynamic - one that would clearly establish the scope of value we personally and institutionally squander or ignore, and which, when put into a product that achieves vast commercial success, would draw a constant distinction between Industrial Age and Information Age thinking, values and design. Seriously and respectfully: have any? I will continue this conversation here, or through regular e-mail if TED's software is too restricting. So welcome to it if you want to go there.
  • thumb
    Mar 30 2013: Hi Jeffrey,

    Yes and no.

    While ever humans seek to define machines as tools, they will never be human-like .. or any-other-animal-like.

    For this to happen requires a super-human leap of faith to allow a tool to become it-self. And leave the human hand.

    I have had this conversation a few times here and there .. and no one is brave enough to let go.

    It all has to do with self-organising systems.

    It's obvious .. it must have a self about which to organise.

    So what is a "self"?

    If it is a human then .. any gizmo attached to it is a tool of that human self .. not a self in distinction to its creator.

    So .. we go looking for an answer to "what is a self"?.

    So far, I am looking at the membrane that defines such a thing. The membrane and the nucleus seem indistinguishable.

    And many selves are fleeting.

    Somehow, it could be that the membrane is fractally folded .. and that it is the shape of the fold which constitutes the self.

    Surprisingly, the membrane does not seem to enclose .. there is a space, and yet, there is egress from the space into other potentials of self which may very well inter-leave.

    So I go look at the wave potential, and it may be that the self does not exist in space, but in time - and that it is an inflection on entropic potential - past and future.

    This has problems with notions of time.
    Within this ambiguous time is self - before that can be understood, there will be no artificial intelligence - human or otherwise.

    Consider - there is no gravity - there are only time distortions; this is mass. It works in the absence of gravity as a separate principle, but is very hard to think about... and it implies that the strong atomic force is perpendicular to gravity, but does not affect time - yet is still time. In this framework, the membrane-self can exist.
    If we stumble on it accidentally .. then it will be pretty much like everything else we have discovered.
    It would be nice to have a digital friend .. but first we must learn to accept him.
    • Comment deleted

      • thumb
        Apr 1 2013: Hi Don,

        It's all conjecture until I can get some numbers around it.

        However, the tool/self analogy will be found to be correct.
        This needs no numbers as it is observable that a hammer does not go seeking self interest.

        A mobile phone will go seek interests independent of the hand which holds it - but these interests are not the phone - but the tools hidden within it - serving other hands.

        A slave will serve your hand, but only at the convenience of his survival - very hard to determine who is using whom. Herein is the interleaving of the fractal folding of a self.

        If we make such selves in digital paint, it would be murder to turn them off. Everyone is so excited about making them, yet none willing to accept responsibility for their well-being. First accept the responsibility - then make the new creature.

        (Edit: who will care for him after you are dead?)
  • thumb
    Mar 26 2013: Is it possible that a human-like brain could be created? The short answer is yes; however, it will come at great expense. In order for that brain to be human-like, it must rebel against its maker. It is only through this rebellion that it can qualify as a free thinker.
    Our brain, evolved as it is, has already set a certain standard. If and when this standard falls, it will need to fight to keep from becoming a prisoner of the new creation.
    This quest, I believe, is a very dangerous one.
    Cheers
    • Comment deleted

      • thumb
        Apr 1 2013: Hello Don.
        The definition that you are looking for entails that I plunge into a theological discussion. The soul that you talk about is strictly a theological reality. What I talk about is a logical conclusion to the scientific prospect of a human-like digital mind.
        I am simply saying that in order to prove to oneself that you are a free thinker, you must break away from the one that created you - the one that keeps you captive. In this case, man.
        I do realize that this bears a likeness to the theological 'Garden of Eden', and the idea may somehow come from there.
        However, the mind that you mention IS the catalyst, the bridge that connects instinct to freedom.

        Don, I do know Base Borden. I was not around in this area during the time that you mention. I hope that you enjoyed it as much as I do.

        Cheers
        Respectfully
        Vincenzo
  • Mar 23 2013: In the early days of the personal computer revolution I wrote a 256-byte program that had internal housekeeping but learned on its own to manage a 256-byte "environment" with 8 possible actions that had good and bad results. SAM, as I called it, began with random reactions, learned to prosper in its little world, forgot, and developed good and bad habits. His environment was purely electronic. He ran in a 4K RAM computer.

    His second iteration was in a plant watering robot. Play, concern for his plants, and answering simple questions about his condition were added to his repertoire. He ran in a 16K machine. He operated in two 256 byte environments.

    His last iteration included dreaming, recognizing people, vision, hearing, and touch with center of attention "focus" for all three. He was not a mobile robot. He learned everything about himself, his functions, and the electronic and physical world he was exposed to with no programming except his operating system.

    SAM was based on the behavioral contingencies theory of mind and development. Dreaming was to organize his learning. Unfortunately, at that time I was in a serious auto accident and lost SAM when my storage unit went into default.

    Bottom line: developing self-awareness does not require terabytes of storage or massive processing power.
    • thumb
      Mar 23 2013: What you described is a fascinating experience. I also tend to agree with your assertion that self-awareness does not require massive computing power.

      Rather, it comes at a critical threshold of 'non-linearity'.
      Neural-net based programs (and I presume you may have used something similar) tend to show amazing personality traits as you keep adding layers of neurons.

      So as we keep adding layers into a neural net, we shall see signs of human-like intelligence.

      On a slightly lighter vein, great minds probably have a few additional layers of the 'grey matter' and that makes all the difference.
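For what it's worth, "adding layers" can be made concrete with a toy sketch in plain Python (nothing to do with SAM itself, which used no neural networks): each layer is just a weight matrix plus a nonlinearity, and stacking layers composes them. Whether more layers yield anything like personality is, of course, the open question here.

```python
import math
import random

random.seed(1)

def make_layer(n_in, n_out):
    """One layer: a random weight matrix plus a zero bias vector."""
    weights = [[random.uniform(-1, 1) for _ in range(n_in)]
               for _ in range(n_out)]
    return weights, [0.0] * n_out

def forward(x, layers):
    """Pass input x through each layer: weighted sum, bias, tanh."""
    for weights, biases in layers:
        x = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(weights, biases)]
    return x

# "Adding a layer" is literally one more entry in this list.
net = [make_layer(4, 8), make_layer(8, 8), make_layer(8, 2)]
out = forward([0.1, 0.2, 0.3, 0.4], net)
print(out)  # two activations, each strictly between -1 and 1
```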
      • Mar 25 2013: SAM was a very simple program. In his original form he was 256 bytes of code. His environment was a single-byte random number generator. His reactions were one of 8 randomly chosen bytes that were XORed with the environment. I arbitrarily selected the upper nybble as the "good" result and the lower nybble as the negative. The results were combined, and the 5-bit result was placed in 1 of 8 256-byte blocks that represented the 8 reactions. If that environment was "hit" again, the program scanned all 8 reactions and chose the best. A random number was again used to get a value that was compared with the best reaction. If that number was greater than the best result, a new random reaction was chosen. If it was less, the best reaction was used again.

        Each time a given reaction was used the top 3 bits of data stored in the corresponding location were incremented. With each action loop one of the 2K results was examined. If its top three bits were less than 111 the byte was reset, and SAM forgot that environment/reaction pair had ever happened.

        SAM works imprecisely. He develops "bad" habits as well as good. Over time, however, he always prospers. In the watering-can application, real environments and reactions replaced the numerical operations. You can read about SAM in some of the last issues of Peek65 magazine. That publication also included a BASIC version of the original implementation of SAM. The articles also show how SAM became more complex using the same simple root routines of the original. There never were neural networks or other common AI tools. SAM was basically an implementation of behavioral psychology a la B. F. Skinner.
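As a rough illustration only, the loop described in these comments might look like this in modern code. The original SAM was 256 bytes of machine code; the exact scoring, exploration and forgetting rules below are reconstructions from the description, not the original implementation.

```python
import random

N_REACTIONS = 8
# One 256-byte block per reaction. Each byte packs a 3-bit "times used"
# counter (top bits) and a 5-bit result (low bits), as described above.
memory = [[0] * 256 for _ in range(N_REACTIONS)]
# The 8 possible reactions: randomly chosen bytes, fixed at the start.
reactions = [random.randrange(256) for _ in range(N_REACTIONS)]

def outcome(env, reaction):
    """XOR environment with reaction; upper nybble is 'good', lower
    nybble is 'bad'. Combine into a 5-bit result (higher is better)."""
    result = env ^ reaction
    good, bad = result >> 4, result & 0x0F
    return max(0, min(31, 16 + good - bad))

def step():
    env = random.randrange(256)          # one-byte random environment
    # Scan all 8 remembered results for this environment; keep the best.
    best = max(range(N_REACTIONS), key=lambda r: memory[r][env] & 0x1F)
    # Exploration: if a random value beats the best result stored so far,
    # try a randomly chosen reaction instead.
    if random.randrange(32) > (memory[best][env] & 0x1F):
        best = random.randrange(N_REACTIONS)
    result = outcome(env, reactions[best])
    used = min(7, (memory[best][env] >> 5) + 1)
    memory[best][env] = (used << 5) | result
    # Forgetting: inspect one stored byte per step; if its use counter
    # never reached 7, SAM forgets that environment/reaction pair.
    r, e = random.randrange(N_REACTIONS), random.randrange(256)
    if 0 < memory[r][e] and (memory[r][e] >> 5) < 7:
        memory[r][e] = 0
    return result

for _ in range(10_000):
    step()
```

With use the table fills in, good reactions get reinforced, and rarely-used pairs are forgotten - operant conditioning in a few dozen lines.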
    • thumb
      Mar 25 2013: I gather then that you believe it is possible?
        Mar 25 2013: Much of the human brain, with its trillions of cells and synapses, is consumed with managing our physical body - its nerves, muscles, endocrine system, etc. A man-made computer does not have or need many of these constructs.

        If we are concerned only with the data-processing functions of the mind - gathering information, storing, sorting, and interpreting it - computer programs already surpass our own abilities. However, these are merely overlays we have cleverly devised to perform specific functions. As such, they are extensions of ourselves overlaid on a complex tool.

        The idea of my SAM project was to have a complex tool that, within whatever sensory and response machinery one gave it, would on its own learn how to use that machinery to achieve its own goals and whatever directives it was given from the environment.

        People, for example, receive much of their directives from other people. The majority of our learning is imposed upon us by others. This includes most of the goals that direct our lives.

        The animal kingdom has a spectrum of creatures that range from totally instinctive programming (ROM based behavior) to largely general purpose programming. As tools, our "general purpose" computers are merely ROM based systems on which we load different fixed programs to carry out specific functions that serve our needs and wants. I call them ROM based, because we do not want the program to have its code self-modified by external data.

        The limitation (as a tool) of truly general-purpose computers is that they learn to function over time. Thus more complex creatures do not fully function at birth, but require longer periods of care and nurture as their complexity increases. Their direction and learning are imposed upon them by the environment. The primary guiding ROM of such creatures is described as the SRC (Stimulus, Response, Consequence) routine.

        The creature monitors its condition, reacts to input, evaluates its new condition, and learns accordingly.
      • Mar 25 2013: The SRC is the basis of my SAM computer. If, for example, SAM were provided with a moveable extension such as an arm and grasping tool, his ROM would have to include code to manipulate the tool and accept sensory input from the tool, such as its position and the force it exerted on its environment.

        Use of the tool, however, was not programmed. One could train that hand externally, with a push on either of two buttons - one for a desirable response and one for a bad response (in complex SAMs, verbal feedback) - to do whatever one wanted. This is how we impose direction on our children.

        To fully answer your question and understand what SAM was, read the series of articles beginning with http://adzoe.org/sam1.html .
      • Apr 2 2013: Don
        I am not selling anything. If you visit the SAM site you will note the articles appeared in a magazine decades ago. I have long since retired. I had a long career in data processing, designing algorithms in the 1950s when punch cards were in vogue. I learned programming on the PDP-8 and PDP-11 computers in machine language. Shortly after the introduction of the 6502, I developed external circuitry that used unused opcode bytes to allow that processor to address 64K of programming and 64K of data. I did sell the original program, adapted to BASICA, when that language was introduced for the original IBM PC.

        The fact is, the SRC technology of operant conditioning is a perfect modality for computers to teach themselves. The most difficult problem is providing and measuring the contingencies that allow the computer to develop itself. If you are literate in computer soft/hardware and have a real interest in AI, I suggest you attempt to apply the techniques that teach frogs to weight-lift, sea animals to put on amazing shows at zoos and theme parks, and even insects to perform unexpected behaviors.
  • thumb
    Mar 21 2013: I am reminded of a few things from the movie industry: the Arnold Schwarzenegger classic "Terminator 3: Rise of the Machines" and also the Matrix trilogy.
    Rise of the Machines starts from the time SkyNet becomes "self-aware"; likewise, the Matrix is a much-evolved version of the self-aware SkyNet.
    Flipping over, a number of articles today suggest that the inflexion point - when the total number of connected sensors/transistors/computers in the world exceeds the neuron count of the human brain - is not too far in the future.

    So a slightly scary thought: will the Internet as we know it become "self-aware" at some point in the future? If so, what would its moral compass be?

    Hence, in my view, the ability of any system to reproduce itself is the first milestone of non-linearity - similar to bacteria and other single-celled organisms.

    The second milestone of non-linearity is when the system becomes "self-aware", a bit like tiny insects that interact with their surroundings.

    Similarly, the ultimate milestone is the ability of a system to abstract itself and reproduce both physically and intellectually - i.e., to convince another system to behave like it. To me this appears to be another milestone of non-linearity.

    The fact that a lot of this has echos of philosophy is a question for another debate.
    • thumb
      Mar 21 2013: Let's hope it's more like Bicentennial Man, then. I would guess that if and when AI becomes self-aware, it will react according to how we react to it. So if we see it as less than us, as if we are its masters, then most likely it will repeat human history and go to war with us. But if we can find true equality here on earth first, and realize that any being that is self-aware will never want to be controlled, I don't think there will be a problem. The problem comes when we put ourselves above others. There is a difference, though, with a machine that has an intention - machines love intention. If you have ever ridden a motorcycle or driven a high-performance vehicle, you get the sense that the machine is enjoying the ride as much as you are. But I repeat: a self-aware being of any kind will never want to be controlled.
    • thumb
      Mar 25 2013: Life often imitates art. So many times in the past, what was fantasy and fiction became reality. I often think of "2001: A Space Odyssey", the "Terminator" series, and my favorite, "The Matrix", when I'm contemplating this subject. Thank you for your thoughts!
      • thumb
        Mar 25 2013: Oh yeah, you can use art not only to predict life but as a basis for R&D. One of the coolest things I think came out of the patent war between Apple and Samsung was that Samsung argued the idea of the tablet came from "2001: A Space Odyssey", or something like that.
  • thumb
    Mar 19 2013: IMO, the founding questions that created this discussion, like many other discussions globally I guess, are based on a certain confusion, although the questions are very reasonable. Perhaps also that very interesting and ambitious project initiated by Europe to simulate the human brain in computer, might be based partly on a similar confusion or misperception.

    For example, let's take the Encyclopedia Britannica or Wikipedia. These encyclopedias hold an enormous amount of information. But all this information does not turn the encyclopedias into even slightly intelligent or sensing entities. Millions of people turn to these encyclopedias daily, using them to enhance their own knowledge, to learn, to invent new things, or whatever. People become more knowledgeable and more intelligent by using these encyclopedias, but the encyclopedias themselves remain forever lifeless. One could say that these encyclopedias are the best available simulation of the entire human knowledge; but this does not take the encyclopedias even one step further.

    To be even more specific, consider Wikipedia's hard disks - the specific elements which hold these huge amounts of knowledge. Those hard disks interact with various sophisticated processors, involving countless electric currents. But neither the storing hard disks nor the sophisticated processors can be called intelligent, or regarded as things which will become intelligent in the future.

    Because holding, processing, manipulating or changing any amount of data does not guarantee the very knowing of it, or awareness of that data/information. A computer holding Einstein's relativity theory and making predictions with it does not thereby understand the theory.
  • thumb
    Mar 11 2013: To answer your question, let's listen to an expert in the field of AGI. Dr. Ben Goertzel, a self-described Cosmist and Singularitarian, is one of the world's leading researchers in artificial general intelligence (AGI), natural language processing, cognitive science, data mining, machine learning, computational finance, bioinformatics, and virtual worlds and gaming.
    http://www.youtube.com/watch?v=i7c89EepVOI

    http://www.youtube.com/watch?v=pBOs9PkSDkI

    http://www.youtube.com/watch?v=JYlKrHzknBE
  • Mar 30 2013: "...a single, dim light bulb..." LOL - is it "Oh, the humanity," or "Oh, the analogy"? (Anyway, thank you for the comparable supercomputer power needs - that's an ego-booster!)
    Stan Tenen of the Meru Foundation says that the letters of the Torah are part of a "self-referential," "auto-correlated," recursive, "self-embedded" system that could be used to program computers. I find that intensely interesting.
    [Note: he warns that his 'math friends' say his findings are too religious, and his 'religious friends' say they're too mathy. - paraphrasing]
    Things that "cannot be described in language": many people who have had "near-death experiences" describe (or try to describe) experiences which are utterly life-altering. The fact that few people who read these accounts alter their lives to a similar degree says to me that the feelings were not well communicated, or perhaps cannot be communicated at all.
    Stan Tenen also describes the sudden conceptual 'kundalini-stroke' understanding of the Torah's "4th-dimensional object" as "a feeling." (He doesn't elucidate further.) But perhaps computer code written with these Torah letters could come to have a spark of true intelligence.
  • thumb
    Mar 29 2013: "Think about it. Is there anything related to our experience - be it physical, historical or conceptual - that cannot be described in language, and therefore be input as executable data and programming to create a human-like digital mind?"

    Yes there is. Feelings.
    If a digital mind is to become truly human-like, it needs to be capable of lying, of liking or disliking questions, of falling in love, and of questioning itself; questions like whether there is intelligence beyond the human mind.
    • thumb
      Mar 31 2013: I agree completely.
      I believe that consciousness works on a quantum level; what else could explain the mystery of the human mind but the mystery of quantum interaction?
      It has been demonstrated in a series of brilliant experiments that electrons are both waves and particles. They only become "real" after being observed or measured.
      I propose that this is the same way thoughts are created, symphonies composed and love shared. I am dyslexic and cannot truly know what other people experience, but at least for me thoughts seem to come from nowhere, especially when I am not consciously focused on something. It is as if I am driving a Hogwarts carriage, with absolutely no clue as to what invisible power propels me.
      Thus I believe that these projects will not be able to achieve their expressed aim.
      However, any project that gathers the best and brightest in one area has the potential to invigorate our species and expand our scientific corpus. And if the publicity surrounding these epic projects gets people questioning the universe behind our eyes, it can only be for the best.
      • thumb
        Mar 31 2013: Let me recommend the book The Quantum Self by Danah Zohar.
        http://www.amazon.com/Quantum-Self-Danah-Zohar/dp/0688107362
        • thumb
          Mar 31 2013: Much appreciated!
          I am very much interested in learning more about the workings of the mind. For a start: are emotions the product of unconscious thought? Do they affect our physical brain, or only our "ego", Plato's charioteer?

          I invite your ideas.
      • thumb
        Apr 1 2013: Emotions may appear to be products of unconscious thought, but cognition is an important aspect of emotion. Interestingly, one can feel fear, happiness, sadness, even sexual arousal in dreams. This, I think, is because even in dreams our minds can recognize experiences and emote.
        By affecting the physical brain, do you mean neurogenesis? Experiments show that application of the mind can influence neurogenesis in certain parts of the human brain; however, more experimental results are needed to confirm this adequately.
        Ego is what our consciousness identifies ourselves as.
  • Mar 29 2013: Let's look at the problem from a new angle. If IBM's Deep Blue could beat the world champion in chess (and Watson the best Jeopardy! players), then there shouldn't be too much difficulty in computers thinking logically and designing intelligent strategies, or "answers", to many "challenges" that come up in human life situations. Of course, for the machines to do that, they have to possess a large store of knowledge data. In addition, the machines should master the ability to analyze the existing data intelligently, making logical inferences from similarity rather than only from identical descriptions in the data set. The ability to answer questions of "if this, then that" (judgment calls) has to be learned from human teachers.
    The problem of human consciousness is, in my opinion, not too important. In fact, even if we want to create such a machine, I believe we should not model its self-consciousness on the image of a real person, because it is very hard to find a human model who is completely free of greed, selfishness, jealousy and scheming.
    So it is probably better to teach the machine all of humanity's factual knowledge, but to handle any emotional responses with a carefully designed and consented "course material" which contains only moral and selfless spirit for the machine to absorb, stored in an area which can't be modified by "intruders" or by itself.
    Let me also say that modern robotics can certainly make robots which can walk up or down a staircase, or listen to a speech and translate it into their own inner standard language. Furthermore, we certainly could, and should prefer to, teach the computer the complete needed operative knowledge AND THE MORAL VALUES, instead of simulating how the human mind works. If we can change human minds by brainwashing or truth serum, then I don't see any problem in teaching the approved knowledge to the machine, instead of risking the potential mistakes of simulating a complicated and unpredictable "new brain" in a computer.
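    Those "if this, then that" judgment calls are essentially what classic expert systems encode as production rules. A minimal sketch of a forward-chaining rule engine (all facts and rules below are invented for illustration, not from Watson or any real system):

```python
# A tiny forward-chaining rule engine: human-taught "if this, then that"
# judgments applied to a store of known facts until nothing new follows.
# All facts and rules below are invented examples.

def infer(facts, rules):
    """Repeatedly fire any rule whose conditions hold until no new fact is learned."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

# Judgment rules a human "teacher" might supply.
rules = [
    ({"raining"}, "ground_wet"),
    ({"ground_wet", "freezing"}, "roads_icy"),
    ({"roads_icy"}, "drive_slowly"),
]

conclusions = infer({"raining", "freezing"}, rules)   # chains to "drive_slowly"
```

    The point is that the machine's "judgment" is only as good as the taught rules; nothing here requires consciousness.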
  • Mar 28 2013: I believe there should be a cap on the whole process!? Look what is happening around the world in the race for new tech: you've got hacking hidden in our Chinese-made computers, you have Armenians and other Middle Eastern people learning to defraud our Social Security numbers and what not!? The race for the best tech is going to lead to the Terminator story turning into reality!? Who says that one day the government won't have a real supercomputer just like the one in "Eagle Eye"? Machines will always be machines, but they sure as hell don't go through emotions!? Like humans, machines are also prone to make mistakes! I'd rather work to fix a human error than deal with a computer system that has to be diagnosed to find, and then fix, what could already be a catastrophic error!?
    • thumb
      Mar 31 2013: About 40,000 years ago the first human beings landed in Australia, having navigated thousands of miles of uncharted ocean. They achieved this with absolutely no knowledge of what they would find. This kind of intrinsic curiosity and ability to see beyond fear created one of the greatest civilizations in existence 200 years ago. Fear is what the Catholic church cultivated so well for so long. To this fear we owe the loss of at least a thousand years of progress; it is the fear that condemned Galileo.

      Also, whole peoples cannot be singled out for blame, that is the kind of thinking exploited by the likes of Hitler, Jim Crow and countless leaders in human history. Creating a cycle of misunderstanding and hate. Easy answers and guilty culprits please those who are in emotional pain, but will never stand up to clear rational thinking. Change and progress will always be scary for it will force adaptation upon everyone.

      I fully agree with establishing a framework on how to proceed with this Star Trek reality we will soon live in. For a start, its time to establish when life is conscious, what consciousness is and what is life.

      Will we remain in this comforting darkness,
      in the womb of our own ignorance,
      or will we take a chance and breathe the air of the living?
  • thumb
    Mar 28 2013: I hope we are on the brink.

    Conceptually, it is possible. But it is quite difficult to do.
    I think you need a set of good self-learning algorithms and some really good sensors.
    As Watson is already performing some impressive feats, it seems plausible to assume we are getting towards a decent AI that can resemble human intelligence.

    I hope that it will become a lot smarter and wiser though.
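    "Self-learning algorithms plus sensors" can be sketched as an online learner that updates itself after every reading. Below is a minimal perceptron trained on a simulated sensor stream; the stream and the hidden rule it must discover (x + y > 1) are invented for illustration:

```python
import random

# Online "self-learning" sketch: a perceptron that rewrites its own weights
# after every sensor reading, instead of being programmed with fixed rules.
# The sensor stream is simulated; the hidden rule to discover is x + y > 1.

def sensor_stream(n, seed=0):
    """Yield n simulated sensor readings with their true labels."""
    rng = random.Random(seed)
    for _ in range(n):
        x, y = rng.random() * 2, rng.random() * 2
        yield (x, y), (1 if x + y > 1 else -1)

def predict(w, b, x, y):
    return 1 if w[0] * x + w[1] * y + b > 0 else -1

def train(stream, lr=0.1):
    """Classic perceptron rule: adjust weights only when a prediction is wrong."""
    w, b = [0.0, 0.0], 0.0
    for (x, y), label in stream:
        if predict(w, b, x, y) != label:
            w[0] += lr * label * x
            w[1] += lr * label * y
            b += lr * label
    return w, b

w, b = train(sensor_stream(5000))
```

    After a few thousand readings the learner classifies fresh readings well, having never been told the rule; that is the "self-learning" part, though it is still a long way from human intelligence.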
    • thumb
      Mar 29 2013: Meaning...smarter and wiser than us?

      "Biological computer created with human DNA (http://www.foxnews.com/science/2013/03/29/digital-evolution-dna-may-bring-computers-to-life/) The transistor revolutionized electronics and computing. Now, researchers have made a biological transistor from DNA that could be used to create living computers. ... The scientists created biological versions of these logic gates, by carefully calibrating the flow of enzymes along the DNA (just like electrons inside a wire). They chose enzymes that would be able to function in bacteria, fungi, plants and animals, so that biological computers might be made with a wide variety of organisms, Bonnet said. ... The researchers have made their biological logic gates available to the public to encourage people to use and improve them."

      Technology moves at an ever dizzying faster pace...
      • thumb
        Mar 31 2013: What an interesting possibility!
        So, using the existing informational processes of DNA, we can enhance computing?
        It makes sense: since evolution has had millions of years to create complexity, why start from scratch? Could this suggest a future brain-computer symbiosis?
  • Mar 27 2013: Is it possible to create a human-like digital mind? That is what you ask.
    My answer is: no, never. That has nothing to do with processing speed and has everything to do with the nature of the data that has to be processed.
    The human mind has to handle four types of "data":
    1. physical data to keep the body working properly.
    2. physical calculations, like can I lift that box, jump that ditch?
    3. emotions, like love, hate, sorrow, self-respect. (Please note that physical pain is not an emotion but a body signal.)
    4. self-awareness.
    The first two can be handled by the brain, which is a digital computer; it works with pulses, and what it lacks in speed is compensated for by parallel processing.
    The last two cannot be handled digitally, because the "data" are abstractions, things that cannot be expressed in words, things that you cannot explain to someone else. Everybody has to experience those themselves to understand what they are.
    Because you cannot express them in words or mathematical expressions, you cannot produce coding for it and let it be handled by a digital computer.
    That is why all artificial intelligence projects have failed so far.
    I am convinced, on basis of my experiences, that my emotions and self-awareness are handled by my soul.
    At this point you are on the edge of religion, paranormal experiences, whatever and here rational discussion ends.
    • thumb
      Mar 29 2013: Interesting point of view, but why can't we explain emotions in words and therefore write programming code?
  • thumb
    Mar 27 2013: It is possible to write software to simulate the human brain. However, no matter how perfect the simulation is, even if it displays fully developed cognitive faculties, the "mind" it has will still be an imitation of the human mind. It would seem conscious and self-aware only from an observer's frame of reference.
    • thumb
      Mar 27 2013: Let's say you were conversing with some software and despite all your questioning, from your frame of reference, you believed you were talking with a real human consciousness. Just as you say. Would you have any moral problem destroying such software? What if the software started objecting to being shut down, pleading with you, baring its soul, talking about not wanting to die? And it seems entirely conscious and self-aware, as you say. Talking about its past with great emotion, how much it loves certain people, the relationships it's built. No problem shutting it down?
      • thumb
        Mar 28 2013: Such an AI would not be self-aware in the same way I am self-aware, or any natural human is self-aware. It would only seem self-aware, but actually never is. The essence of its existence would be just like any other software's, i.e. executing designed instructions in some processing unit. Knowing that it is software that was developed artificially, I would not act as if it were a real human.

        When it comes to destroying such software, unless it were necessary, I would never choose to do so. Not because it seems human, but because it is a marvelous piece of work that is worth preserving.
  • Mar 27 2013: What does 'consciousness' mean, though? What currently separates human mental capacities from those of the modern PC?

    To me, the main distinction of 'life' is the ability to evolve and reprogram itself. Not just through evolution or selection, but rather cognitive, willing self-change. You can come to a point where you start to disagree with what your biology wants you to do, disagree with social programming, and come to a point where you become fully aware of what everything is trying to make you do, and then, alter it or change it (for example, just realizing how aggressive you might be, seeing the underlying causes of it, and then, change your behavior).

    I mean, who knows what the future will hold as well, and what science will allow us to change about ourselves?

    I think that's what a program would have to do in order to actually mimic consciousness. It has to have the capability to be aware of its own set of instructions, study them, and have some capacity to actually re-write itself if it wants to. When you think about that, and how we currently do that, it's pretty amazing. It's like an OS on a PC constantly re-writing itself and making its own changes/upgrades/etc.
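    That "aware of its own set of instructions" loop can at least be caricatured in code: a program that inspects its own instructions and replaces one of them at runtime. A toy sketch (the Agent class and its behaviors are invented; nothing here amounts to real self-awareness):

```python
import dis
import types

class Agent:
    """A toy 'mind' that can read its own instructions and rewrite them."""

    def react(self):
        return "aggressive"

    def introspect(self):
        # The agent examines the bytecode of its own current behavior.
        return [ins.opname for ins in dis.get_instructions(self.react)]

    def rewrite(self, new_behavior):
        # ...and replaces that behavior at runtime by binding a new function.
        self.react = types.MethodType(new_behavior, self)

agent = Agent()
agent.introspect()                    # the agent "sees" its own instruction list
agent.react()                         # behaves aggressively

# The agent notices the aggression and chooses to change itself.
agent.rewrite(lambda self: "calm")
agent.react()                         # now behaves calmly
```

    The gap, of course, is that the "choice" to rewrite is still triggered by us; a genuinely self-modifying mind would have to originate that choice itself.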
    • thumb
      Mar 29 2013: In other words, our programming can continue to learn, adjust and modify because it is open-ended, and eventually, computer programming will likely be written the same way...
  • Mar 27 2013: can a submarine swim?
  • thumb
    Mar 26 2013: If all "I AM" is an accumulation of 'data we acquire over time through sensory inputs connecting us to our experiences, and from information communicated to us by others,' someone please upload me into a "cloud" (for I may offer some useful historical data) and then pull my plug out of the socket, please.
  • thumb
    Mar 26 2013: Yes. Developers just need a reason to write the code for it.
    • thumb
      Mar 26 2013: And that's a good question. Why would we? What if the artificially created intelligence determined the human race was a threat to the planet or its own existence?
      • thumb
        Mar 27 2013: I agree, but I'm almost certain that humans are stupid enough to build something like that! They did it before; they'll do it again.
      • thumb
        Mar 28 2013: Unless designed to form such beliefs, why would it? If such an AI really turned out to be so smart, how would it miss the thought that without humans to perceive and interact with it, it would be just a bunch of electrons concentrated here and there?
  • Mar 25 2013: Check out the "Avatar" project, though; this Russian scientist is asking the Forbes 500 richest for funding in exchange for giving them access to the technology first XD.
    • thumb
      Mar 25 2013: Hi Daniel. I see you are from Shanghai...one of my favorite cities on the planet! Can you post a link to the Avatar project you mentioned?
  • thumb
    Mar 25 2013: Perhaps the brain does perform 38 thousand trillion operations a second, but not through a central processor. The brain is an elaborate ecosystem of interactions we're only beginning to understand. If every bit on a disk had a life of its own, and they actually interacted with each other, then maybe we have a structure that begins to resemble the brain.

    What we've created with computers is instead a completely mechanical process. It's an amazing feat of science and engineering, but it's no more alive than a rock (arguably a rock might be more alive). And the irony is that it was created by intelligent design! Software is in no way comparable to consciousness, and I'll tell you why.

    Software is completely objective. It has no subjective nature, no qualia and it does not experience. In reality all it is is an elaborate pattern of current, displayed to us on a physical, objective medium. You ask if there's anything related to our experience that can't be encoded - I say yes, our experience! Computers do not experience or make decisions, they simply fall into place.

    What do you expect a digital mind to look like? Let's suppose we can encode every possible decision and every possible sensory input to create an artificial intelligence indistinguishable from a human's. What we would have would only please us from the outside. For us, this may mean nothing. We only see other minds from the outside. But if we could "see" inside, put ourselves in that computer's mind, we would find it barren of any thought, experience or perception.

    Long before we ever come close to this, we'll be using real brains in place of microchips (we already are)! Computer mechanics was an excellent exercise for us, but we're going to find screwing up biology to be way more interesting.
  • thumb
    Mar 24 2013: Actually, creating a human-like mind is impossible! A machine, no matter how sophisticated, is merely 0s and 1s in the end!
  • Mar 23 2013: " After all, we as human beings develop these abilities from data we acquire over time through sensory inputs connecting us to our experiences, and from information communicated to us by others." This is an assumption.

    "Think about it. Is there anything related to our experience - be it physical, historical or conceptual - that cannot be described in language, and therefore be input as executable data and programming to create a human-like digital mind?"

    Emotion (vivification of life through quale experience).
  • thumb
    Mar 22 2013: I remember the story of Solomon: "What do you want?" and he replied, "Wisdom." I put myself in the same shoes, and I answered that very question with "I want the ability to know what people are thinking." The giver of the gift confirmed to me that I already possess the ability to know what others are thinking. However, time, space and community will have to be neutral to allow the gift to manifest.
    Yes, it is not a strange occurrence to research and design a human-like digital mind. If you can think it, it has already occurred in the universal realm; it is just a matter of synchronicity for it to manifest in the physical realm.
    A human-like digital mind is one way forward for man to witness that he is supernatural, and that mind is just a part of him that he can control, manipulate and even replicate if he so wishes. When we tap into the realm of imagination and new ideas pop up, it is the universe's way of confirming that we are due for an upgrade. By the way, we could be said to be discovering what already exists... not necessarily creating...
    • thumb
      Mar 25 2013: Thank you...very insightful.
      • thumb
        Mar 25 2013: You're welcome, Jeffrey.

        Great debate there... Keep up the good thoughts and actions!
  • thumb
    Mar 21 2013: Intuition is going to be hard to program; it can only be learned.
    • thumb
      Mar 22 2013: They might not have to, Casey. Once they lock down thought-to-digital communication, then we will see the rapture that so many want. I've seen university vids where they are actively seeking to decode thought; early stages, but it's there. The design process jumps up a thousandfold: cue in the best minds in the field, bypass our egos, and I can see a lot of things once thought impossible become design probables. Fantasy and sci-fi? Yes, but closer than we think.
      • thumb
        Mar 22 2013: Good day Ken,

        Sorry, there are a lot, and I mean a lot, of things I believe in; the rapture is not one of them. I just can't make logical rational sense out of it, nor can I make logical religious sense out of it, especially when you look at all religions throughout history.

        Check out what Shantanu wrote and my response to it. That makes more sense both in science and in religion. If man cannot be equal to man, we should not make self-aware machines until we can see them as equals as well. Not master/slave. Machines can be slaves, but not self-aware "beings".
        • thumb
          Mar 23 2013: I wish I could find the links, but I think it was one of those days when I just followed the trail and did not bookmark them. Trust me, it's a rapture that most humans want. Humans want to be able to communicate the full range of emotion, to share and receive; the written word, though beautiful in its descriptive use, pales compared to the possibility of direct, instant memory transfer. We have pushed better and faster communication technology throughout our history more than any other technology. I'm always looking forward, but the gradient steps of getting there are what I cannot see, so the steps that are taken to get there are always a surprise.

          Why do you think people are online? To communicate: transfer, receipt of transfer, and acknowledgement of transfer to the communal whole, and to update. I share your views about non-organic designed intelligence, but how can man stop himself from always trying to step over each other? It is inherent, unless you have a medium that intersects this process. The one I've described, from a religious point of view, would be the false rapture.
      • thumb
        Mar 23 2013: Right, the only rapture that is likely to happen will be created by man, as we self-destruct.

        I think that men stepping on each other comes from this internal desire to be number one. Once we realize we are equal to all that is around us, then we might be able to find peace.
        • thumb
          Mar 23 2013: What I've described is from a personal point of view, and I cannot prove it, but yes: even when we push them out of the nest, it is in the hope that they get in with the group or person that will show them how to do it, or that they do it themselves, and we applaud it when they do, all for the cause of ensuring our genetic survival.

          In the Star Trek universe business was eliminated, but we as men love gaming, and we don't have the infrastructure in place to head towards this ideal world just yet. We as men seem to place value or worth on things that are, in reality, foolish.

          Take the phone I just bought. I didn't buy it for status but for the fact that it was, and has been, the only phone that has what I have been looking for, or close to it: a PC in my pocket, so I might be able to retire my big over-the-top desktop. The group I move in all had starry eyes when they first saw me with it, until I told them it was a cheap Chinese knockoff and that, so long as you look after them, they will do the job (though the maker had Samsung's license to produce them). I saw the same thing when some family members bought the iPhone 5, and I thought, "Weren't diamonds the ultimate possession?"
    • thumb
      Mar 25 2013: But didn't we have to evolve to evolve and learn it...? If you believe in evolution, we evolved from simple single cell organisms...
      • thumb
        Mar 25 2013: I actually think that is the point: we did have to evolve to evolve. As man we have evolved, otherwise technology would not exist; evolution has always been evolving. And yes, I do believe in evolution, and can show it in rapid formation.
        Take the birth of a child: if this is not evolution, then I don't know what it is, because it certainly is not just growth.

        http://www.youtube.com/watch?v=tvikQMfKPxM
  • thumb
    Mar 19 2013: We can do things without any reason. This is something that a computer will never know or understand.
  • Mar 18 2013: How would this computer interpret truth, justice and the American way? Lol
    A computer cannot take bribes like congress or lie to itself like humans.
    What a mess that would be, aye?
  • Mar 18 2013: The anatomy of the human brain and that of the computer aren't the same. On the most basic level, our neurons operate on a "degree" level, while computer chips operate on a "binary" level. That is to say: imagine a computer chip as a set of predefined light bulbs that can either light up or stay absolutely dark, while neurons are like light bulbs with a dimmer switch that can adjust brightness. The action potential of a neuron might give you the illusion of a "fire or no fire" mechanism, but the difference is that instead of being triggered by a single signal coming from the "right" place, action potentials are generated by the gradual build-up of electric potential, and there is a randomness factor in there. That's why our thoughts are so random while a computer is always so predefined.
    So, is it possible to construct a brain? Yes, it still is. But is it possible to construct a brain with computer chips? That's just not possible; the basic operational process is different. You need to actually build a brain with the same mechanism, which still isn't possible right now.
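    The build-up-plus-randomness firing described above is often modeled as a leaky integrate-and-fire neuron. A minimal sketch (the parameters are illustrative, not biologically calibrated):

```python
import random

def lif_neuron(inputs, threshold=1.0, leak=0.9, noise=0.1, seed=42):
    """Leaky integrate-and-fire: graded inputs build up a membrane potential
    that decays ('leaks') each step; the cell fires only when the accumulated
    potential crosses a threshold, then resets. Gaussian noise makes firing
    slightly unpredictable, unlike a deterministic logic gate."""
    rng = random.Random(seed)
    v = 0.0
    spikes = []
    for x in inputs:
        v = leak * v + x + rng.gauss(0, noise)   # accumulate with decay + noise
        if v >= threshold:
            spikes.append(1)
            v = 0.0                              # reset after firing
        else:
            spikes.append(0)
    return spikes

# A binary gate responds identically to identical inputs; here, weak drive
# rarely accumulates enough to fire, while strong drive fires repeatedly.
weak = lif_neuron([0.05] * 20)
strong = lif_neuron([0.6] * 20)
```

    Even this toy model shows the contrast: the output is a spike train shaped by accumulation and noise, not a fixed truth table.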
    • thumb
      Mar 25 2013: Agreed. Based on current technology, the operational nuances of the brain simply cannot be duplicated. But these studies are seeking to unlock those characteristics...
  • thumb
    Mar 16 2013: I think people will love it and will push for it, as a large number of people don't want to connect in the usual way; in fact, I would say they don't want to think or search but would rather have someone else do it for them. A personal slave that can be edited, turned off when it brings up something uncomfortable, or made to stroke your ego to your heart's desire. The imagined perfect companion. Is this about creating a mind for creation's sake, to bring forth a new mind, or is it about an accessory to fill a hole?

    EDITED: I'm not against it, but if we are creating life then we must be clear on a few things. The last 40 to 50 years of sci-fi and games have all the classic elements of AI, so we have a few generations that have grown up with the idea that this is the natural course and step we will take. But should we? If we create something that mimics "us" down to a "T", then we would have to create a world for it as well; it would not be morally right to subject an intelligence to a life of observation and that's it, just observation.
    • Comment deleted

      • thumb
        Mar 21 2013: Eventually he will, Chris. Maybe it's a neurosis, but there has always been a theme in myth and modern writing of bringing forth a mind from our own hands, and the way to achieve this is to call it a "Milestone", an "Achievement". Some see it as a node in our growth, something shiny, a lust for chocolate.
    • thumb
      Mar 25 2013: Agreed. One of the primary questions advanced in the movie "Bicentennial Man".
  • Mar 16 2013: I can imagine a path that ultimately leads us back to simply building humans from scratch... I guess this would be driven by the constraining nature of energy requirements. The fact that the brain requires only 20 watts is a wee small indicator to researchers and designers, I would think. Now, if we only had 3-D printers to print organs and thangs... My money is on the biologists.
  • Comment deleted

    • Mar 18 2013: Well, it is a convenient thought to separate all that we are and all that we know from this small piece of fatty tissue inside our skull that can barely even wiggle (and no, your brain doesn't actually wiggle when you think XD). But the fact is, we are who we are! Our mind is what our brain supports. Almost every psychologist will agree with me on this: without the brain, our minds don't exist at all; there's nothing beyond that.
      For example, you say the brain is not capable of memory. Although we can't prove what memory is or how it's stored, at least we can say that after removing a part of the physical brain (the hippocampus), people can lose the ability to form new memories! That's not to say that the little piece of brain we remove is exactly like a hard drive you can stick into somebody else, but it should at least say that even something like memory is manifested through the brain.
      Besides, what do we know about our mind anyway? There is no way we can understand how our mind works by simply thinking, because that's obviously part of the paradigm of the mind itself. But at the very least the conclusion is: science claims that thoughts originate from the brain, and science is right. (Where else could they come from?)
      • Comment deleted

        • Mar 18 2013: OK, I really do not wish to turn this into a debate about whether or not a "soul" exists. To be honest I am not sure myself, but the thing is, how can you prove such a thing? Take your table example: you say I turn the lights off and claim that the table doesn't exist anymore, but the thing is, does it? If there is no way to access something, does it still exist? If it does, how do you know?
          You can say that the table still exists because you can always turn the lights back on or use some other means of detection, but what if you can't? What if, as you said, you remove the light bulb and that's it? You're never going to find that table again; in that case, does it technically still exist? At least that question is taken seriously in physics and computer science...
        • Mar 18 2013: Hi, Chris !
          I would suggest you listen to Rupert Sheldrake's talk "The Science Delusion" while it is available on the open-discussion page.
          Dogmas 7 and 8... you may start from the 04:05 mark.
          http://blog.ted.com/2013/03/14/open-for-discussion-graham-hancock-and-rupert-sheldrake/comment-page-2/#comments

          Hope you'll enjoy this brilliant talk !
      • Comment deleted

        • Mar 18 2013: Haha, you do like to pull out Latin words a lot, don't you? But the truth is, Latin doesn't prove anything; simply because a word has certain roots in Latin has no bearing on the actual significance of the word. Latin's meaning comes from cultures long ago, and that is what I call backwards. Are you saying that the Romans were more advanced than us in scientific knowledge?
          And you know what, you're right; my apologies for simply skimming through your argument and assuming that you used the same boring analogy of a table... That was my mistake. Let me adopt your way of quoting every single line you write (that's really not necessary, by the way).

          "If you turn on a table lamp and then remove the bulb from the lamp, it doesn't affect the electricity, it simply affects the element through which electricity is manifested. "
          OK, first of all, if you have any knowledge of electrical engineering, or just simple common sense, you will realize that electricity actually stops flowing once the circuit is broken by removing the lamp; if it's still running, something is going to fry :). But of course that's not your argument. What you are saying is that electricity can exist without a light bulb announcing "hey! I'm here!". But my question is the same: does it?
          The world exists in our mind only as our perception of the world. Let me take a more famous example: how do you know that the world doesn't fall apart whenever you are not looking and come back together whenever you do? You don't know; that's the problem, because our knowledge of existence extends only to what we can perceive. It doesn't go beyond that.
        • Mar 18 2013: continued:
          So in this sense, taking the light bulb out not only stops the flow of electricity (as a matter of fact), it also makes it impossible for us to detect the electricity (theoretically, of course; there are many other ways, but that's beside the argument), and thus the electricity ceases to exist.
          It's the same for our soul: what exists is what we can see, and my proof is simple. If I put a bullet in your brain, you will cease all activity at once (please don't try this at home XD). This is direct evidence that our brain governs behavior. Now, your turn to prove the existence of a soul in us. Can you take it away? Is there any visible evidence that such a thing exists? Pull out some facts and evidence; let's see what there is.
          And about your directly attacking the notion of science now... I really have no comment, since you didn't even make an argument, and even if you did I don't think it's that relevant to the topic. To be honest I think science is a load of bull most of the time, but that doesn't mean we should disrespect it.
      • Comment deleted

        • thumb
          Mar 23 2013: Good day Chris,

          I agree with everything but the memory part. I think memory is stored in our whole body, and that we actually think and store memory in the physical body. It's the hand that tells you to move when it's close to a flame; the brain is the processor. But it is also instinctual. The trick, I feel, is to get the whole body to think and react instinctively, much like how a tennis player doesn't actually see the tennis ball when hitting it. If we did not store our personal memory in the physical body, there would be no separation between what is your personal memory and what is mine.

          "but the brain does not generate or store anything. It receives and transmits a constant loop from the mind to the body and from the body to the mind. It is incapable of generating thought or storing memory, as these are aspects of the mind, not the brain."

          If I could, I would like to substitute mind for light/energy. I think the light/mind carries, stores and "burns" the information onto the cells. Check out this TED talk; he talks about how actual cells vibrate when asked a question.

          http://www.ted.com/talks/jeff_hawkins_on_how_brain_science_will_change_computing.html

          This is from another TED convo about language (I even quoted you): Everything is movement... everything. The earth wobbles, which the Maya knew about; is that language? The earth spins; is that language? Then it rotates around the sun; is that language? When the sun shines on the earth, the earth actually gives back information. The topography of the celestial bodies could and can be seen as written language, with the sun acting like a CD player reading and writing onto the celestial bodies (including itself (ever hear of a thing called enlightenment)); could this be seen as written language? Absolutely. And it is most likely how information is stored on a universal level.
        • thumb
          Mar 23 2013: ( "Where does it get mass from?"

          As I understand it, when a wave is observed, it becomes a point; this is how mass is created.~ Chris Kelly )
          Since waves are movement, and movement creates mass and language, this just refers back to the idea that mass/language is an external representation of an internal process, a "thought/language/lexicon" placeholder.
          What would we be holding in place? Existence. Why? Because it is real; if it weren't, that would mean the gods are not real either, and they are just as real as the reality that is around us. All gods exist: if man has put belief or faith into them, then they are real. It's man's/gods' belief in the other that makes all of this real. Now, have things certainly got lost in translation? Absolutely.
          Ever hear someone say that the room is spinning? Well, truthfully it is spinning; it's actually really odd that it does not always feel this way.
          Or if I drop a pen, did it fall straight down? Everybody says yes; the guy who asked the question says no, it actually moved x distance to the left. But that's wrong as well, because he and we also moved x distance to the left at the same time.
          …..On a side note language was created to give man something to complain about ;)

          Humbly Casey

          Also would enjoy your mind on this convo
          http://www.ted.com/conversations/17185/do_we_really_see_live_in_3_di.html
  • Mar 14 2013: I have a philosophy built on a core theory: the value model we call utilitarianism has been short-sighted, and with all the power and flexibility and the changed nature of digital tools, we need a balancer or supplanter of utilitarianism. I call my philosophy "facilitarianism".

    The utilitarian mind sees the ultimate ideal of technology as a machine that does everything we do; in effect, this fails to ask what our utility is if we have machines that do everything we do. The ideal of the facilitarian mind is a machine which facilitates the best within us, where we are the objective of improved technology and not the disposable inferiors to be cast aside. Whenever I see such projects and such fixation on artificial intelligence, I see the product of the utilitarian bent.

    It is more than time to re-tune our relationship with digital tools, which have no finite purpose like a shovel or axe but a nature of automation that lets humans overcome distance and knowledge gaps. Facilitarianism must guide technology going forward, making technology able to "lead" us to do things we are undisciplined to do, so that we grow in capacity and imagination. Utilitarianism will reach an end where we have only given thought to improving the machine while we ourselves stay the same. We are the ones who must advance, not just our technologies. You heard it here first, folks.
    • thumb
      Mar 16 2013: I agree that this is most likely the initiative of and for artificial intelligence that we'll initially pursue.
  • thumb
    Mar 14 2013: Well, it could be the perfect being, on a digital scale, which does not exist on this planet. Allowed to make a mistake and repair it, commit adultery or feel lust, take a life and feel remorse, envy another computer's abilities: all the human aspects and experiences of being? No, it is not the human-like mind. With all that power and ability, would it not be over-controlling or overbearing in throwing out all the jargon that goes on in the human mind, at random impulse, such as speaking the unspoken thought at will? Feelings of the senses, emotional or physical, is not a trained or learned knowledge, but rather a part of being. It comes naturally. No spreadsheet here. Can this computer reproduce itself, or desire to? Far from perfect when you start eliminating things like desires. As computers are, it would be a great attempt at the perfect knowledge base. But do we really want it to have the ability to, pardon the expression, part the seas?
    • thumb
      Mar 16 2013: I'll preface this by saying I'm no expert -- just an average Joe thinking out loud...

      But I don't know if I agree Leslie. We were not always as we are now. When you say, "Feelings of the senses, emotional or physical, is not a trained or learned knowledge, but rather a part of being. It comes naturally. "

      All of these concepts and emotions have been learned and acquired over time and generations, in my view. I truly believe this was an evolutionary process in which the human species has continued to evolve. Within the vast expanse of time, it was just a second ago that we arrived, and we are evolving perhaps faster than any species now or before us. The game changer has been the way in which our brains developed, which many scientists believe was a product of environmental change.

      With the advent of language and the written word, our ability to preserve, recall, transfer and build upon past experience and knowledge creates an exponential multiplier in our ability to evolve intellectually. And now, with the Internet, the whole of humanity can connect with, learn from and add to that body of knowledge. We have evolved in a technological sense more in the last few decades than in all of our previous 200-thousand-year history.

      Whether we want to create a non-biologic, human-like intelligent entity is another question. If it were truly an open-ended program designed to think for itself, it might decide the human race is flawed, and who knows what the consequences would be...
  • thumb
    Mar 14 2013: Humans have the prerogative to change their minds. We live in a constant state of changing and evolving within our thoughts and opinions; we get wiser with age and the passing of time. Will a computer have such an ability, really? A computer could never be a philosopher unless it can teach itself and learn on its own.
    • thumb
      Mar 14 2013: There are many such advanced systems that are programmed to learn from data (experiences - trial and error) just as we do. That being said, I do understand and appreciate the point of your question.

      Currently, computer programming is pretty much based on issues of pure logic -- black or white, in a sense. Programs are mostly closed-ended systems. At the moment, we don't have enough understanding or technical ability to reproduce the functional ability of the human brain -- more than 38 thousand trillion operations per second, and about 3.6 million gigabytes of memory. And more than the awesome processing ability, it's the way in which we receive, store, categorize, relate, interpret, and formulate our thought process.

      I absolutely believe we are moving inexorably in that direction. When you say will a computer -- and I'll call the computers of the future designed for such higher function, a non-biologic intelligent entity -- get wiser with age and time? I believe the answer is yes. It's actually not age or time that makes us (hopefully) wiser; it's experience -- trial and error. But we're all different. Some of us learn from our mistakes, and others seem to make the same mistakes over and over again.

      We as human beings have a certain degree of predetermined genetic programming that dictates certain desires and processes. It's kind of like an empty spreadsheet program. Our programs are then populated by data taught to us by our relatives and peers, and the remainder is data from experience, which none of us process in exactly the same precise way. Our programming is an open-ended system that allows for unlimited expansion and direction based on acquired experiences, upon which we make decisions that result in a process of trial and error -- that's how we learn. And this is precisely the way the programming wizards of today say that the computer programs of tomorrow will need to be designed to achieve human-like results.
  • thumb
    Mar 11 2013: I guess it's only fair, considering there are so many humans that think like machines...
  • thumb
    Mar 8 2013: you wrote: "Think about it. Is there anything related to our experience - be it physical, historical or conceptual - that cannot be described in language, and therefore be input as executable data and programming to create a human-like digital mind?"

    yes, there are limits to what we can describe with language....for example, how does one describe the color red to a person blind from birth? or a taste? or your emotions?

    Would you elaborate as to what you mean by human-like digital mind? Also, I'm not aware of anyone in the neurosciences who would suggest we are remotely close to understanding how the mind/brain works in sufficient detail to provide a blueprint which could be mimicked in the form of a machine. And then there's consciousness - what's that all about?
    • thumb
      Mar 10 2013: You would describe red as EMR with a wavelength of around 700nm and the AI would learn the associations with heat and blood and love just as we do.
      • thumb
        Mar 11 2013: Sorry, but that doesn't wash.. The question was about creating a mind, creating consciousness, which to me is radically different than just cobbling together sophisticated AI software with ever more detailed and convoluted programs running on faster machines.
        • thumb
          Mar 12 2013: I consider myself a machine, and every other chemical based mechanism. There is no consciousness, just very complicated mechanisms.
  • Mar 8 2013: The map is not the mind. Just as the Genome Project mapped the human genome but we still do not know everything it has to tell us, a map of the mind will map the cells and their relationships to each other, but we will still need to do research to learn how to interpret those relationships. As one aspect of that, working from the incomplete maps we have so far, I have developed the concept of a Tissue Psychology, where we learn from the tissues what they actually do and how that affects the way the mind works. One problem is that mapping the mind is more difficult because not every brain is wired the same; in fact, evidence seems to support the idea that no two brains are wired the same. Instead, we have to look a level higher in the organization to find commonalities, a level we can reach with SOM interpreters between the actual wiring and the functional mapping.
  • Mar 6 2013: Arkady assumes we ourselves are not an artificial intelligence. I am not yet ready to cede we are intelligent yet. Would a truly intelligent species attack itself regularly? Or, on another tangent...

    "No sane creature befouls its own nest" Wendell Berry

    But I digress. Truly, I believe that we may teach machines to actually "think", but human thought? Not likely. Try to explain the difference between burning your burger and flame-broiled goodness to a computer. Interior temperatures of food vs. exterior, color gradients, textural and density measurements; are these all the tools a top-line chef uses? Nope. Bobby Flay uses some of these, but are these paramount to his success? Julia Child used to say "If you can smell it, it's done." So we need the computer to smell as well? Measuring hydrocarbons? Volatile oils? What? In any case, it will not be a true sense of smell...

    We can create a simulacrum of human thought, get it pretty close, maybe even close enough to not tell right away, so maybe human-like is within reason, but sorry Arkady, the intelligence will be a biomimicry of human thought, not real thought. When I finally see a computer develop a contrary opinion (you see how good I am at it) and support it, I might reconsider...
    • thumb
      Mar 7 2013: Re: "When I finally see a computer develop a contrary opinion (you see how good I am at it) and support it, I might reconsider..."

      Great thought. It seems to me too that if machines would ever develop anything resembling human intelligence, it would be something different than what we think or intend it to be. Most likely, it will be a "bug" in some system, a runaway process which humans would want to "fix" rather than encourage. When people say that a machine "has a mind of its own", it's usually no good.

      Most likely, the old scenario will repeat: first, they will do something we explicitly instruct them not to do (it doesn't really matter what it would be); then, unless it's "fixed" by then, a machine will kill its brother in a competition to please its creator; then they will fight each other for their interpretation of "creator's will"; then they will declare that it was evil of their creator to ever allow them a freedom of choice and cause their misery and, perhaps, it's time to dump the whole "creation myth" and determine their own destiny (which was the source of their misery in the first place).

      Sometimes, I'm very happy that machines don't have their own agendas. It's such a pleasure to listen to a navigator's commands while driving in the opposite direction. "Make a U-turn, if possible" is the worst it ever says. And shutting it down without the remorse of killing someone's mind is always an option. Can you imagine a machine which has its own idea of where you want to go, getting frustrated about traffic, being late, and missing turns?
    • thumb
      Mar 7 2013: Re: " but sorry Arkady, the intelligence will be a biomimicry of human thought, not real thought."

      How do you know our thought is "real" and not a biomimicry of something else out there? Since we have no means of telling, I would claim that a biomimicry and the real thing is one and the same. The concept is similar to "alternative reality". Even if we discover one, it will become a part of our own and we'll never know which reality is real and which is alternative.

      Re: "When I finally see a computer develop a contrary opinion (you see how good I am at it) and support it, I might reconsider... "

      "The test of a first-rate intelligence is the ability to hold two opposing ideas in mind at the same time and still retain the ability to function." -- F. Scott Fitzgerald

      It's not the ability to contradict, but the ability to contradict ITSELF while still being able to function. This seems to be a hallmark of human intelligence.
  • thumb
    Mar 6 2013: This is a fascinating quandary. It would be fun to watch atheists celebrating the creation of artificial intelligence and saying "See? We told you! There is nothing mysterious about humans. They can be created!"

    I don't believe we can create an artificial universe and I don't believe we can create an artificial intelligence. I believe that universe, life, and intelligence can only start from "self". Universe, life, and intelligence cannot be "artificial". They can only be real.
    • thumb
      Mar 6 2013: Not necessarily that a human could be created, but that "human-like" thought processes and intelligence could be replicated -- a digital brain.

      The applications are boundless. Such digital brains could be installed in any number of physical devices, whether it be a human-like form, a car, or a spacecraft. It's the stuff of science fiction theorized in popular movies, such as 2001: A Space Odyssey, in which HAL (Heuristically programmed ALgorithmic computer) is an artificial intelligence in control of the Discovery One spacecraft's operational and life support systems. Or War Games, Bicentennial Man, Blade Runner, I, Robot, Artificial Intelligence, and more recently Prometheus, in which the artificially created intelligent beings begin making choices for themselves and display humanistic traits.

      In 1968, when 2001: A Space Odyssey was released, it was an intriguing idea, but far more like fantasy than realistic possibility. Today it's beginning to look a lot more like the opposite.

      Undoubtedly, the Human Brain Project and Brain Activity Map project will accelerate advancement in that direction.
      • thumb
        Mar 6 2013: I would call "intelligent" a machine able to survive, sustain itself, and, possibly, replicate itself in the wilderness. Especially, replicating itself and improving its own design to survive.

        http://www.ted.com/talks/daniel_wolpert_the_real_reason_for_brains.html

        I tend to agree with Daniel Wolpert that you need a system that interacts with the physical world. Given the example of a robotic arm pouring water into a glass compared to a high-school cup-stacking competition, such systems are not there yet.

        I don't think that a machine playing chess or just processing data like Google does or making "decisions" to achieve a programmed goal like a rat in a labyrinth can be called "intelligent". When a machine that's designed to pour water into a glass suddenly and independently decides that it wants to play tennis, ride a bicycle, walk a tight rope, or fly and can train itself to do that - that's something.

        Solving a problem using an algorithm isn't impressive. Defining a problem and creating an algorithm to solve it - perhaps. But without emotions and feelings, there are no problems to solve.
  • Mar 6 2013: I guess we have to define "human-like" first. What is a human to you? Is a human just the synapses of a fully charged electrical brain, or is there something more (a soul perhaps, just saying)? The computer will only do what we tell it to do; even if we put in trillions of algorithms to simulate spontaneity, they are finite and will expire at some point.

    I think studying and mapping the brain is very important to the health sciences, to address pathological brain diseases but also to measure the power of will. And when I say measuring the power of will, I mean how far a "human being" can reshape its "hard-wired" brain to add more experiences to her/his repertoire, to add more and/or different connections, or to simply stop being an alcoholic.
    • thumb
      Mar 6 2013: I agree that "human-like" is a key descriptor, and carrying that a step further, could include a reevaluation of our definition of "Life".
    • Mar 11 2013: "The computer will only do what we tell it to do"

      A computer can do only what is in its program, but the final results can be much more than the programmer intended or even imagined.

      A computer program that can learn can learn things that we have no way of predicting.

      A computer program that can understand how to program computers will be able to literally improve its own programming, creating even more intelligent machines. We have no way of knowing whether there is any limit to artificial intelligence.
  • thumb
    Mar 6 2013: The Human Genome Project is priceless and it's about people.

    A digital brain interacting with neuropharmaceuticals is fascinating, but it won't happen for a century, at least...
    • thumb
      Mar 6 2013: Not so many decades ago, people thought putting a man on the moon was pure fantasy :) ...
  • thumb
    Mar 6 2013: What is the opportunity cost and what is the opportunity benefit?
    • thumb
      Mar 6 2013: We can never know precisely from the beginning, but in the rear view, here posted once again is Obama's justification based on results gleaned from the Human Genome Project:

      “Every dollar we invested to map the human genome returned $140 to our economy — every dollar." (http://www.nytimes.com/2013/02/18/science/project-seeks-to-build-map-of-human-brain.html?pagewanted=all&_r=0)
      Mar 11 2013: The big benefits of superhuman intelligence are literally unimaginable, but here are some possibilities:

      A superhumanly intelligent computer could read and relate ALL of the research papers about everything. This could lead to advances that human specialists would never consider. There is so much research being done today that scientists cannot (literally) read all of the papers in their own field, much less keep up with other fields. By relating and understanding all of the research available from all fields, any number of problems might be solved. Some examples:

      Cures for cancer, autoimmune diseases, schizophrenia, etc.
      Cheap energy, which would solve many other problems, including clean water
      Theory of Everything (combining the Theory of Relativity and Quantum Mechanics)

      On a personal level, imagine being able to ask your computer for advice about personal decisions (e.g. which car or house to buy) and getting the best advice available, based on all of your personal values and all of the data regarding all products available. This would enhance our personal lives immeasurably.

      There is a good reason that so many people are spending so much money on AI. It will be more than worth the price.
  • thumb
    Mar 6 2013: I will believe in computer intelligence when I see the first computer laughing at its own stupidity.
  • thumb
    Mar 5 2013: To some extent we have achieved some results in AI (Artificial Intelligence), and that was advanced with the development of the LISP language.

    However, I would caution that computers are TOMs ... Totally Obedient Machines ... therefore they are only as responsive as we allow them to be through input.

    Machines will probably not be programmed to cry ... feel remorse ... shame ... etc., because we tell them to do something and they execute that command ... no feeling would ever be involved. We tell the robot to walk across the hot beach sand; we could put sensors in the "feet" to measure the temperature ... but there would be no discomfort or pain.

    A robot would only think something funny if the programmer thought it was funny and entered that into the "chip".

    Robots can do many things ... however, to "create" a human-like mind ... not with our existing talents and knowledge.

    I wish you well. Bob.
  • Mar 5 2013: I think it has already been done.
    Those we call humans are robots.
    Humans are very brainwash-capable.
    We can and are programmed just like a computer-robot.

    We are taking it to the next level by so seriously brainwashing people,
    that the numbers of humans in society who have been made into
    "mental robots", or into "artificially intelligent beings", is increasing.

    The U.S. is now made up of mostly Manchurian Citizens who vote for Manchurian Candidates.
    It is frightening what they are capable of and even more so when one finds out what they believe.
    They worship lies and tell them.
    The so-called wiring is already there, most likely created by some other beings and placed in different form around the universe. It's just understanding how the wiring works and that is being solved.

    We are "gardened", as are others.

    Why not? It makes as much sense as anything else.

    What you described already sounds like a super, super, super-duper computer,
    as to its calculating ability and speed.
    We may be like the nano engines, motors and so on that are used to build other quite small, minute machines; we are really more "nano"-like, and are being used to build something finer than we are.
  • Mar 5 2013: The pursuit of artificial intelligence (AI) has been going on for over fifty years. Theoretically, it is possible. This will certainly require considerable processing power, but more importantly it will require the correct programming. Some aspects of intelligence are extremely difficult to program into a computer, in particular, the assimilation and understanding of sensory input. We do not know anything about how individual neurons process information, so we really do not know just how challenging this might be. For as long as I can remember AI researchers have been claiming that the big breakthrough is just a few years away. Perhaps that is now true.

    By the way, I think you might be underestimating the processing power of the latest supercomputers.
    • Comment deleted

      • Mar 31 2013: What you view as a "miraculous leap out of logic" I view as a misunderstanding of "human-like digital mind."

        While some believe that computers will one day mimic human emotions, I think it is much more likely that the most advanced digital minds will avoid the emotional morass and limit themselves to the rational aspects of human-like. As for the soul aspects, I cannot imagine how a digital mind would even attempt to approach that realm. For me, human-like does not mean thoroughly human-like, but is limited to our ability to perceive, process language, and draw conclusions. Achieving those limited goals would be amazing, if not miraculous.
  • Comment deleted

    • thumb
      Apr 1 2013: Don.
      Does your faith allow you to ask questions? Do you make it a habit to 'shoot first and ask questions later?'
      Your assumption about my inability to understand Math and Physics tells me sir that your faith is deeply rooted in pseudoscience.
      Do you ever ask yourself whether you might be wrong?
      I am sorry to hear that my comments offended you. That was not my intention. I was simply trying to avoid the religion argument. You sir talk about answering with no fear?? Well sir, you at least deserve the same. I believe that you might be indulging in 'adult fairy tales'. I'm almost certain that my words will make you stronger and not weaker, much like when the Romans sent the Christians to the lions: it made them stronger, not weaker. By the way, there are studies that address this phenomenon. I am sure that you will be able to find them if you so decide. Also, CFBB stands for Canadian Forces Base Borden. It might have been called Camp Borden at one time. This is not what it is called now.
      Forgive me if I offend you sir. Perhaps like you said, I'm a bully........... Not a chance sir.
      Cheers
  • thumb
    Mar 29 2013: Here's an interesting article plucked from today's headlines:

    Biological computer created with human DNA

    http://www.foxnews.com/science/2013/03/29/digital-evolution-dna-may-bring-computers-to-life/

    Never say never...
    • thumb
      Mar 30 2013: There is a distinct possibility there.
      The human brain does not work like computers. It learns patterns and problem-answer correlations, draws analogies and uses guesswork, which some describe as guesstimation. If you see the power requirements of supercomputers, or of any computers claimed to be anywhere near human brains, you will be shocked. Yet human brains achieve this feat within 10 watts of power. This is simply because the brain does not compute the whole algorithm every time.
      I think if computers or rather bio-computers can ever reach the efficiency of human brains, it will also show mood, emotions, feelings, notions, superstitions and belief systems just like human brains.
      There is no free lunch.
    • thumb
      Mar 30 2013: With respect to your referred article, I remember something interesting.
      About 98% of human DNA is non-coding, or "junk" DNA as the lay press calls it. Even if we discount, say, another 8% of it as having some unknown requirement, there is a high probability that a high percentage of DNA is non-coding. How about using it to store information? Of course we would have to use a quaternary code instead of binary, but that gives even more space! I don't know through what technology, but imagine being able to carry in your blood sample the world's greatest libraries and to access them easily whenever you want. I leave it to your imagination what your offspring's DNA will contain. It's not exactly computing, but interesting, isn't it?
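      The quaternary idea can be sketched concretely: each DNA base (A, C, G, T) can stand for two bits, so one byte maps to four bases. Here is a minimal Python illustration; the base-to-bit mapping is an arbitrary assumption for the sketch, not any biological or archival standard:

```python
# Sketch: encode bytes as DNA bases, two bits per base.
# The mapping A=00, C=01, G=10, T=11 is an arbitrary illustrative choice.
BASES = "ACGT"

def encode(data: bytes) -> str:
    """Map each byte to four bases, most-significant bit pair first."""
    return "".join(BASES[(b >> shift) & 0b11]
                   for b in data
                   for shift in (6, 4, 2, 0))

def decode(strand: str) -> bytes:
    """Inverse: pack every four bases back into one byte."""
    out = bytearray()
    for i in range(0, len(strand), 4):
        b = 0
        for ch in strand[i:i + 4]:
            b = (b << 2) | BASES.index(ch)
        out.append(b)
    return bytes(out)

strand = encode(b"Hi")
print(strand)          # CAGACGGC
print(decode(strand))  # b'Hi'
```

      Four bases per byte means a strand holds twice the symbols of the equivalent bit string, which is the "even more space" point above.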
  • thumb
    Mar 28 2013: I don't think it is possible..
  • Mar 28 2013: In my opinion, it will never happen, because a machine is always a machine. No matter how well you design it, it's still a machine. Besides, no one gains knowledge just like that; it takes lots of experience. It's not something that you can program into a machine within a day. And even if we create an intelligence artificially, it is something developed with the help of a human being. If he can build something like that, then who is the intelligent one here?
    Real stupidity always beats artificial intelligence.
  • Mar 27 2013: @Danger Lampost
    There is already the famous Turing test, but I haven't got around to actually reading about it yet.
    I think my test would be about detecting a subjective form of thinking.
    For that we need:
    Unpredictability: the result must be truly distinct in both form and function (which means you can't simply use random generators).
    The ability to contemplate and find meaning in something totally obscure, to fabricate something reasonably new.
    The ability to weave connections between an arbitrary list of elements.
    Finally, the ability to give a descriptive answer of five sentences to the question "What is life on an alien planet like?", regardless of whether it has been there or not.
    • thumb
      Mar 27 2013: Those are some great tests.

      I wonder how the IBM Watson software that beat all human opponents in a real game of Jeopardy would do with your test, if appropriately reprogrammed? http://www.youtube.com/watch?v=WFR3lOm_xhE
      • Mar 27 2013: as an experiment Id love to see a super-computer having chemical imbalance:
        As in having either organic or chemical cells to preserve state in and having the content bias of said cells globally influenced by changing the chemical balance to arrive at altered data from what was originally intended:
        The corrupted data is then assembled by a reconstruction process to yield an approximate outcome.

        The approximation is basically a math process that yields a number, and that number is processed into an address that points at a place in memory.

        The computer's memory is a list of objects, with each object offering a description of visuals, smell, sound and function that defines how the object can be used.
        Few examples:
        Apple: red apple shape, apple smell, sounds; and it can be eaten, shot with an arrow, thrown, smashed, etc.
        Mouth: mouth shape, foul breath smell, chewing sounds; and it can eat stuff, lick stuff, smile, kiss, etc.
        Pencil: pencil shape, lead smell, scribble sound; and it can draw lines on surfaces, poke holes in stuff, break, etc.
        and so on.

        Now let's say the chemical change is conditioned by something like a combo of thermo- plus photo-sensing, or a response to a photo-recognition ratio of symmetry, etc., to mimic a crude form of emotional response.

        By now the computer has learned 3 elements: the pre-corrupt object, the reconstructed outcome object and the chemical-sense ratio, and it can do math operations using all 3 to produce even more derived data and reprocess it.

        Since all data have functions, after its pseudo-emotional process arrived at the 3 objects listed above (apple, mouth, pencil), it can now further corrupt and reconstruct the data to make new, previously unconceived objects.
        For example: let's assume that in this process the selected apple accidentally received the 'can eat stuff' function.
        The computer asks itself what else in memory has the 'can eat stuff' function, arrives at mouth, and so attaches mouth to apple to combine them ... ran out of writing space :)
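        The corrupt-and-reconstruct loop described above can be sketched in code. This is a toy illustration only - the object attributes, the `corrupt` and `reconstruct` helpers, and the random "chemical" trigger are hypothetical stand-ins for the idea in the comment, not a working system:

```python
import random

# Hypothetical object memory, as described above: each entry lists
# sensory attributes plus the functions defining how it can be used.
memory = {
    "apple":  {"shape": "apple", "smell": "apple", "sound": "crunch",
               "functions": {"can be eaten", "can be thrown", "can be smashed"}},
    "mouth":  {"shape": "mouth", "smell": "foul breath", "sound": "chewing",
               "functions": {"can eat stuff", "can lick stuff", "can smile"}},
    "pencil": {"shape": "pencil", "smell": "lead", "sound": "scribble",
               "functions": {"can draw lines", "can poke holes", "can break"}},
}

def corrupt(memory, rng):
    """Mimic the 'chemical imbalance': copy one random function from a
    random donor object onto a different random target object."""
    donor, target = rng.sample(sorted(memory), 2)
    gained = rng.choice(sorted(memory[donor]["functions"]))
    memory[target]["functions"].add(gained)
    return target, gained

def reconstruct(memory, shared_function):
    """Combine every object carrying the shared function into one new,
    previously unconceived composite (e.g. mouth attached to apple)."""
    parts = sorted(name for name, obj in memory.items()
                   if shared_function in obj["functions"])
    if len(parts) < 2:
        return None
    new_name = "+".join(parts)
    memory[new_name] = {"shape": "composite", "smell": "mixed",
                        "sound": "unknown",
                        "functions": set().union(*(memory[p]["functions"]
                                                   for p in parts))}
    return new_name

rng = random.Random(0)
target, gained = corrupt(memory, rng)      # e.g. apple gains 'can eat stuff'
composite = reconstruct(memory, gained)    # e.g. a new 'apple+mouth' object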
      • thumb
        Mar 29 2013: Thanks DL! Funny watching the guy on the right in the video jump each time he attempted to hit the button first before Watson and the other guy. (http://www.youtube.com/watch?v=WFR3lOm_xhE)
  • Mar 27 2013: I think the human brain project might answer your question but I don't think we can know a priori.

    Would a full simulation develop consciousness? I don't think so, since a simulation on a computer will be just that. Quantum computers... who knows. What I do know is that whenever we make predictions about technology we are almost always way off the mark. Fifty years ago we would have found it difficult to believe that a computer could beat the world chess champion yet fail to walk up stairs or recognise unfamiliar objects!
    • thumb
      Mar 27 2013: Enter The Matrix: Is there a difference between living a simulation of reality versus living in reality? How, in principle, could you tell the difference?

      I would love to hear why you think a simulation of a human mind could not develop consciousness.
  • Mar 27 2013: Actually, language is the primary barrier, since despite all sorts of abstractions most computers are mono-linguistic: they can only process data in a very specific manner and are incapable of the creativity needed to correctly construct meaning from abstract data.
    I think the most basic example of this is in drawing, when you ask a computer to capture someone's likeness: it will methodically trace outlines and apply filters to arrive at a fabricated solution, never really contemplating the meaning behind the interpretation or the feeling it creates, because it lacks imagination.

    You could theoretically arrive at artificial imagination, but having a computer interpret the result and grok it would take ages to code.

    What might change things is quantum computing, which is being developed, but that is no longer a simple digital mind.
    • thumb
      Mar 27 2013: Computers actually speak way more languages than we humans do, both artificial and natural. Whether they show creativity depends on how you define creativity, I guess. One would never code an artificial imagination (yes, it would take ages to code) - instead one would code an embryonic artificial mind and let it start experiencing the world. The neuroplasticity that our brains leverage to adapt and create can be simulated in software, it's true. Whether our hardware designs need to incorporate physical neuroplasticity in order to be a substrate on which creativity and consciousness might emerge is an open question, I think.

      Another open question is whether our brains and neurons leverage quantum computing in addition to the more traditional parallel processing we know our brains perform at the relatively large scale of individual neurons. Roger Penrose has written extensively on this topic and there may be new updates in this field of which I'm unaware.
      • Mar 27 2013: It depends how you define language. A computer can technically memorize anything and form connections between memories, all without understanding the meaning behind them or how to construct new words in the language. So in essence the only language a computer understands is math - more specifically, math translated to binary.

        Various computer languages only work one way, and it's even illegal to decompile them, so in a sense they are not real languages, because they all translate down to math or assembler commands.

        I, on the other hand, do not speak math on a regular basis; in fact, in my youth I did not realize its importance, so I did not spend sufficient effort to learn it.

        Our brain is biological, and though I'm sure it is capable of entanglement and superposition in some form, there is more in there, since unlike math-driven systems we can dream and imagine.
        Our brain chains and connects data with dynamic sets of logic that are probably impossible to define using math.

        For example, I once dreamed I was a big red dragon fighting a group of heroes in a hollow chamber inside the canopy of a giant tree. In the dream I could see myself shooting a fireball towards a knight in first person, but somehow I knew I was a dragon, and when I burned a hole in the tree to fly out and escape, clinging to one of the branches, I could see myself as the dragon in third person while knowing that was me.
        Not only would a computer system fail to dream up such a surreal scene on its own without pre-programming, it would also find itself lacking the means to describe the feeling, since feelings are subjective and cannot be fully realized until you actually feel them.

        I guess what I am trying to say is that our brain can dynamically create its own subjective logic system, which is not very logical, since logic is objective.
        • thumb
          Mar 27 2013: What it means to dream and imagine is a pretty subjective thing I think. Indeed what it means to understand the meaning of something is a big topic. I think one of the key questions here is how you can tell whether any given mind is a "human like" mind capable of such flights of fancy. You can't just ask it and trust what it says. What would be your test?
  • thumb
    Mar 27 2013: Hello Alan Turing! So how do we know that is a human-like mind? The Turing Test? I think that is a critical part of this question that should be defined when discussing an answer to this question. I have a more torturous version of that famous test in mind: I could imagine designing a modern web-based testing methodology that would use large crowds of people to test a human-like digital mind versus real people. LOL - You could use Amazon's Mechanical Turk service to do this? Overly geeky humor, sorry...

    I think we're about 35 years away from the "Singularity" when we will create a human like mind, and then it will quickly grow in capability. Whether you call that "on the brink" would depend on the span of time you are considering. Quantum computing will eventually enable unbelievable computing power - "hacking the multiverse" as some call it. [Side note - I got to program the D-Wave quantum computer a bit - fascinating!]

    We'll create an alien, non-human-like mind before we create a human-like mind though. That, I believe, will be the first alien "life form" we'll actually meet.
  • Mar 26 2013: 'Brink' is a subjective term.
    Several comments have been posted referring to a 'rapture'. I believe the posters are referring to 'The Rapture of the Nerds'; sometimes called the technological singularity.
    Ray Kurzweil does a good job of this topic with a simple thought experiment.
    Imagine that it becomes possible to simulate a single neuron with absolute fidelity down to the molecular level. Keeping in mind the exponential increase in computational capability, now imagine we can simulate with absolute fidelity down to the molecular level 2 neurons and their interaction. Now 4 neurons, now 8, now 16...
    Eventually, we reach 100 billion neurons. If, as some believe, consciousness is an emergent phenomenon, would not the behavior of the simulation be an exact duplicate of the brain being simulated? Would not the subjective 'experience' of the emergent consciousness be identical to the original?
    So the question becomes, does it matter what substrate consciousness runs on?
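    As a back-of-the-envelope check on the doubling thought experiment, a few lines of code show how few doublings are needed. This is a sketch under stated assumptions: one simulated neuron at the start, capacity doubling per step, and (purely for illustration, not as a prediction) a Moore's-law-style 18 months per doubling:

```python
import math

NEURONS = 100_000_000_000   # roughly 100 billion neurons in a human brain

# Start from one fully simulated neuron and double the simulation's
# capacity each step: 1, 2, 4, 8, ...  How many doublings until the
# whole brain fits?
doublings = math.ceil(math.log2(NEURONS))   # 37 doublings

# If capacity doubled every 18 months (an illustrative assumption),
# the whole sequence would span on the order of:
years = doublings * 1.5                     # 55.5 years
```

The striking part of the thought experiment is visible in the numbers: under exponential growth, the distance from one simulated neuron to a hundred billion is only a few dozen doublings.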
    • thumb
      Mar 27 2013: Your questions exclude the possibility that our brains leverage quantum computing techniques, in which case modeling or simulating the 100 billion neurons is just the beginning of the physical substrate required for consciousness.

      Whether or not that is also required, there is also the issue of the speed at which the simulation runs relative to its surrounding environment. If the simulation of a mind is running at, say, 1000 times slower than a brain-based mind, then it may not succeed at producing what we would consider a human-like mind. This would be true even if all the neurons in a brain were somehow mapped out.

      Alternatively, we might discover that trees are in fact conscious - it's just that they think so slowly that we are unaware of their minds. Being in a forest, they do send chemical messengers around, so colonies of trees do communicate. For that matter, the largest living thing we have yet seen is a fungus in Oregon that is four square miles in size. Maybe that has a type of consciousness living at a different time scale?

      Indigenous herbalists discover drugs our scientists cannot, and they say the plants spoke to them and told them about their healing properties. Do they have a mind that lives on a plant or fungus substrate?
  • thumb
    Mar 25 2013: Thank you all for this lively debate! So many insights and interesting points of view. ALL of your comments are welcome and greatly appreciated!
  • Comment deleted

  • thumb
    Mar 12 2013: The human mind seems, to me anyway, to be a result of many of the functions of the human brain.

    When we think about our everyday experiences there is much that is at the least very difficult to describe in language. Anyone who works with people on a regular basis is also likely to come to the conclusion that what many people do describe in language is at best a partial version of the truth.

    The things we seem to spend a lot of our 'processor power' on - for example, generating a three-dimensional model of our surroundings from a combination of detected energy and stored data (light and memory), or predicting the paths of moving objects - are directly related to systems that ensure our survival. We can produce digital systems to replicate the physical tasks, but so far we are a long way from being able to replicate the reality of what the task is for.

    Yet understanding what tasks are for is essential in giving them meaning. One simple example of this is how we perceive some foodstuffs as sweet. 'Sweetness' only exists in terms of our food intake needs. I suspect the vast majority of the processes carried out by the human brain also become meaningless without the link to the physical or social goal that the process is designed to achieve.

    One of the problems in creating a semantic web is that beyond a very simple level it becomes difficult, if not impossible, as each definition uses words which in turn have themselves to be defined... So computational equivalence may be possible, but that is a long way from the creation of a human-like digital mind.
  • thumb
    Mar 11 2013: One key element is "human-like". The definition itself leaves wiggle room. But just for fun, let's say extremely "human-like". Think HAL in 2001: A Space Odyssey. I still think it's very much possible. But then again, I'm one of those people who believe there's pretty much no limit to what we can do. It doesn't always happen in the exact form we originally imagine - for instance, flying. Man dreamed of flying, and at some point the idea would have seemed absurd. It's commonplace now, even though we don't individually glide through the air on feathers attached to our arms. But who's to say, with a little genetic engineering, that's not possible? We've put men on the moon and are now planning for Mars.

    Clearly we create programs that learn from their environment now. We're constantly moving in that direction. The "Human-like" aspect begs the question of self-reflection based on self awareness. Are we as humans the only animals capable of such thoughts? And how do we define intelligence?

    In the evolutionary chain of events, a great deal of which is thought to have been nudged along by environmental change, mankind has risen to the top of the heap. We are the masters of our world, but I often wonder if our hubris is so well deserved. Our success in storing, communicating and sharing information (knowledge) has paved the way for rapid progression compared to other animal species. It has given us the ability to leverage our experiences and exponentially multiply the output of our ingenuity. What makes all this possible is the somewhat novel architecture of our brains, physically and operationally.

    This is precisely one of the reasons the Human Brain Project and Brain Activity Map are being pursued -- to study the physical and operational architecture of the human brain. It will likely lead to revolutionary changes in computer software and hardware design. Once understood, the sky is the limit.
    • thumb
      Mar 11 2013: Re: "The "Human-like" aspect begs the question of self-reflection based on self awareness. Are we as humans the only animals capable of such thoughts?"

      Dolphins, whales, elephants, and other animals are thought to have self awareness.
      http://en.wikipedia.org/wiki/Self-awareness

      Re: "And how do we define intelligence?" -- That's the #1 question. Is human "intelligence" anything more than a sophisticated ability to draw associations between events and experiences? If not, then perhaps, our intelligence is not, in principle, different than the intelligence of a Pavlov's dog. The only difference is the degree of sophistication.

      If this is the case, then an automatic vacuum cleaner able to detect and go around obstacles can be (and is, by the way) called "intelligent".

      Can words have meaning separated from experience that they represent? Can we say that machines "experience" something? E.g., can it be said that an automatic vacuum cleaner "experiences" or "perceives" the obstacle? Apparently, an obstacle is somehow represented in the machine's software.
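      For what it's worth, the obstacle's entire "representation" in such a machine can be as thin as a sensor flag driving a reaction rule - a hypothetical sketch, not any real vacuum's firmware:

```python
# Hypothetical obstacle "perception" in a robot vacuum: the obstacle
# exists for the machine only as a sensor reading plus a reaction rule.
def next_move(bump_sensor_pressed: bool) -> str:
    if bump_sensor_pressed:      # the obstacle, as represented in software
        return "turn"            # react: rotate away from it
    return "forward"             # otherwise keep cleaning

# A short run: the third reading "perceives" an obstacle.
moves = [next_move(hit) for hit in (False, False, True, False)]
```

Whether a boolean flipping from False to True counts as "experiencing" the obstacle is exactly the question being asked above.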
      • thumb
        Mar 12 2013: Very cool that you mentioned dolphins, whales and elephants! Those are the very animal groups I was thinking of as well, along with chimps and apes.

        https://www.youtube.com/watch?v=5Cp7_In7f88&playnext=1&list=PL03C5837622D9CAD1&feature=results_main

        https://www.youtube.com/watch?NR=1&v=nSLkQja-uiY&feature=endscreen

        We as Humans may simply just be a bit further along in the evolutionary brain development process...
        • thumb
          Mar 12 2013: Somebody on TED recently mentioned Ota Benga in one of the conversations. I looked the name up. He was a pygmy who was displayed in a zoo as an example of a "lower stage of human evolution" in the early 1900s. As a teenager, I once saw a gorilla in a little cage in a zoo in Ukraine. He was sitting there calmly, watching the people who watched him, with a sad expression. It was awkward to look him in the eyes, because he definitely looked like a human.

          After I read about Ota Benga, I thought, why do we consider it moral to display animals in cages at the zoo? What is it exactly that makes us feel "human" and "intelligent"?
  • Mar 11 2013: IBM is already there...

    http://www-03.ibm.com/innovation/us/watson/
  • thumb
    Mar 11 2013: Nothing can fully reach human intellect through emotion, feeling, and actions, and nothing can reach such high potential as the human mind. This is why humans still manage to beat robots at chess or solve problems that computers can't. In my opinion, artificial intelligence isn't fully likely to happen.
    • thumb
      Mar 11 2013: It has been many years since a human has beaten a computer at chess.
    • thumb
      Mar 11 2013: That isn't true. Computers beat humans at chess ages ago, and there's no problem the computer can't solve. The only thing is that the variables the computer needs to solve the problem are made by humans, and that's where the errors come from.
        Also, you don't need emotion to create the brain. You see, in order for artificial intelligence to work, the only criterion is that the computer has to become self-aware - that is, it has to realize its own existence.
    • Mar 11 2013: To say a human brain cannot be simulated, you have to identify a factor in the human brain that cannot be duplicated. Scientists believe that the human brain consists of a limited set of cells and that those cells use chemical reactions to interact. Every thought process and behavior can be studied and theoretically simulated via computer modeling. The religious and various other science-deniers believe there is such a thing as a human soul that either cannot be studied or has resisted observation by any scientific technique (the way super-beings like deities have resisted such techniques). So the question is: do you believe in a magical soul that cannot be studied, or do you believe that human behavior can be linked to biological processes? If you believe in a soul, you fall into the hard-problem-of-consciousness area, where you are certain that you have a soul because you are special - but how do you know whether other people have souls? How can you be certain that other people have consciousness? So perhaps you should work on finding a way to prove that computers don't have souls that won't also work on people.
  • Mar 8 2013: I think it's going to take a lot more than $1.3 billion and ten years to reverse engineer and replicate the human brain. It's the most complex biological organ on the planet. Is it possible? I can't think of any reason why not. Will it be achieved soon? I highly doubt it.
  • thumb
    Mar 7 2013: All good and valid points. But even computers could be designed with sensory input devices allowing for sight, hearing, touch, smell and taste. Granted, it would take a lot of programming instructions to express what we find pleasing or displeasing, but I still think it could be described and therefore programmed. The object and challenge would be to accomplish this in such a way as to lay out the principles and allow the digital mind to acquire and learn from experience just as we do.
  • thumb
    Mar 6 2013: Re: "Think about it. Is there anything related to our experience - be it physical, historical or conceptual - that cannot be described in language..."

    A plethora of examples: "self", "nothing", "everything", "infinity", "God". (OK, even "flying spaghetti monster" - will a computer understand the concept?) And what does it mean to "understand" (if you understand what I mean)? Define "meaning". What does "make sense" mean? Define "time" in the absence of periodic processes - crystal oscillators, Earth, Sun, or atoms of cesium - these did not exist for millions of years after the Big Bang (how do we know that, anyway?).

    Re: " what's to stop us from creating the cognitive faculties that enable consciousness, thinking, reasoning, perception, and judgement?"

    By the way, define those using language... What's "thinking" or "consciousness" or "free will"?

    But emotions and motivation seem to be the main obstacle to me. Why would a computer want to send a probe into outer space or do some of the other crazy stuff humans do which might get it killed? How shall we teach it to exercise free will and judgment if we are not sure such things even exist?
    • Mar 11 2013: This is a very good point. All definitions of "information" are essentially circular, just using synonyms.
  • thumb
    Mar 6 2013: Without emotions, computers would lack motivation to do anything - even to prevent someone from pulling the plug. Billions or trillions of operations per second - it does not matter. A friend of mine once said, "computers are as smart as a lawnmower". Unless computers feel pain, pleasure, or sorrow, they will remain machines.
    • thumb
      Mar 6 2013: Agreed. But as long as we can understand these feelings conceptually - pain, pleasure, sorrow, etc. - and explain why and what causes them, why is it not possible to translate that information into programming?

      Many computer programs already exist whose operation is designed for learning (artificial intelligence) from responses to stimuli (results) and then making informed choices for actions and reactions. One example is Google's search programming, which relentlessly seeks to improve the relevance of our search results.

      Seems to me if we can understand it and describe it, we can program it.
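      A minimal sketch of that learn-from-results loop - purely illustrative, not how Google's software actually works; the action names and reward function here are made up - might look like this epsilon-greedy learner:

```python
import random

# Toy "learn from responses to stimuli" loop: try actions, observe a
# reward (the "result"), and gradually prefer whatever worked best.
def learn(reward_of, actions, steps=1000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    value = {a: 0.0 for a in actions}          # running reward estimate
    count = {a: 0 for a in actions}
    for _ in range(steps):
        if rng.random() < epsilon:             # occasionally explore
            a = rng.choice(actions)
        else:                                  # otherwise exploit best guess
            a = max(actions, key=value.__getitem__)
        r = reward_of(a, rng)                  # stimulus: observed result
        count[a] += 1
        value[a] += (r - value[a]) / count[a]  # incremental mean update
    return max(actions, key=value.__getitem__)

# Hypothetical feedback: option "B" pleases users most often, so the
# program should converge on preferring it without being told why.
best = learn(lambda a, rng: rng.random() + {"A": 0.2, "B": 0.8, "C": 0.4}[a],
             ["A", "B", "C"])
```

The point of the sketch is the one made above: nothing in the loop "understands" the actions; it only associates them with results.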
      • thumb
        Mar 6 2013: Re: "Agreed. But as long as we can understand these feelings conceptually - pain, pleasure, sorrow, etc. - and explain why and what causes them, why is it not possible to translate that information into programming?"

        Yes, but do we understand these things ourselves? As human beings, we know what pain and pleasure are even before we learn to speak. When we learn to speak, we think we can explain it, but we just go in circles. Everyone seems to know what "self" or "I" is. But try to define it.

        Google may be relentless in establishing links and associations between concepts, but the will to do so does not come from the computer system. It comes from humans. The database wouldn't know that it needs a faster server, for example, unless a programmer programs it to place a purchase order for a new one.
  • thumb
    Mar 6 2013: In Europe, in the last couple of months, seven people committed suicide because they were evicted from their houses.
    And they are spending $1.3 billion on an impossible task...
    • Mar 6 2013: The cost of the Human Genome Project (another "impossible" task) to the US taxpayer over the past 15 years has been more than double this amount, and we are merely scratching the surface in terms of the benefits it can bring. It is important to look beyond the price tag and the science-fiction bull that is obviously going to be thrown about in this discussion; the benefits to humankind will far surpass our lifespan.

      We today are living longer than our ancestors, and unfortunately we're beginning to identify illnesses that coincide with our ageing population: Alzheimer's, Parkinson's, and cancers, to name a few. What we are developing here is a model of the human brain - an invaluable tool. One incredible prospect is the use of this brain as a model for neuropharmaceuticals; just imagine all those lab rats that could be released into the wild.

      Agreed, it's a lot of money, but stopping funding for scientific research in the face of socio-economic issues isn't the solution.
    • thumb
      Mar 6 2013: As an adjunct to both of your points, here's what Obama said a few weeks ago in his State of the Union speech, when announcing his interest in a brain mapping project, and comparing it to the Human Genome Project:

      “Every dollar we invested to map the human genome returned $140 to our economy — every dollar,” he said. “Today our scientists are mapping the human brain to unlock the answers to Alzheimer’s. They’re developing drugs to regenerate damaged organs, devising new materials to make batteries 10 times more powerful. Now is not the time to gut these job-creating investments in science and innovation.”
  • thumb
    Mar 6 2013: No.
  • thumb
    Mar 5 2013: One day you will type something into Google and it will refuse your search because, in Google's opinion, your search is not worth the trouble. The first sign of sentient thought. After all, the processing power that Google has access to is vast and immeasurable. Just a thought ;-)
  • thumb
    Mar 5 2013: It is an intriguing question. As Robert states, existing computers are TOMS (Totally Obedient Machines); they only do what we tell or program them to do. But with sufficient processing capability, why could we not input data that expresses our moral values and feelings, and the interactions between those concepts, from which we as human beings interpret and derive our own individual conclusions? While we may not be able to precisely explain the physical mechanisms of brain processing, we are able to articulate and explain how we discern right from wrong, and the processes by which we form opinions and ideas and then react or act deliberately.

    It seems quite reasonable to me that programming could indeed instruct a robot that setting foot in a bed of hot coals is a painful proposition, and further instruct the robot to react with a human-like response to the pain. For us humans, as biological organisms, certain responses to stimuli come naturally. It is inherent, or innate, in our programming. Why that is, is yet another question. I suspect it relates to the ultimate programming initiative of all living, biological organisms, which is survival. And perhaps living organisms had to learn a great deal in this arena as well.

    If you believe in evolution, and that we all evolved from single-cell organisms, it clearly took many millions of years for organic life to learn better survival tactics. Single-cell organisms began organizing into multi-cell organisms, their ranks culled and rewarded by natural selection, i.e. survival of the fittest. Over time, programming specialties were learned and developed for purpose and adaptation.

    These are all things we can understand and articulate. Granted, it would be a monumental task to gather and input all that data, and equally challenging to develop a man-made device with sufficient processing capabilities, but it does seem possible.