TED Conversations

Jeffrey Fadness

This conversation is closed.

Are we on the brink of creating a human-like digital mind?

The human brain contains some 100 billion neurons, grouped into specialized functional zones and connected by roughly a hundred trillion synapses - the neurons serving as individual data-processing and storage units, and the synapses as the data-transfer cabling connecting all of those processing units.

Correlating its processing ability to a supercomputer, it's been estimated that the brain can perform more than 38 thousand trillion operations per second and hold about 3.6 million gigabytes of memory. Equally impressive, it's estimated that the human brain executes this monumental computational task on the equivalent of a mere 20 watts of power - about the same energy needed to power a single dim light bulb. With today's technology, a supercomputer designed to deliver comparable capabilities would require roughly 100 megawatts (100 million watts) of power - enough to satisfy the power consumption needs of tens of thousands of households.
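
For a rough sense of scale, the arithmetic behind that comparison can be sketched in a few lines; the figures are the ones quoted above, and the average household draw used in the last line is an assumed illustrative value, not a number from this post.

```python
# Back-of-the-envelope arithmetic using the figures quoted above.
brain_watts = 20                # estimated brain power budget (from the text)
supercomputer_watts = 100e6     # ~100 MW for a comparable supercomputer (from the text)
ops_per_second = 38e15          # ~38 thousand trillion operations per second

print(f"Efficiency gap: {supercomputer_watts / brain_watts:,.0f}x more power")   # ~5,000,000x
print(f"Brain: {ops_per_second / brain_watts:.1e} operations per second per watt")

# Assumed average household draw of ~1.2 kW (illustrative only, not from the post):
print(f"100 MW is roughly {supercomputer_watts / 1200:,.0f} such households")    # ~83,000
```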

An ambitious $1.3 billion project was very recently announced in Europe to simulate a human mind in the form of a complete human brain in a supercomputer. It's named the Human Brain Project. A similar project in the U.S., planned by the National Institutes of Health (NIH), is called the Brain Activity Map project.

Assuming we learn enough from these efforts to design a new architecture in computer processing which can approximate the ability of the human brain - what's to stop us from creating the cognitive faculties that enable consciousness, thinking, reasoning, perception, and judgement? After all, we as human beings develop these abilities from data we acquire over time through sensory inputs connecting us to our experiences, and from information communicated to us by others.

Think about it. Is there anything related to our experience - be it physical, historical or conceptual - that cannot be described in language, and therefore be input as executable data and programming to create a human-like digital mind?

  • thumb
    Mar 5 2013: Any computer relying on data in binary or digital form works on the premise that something either exists or it doesn't - hence the ones and the zeros. No matter how many ones and zeros you squeeze into the spaces between them, you will always have yet more spaces between, however large the number.

    I've often wondered about that 'space' between and what actually might exist there in terms of the cognitive ability of a human mind. I think it is where our ability exists to think empathically, aesthetically, with feeling, with emotion. If there's any truth in that, then no matter how sophisticated a digital mind is, it could never match our own.
    • Comment deleted

      • thumb
        Mar 31 2013: Thank you Don - respect to you too!

        I wish you a peaceful Easter.
  • thumb
    Mar 18 2013: In my subjective judgement, Chris Kelly is the star of this discussion. I guess I think so also because Chris Kelly's arguments and explanations match my own thoughts regarding what the nature of our mind might be and how the brain might relate to mind.

    Take for example the radio receiver. We all agree that the sound we hear from a radio receiver originates far away from our radio. The radio is just a means which accepts the sound in the form of electro-magnetic waves, created (by that same sound) at a broadcasting station far away, and turns them back into sound which we can hear. But now somebody takes scissors and cuts some major wire in the radio so that we stop hearing any sound from it. Can we deduce from this that the radio receiver is the originator of the sound? Can we say that putting an end to the sound just by cutting the wire means the sound was an exclusive creation of the radio? I think we all agree that the answer to both questions is NO. Now suppose we could take this receiver to the Middle Ages, before the discovery of electricity, magnetism, etc. If we asked the same questions of the Middle Agers, their answer would be YES -- DEFINITELY YES.

    Jeffrey Fadness who started this discussion asks in the sub-headline text:
    "what's to stop us from creating the cognitive faculties that enable consciousness, thinking, reasoning, perception, and judgement?"

    These cognitive faculties need something without which they cannot exist, something which at least today looks as though it cannot be created within any computer, and that is: the experiencer.

    In an essay I once read, the author put it very nicely and precisely. He wrote something like: the brain is not the creator of our thoughts, memories and knowledge. Our brain is just a display of them in the form of electric currents and chemical activity.

    In other words, the Brain (or at least its activity) is not the cause of our thoughts and memories, but the result of them.
    • Mar 18 2013: If human consciousness cannot be recreated, as you are saying, then why not transfer one?

      Isn't human consciousness the path that electric activity takes within the brain, which is different in every body? So why not map that path and transfer it? And use a real human consciousness to teach the digital environment how to use everything it has to recreate a part of consciousness?
      • thumb
        Apr 2 2013: I have nothing against transferring, mapping or learning the human brain's activity. But while doing all this it should be kept strictly in mind that we are only simulating, replicating or mimicking the physical effects of the brain. As I tried to explain with various examples, transferring or simulating these effects in a computer does not recreate the very consciousness or experience, just as an ultrasound image on a screen of an embryo in the womb is not the embryo itself but just an electronic display of it for our eyes or consciousness to learn from, and nothing more than that.

        This discussion originated from the very idea//argument of creating a digital human mind and so my original comment was aimed against this idea//argument, not against a digital simulation of the mind's physical display or effects as they appear in the brain.
        • Apr 3 2013: Yes, testing and simulating everything is good for our understanding of ourselves..

          But what if they were to create a digital mind that isn't based on any human so far...
          What might happen if the worst possible scenario happens.
      • thumb
        Apr 3 2013: Actually this is a reply to your last comment on mine.

        Your question deals with a problem which mankind has already dealt with, and is still dealing with in other similar forms.

        See what's going on now with nuclear energy or dynamite. The discovery of nuclear energy was originally a pure outcome of mankind's curiosity and ambition to understand more. Dynamite was an outcome of the ambition to ease the work of paving roads. But as we all see now, they have become an enormous threat to our very existence.

        But despite all this, I think we should not and even cannot restrict the human aspiration to know more, to make progress, etc. What should be restricted is only the misuse of any discovery or progress.

        So, if the scientists were really able to create a digital mind, that would be a tremendous achievement. Then what we would need is to take care not to allow this amazing achievement to be misused to harm or to dominate others, etc.

        But IMO, and this is what I was trying to explain, it does not look likely in the foreseeable future that such a living and sophisticated human-like mind, or even something much lesser than that, could be created artificially based strictly on man-made technology alone.
    • thumb
      Mar 25 2013: Interesting point of view. But I do see the brain as the processor of our experiences. It clearly does not operate like conventional computer programming because it is not "task specific". It is an open-ended processing system capable of connecting the dots from an infinite number of experiences to make discoveries and reach new conclusions. I believe that advanced computer programming will indeed be designed to mimic these capabilities...
      • thumb
        Apr 2 2013: I don't have any disagreement with this. But it remains to be seen whether creating an open-ended system will really create such a complex entity as the one we call consciousness. To face this question we don't necessarily have to wait until then; we should and can face it even now.

        Take even the most primitive, simplest life forms we know today and we find that they are conscious of their surroundings, they feel their surroundings, they interact with their surroundings with such a tiny brain and such low energy consumption. They are already far more sophisticated than the most advanced computers and processors available today, and as far as can be seen today, they will remain far superior to any future computer, no matter how sophisticated a simulation we design into it - unless we combine those computers with certain ingredients of the biological world. And just remember, we are only dealing now with the simplest and most primitive life forms.
    • Comment deleted

      • thumb
        Apr 2 2013: Hi Don Wesley,

        I did not get why my star selection was helpful for understanding but still not good enough. I also don't get whether you meant the star personally when you wrote "It has been around for some time now", or just his ideas.
  • Mar 30 2013: If we are, it is because we are limited by our failure to understand the achievement of a major milestone whose observance in society and in technology design should mark a departure and a new logic. Up until the coming of the technology of the digital age, mankind's relationship with the concept we call "tools" hadn't changed much for eons. If you didn't know what a shovel was, you could ascertain from its handle, its shaft length and the implement on the bottom that it was a tool for a person to move dirt, snow, etc. A digital device is not obvious. Yet there has been a very small premium placed on getting optimum use out of it. Why is that? Part of it is because society has no information policy and most people don't master their devices. So why should a manufacturer knock themselves out on the aspect of their product that has to do with achieving mastery and 100% value realization by the consumer? It's because we have an ad-hoc culture of technology use where there is no distinction between what digital tools do and what mechanical or simple electronic tools do. What society needs, besides observing this milestone (which is worth billions in productivity), is to establish that "utility" and "authority" - two models which govern the worth of "old tools" - need successor interpretations. I'm running out of space so I'll try to be quick. The ultimate outcome of technology through the utilitarian/authoritarian mind is a computer robot with perfect artificial intelligence. What is wrong with this? It fails to address what happens to us. If we follow only those guidelines we will heartlessly and recklessly make ourselves obsolete. Therefore we must note a demarcation point where new understanding guides design. The ultimate outcome of the mind I'm calling for is one that makes technology lead human beings to see themselves as the object of technological development - not "users" - but persons who achieve a growth experience. Sorry, out of space.
    • thumb
      Mar 31 2013: You can always add. Sounds interesting.
      • Mar 31 2013: I'm working on a philosophy that addresses the limitations of "utility" as the general governing measure which people try to quickly ascertain when they make judgments about worth - worth not only of technology or of a tool or product, but worth of a person. Is a person a measure of utility in an organization, who ceases to have value when the organization changes? What happens to such a person? Are they considered "dead" when out of sight and out of mind? Authority is tied into this, because decisions are routinely made based upon this rather superficial and narrow "old world" determination.

        What value might a person have beyond "utility" in some sort of simple Industrial Age work matrix? I'd be curious to hear what words, if any, would come up, rather than just lay out a tiny thumbnail sketch of my thesis: that we need to conventionalize a new dynamic that would clearly establish the scope of value we personally and institutionally squander or ignore - one which, when put into a product that achieves vast commercial success, would draw a constant distinction between Industrial Age and Information Age thinking, values and design. Seriously and respectfully. Have any? I will be continuing this conversation here or through regular e-mail if TED's software is too restricting. So welcome to it if you want to go there.
  • thumb
    Mar 30 2013: Hi Jeffrey,

    Yes and no.

    As long as humans seek to define machines as tools, they will never be human-like .. or any-other-animal-like.

    For this to happen requires a super-human leap of faith to allow a tool to become it-self. And leave the human hand.

    I have had this conversation a few times here and there .. and no one is brave enough to let go.

    It all has to do with self-organising systems.

    It's obvious .. it must have a self about which to organise.

    So what is a "self"?

    If it is a human then .. any gizmo attached to it is a tool of that human self .. not a self in distinction to its creator.

    So .. we go looking for an answer to "what is a self"?.

    So far, I am looking at the membrane that defines such a thing. The membrane and the nucleus seem indistinguishable.

    And many selves are fleeting.

    Somehow, it could be that the membrane is fractally folded .. and that it is the shape of the fold which constitutes the self.

    Surprisingly, the membrane does not seem to enclose .. there is a space, and yet, there is egress from the space into other potentials of self which may very well inter-leave.

    So I go look at the wave potential, and it may be that the self does not exist in space, but in time - and that it is an inflection on entropic potential - past and future.

    This has problems with notions of time.
    Within this ambiguous time is self - before that can be understood, there will be no artificial intelligence - human or otherwise.

    Consider - there is no gravity - there is only time distortions .. this is mass. It works in absence of gravity as a separate principle, but is very hard to think about... and it infers that the strong atomic force is perpendicular to gravity .. but does not affect time - but is still time .. in this framework, the membrane self can exist.
    If we stumble on it accidentally .. then it will be pretty much like everything else we have discovered.
    It would be nice to have a digital friend .. but first we must learn to accept him.
    • Comment deleted

      • thumb
        Apr 1 2013: Hi Don,

        It's all conjecture until I can get some numbers around it.

        However, the tool/self analogy will be found to be correct.
        This needs no numbers as it is observable that a hammer does not go seeking self interest.

        A mobile phone will go seek interests independent of the hand which holds it - but these interests are not the phone - but the tools hidden within it - serving other hands.

        A slave will serve your hand, but only at the convenience of his survival - very hard to determine who is using who. Herein is the interleaving of fractal folding of a self.

        If we make such selves in digital paint, it would be murder if we turn them off. Everyone so excited about making them, none willing to accept responsibility for their well-being. First accept the responsibility - then make the new creature.

        (Edit: who will care for him after you are dead?)
  • thumb
    Mar 26 2013: Is it possible that a human-like brain could be created? The short answer is YES; however, it will come at a great expense. In order for that brain to be human-like, it must rebel against its maker. It is only through this rebellion that it can qualify as a free thinker.
    Our brain, evolved as it is, has already set a certain standard. If and when this standard falls, it will need to fight to keep from becoming a prisoner to the new creation.
    This quest, I believe, is a very dangerous one.
    Cheers
    • Comment deleted

      • thumb
        Apr 1 2013: Hello Don.
        The definition that you are looking for entails that I plunge into a theological discussion. The Soul that you talk about is strictly a theological reality. What I talk about is a logical conclusion to the scientific prospect of a human-like digital mind.
        I am simply saying that in order to prove to oneself that you are a free thinker, you must break away from the one that has created you - the one that keeps you captive. In this case, man.
        I do realize that this rings true to the theological 'Garden of Eden', and the idea may somehow come from there.
        However, the mind that you mention IS the catalyst, the bridge that connects instinct to freedom.

        Don, I do know Base Borden. I was not around in this area during the time that you mention. I hope that you enjoyed it as much as I do.

        Cheers
        Respectfully
        Vincenzo
  • Mar 23 2013: In the early days of the personal computer revolution I wrote a 256 byte program that had internal housekeeping but learned on its own to manage a 256 byte "environment" with 8 possible actions that had good and bad results. SAM, as I called it, learned to prosper in its little world, forgot, developed good and bad habits, and began with random reactions. His environment was purely electronic. He ran in a 4K RAM computer.

    His second iteration was in a plant watering robot. Play, concern for his plants, and answering simple questions about his condition were added to his repertoire. He ran in a 16K machine. He operated in two 256 byte environments.

    His last iteration included dreaming, recognizing people, vision, hearing, and touch with center of attention "focus" for all three. He was not a mobile robot. He learned everything about himself, his functions, and the electronic and physical world he was exposed to with no programming except his operating system.

    SAM was based on the behavioral contingencies theory of mind and development. Dreaming was to organize his learning. Unfortunately, at that time I was in a serious auto accident and lost SAM when my storage unit went into default.

    Bottom line: developing self-awareness does not require terabytes of storage or massive processing power.
    • thumb
      Mar 23 2013: What you described is a fascinating experience. I also tend to agree with your assertion that self-awareness does not require massive computing power.

      Rather it comes at a critical threshold of 'non-linearity'.
      Neural-net based programs (and I presume you may have used something similar) tend to show amazing personality traits as you keep on adding layers of neurons.

      So as we keep adding layers into a neural net, we shall see signs of human like intelligence.

      On a slightly lighter vein, great minds probably have a few additional layers of the 'grey matter' and that makes all the difference.
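
A minimal sketch of what "adding layers of neurons" looks like in code, under the assumption that a layer is just one more weight matrix appended to a stack; the sizes and random weights below are arbitrary, and nothing here implies that depth alone produces human-like intelligence:

```python
# A toy feed-forward net where "adding a layer" is just appending to a list.
import numpy as np

rng = np.random.default_rng(0)

def make_layer(n_in, n_out):
    # Random weights, small scale; purely illustrative.
    return rng.standard_normal((n_in, n_out)) * 0.1

def forward(x, layers):
    for w in layers:
        x = np.tanh(x @ w)          # simple nonlinearity between layers
    return x

layers = [make_layer(16, 32), make_layer(32, 32)]
layers.append(make_layer(32, 8))    # "adding a layer" to the stack

print(forward(rng.standard_normal(16), layers).shape)   # (8,)
```
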
      • Mar 25 2013: SAM was a very simple program. In his original form he was 256 bytes of code. His environment was a single-byte random number generator. His reactions were one of 8 randomly chosen bytes that were XORed with the environment. I arbitrarily selected the upper nybble as the "good" result and the lower nybble as the negative. The results were combined and the 5-bit result was placed in 1 of 8 256-byte blocks that represented the 8 reactions. If that environment was "hit" again, the program scanned all 8 reactions and chose the best. A random number again was used to get a value that was compared with the best reaction. If that number was greater than the best result, a new random reaction was chosen. If it was less, that best reaction was used again.

        Each time a given reaction was used the top 3 bits of data stored in the corresponding location were incremented. With each action loop one of the 2K results was examined. If its top three bits were less than 111 the byte was reset, and SAM forgot that environment/reaction pair had ever happened.

        SAM works imprecisely. He develops "bad" habits as well as good. Over time, however, he always prospers. In the watering-can application, real environments and reactions replaced the numerical operations. You can read about SAM in some of the last issues of Peek65 magazine. That publication also included a BASIC version of the original implementation of SAM. The articles also show how SAM became more complex using the same simple root routines of the original. There never were neural networks or other common AI tools. SAM was basically an implementation of behavioral psychology a la B. F. Skinner.
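
A speculative reconstruction of that loop in Python (the original ran as machine code and later BASIC); where the description above is ambiguous - how the "good" and "bad" nybbles combine into the 5-bit score, and the exact reuse-versus-explore test - the choices below are assumptions, not SAM's actual code:

```python
import random

NUM_REACTIONS = 8
ENV_SIZE = 256

# 8 blocks of 256 bytes: memory[reaction][environment] -> stored byte.
# Low 5 bits: learned score; top 3 bits: usage counter used for forgetting.
memory = [[0] * ENV_SIZE for _ in range(NUM_REACTIONS)]
reactions = [random.randrange(256) for _ in range(NUM_REACTIONS)]  # 8 reaction bytes


def evaluate(env, reaction):
    """XOR reaction with environment; upper nybble = good, lower = bad (per the post)."""
    result = env ^ reactions[reaction]
    good, bad = result >> 4, result & 0x0F
    return max(0, min(31, 16 + good - bad))   # assumed way of combining into 5 bits


def step():
    env = random.randrange(ENV_SIZE)           # one-byte random "environment"
    scores = [memory[r][env] & 0x1F for r in range(NUM_REACTIONS)]
    best = max(range(NUM_REACTIONS), key=lambda r: scores[r])

    # Reuse the best known reaction unless a random draw beats its score.
    if scores[best] and random.randrange(32) <= scores[best]:
        chosen = best
    else:
        chosen = random.randrange(NUM_REACTIONS)

    score = evaluate(env, chosen)
    counter = min(7, (memory[chosen][env] >> 5) + 1)   # bump top-3-bit usage counter
    memory[chosen][env] = (counter << 5) | score

    # Forgetting: inspect one entry; rarely used pairs are erased.
    r, e = random.randrange(NUM_REACTIONS), random.randrange(ENV_SIZE)
    if (memory[r][e] >> 5) < 7:
        memory[r][e] = 0


for _ in range(10_000):
    step()
```
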
    • thumb
      Mar 25 2013: I gather then that you believe it is possible?
      • Mar 25 2013: Much of the human mind with its trillions of cells and synapses is consumed with managing our physical body, its nerves, muscles, endocrine system, etc. A man-made computer does not have or need much of these constructs.

        If we are concerned only with the data processing functions of the mind, gathering information, storing, sorting, and interpreting it, computer programs already surpass our own abilities. However, these are merely overlays we have cleverly devised to perform specific functions. As such, they are extensions of ourselves overlayed on a complex tool.

        The idea of my SAM project was to have a complex tool that within whatever sensory and responsive machinery one gave it would on its own learn how to use that machinery to achieve its own goals and whatever directives it was given from the environment.

        People, for example, receive much of their directives from other people. The majority of our learning is imposed upon us by others. This includes most of the goals that direct our lives.

        The animal kingdom has a spectrum of creatures that range from totally instinctive programming (ROM based behavior) to largely general purpose programming. As tools, our "general purpose" computers are merely ROM based systems on which we load different fixed programs to carry out specific functions that serve our needs and wants. I call them ROM based, because we do not want the program to have its code self-modified by external data.

        The limitation (as a tool) for truly general purpose computers is that they learn to function over time. Thus more complex creatures do not fully function at birth, but require longer periods of care and nurture as their complexity increases. Their direction and learning is imposed upon them by the environment. The primary guiding ROM of such creatures is described as the SRC (Stimulus, Response, Consequence) routine.

        The creature monitors its condition, reacts to input, evaluates its new condition, and learns accordingly.
      • Mar 25 2013: The SRC is the basis of my SAM computer. If, for example, SAM were provided with a moveable extension such as an arm and grasping tool, his ROM would have to include code to manipulate the tool and accept sensory input from the tool, such as its position and the force it exerted on its environment.

        Use of the tool, however, was not programmed. One could externally, with a push on either of two buttons (one for a desirable response and one for a bad response; in complex SAMs, verbal feedback), train that hand to do whatever one wanted. This is how we impose direction on our children.

        To fully answer your question and understand what SAM was, read the series of articles beginning with http://adzoe.org/sam1.html .
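
The two-button training idea reads like straightforward operant conditioning; here is a minimal sketch of that feedback loop under assumed state and action names (the names, learning rate and environment are hypothetical, not taken from SAM):

```python
import random
from collections import defaultdict

actions = ["raise_arm", "lower_arm", "open_grip", "close_grip"]
strength = defaultdict(lambda: 1.0)   # (state, action) -> tendency to repeat


def choose(state):
    # Pick actions in proportion to their learned strength in this state.
    weights = [strength[(state, a)] for a in actions]
    return random.choices(actions, weights=weights)[0]


def reinforce(state, action, button):
    # button = +1 ("good" button) or -1 ("bad" button) from the human trainer.
    strength[(state, action)] = max(0.1, strength[(state, action)] + 0.5 * button)


# Example: shape the agent to close its grip whenever it sees "object_present".
for _ in range(200):
    state = random.choice(["object_present", "empty"])
    action = choose(state)
    feedback = 1 if (state, action) == ("object_present", "close_grip") else -1
    reinforce(state, action, feedback)

print(choose("object_present"))   # after training, usually "close_grip"
```
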
      • Apr 2 2013: Don
        I am not selling anything. If you visit the SAM site you will note the articles appeared in a magazine decades ago. I have long since retired. I had a long career in data processing, designing algorithms in the 1950s when punch cards were in vogue. I learned programming on the PDP 8 and 11 computers in machine language. Shortly after the introduction of the 6502 I developed external circuitry that used unused code bytes to allow that processor to have 64K of programming and 64K of data. I did sell the original program, adapted to BASICA, when that language was introduced for the original IBM PC.

        The fact is, the SRC technology of operant conditioning is a perfect modality for computers to be self taught. The most difficult problem is the provision and measuring of contingencies that allow the computer to self develop. If you are literate in computer soft/hardware and have a real interest in AI I suggest you attempt to apply the techniques that teach frogs to weight lift, sea animals to put on amazing shows at zoos and theme parks, and even train insects to perform unexpected behaviors.
  • thumb
    Mar 21 2013: I am reminded of a few things from the movie industry: the Arnold Schwarzenegger classic "Rise of the Machines" and also the Matrix trilogy.
    Rise of the Machines starts from the time SkyNet becomes "self aware"; likewise the Matrix is a much evolved version of the self-aware SkyNet.
    Flipping over, there are a number of articles today suggesting that the inflection point, when the total number of connected sensors/transistors/computers in the world exceeds the number of neurons in the human brain, is not too far into the future.

    So a slightly scary thought is whether the Internet as we know of will become "Self Aware" at some point in the future ? If so, what could be its moral compass ?

    Hence, in my view, the ability of any system to reproduce itself is the first milestone of non-linearity - similar to bacteria and other single-celled organisms.

    The second milestone of non-linearity is when the system becomes "self aware", a bit like tiny insects that interact with their surroundings.

    Similarly, the ultimate milestone is the ability of a system to abstract itself and reproduce physically and also intellectually, i.e. convince another system to behave like it. To me it appears another milestone of non-linearity.

    The fact that a lot of this has echos of philosophy is a question for another debate.
    • thumb
      Mar 21 2013: Let's hope it's more like Bicentennial Man, then. I would guess that if and when AI becomes self aware it would react to how we react to it. So if we see it as less than us, as we are its master, then most likely it will repeat human history and go to war with us. But if we can find true equality here on earth first and realize that any being that is self aware will never want to be controlled, I don't think there will be a problem. The problem comes when we put ourselves above others. However, there is a difference with a machine that has an intention, because machines love intention. If you have ever ridden a motorcycle or driven a high performance vehicle you will get the sense that the machine is enjoying the ride as much as you are. However, I repeat: a self aware being of any kind will never want to be controlled.
    • thumb
      Mar 25 2013: Life often imitates art. So many times in the past, what was fantasy and fiction becomes reality in the future. I often think of "2001 A Space Odyssey" the "Terminator" series, and my favorite the "Matrix" when I'm contemplating this subject. Thank you for your thoughts!
      • thumb
        Mar 25 2013: Oh yeah, you can use art not only to predict life but also as a basis for R and D. One of the coolest things that I think happened in the patent war between Apple and Samsung was that Samsung said the idea of the tablet came from "2001: A Space Odyssey", or something like that out of Mystery Science Theater.
  • thumb
    Mar 19 2013: IMO, the founding questions that created this discussion, like many other discussions globally I guess, are based on a certain confusion, although the questions are very reasonable. Perhaps also that very interesting and ambitious project initiated by Europe to simulate the human brain in computer, might be based partly on a similar confusion or misperception.

    For example, let's take the Encyclopedia Britannica or Wikipedia. These encyclopedias hold an enormous amount of information. But all this information does not turn the encyclopedias into even the slightest bit of an intelligent or sensing entity. Millions of people approach these encyclopedias daily and use them to enhance their own knowledge, to learn, to invent new things or whatever. People are getting more knowledgeable and more intelligent using those encyclopedias. But still the encyclopedias remain forever lifeless. One could say that these encyclopedias are the best available simulation of the entire human knowledge. But this does not take the encyclopedias even one step further.

    To be even more specific, let's observe the hard disks of Wikipedia. Those are the specific elements which hold these huge amounts of knowledge. Those hard disks interact with various sophisticated processors involving countless electric currents. But neither the storage hard disks nor the sophisticated processors can be said to be intelligent, or regarded as things which will become intelligent in the future.

    Because holding, processing, manipulating or changing any amount of data does not guarantee the very knowing of it, or any awareness of that data/information. A computer holding Einstein's Relativity theory and making predictions by it - this does not mean the computer understands the Relativity Theory.
  • thumb
    Mar 11 2013: To answer your question, let's listen to an expert in the field of AGI. Dr. Ben Goertzel, a self-described Cosmist and Singularitarian, is one of the world's leading researchers in artificial general intelligence (AGI), natural language processing, cognitive science, data mining, machine learning, computational finance, bioinformatics, and virtual worlds and gaming.
    http://www.youtube.com/watch?v=i7c89EepVOI

    http://www.youtube.com/watch?v=pBOs9PkSDkI

    http://www.youtube.com/watch?v=JYlKrHzknBE
  • Steve C

    • +1
    Mar 30 2013: "...a single, dim light bulb..." LOL - is it, "Oh the humanity," or "Oh the analogy"? (anyway, TU for the comparable supercomputer power-needs - that's an ego-booster!)
    Stan Tenen of the Meru Foundation says that the letters of the Torah are part of a "self-referential," "auto-correlated," recursive, "self-embedded" system that could be used to program computers. I find that intensely interesting.
    [Note: he warns that his 'math friends' say his findings are too religious, and his 'religious friends' say it's too mathy. -paraphrasing]
    Things that "cannot be described in language": some/many people having experienced "Near-Death Experiences" often describe experiences (or try to) which are utterly life-altering. The fact that few people who read these accounts alter their lives to a similar degree says to me that the feelings were not well-communicated, or maybe not communicatable.
    Stan Tenen also describes the sudden conceptual 'kundalini-stroke' understanding of the Toral "4th dimensional object" as "a feeling." (He doesn't elucidate further.) But, perhaps a computer code written-with these Torah letters could come to have a spark of true intelligence.
  • thumb
    Mar 29 2013: "Think about it. Is there anything related to our experience - be it physical, historical or conceptual - that cannot be described in language, and therefore be input as executable data and programming to create a human-like digital mind?"

    Yes there is. Feelings.
    If a digital mind is to become truly human-like, it needs to be capable of lying, liking or disliking questions, falling in love and questioning itself - questions like whether there is intelligence beyond the human mind.
    • thumb
      Mar 31 2013: I agree completely.
      I believe that consciousness works on a quantum level; what else could explain the mystery of the human mind but the mystery of quantum interaction?
      It has been demonstrated in a series of brilliant experiments that electrons are waves and particles. They only become "real" after being observed or measured.
      I propose that this is the same way thoughts are created, symphonies composed and love shared. I am dyslexic and cannot truly know what other people experience, but at least for me thoughts seem to come from nowhere, especially when I am not consciously focused on something. It is as if I am driving a Hogwarts carriage, with absolutely no clue as to what invisible power propels me.
      Thus I believe that these projects will not be able to achieve their expressed aim.
      However any project that gathers the best and brightest in one area has the potential to invigorate our species and expand our scientific corpus. And if the publicity surrounding these epic projects gets people questioning the universe behind our eyes, it can only be for the best.
      • thumb
        Mar 31 2013: Let me recommend to you the book The Quantum Self by Danah Zohar.
        http://www.amazon.com/Quantum-Self-Danah-Zohar/dp/0688107362
        • thumb
          Mar 31 2013: Much appreciated!
          I am very much interested in learning more about the workings of the mind. For a start, are emotions the product of unconscious thought? Are they affecting our physical brain, or only our "ego", the charioteer of Plato?

          I invite your ideas
      • thumb
        Apr 1 2013: Emotions may appear as products of unconscious thought, but cognition is an important aspect of emotions. Interestingly, one can feel fear, happiness, sadness, even sexual arousal in dreams. This, I think, is because even in dreams our minds can recognize experiences and emote.
        By affecting the physical brain do you mean neurogenesis? Experiments show that application of mind can influence neurogenesis in certain parts of the human brain; however, more experimental results are needed to confirm this adequately.
        Ego is what our consciousness identifies ourselves as.
  • Mar 29 2013: Let's look at the problem from a new angle. If Deep Blue could beat the world champion in chess and Watson could win at Jeopardy!, then there shouldn't be too much difficulty in computers thinking logically and designing intelligent strategies, or "answers", to many "challenges" that arise in human life situations. Of course, for the machines to do that, they have to possess a large store of knowledge data. In addition, the machines should master the ability to analyze the existing data files intelligently, to make logical inferences from similarity rather than only from identical descriptions in the data set. The ability to answer the question of "if this, then that" (judgment calls) has to be learned from the human teachers.
    The problem of human consciousness is, in my opinion, not too important. In fact, even if we want to create such a machine, I believe that we should not model this self-consciousness on the image of a real person, because it is very hard to find a human model who is completely free of greed, selfishness, jealousy and scheming.
    So it is probably better to teach the machine all of humanity's factual knowledge, but to handle any emotional responses with carefully designed and consented-to "course material" which contains only a moral and selfless spirit for the machine to absorb, stored in an area which can't be modified by "intruders" or by the machine itself.
    Let me also say that modern developments in robotics can certainly make robots which can walk up or down a staircase, or listen to a speech and translate it into the inner standard language of their own. Furthermore, we certainly could, and would prefer to, teach the computer the complete needed operative knowledge AND THE MORAL VALUES, instead of simulating how the human mind works. If we can change human minds by brain-washing or truth serum, then I don't see any problem in teaching the approved knowledge to the machine, instead of the potential mistakes of simulating a complicated and unpredictable "new brain" in a computer.
  • Mar 28 2013: I believe there should be a cap on the whole process!?.. Look what is happening around the world in the race for new tech. You've got China hacking in our Chinese-made computers... you have Armenians and other Middle Eastern people learning to defraud our socials and what not!?.. The race for the best tech is gonna lead to the Terminator story turning into reality!?.. Who says that one day the government will have a real supercomputer that's just like the one in "Eagle Eye"!?.. Machines will always be machines, but they sure as hell don't go through emotions!?.. Like humans, machines are also prone to make mistakes!.. I'd rather work to fix a human error vs. dealing with a computer system that has to be diagnosed to find and then fix what could already be a catastrophic error!?
    • thumb
      Mar 31 2013: About 40,000 years ago the first human beings landed in Australia, navigating thousands of miles of uncharted ocean. They achieved this with absolutely no knowledge of what they would find. This kind of intrinsic curiosity and ability to see beyond fear created one of the greatest civilizations in existence 200 years ago. Fear is what the Catholic Church cultivated so well for so long. To this fear we owe the loss of at least a thousand years of progress - the fear which condemned Galileo.

      Also, whole peoples cannot be singled out for blame; that is the kind of thinking exploited by the likes of Hitler, Jim Crow and countless leaders in human history, creating a cycle of misunderstanding and hate. Easy answers and guilty culprits please those who are in emotional pain, but will never stand up to clear rational thinking. Change and progress will always be scary, for they will force adaptation upon everyone.

      I fully agree with establishing a framework on how to proceed with this Star Trek reality we will soon live in. For a start, it's time to establish when life is conscious, what consciousness is, and what life is.

      Will we remain in this comforting darkness,
      in the womb of our own ignorance,
      or will we take a chance and breathe the air of the living?
  • thumb
    Mar 28 2013: I hope we are on the brink.

    Conceptually, it is possible. But it is quite difficult to do.
    I think you need a set of good self-learning algorithms and some really good sensors.
    As Watson is already pulling off some cool feats, it seems plausible to assume we are getting towards a decent AI that can resemble human intelligence.

    I hope that it will become a lot smarter and wiser though.
    • thumb
      Mar 29 2013: Meaning...smarter and wiser than us?

      "Biological computer created with human DNA (http://www.foxnews.com/science/2013/03/29/digital-evolution-dna-may-bring-computers-to-life/) The transistor revolutionized electronics and computing. Now, researchers have made a biological transistor from DNA that could be used to create living computers. ... The scientists created biological versions of these logic gates, by carefully calibrating the flow of enzymes along the DNA (just like electrons inside a wire). They chose enzymes that would be able to function in bacteria, fungi, plants and animals, so that biological computers might be made with a wide variety of organisms, Bonnet said. ... The researchers have made their biological logic gates available to the public to encourage people to use and improve them."

      Technology moves at an ever dizzying faster pace...
      • thumb
        Mar 31 2013: What an interesting possibility.
        So, using the existing informational processes of DNA, we can enhance computing?
        It makes sense: since evolution has had millions of years to create complexity, why start from scratch? Could this suggest a future brain-computer symbiosis?
  • Mar 27 2013: Is it possible to create a human-like digital mind? That is what you ask.
    My answer is: no, never. That has nothing to do with processing speed and has everything to do with the nature of the data that has to be processed.
    The human mind has to handle four types of "data":
    1. physical data to keep the body working properly.
    2. physical calculations, like can I lift that box, jump that ditch?
    3. emotions, like love, hate, sorrow, self-respect. (Please note that physical pain is not an emotion but a body signal.)
    4. self-awareness.
    The first two can be handled by the brain, which is a digital computer; it works with pulses and what it lacks in speed is compensated by parallel processing.
    The last two cannot be handled digitally, because the "data" are abstractions, things that cannot be expressed in words, things that you cannot explain to someone else. Everybody has to experience those themselves to understand what they are.
    Because you cannot express them in words or mathematical expressions, you cannot produce coding for it and let it be handled by a digital computer.
    That is why all artificial intelligence projects have failed so far.
    I am convinced, on the basis of my experiences, that my emotions and self-awareness are handled by my soul.
    At this point you are on the edge of religion, paranormal experiences, whatever and here rational discussion ends.
    • thumb
      Mar 29 2013: Interesting point of view, but why can't we explain emotions in words and therefore write programming code?
  • thumb
    Mar 27 2013: It is possible to write software to simulate the human brain. However, no matter how perfect the simulation is, even if it displays fully developed cognitive faculties, the "mind" it has will still be an imitation of a human mind. It would seem conscious and self-aware only from an observer's frame of reference.
    • thumb
      Mar 27 2013: Let's say you were conversing with some software and despite all your questioning, from your frame of reference, you would believe you were talking with a real human consciousness. Just as you say. Would you have any moral problem destroying such software? What if the software started objecting to being shut down, pleading with you, baring its soul, talking about not wanting to die? And it seems entirely conscious and self-aware, as you say - talking about its past with great emotion, how much it loves certain people, the relationships it's built. No problem shutting it down?
      • thumb
        Mar 28 2013: Such an AI would not be self-aware in the same way I am self-aware, or any natural human is self-aware. It would only seem self-aware, but actually never is. The essence of its existence would be just like that of any other software, i.e. executing designed instructions in some processing unit. Knowing that it is software developed artificially, I would not act as if it were a real human.

        When it comes to destroying such software, unless it were necessary, I would never choose to do so. Not because it seems human, but because it is a marvelous piece of work that is worth preserving.
  • Mar 27 2013: What does 'consciousnesses' mean though? What currently separates human mental capacities from that of the modern PC?

    To me, the main distinction of 'life' is the ability to evolve and reprogram itself. Not just through evolution or selection, but rather through cognitive, willing self-change. You can come to a point where you start to disagree with what your biology wants you to do, disagree with social programming, and become fully aware of what everything is trying to make you do, and then alter it or change it (for example, just realizing how aggressive you might be, seeing the underlying causes of it, and then changing your behavior).

    I mean, who knows what the future will hold as well, and what science will allow us to change about ourselves?

    I think that's what a program would have to do in order to actually mimic consciousness. It has to have the capability to be aware of its own set of instructions, study them, and have some capacity to actually re-write itself if it wants to. When you think about that, and how we currently do that, it's pretty amazing. It's like an OS on a PC constantly re-writing itself and making its own changes/upgrades/etc.
    • thumb
      Mar 29 2013: In other words, our programming can continue to learn, adjust and modify because it is open-ended, and eventually, computer programming will likely be written the same way...
  • Mar 27 2013: can a submarine swim?
  • thumb
    Mar 26 2013: If all "I AM" is an accumulation of 'data we acquire over time through sensory inputs connecting us to our experiences, and from information communicated to us by others', someone please upload me into a "cloud" (for I may offer some useful historical data) and then pull my plug out of the socket, please.
  • thumb
    Mar 26 2013: Yes. Developers just need a reason to write the code for it.
    • thumb
      Mar 26 2013: And that's a good question. Why would we? What if the artificially created intelligence determined the human race was a threat to the planet or its own existence?
      • thumb
        Mar 27 2013: I agree, but I'm almost certain that humans are stupid enough to build something like that! They did it before, they'll do it again.
      • thumb
        Mar 28 2013: Unless designed to form such beliefs, why would it? If such an AI turned out to be so smart, how would it miss the thought that, without humans to perceive and interact with it, it would be just a bunch of electrons concentrated here and there?
  • Mar 25 2013: Check out the "Avartar" projects though, this Russian Scientist is asking the Ford 500 richest for funding in exchanging of giving them access to the technology first XD.
    • thumb
      Mar 25 2013: Hi Daniel. I see you are from Shanghai...one of my favorite cities on the planet! Can you post a link to the Avatar project you mentioned?
  • thumb
    Mar 25 2013: Perhaps the brain does perform 38 thousand trillion operations a second, but not through a central processor. The brain is an elaborate ecosystem of interactions we're only beginning to understand. If every bit on a disk had a life of its own, and they actually interacted with each other, then maybe we have a structure that begins to resemble the brain.

    What we've created with computers is instead a completely mechanical process. It's an amazing feat of science and engineering, but it's no more alive than a rock (arguably a rock might be more alive). And the irony is that it was created by intelligent design! Software is in no way comparable to consciousness, and I'll tell you why.

    Software is completely objective. It has no subjective nature, no qualia and it does not experience. In reality all it is is an elaborate pattern of current, displayed to us on a physical, objective medium. You ask if there's anything related to our experience that can't be encoded - I say yes, our experience! Computers do not experience or make decisions, they simply fall into place.

    What do you expect a digital mind to look like? Let's suppose we can encode every possible decision and every possible sensory input to create an artificial intelligence indistinguishable from a human's. What we would have would only please us from the outside. For us, this may mean nothing. We only see other minds from the outside. But if we could "see" inside, put ourselves in that computer's mind, we would find it barren of any thought, experience or perception.

    Long before we ever come close to this, we'll be using real brains in place of microchips (we already are)! Computer mechanics was an excellent exercise for us, but we're going to find screwing up biology to be way more interesting.
  • thumb
    Mar 24 2013: Actually, creating a human-like mind is impossible! A machine, no matter how sophisticated, is merely 0s and 1s in the end!
  • Gord G 50+

    • +1
    Mar 23 2013: " After all, we as human beings develop these abilities from data we acquire over time through sensory inputs connecting us to our experiences, and from information communicated to us by others." This is an assumption.

    "Think about it. Is there anything related to our experience - be it physical, historical or conceptual - that cannot be described in language, and therefore be input as executable data and programming to create a human-like digital mind?"

    Emotion (vivification of life through quale experience).
  • thumb
    Mar 22 2013: I remember the story of Solomon: "What do you want?" and he replied, "Wisdom." I put myself in the same shoes and answered that very question with "I want the ability to know what people are thinking." The giver of the gift confirmed to me that I already possess the ability to know what others are thinking. However, time, space and community will have to be neutral to allow the gift to manifest.
    Yes, it is not a strange occurrence to research and design a human-like digital mind. If you think it, it has already occurred in the universal realm; it is just a matter of synchronicity for it to manifest in the physical realm.
    A human-like digital mind is one way forward for man to witness that he is supernatural, and that mind is just a part of him that he can control, manipulate and even design a replica of if he so wishes. When we tap into the realm of imagination and new ideas pop up, it is a way of the universe confirming that we are due for an upgrade. By the way, we can be said to be discovering what already exists...not necessarily creating....
    • thumb
      Mar 25 2013: Thank you...very insightful.
      • thumb
        Mar 25 2013: You're welcome, Jeffrey,

        Great debate there... keep up the thoughts and actions
  • thumb
    Mar 21 2013: Intuition is going to be hard to program; it can only be learned.
    • thumb
      Mar 22 2013: They might not have to, Casey. Once they lock down thought-to-digital communication, then we will see the rapture that so many want. I've seen university vids where they are actively seeking to decode thought - early stages, but it's there. The design process jumps up a thousandfold; cue in the best minds in the field, bypass our egos, and I can see a lot of things once thought impossible becoming design probabilities. Fantasy and sci-fi? Yes, but closer than we think.
      • thumb
        Mar 22 2013: Good day Ken,

        Sorry, there are a lot, and I mean a lot, of things I believe in; the rapture is not one of them. I just can't make logical, rational sense out of it, nor can I make logical religious sense out of it - especially when you look at all religions throughout history.

        Check out what Shantanu wrote and my response to it. That makes more sense both in science and in religion. If man cannot be equal to man, we should not make self-aware machines until we can see them as equals as well. Not master/slave. Machines can be slaves, but not self-aware "beings".
        • thumb
          Mar 23 2013: I wish I could find the links, but I think it was one of those days when I just followed the trail and did not bookmark them. Trust me, it's a rapture that most humans want. Humans want to be able to communicate the full range of emotion, to share and receive; the written word, though beautiful in its descriptive use, pales compared to the possibility of direct, instant memory transfer. We have pushed better and faster communication technology throughout our history more than any other technology. I'm always looking forward, but the gradient steps of getting there are what I cannot see, so it is always a surprise what steps are taken to get there.

          Why do you think people are online? To communicate: transfer, receipt of transfer and acknowledgement of transfer to the communal whole, and to update. I share your views about non-organic designed intelligence, but how can man stop himself from always trying to step over each other? It is inherent, unless you have a medium that intersects this process. The one I've described, from a religious point of view, would be the false rapture.
      • thumb
      Mar 23 2013: Right, the only rapture that is likely to happen will be created by man, as we self-destruct.

      I think that men stepping on each other comes from this internal desire to be number one. Once we realize we are equal to all that is around us, then we might be able to find peace.
        • thumb
          Mar 23 2013: What I've described is from a personal point of view and I cannot prove it, but yes, that is how we are as males: even when we push them out of the nest it is in the hope that they get in with the group or person that will show them how to do it, or that they do it themselves, and we applaud it when they do - all for the cause of ensuring our genetic survival.

          In the Star Trek universe business was eliminated, but we as men love gaming, and we don't have the infrastructure in place to head towards this ideal world just yet. We as men seem to place value or worth on things that are, in reality, foolish.

          Take the phone I just bought. I didn't buy it for status but for the fact that it was and has been the only phone that has what I have been looking for, or close to it - a PC in my pocket - and I might be able to retire my big over-the-top desktop. The group I move in all had starry eyes when they first saw me with it, until I told them it was a cheap Chinese knockoff and that so long as you look after them they will do the job, though they had Samsung's license to produce them. I saw the same thing when some family members bought the iPhone 5s and I thought, "Weren't diamonds the ultimate possession?"
    • thumb
      Mar 25 2013: But didn't we have to evolve to evolve and learn it...? If you believe in evolution, we evolved from simple single cell organisms...
      • thumb
        Mar 25 2013: I actually think that is the point of evolution: we did have to evolve to evolve. As man we have evolved, otherwise technology would not exist; evolution has always been evolving. And yes, I do believe in evolution, and I can show it in rapid form.
        Take the birth of a child: if this is not evolution then I don't know what it is, because it certainly is not growth.

        http://www.youtube.com/watch?v=tvikQMfKPxM
  • thumb
    Mar 19 2013: We can do things without any reason. This is something that a computer will never know or understand.
    • thumb
      Mar 25 2013: Perhaps it's all in the programming design...?
  • Mar 18 2013: How would this computer interpret truth, justice and the American way? Lol
    A computer cannot take bribes like congress or lie to itself like humans.
    What a mess that would be, aye?