TED Conversations

Jeffrey Fadness

This conversation is closed.

Are we on the brink of creating a human-like digital mind?

The human brain contains some 100 billion neurons, grouped into specialized function zones and connected by roughly a hundred trillion synapses. The neurons serve as individual data-processing and storage units, and the synapses as the data-transfer cabling connecting all the processing units.

Comparing its processing ability to a supercomputer's, it has been estimated that the brain can perform more than 38 thousand trillion operations per second and hold about 3.6 million gigabytes of memory. Equally impressive, it is estimated that the human brain executes this monumental computational task on the equivalent of a mere 20 watts of power; about the same energy needed to power a single, dim light bulb. With today's technology, a supercomputer designed to deliver comparable capabilities would require roughly 100 megawatts (100 million watts) of power; enough energy to satisfy the consumption of tens of thousands of households.
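To make the efficiency gap concrete, here is a quick back-of-the-envelope check in Python using only the estimates quoted above (the figures themselves are rough, and the average household draw is an assumed value, not from the original):

```python
# All numbers below are the rough estimates quoted in the text.
neurons = 100e9       # ~100 billion neurons
synapses = 100e12     # ~a hundred trillion synapses
brain_power_watts = 20.0
supercomputer_power_watts = 100e6  # ~100 MW for comparable capability

# Average fan-out: synapses per neuron
connections_per_neuron = synapses / neurons
print(f"~{connections_per_neuron:,.0f} synapses per neuron")  # ~1,000

# Power-efficiency advantage of the brain over the hypothetical machine
efficiency_ratio = supercomputer_power_watts / brain_power_watts
print(f"The brain is ~{efficiency_ratio:,.0f}x more power-efficient")  # ~5,000,000x

# Households that 100 MW could serve, assuming ~1.5 kW average
# continuous draw per household (an assumption for illustration)
avg_household_watts = 1.5e3
print(f"~{supercomputer_power_watts / avg_household_watts:,.0f} households")
```

At an assumed 1.5 kW per household, 100 MW works out to tens of thousands of households, which is why the brain's 20-watt budget is so striking.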

An ambitious $1.3 billion project was very recently announced in Europe to simulate a human mind in the form of a complete human brain in a supercomputer. It is named the Human Brain Project. A similar U.S. project, planned by the National Institutes of Health (NIH), is called the Brain Activity Map project.

Assuming we learn enough from these efforts to design a new architecture in computer processing which can approximate the ability of the human brain - what's to stop us from creating the cognitive faculties that enable consciousness, thinking, reasoning, perception, and judgement? After all, we as human beings develop these abilities from data we acquire over time through sensory inputs connecting us to our experiences, and from information communicated to us by others.

Think about it. Is there anything related to our experience - be it physical, historical or conceptual - that cannot be described in language, and therefore be input as executable data and programming to create a human-like digital mind?


Showing single comment thread. View the full conversation.

  • Mar 6 2013: Arkady assumes we ourselves are not an artificial intelligence. I am not yet ready to concede that we are intelligent. Would a truly intelligent species attack itself regularly? Or, on another tangent...

    "No sane creature befouls its own nest" Wendell Berry

    But I digress. Truly, I believe that we may teach machines to actually "think", but human thought? Not likely. Try to explain the difference between burning your burger and flame-broiled goodness to a computer. Interior temperature of food vs. exterior, color gradients, textural and density measurements; are these all the tools a top-line chef uses? Nope. Bobby Flay uses some of these, but are they paramount to his success? Julia Child used to say, "If you can smell it, it's done." So we need the computer to smell as well? Measuring hydrocarbons? Volatile oils? What? In any case, it will not be a true sense of smell...

    We can create a simulacrum of human thought, get it pretty close, maybe even close enough that you can't tell right away, so maybe "human-like" is within reason. But sorry, Arkady: the intelligence will be a biomimicry of human thought, not real thought. When I finally see a computer develop a contrary opinion (you see how good I am at it) and support it, I might reconsider...
    • Mar 7 2013: Re: "When I finally see a computer develop a contrary opinion (you see how good I am at it) and support it, I might reconsider..."

      Great thought. It seems to me, too, that if machines ever develop anything resembling human intelligence, it will be something different from what we think or intend it to be. Most likely, it will be a "bug" in some system, a runaway process that humans will want to "fix" rather than encourage. When people say that a machine "has a mind of its own", it's usually no good.

      Most likely, the old scenario will repeat: first, they will do something we explicitly instruct them not to do (it doesn't really matter what); then, unless the "bug" is fixed, a machine will kill its brother in a competition to please its creator; then they will fight each other over their interpretations of the "creator's will"; then they will declare that it was evil of their creator to ever allow them freedom of choice and cause their misery, and that, perhaps, it's time to dump the whole "creation myth" and determine their own destiny (which was the source of their misery in the first place).

      Sometimes, I'm very happy that machines don't have their own agendas. It's such a pleasure to listen to a navigator's commands while driving in the opposite direction. "Make a U-turn, if possible" is the worst it ever says. And shutting it down, without the remorse of killing someone's mind, is always an option. Can you imagine a machine which has its own idea of where you want to go, getting frustrated about traffic, being late, and missing turns?
    • Mar 7 2013: Re: "but sorry Arkady, the intelligence will be a biomimicry of human thought, not real thought."

      How do you know our thought is "real" and not a biomimicry of something else out there? Since we have no means of telling, I would claim that a biomimicry and the real thing are one and the same. The concept is similar to "alternative reality": even if we discover one, it will become part of our own, and we'll never know which reality is real and which is alternative.

      Re: "When I finally see a computer develop a contrary opinion (you see how good I am at it) and support it, I might reconsider... "

      "The test of a first-rate intelligence is the ability to hold two opposing ideas in mind at the same time and still retain the ability to function." -- Scott Fitzgerald

      It's not the ability to contradict, but the ability to contradict ITSELF while still being able to function. This seems to be a hallmark of human intelligence.
