TED Conversations

Jeffrey Fadness


Are we on the brink of creating a human-like digital mind?

The human brain contains some 100 billion neurons, grouped into specialized functional zones and connected by a hundred trillion synapses - the neurons acting as individual data processing and storage units, and the synapses as the data transfer cabling that links all the processing units together.

Its processing ability has been estimated at more than 38 thousand trillion operations per second, with a memory capacity of about 3.6 million gigabytes - figures that rival a supercomputer. Equally impressive, the human brain is estimated to carry out this monumental computational task on the equivalent of a mere 20 watts of power; about the energy needed to light a single, dim light bulb. With today's technology, a supercomputer designed to deliver comparable capabilities would require roughly 100 megawatts (100 million watts) of power; enough to satisfy the power consumption needs of tens of thousands of households.
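Taking the quoted estimates at face value, the scale of that efficiency gap is easy to work out. The snippet below is just a back-of-the-envelope check using the numbers in this post, nothing more:

```python
# Rough sanity check using only the estimates quoted above.
brain_ops_per_sec = 38e15        # 38 thousand trillion operations per second
brain_watts = 20                 # estimated power budget of the brain
supercomputer_watts = 100e6      # ~100 MW for a machine of comparable throughput

brain_ops_per_watt = brain_ops_per_sec / brain_watts            # ~1.9e15 ops per watt
machine_ops_per_watt = brain_ops_per_sec / supercomputer_watts  # ~3.8e8 ops per watt
print(f"efficiency gap: roughly {brain_ops_per_watt / machine_ops_per_watt:,.0f}x")
# -> efficiency gap: roughly 5,000,000x
```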

An ambitious $1.3 billion project, the Human Brain Project, was recently announced in Europe to simulate a complete human brain in a supercomputer. A similar effort planned in the U.S. by the National Institutes of Health (NIH) is called the Brain Activity Map project.

Assuming we learn enough from these efforts to design a new architecture in computer processing which can approximate the ability of the human brain - what's to stop us from creating the cognitive faculties that enable consciousness, thinking, reasoning, perception, and judgement? After all, we as human beings develop these abilities from data we acquire over time through sensory inputs connecting us to our experiences, and from information communicated to us by others.

Think about it. Is there anything related to our experience - be it physical, historical or conceptual - that cannot be described in language, and therefore be input as executable data and programming to create a human-like digital mind?

  • Mar 23 2013: In the early days of the personal computer revolution I wrote a 256 byte program that had internal housekeeping but learned on its own to manage a 256 byte "environment" with 8 possible actions that had good and bad results. SAM, as I called it, began with random reactions, learned to prosper in its little world, forgot, and developed good and bad habits. His environment was purely electronic. He ran in a 4K RAM computer.

    His second iteration was in a plant watering robot. Play, concern for his plants, and answering simple questions about his condition were added to his repertoire. He ran in a 16K machine. He operated in two 256 byte environments.

    His last iteration included dreaming, recognizing people, vision, hearing, and touch with center of attention "focus" for all three. He was not a mobile robot. He learned everything about himself, his functions, and the electronic and physical world he was exposed to with no programming except his operating system.

    SAM was based on the behavioral contingencies theory of mind and development. Dreaming was to organize his learning. Unfortunately, at that time I was in a serious auto accident and lost SAM when my storage unit went into default.

    Bottom line: developing self-awareness does not require terabytes of storage or massive processing power.
    • Mar 23 2013: What you described is a fascinating experience. I also tend to agree with your assertion that self-awareness does not require massive computing power.

      Rather, it comes at a critical threshold of 'non-linearity'.
      Neural-net based programs (and I presume you may have used something similar) tend to show amazing personality traits as you keep adding layers of neurons.

      So as we keep adding layers to a neural net, we shall see signs of human-like intelligence.

      In a slightly lighter vein, great minds probably have a few additional layers of 'grey matter', and that makes all the difference.
      • Mar 25 2013: SAM was a very simple program. In his original form he was 256 bytes of code. His environment was a single byte random number generator. His reactions were one of 8 randomly chosen bytes that were XORed with the environment. I arbitrarily selected the upper nybble as the "good" result and the lower nybble as the negative. The results were combined, and the 5 bit result was placed in 1 of 8 256 byte blocks that represented the 8 reactions. If that environment was "hit" again, the program scanned all 8 reactions and chose the best. A random number was again used to get a value that was compared with the best reaction. If that number was greater than the best result, a new random reaction was chosen. If it was less, that best reaction was used again.

        Each time a given reaction was used, the top 3 bits of the data stored in the corresponding location were incremented. With each action loop, one of the 2K results was examined. If its top three bits were less than 111, the byte was reset, and SAM forgot that environment/reaction pair had ever happened.

        SAM works imprecisely. He develops "bad" habits as well as good. Over time, however, he always prospers. In the watering can application, real environments and reactions replaced the numerical operations. You can read about SAM in some of the last issues of Peek65 magazine. That publication also included a BASIC version of the original implementation of SAM. The articles also show how SAM became more complex using the same simple root routines of the original. There never were neural networks or other common AI tools. SAM was basically an implementation of behavioral psychology a la B. F. Skinner.
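        The byte-level scheme described above is concrete enough to sketch. The Python below is only a reconstruction of that loop, not SAM's original code; the rule for combining the good and bad nybbles into a 5-bit value and the exact exploration test are assumptions filling gaps in the description.

```python
import random

# Reconstruction sketch of the SAM loop described above (a Python stand-in
# for the original 256 bytes of machine code). The 5-bit scoring rule and
# the exploration test are guesses at details the description leaves open.

NUM_REACTIONS = 8
reactions = [random.randrange(256) for _ in range(NUM_REACTIONS)]  # 8 random reaction bytes
memory = [bytearray(256) for _ in range(NUM_REACTIONS)]            # 8 blocks of 256 bytes (the "2K results")

def score(env, reaction):
    """XOR reaction with environment; upper nybble is 'good', lower nybble is 'bad'."""
    result = env ^ reaction
    good, bad = result >> 4, result & 0x0F
    return good - bad + 15            # combined into a 5-bit value, 0..30 (assumed rule)

def step():
    env = random.randrange(256)       # single-byte random "environment"
    stored = [memory[r][env] & 0x1F for r in range(NUM_REACTIONS)]
    best = max(range(NUM_REACTIONS), key=lambda r: stored[r])
    if not any(stored) or random.randrange(32) > stored[best]:
        best = random.randrange(NUM_REACTIONS)        # unknown environment or unlucky draw: try a random reaction
    value = score(env, reactions[best])
    uses = min(0b111, (memory[best][env] >> 5) + 1)   # bump the 3-bit use counter
    memory[best][env] = (uses << 5) | value

def forget():
    """Each loop, examine one stored result; rarely reinforced pairs are erased."""
    r, env = random.randrange(NUM_REACTIONS), random.randrange(256)
    if (memory[r][env] >> 5) < 0b111:
        memory[r][env] = 0

for _ in range(10_000):
    step()
    forget()
```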
    • Mar 25 2013: I gather, then, that you believe it is possible?
      • Mar 25 2013: Much of the human mind, with its trillions of cells and synapses, is consumed with managing our physical body - its nerves, muscles, endocrine system, etc. A man-made computer does not have or need most of these constructs.

        If we are concerned only with the data processing functions of the mind - gathering information, storing, sorting, and interpreting it - computer programs already surpass our own abilities. However, these are merely overlays we have cleverly devised to perform specific functions. As such, they are extensions of ourselves overlaid on a complex tool.

        The idea of my SAM project was to have a complex tool that, within whatever sensory and responsive machinery one gave it, would on its own learn how to use that machinery to achieve its own goals and whatever directives it was given by the environment.

        People, for example, receive much of their directives from other people. The majority of our learning is imposed upon us by others. This includes most of the goals that direct our lives.

        The animal kingdom has a spectrum of creatures that range from totally instinctive programming (ROM based behavior) to largely general purpose programming. As tools, our "general purpose" computers are merely ROM based systems on which we load different fixed programs to carry out specific functions that serve our needs and wants. I call them ROM based, because we do not want the program to have its code self-modified by external data.

        The limitation (as a tool) of truly general purpose computers is that they learn to function over time. Thus, more complex creatures do not fully function at birth, but require longer periods of care and nurture as their complexity increases. Their direction and learning are imposed upon them by the environment. The primary guiding ROM of such creatures is described as the SRC (Stimulus, Response, Consequence) routine.

        The creature monitors its condition, reacts to input, evaluates its new condition, and learns accordingly.
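        That monitor/react/evaluate/learn cycle can be written as a short skeleton. The Python below is only an illustration of the SRC idea; the stimuli, responses, and consequence rule are placeholders, not anything taken from SAM.

```python
import random

# Bare-bones illustration of an SRC (Stimulus, Response, Consequence) routine.
# Everything concrete here (stimuli, actions, the consequence rule) is a placeholder.

class SRCAgent:
    def __init__(self, actions, explore=0.1, rate=0.2):
        self.actions = actions
        self.values = {}          # (stimulus, response) -> learned value
        self.explore = explore    # how often to try something other than the best known response
        self.rate = rate          # how quickly consequences reshape the stored values

    def react(self, stimulus):
        scored = [(self.values.get((stimulus, a), 0.0), a) for a in self.actions]
        best_value, best_action = max(scored)
        if best_value <= 0.0 or random.random() < self.explore:
            return random.choice(self.actions)
        return best_action

    def learn(self, stimulus, response, consequence):
        key = (stimulus, response)
        old = self.values.get(key, 0.0)
        self.values[key] = old + self.rate * (consequence - old)

def consequence_of(stimulus, response):
    """Placeholder environment: reward one 'correct' response per stimulus."""
    return 1.0 if response == stimulus % 3 else -1.0

agent = SRCAgent(actions=[0, 1, 2])
for _ in range(2000):
    stimulus = random.randrange(9)                    # monitor its condition / take input
    response = agent.react(stimulus)                  # react
    outcome = consequence_of(stimulus, response)      # evaluate its new condition
    agent.learn(stimulus, response, outcome)          # learn accordingly
```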
      • Mar 25 2013: The SRC is the basis of my SAM computer. If, for example, SAM were provided with a moveable extension such as an arm and grasping tool, his ROM would have to include code to manipulate the tool and accept sensory input from the tool, such as its position and the force it exerted on its environment.

        Use of the tool, however, was not programmed. One could externally, with a push on either of two buttons (one for a desirable response and one for a bad response; in complex SAMs, verbal feedback), train that hand to do whatever one wanted. This is how we impose direction on our children.
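        A tiny illustration of that two-button idea is sketched below; the moves, situations, and automated "trainer" are hypothetical stand-ins for a person at the good/bad buttons.

```python
import random

# Hypothetical two-button training: the low-level moves are fixed ("ROM"),
# but which move is made in each situation is shaped only by external
# good (+1) / bad (-1) feedback. Names here are illustrative, not from SAM.

MOVES = ["open", "close", "raise", "lower"]
SITUATIONS = ["object_near", "object_held"]
scores = {s: {m: 0 for m in MOVES} for s in SITUATIONS}

def trainer(situation, move):
    """Stands in for the person at the buttons, rewarding the behaviour they want."""
    wanted = {"object_near": "close", "object_held": "raise"}
    return 1 if wanted[situation] == move else -1

def choose(situation):
    best = max(scores[situation], key=scores[situation].get)
    if scores[situation][best] <= 0 or random.random() < 0.2:
        return random.choice(MOVES)               # still exploring
    return best

for _ in range(500):
    situation = random.choice(SITUATIONS)
    move = choose(situation)
    scores[situation][move] += trainer(situation, move)

print({s: max(m, key=m.get) for s, m in scores.items()})
# after training, each situation maps to the move the trainer rewarded
```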

        To fully answer your question and understand what SAM was, read the series of articles beginning with http://adzoe.org/sam1.html .
      • Apr 2 2013: Don
        I am not selling anything. If you visit the SAM site you will note that the articles appeared in a magazine decades ago. I have long since retired. I had a long career in data processing, designing algorithms in the 1950s when punch cards were in vogue. I learned programming on the PDP-8 and PDP-11 computers in machine language. Shortly after the introduction of the 6502 I developed external circuitry that used unused code bytes to allow that processor to have 64K of programming and 64K of data. I did sell the original program adapted to BASICA when that interpreter was introduced for the original IBM PC.

        The fact is, the SRC technology of operant conditioning is a perfect modality for computers to be self-taught. The most difficult problem is the provision and measurement of contingencies that allow the computer to self-develop. If you are literate in computer software and hardware and have a real interest in AI, I suggest you attempt to apply the techniques used to teach frogs to lift weights, sea animals to put on amazing shows at zoos and theme parks, and even insects to perform unexpected behaviors.
