TED Conversations

Howard Yee

Software Engineer @ Rubenstein Technology Group,

TEDCRED 50+

This conversation is closed.

Can technology replace human intelligence?

This week in my Bioelectricity class we learned about extracellular fields. One facet of the study of extracellular fields I find interesting is the determination of the field from a known source (AKA the Forward Problem) versus the determination of the source from a known field (AKA the Inverse Problem). Whereas the forward problem is straightforward and solutions can be obtained by direct calculation, the inverse problem is ill-posed: its solutions are not unique, so they require interpretation, which may be subjective. We may also automate that interpretation; such a mechanism is a form of AI. However, this facet of AI (document classification) is only the surface of the field.
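
A toy illustration of that non-uniqueness (my own sketch in Python/NumPy, not from the class; the single summing "sensor" is entirely made up): the forward map is a simple matrix multiply, but two different source configurations produce exactly the same measured field, and least squares just picks one of infinitely many consistent answers.

    import numpy as np

    # Toy "forward problem": one sensor reads the summed field of two sources.
    A = np.array([[1.0, 1.0]])

    sources_a = np.array([1.0, 0.0])  # all current from source 1
    sources_b = np.array([0.0, 1.0])  # all current from source 2

    # Forward: easy and unique -- both configurations give the same field.
    print(A @ sources_a)  # [1.]
    print(A @ sources_b)  # [1.]

    # Inverse: given the field [1.], recover the sources. Least squares
    # returns the minimum-norm answer, but it is one of infinitely many.
    recovered, *_ = np.linalg.lstsq(A, np.array([1.0]), rcond=None)
    print(recovered)  # [0.5 0.5]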

Damon Horowitz gave a recent presentation at TEDxSoMa called “Why machines need people”. In it, he says that AI can never approach the intelligence of humans. He gives examples of AI systems, like classification and summarization, and explains that those systems are simply “pattern matching” without any intelligence behind them. If true, perhaps the subjective interpretation of inverse problems is preferable to dumb classification. Through experience, the interpreters may have more insight than one can impart to an algorithm.

However, what Damon failed to mention is that those AI systems built to do small tasks are known as weak AI. There is a whole other field of study, strong AI, whose methods of creating intelligence go far beyond “pattern matching”. Proponents of strong AI believe that human intelligence can be replicated. Of course, we are a long way off from seeing human-level AI. What makes human intelligence hard to replicate? Can it be simulated? If we created a model of the human brain, would it be able to think?

Related Videos (not on Ted):
“Why Machines need People”
http://www.youtube.com/watch?v=1YdE-D_lSgI&feature=player_embedded

  • thumb
    Mar 8 2012: My Grandmother once told me "in a marriage, if both people are the same, one of them is redundant."

    Why all this emphasis on making a computer simulate human intelligence when there are so many intelligent humans up to the task, and computers are so good at things our brains are poor at?

    It is the symbiosis between humans and computers that drives the mutual evolution of both. For instance, everybody knows that the best chess player in the world is a computer. However, there are also chess competitions where humans compete against each other with the support of computer programs. Interestingly, such a team is better than either the best human or the best computer on its own. Furthermore, it is not necessarily the best chess player who makes the best teammate for a computer, and vice versa.

    Another thing with human intelligence is that humans are intelligent in different ways. Intelligence is measured across a variety of aptitudes, and people's strengths and weaknesses vary. This talk really drives the point home:

    http://www.ted.com/talks/lang/en/temple_grandin_the_world_needs_all_kinds_of_minds.html

    On the question of why human intelligence is so hard to simulate, part of it is the amount of input we receive throughout our life in the process of developing into an intelligent adult. It would be interesting to see if people are trying to make a program that simulates the learning capacity of a toddler.

    Computer intelligence will evolve in a way that is different from human intelligence. Though, I admit the exercise of trying to simulate human intelligence could lead to insight into both AI and human intelligence. Nevertheless, if we restrict our ideas about AI to human definitions of intelligence, we limit the potential of AI that will eventually exceed us.
  • thumb
    Mar 8 2012: I have written a number of blog posts on this and related questions. The topics below transition from where we are and why we're "not there yet" with creating humanlike AIs, through how to create non-intelligent machine learning systems that at least do useful things, through some views on what we should be doing to create humanlike intelligence, through to some musings on intelligence, entropy, the universe and everything. As far as the issue of consciousness, I try not to touch that with a 10-foot pole :-)

    "Watson's Jeopardy win, and a reality check on the future of AI":
    http://www.metalev.org/2011/02/reality-check-on-future-of-ai-and.html

    "Why we may not have intelligent computers by 2019":
    http://www.metalev.org/2010/12/why-we-may-not-have-intelligent.html

    "Machine intelligence: the earthmoving equipment of the information age, and the future of meaningful lives":
    http://www.metalev.org/2011/08/machine-intelligence-earthmoving.html

    "On hierarchical learning and building a brain":
    http://www.metalev.org/2011/08/on-hierarchical-learning-and-building.html

    "Life, Intelligence and the Second Law of Thermodynamics":
    http://www.metalev.org/2011/04/life-intelligence-and-second-law-of.html

    I hope some of this is at least thought-provoking!
    --Luke
    • thumb
      Mar 8 2012: So Luke, basically, without having read these links yet: what are your thoughts on a learning, thinking AI?
      • thumb
        Mar 8 2012: Ken -- most of my current thoughts are in the links above. Happy to discuss once you've had a chance to peruse them :-)
        • thumb
          Mar 8 2012: Ok, I've read the first two of them, and yeah, it goes along the same lines as what I thought, which is uneducated. I've kind of followed how Intel has stayed on course with Moore's law, but this year or last year it "ticked"? But there's no "tock" til two more years? And it's been a programmer's nightmare trying to develop for the multicore bottleneck?

          I know this is not what I asked, but I can't see today's chip development ever getting to what Kurzweil states unless a new element or design is introduced. Here's what I found trawling one day.

          http://scitechdaily.com/penn-researchers-build-a-circuit-with-light/

          It takes me a while to read things, as I tend to think them through and then reread them. It's slow, I know, but it works for me.
      • Comment deleted

        • Mar 9 2012: There isn't a multicore dilemma. There are just people who don't know electronics or how to write compilers, and people who don't use modern technology. They think this is a problem because they don't know any better.

          Computers moved past that point long ago. The computers used today for heavy calculations have thousands of cores. With common graphics cards you can do many thousands of calculations in parallel.

          Ever heard of OpenCL? Check it out; you ought to know it.

          Ever heard of high-frequency trading, for example? They now use FPGAs, and can calculate and respond to hundreds of thousands of parameters in parallel. The languages used to program these parallel systems have been around since the 1980s.

          And if you think that's difficult, there are computer languages that address that too. Check out Mitrion-C.

          This isn't a problem. Even programs like Word, Photoshop, and web browsers scale to thousands of compute units. Just insert a modern graphics card in your computer.
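
          A minimal sketch of the same fan-out idea, using plain Python multiprocessing rather than OpenCL or FPGAs (the heavy_calculation kernel here is just a made-up stand-in):

            from multiprocessing import Pool

            def heavy_calculation(x: int) -> int:
                # Stand-in kernel: replace with a real per-item computation.
                return sum(i * i for i in range(x))

            if __name__ == "__main__":
                inputs = list(range(10_000, 10_016))
                with Pool() as pool:  # one worker per CPU core by default
                    results = pool.map(heavy_calculation, inputs)
                print(results[:3])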
  • Mar 7 2012: I think you need to do a little more work on scoping this idea and conversation.

    What I think Damon Horowitz was describing is the "Chinese room" problem. I might be wrong, of course.

    Your question poses, for me, lots of questions, which I suppose is the point.

    Define what you mean by replacing humans. Replacing humans in what context? All decision making? Some decision making?

    Define what you mean by intelligence in this case.

    Are you in fact alluding to the difference between symbolic (typically human) and subsymbolic (as far as I'm aware, machines' best attempt) logic?

    If this is the case, are you in fact asking if we can make machines conscious, i.e. self-aware? That is a very different problem, and it leads into - for me at least - moral, ethical, and philosophical conversations.
    • thumb
      Mar 7 2012: I believe "the chinese room" problem questions where the consciousness lies. Is the conscious being the human in the room because he's aggregated the information the room contains? Or is it the data source (the book/computer) in the room? Or is it the entire room itself? This problem is very thought provoking because we can easily give humans a "consciousness" but given a situation where the knowledge does not come from a human directly, it's harder to admit whether other objects are conscious.

      When I say replace humans, I mean replace a human completely. Can we give it an artificial body so that, for all intents and purposes, it will act exactly like a human being, think it's alive, and assimilate perfectly with other human beings?
      • thumb
        Mar 7 2012: We could also consider how the "Chinese room" problem isolates an accepted intelligent being behind a deterministic program, and we find issue in whether the system shares the man's intelligence. This isolation suggests that even if we replicated every cell and function of a human being artificially, down to the atom, we could not, by our current means, determine one way or the other whether that artificial clone is intelligent. Until we have ways to determine that another human hosts the same intelligence we observe ourselves to host, we cannot determine the same of a nonhuman being.
      • Mar 8 2012: Mm, you could say that. I interpret the lower-level implication. The human in the room isn't the important point here. The "system" in the case of the Chinese room is the room and all it contains. This scenario teaches me that the system does not understand what it's doing; it simply responds to an input with a predetermined output. Now, you could argue that human behaviour and intelligence is simply billions of cross-talking Chinese rooms, and that human intelligence is therefore an emergent behaviour, but I think this is hard to prove.
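
        A minimal sketch of that "predetermined output" picture (the rulebook entries here are invented for illustration): the room is just a lookup table, and nothing in it understands the symbols it shuffles.

          # The room's rulebook: each input symbol maps to a fixed reply.
          RULEBOOK = {
              "你好吗?": "我很好,谢谢。",   # "How are you?" -> "I'm fine, thanks."
              "你是谁?": "我是一个房间。",  # "Who are you?" -> "I am a room."
          }

          def chinese_room(message: str) -> str:
              # Pure pattern matching: no understanding anywhere in the system.
              return RULEBOOK.get(message, "请再说一遍。")  # "Please repeat that."

          print(chinese_room("你好吗?"))  # the room "answers" without understanding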
  • Mar 11 2012: From an ethical point of view, I think technological machines or devices lack a lot that is desirable in human intelligence. We often tend to forget that man is not only a deductively processing brain, but also a heart that loves, cares, and feels. For instance, machines are stuck when confronted with a new set of data they were not programmed for. My position is that technology is a useful tool, but it cannot do without human intelligence. Human intelligence can transcend itself to reach new heights.
    • thumb
      Mar 11 2012: Agreed - but couldn't we teach technology to care and feel? Two questions go with this:

      1.) Do we really want this? Don't we lose our uniqueness then? It is the same with animals - we believe that man can do specific things that animals cannot do... But maybe the dolphin in the zoo is playing with us - instead of us with him?

      2.) If we want it, is it still really good technology? Imagine a car that is too afraid to drive at night, or a gun refusing to fire out of empathy for the enemy... A strange question, I know - but a logical one once you want to make technology more intelligent and more human.
      • Mar 14 2012: I am happy with the way you answered. However, I have this feeling that you are afraid to concede that technology cannot be given such feelings... Mankind is able to play with the material aspect of nature, but aspects beyond this materiality have proven to be out of reach. For instance, if you like movies, you have probably watched Bourne: scientists tried to control a man's loyalty and obedience... and later had to face the only result reached so far, that these feelings are metaphysical. I am contending that maybe one day we will achieve that, but for the time being... no illusions there.
        The other thing you said was about our uniqueness as human beings. Do not worry about that... I usually argue that the scientific world has repressed so much of our uniqueness that we think it is the only thing we may lose. I hope you do not think that, if given feelings, machines will be stronger than we are. Maybe, maybe not. We are more than what is seen and measured. Our uniqueness is beyond the current measures.
      • Mar 14 2012: Interesting - technology that feels might not necessarily be helpful. I think the hardest part of replicating human intelligence exactly is finding a way to copy the emotions and seemingly irrational thoughts of humans. Sometimes I decide to sleep in instead of doing work that I know I have. I can see AI making logical decisions and being more efficient than humans at some tasks, but I find it hard to imagine them having emotional responses. I am unaware of how exactly our feelings are produced by our brain, so modeling this might be easier than I know.
        Also, did you know there was a TED conversation similar to this: http://www.ted.com/conversations/1528/artificial_intelligence_will_s.html
  • Mar 10 2012: Some of the posts within this thread refer to the P-Zombie (philosophical zombie), if not by name, then at least in concept.

    The p-zombie is essentially an advanced automaton that acts identically to a human being but possesses no consciousness.

    My argument is that a p-zombie is an inherently contradictory idea if you have an inkling of how our own consciousness works.

    Essentially, our actions - the things we do, the things we are capable of - betray some degree of our internal minds. We show capacity for learning, cognition, and information processing... because we do have those capacities. We can't show them if we don't have them. And we can't show the emergent results of massively parallel, modular, auto-associative, probabilistic brain functions if we don't have those things.

    And that... is pretty much the nature of our consciousness. The commingling of all these complex signal-processing units, their iterative interactions, in the timely manner that they occur, relative to each other as they are... can only result in the sensation of consciousness that we experience.

    That said, if you suppose that we had the ability to capture all cases, present and future, and provide an output for each unique case (of which there would be infinitely many), then I suppose the idea of the p-zombie would have some traction. Like a Chinese room, taking in one input, then throwing out an output.

    But it would be more difficult to account for all the cases than it would be for an auto-associative, adaptive learning system... like the brain... to emerge through natural forces. It would also be less efficient for us to design an 'intelligent' system that way than to design one that could adaptively learn things.

    That said... will the nature of machine consciousness... be even similar to human consciousness? Doubtful. Unless you were to replicate the critical conditions that make us 'human', including, but not limited to, processing speed.
    • thumb
      Mar 11 2012: Huh. That's a good point, George. Never thought about that.
  • thumb
    Mar 9 2012: Not to spam the forum with lots of my rambling and randomly researched material, but there's a field of science whose job is to apply quantum theory to cognitive phenomena:

    http://en.wikipedia.org/wiki/Quantum_cognition

    All of this just reinforces the idea that the universe is an amazingly vast, intricately detailed place! With so many things still hidden from us, in plain sight, it makes me pause and ask the question: What if we *are* just matter? And then, taking a look at the way MATTER acts, we could be *just* matter and STILL have things exist beyond what we've seen so far or even what we *can* see here *because* of as yet unknown rules and interactions that science could determine.

    Take for instance, notions within the scientific community that the universe is just a hologram:

    http://www.wired.com/wiredscience/2010/10/holometer-universe-resolution/

    http://www.universetoday.com/59921/holographic-universe/

    or the idea of encoding information on the surface of a black hole:

    http://www.theory.caltech.edu/people/preskill/blackhole_bet.html (Summary of article---"Your most precious theories have been (or must be) altered. Pray I do not alter them further.")

    In fact, as we learn more about ourselves and the nature of the universe, thinking machines become downright plausible, and the idea of referring to tools and machines as "just matter" almost does an injustice to the mystery that still exists in plain ol' 3-Dimensional space.

    So that begs the question---if we invented machines that can do these things as well as we do and outperform us in the only tests we know to determine consciousness, why wouldn't they be conscious as well?

    But I am in emphatic agreement with you about scrutinizing these things and trying to determine if they truly encapsulate what it means to be conscious! Constant scrutiny and a healthy skepticism of ALL things is important!!!
    • Mar 9 2012: Hey, Logan. You sound like a deep thinker. I like it! Here are some ideas for you to ponder; let me know what you think.

      1. The human brain, the seat of consciousness, is too big, too warm, and too wet for any meaningful quantum phenomenon to contribute significantly to the phenomenon of consciousness.

      2. Quantum matter can be transformed into energy; that's where it comes from and that's where it goes. But yeah, take this line of reasoning all the way back to the big bang... where did the big bang come from?

      3. Quantum computing is coming along... have you heard of biological computing? I have heard some things along the lines that DNA uses a four-symbol (quaternary) coding scheme, while computers have traditionally used a two-symbol (binary) scheme. I think there is some sort of research being done into organic computing, using more than two symbols for information coding. (See the sketch after this list.)

      4. We (human beings) are just matter with special properties.

      5. Are humans truly conscious? Are dogs? Are ants? What is the definition of consciousness? How do you personally define consciousness?
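
      On point 3, a minimal sketch of the coding-scheme contrast (a toy encoding I made up, not how DNA actually stores biological information): with four symbols you carry two bits per symbol, so the same number needs roughly half as many digits.

        BASES = "ACGT"  # digits 0..3 of a quaternary code

        def to_binary(n: int) -> str:
            return bin(n)[2:]

        def to_dna(n: int) -> str:
            # Write n in base 4, one 'base' per quaternary digit.
            digits = ""
            while n:
                n, r = divmod(n, 4)
                digits = BASES[r] + digits
            return digits or "A"

        print(to_binary(2012))  # 11111011100 -- 11 binary digits
        print(to_dna(2012))     # CTTCTA      -- 6 quaternary digits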

      Thanks for posting and reading!

      Chase
      • thumb
        Mar 9 2012: Howdy, Chase! Well, I try, but no matter how deep I think, it never quite seems to me to be enough. Does it ever? :)

        1. Yeah, I'd come across a few things stating something to that effect. But take a look at this:

        http://www.bbc.co.uk/news/science-environment-12827893

        It's an article about the way we smell things, and how we are actually absorbing quanta and processing them as smells. One of our basic five senses could be quanta-dependent or even quanta-based.

        I personally found this quite intriguing when I heard of it! Let us not forget how intimately linked with memory our sense of smell is, and the subsequent implication of quanta being at least tangentially related to *that* function, which is itself intimately tied to studies of intelligence in human beings! A long chain of dependencies, any one of which further research could crack or change, to be sure - but to a deep thinker, perhaps everything looks like a deep complexity, and I am making something of nothing. :)

        2. Yeah, you kinda answered your own question there. To answer where the energy for the Big Bang came from: there is a theory floating around that multiple universes exist, one right next to the other, and that, undulating and vibrating as membrane-like entities with 3-dimensional information encoded on them are wont to do, two universes collided---and the energy imparted by that collision started the ever-expanding universe we see around us today.

        http://io9.com/5714803/does-our-universe-show-bruises-where-it-collided-with-other-universes
        http://discovermagazine.com/2009/oct/04-will-our-universe-collide-with-neighboring-one
        http://www.cosmosmagazine.com/news/3151/something-big-found-beyond-edge-universe

        3. Yeah, I've heard about it! Seen a couple amazing things, too!

        http://singularityhub.com/2010/10/06/videos-of-robot-controlled-by-rat-brain-amazing-technology-still-moving-forward/

        Kinda creeps me out, to be honest, but in a good way
        • Mar 10 2012: A quantum is simply the minimal physical entity involved in any interaction, right? So everything works on quanta, because a minimal physical entity (and usually more) is involved in every interaction.

          Retinal cells (which are actually extensions of the brain, and the only part of the brain you can see from the outside: the eye doctor, when looking at your retina, is looking at neural tissue emanating from your brain) can detect and respond to a single quantum of light: a photon.

          In my current line of thinking, quantum mechanics will not explain consciousness. The brain is too big, wet, hot for quantum phenomena to contribute to brain processes. I'm no expert here, but it makes sense to me.

          And a point on agnosticism: agnostics don't just say we don't know; they say we can't know...

          What I do think will explain consciousness is systems theory. The functional unit of the brain is the neuron. Neurons fire on an all-or-none principle (this is binary: either 0 or 1). But the language of the brain is in neuronal firing patterns, so the brain isn't binary; I'm not sure what it is in this respect.

          The secret of consciousness (in my best guess opinion) lies in the brain system (the flow of information) being able to turn back on itself. The brain is able to monitor itself and the body that houses it in "real time."

          Have you heard of Douglas Hofstadter? He wrote a book titled "I Am a Strange Loop." Hofstadter's idea of strange loops is interesting, and I believe it may have some implications for the phenomenon of consciousness.

          Obviously I could be way off.
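
          A minimal sketch of that all-or-none behavior (a leaky integrate-and-fire toy with made-up constants, nothing like real neuronal biophysics): input accumulates and leaks away, and the cell either crosses threshold and spikes or stays silent, yet the *pattern* of 0s and 1s carries the information.

            THRESHOLD = 1.0
            LEAK = 0.9  # fraction of membrane potential kept each step

            def simulate(inputs):
                v, spikes = 0.0, []
                for current in inputs:
                    v = v * LEAK + current
                    if v >= THRESHOLD:   # all-or-none: a spike is 1, never 0.4
                        spikes.append(1)
                        v = 0.0          # reset after firing
                    else:
                        spikes.append(0)
                return spikes

            print(simulate([0.3, 0.3, 0.3, 0.3, 0.0, 0.9, 0.9]))  # [0, 0, 0, 1, 0, 0, 1]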
        • thumb
          Mar 11 2012: To Chase - then you know Gödel, Escher, Bach as well?

          It is no accident that many physicists turn philosophers - at least in Germany this was the case during my student years. The transformation of the material into the immaterial - from quanta or whatever to consciousness or to religion or to values - is and remains a mystery despite all research.

          I think the Heisenberg Uncertainty Principle explains very well why we can never answer the question: by explaining you inevitably interact, watching is interacting - and so you change the object... so the immaterial watching influences the material watched, and vice versa.

          Explanation at that point turns into a self-referential cycle - with the self being more than the individual. For me the Nobel Prize-winning "game theory" (Prof. Selten) offers some insights into how these explanation cycles work.
      • thumb
        Mar 9 2012: In addition to the quantum computing idea, I think I posted articles about quantum computing and quantum data in a response somewhere else in this forum, in the sister posts to the original post here. But here's one of the links, which is kinda cool:

        http://www.dwavesys.com/en/technology.html

        4. Yeah, maybe we are---but I'm kinda agnostic. "Don't know if we are, don't know if we aren't" etc. Just wanna see the proof and judge for myself. :) We get a lot of people who conjecture, and postulate, and even argue, but very little *real* proof.

        5. See number 4, hehehe. But I love reading about whatever anybody finds!
      • thumb
        Mar 10 2012: @Chase: Well, to be perfectly honest, I try to soft-soap my agnosticism by saying "I don't know." Science and logic-minded folk find that more palatable than saying "NO ONE CAN KNOW", because that assertion almost makes it seem like they shouldn't be doing exactly the sort of things they are doing (when I feel they most definitely *should* be doing their thing). It seems to make them feel like they're spinning their wheels. And religion and God-fearing folk find it more palatable because saying "I don't know" still leaves them with the possibility that they/God knows. Which, as far as I know, *might* be true.

        Just being polite, is all. :)

        As for what a quantum is---yes, your definition is correct. But my point was that there is a quantum interaction going on with smell. If such interactions occur in a hot, wet environment like the nose, why not the brain, even if in a way we do not yet understand? And from what I've been able to gather (which is far from conclusive), the reason so few olfactory receptors in our noses are able to detect such a wide variety of smells is an effect known as "quantum tunneling."

        When a molecule binds to a receptor site, an electron is transferred from the molecule to the receptor (feel free, at any point, to correct me if I am not accurately describing the process!), activating the receptor and causing the molecule to vibrate in a way specific to that molecule---subtle differences in vibration that our brains pick up on. If our noses can do that, why wouldn't our brains be able to do other things with quantum states? Even better, take a look at this article:

        http://www.abovetopsecret.com/forum/thread714014/pg1

        This article suggests that DNA can act as "a spin filter" and can distinguish between two quantum states. I'm not entirely sure what all of that means, but it would seem that quantum interactions happen more often, and in more ways, than we know.
        • thumb
          Mar 11 2012: I'm not getting religious, Logan, but it says in Revelation that even his image will condemn you, so for me it's a foregone conclusion that someone writes a bloody good bot or AI is achieved at some date. I know one can read anything into that sentence, but it stood out as peculiar because it didn't fit.

          I think most researchers say "quantum aspects" when they talk about neuron fuzziness, which just means we're still on the journey to figuring the brain out. I don't think one should look at the brain as systems and subsystems, because of those people born with only 30% brain matter who are nevertheless fully functional people with no differences in any way - or was it 2%? I can't remember.
      • thumb
        Mar 10 2012: @Chase: All I'm saying is that there is plenty of "reasonable doubt" as far as the role of quantum mechanics in the functions of consciousness that we shouldn't rule anything out yet without further research.

        And no! I haven't heard of Douglas Hofstadter or his book, but it sounds interesting! A quick Google/Wikipedia search reveals a man who believes self-referential systems are the primary causes of consciousness. Sweet! Sounds like a good idea.

        A constantly self-referential system, combined with self-reinforcing neural networks (where each newly acquired memory affects memories already formed), combined with a nearly infinite array of contexts to operate in, sounds sufficiently complex to describe all the many ways people act. I will definitely have to investigate further!
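
        A minimal sketch of that "new memories reshape old ones" idea (a Hebbian outer-product toy with arbitrary numbers, not a claim about real cortex): every stored pattern updates the same weight matrix that encodes the earlier patterns.

          import numpy as np

          weights = np.zeros((4, 4))  # connections among 4 model neurons

          def store(pattern, lr=0.5):
              # Hebb's rule: units that fire together wire together. Each new
              # pattern nudges the shared weights, subtly reshaping old memories.
              p = np.asarray(pattern, dtype=float)
              weights[:] += lr * np.outer(p, p)
              np.fill_diagonal(weights, 0.0)

          store([1, 1, 0, 0])
          store([0, 1, 1, 0])  # overlaps the first memory at neuron 1
          print(weights)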

        And yeah, you've got a point about the strictly on-or-off state of neurons. There's a threshold to meet, though: in biological systems, a neuron must receive a certain level of excitation before it fires. It's this extremely variable threshold that might account for the less-than-linear processes of the brain. Heard of astrocytes?

        http://en.wikipedia.org/wiki/Astrocytes

        These cells are responsible for a lot of things, chief among them helping facilitate the firing of neurons; one astrocyte can connect to many thousands or millions of neurons, either inhibiting or stimulating neuronal transmissions via certain chemical reactions in the brain. One more system that must be accounted for in some way.

        OH! As for quantum interactions happening to macro-scale objects, well. . . Just you take a look at this.

        http://www.ted.com/talks/lang/en/aaron_o_connell_making_sense_of_a_visible_quantum_object.html

        Sure, they had to super-cool the material---but macro-scale quantum interactions are possible. Who's to say they don't have some other property that makes them viable at room temperature, like with DNA?
        • Mar 11 2012: Hey. Yep, I've definitely heard of astrocytes and other glial cells; I quite enjoy neuroscience! Scientists once thought that glia simply and only held the brain's neurons in place. But we now know that glia assist with function as well as structure.

          As for the quantum phenomena thing. Quantum phenomena only happen at specific scales of size and temp, right? Again, I'm not a theoretical physicist and I am not a biologist; I could certainly be wrong in my assumptions. But I'm sticking with it: the human body/brain is way too big and way too warm to have quantum phenomena significantly affecting brain/mind processes such as consciousness. Even a receptor protein in the brain is too big for quantum phenomena (I believe). And the brain is way too crowded and hot (lots of kinetic energy (motion)) going on. Quantum phenomena may exist for nanoseconds in isolated parts of the brain (maybe). But any spreading of wave function, tunneling, will collapse into the most probabilistic single state immediately. Brain processes typically happen at the millisecond or longer time scale (I believe).

          I remain in my position: the human brain is too big, too hot, and too wet (molecules and elementary particles continually interacting with each other) to have quantum phenomena playing any significant role in brain function (of course, I could be wrong).

          I believe research into self-referential systems is a better path towards understanding and replicating human consciousness.
      • thumb
        Mar 11 2012: Well, I'm not gonna try to convince you if you've already decided it can't happen. But the quantum explanation of smell receptors, while still hotly debated and contested, does have a pretty loyal following.

        They're also discussing the role of quantum mechanics in photosynthesis. Another hot, wet environment.

        http://blogs.discovermagazine.com/cosmicvariance/2011/03/25/quantum-smell/
        • Mar 11 2012: Hey! Yeah, I'm not saying it can't happen. I'm saying the probabilities are so small that, personally, I don't believe it is happening.

          My reasoning again: nothing is in isolation in the human body. Even a single electron near a receptor is interacting with that receptor. Any quantum phenomena will instantly "evaporate" due to constant interactions. Even if the electron is moving just close enough to the receptor so as not to classically interact with it, the electron (and any superposition) is constantly being bombarded by extracellular (and intracellular) fluid that contains particles, some as small as ions. I just don't see there being enough time for quantum phenomena to play any role beyond the deep role they play in all matter.

          Consciousness is a product of the overall brain, right? It doesn't seem to come from any specific area (I could be wrong here, but I believe this is the case). And the overall brain is a relatively large, wet, and hot object, with no parts in the vacuum-style isolation or near-zero temperatures required for quantum phenomena to do their weird quantum thing.

          As far as the smell receptor thing goes, I don't believe smell receptors are isolated enough either. There are air and biological structures constantly interacting with odorants and with the receptors of the olfactory tissues.

          There's just too much going on in a relatively extremely chaotic environment for any non-classical phenomenon to emerge.

          But again, I'm no expert. I just don't see it being possible. How do you propose this stuff works? Don't objects have to be in unimaginably cold, extremely small, and/or extremely isolated states to perform quantum phenomena? How do the researchers in the articles you posted know that quantum phenomena are taking place? Isn't their proposed idea simply a hypothesis?
      • thumb
        Mar 12 2012: Well, they've done extensive testing, from what I can gather, but nothing definitively conclusive. They've come close, though. Check out the wikipedia article on it:

        http://en.wikipedia.org/wiki/Vibration_theory_of_olfaction

        But if you're looking for more *solid* proof of quantum mechanics at work in hot, wet environments on the macro-scale, further research into photosynthesis has proven rather fruitful:

        http://www.sciencedaily.com/releases/2010/02/100203131356.htm

        Of course, it won't satisfy your wish to be "certain". They had to cool the algae down by a lot in order to even be able to track the way the energy moved through the protein, which leaves their results ambiguous at best. But I would hazard to say that, to even notice such an effect operating on a protein to begin with, there must be *something* in it that makes such a design practical.

        The following article elaborates a bit better on which quantum principle is being employed; it's an application of "quantum computing" according to the article. Haven't actually explored more of this phenomenon (it's mid-semester for me, so I've been rather busy with studying) but, the idea that larger scale applications of certain aspects of quantum mechanics is not only *possible* but occurs naturally seems to be a semi-legit, if not just merely tolerated, one:

        http://www.scientificamerican.com/article.cfm?id=when-it-comes-to-photosynthesis-plants-perform-quantum-computation

        All in all, it seems like, IF something that has to do with the conversion of light into energy can use a quantum computing principle to find the most efficient route by making the light go *all* the routes until it finds the most efficient one, why couldn't something similar happen in the brain?

        At the very least, it warrants further research; the self-referential loop theory of consciousness, while a worthwhile research pursuit in its own right, is no more or less worthwhile than this appears to be right now.
      • thumb
        Mar 12 2012: As for which parts of the brain do what, according to this diagram, the frontal lobe is in control of consciousness:

        http://science.education.nih.gov/supplements/nih2/addiction/activities/lesson1_brainparts.htm
        • Mar 12 2012: I believe consciousness is largely thought to be a distributed process rather than a localized one. But yeah, I think there is evidence to suggest that some processes of consciousness are disturbed when there is damage to the frontal lobe. I believe the sense of self is disturbed in some ways. Anyway...

          I think the self-referential loop approach is more worthwhile because it doesn't first have to prove that its mechanism exists. There are many questions about the very existence of quantum phenomena in the body, or at macro, warm scales in general.

          We can already see that self-referential systems exist in the brain, as the very act of thinking changes the way we think. We can think about something, realize an insight about it, and change the way we think. Thinking changed thinking; it referenced itself. We just need to put time and energy into tracing out the incredibly complex self-referential system that is the human brain (or at least that's how I see it).

          And a bit more about the QM thing. Again, I think my brain (my brain referencing itself) is a classical object, just like an apple. My brain is bigger and hotter than an apple. Do you think it is possible to have light quantumly interact with an apple? I know people are saying that MAYBE photosynthesis takes advantage of QM, but that's a big maybe. No one is able to demonstrate it, right? Or did I miss something in the articles you provided?

          Sorry I'm talking in circles, but my point is: the brain is a classical object, just like apples and my entire body (which my brain constantly interacts with, instantly collapsing any superimposed spreading of wave functions). The classical-object brain is the seat of consciousness. We need a classical approach to understanding consciousness.
      • thumb
        Mar 12 2012: There're a lot of people who think a lot of things. People get wrapped up in their own idea of the way things work to the point that there is no longer room for other paradigms.

        Take for instance the proliferation of computer programming languages.

        One computer programming language is Turing complete. But rather than just trying to improve that one language to implement whatever behavior or design he wishes, the programmer gets all hot and bothered with the language in general and decides to redesign a language from the ground up, custom-tailored to the way he thinks.

        Now there are two Turing complete languages, and rather than trying to improve one language or the other to implement whatever behavior or design he wishes, the programmer gets all hot and bothered with the language in general and decides to design a language that takes the elements he likes from both languages and adds additional functionality that better supports the design he wishes to implement.

        Now there're three Turing complete languages, any one of which will do whatever it was you wanted to do to begin with.

        What I'm getting at here is that just because QM doesn't necessarily jibe with *your* paradigm of the brain, humanity, and the world in general doesn't make it any more or less worthwhile. And if self-referential loops *truly* explained the concept of consciousness better, wouldn't we, I dunno, have conscious machines by now?

        The whole programming thing's been around for the greater part of two centuries (if you count Ada Lovelace's design as the first program), and recursion has been around since AT LEAST the invention of Lisp in the late 1950s (with Prolog following in the early 1970s). We've had 60-some-odd years to perfect the use of self-referential loops.

        Just saying it might be time to consider other alternatives, no matter how zany they may appear. Test the bajeezus out of them, and if something shakes loose, all the better.
      • thumb
        Mar 12 2012: It's the "Hammer" problem. To someone with a hammer, the whole world looks like a nail. Your hammer is self-referential loops. And there's nothing wrong with that! When I got a problem that needs that particular hammer, I'd much rather go to a dude who specializes in the use of that particular tool. Assuming my problem can be solved by that tool.

        The problem is that people and the universe are either so complex physiologically with so many different moving parts, or BOTH complex and vast, that, when you add even a small amount of fuzziness about what all of these things are to begin with, they begin to serve as Rorschach tests, and people eventually just see the things they want to see in them.

        I couldn't care less *either way*; if consciousness can be entirely explained and recreated using self-referential loops, awesome! Let's see the AI you've developed! That would make my millennium!

        Insanity is doing the same thing over and over again expecting different results. "What's wrong with your AI, Jim?" "Oh, it's not working." "Have you tried using a loop yet?" "Yeah, it doesn't seem to be working." "Well, you need a bigger loop." "Well, I've kinda maxed out my memory, computing cycles, and bus speed. I can't make it any bigger." "Ah, well, then, you need more of them."

        We've given it a good half-century, and while there may be some good life yet left in it, let's branch out a bit and pursue other avenues. Maybe we'll discover something that helps us understand loops better, if nothing else.
      • thumb
        Mar 13 2012: But I digress; we have focused so heartily on whether or not QM is a viable candidate that we have treated it as a foregone conclusion that self-referential loops do indeed give rise to conscious thought. Is that necessarily accurate? I asked Google, and here's what I found.

        My first search brought me to this paper:

        http://books.google.com/books?id=Ys5PNmv_waUC&pg=PA139&lpg=PA139&dq=Self-referential+loops+experiments+artificial+intelligence&source=bl&ots=xsyLJQ0oX9&sig=u_3VMvmGWgQ586t7YHESdoUsoz4&hl=en&sa=X&ei=uLReT4OsJOjZ0QHi--GsBw&ved=0CDIQ6AEwAw#v=onepage&q&f=false

        which introduced the concept of Dynamical Systems theory, a search of the term which brought me here:

        http://en.wikipedia.org/wiki/Dynamical_systems_theory

        which begins to outline what DST is and what it is used for, primarily the studies of systems that are "mechanical in nature" such as "Planetary orbits as well as the behaviour of electronic circuits. . . " (I hope you'll forgive the excessive use of direct quotes from the various articles; I would hate to commit plagiarism, and I am relatively unfamiliar with the subject matter at hand).

        Of particular interest in that article is the "Related Fields" section, under the "Chaos Theory" heading, part of which reads: "Chaos theory describes the behavior of certain dynamical systems – that is, systems whose state evolves with time – that may exhibit dynamics that are highly sensitive to initial conditions." If ever I wanted a description to apply to the study of the mechanical nature of the mind, I would be hard-pressed to find a better one.

        Getting rather curious, I clicked into the "Chaos Theory" tab, and I found, about half-way through, that my eyes kinda glazed over, because I hadn't yet found anything directly pertaining to the self-referential nature of consciousness. So I refined my search to "Chaos Theory as it pertains to self-referential loops" which yielded many things, including:

        http://paradox-point.blogspot.com/
      • thumb
        Mar 13 2012: And another thing of rather great interest was this:

        http://appraisercentral.com/research/Chaos%20Theory.htm

        which was a great introduction to the history of Chaos Theory, and introduces the most basic concept that accurately reflects the field: The Butterfly Effect.

        According to the article: "The butterfly effect states that the flapping of a butterfly’s wings in Hong Kong can change the weather in New York. It means that a miniscule change in the initial conditions of a system, in this case the weather, is magnified greatly in the end conditions of that same system."

        Intriguing! Imagine this: the sexual act has just occurred, and an egg has just been fertilized. The first cell begins to divide, and so on and so forth, bringing with it all that that entails: increasing body size, identifiable organs, and, slowly, consciousness. If you imagine each division of cells as one iteration of the loop, then each subsequent loop is like a snapshot of every dynamical system present within that life! Not just its awareness, but everything that awareness is hooked into: vision, hearing, smell, touch, and taste. And a small, minute change in that first cellular division (maybe the egg being just a smidge higher or lower in the uterus) could drastically affect every subsequent iteration of the system!

        So what, then, does this mean for our self-referential loop model? It means that we are all just---chaos. That there is an order by which we *unfold*, but that order is, by definition, a series of chaotic events. "Chaos works in order and within all order there is chaos." By this definition, market trends would be nigh impossible to predict! And yet you can kinda guess that, if the cost of corn goes down, people will probably reduce supply in order to drive it back up. Just human nature. An order in the chaos that creates us.

        The problem, however, is scale. You may have a pattern, but you never know to what degree it will manifest itself with any given man.
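
        A minimal sketch of that sensitivity (the logistic map in its chaotic regime, with arbitrarily chosen numbers): two starting conditions differing by one part in a billion stay close for a while, then diverge completely.

          r = 4.0  # chaotic regime of the logistic map x -> r * x * (1 - x)

          def iterate(x, steps):
              for _ in range(steps):
                  x = r * x * (1 - x)
              return x

          a, b = 0.123456789, 0.123456790  # differ in the 9th decimal place
          for steps in (10, 30, 50):
              # The gap grows roughly exponentially until it saturates at O(1).
              print(steps, abs(iterate(a, steps) - iterate(b, steps)))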
      • thumb
        Mar 13 2012: So what you're "really" saying when you say "Consciousness is a self-referential system" you're saying "Boy, it's a cluster of unimaginable proportions!" and rather not as simple, straight-forward, or fruitful as you made it sound! How would you isolate every possible starting condition that might give rise to a human being and ever hope to accurately replicate it, even with an iterative approach?

        What if, through some research, we discover a system of equations that we think describes how consciousness works, and it produces a Lorenz-attractor-like plot? Sure, it's iterative, but it never repeats. There would be no regularity, and it would give rise to none of the predictability we've come to expect from our fellow human beings.

        Unless---Unless there were some function, some mechanism, within our consciousness that, maybe, allows us to run through *every* possible thought and allows us to pick and choose which ones are relevant to us? Kinda like that QM thing?

        Unless I missed something in all these other articles, chaotic systems are unimaginably complex. I happened upon an article that talks of a guy named Poincaré. I happen to know that one of the seven Millennium Problems pertains to something called the Poincaré Conjecture, and that a dude named Grigori Perelman built upon the research of a guy named Richard Hamilton and his work on using Ricci flow to attack the problem. It took a century, but they did it.

        I may be overly simplistic, but whereas QM may be unproven, at least a little bit more research could rule it out today, whereas this stuff--? If it takes as long to solve this as it did the Poincare Conjecture, we're looking at another 40-50 years easy. And let us not forget that the nature of thought and consciousness has occupied people since the VERY beginning, mathematicians and philosophers alike.

        Isaac Newton, who helped lay the foundation from which DST sprang, said, "I can calculate the motion of heavenly bodies, but not the madness of people. "
      • thumb
        Mar 13 2012: As is the case with all things, perhaps the truth lies somewhere between?

        If you think about our brains, not only are there loops (as you suggest) and instances of advanced parallel processing (like sorting through multiple paths and trying to settle on the most efficient one); our brains are also immense databases of atomic facts.

        The *sky* is *up*.

        *Grass* is *green*.

        Don't *eat* the *yellow* *snow*.

        http://cyc.com/cyc/technology/cycrandd

        http://en.wikipedia.org/wiki/Cyc

        Perhaps multiple loops running in parallel sort through this database of facts or *rules* if you will.

        Perhaps the decision trees that connect these facts are themselves contained in a database, and the most efficient one gets decided upon using some QM-style phenomenon. Certain things that don't lend themselves well to recursion, but where the required outputs are known, could be executed via supervised learning methods (like backpropagation techniques, although continual neuron-weight updates might be a certain kind of loop themselves). And other things do lend themselves to looping, and/or their datasets are *not* known (in which case an incremental breakdown would be necessary IF no rule in the database could be modified to match the novel input).

        A combination of all these techniques would be necessary in creating a *strong* AI because, certainly, there are certain phenomena that lend themselves better to each of these approaches than others.

        All of this means we'd have to optimize our search of the rule database itself, yes? And we could do that using---another rule. A rule about rules. One rule to rule the rules. The Golden Rule. Or just categorize the rules into subsets to which every rule must belong.
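
        A minimal sketch of the atomic-facts-plus-rules picture (a toy forward-chaining loop over invented facts, far simpler than anything Cyc actually does): the loop keeps applying rules to the fact base until nothing new can be concluded.

          facts = {("sky", "is", "up"),
                   ("grass", "is", "green"),
                   ("snow", "is", "yellow")}

          rules = [  # (condition fact, fact to add when the condition holds)
              (("snow", "is", "yellow"), ("snow", "action", "do-not-eat")),
              (("grass", "is", "green"), ("grass", "property", "alive")),
          ]

          def forward_chain(facts, rules):
              changed = True
              while changed:  # loop until no rule produces a new fact
                  changed = False
                  for condition, conclusion in rules:
                      if condition in facts and conclusion not in facts:
                          facts.add(conclusion)
                          changed = True
              return facts

          print(forward_chain(set(facts), rules))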
      • thumb
        Mar 14 2012: Hey, after all that stuff, I happened to be randomly reading something on Facebook and, not knowing how to work it cleverly into the conversation, just decided to blurt it out, because it's tangentially related to AI.

        http://www.smartplanet.com/blog/thinking-tech/how-to-augment-our-intelligence-as-algorithms-take-over-the-world/10588

        It's about the rise of algorithms within trading and advertising businesses and how they seem to be taking on aspects of a predator/prey relationship. It's kinda neat.

        And this:

        http://www.smartplanet.com/blog/thinking-tech/next-breakthrough-computers-that-understand-emotions/6363

        And this:

        http://www.smartplanet.com/blog/thinking-tech/computer-types-out-messages-by-reading-your-mind/6411

        and possibly more, because I'm bored and I don't wanna do homework on Spring Break. These all seem to be technologies geared towards "bridging the gap", using the analytical power of one and the parallel computing power of the other to create a sort of hybrid intelligence, and to shore up biological deficiencies or injuries. Which is a viable possibility towards creating machine intelligence, if you think about it. Sufficient amounts of lab-born neural networks--?

        Oh yeah, and that whole thing about computer programming languages made me think about other computer programming languages---I read a while back about some programming languages whose creators had a rather vicious sense of humor. Take a look at some of 'em:

        http://computersight.com/programming/five-strangest-programming-languages/

        I think I actually might want to use the one that you have to use lolspeak in. . . Or the one that you sometimes have to ask "please" before it will run.
  • thumb
    Mar 8 2012: If computers can win Jeopardy!, I reckon they're more than ready to replace our political decision makers.
    • Mar 8 2012: A politician needs to be able to make an independent decision. At this point in time, no computer can do that.
      • thumb
        Mar 8 2012: I for one welcome our cyber overlords - but seriously, folks, 99% of a politician's decisions are based on getting people to like him, which is not obviously the best criterion for the country as a whole. I understand that an AI that acts with as much passion and stupidity as a human isn't yet on the market - but a computer program that can make economic decisions based on data might just be possible.
        • Mar 8 2012: I will never support AI overlords as long as they don't have a sense of consequence. Politicians don't SEEM to have that trait, but an AI would not have it at all.
  • thumb
    Mar 8 2012: What we call Artificial Intelligence differs from human intelligence in one tremendous aspect: it completely and totally lacks anything to do with consciousness, qualia, sense of self, intuition or any kind of subjective experience. AI offers nothing but a slight semblance of awareness and reasoning. It is nothing but an elaborate and extensive layout of logic gates, with all sense and logic nothing but complicated patterns of binary data.

    If human intelligence could be rendered as anything similar, it wouldn't require any sentience at all. Even machine learning is nothing but trial and error, with any and all success strictly confined to cold algorithms. The inorganic technology of today can only hope to mimic life, to give the appearance of behaving in a manner similar to organisms. I have a feeling we'll move almost entirely on to organic technology, and to modifying the mysteries of life as they've come to exist, long before succeeding in creating a lifeless program on par with the mind of a human being.
    • thumb
      Mar 8 2012: Are you saying consciousness, qualia, sense of self, etc. are systems that have no baser components? As humans, don't we want definitive proof if that's the case? And if not, the baser components should be simpler, easier to grasp, and explainable. From that point, we should be able to create artificial systems that mimic them.

      It seems like you're focusing on weak AI strategies (machine learning, fixed algorithms that do specific tasks, etc.). We can use computers to simulate neurons, and from a blackbox perspective they're no different from an organic neuron. Right now, even by observing neurons in a human brain, we are unable to make the leap between a system so well defined and the complex system known as human consciousness. I think that once we understand that, we will be able to successfully mimic it.
      • thumb
        Mar 8 2012: I'm sure everything has lesser components, or so we could reason. But do the components of consciousness resemble the lesser components of machines, blind currents interacting in algorithmic ways? I feel we're incredibly limited in assessing this scientifically, as the only consciousness we've come to know is our own. Subjective experience isn't something we observe in a microscope. And this has an impact on the way we view the world around us, the way we reason things to exist. Consciousness itself is a tremendous mystery, not just the senses and reasoning it involves but the actually nature of experiencing and being. That experience, qualia, isn't something we can replicate with programs or machinery. More than likely before we even care to try, we'll be crafting organisms into machines, likely without regard or awareness to their own qualia.

        Even a simulated neuron, with today's technology at the very least, breaks down to bits and registers, pushing and popping. Our attempts to simulate all behaviors we observe in nature share a common limitation - they're absolutely agentless. And I think this stems from our limitations in assessing the world around us, because such agents aren't something to be observed. And it leads to a flaw in our reasoning: the mystery of consciousness is no longer a question of its role in the universe; instead we want to understand how such a phenomenon could arise from dead matter. The world as we see it is dead, but only because we lack the ability to observe its inner being. We want definitive proof of things, but this itself is a limitation to its own end.
        • thumb
          Mar 8 2012: I agree with you about the mysterious nature of consciousness. One of the great things about the quest for AI is the fact that it's forcing people to confront this mystery again. If we do create AI, no more can people just say "oh, it's just chemical reactions in the brain".

          Subjective experience is innately mysterious because science is rooted in objectivity (or at least the scientific method attempts to be). We can't even prove the existence of our own subjective experience, so how are we going to be able to tell whether a computer can experience it.

          If you program a network of simulated neurons to dream, does it experience the dream? Will it be afraid after a nightmare, or inspired by some strange subjective metaphor that came while it was "unconscious"? That's pretty hard to prove, considering I can't even prove that I experience a dream.

          Although, that doesn't mean that computers can't be conscious. For all we know, many things are conscious. Consciousness could be as ubiquitous as gravity, or some strange property of electricity. I can't prove my kitchen table doesn't undergo subjective experience any better than I can prove I do.

          As much as I am a firm believer in my own consciousness, I wouldn't hold it as a criterion for AI, for the reasons above. It is just too elusive and unmeasurable.

          At some point, though, we will have to confront these kinds of questions. Do computers feel pain? Do they have rights? It's very similar to debates around animals, but now we have to either expand or confine our ideas about consciousness in regard to a very different type of entity.
        • thumb
          Mar 9 2012: I ask again: what is qualia? How do we classify qualia? Just because we label our experiences "qualia" doesn't mean that's the rawest essence of the experience. These experiences differ from person to person; synesthesia is one obvious example of how they differ. Do people who experience synesthesia have a condition that isn't normal? If so, there must be a way to attribute qualia to something systematic.

          Also, I agree with Scott when he asks whether a table can be conscious. We know that consciousness is present in a system of neurons. Neurons themselves are not conscious. Are you saying this agent that controls or affects neurons is conscious, or contains some attribute that creates consciousness? What's to stop us from finding the origins of these agents?
  • thumb
    Mar 8 2012: With the word "replace" in the question, my answer will be a no. My reasoning is fairly simple - a human is the sum of its genes, its experience, and the lives of those who came before it. There is something intangible about that last bit - our decisions are based not only on our own genes and experience, but also on interpreted history. And the key word there is "interpreted"; you can feed all the history of the world into an AI and make that enter into its decision-making process, but it will never be able to emulate the "interpretation factor".

    So no, I don't think technology can replace human intelligence. But in a narrow scope, it CAN surpass it - by a lot. The first thing we need to do, though, is make a computer that calculates outside of right and wrong, or outside the binary domain. For an AI to be successful it needs to recognize that there is such a thing as more right, more wrong and neither right nor wrong. I think this is more of a challenge than people realize.
    • thumb
      Mar 8 2012: It seems like you are interpreting (pun intended) AI systems as only discrete entities with a very algorithmic core. The problem of the strength of AI is more substantial than that. Currently in the field of neuroscience, we are unable to make the connection between the microscopic systems (neurons), whose inputs and outputs are very well defined, and the macroscopic system (our consciousness). Right now, we can emulate neurons very well; proponents of strong AI believe that with enough emulated neurons we can replicate consciousness. The question at hand goes beyond whether we can artificially create consciousness; it's a question of "what is consciousness?", because we are unable to tease it out of the known systems (the human brain).
      • Mar 8 2012: The real question is whether or not we will believe it is in fact consciousness once we have created it.

        There's no reply to your reply, so I am dropping this above the line.
        I never said human. I said conscious. My use of the word "believe" stems from the fact that we cannot know.
        • thumb
          Mar 8 2012: The real question, which Oliver Milne has been pushing countless times in this conversation, is whether or not we KNOW it's in fact conscious. Machines can be made to pass a Turing test without having any real intelligence, and if we "believe" it's a human, then we are lying to ourselves. Part of being able to create a conscious system is being able to definitively show that it is conscious. If we are unable to show without a doubt that it is conscious, then we have fallen for hokum.
        • thumb
          Mar 8 2012: Then that begs the question: how do we test for consciousness? Ignoring for a moment the immense difficulty in creating consciousness, let's devise a test for consciousness on the only entities we suspect of having consciousness now---humans.

          And if we can't even show that we're conscious, does that imply we've already fallen for hokum?
        • thumb
          Mar 9 2012: @Logan. There's something known as the "three aspects of consciousness". There's also the concept of "theory of mind". Scientists have devised well-accepted tests for those aspects in humans and animals. The mirror test tests for one aspect: the ability to recognize oneself. There's also the ability to sympathize with others by recognizing external events as if they were one's own, and finally there's the ability to take previous experiences and apply them through deduction to future events. Many animals have facets of the three, but not all three.

          Using these tests, we've been able to find out that babies develop these abilities in steps and do not fully gain all three until months after birth.

          And as evidence that these facets of consciousness are tied to real-world systems, watch this video about mirror neurons: http://www.ted.com/talks/vs_ramachandran_the_neurons_that_shaped_civilization.html. It would seem like we have evolved with systems in place to aid the sympathetic aspect.

          So it would seem like we have tests for consciousness. If anything, we should scrutinize the three aspects and theory of mind to see whether they truly encapsulate what it means to be conscious.
        • Mar 9 2012: Those tests are a starting point, but I don't think they address the 'hard problem' of consciousness (http://en.wikipedia.org/wiki/Hard_problem_of_consciousness), which is the part that really matters. It's possible, and a little disturbing, to imagine a sort of android that acts exactly like a person, including in those behavioural tests, but which doesn't have consciousness. If we didn't look inside its head (I mean that literally), we could never tell whether or not it was a person. You suggested elsewhere that perhaps nothing unconscious could manifest all the signs of consciousness. That'd be a fantastic discovery if it were ever confirmed, but, on the face of it, it seems like something that would be almost impossible to find out without first knowing what consciousness is.
      • thumb
        Mar 8 2012: That is actually not my interpretation :)

        I have no doubt whatsoever that we will one day spawn a conscious AI whose thinking pattern mimics that of a human, nor do I doubt that such an AI will one day be vastly more intelligent than any human. As I said, technology CAN surpass us - but only in a narrow scope. Something *will* be lost in the translation between the biological and the technological. I sincerely doubt we will be able to infuse an AI with the "human condition".

        You may argue that I'm wrong because if we can create a technological system perfectly analogous to the way the human mind operates, the "human condition" may come forth naturally. Then I counter with this - if we are able to do that, what we will have is the technological equivalent of a caveman with a library full of history books. Yes, the caveman may be incredibly intelligent and he may have access to all of our history, but the interpretation factor cannot be replicated artificially.
        • Mar 9 2012: Maybe not your human condition. But that caveman-robot might equally despair at the impossibility of creating a human capable of understanding the caveman-robot condition :P
  • Mar 7 2012: You simply have to ask: why are we creating machines? Seriously, why?
    If your only answer is to replace humans, then the answer is YES; we will find a way to make a machine do everything a human can do and more...
    However, I really do not see that as the goal of making machines. We build machines to help humans, to support humans, etc. Even if machines become self-aware (they will; it's only a matter of time), I do not see a war against them. I see a future where machines and humans will be congruent, seamlessly woven into a new matrix.

    This said, if we continue on our current path, this little rock in space will become uninhabitable for humans, so our only hope of immortality will be the machines we create to carry on...
  • Mar 7 2012: The human brain is a machine, and is ONLY a machine. Any discussion that is not based on this simple fact is seated in fantasy. Many people seem to think that it is the ultimate machine, but it absolutely is not. One estimate I've seen puts human computing power at a mere ~0.1 petaflops, whereas many of the TOP500 supercomputers far exceed this performance already. Human-like AI is just a matter of reverse engineering. We will figure it out. It will be done very soon. It seems to me an easier task to build an imperfect replication of our intelligence than to build something that augments human intelligence. Therefore I would expect to see AI that exceeds our intelligence before technology that augments human abilities.

    Even if we lived in a magical universe where the ~maximum~ amount of "intelligence per volume" was realized in the human brain, once AI arrives on the scene there would be many things to do to make AI vastly superior to human intelligence. I am thinking of tricks such as scaling (making the brain bigger) and reconfiguring the topology of the neural network, such as is seen in the fMRI scans of savants and individuals with autism. Fortunately, the reality that we live in is one where the human brain provides sufficient intelligence for our individual survival. The theoretical maximum intelligence density is many, many orders of magnitude higher than that seen in our brains.

    The public opinion is that AI will only asymptotically approach the abilities of humans. This is an untrue, egocentric worldview. Damon Horowitz should perhaps retitle his talk "Why MAN-MADE Machines CURRENTLY need People". Remember that we are machines too. Human-like AI is a matter of current research.

    You ask "what if.. would it be able to think?" I would say there is no reason to think otherwise. In fact, imagine what something many orders of magnitude smarter than you COULD think!? Fleeting thoughts would be equivalent to centuries of modern scientific discovery.
  • Mar 7 2012: I suppose you have to consider that, as biological organisms, we have had over 200 million years to evolve and develop our neural pathways and intelligence. Our knowledge is always expanding, and if at the moment it seems that AIs cannot process certain things, it may be the case that in 5-10 years they can.

    But in terms of emotion and feelings: again, our current knowledge may not allow us to program emotions into AIs. It would nevertheless be very complex, as emotions are governed by personality variables and inherited parameters. E.g., compliments: do you like to be thought well of? If yes, then would any statement casting you in a positive light support this? If yes, then, based upon this decision structure, it would induce positive feedback within the neural net.
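
    As a toy illustration of that compliment example (a sketch of my own; every name and number here is invented, not a real emotion model):

      # A personality parameter and a feedback rule: praise that matches the
      # agent's inherited "likes approval" parameter nudges its mood upward,
      # and the mood feeds back into how it responds next time.
      likes_approval = True   # inherited personality parameter
      mood = 0.0              # running emotional state, clamped to -1..1

      def hear(statement_is_positive):
          global mood
          if statement_is_positive and likes_approval:
              mood = min(1.0, mood + 0.2)   # the positive feedback
          elif not statement_is_positive:
              mood = max(-1.0, mood - 0.2)

      def respond():
          return "That's wonderful!" if mood > 0 else "Hmm."

      hear(True)
      hear(True)
      print(mood, respond())   # 0.4 That's wonderful!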

    The efficiency of such systems would be superior to that of organic life. This is because we have organs shaped by our evolution within the environment, where food must be processed and its useful parts used to provide energy to our components. With a mechanical life form, such an array of systems would not be required: a single power source would suffice, with no need of a disposal system to deposit the 'lost energy'.

    But coming back to consciousness: if a self-adaptive program were used for the AI to develop based upon its surroundings and demands, then it would be the same as a newborn child, whose neural pathways develop depending on their environment too. In addition, the AI would not necessarily require a division of its neural net into two parts (i.e. conscious/subconscious), and would therefore be able to recall information in real time and perform complex calculations.

    In conclusion, I think it really depends on how the AI is programmed and its architecture. Can it simply match patterns at a superior rate, or does it have the processing capacity to interpret these and to understand them as well?
  • thumb
    Mar 7 2012: Whether AI will be "strong" enough in the next 50 years to equal human intelligence may be the wrong question. Martin Ford argues in The Lights In The Tunnel that it will probably be strong enough to automate most jobs, especially those of Knowledge Workers (and of course blue collar workers, who are already being displaced). Taking an objective view (and a deep breath), I think he is probably right. He has a prescription for how to manage an economy of mostly leisure time, but the point is, AI doesn't have to be smart enough to be your friend, or a good dinner guest, to be a completely disruptive technology. It just has to be as good or better at doing a specific/documentable range of tasks. And I think it will be there by 2050.
  • Mar 7 2012: Intelligence, maybe. Can technology replace human creativity? That seems a bit more complicated.
  • Mar 7 2012: I say ask the being if it thinks it is alive or if it is a machine. If it thinks it is alive then who are we to say otherwise?
    • Mar 7 2012: It can only think it's alive if it can think. But something doesn't have to be able to think to pass a Turing test. The danger of your approach is that we might make unconscious machines that wrongly insist that they can think.

      Consciousness is something that really happens. There is a fact of the matter of whether something is conscious or not. And if we're going to make machines that do impressions of being conscious, we really, really need to know what that fact of the matter involves.
      • Mar 7 2012: I'm not sure if we can ever satisfactorily answer this question. Is consciousness really a yes-or-no question, or is there a grey area of being partially conscious? I'm also thinking of the evolution of humans from less conscious ancestors.
        • Mar 7 2012: I agree with you, but we have to try. And imagine how fantastic it would be if we succeeded - we'd finally have an answer to one of the biggest questions there is.
  • thumb
    Mar 14 2012: Yeah ... for sure!
    What is technology?
    Technology is the study of performing a particular task through various techniques ... hence the word's derivation: tech-nology!
    So it has to be related to the actions of humans with respect to various techniques!
  • Mar 13 2012: Logan, hey! I am enjoying our conversation here; I hope you are as well. Here's what I'm thinking at this point: I must clarify that I don't think self-referential loops are the only answer to explaining consciousness. I simply think they are a better route than QM (but again, I'm no expert, just a thinker). I think computer science is a good route to go towards explaining consciousness as well. Have you talked to anyone who knows how to program a chat bot? Have you ever talked to one? Try it out here: http://www.personalityforge.com/dynachat.php?BotID=24007&MID=23957. I tried asking the bot questions/giving directives such as "Are you conscious?" "Do you have feelings?" "Pick a number." "What is your favorite food?" I think the bot, or rather the bot's programmer (or is it the bot itself), is rather clever... The point of the bot is, is it conscious? Have human intelligence and human consciousness been achieved through technology? Could this tech replace human consciousness?

    I don't know... what's your take on this? And just think, the website I provided is pretty simple; that is to say, it's not a research university and it's not the government. Think what DARPA must have!

    What's your take? Is the bot a representation of human intelligence being replaced by technology?
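
    For what it's worth, the bones of a bot like that can be tiny. Here is a minimal ELIZA-style sketch in Python, just regular-expression pattern matching over canned replies; the rules are my own toy examples and say nothing about how the Personality Forge actually works:

      import random
      import re

      # Each rule pairs a regex with canned replies; \1 echoes captured text.
      RULES = [
          (r"are you (conscious|alive|real)",
           [r"Do you believe I am \1?", r"Would it matter if I were \1?"]),
          (r"i feel (.*)",
           [r"Why do you feel \1?", r"How long have you felt \1?"]),
          (r"my favorite (.*) is (.*)",
           [r"What do you like about \2?"]),
          (r".*", ["Tell me more.", "Go on.", "Why do you say that?"]),
      ]

      def respond(message):
          # First rule whose pattern matches decides the reply.
          for pattern, replies in RULES:
              match = re.search(pattern, message.lower())
              if match:
                  return match.expand(random.choice(replies))

      while True:
          print(respond(input("> ")))

    Ask it "Are you conscious?" and it answers fluently, which is rather the point: all the fluency lives in the rules, with nothing behind them.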
    • thumb
      Mar 14 2012: And yeah! I've taken a look-see at one! If you're looking for some *extreme* examples of bots, man, check this out:

      http://www.cleverbot.com/

      That guy's hooked into at least one server, maybe more, and is running checks against every single thing ever said to it! I asked if it was lonely once---and the system crashed. I think it was right around maintenance time though. The point is---I think it's got a spark. Kinda like when you take the blunt side of a knife to a flint---one spark flung off it. Humans have lotsa sparks flying every which way. Consciousness surely isn't a "yes/no" decision; it's a very tricky grade.

      And when we achieve it, I think people will *still* say something about it. But they'll turn to disagreeing with it on *qualitative* grounds rather than *quantitative* grounds. "Sure it's accurately calculated, derived, and applied reasoning at the human level---but is it the sort of decision a flesh-and-blood human would've made?"

      Which is going to be the point where you just have nay-sayers and proponents, like in any issue. It'll reach a boiling-point---and then people will just have to deal with the fact they may never know.

      As for QM vs. Self-referential loops (and other possible AI sources) we could keep going back and forth on it, and truth be told, I'm as big a fan of the "How many angels can dance on the head of a pin?" kinda debates as anyone, but until it's won-and-done, it's just two old ladies sitting in a darkened room complaining that nobody's changed the lightbulb, instead of actually *doing* something that changes things one way or another, like testing for it. I mean that politely. :) Self-referential loops will always be there; let them take a bit of a break, experiment with something new, and then they can go back to it if it doesn't pan out.

      And I'm willing to keep debating it! Let's just be honest and up-front about the possibility of it leading anywhere.
      • Mar 14 2012: Logan,

        Hey! Thanks for directing me to that bot. As I said before, I don't think self-referential loops are the exclusive way to think about the brain; I think they are just a good starting point and direction. I also think bots have a lot to say about consciousness. I think I might try programming a bot on my own to see what I can come up with! I believe bots can be programmed to be indistinguishable from human/human chat interactions.

        You said that consciousness isn't a yes/no decision, that it is graded. I agree with you. But isn't it interesting that we do say this person/thing is conscious while this person/thing is not? It seems we are able to talk about consciousness in a yes/no fashion, at least to some degree.

        And I think we are moving into an era where we need to stop thinking about consciousness in terms of only belonging to flesh-and-blood beings. Just because flesh and blood was the first place we noticed consciousness doesn't mean it's the best or only place.

        Regarding my distrust of QM playing a primary role in consciousness: you're right, you and I could sit here interminably and debate what is really going on. At this point I'm saying, by all means, investigate, investigate, investigate! Theoretically it doesn't seem possible, but that is for the experiments/studies to decide. So what do they say? Has anyone even come close to observing QM phenomena in the brain? I know you provided those articles, but weren't they pretty much asking "what if" without providing any evidence or answers?

        What do you do? Are you a student? Do you have access to research resources? I would love to look at stuff like this experimentally.

        But again, I stay with my theorizing. There is too much going on in the brain for weird QM phenomena to be happening... any QM effects will be instantly (on a much faster time scale than consciousness occurs) collapsed into classical effects...
  • thumb
    Mar 13 2012: Perhaps the day computers get emotional AND rational, imho. Aren't emotions part of what we call "intelligence"? Many of our decisions are based on emotions, if not the majority of them.
    That said, more rational, low-AI computers without emotional IQ could lead to a HAL-like computer deciding to nuke half of the world's population because it would preserve the Earth, which cannot cope with current demographics and consumption rhythms.
    I totally agree with the assertion that the study of the human brain will help create intelligent computers. We're just on Day 1 of understanding it; what happens next will be interesting for sure :)
  • thumb
    Mar 13 2012: No
  • thumb
    Mar 12 2012: Hi Howard
    There are a dozen steps that could be defined between "Stimulus Response", a level-one form of intelligence, and the "human mind", which would be at level 12. It is probable that, with the increasing speed of processing power and the increasing size of networked information, artificial intelligence will be able to function at level 10. There are two functions that the electrical mind cannot ever perform, no matter how powerful its processing or memory may be. First, it cannot extend its awareness in three dimensions into an environment where it has no information. It can only process the information that it has to process. Second, the artificial mind cannot experience the awareness of the unknown. The AI can know or not know. It cannot experience the awe of not knowing. Many of the characteristics which make us both human and better than any AI are our awareness and awe of the unknown, the mysterious, the spiritual, and the creative.

    Someone once said that an infinite number of chimpanzees typing for an infinite number of years would reproduce all the writings that have been produced by mankind. There is a very clear answer to this supposition: no, they will not. Never in an infinity of years will an infinity of chimpanzees produce even one full page of writing. Any number times zero is zero.

    We have seen AI machines fool readers into thinking that they were human by clever programming techniques. So it is probably true that we could be fooled into thinking an AI was as smart as we are. But clever trickery on the part of a programmer is not the same as the true intelligence that is generated from within a human. The goal is not to fool us. That can be done. The goal is to create a truly thinking AI, and that cannot be done. We will probably build fantastic AIs, but we must never think that they love us.
    • Mar 13 2012: A very interesting point of view. All existing AI systems (or at least all that I am aware of) use digital logic for computation, communication and storage. One might think that this cold scheme of 1s and 0s would not be conducive to recreating the kind of fuzziness inherent in the human brain. But don't underestimate the power of computers. You can numerically simulate the actions of a single neuron through the Hodgkin-Huxley equations:
      http://nerve.bsd.uchicago.edu/nerve1.html
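
      To give a flavour of what "numerically simulate" means here, below is a minimal single-neuron Hodgkin-Huxley integration in Python (simple Euler stepping, the classic squid-axon constants); a toy sketch of my own, not the simulator behind that link:

        import math

        # Classic Hodgkin-Huxley constants (uF/cm^2, mS/cm^2, mV)
        C_m = 1.0
        g_Na, g_K, g_L = 120.0, 36.0, 0.3
        E_Na, E_K, E_L = 50.0, -77.0, -54.387

        # Voltage-dependent opening/closing rates of the m, h, n gates
        a_m = lambda V: 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
        b_m = lambda V: 4.0 * math.exp(-(V + 65.0) / 18.0)
        a_h = lambda V: 0.07 * math.exp(-(V + 65.0) / 20.0)
        b_h = lambda V: 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
        a_n = lambda V: 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
        b_n = lambda V: 0.125 * math.exp(-(V + 65.0) / 80.0)

        V, m, h, n = -65.0, 0.053, 0.596, 0.317  # resting membrane state
        dt, I_inj = 0.01, 10.0                   # step (ms), injected current
        for step in range(int(50.0 / dt)):       # simulate 50 ms
            I_ion = (g_Na * m**3 * h * (V - E_Na)    # sodium current
                     + g_K * n**4 * (V - E_K)        # potassium current
                     + g_L * (V - E_L))              # leak current
            V += dt * (I_inj - I_ion) / C_m
            m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
            h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
            n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
            if step % 100 == 0:                  # sample every 1 ms
                print("t = %4.1f ms, V = %7.2f mV" % (step * dt, V))

      With that injected current the printed voltage spikes repeatedly; with I_inj = 0 it just sits near -65 mV. One neuron in thirty lines.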

      And if you can simulate one, then why not two or three? Why not an entire brain? That is, in fact, what these researchers are trying to do:
      http://bluebrain.epfl.ch/cms/lang/en/pid/56882
      http://www.ted.com/talks/henry_markram_supercomputing_the_brain_s_secrets.html

      And if you aren't impressed by that, there's a school of thought that holds that programmers just aren't doing it right, or at least, that they aren't setting the right goals:
      http://www.ted.com/talks/jeff_hawkins_on_how_brain_science_will_change_computing.html

      Needless to say these attempts are nowhere near the point at which they can claim to have reproduced human intellect, and consciousness is a whole different ballgame, but I think that it is, at best, equally premature to say that it can't be done.

      As for the monkey anecdote, the book Coincidences, Chaos and All That Math Jazz by Burger and Starbird explains it in detail, but it would be a logical fallacy to say that the probability of typing a given page is zero. The probability of a perfectly random monkey typing a given letter at a given time is 1/26. The probability of typing a given ASCII character is 1/128. The probability of the monkey typing a given 3,000-character essay with access to all the ASCII keys, on one try, is (1/128)^3000 or 2.34 x 10^-6322 -- a very small number, but not zero. So with unlimited time, the monkey would, in fact, produce the essay. However, if probabilities at infinite time had any relevance, I would be in Vegas now pulling the old Martingale!
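
      If anyone wants to check that figure, working in logarithms avoids numerical underflow; a quick Python check, assuming the same 128-key, 3,000-character setup:

        import math

        keys, length = 128, 3000                    # ASCII keys, characters
        log10_p = length * math.log10(1.0 / keys)   # log10 of the probability
        print("p = 10^%.1f" % log10_p)              # p = 10^-6321.6, i.e. ~2.34e-6322
        print("expected tries = 10^%.1f" % -log10_p)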
      • thumb
        Mar 13 2012: Just an interesting piece of information that I came across a year or two ago. One of the testing grounds and benchmarks for artificial intelligence is actually the game of Go. Go, a strategy game originating in China over 2000 years ago in which black and white pieces compete for territory, is simple enough for a person to learn all of the rules in a day. However, so far, no computer has managed even to compete with professional players, and some of the best programs can be beaten by an advanced beginner or lower-intermediate player. Since the game is played on a 19x19 board, rough numerical analysis estimates that the number of possible Go games far exceeds the number of atoms in the known universe. Thus, Go programming requires a different route from chess programming, not just brute calculation ability. It cannot simply mimic the techniques human players use; it must judge situations in which the outcomes of multiple groups of stones on the board are not clear. Because of this, Go has been a testing ground for many different AI techniques, including pattern matching, neural networks, and genetic algorithms. It's worth watching to see how the programs evolve.
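
        Incidentally, the strongest current Go programs, such as MoGo and Crazy Stone, attack exactly that judgment problem with Monte Carlo methods: a move is scored by playing thousands of random games to the end and counting wins. A real Go engine won't fit in a comment box, so here is the same playout idea as a toy sketch of my own, applied to tic-tac-toe instead:

          import random

          WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

          def winner(board):
              for a, b, c in WINS:
                  if board[a] != "." and board[a] == board[b] == board[c]:
                      return board[a]
              return None

          def playout(board, to_move):
              # Finish the game with uniformly random moves; return the winner.
              while winner(board) is None and "." in board:
                  cell = random.choice([i for i, s in enumerate(board) if s == "."])
                  board[cell] = to_move
                  to_move = "O" if to_move == "X" else "X"
              return winner(board)

          def win_rate(board, move, player, n=2000):
              # Score a move by the fraction of random playouts the player wins.
              wins = 0
              for _ in range(n):
                  trial = board[:]
                  trial[move] = player
                  if playout(trial, "O" if player == "X" else "X") == player:
                      wins += 1
              return wins / float(n)

          board = list("X........")  # X took a corner; where should O reply?
          empty = [i for i, s in enumerate(board) if s == "."]
          print(max(empty, key=lambda i: win_rate(board, i, "O")))  # usually 4, the centre

        Scale the board up to 19x19 and raw playouts stop being informative on their own, which is why the real programs wrap them in tree search and pattern heuristics.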
  • thumb
    Mar 12 2012: Logan and Chase: Enjoyed your conversation and learned new things. Thanks for all the references!
    • thumb
      Mar 13 2012: Hahaha, join in, Lynn! Would love to hear ya' weigh in, or throw out additional information!
  • thumb
    Mar 11 2012: Probably not, as technology is created by humans, so our errors and, often, our limited viewpoints would shine through.
    If, however, you take ALL knowledge in the world, whether correct or not, maybe some genius somewhere can create a program to sort out the misinformation.
    As long as we limit our "knowledge" to so-called facts, we will not progress, but continue to "regress".
    • Mar 13 2012: Lisi, I’m glad that you brought up the fact that technology is inherently limited. There is no such thing as 100% accuracy in science or engineering. Every device that we create works within some acceptable error tolerance. And, we can’t forget the fact that the performance of man-made devices degrades with time. Even if a machine is working “almost perfectly” on its first run, it’s only a matter of time before various bugs begin to appear. Finally, we can’t neglect the fact that humans are able to learn and adapt in response to change. While machine-learning algorithms are in use today, machine learning is nowhere near as advanced as human learning. In order to build a machine that’s as intelligent as a human, we would first have to figure out all of the intricacies of human intelligence. I think that everyone can agree that our understanding of the human brain is still very limited. And, if we don’t fully understand the human brain, how can we hope to replicate it in a machine?
  • thumb
    Mar 11 2012: Hmm. Could just be a matter of time, for sure. If you think about it, context-based humans or rule-based machines - regardless, we're all just a different collection of energy/electricity. It's a matter of engineering a machine that uses/intakes energy in a way most akin to what makes human experience possible. And perhaps, despite all of the serendipity and randomness of human emotion/experience, there's some underlying pattern to it all. Revealing that pattern may clear the way for us to mimic it in robotics.

    However, humanity still stares at itself in the mirror as if it's just meeting itself for the first time. And until we've transcended this state, I'm sure our robots will mimic this limited understanding of ourselves.
  • thumb
    Mar 11 2012: I am not a programmer and do not know much beyond MATLAB ...
    But the program of a human is something like:
    1. See your environment.
    2. Take its pattern.
    3. Save it in memory.
    And if a problem occurs:
    1. What is the unsuitable stuff?
    2. What is your destination?
    3. Make a pattern from 1 to 2.
    4. Match the pattern from 3 with one of the patterns in memory.

    I think simulation of this path for an electronic brain is hard but not impossible.
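
    Read literally, those steps could look something like this toy Python sketch (nearest-neighbour matching over saved patterns; the concrete framing is just one hypothetical reading of the steps above):

      import math

      memory = []  # step 3: patterns saved from past experience

      def distance(a, b):
          return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

      def observe(pattern, action):
          # Steps 1-3: see the environment, take its pattern, save it.
          memory.append((pattern, action))

      def solve(unsuitable, destination):
          # Problem steps 1-4: make a pattern from the current state (1)
          # to the destination (2), then match it against memory (4).
          target = tuple(d - u for u, d in zip(unsuitable, destination))
          pattern, action = min(memory, key=lambda p: distance(p[0], target))
          return action

      # Tiny demo: remembered moves in a 2-D world
      observe((1, 0), "step east")
      observe((0, 1), "step north")
      observe((-1, 0), "step west")
      print(solve(unsuitable=(2, 2), destination=(5, 2)))  # -> step east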
    • thumb
      Mar 13 2012: Hi Amirpouya,

      I think your simplification of the computing process of the human mind is pretty spot on. However, it raises a question for me. To me, it seems like an artificial mind would need to go through as many iterations as a human has life experiences to fully gain "human intelligence." And even then, how does a computer make decisions that we as humans deem impossible? A computer can master facts and memorize information, but I feel that how it interprets them is nowhere near how a human does. You can assign as many numbers and weights and formulas as you like, but at the end of the day, given a situation where the right answer may be the irrational one, how can we expect a computer to make that distinction?
      • thumb
        Mar 13 2012: And extend your question to group choices: a decision by one person for his own well-being might mean taking step A, but a community decision in a city might result in step B - rational for the group, irrational for 45% of the individuals... can a computer learn and interact that way?

        I guess we tend to underestimate how much our rational individual choices are bounded by the groups we are acting in... this is my daily experience in city development. Here is a lecture which is a good example of what we can compute in a city - and what not: http://www.labkultur.tv/en/blog/deltalecture-arrival-cities-1
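
        A throwaway numerical version of that 45% example (the utilities are invented): imagine a city of 100 residents choosing between options A and B.

          # Option B gives 55 residents +1.0 utility and costs 45 residents 0.5.
          # B is rational for the group yet irrational for 45% of individuals.
          gains = [1.0] * 55 + [-0.5] * 45

          print("group utility of B: %+.1f" % sum(gains))               # +32.5
          print("worse off under B: %d%%" % sum(g < 0 for g in gains))  # 45%

        An optimizer that just maximizes the sum picks B every time; the hard part is everything that single number leaves out.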
        • thumb
          Mar 13 2012: Bernd, if you were to try and code an AI, would binary be sufficient? I'm no programmer, but the way I see it, we would have to start at the bottom, modelling the amino acids, and build up from there. I don't think we can rely on equations; what I mean is, a neuron won't fire off the same signal constantly. What are your thoughts on this?
      • thumb
        Mar 13 2012: Hi Harnsowl -
        I hope I got your meaning ...
        A computer should not be programmed to react like a human.
        If it has all of a human's passions, it will become like an infant.
        And if it has the same ways of cognition as a human (seeing, etc.),
        plus the ability to make itself better over time (which, if a machine ever gains it, I believe will destroy all of mankind), then I think it will be a complete human.
        But one other thing remains: all of us feel WE are someone apart from US.
        For example, I feel I am someone apart from this body, and I just analyze its work.
        This feeling makes us feel we comprehend data in a different way than a computer does.
        But this SELF is just an independent system for making things better for the body by correcting its programming.
        I don't think this system deserves to be called a "soul", though.
        I said it's hard but not impossible.
  • thumb
    Mar 11 2012: I expect at some stage we humans could have brain aids, boosters, etc. (add-ons) that will enhance our thinking, memory, data access, etc.
    We could be "wired" - switch the lights on with a thought.

    Technology could also enhance human intelligence.
  • Mar 11 2012: A human is a map of a world, and here you are in a kind of map that means something only in relation to everything else. One of the criteria of creation, or the Big Bang as it's known, is to create a world of your own, with strong emphasis on "I", with infinite possibilities; but this world of yours, though infinite, is a box, a dimension, a plane. What is beyond it you will not know, because you were not created with the requirements for that. Within this huge existence there are many boxes with content, many yet to be filled, and a definite direction. Technology could be one aspect of that, along with religion, science, etc.

    Conclusion:
    1. Every human is a map of a world.
    2. Every person has to work on "I" and fill the infinite world, or box, of the self.
    3. People and boxes can be interdependent and social, but there is direction, because all of this is within the big box of this plane, dimension, or entity.
    4. Technology is one aspect, like religion or science or morality; other concepts need life too.
  • thumb
    Mar 11 2012: I leave interpretations up to the "experts"...

    In vitro hearts = great! I've got nothing against Homo sapiens looking for the elusive immortal code. Like you are saying (a bit flippantly), in vitro hearts could "save" lives.

    Machines could replicate anything. That doesn't mean that the brain's computing-power equivalence has been reached. Remember: all the computers on the internet at any one time are equivalent to the power of one human brain. Machines are pre-zygotic in this respect.

    I don't know "qualia" maybe synonymous with Jesus or God? It's a mystery i.e. an that is okay. We have no reference; we have plenty of anti-references (apparently many animals are inferior compared to Homo sapiens' brain power: personally I think that is a bit of pseudoreplication seeping into science i.e. ego)...
  • thumb
    Mar 11 2012: Try this one on for size: we have man and woman on this planet. We have different minds and intelligences; to make one without the other, to me, means something is missing. What if the missing link is the other mind you didn't simulate? Would it matter?
  • Mar 11 2012: Several years ago Congress funded a study to attempt to determine at what point Computers might become Sentient. My recollection is that it was quietly put into Law only to have the Religious Right wake up and repeal it. This discussion is full of Opinions and Assertions with little more than passion and feeling to back those up. IM
    • thumb
      Mar 11 2012: Very true, this discussion is full of opinions with passion backing them up. This is what I want to try to rise above. These feelings or passions are due to what we classify as "qualia". It is hotly debated whether qualia is something systematic and explainable, and thus measurable by a machine, or something unexplainable, and thus truly an aspect that makes humans unique (sentient, conscious, and unreproducible).
  • thumb
    Mar 11 2012: "Can technology replace human intelligence?" - questions like this put an end to that result. Human experience is a lot more about family, love, and other random mushy stuff. So mechanically speaking a superficially sentimental human could be recreated. No matter how deep the machine gets it will never get deep enough to reach the core of what is beyond the artificial. Anyway, back to skynet (-;
    • thumb
      Mar 11 2012: From what you know, what do you think the "core" is? When you say "artificial", what do you think it means? Artificial doesn't mean it's any lesser than the reference. Artificial simply means its source is not the same as the reference.

      Say we have the technology to grow hearts in vitro; we would call such a heart an artificial heart. If we use that heart in a person, is that heart any lesser than the original heart? Is that person then any lesser than he/she was before?

      You mention human experience; can we not replicate the systems that would enable machines to process the same experiences? In philosophy, this experience is called qualia. Can you state without a doubt that qualia is something only inherent in human beings? If it is only inherent in human beings, what processes does the human have that can process qualia and why is the process something we cannot reproduce?