Howard Yee

Software Engineer @ Rubenstein Technology Group

This conversation is closed.

Can technology replace human intelligence?

This week in my Bioelectricity class we learned about extracellular fields. One facet of the study of extracellular fields I find interesting is the determination of the field from a known source (the Forward Problem) versus the determination of the source from a known field (the Inverse Problem). Whereas the forward problem is straightforward and solutions can be obtained by direct calculation, the inverse problem is ill-posed: it lacks a unique solution, so any answer requires interpretation, which may be subjective. We may also automate that interpretation; such a mechanism is a form of AI. However, this facet of AI (classification) is only the surface of the field.
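To make the non-uniqueness of the inverse problem concrete, here is a toy sketch (the two-electrode "lead field" weights are invented for illustration, not real bioelectric physics): two different source configurations produce exactly the same measured field, so no measurement alone can tell them apart.

```python
# Toy forward model: two electrodes each measure a fixed weighted sum
# of three source strengths. (Hypothetical weights, illustration only.)
def forward(source):
    s1, s2, s3 = source
    electrode1 = 1.0 * s1 + 2.0 * s2 + 1.0 * s3
    electrode2 = 0.5 * s1 + 1.0 * s2 + 0.5 * s3
    return (electrode1, electrode2)

source_a = (1.0, 0.0, 0.0)
source_b = (2.0, 0.0, -1.0)  # differs from source_a by a "silent" source

print(forward(source_a) == forward(source_b))  # True: identical fields
print(source_a == source_b)                    # False: different sources
```

The forward direction is a plain calculation; going backward, both `source_a` and `source_b` are equally valid answers, which is why interpretation (human or automated) has to break the tie.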

Damon Horowitz gave a recent talk at TEDxSoMa called "Why machines need people". In it, he argues that AI can never approach the intelligence of humans. He gives examples of AI systems, such as classification and summarization, and explains that those systems are simply "pattern matching" without any intelligence behind them. If that is true, perhaps the subjective interpretation of inverse problems is preferable to dumb classification: through experience, human interpreters may have more insight than one can impart to an algorithm.

However, what Damon failed to mention is that most of those AI systems built to do small tasks are known as weak AI. There is a whole other field of study, strong AI, whose methods of creating intelligence go well beyond "pattern matching". Proponents of strong AI believe that human intelligence can be replicated. Of course, we are a long way from seeing human-level AI. What makes human intelligence hard to replicate? Can it be simulated? If we created a model of the human brain, would it be able to think?

Related Videos (not on TED):
“Why Machines need People”
http://www.youtube.com/watch?v=1YdE-D_lSgI&feature=player_embedded

  •
    Mar 8 2012: My Grandmother once told me "in a marriage, if both people are the same, one of them is redundant."

    Why all this emphasis on making a computer simulate human intelligence when there are so many intelligent humans up to the task, and computers are so good at things our brains are poor at?

    It is the symbiosis between humans and computers that drives the mutual evolution of both. For instance, everybody knows that the best chess player in the world is a computer. However, there are also chess competitions where humans with the support of computer programs compete with each other. Interestingly, the team is better than either the best human or the best computer on its own. Furthermore, it is not necessarily the best chess player who makes the best teammate for a computer, and vice versa.

    Another thing about human intelligence is that humans are intelligent in different ways. Intelligence is measured across a variety of aptitudes, and people's strengths and weaknesses vary. This talk really drives the point home:

    http://www.ted.com/talks/lang/en/temple_grandin_the_world_needs_all_kinds_of_minds.html

    On the question of why human intelligence is so hard to simulate, part of it is the amount of input we receive throughout our life in the process of developing into an intelligent adult. It would be interesting to see if people are trying to make a program that simulates the learning capacity of a toddler.

    Computer intelligence will evolve in a way that is different from human intelligence. Though, I admit the exercise of trying to simulate human intelligence could lead to insight into both AI and human intelligence. Nevertheless, if we restrict our ideas about AI to human definitions of intelligence, we limit the potential of AI that will eventually exceed us.
  •
    Mar 8 2012: I have written a number of blog posts on this and related questions. The topics below transition from where we are and why we're "not there yet" with creating humanlike AIs, through how to create non-intelligent machine learning systems that at least do useful things, through some views on what we should be doing to create humanlike intelligence, through to some musings on intelligence, entropy, the universe and everything. As far as the issue of consciousness, I try not to touch that with a 10-foot pole :-)

    "Watson's Jeopardy win, and a reality check on the future of AI":
    http://www.metalev.org/2011/02/reality-check-on-future-of-ai-and.html

    "Why we may not have intelligent computers by 2019":
    http://www.metalev.org/2010/12/why-we-may-not-have-intelligent.html

    "Machine intelligence: the earthmoving equipment of the information age, and the future of meaningful lives":
    http://www.metalev.org/2011/08/machine-intelligence-earthmoving.html

    "On hierarchical learning and building a brain":
    http://www.metalev.org/2011/08/on-hierarchical-learning-and-building.html

    "Life, Intelligence and the Second Law of Thermodynamics":
    http://www.metalev.org/2011/04/life-intelligence-and-second-law-of.html

    I hope some of this is at least thought-provoking!
    --Luke
    •
      Mar 8 2012: So Luke, basically - without having read these links yet - what are your thoughts on a learning, thinking AI?
      •
        Mar 8 2012: Ken -- most of my current thoughts are in the links above. Happy to discuss once you've had a chance to peruse them :-)
        •
          Mar 8 2012: OK, I've read the first two of them, and yeah, it goes along the same lines as what I thought, which is uneducated. I've kind of followed how Intel has stayed on course with Moore's law, but this year or last year it "ticked", and there's no "tock" till two more years? And it's been a programmer's nightmare trying to develop around the multicore bottleneck.

          I know this is not what I asked, but I can't see today's chip development ever getting to what Kurzweil states unless a new element or design is introduced. Here's what I found trawling one day.

          http://scitechdaily.com/penn-researchers-build-a-circuit-with-light/

          It takes me a while to read things, as I tend to think them through and then reread them. It's slow, I know, but it works for me.
      • Comment deleted

        • Mar 9 2012: There isn't a multi-core dilemma. There are just people who don't know any electronics or how to write compilers, and people who don't use modern technology. They think this is a problem because they don't know any better.

          Computers moved past that point long ago. Computers used today for heavy calculations have thousands of cores. With common graphics cards you can do many thousands of calculations in parallel.

          Ever heard of OpenCL? Check it out; you ought to know it.

          Ever heard of high-frequency trading, for example? They now use FPGAs, and can calculate and respond to hundreds of thousands of parameters in parallel. Languages for programming these parallel systems have been around since the 1980s.

          And if you think that's difficult, there are programming languages that fix that too. Check out Mitrion-C.

          This isn't a problem. Even programs like Word and Photoshop and web browsers scale to thousands of compute units. Just insert a modern graphics card in your computer.
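The data-parallel idea in the comment above can be sketched in plain Python (a thread pool here stands in for the thousands of compute units an OpenCL device exposes; for genuinely CPU-bound work you would reach for processes or a GPU):

```python
from concurrent.futures import ThreadPoolExecutor

def work(x):
    # stand-in for an expensive, independent calculation
    return sum(i * i for i in range(x))

data = [1000] * 8  # eight independent work items

serial = [work(x) for x in data]

# The same map, spread across a pool of workers: because each item
# is independent, they can all run at once.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(work, data))

print(serial == parallel)  # True: same results either way
```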
  • Mar 7 2012: I think you need to do a little more work on scoping this idea and conversation.

    What I think Damon Horowitz was describing is "the Chinese Room" problem. I might be wrong, of course.

    Your question poses, for me, lots of questions - which I suppose is the point.

    Define what you mean by replacing humans. Replacing humans in what context? All decision making? Some decision making?

    Define what you mean by intelligence in this case.

    Are you in fact alluding to the difference between symbolic (typically human) and subsymbolic (as far as I'm aware, machines' best attempt) logic?

    If this is the case, are you in fact asking whether we can make machines conscious, i.e. self-aware? That is a very different problem and leads - for me at least - into moral, ethical, and philosophical conversations.
    •
      Mar 7 2012: I believe "the Chinese Room" problem questions where the consciousness lies. Is the conscious being the human in the room, because he has aggregated the information the room contains? Or is it the data source (the book/computer) in the room? Or the entire room itself? This problem is thought-provoking because we readily grant humans consciousness, but when the knowledge does not come directly from a human, it's harder to decide whether other objects are conscious.

      When I say replace humans, I mean replace a human completely. Can we give it an artificial body such that, for all intents and purposes, it acts exactly like a human being, thinks it is alive, and assimilates perfectly with other human beings?
      •
        Mar 7 2012: We could also consider how the "Chinese Room" problem isolates an accepted intelligent being behind a deterministic program, and we find issue in whether the system shares the man's intelligence. This isolation suggests that even if we replicated every cell and function in a human being artificially, down to the atom, that artificial clone could not by our current means be determined one way or the other. Until we have ways to determine that another human hosts the same intelligence we observe ourselves to host, we cannot determine the same of a nonhuman being.
      Mar 8 2012: Mm, you could say that. I interpret the lower-level implication: the human in the room isn't the important point here. The "system" in the case of the Chinese Room is the room and all it contains. This scenario teaches me that the system does not understand what it's doing; it simply responds to an input with a predetermined output. Now, you could argue that human behaviour and intelligence are simply billions of cross-talking Chinese Rooms, and therefore that human intelligence is an emergent behaviour, but I think this is hard to prove.
  • Mar 11 2012: From an ethical point of view, I think technological machines and devices lack a lot that is desirable in human intelligence. We often forget that man is not only a deductively processing brain, but also a heart that loves, cares, and feels. For instance, machines produced by technology are stuck when confronted with a set of data they were not programmed for. My position is that technology is a useful tool, but it cannot do without human intelligence. Human intelligence can transcend to reach new heights.
    •
      Mar 11 2012: Agreed - but couldn't we teach technology to care and to feel? Two questions go with this:

      1.) Do we really want this? Don't we lose our uniqueness then? It is the same with animals - we believe that man can do specific things that animals cannot... But maybe the dolphin in the zoo is playing with us, rather than we with him?

      2.) If we want it, is it really still good technology? Imagine a car that is too afraid to drive at night, or a gun refusing to fire out of empathy for the enemy... A strange question, I know - but a question that is logical once you want to make technology more intelligent and more human.
        Mar 14 2012: I am happy with the way you answered. However, I have the feeling that you are afraid to concede that technology cannot be given such feelings... Mankind is able to play with the material aspect of nature, but things beyond this materiality have proven to be beyond its reach. For instance, if you like movies, you have probably watched Bourne: scientists tried to control a man's loyalty and obedience, and in the end had to face the only result reached so far - that these feelings are metaphysical. Maybe one day we will achieve it, but for the time being... no illusions there.
        The other thing you said was about our uniqueness as human beings. Do not worry about that... I often argue that the scientific world has repressed so much of our uniqueness that we think it is the only thing left to lose. I hope you do not think that, given feelings, machines would be stronger than we are. Maybe, maybe not. We are more than what is seen or measured; our uniqueness is beyond the current measures.
        Mar 14 2012: Interesting - technology that feels might not necessarily be helpful. I think the hardest part of replicating human intelligence exactly is finding a way to copy the emotions and seemingly irrational thoughts of humans. Sometimes I decide to sleep in instead of doing work that I know I have. I can see AI making logical decisions and being more efficient than humans at some tasks, but I find it hard to imagine them having emotional responses. I am unaware of how exactly our feelings are activated by our brain, so modeling this might be easier than I know.
        Also, did you know there was a TED conversation similar to this: http://www.ted.com/conversations/1528/artificial_intelligence_will_s.html
  • Mar 10 2012: Some of the posts within this thread refer to the P-Zombie (philosophical zombie), if not by name, then at least in concept.

    The p-zombie is essentially an advanced automaton that acts identically to a human being but possesses no consciousness.

    My argument is that a p-zombie is an inherently contradictory idea if you have an inkling of how our own consciousness works.

    Essentially, our actions - the things we do, the things we are capable of - betray some degree of our internal minds. That we show a capacity for learning, for cognition, for information processing... is because we do have those capacities. We can't show them if we don't have them. And we can't show the emergent results of massively parallel, modular, auto-associative, probabilistic brain functions if we don't have those things.

    And that... is pretty much the nature of our consciousness. The commingling of all these complex signal-processing units - their iterative interactions, in the timely manner in which they occur, relative to each other as they are - can only result in the sensation of consciousness that we experience.

    That said, if you suppose that we had the ability to capture all cases, present and future, and provide an output for each unique case (of which there would be an infinite number), then I suppose the idea of the p-zombie would have some traction. Like a Chinese Room: receiving one input, then throwing out another.

    But it would be more difficult to account for all the cases than it would be for an auto-associative, adaptive learning system - like the brain - to emerge through natural forces. It would also be less efficient for us to design an 'intelligent' system that way than to design one that could adaptively learn.

    That said... will the nature of machine consciousness be even similar to human consciousness? Doubtful - unless you were to replicate the critical conditions that make us 'human', including, but not limited to, processing speed.
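The contrast drawn above - enumerating every input/output case versus letting a system infer a rule - can be sketched with a deliberately tiny example (the rule and the cases are invented for illustration):

```python
# Lookup-table "Chinese Room": every case must be written down in advance.
table = {1: 2, 2: 4, 3: 6}

def room(x):
    return table.get(x)  # unseen input -> no answer at all

# Adaptive learner: infers a rule from the same cases and generalizes.
examples = [(1, 2), (2, 4), (3, 6)]
slope = sum(y for _, y in examples) / sum(x for x, _ in examples)

def learner(x):
    return slope * x

print(room(100))     # None: this case was never enumerated
print(learner(100))  # 200.0: the inferred rule covers unseen input
```

The lookup table must grow without bound to cover new cases, while the learner's single inferred parameter covers them all - the efficiency gap the comment describes.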
    •
      Mar 11 2012: Huh. That's a good point, George. Never thought about that.
  •
    Mar 9 2012: Not to spam the forum with lots of my rambling and randomly researched material, but there's a field of science whose job is to apply quantum theory to cognitive phenomena:

    http://en.wikipedia.org/wiki/Quantum_cognition

    All of this just reinforces the idea that the universe is an amazingly vast, intricately detailed place! With so many things still hidden from us in plain sight, it makes me pause and ask: what if we *are* just matter? Looking at the way matter actually behaves, we could be *just* matter and STILL have things exist beyond what we've seen so far - or even beyond what we *can* see - *because* of as-yet-unknown rules and interactions that science could determine.

    Take for instance, notions within the scientific community that the universe is just a hologram:

    http://www.wired.com/wiredscience/2010/10/holometer-universe-resolution/

    http://www.universetoday.com/59921/holographic-universe/

    or the idea of encoding information on the surface of a black hole:

    http://www.theory.caltech.edu/people/preskill/blackhole_bet.html (Summary of article---"Your most precious theories have been (or must be) altered. Pray I do not alter them further.")

    In fact, as we learn more about ourselves and the nature of the universe, thinking machines become downright plausible, and the idea of referring to tools and machines as "just matter" almost does an injustice to the mystery that still exists in plain ol' 3-Dimensional space.

    So that raises the question: if we invented machines that can do these things as well as we do, and that outperform us in the only tests we know of to determine consciousness, why wouldn't they be conscious as well?

    But I am in emphatic agreement with you about scrutinizing these things and trying to determine if they truly encapsulate what it means to be conscious! Constant scrutiny and a healthy skepticism of ALL things is important!!!
    • Mar 9 2012: Hey, Logan. You sound like a deep thinker. I like it! Here are some ideas for you to ponder; let me know what you think.

      1. The human brain, the seat of consciousness, is too big, too warm, and too wet for any meaningful quantum phenomenon to contribute significantly to consciousness.

      2. Quantum matter can be transformed into energy; that's where it comes from and that's where it goes. But take this line of reasoning all the way back to the Big Bang... where did the Big Bang come from?

      3. Quantum computing is coming along... have you heard of biological computing? I have heard some things along the lines of DNA using a four-symbol coding scheme, where computers have traditionally used a two-symbol (binary) scheme. I think there is some research being done into organic computing, using more than two symbols for information coding.

      4. We (human beings) are just matter with special properties.

      5. Are humans truly conscious? Are dogs? Are ants? What is the definition of consciousness? How do you personally define consciousness?

      Thanks for posting and reading!

      Chase
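A small aside on point 3: DNA is better described as a four-symbol alphabet than a "four-dimensional" one, so each base carries exactly two bits. A minimal sketch (the particular bit assignment is arbitrary) shows the two schemes are interconvertible - the four-letter code is denser per symbol, not fundamentally different in kind:

```python
# Arbitrary 2-bit assignment for the four DNA bases.
BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}
BASES = {bits: base for base, bits in BITS.items()}

def dna_to_bits(seq):
    return "".join(BITS[base] for base in seq)

def bits_to_dna(bits):
    return "".join(BASES[bits[i:i + 2]] for i in range(0, len(bits), 2))

seq = "GATTACA"
encoded = dna_to_bits(seq)
print(encoded)                      # 10001111000100
print(bits_to_dna(encoded) == seq)  # True: lossless round trip
```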
      •
        Mar 9 2012: Howdy, Chase! Well, I try, but no matter how deep I think, it never quite seems to me to be enough. Does it ever? :)

        1. Yeah, I'd come across a few things stating something to that effect. But take a look at this:

        http://www.bbc.co.uk/news/science-environment-12827893

        It's an article about the way we smell things, and how we are actually absorbing quanta and processing them as smells. One of our basic five senses could be quanta-dependent or even quanta-based.

        I personally found this quite intriguing when I heard of it! Let us not forget how intimately linked with memory our sense of smell is, and the subsequent implication of quanta being at least tangentially related to *that* function, which is itself intimately tied to studies of intelligence in human beings. A long chain of dependencies, any one of which further research could crack or change, to be sure - but to a deep thinker, perhaps everything looks like a deep complexity, and I am making something of nothing. :)

        2. Yeah, you kinda answered your own question there. To answer where the energy for the Big Bang came from: there is a theory floating around that multiple universes exist, one right next to the other, and that, undulating and vibrating as such entities are wont to do, two of them collided - and the energy imparted started the ever-expanding universe we see around us today.

        http://io9.com/5714803/does-our-universe-show-bruises-where-it-collided-with-other-universes
        http://discovermagazine.com/2009/oct/04-will-our-universe-collide-with-neighboring-one
        http://www.cosmosmagazine.com/news/3151/something-big-found-beyond-edge-universe

        3. Yeah, I've heard about it! Seen a couple amazing things, too!

        http://singularityhub.com/2010/10/06/videos-of-robot-controlled-by-rat-brain-amazing-technology-still-moving-forward/

        Kinda creeps me out, to be honest, but in a good way
        • Mar 10 2012: A quantum is simply the minimum amount of any physical entity involved in an interaction, right? So everything works on quanta, because a minimal physical entity (and usually more) is involved in every interaction.

          Retinal cells (which are actually extensions of the brain, and the only part of the brain you can see from the outside - the eye doctor looking at your retina is looking at neural tissue emanating from your brain) can detect and respond to a single quantum of light: a photon.

          In my current line of thinking, quantum mechanics will not explain consciousness. The brain is too big, too wet, and too hot for quantum phenomena to contribute to brain processes. I'm no expert here, but it makes sense to me.

          And a point on agnosticism, agnostics don't just say we don't know, they say we can't know...

          What I do think will explain consciousness is systems theory. The functional unit of the brain is the neuron. Neurons fire on an all-or-none principle (this is binary: either 0 or 1). But the language of the brain is in neuronal firing patterns, so the brain isn't binary; I'm not sure what it is in this respect.

          The secret of consciousness (in my best guess opinion) lies in the brain system (the flow of information) being able to turn back on itself. The brain is able to monitor itself and the body that houses it in "real time."

          Have you heard of Douglas Hofstadter? He wrote a book titled "I Am a Strange Loop." Hofstadter's idea of strange loops is interesting, and I believe it may have some implications for the phenomenon of consciousness.

          Obviously I could be way off.
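The all-or-none firing mentioned above can be sketched as a toy threshold unit (weights and threshold invented for illustration; as the comment says, real neural coding lives in firing *patterns*, which this single unit does not capture):

```python
# A neuron sums its weighted inputs and fires (1) only if the sum
# reaches a threshold; otherwise it stays silent (0). All-or-none.
def neuron(inputs, weights, threshold):
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

weights = [0.5, 0.5, -1.0]  # two excitatory inputs, one inhibitory

print(neuron([1, 1, 0], weights, 0.8))  # 1: enough excitation to fire
print(neuron([1, 1, 1], weights, 0.8))  # 0: inhibition keeps it silent
```

The output is strictly binary, yet varying the threshold and the mix of excitation and inhibition already gives nonlinear behaviour - a hint of why firing patterns, not single spikes, carry the information.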
        •
          Mar 11 2012: To Chase - then you know Gödel, Escher, Bach as well?

          It is no accident that many physicists turn philosopher - at least in Germany this was the case during my student years. The transformation of the material into the immaterial - from quanta or whatever else to consciousness, to religion, to values - is and remains a mystery despite all research.

          I think the Heisenberg uncertainty principle explains very well why we can never answer the question: by explaining you inevitably interact - watching is interacting - and so you change the object. The immaterial watching influences the material watched, and vice versa.

          Explanation at that point turns into a self-referential cycle - with the self being more than the individual. For me, the Nobel Prize-winning game theory (Prof. Selten) offers some insight into how these explanation cycles work.
      •
        Mar 9 2012: In addition to the quantum computing idea: I think I posted articles about quantum computing and quantum data in a response somewhere else in this forum - the sister posts to the original post here. But here's one of the links, which is kinda cool:

        http://www.dwavesys.com/en/technology.html

        4. Yeah, maybe we are---but I'm kinda agnostic. "Don't know if we are, don't know if we aren't" etc. Just wanna see the proof and judge for myself. :) We get a lot of people who conjecture, and postulate, and even argue, but very little *real* proof.

        5. See number 4, hehehe. But I love reading about whatever anybody finds!
      •
        Mar 10 2012: @Chase: Well, to be perfectly honest, I try to soft-soap my agnosticism by saying "I don't know." Science and logic-minded folk find that more palatable than saying "NO ONE CAN KNOW", because that assertion almost makes it seem like they shouldn't be doing exactly the sort of things they are doing (when I feel they most definitely *should* be doing their thing). It seems to make them feel like they're spinning their wheels. And religion and God-fearing folk find it more palatable because saying "I don't know" still leaves them with the possibility that they/God knows. Which, as far as I know, *might* be true.

        Just being polite, is all. :)

        As for what a quantum is - yes, your definition is correct. But my point was that there is a quantum interaction going on with smell. If such interactions occur in a hot, wet environment like the nose, why not in the brain, even if in a way we do not yet understand? And from what I've been able to gather (which is far from conclusive), the reason so few olfactory receptors in our noses can detect such a wide variety of smells is an effect known as "quantum tunneling."

        When a molecule binds to a receptor site, an electron is transferred from the molecule to the receptor (feel free, at any point, to correct me if I am not accurately describing the process!), activating the receptor and causing the molecule to vibrate in a way specific to that molecule - and the subtle differences in vibration are detectable, which our brains pick up on. If our noses can do that, why wouldn't our brains be able to do other things with quantum states? Even better, take a look at this article:

        http://www.abovetopsecret.com/forum/thread714014/pg1

        This article suggests that DNA can act as "a spin filter" and can distinguish between two quantum states. I'm not entirely sure what all of that means, but it would seem that quantum interactions happen more often, and in more ways, than we know.
      •
          Mar 11 2012: I'm not getting religious, Logan, but doesn't it say in Revelation that even his image will condemn you? So for me it's a foregone conclusion that someone writes a bloody good bot, or that AI is achieved at some date. I know one can look at that sentence and read anything into it, but it stood out as peculiar, as it didn't fit.

          I think most researchers say "quantum aspects" when they talk about neuron fuzziness, which just means we're still on the journey to figuring the brain out. I don't think one should look at the brain as systems and subsystems, because of those people born with only 30% of the usual brain matter who are nevertheless fully functional, with no differences in any way - or was it 2%? I can't remember.
      •
        Mar 10 2012: @Chase: All I'm saying is that there is plenty of "reasonable doubt" about the role of quantum mechanics in the functions of consciousness, so we shouldn't rule anything out yet without further research.

        And no! I haven't heard of Douglas Hofstadter or his book, but it sounds interesting! A quick Google/Wikipedia search reveals a man who believes self-referential systems are the primary causes of consciousness. Sweet! Sounds like a good idea.

        A constantly self-referential system, combined with self-reinforcing neural networks (where each newly acquired memory affects memories already formed), combined with a nearly infinite array of contexts in which to operate in sounds sufficiently complex to describe all the many ways people act. I will definitely have to investigate further!

        And yeah, you've got a point about the strictly on-or-off state of neurons. There's a threshold to meet, though: in biological systems, a neuron must receive a certain level of excitation before it fires. It's this extremely variable threshold that might account for the less-than-linear processes of the brain. Heard of astrocytes?

        http://en.wikipedia.org/wiki/Astrocytes

        These cells are responsible for a lot of things; chief among them, they help facilitate the firing of neurons. One astrocyte can connect to many thousands or millions of neurons, either inhibiting or stimulating neuronal transmission via certain chemical reactions in the brain. One more system that must be accounted for in some way.

        OH! As for quantum interactions happening to macro-scale objects, well. . . Just you take a look at this.

        http://www.ted.com/talks/lang/en/aaron_o_connell_making_sense_of_a_visible_quantum_object.html

        Sure, they had to super-cool the material---but macro-scale quantum interactions are possible. Who's to say they don't have some other property that makes them viable at room temperature, like with DNA?
        • Mar 11 2012: Hey. Yep, I've definitely heard of astrocytes and other glial cells; I quite enjoy neuroscience! Scientists once thought that glia only held the brain's neurons in place, but we now know that glia assist with function as well as structure.

          As for the quantum phenomena thing: quantum phenomena only happen at specific scales of size and temperature, right? Again, I'm not a theoretical physicist and I am not a biologist; I could certainly be wrong in my assumptions. But I'm sticking with it: the human body/brain is way too big and way too warm to have quantum phenomena significantly affecting brain/mind processes such as consciousness. Even a receptor protein in the brain is too big for quantum phenomena (I believe). And the brain is way too crowded and hot (lots of kinetic energy, i.e. motion). Quantum phenomena may exist for nanoseconds in isolated parts of the brain (maybe), but any spreading of the wave function, or tunneling, will collapse into the most probable single state immediately. Brain processes typically happen at the millisecond or longer time scale (I believe).

          I remain in my position: the human brain is too big, too hot, and too wet (molecules and elementary particles continually interacting with each other) to have quantum phenomena playing any significant role in brain function (of course, I could be wrong).

          I believe research into self-referential systems is a better path towards understanding and replicating human consciousness.
      •
        Mar 11 2012: Well, I'm not gonna try to convince you if you've already decided it can't happen. But that quantum phenomenon for smell receptors, while still hotly debated and contested, does have a pretty loyal following.

        They're also discussing the role of quantum mechanics in photosynthesis. Another hot, wet environment.

        http://blogs.discovermagazine.com/cosmicvariance/2011/03/25/quantum-smell/
        • Mar 11 2012: Hey! Yeah, I'm not saying it can't happen. I'm saying the probabilities are so small that, personally, I don't believe it is happening.

          My reasoning again: nothing is in isolation in the human body. Any quantum phenomena will instantly "evaporate" due to constant interactions. Even if an electron is moving just close enough to a receptor as to not classically interact with it, the electron (and any superposition) is constantly being bombarded by extracellular (and intracellular) fluid that contains particles, some as small as ions. I just don't see there being enough time for quantum phenomena to play any role beyond the deep role they play in all matter.

          Consciousness is a product of the overall brain, right? It doesn't seem to come from any specific area (I could be wrong here, but I believe this is the case). And the overall brain is a relatively large, wet, and hot object with no parts in the vacuum-style isolation or near-zero temperatures required for quantum phenomena to do their weird quantum thing.

          As far as the smell receptor thing goes. I don't believe smell receptors are isolated enough either. There are air and bio structures constantly interacting with odorants and with the receptors of the olfactory tissues.

          There's just too much going on in a relatively extremely chaotic environment for any non-classical phenomenon to emerge.

          But again, I'm no expert. I just don't see it being possible. How do you propose this stuff works? Don't objects have to be in unimaginably cold, extremely small, and/or extremely solid states to exhibit quantum phenomena? How do the researchers in the articles you posted know that quantum phenomena are taking place? Isn't their proposed idea simply a hypothesis?
      • thumb
        Mar 12 2012: Well, they've done extensive testing, from what I can gather, but nothing definitively conclusive. They've come close, though. Check out the wikipedia article on it:

        http://en.wikipedia.org/wiki/Vibration_theory_of_olfaction

        But if you're looking for more *solid* proof of quantum mechanics at work in hot, wet environments on the macro-scale, further research into photosynthesis has proven rather fruitful:

        http://www.sciencedaily.com/releases/2010/02/100203131356.htm

        Of course, it won't satisfy your wish to be "certain". They had to cool the algae down by a lot in order to even be able to track the way the energy moved through the protein, which leaves their results ambiguous at best. But I would hazard to say that, to even notice such an effect operating on a protein to begin with, there must be *something* in it that makes such a design practical.

        The following article elaborates a bit better on which quantum principle is being employed; it's an application of "quantum computing", according to the article. I haven't actually explored more of this phenomenon (it's mid-semester for me, so I've been rather busy with studying), but the idea that larger-scale applications of certain aspects of quantum mechanics are not only *possible* but occur naturally seems to be a semi-legit, if not just merely tolerated, one:

        http://www.scientificamerican.com/article.cfm?id=when-it-comes-to-photosynthesis-plants-perform-quantum-computation

        All in all, it seems like, IF something that has to do with the conversion of light into energy can use a quantum computing principle to find the most efficient route by making the light go *all* the routes until it finds the most efficient one, why couldn't something similar happen in the brain?

        At the very least, it warrants further research; the self-referential loop theory of consciousness, while a worthwhile research pursuit in its own right, is no more or less worthwhile than this appears to be right now.
      • thumb
        Mar 12 2012: As for which parts of the brain do what, according to this diagram, the frontal lobe is in control of consciousness:

        http://science.education.nih.gov/supplements/nih2/addiction/activities/lesson1_brainparts.htm
        • Mar 12 2012: I believe consciousness is largely thought to be a distributed process rather than a localized one. But yeah, I think there is evidence to suggest that some processes of consciousness are disturbed when there is damage to the frontal lobe. I believe sense of self is disturbed in some ways. Anyway...

          I think the self-referential loop is more worthwhile because it doesn't have to prove that it exists. There are many questions about the very existence of quantum phenomena in the body, or on macro, warm scales in general.

          We can already see that self-referential systems exist in the brain, as the very act of thinking changes the way we think. We can think about something, realize an insight about it, and change the way we think. Thinking changed thinking; it referenced itself. We just need to put time and energy into tracing out the incredibly complex self-referential system that is the human brain (or at least that's how I see it).
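
          As a toy illustration of a rule that references itself (a sketch only, not a claim about neurons), here's Hofstadter's Q-sequence in Python: each new value is computed by reaching back into the sequence's own earlier values to decide how far back to reach.

```python
# Hofstadter's Q-sequence: Q(n) = Q(n - Q(n-1)) + Q(n - Q(n-2)).
# The sequence consults its own earlier values to decide how far back
# to look -- a minimal example of a self-referential rule.
def Q(n, memo={1: 1, 2: 1}):
    if n not in memo:
        memo[n] = Q(n - Q(n - 1)) + Q(n - Q(n - 2))
    return memo[n]

print([Q(i) for i in range(1, 11)])  # [1, 1, 2, 3, 3, 4, 5, 5, 6, 6]
```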

          And a bit more about the QM thing. Again, I think my brain (my brain referencing itself) is a classical object, just like an apple. My brain is bigger and hotter than an apple. Do you think it is possible to have light quantumly interact with an apple? I know people are saying that MAYBE photosynthesis takes advantage of QM, but that's a big maybe. No one has been able to demonstrate it, right? Or did I miss something in the articles you provided?

          Sorry I'm talking in circles, but my point is: the brain is a classical object, just like apples and my entire body (which my brain constantly interacts with, instantly collapsing any superposed wave functions). The classical-object brain is the seat of consciousness. We need a classical approach to understanding consciousness.
      • thumb
        Mar 12 2012: There're a lot of people who think a lot of things. People get wrapped up in their own idea of the way things work to the point that there is no longer room for other paradigms.

        Take for instance the proliferation of computer programming languages.

        Say one computer programming language is Turing complete. But rather than just trying to improve that one language to implement whatever behavior or design he wishes to implement, the programmer gets all hot and bothered with the language in general and decides to redesign a language from the ground up, custom-tailored to the way he thinks.

        Now there are two Turing complete languages, and rather than trying to improve one language or the other to implement whatever behavior or design he wishes to implement, the programmer gets all hot and bothered with the language in general and decides to design a language that takes the elements he likes from both languages and adds additional functionality that better supports the design he wishes to implement.

        Now there're three Turing complete languages, any one of which will do whatever it was you wanted to do to begin with.

        What I'm getting at here is that just because QM doesn't necessarily jibe with *your* paradigm of the brain, humanity, and the world in general doesn't make it any more or less worthwhile. And if self-referential loops *truly* explained the concept of consciousness better, wouldn't we, I dunno, have conscious machines by now?

        The whole programming thing's been around for the greater part of two centuries (if you count Ada Lovelace's design as being the first program), and recursion has been around since AT LEAST the invention of Lisp in the late 1950s. We've had 60-some-odd years to perfect the use of self-referential loops.

        Just saying it might be time to consider other alternatives, no matter how zany they may appear. Test the bajeezus out of them, and if something shakes loose, all the better.
      • thumb
        Mar 12 2012: It's the "Hammer" problem. To someone with a hammer, the whole world looks like a nail. Your hammer is self-referential loops. And there's nothing wrong with that! When I've got a problem that needs that particular hammer, I'd much rather go to a dude who specializes in the use of that particular tool. Assuming my problem can be solved by that tool.

        The problem is that people and the universe are so physiologically complex, with so many different moving parts, or both complex and vast, that when you add even a small amount of fuzziness about what all of these things are to begin with, they begin to serve as Rorschach tests, and people eventually just see the things they want to see in them.

        I couldn't care less *either way*; if consciousness can be entirely explained and recreated using self-referential loops, awesome! Let's see the AI you've developed! That would make my millennium!

        Insanity is doing the same thing over and over again expecting different results. "What's wrong with your AI, Jim?" "Oh, it's not working." "Have you tried using a loop yet?" "Yeah, it doesn't seem to be working." "Well, you need a bigger loop." "Well, I've kinda maxed out my memory, computing cycles, and bus speed. I can't make it any bigger." "Ah, well, then, you need more of them."

        We've given it a good half-a-century, and while there may be some good life yet left in it, let's branch out a bit, pursue other avenues. Maybe we might discover something that helps us understand loops better, if nothing else.
      • thumb
        Mar 13 2012: But I digress; we have focused so heartily on whether or not QM is a viable candidate that we have altogether foregone the conclusion that self-referential loops do indeed give rise to conscious thought. Is that necessarily accurate? I asked Google, and here's what I found.

        My first search brought me to this paper:

        http://books.google.com/books?id=Ys5PNmv_waUC&pg=PA139&lpg=PA139&dq=Self-referential+loops+experiments+artificial+intelligence&source=bl&ots=xsyLJQ0oX9&sig=u_3VMvmGWgQ586t7YHESdoUsoz4&hl=en&sa=X&ei=uLReT4OsJOjZ0QHi--GsBw&ved=0CDIQ6AEwAw#v=onepage&q&f=false

        which introduced the concept of Dynamical Systems theory, a search of the term which brought me here:

        http://en.wikipedia.org/wiki/Dynamical_systems_theory

        which begins to outline what DST is and what it is used for, primarily the studies of systems that are "mechanical in nature" such as "Planetary orbits as well as the behaviour of electronic circuits. . . " (I hope you'll forgive the excessive use of direct quotes from the various articles; I would hate to commit plagiarism, and I am relatively unfamiliar with the subject matter at hand).

        Of particular interest in that article is the "Related Fields" section, under the "Chaos Theory" heading, part of which reads: "Chaos theory describes the behavior of certain dynamical systems – that is, systems whose state evolves with time – that may exhibit dynamics that are highly sensitive to initial conditions." If I ever wanted to describe the mechanical nature of the mind, I would be hard-pressed to find a better statement.

        Getting rather curious, I clicked into the "Chaos Theory" tab, and I found, about half-way through, that my eyes kinda glazed over, because I hadn't yet found anything directly pertaining to the self-referential nature of consciousness. So I refined my search to "Chaos Theory as it pertains to self-referential loops" which yielded many things, including:

        http://paradox-point.blogspot.com/
      • thumb
        Mar 13 2012: And another thing of rather great interest was this:

        http://appraisercentral.com/research/Chaos%20Theory.htm

        which was a great introduction to the history of Chaos Theory, and introduces the most basic concept that accurately reflects the field: The Butterfly Effect.

        According to the article: "The butterfly effect states that the flapping of a butterfly’s wings in Hong Kong can change the weather in New York. It means that a miniscule change in the initial conditions of a system, in this case the weather, is magnified greatly in the end conditions of that same system."

        Intriguing! Imagine this: The sexual act has just occurred, and an egg has just been fertilized. The First Cell begins to divide, and so on and so forth, bringing with it all that that entails: increasing body size, identifiable organs, and, slowly, consciousness. If you imagine each division of cells as one iteration of the loop, then each subsequent loop is like a snap-shot of every dynamical system present within that life! Not just its awareness, but everything that that awareness is hooked into: visual, aural, olfactory, sensory, and taste. And a small, minute change in that first cellular division (maybe the egg being just a smidge higher or lower in the uterus) could drastically affect every subsequent iteration of the system!

        So what, then, does this mean for our self-referential loop model? It means that we are all just---chaos. That there is an order by which we *unfold* but that order is, by definition, a series of chaotic events. "Chaos works in order and within all order there is chaos." By this definition, market trends would be nigh impossible to predict! And yet, you can kinda guess that, if the cost of corn goes down, people will probably reduce supply in order to drive it back up. Just human nature. An order in the chaos that creates us.

        The problem, however, is scale. You may have a pattern, but you never know to what degree it will manifest itself with any given man.
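
        That sensitivity to initial conditions is easy to demonstrate numerically. Here's a sketch in Python using the logistic map at r = 4 (a standard chaotic toy system, not a model of anything biological): two trajectories starting one part in a billion apart track each other at first, then decorrelate completely.

```python
# Logistic map x -> r*x*(1-x) with r = 4.0: a classic chaotic system.
# Two nearly identical starting points diverge exponentially -- the
# "butterfly effect" in miniature.
def trajectory_gap(x0, eps=1e-9, r=4.0, steps=60):
    a, b = x0, x0 + eps
    gaps = []
    for _ in range(steps):
        a = r * a * (1 - a)
        b = r * b * (1 - b)
        gaps.append(abs(a - b))
    return gaps

gaps = trajectory_gap(0.2)
print(gaps[4])         # still microscopic after 5 steps
print(max(gaps[40:]))  # order-one once the error has compounded
```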
      • thumb
        Mar 13 2012: So what you're "really" saying when you say "Consciousness is a self-referential system" is "Boy, it's a cluster of unimaginable proportions!" and rather not as simple, straightforward, or fruitful as you made it sound! How would you isolate every possible starting condition that might give rise to a human being and ever hope to accurately replicate it, even with an iterative approach?

        What if, through some research, we discover a system of equations that we think describes how consciousness works and it produces a Lorenz-Attractor-like plot? Sure it's iterative, but it never repeats. There would be no regularity, and would give rise to none of the predictability we've come to expect from our fellow human beings.

        Unless---Unless there were some function, some mechanism, within our consciousness that, maybe, allows us to run through *every* possible thought and allows us to pick and choose which ones are relevant to us? Kinda like that QM thing?

        Unless I missed something in all these other articles, chaotic systems are unimaginably complex. I happened upon an article that talks of a guy named Poincaré. I happen to know that one of the Seven Millennium Problems pertains to something called the Poincaré Conjecture, and a dude named Grigori Perelman built upon the research of a guy named Richard Hamilton and his work on using Ricci flow to attack the problem. It took a century, but they did it.

        I may be overly simplistic, but whereas QM may be unproven, at least a little bit more research could rule it out today, whereas this stuff--? If it takes as long to solve this as it did the Poincaré Conjecture, we're looking at another 40-50 years easy. And let us not forget that the nature of thought and consciousness has occupied people since the VERY beginning, mathematicians and philosophers alike.

        Isaac Newton, who helped lay the foundation from which DST sprang, said, "I can calculate the motion of heavenly bodies, but not the madness of people."
      • thumb
        Mar 13 2012: As is the case with all things, perhaps the truth lies somewhere between?

    If you think about our brains, not only are there loops (as you suggest) and instances of advanced parallel processing (like sorting through multiple paths and trying to settle on the most efficient one), our brains are also immense databases of atomic facts.

        The *sky* is *up*.

        *Grass* is *green*.

        Don't *eat* the *yellow* *snow*.

        http://cyc.com/cyc/technology/cycrandd

        http://en.wikipedia.org/wiki/Cyc

        Perhaps multiple loops running in parallel sort through this database of facts or *rules* if you will.

        Perhaps the decision trees that connect these facts are themselves contained in a database, and the most efficient one gets decided upon using some QM-style phenomenon. Certain things that don't lend themselves well to recursion, but where the required outputs are known, could be executed via supervised learning methods (like backpropagation techniques, although continual neuron-weight updates might be a certain kind of loop). And other things do lend themselves to looping, and/or their datasets are *not* known (and as such, an incremental breakdown would be necessary IF there were no rule in the database that could be modified to match the novel input).

        A combination of all these techniques would be necessary in creating a *strong* AI because, certainly, there are certain phenomena that lend themselves better to each of these approaches than others.

        All of this means we'd have to optimize our search of the rule database itself, yes? And we could do that using---another rule. A rule about rules. One rule to rule the rules. The Golden Rule. Or just categorize the rules into sub-sets of which every rule must belong.
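
        A purely hypothetical sketch of that last idea in Python (all of the facts, categories, and function names here are invented for illustration): group the rules into sub-sets, and use one meta-rule to decide which sub-set to search.

```python
# Hypothetical toy rule base: atomic facts grouped into categories.
RULES = {
    "color":     {"grass": "green", "snow": "white"},
    "direction": {"sky": "up", "ground": "down"},
}

def meta_rule(question):
    # The "rule about rules": route a question to one category
    # instead of scanning the entire fact base.
    return "color" if "color" in question else "direction"

def answer(question, subject):
    category = meta_rule(question)
    return RULES[category].get(subject, "unknown")

print(answer("what color is it?", "grass"))  # green
print(answer("which way is it?", "sky"))     # up
```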
      • thumb
        Mar 14 2012: Hey, after all that stuff, I happened to be randomly reading something on facebook and, not knowing how to work it cleverly into the conversation, just decided to blurt it out because it's tangentially related to AI.

        http://www.smartplanet.com/blog/thinking-tech/how-to-augment-our-intelligence-as-algorithms-take-over-the-world/10588

        It's about the rise of algorithms within trading and advertising businesses and how they seem to be taking on aspects of a predator/prey relationship. It's kinda neat.

        And this:

        http://www.smartplanet.com/blog/thinking-tech/next-breakthrough-computers-that-understand-emotions/6363

        And this:

        http://www.smartplanet.com/blog/thinking-tech/computer-types-out-messages-by-reading-your-mind/6411

        and possibly more, because I'm bored, and I don't wanna do homework on Spring Break. These all seem to be technologies geared towards "bridging the gap", using the analytical power of one and the parallel computing power of the other to create a sort of hybrid intelligence. And shore up biological deficiencies or injuries. Which is a viable path towards creating machine intelligence, if you think about it. Sufficient amounts of lab-born neural networks--?

        Oh yeah, and that whole thing about computer programming languages made me think about other computer programming languages---I read a while back about some programming languages whose creators had a rather vicious sense of humor. Take a look at some of 'em:

        http://computersight.com/programming/five-strangest-programming-languages/

        I think I actually might want to use the one that you have to use lolspeak in. . . Or the one that you sometimes have to ask "please" before it will run.
  • thumb
    Mar 8 2012: If computers can win Jeopardy!, I reckon they're more than ready to replace our political decision makers
    • Mar 8 2012: A politician needs to be able to make an independent decision. At this point in time, no computer can do that.
      • thumb
        Mar 8 2012: I for one welcome our cyber overlords - but seriously, folks, 99% of a politician's decisions are based on getting people to like him - which is not obviously the best criterion for the country as a whole. I understand that an AI that acts with as much passion and stupidity as a human isn't yet on the market - but a computer program that can make economic decisions based on data might just be possible.
        • Mar 8 2012: I will never support AI overlords as long as they don't have a sense of consequence. Politicians don't SEEM to have that trait, but an AI would not have it at all.
  • thumb
    Mar 8 2012: What we call Artificial Intelligence differs from human intelligence in one tremendous aspect: it completely and totally lacks anything to do with consciousness, qualia, sense of self, intuition or any kind of subjective experience. AI offers nothing but a slight semblance of awareness and reasoning. It is nothing but an elaborate and extensive layout of logic gates, with all sense and logic nothing but complicated patterns of binary data.

    If human intelligence could be rendered anything similar, it wouldn't require any sentience at all. Even machine learning is nothing but trial and error, with any and all success strictly confined to cold algorithms. The inorganic technology of today can only hope to mimic life, to give the appearance of behaving in a manner similar to organisms. I have a feeling we'll move almost entirely on to organic technology, and modifying the mysteries of life as they've come to exist, long before succeeding in creating a lifeless program on par with the mind of a human being.
    • thumb
      Mar 8 2012: Are you saying consciousness, qualia, sense of self, etc are systems that have no baser components? As humans, don't we want definitive proof if that's the case? And if not, the baser components should be simpler, easier to grasp, and explainable. From that point, we should be able to create artificial systems that mimic it.

      It seems like you're focusing on weak AI strategies (machine learning, fixed algorithms that do specific tasks, etc). We can use computers to simulate neurons, and from a blackbox perspective, they're no different than an organic neuron. Right now, even by observing neurons in a human brain, we are unable to make the leap between a system so definitive, and the complex system known as human consciousness. I think that once we can understand that, we will be able to successfully mimic it.
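
      To make "emulate neurons" concrete, here's a minimal leaky integrate-and-fire model in Python (a standard textbook simplification, not a full biological neuron). From the outside, all you see is a spike train produced in response to an input train, which is the black-box sense I mean.

```python
# Leaky integrate-and-fire neuron: the membrane potential integrates
# input current, leaks over time, and emits a spike when it crosses
# threshold, after which it resets.
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    v = 0.0          # membrane potential
    spikes = []
    for current in inputs:
        v = leak * v + current   # leak, then integrate this step's input
        if v >= threshold:
            spikes.append(1)     # fire...
            v = 0.0              # ...and reset
        else:
            spikes.append(0)
    return spikes

# A steady sub-threshold input still accumulates into periodic spikes:
print(lif_neuron([0.3] * 8))  # [0, 0, 0, 1, 0, 0, 0, 1]
```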
      • thumb
        Mar 8 2012: I'm sure everything has lesser components, or so we could reason. But do the components of consciousness resemble the lesser components of machines, blind currents interacting in algorithmic ways? I feel we're incredibly limited in assessing this scientifically, as the only consciousness we've come to know is our own. Subjective experience isn't something we observe in a microscope. And this has an impact on the way we view the world around us, the way we reason things to exist. Consciousness itself is a tremendous mystery: not just the senses and reasoning it involves, but the actual nature of experiencing and being. That experience, qualia, isn't something we can replicate with programs or machinery. More than likely, before we even care to try, we'll be crafting organisms into machines, likely without regard or awareness to their own qualia.

        Even a simulated neuron, with today's technology at the very least, breaks down to bits and registers, pushing and popping. Our attempts to simulate all behaviors we observe in nature share a common limitation - they're absolutely agentless. And I think this stems from our limitations in assessing the world around us, because such agents aren't something to be observed. And it leads to a flaw in our reasoning: the mystery of consciousness is no longer a question of its role in the universe; instead we want to understand how such a phenomenon could arise from dead matter. The world as we see it is dead, but only because we lack the ability to observe its inner being. We want definitive proof of things, but this itself is a limitation to its own end.
        • thumb
          Mar 8 2012: I agree with you about the mysterious nature of consciousness. One of the great things about the quest for AI is the fact that it's forcing people to confront this mystery again. If we do create AI, no more can people just say "oh, it's just chemical reactions in the brain".

          Subjective experience is innately mysterious because science is rooted in objectivity (or at least the scientific method attempts to be). We can't even prove the existence of our own subjective experience, so how are we going to be able to tell whether a computer can experience it.

          If you program a network of simulated neurons to dream, does it experience the dream? Will it be afraid after a nightmare, or inspired by some strange subjective metaphor that came while it was "unconscious"? That's pretty hard to prove, considering I can't even prove that I experience a dream.

          Although, that doesn't mean that computers can't be conscious. For all we know, many things are conscious. Consciousness could be as ubiquitous as gravity, or some strange property of electricity. I can't prove my kitchen table doesn't undergo subjective experience any better than I can prove I do.

          As much as I am a firm believer in my own consciousness, I wouldn't hold it as a criterion for AI, for the reasons above. It is just too elusive and unmeasurable.

          At some point, though, we will have to confront these kinds of questions. Do computers feel pain? Do they have rights? It's very similar to debates around animals, but now we have to either expand or confine our ideas about consciousness in regards to a very different type of entity.
        • thumb
          Mar 9 2012: I ask again, what is qualia? How do we classify qualia? Just because we give a label to our experiences as "qualia" doesn't mean that's the rawest essence of the experience. These experiences differ from person to person; synesthesia is one obvious example of how they differ. Do people who experience synesthesia have a condition that isn't normal? That means there's a way to attribute qualia to something systematic.

          Also, I agree with scott when he asks whether a table can be conscious. We know that consciousness is present in a system of neurons. Neurons themselves are not conscious. Are you saying this agent that controls or affects neurons is conscious or contains some attribute that creates consciousness? What's to stop us from finding the origins of these agents?
  • thumb
    Mar 8 2012: With the word "replace" in the question my answer will be a no. My reasoning is fairly simple - a human is the sum of its genes, its experience and the lives of those who came before it. There is something intangible about that last bit - our decisions are based not only on our own genes and experience, but also on interpreted history. And the key word there is "interpreted"; you can feed all the history of the world into an AI and make that enter into its decision making process, but it will never be able to emulate the "interpretation factor".

    So no, I don't think technology can replace human intelligence. But in a narrow scope, it CAN surpass it - by a lot. The first thing we need to do, though, is make a computer that calculates outside of right and wrong, or outside the binary domain. For an AI to be successful it needs to recognize that there is such a thing as more right, more wrong and neither right nor wrong. I think this is more of a challenge than people realize.
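
    For what it's worth, there is already a formalism for "more right and more wrong": fuzzy logic, where truth is a degree between 0 and 1 rather than a bit. A minimal sketch in Python, using the standard Zadeh operators (purely illustrative):

```python
# Fuzzy logic: truth values are degrees in [0, 1], not just 0 or 1.
def f_and(a, b):
    return min(a, b)   # a conjunction is only as true as its weakest part

def f_or(a, b):
    return max(a, b)

def f_not(a):
    return 1.0 - a

# "The decision is mostly right (0.8) AND somewhat safe (0.4)":
print(f_and(0.8, 0.4))  # 0.4
print(f_or(0.8, 0.4))   # 0.8
print(f_not(0.25))      # 0.75
```

Values near 0.5 are exactly the "neither right nor wrong" middle ground that binary logic can't express.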
    • thumb
      Mar 8 2012: It seems like you are interpreting (pun intended) AI systems as only discrete entities with a very algorithmic core. The problem of the strength of AI is more substantial than that. Currently, in the field of neuroscience, we are unable to make the connection between the microscopic systems (neurons), whose inputs and outputs are very well defined, and the macroscopic system (our consciousness). Right now, we can emulate neurons very well; proponents of strong AI believe that with enough emulated neurons we can replicate consciousness. The question at hand goes beyond whether we can artificially create consciousness; it's a question of "what is consciousness?", because we are unable to tease it out of the known systems (the human brain).
      • Mar 8 2012: The real question is whether or not we will believe it is in fact consciousness once we have created it.

        There's no reply to your reply, so I am dropping this above the line.
        I never said human. I said consciousness. My use of the word believe stems from the fact that we cannot know.
        • thumb
          Mar 8 2012: The real question, which Oliver Milne has been pushing countless times in this conversation, is whether or not we KNOW it's in fact conscious. Machines can be made to pass a Turing test without having any real intelligence, and if we "believe" it's a human, then we are lying to ourselves. Part of being able to create a conscious system is to definitively show that it is conscious. If we are unable to show without a doubt that it is conscious, then we have fallen for hokum.
        • thumb
          Mar 8 2012: Then that begs the question: how do we test for consciousness? Ignoring for a moment the immense difficulty in creating consciousness, let's devise a test for consciousness on the only entities we suspect of having consciousness now---humans.

          And if we can't even show that we're conscious, does that imply we've already fallen for hokum?
        • thumb
          Mar 9 2012: @Logan. There's something known as the "three aspects of consciousness". There's also the concept of "theory of mind". Scientists have devised well-accepted tests for those aspects in humans and animals. The mirror test checks for one aspect: the ability to recognize oneself. There's also the ability to sympathize with others by recognizing external events as if they were one's own, and finally there's the ability to take previous experiences and apply them through deduction to future events. Many animals have facets of the three, but not all three.

          Using these tests, we've been able to find out that babies develop these abilities in steps and do not fully gain all three until months after birth.

          And as evidence that these facets of consciousness are tied to real-world systems, watch this video about mirror neurons: http://www.ted.com/talks/vs_ramachandran_the_neurons_that_shaped_civilization.html. It would seem like we have evolved with systems in place to aid the sympathetic aspect.

          So it would seem like we have tests for consciousness. If anything, we should scrutinize the three aspects and theory of mind to see whether they truly encapsulate what it means to be conscious.
        • Mar 9 2012: Those tests are a starting point, but I don't think they address the 'hard problem' of consciousness (http://en.wikipedia.org/wiki/Hard_problem_of_consciousness), which is the part that really matters. It's possible, and a little disturbing, to imagine a sort of android that acts exactly like a person, including in those behavioural tests, but which doesn't have consciousness. If we didn't look inside its head (I mean that literally), we could never tell whether or not it was a person. You suggested elsewhere that perhaps nothing unconscious could manifest all the signs of consciousness. That'd be a fantastic discovery if it were ever confirmed, but, on the face of it, it seems like something that would be almost impossible to find out without first knowing what consciousness is.
      • thumb
        Mar 8 2012: That is actually not my interpretation :)

        I have no doubt whatsoever that we will one day spawn a conscious AI whose thinking pattern mimics that of a human, nor do I doubt that such an AI will one day be vastly more intelligent than any human. As I said, technology CAN surpass us - but only in a narrow scope. Something *will* be lost in the translation between the biological and the technological. I sincerely doubt we will be able to infuse an AI with the "human condition".

        You may argue that I'm wrong because if we can create a technological system perfectly analogous to the way the human mind operates, the "human condition" may come forth naturally. Then I counter with this - if we are able to do that, what we will have is the technological equivalent of a caveman with a library full of history books. Yes, the caveman may be incredibly intelligent and he may have access to all of our history, but the interpretation factor cannot be replicated artificially.
        • Mar 9 2012: Maybe not your human condition. But that caveman-robot might equally despair at the impossibility of creating a human capable of understanding the caveman-robot condition :P
  • Mar 7 2012: You simply have to ask: why are we creating machines? Seriously, why?
    If your only answer is to replace humans, then the answer is YES, we will find a way to make a machine do everything a human can do and more...
    However, I really do not see that as the goal of making machines. We build machines to help humans, to support humans, etc. Even if machines become self-aware (they will; it's only a matter of time), I do not see a war against them. I see a future where machines and humans will be congruent, seamlessly woven into a new matrix.

    This said, if we continue on our current path this little rock in space will be uninhabitable by humans so our only hope of immortality will be the machines we create to continue on...
  • Mar 7 2012: The human brain is a machine, and is ONLY a machine. Any discussion that is not based on this simple fact is seated in fantasy. Many people seem to think that it is the ultimate machine, but it absolutely is not. One estimate I've seen puts human computing power at a mere ~0.1 petaflops, whereas many of the TOP500 supercomputers far exceed this performance already. Human-like AI is just a matter of reverse engineering. We will figure it out. It will be done very soon. It seems to me an easier task to build an imperfect replication of our intelligence than to build something to augment human intelligence. Therefore I would expect to see AI that exceeds our intelligence before technology that augments human abilities.

    Even if we lived in a magical universe where the ~maximum~ amount of "intelligence per volume" was realized in the human brain, once AI arrives on the scene there would be many things to do to make AI vastly superior to human intelligence. I am thinking of tricks such as scaling (making the brain bigger), and reconfiguring the topology of the neural network, such as is seen in the fMRI scans of savants and individuals with autism. Fortunately, the reality that we live in is one where the human brain provides sufficient intelligence for our individual survival. The theoretical maximum intelligence density is many, many orders of magnitude higher than that seen in our brains.

    The public opinion is that AI will only asymptotically approach the abilities of humans. This is an untrue, egocentric worldview. Damon Horowitz should maybe retitle his talk "Why MAN-MADE Machines CURRENTLY need People". Remember that we are machines too. Human-like AI is a matter of current research.

    You ask "what if.. would it be able to think?" I would say there is no reason to think otherwise. In fact, imagine what something many orders of magnitude smarter than you COULD think!? Fleeting thoughts would be equivalent to centuries of modern scientific discovery.
  • Mar 7 2012: I suppose you have to consider that as biological organisms we have had over 200 million years to evolve and develop our neural pathways and intelligence. Our knowledge is always expanding, and if at the moment it seems that AIs cannot process certain things, it may be the case that in 5-10 years they can.

    But in terms of emotion and feelings: again, our knowledge currently may not allow us to program emotions into AIs. It would be very complex nevertheless, as emotions are governed by your personality variables and inherent parameters. E.g. compliments: do you like to be thought well of? If yes, then would a statement presenting you in a positive light support this? If yes, then based upon this decision structure, it would induce positive feedback within the neural net.

    The efficiency of such systems could be superior to that of organic life. This is because we have organs based upon our evolution within the environment, where our food is required to be processed and the useful parts used to provide energy to our components. With a mechanical life form, such an array of systems would not be required, because a single power source would suffice, with no need of a disposal system to deposit the 'lost energy'.

    But coming back to consciousness: if a self-adaptive program was used for the AI to develop based upon its surroundings and demands, then it would be the same as a newborn child, whose neural pathways develop depending on their environment too. In addition, the AI would not necessarily require a division of its neural net into two parts (i.e. conscious/sub-conscious), and would therefore be able to recall information in real time and perform complex calculations.

    In conclusion, I think it really depends on how the AI is programmed and its architecture. Can it simply match patterns at a superior rate, or does it have the processing capacity to interpret these and to understand them as well?
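The compliment example above could be sketched as a toy feedback rule; the class name, the boolean "values_approval" variable, and the scalar "mood" signal are all invented for illustration and are nothing like a real affective model:

```python
# Toy sketch of the decision structure described above: a statement that
# matches a personality variable nudges an internal "mood" signal.
# Purely illustrative; not a serious model of emotion.
class ToyAgent:
    def __init__(self, values_approval: bool):
        self.values_approval = values_approval
        self.mood = 0.0  # accumulated feedback signal

    def hear(self, statement_is_positive_about_self: bool):
        # "Do you like to be thought well of? If yes, does this
        # statement present you in a positive light?"
        if self.values_approval and statement_is_positive_about_self:
            self.mood += 1.0   # positive feedback in the "net"
        elif self.values_approval:
            self.mood -= 1.0   # negative feedback

agent = ToyAgent(values_approval=True)
agent.hear(statement_is_positive_about_self=True)
print(agent.mood)  # 1.0
```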
  • thumb
    Mar 7 2012: Whether AI will be "strong" enough in the next 50 years to equal human intelligence may be the wrong question. Martin Ford argues in The Lights In The Tunnel that it will probably be strong enough to automate most jobs, especially those of Knowledge Workers (and of course blue collar workers, who are already being displaced). Taking an objective view (and a deep breath), I think he is probably right. He has a prescription for how to manage an economy of mostly leisure time, but the point is, AI doesn't have to be smart enough to be your friend, or a good dinner guest, to be a completely disruptive technology. It just has to be as good or better at doing a specific/documentable range of tasks. And I think it will be there by 2050.
  • Mar 7 2012: Intelligence, maybe. Can technology replace human creativity? That seems a bit more complicated.
  • Mar 7 2012: I say ask the being if it thinks it is alive or if it is a machine. If it thinks it is alive then who are we to say otherwise?
    • Mar 7 2012: It can only think it's alive if it can think. But something doesn't have to be able to think to pass a Turing test. The danger of your approach is that we might make unconscious machines that wrongly insist that they can think.

      Consciousness is something that really happens. There is a fact of the matter of whether something is conscious or not. And if we're going to make machines that do impressions of being conscious, we really, really need to know what that fact of the matter involves.
        • Mar 7 2012: I'm not sure if we can ever satisfactorily answer this question. Is consciousness really a yes-or-no question, or is there a grey area of being partially conscious? I'm also thinking of the evolution of humans from less conscious ancestors.
        • Mar 7 2012: I agree with you, but we have to try. And imagine how fantastic it would be if we succeeded - we'd finally have an answer to one of the biggest questions there is.
  • thumb
    Mar 14 2012: Yeah ... for sure !
    What is technology ?
    Technology is the study of performing a particular task using various techniques ... that is how the word is derived: Tech-nology !
    So it has to be related to the actions of humans with respect to various techniques !
  • Mar 13 2012: Logan, hey! I am enjoying our conversation here; I hope you are as well. Here's what I'm thinking at this point: I must clarify that I don't think self-referential loops are the only answer to explaining consciousness. I simply think they are a better route than QM (but again, I'm no expert, just a thinker). I think computer science is a good route to go towards explaining consciousness as well. Have you talked to anyone who knows how to program a chat bot? Have you ever talked to one? Try it out here: http://www.personalityforge.com/dynachat.php?BotID=24007&MID=23957. I tried asking the bot questions/giving directives such as "Are you conscious?" "Do you have feelings?" "Pick a number." "What is your favorite food?" I think the bot, or rather the bot's programmer (or is it the bot itself), is rather clever... The point of the bot is, is it conscious? Have human intelligence and human consciousness been achieved through technology? Could this tech replace human consciousness?

    I don't know... what's your take on this? And just think, the website I provided is pretty simple; that is to say, it's not a research university and it's not the government. Think what DARPA must have!

    What's your take? Is the bot a representation of human intelligence being replaced by technology?
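For a feel of how little machinery such a bot can get away with, here is a minimal ELIZA-style sketch; the rules and canned responses are invented for illustration and have nothing to do with the bot linked above:

```python
import re

# A minimal pattern-matching bot: ordered (pattern, response) rules.
# Real chat bots use far larger rule sets plus conversational state,
# but the core mechanism can be this simple.
RULES = [
    (r"are you (conscious|alive)", "I process text. Whether that counts as being {0} is your call."),
    (r"do you have feelings", "I can talk about feelings, but I only match patterns."),
    (r"pick a number", "7. (Chosen by a fixed rule, not a preference.)"),
    (r"my favorite (\w+)", "Why is that your favorite {0}?"),
]

def reply(message: str) -> str:
    text = message.lower()
    for pattern, response in RULES:
        match = re.search(pattern, text)
        if match:
            return response.format(*match.groups())
    return "Tell me more."  # fallback when no rule fires

print(reply("Are you conscious?"))
print(reply("My favorite food is pizza."))
```

The interesting question is exactly the one raised above: nothing in this loop "understands" anything, yet scaled up far enough it can feel uncannily like a conversation partner.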
    • thumb
      Mar 14 2012: And yeah! I've taken a look-see at one! If you're looking for some *extreme* examples of bots, man, check this out:

      http://www.cleverbot.com/

      That guy's hooked into at least one server, maybe more, and is running checks against every single thing ever said to it! I asked if it was lonely once---and the system crashed. I think it was right around maintenance time though. The point is---I think it's got a spark. Kinda like when you take the blunt side of a knife to a flint---one spark flung off it. Humans have lotsa sparks flying every which way. Consciousness surely isn't a "yes/no" decision; it's a very tricky grade.

      And when we achieve it, I think people will *still* say something about it. But they'll turn to disagreeing with it on *qualitative* grounds rather than *quantitative* grounds. "Sure it's accurately calculated, derived, and applied reasoning at the human level---but is it the sort of decision a flesh-and-blood human would've made?"

      Which is going to be the point where you just have nay-sayers and proponents, like in any issue. It'll reach a boiling-point---and then people will just have to deal with the fact they may never know.

      As for QM vs. Self-referential loops (and other possible AI sources) we could keep going back and forth on it, and truth be told, I'm as big a fan of the "How many angels can dance on the head of a pin?" kinda debates as anyone, but until it's won-and-done, it's just two old ladies sitting in a darkened room complaining that nobody's changed the lightbulb, instead of actually *doing* something that changes things one way or another, like testing for it. I mean that politely. :) Self-referential loops will always be there; let them take a bit of a break, experiment with something new, and then they can go back to it if it doesn't pan out.

      And I'm willing to keep debating it! Let's just be honest and up-front about the possibility of it leading anywhere.
      • Mar 14 2012: Logan,

        Hey! Thanks for directing me to that bot. As I said before, I don't think self-referential loops are the exclusive way to think about the brain, I think they are just a good starting point and direction. I also think bots have a lot to say about consciousness. I think I might try programming a bot on my own to see with what I can come up! I believe bots can be programmed to be indistinguishable from human/human chat interactions.

        You said that consciousness isn't a yes/no decision, that it is graded. I agree with you. But isn't it interesting that we do say this person/thing is conscious while this person/thing is not. It seems we are able to talk about consciousness in a yes/no fashion, at least to some degree.

        And I think we are moving into an era where we need to stop thinking about consciousness in terms of only belonging to flesh and blood beings. Just because flesh and blood was the first place we noticed consciousness doesn't mean it's the best or only.

        Regarding my distrust of QM playing a primary role in consciousness: you're right, you and I could sit here interminably and debate what it is that is really going on. At this point I'm saying, by all means, investigate, investigate, investigate! Theoretically it doesn't seem possible, but that is for the experiments/studies to decide. So what do they say? Has anyone even come close to observing QM phenomena in the brain? I know you provided those articles, but weren't they pretty much asking "what if" without providing any evidence or answers?

        What do you do? Are you a student? Do you have access to research resources? I would love to look at stuff like this experimentally.

        But again, I stay with my theorizing. There is too much going on in the brain for weird QM phenomena to be happening... any QM effects will be instantly (on a much faster time scale than consciousness occurs) collapsed into classical effects...
  • thumb
    Mar 13 2012: Perhaps the day computers get emotional AND rational, imho. Aren't emotions part of what we call "intelligence"? Many of our decisions are based on emotions, if not the majority of them.
    That said, more rational, or low AI computers without emotional IQ/AI could lead to a HAL computer-like deciding to nuke half of the world population because it would preserve the Earth, which cannot cope with the current demographics and consumption rhythm.
    I totally agree with the assertion that the study of the human brain will help create intelligent computers, we're just in Day 1 of understanding it, what happens next will be interesting for sure :)
  • thumb
    Mar 13 2012: No
  • thumb
    Mar 12 2012: Hi Howard
    There are a dozen steps that could be defined between "Stimulus Response", a level-one form of intelligence, and the "human mind", which would be at level 12. It is probable that, with the increasing speed of processing power and the increasing size of networked information, artificial intelligence will be able to function at level 10. There are two functions that the electrical mind can not ever do - no matter how powerful a processing or memory function it may have. First, it can not extend its awareness in three dimensions into an environment where it has no information. It can only process information that it has to process. Second, the artificial mind can not experience the awareness of the unknown. The AI can know or not know. It can not experience the awe of not knowing. Many of the characteristics which make us both human and better than any AI are our awareness and awe of the unknown, the mysterious, the spiritual, and the creative.

    Someone once said that an infinite number of chimpanzees typing for an infinite number of years would reproduce all the writings that have been produced by mankind. There is a very clear answer to this supposition. No they will not. Never in an infinity of years will an infinity of chimpanzees produce even one full page of writing. Any number times zero is zero.

    We have seen AI machines fool readers into thinking that they were human by clever programming techniques. So it is probably true that we could be fooled into thinking an AI was as smart as we are. But clever trickery on the part of a programmer is not the same as the true intelligence that is generated from within a human. The goal is not to fool us. That can be done. The goal is to create a truly thinking AI and that can not be done. We will probably build fantastic AIs but we must never think that they love us.
    • Mar 13 2012: A very interesting point of view. All existing AI systems (or at least all that I am aware of) use digital logic for computation, communication and storage. One might think that this cold scheme of 1s and 0s would not be conducive to recreating the kind of fuzziness inherent in the human brain. But don't underestimate the power of computers. You can numerically simulate the actions of a single neuron through the Hodgkin-Huxley equations:
      http://nerve.bsd.uchicago.edu/nerve1.html
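As a rough illustration of what such a simulation involves, here is a minimal single-compartment Hodgkin-Huxley sketch using the standard squid-axon parameters and forward Euler integration; the 10 µA/cm² stimulus and the step sizes are arbitrary choices for this sketch, not anything taken from the linked model:

```python
import math

# Minimal single-compartment Hodgkin-Huxley neuron, forward Euler.
# Standard squid-axon parameters; units: mV, ms, uA/cm^2, mS/cm^2.
C_M = 1.0
G_NA, G_K, G_L = 120.0, 36.0, 0.3
E_NA, E_K, E_L = 50.0, -77.0, -54.387

# Voltage-dependent gate rate functions (alpha/beta for m, h, n).
def a_m(v): return 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
def b_m(v): return 4.0 * math.exp(-(v + 65.0) / 18.0)
def a_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
def b_h(v): return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
def a_n(v): return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
def b_n(v): return 0.125 * math.exp(-(v + 65.0) / 80.0)

def simulate(i_ext=10.0, t_max=50.0, dt=0.01):
    v = -65.0
    # start the gates at their steady-state values for the resting voltage
    m = a_m(v) / (a_m(v) + b_m(v))
    h = a_h(v) / (a_h(v) + b_h(v))
    n = a_n(v) / (a_n(v) + b_n(v))
    trace = []
    for _ in range(int(t_max / dt)):
        i_na = G_NA * m**3 * h * (v - E_NA)
        i_k = G_K * n**4 * (v - E_K)
        i_l = G_L * (v - E_L)
        v += dt * (i_ext - i_na - i_k - i_l) / C_M
        m += dt * (a_m(v) * (1.0 - m) - b_m(v) * m)
        h += dt * (a_h(v) * (1.0 - h) - b_h(v) * h)
        n += dt * (a_n(v) * (1.0 - n) - b_n(v) * n)
        trace.append(v)
    return trace

trace = simulate()
print(f"peak membrane potential: {max(trace):.1f} mV")
```

With a sustained stimulus the sketch produces repetitive spiking, which is the point: four coupled differential equations per neuron are tedious but perfectly computable.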

      And if you can simulate one, then why not two or three? Why not an entire brain? That is, in fact, what these researchers are trying to do:
      http://bluebrain.epfl.ch/cms/lang/en/pid/56882
      http://www.ted.com/talks/henry_markram_supercomputing_the_brain_s_secrets.html

      And if you aren't impressed by that, there's a school of thought that holds that programmers just aren't doing it right, or at least, that they aren't setting the right goals:
      http://www.ted.com/talks/jeff_hawkins_on_how_brain_science_will_change_computing.html

      Needless to say these attempts are nowhere near the point at which they can claim to have reproduced human intellect, and consciousness is a whole different ballgame, but I think that it is, at best, equally premature to say that it can't be done.

      As for the monkey anecdote, the book Coincidences, Chaos and All That Math Jazz by Burger and Starbird explains it in detail, but it would be a logical fallacy to say that the probability of typing a given page is zero. The probability of a perfectly random monkey typing a given letter at a given time is 1/26. The probability of typing a given ASCII character is 1/128. The probability of the monkey typing a given 3000-character essay with access to all the ASCII keys, on one try, is (1/128)^3000 or 2.34 x 10^-6322 -- a very small number but not zero. So with unlimited time, the monkey would, in fact, produce the essay. However, if probabilities at infinite time had any relevance, I would be in Vegas now pulling the old Martingale!
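The arithmetic can be double-checked in log space; note that computing (1/128)**3000 directly would underflow an ordinary floating-point number to 0.0, which is exactly the "rounds to zero but isn't zero" trap the anecdote plays on:

```python
import math

# Verify the monkey-typing probability in log space.
chars = 3000          # length of the target text
alphabet = 128        # ASCII keys, each assumed equally likely

log10_p = chars * math.log10(1.0 / alphabet)
exponent = math.floor(log10_p)
mantissa = 10 ** (log10_p - exponent)
print(f"P = {mantissa:.2f} x 10^{exponent}")  # P = 2.34 x 10^-6322
```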
      • thumb
        Mar 13 2012: Just an interesting piece of information that I came across a year or two ago. One of the testing grounds and benchmarks for artificial intelligence is actually the game of Go. Go, a strategy game originating in China over 2000 years ago in which black and white pieces compete for territory, is simple enough for a person to learn all of the rules in a day. However, so far, no computer has managed to even compete with any professional players and some of the best programs can be beaten by an advanced beginner or lower intermediate player. Since the game is played on a 19x19 board, rough numerical analysis estimates that the number of possible Go games far exceeds the number of atoms in the known universe. Thus, Go programming requires a different route from chess programming, and not just brute calculation ability. It cannot just simulate the techniques human players use to play the game, but it must judge situations in which the outcomes of multiple groups of stones on the board are not clear. Because of this, Go has been a testing ground for many different AI techniques including pattern matching, neural networks, and the genetic algorithm. It’s worth watching to see how the programs evolve.
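The scale claim is easy to sanity-check with a loose bound: 3^361 board configurations (every one of the 361 points empty, black, or white, ignoring legality, and far smaller than the number of possible games) already dwarfs the commonly cited ~10^80 atoms in the observable universe; both figures here are order-of-magnitude assumptions, not exact counts:

```python
# Loose upper bound on 19x19 Go positions: each of the 361 points is
# empty, black, or white (ignoring legality, which only lowers the count).
positions_upper_bound = 3 ** 361
atoms_in_universe = 10 ** 80  # common order-of-magnitude estimate

print(len(str(positions_upper_bound)))            # 173 digits, i.e. ~1.7e172
print(positions_upper_bound > atoms_in_universe)  # True
```

This is why brute-force game-tree search, which works tolerably for chess, is hopeless for Go, and why the field has turned to pattern matching, neural networks, and genetic algorithms instead.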
  • thumb
    Mar 12 2012: Logan and Chase: Enjoyed your conversation and learned new things. Thanks for all the references!
    • thumb
      Mar 13 2012: Hahaha, join in, Lynn! Would love to hear ya' weigh in, or throw out additional information!
  • thumb
    Mar 11 2012: probably not as technology is created by humans, so our errors and, often, our limited viewpoints would shine through.
    If however you take ALL knowledge in the world whether correct or not, maybe some genius somewhere can create a program to sort out misinformation.
    As long as we limit our "knowledge" to so-called facts, we will not progress, but continue to "regress".
    • Mar 13 2012: Lisi, I’m glad that you brought up the fact that technology is inherently limited. There is no such thing as 100% accuracy in science or engineering. Every device that we create works within some acceptable error tolerance. And, we can’t forget the fact that the performance of man-made devices degrades with time. Even if a machine is working “almost perfectly” on its first run, it’s only a matter of time before various bugs begin to appear. Finally, we can’t neglect the fact that humans are able to learn and adapt in response to change. While machine-learning algorithms are in use today, machine learning is nowhere near as advanced as human learning. In order to build a machine that’s as intelligent as a human, we would first have to figure out all of the intricacies of human intelligence. I think that everyone can agree that our understanding of the human brain is still very limited. And, if we don’t fully understand the human brain, how can we hope to replicate it in a machine?
  • thumb
    Mar 11 2012: Hmm. It could just be a matter of time, for sure. If you think about it, context-based humans or rule-based machines - regardless, we're all just a different collection of energy/electricity. It's a matter of engineering a machine that uses/intakes energy in the way most connotative of what makes human experience possible. And perhaps, despite all of the serendipity and randomness of human emotion/experience, there's some underlying pattern to it all. Revealing that pattern may clear the way for us to mimic it in robotics.

    However, humanity still stares at itself in the mirror as if it's just meeting itself for the first time. And until we've transcended this state, I'm sure our robots will mimic this limited understanding of ourselves.
  • thumb
    Mar 11 2012: I am not a programmer and do not know much beyond MATLAB ...
    But the program of a human is something like:
    1. See your environment.
    2. Take its pattern.
    3. save in memory.
    And if a problem occurs:
    1. What is the unsuitable stuff ?
    2. What is your destination ?
    3. Make a pattern from 1 to 2.
    4. Match the pattern from step 3 with one of the patterns in the memory.

    I think simulation of this path for an electronic brain is hard but not impossible.
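That outline could be sketched, very loosely, as a pattern memory with nearest-match lookup; everything here (the class name, the "unsuitable -> destination" string encoding, the use of difflib as the matcher) is an illustrative stand-in, not a serious model:

```python
import difflib

# A loose sketch of the outline above: observe -> store patterns,
# then, given a problem, form a pattern and match it against memory.
class TinyMind:
    def __init__(self):
        self.memory = []  # step 3: saved patterns

    def observe(self, environment: str):
        self.memory.append(environment)  # steps 1-3: see, take pattern, save

    def solve(self, unsuitable: str, destination: str):
        pattern = f"{unsuitable} -> {destination}"  # problem steps 1-3
        # problem step 4: match against the closest stored pattern, if any
        hits = difflib.get_close_matches(pattern, self.memory, n=1, cutoff=0.0)
        return hits[0] if hits else None

mind = TinyMind()
mind.observe("dark -> turn on light")
mind.observe("hungry -> eat")
print(mind.solve("dark", "turn on light"))  # "dark -> turn on light"
```

The hard part, of course, is everything this sketch waves away: how patterns are extracted from raw senses and what "closest" should even mean.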
    • thumb
      Mar 13 2012: Hi Amirpouya,

      I think your simplification of the computing process of the human mind is pretty spot on. However, it brings up a question for me. To me, it seems like an artificial mind would need to go through as many iterations as a human has life experiences to fully gain "human intelligence." And even then, how does a computer make decisions that we as humans deem impossible? A computer can master facts and memorize information, but I feel that how it interprets them is nowhere near how a human does. You can assign as many numbers and weights and formulas as you like, but at the end of the day, given a situation where the right answer may be the irrational one, how can we expect a computer to make that distinction?
      • thumb
        Mar 13 2012: And extend your question to group choices: A decision by one person for his own well-being might mean to do step A. But a community decision in a city might result in step B - rational for the group - irrational for 45% of the individuals... can a computer learn and interact that way?

        I guess we tend to underestimate that our rational individual choices are bounded by groups we are acting in.. this is my daily experience in city development. Here is a lecture which is a good example of what we can compute in a city - and what not: http://www.labkultur.tv/en/blog/deltalecture-arrival-cities-1
        • thumb
          Mar 13 2012: Bernd, if you were to try and code an AI, would binary be sufficient? I'm no programmer, but the way I see it, we would have to start at the bottom, modelling the amino acids, and build up from there. I don't think we can rely on equations; what I mean is, a neuron won't fire off the same signal constantly. What are your thoughts on this?
      • thumb
        Mar 13 2012: Hi Harnsowl -
        I hope I got your meaning ...
        A computer should not be programmed to react like a human.
        If it has all of a human's passions, it will become like an infant.
        And if it has the same ways of cognition as a human (seeing, etc.),
        plus the ability to make itself better over time - which, I believe, if a machine has it, will destroy all of mankind - I think it will be a complete human.
        But one other thing remains: all of us feel WE are someone other than our bodies.
        For example, I feel I am someone apart from this body, and I just analyze its doings.
        This feeling makes us feel we comprehend data in a different way than a computer does.
        But this SELF is just an independent system for making things better for the body by correcting its programming.
        But I don't think this system deserves to be called a "soul".
        I said it's hard but not impossible.
  • thumb
    Mar 11 2012: I expect at some stage we humans could have brain aids, boosters, etc. (add-ons) that will enhance our thinking, memory, data access, etc.
    We could be "wired" - switch the lights on with a thought.

    Technology could also enhance human intelligence.
  • Mar 11 2012: A human is a map of your world, and you here are in a kind of map which means something in relation to something else. Now one of the criteria of creation, or the Big Bang as it's known, is to create a world of your own, with strong emphasis on 'I', with infinite possibilities; but this world of yours, which is infinite, is still a box or a dimension or plane. What is beyond, you will not know, because you were not created with the requirements for such. There are also, within this huge existence, many boxes with content and many yet to be filled, and there is a definite direction. Technology could be one of these aspects, along with religion, science, etc.

    Conclusion :
    1. Every human is a map of a world.
    2. Every person has to work on 'I' and fill the infinite world or box of self.
    3. People and boxes can be interdependent and be social, but there is direction, because all this is within a big box of this plane, dimension, or entity.
    4. Technology is one aspect, as are religion, science, and morality; other concepts need life too.
  • thumb
    Mar 11 2012: I leave interpretations up to the "experts"...

    In vitro hearts = great! I've got nothing against Homo sapiens looking for the elusive immortal code. Like you are saying (a bit flippantly), in vitro hearts could "save" lives.

    Machines could replicate anything. That doesn't mean that the brain's computing-power equivalence has been reached. Remember: all the computers on the internet at any one time are the equivalent of the power of one human brain. Machines are pre-zygotic in this respect.

    I don't know; "qualia" may be synonymous with Jesus or God? It's a mystery, i.e. one that is okay. We have no reference; we have plenty of anti-references (apparently many animals are inferior compared to Homo sapiens' brain power: personally I think that is a bit of pseudoreplication seeping into science, i.e. ego)...
  • thumb
    Mar 11 2012: Try this one on for size: we have man and woman on this planet. We have different minds and intelligence; to make one without the other, to me, means something is missing. What if the missing link is the other mind you didn't simulate? Would it matter?
  • Mar 11 2012: Several years ago Congress funded a study to attempt to determine at what point Computers might become Sentient. My recollection is that it was quietly put into Law only to have the Religious Right wake up and repeal it. This discussion is full of Opinions and Assertions with little more than passion and feeling to back those up. IM
    • thumb
      Mar 11 2012: Very true, this discussion is full of opinions with passion backing them up. This is what I want to try to get above. These feelings or passions are due to what we classify as "qualia". It is hotly debated whether qualia is something systematic and explainable, and thus measurable with a machine, or something unexplainable, and thus truly an aspect that makes humans unique (sentient, conscious, and unreproducible).
  • thumb
    Mar 11 2012: "Can technology replace human intelligence?" - questions like this put an end to that result. Human experience is a lot more about family, love, and other random mushy stuff. So mechanically speaking a superficially sentimental human could be recreated. No matter how deep the machine gets it will never get deep enough to reach the core of what is beyond the artificial. Anyway, back to skynet (-;
    • thumb
      Mar 11 2012: From what you know, what do you think the "core" is? When you say "artificial", what do you think it means? Artificial doesn't mean it's any lesser than the reference. Artificial simply means its source is not the same as the reference.

      Say we have the technology to grow hearts in-vitro, we would call it an artificial heart. If we use that heart in a person, is that heart any lesser than the original heart? Is that person then any lesser than he/she was before?

      You mention human experience; can we not replicate the systems that would enable machines to process the same experiences? In philosophy, this experience is called qualia. Can you state without a doubt that qualia is something only inherent in human beings? If it is only inherent in human beings, what processes does the human have that can process qualia and why is the process something we cannot reproduce?
  • thumb
    Mar 11 2012: Yes, I believe we can replace part of human intelligence with machines! But the thing is that we can only replace the part of human intelligence we have discovered within ourselves, which means that as we evolve and find more intelligence within our brains, we can then replace this newfound intelligence with machines too. So, humans will always be ahead of machines and not the other way around. Infinity to grasp and master! My opinion...
    • thumb
      Mar 13 2012: A great insight. Let me point out that this makes the assumption that the AI we build will not have any emergent intelligence beyond what we've built into it. To me, this is like saying that when you draw a pattern, the ONLY pattern is the one you meant to draw!

      What about when we build the first machine that ponders its own existence and aspects of its own thought? This is something we know we do, so why wouldn't we try to implement it in a machine? Once the machine is capable of self-inquiry, what stops it from digging down deeper into its psyche and ours, building more upon itself ad infinitum?

      Are we capable of making something this complex? We can only guess. But is there a philosophical argument that proves it to be impossible? I haven't heard anything even almost convincing. But remember evolution: stupid cells made smarter cells.
  • thumb
    Mar 11 2012: Well, we had better come up with a better chip design than we have now. Just one thing: they have already replicated the brain in VR; it was in one of the TED talks, but I can't remember what name it was under. Was it rule-based? I know it was a few years ago. It was operating, but I wonder if they managed to get it to develop connections?

    If researchers find there is spin going on in the neuron, then we have to push to develop the quantum machine, or push the geneticists to come up with a blank clone with its full nervous system and wetware interfaces so we can grow the damn thing like a child. I've already seen the quantum machines they have put out there, and yes, it's going to take a few years before we see any real development.
  • Mar 10 2012: I totally agree with Roy Bourque about the misuse of technological resources. The computer has become an extension of our brain, but humans can not relinquish the right to think.
  • Mar 10 2012: It is only a matter of time. In a way we are the same, but our design is better so far. Among the few abilities machines lack, we have the ability to forget, which is connected to the ability to learn; it is very important, with its pros and cons (effective selective omission from which to further build on). As for the subject of self-awareness, we have to understand where the root of it stems from: we possess senses which machines don't have integrated. We know how important our arm is to us and the consequences of losing it; this gives birth to self-importance and, collectively, to self-awareness. Intuition as well is a complex sense of interpretation which we cannot completely define; that doesn't mean machines will not possess it - it's more like trace connections that some create better than others. You have to ask yourself how a thought process occurs (a snapshot of the brain) and whether you really have a choice or an output; in time we will know for sure. Designs of machines are pretty static from what we have seen and take examples from (with respect to AI, the general public doesn't have great examples). Better design implementations would change the definition of machines itself; it's a question about design. Emotion may be an action to generate a response from another system, or an exaggerated checkpoint due to a temporary or permanent inability to cope (what is love, except for its magical definition?). All of man's best work can go into the 'perfect' (:P) machine, but you can't have it the other way around. I may not be able to be too clear or correct; this is my take and my design :)
  • thumb
    Mar 10 2012: Based on some recent political decisions I do not see a real challenge ... AI is well ahead. However, Artificial Intelligence is no match for natural stupidity. Just sayin ........
  • thumb
    Mar 10 2012: I don't think it's a question of "can it" as much as "should we let it happen". If we do, we may find ourselves on the road to extinction. I see many young people carrying their brains around in their pockets. They feel they don't need to learn because they have an electronic device that gives them instant access to information. So they are more interested in entertaining themselves than learning.

    Computers and machines can outperform humans in raw capacity. Presently, though, they cannot reason, they don't have intuition, they can't feel emotion, they don't love, and they cannot compute outside the range of their programming.

    I believe that intelligence is in the ability to ask questions, to seek answers to those questions, and to expand our horizons from what we learn from the answers. If a computer can be made to do this, then it can be made to become a rational, thinking computer. Our right brain is what gives it color and style. That's another component that would have to be factored in; otherwise you would have a very bland future.
    • thumb
      Mar 11 2012: In response to your statement about "many young people carrying their brains around in their pockets": do you also reject the written word? Do you reject books?

      This is what Plato said in response to the advent of writing:
      "If men learn this, it will implant forgetfulness in their souls; they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks. What you have discovered is a recipe not for memory, but for reminder. And it is no true wisdom that you offer your disciples, but only its semblance, for by telling them of many things without teaching them you will make them seem to know much, while for the most part they know nothing, and as men filled, not with wisdom, but with the conceit of wisdom, they will be a burden to their fellows."

      I find many similarities between his concerns and our concerns about the current state of affairs with regard to instantaneous access to information. But we've since shown that writing helps the dissemination and creation of ideas, rather than producing the mere semblance of knowledge and thought. At the same time, Plato's concern is well founded, as we may choose to write meaningless drivel; it's easy for us to choose what is worthwhile to read. Computers are no different. It is up to education to teach the future generation how to use these tools for the better.
      • thumb
        Mar 11 2012: Greetings;

        I do not reject any instrument of learning. I am concerned for those who reject learning itself because information is so readily available. That was Plato's concern as well. I do not want to see technology take over, I want to keep it as a tool to help us make a better world. Yet many are letting technology take over.

        There have been movies made about computers taking over the world. This is not out of the question. My statement about young people carrying their brains around in their pockets refers to those who are not using their own brains. They only want to be entertained. They do not want to be part of an integrated, progressively functioning world. They don't even know what that means.

        Computers and machines are helping us make a better world. Yet there are countless people out of work because computers and machines are doing the jobs they used to do. We cannot lose sight of the whole picture. We have to integrate all of the problems into the solution. Just keep this in mind as you go forward.
  • thumb
    Mar 10 2012: Human intelligence is irreplaceable. However, smartphones show to some extent that they can imitate human intelligence. Take, for example, applications like Siri or voice actions on the iPhone. It is really amazing: you speak to the phone and it answers you back! However, this is incomplete, because these applications are useless without an internet connection.
  • thumb
    Mar 10 2012: It already had when the first computer was developed. On the contrary, I believe that human intelligence claims to have duplicated only our *current* understanding of the mind/intelligence via artificial intelligence/technology; therefore technology cannot compare to or replace human intelligence. I think it is apparent that science has yet to completely understand the human capacity for consciousness, and if that is the case, how can we duplicate it in machinery? Although it is a true accomplishment of our ability to create artificial intelligence given our range of knowledge in technology, we still should not make the claim that AI is our equal, as we have yet to grasp how and why human intelligence works.

    P.S.: If we could get machines to dream, or ponder? That would be the true sign of consciousness, i.e., having a subconscious or an unconscious mind. That would be the day on my calendar on which technology has replaced human intelligence.
  • thumb

    E G

    Mar 10 2012: No, technology doesn't have intelligence. Period.
    • thumb
      Mar 10 2012: Not yet.
      I'm not 100% sure whether AI could become self-aware and essentially be alive.
      I'd suggest that in 10,000 years AI will have surpassed humans.

      Would a cloned or manufactured human be alive?
      Would it have intelligence?
      In a way this is a technology.
      • thumb

        E G

        Mar 14 2012: A clone is not a technology; a clone is a being. Do you think there is a difference between a being and a robot? Or, if you want, between a clone and a robot? (Because a clone is a being.)

        Maybe, if humans don't become something else too; after 10,000 years the data of the problem will be different. It doesn't make sense to talk about what we don't know.
  • Mar 10 2012: no
  • thumb
    Mar 9 2012: It will be a challenge, but one day, if our civilisation and science continue, AI will meet and exceed our human intellectual capabilities.

    One day AI will have self awareness, consciousness.

    Just think how far we have come in 100 years.

    Imagine where we will be in 100, 1000, 10,000, 100,000 years.
  • thumb
    Mar 9 2012: It sounds like the ability to imitate, recognize, learn from experience, and sympathize is a good list of criteria for determining consciousness.

    Did a little Google / Wikipedia researching on the Three Aspects of Consciousness, as well, and it would seem as good a list as any.

    http://www.1729.com/wiki/ThreeAspectsOfConsciousness.html

    There's been, however, a somewhat nagging concern of mine, and I would like to share this concern here with you.

    I am no physicist; I do not by any means know all there is to know about physics, but quantum mechanics and the existence of quantum particles provide a rather confusing and conflicting view of the world. I mean, do you know much about quantum computers? I don't, but there's a whole lotta stuff going on with the idea of quantum computing, take a look:

    http://en.wikipedia.org/wiki/Quantum_computer

    http://www.dwavesys.com/en/technology.html

    http://en.wikipedia.org/wiki/Quantum_mechanical

    and lots of curious interactions of macro-scale physical objects whose behavior is explainable through quantum mechanics. Take this TEDTalk by Aaron O'Connell:

    http://www.ted.com/talks/lang/en/aaron_o_connell_making_sense_of_a_visible_quantum_object.html

    and perhaps most relevant to this discussion:

    http://en.wikipedia.org/wiki/Quantum_mind

    All of these things are neat as hell! I love reading and learning about things like this. But perhaps, most significantly, there's this idea that quantum particles popping in and out of existence really quickly could, possibly, be driving the expansion of the universe. Lotsa folks are wondering about it:

    http://www.physicsforums.com/showthread.php?t=271433

    All of this just makes me wonder: what the heck's going on in the world? Where do these particles come from, where do they go? Could such phenomena help, in some way, explain consciousness? Could this be the source of consciousness, of the soul? Is the soul merely quantum data that could possibly survive a physical death?
  • thumb
    Mar 9 2012: @Howard Yee: Okey doke, so I took a look at that link you mentioned:

    http://themindi.blogspot.com/2007/02/chapter-23-unfortunate-dualist.html

    And it would appear, since the soul is not causally linked to suffering, at least within the framework of the story, that there is no need for dualism. Mind and matter are the same.

    But suppose the ending of the story were different: the annihilation of the soul did indeed end the poor dualist's suffering, but also robbed him or her of any joy in the process, such that in his or her daily activities there was no solace AT ALL in things which previously distracted him or her, for however finite a period of time.

    Certainly, the drug had not robbed the dualist of anything tangible; it had, however, robbed him of the intangible thing which was the primary mechanism for his enjoyment, and his suffering. Without the possibility of any momentary, fleeting happiness and contentment on earth, there would be no motivation to improve oneself, or find a family, etc.

    In short, the now-soulless dualist WOULD STILL harm others (without motivation, he or she would exhibit none of his or her former behaviors, devolving into a state of supreme inertia), MIGHT STILL violate moral codes, and might further intensify his or her punishment in an afterlife. After all, according to classical literature, meddling with affairs of the soul is purely the domain of supernatural beings, not of man.

    There would be no reward, nor the hope that the pursuit of a reward (i.e., atheistic pursuits of "heaven on earth for all" vs. theistic pursuits of "Heaven after death" being FUNCTIONALLY EQUIVALENT attitudes) would bring some respite.

    Life would not be worth living, in this newly revised version. Of course, until we actually conclusively prove such is the case, one way or the other, this is all wonderful conjecture and I hope such questions drive science into fields that will help explain the phenomenon of consciousness better.
  • thumb
    Mar 9 2012: No. Human intelligence is half composed of that very viscous and intangible world we call emotion. Emotion is deeply linked to instinct and intuition. Human intelligence is primarily, simply speaking, an unremitting coalescence of reason and emotion. In order for technology to replace the depths of human experience (inseparable from human intelligence), technology would not only have to empathically tap into human experience, but into even deeper realms. Of course this raises the question of whether or not it's still "technology" by that point.

    Not to mention, human intelligence is intricately context based, whereas AI and other forms of tech are primarily rule based. Because of this, humans are highly adaptable to a wide variety of circumstances. Sitting at the park and watching birds fly by could spark thoughts and ideas in a person, which could link to a variety of circumstances, be they philosophical, poetic, or a breakthrough in aeronautical engineering. Human intelligence is also elastic and ever changing; neuroplasticity, for instance, is the brain's ability to completely rewire itself: new habits, new languages, new skills, new lifestyles, etc. This debunks the age-old fallacy that you can't teach an old dog new tricks. Not to mention, we have the gorgeously controversial subconscious. Whether you take a Freudian or Jungian approach to the subconscious, or deny its existence completely, it is also a great source of intelligence. These are all things that a computer is not, and will not be, capable of performing.

    For anyone to say that technology can replace human intelligence is simply ludicrous. Such a belief is more full of hubris and fantasy than of scientific reasoning and serious, well-rounded contemplation. Sure, it could happen, thousands of years from now. But since science still doesn't know exactly how human intelligence works, let alone memory and emotion, such conclusions cannot be accurately or reasonably determined.
    • thumb
      Mar 9 2012: Hey Gerald,

      You bring up a lot of great points. However, you mentioned that emotion is linked to instinct and intuition; do you think that each of these is independent of the brain? If so, what exactly are they? If they are part of the brain, then there must be some neuronal structuring / firing pattern that is responsible for our instincts and intuition, and thus emotion. If this is the case, exact replication of that structuring or firing should allow some form of AI to replicate the instinct and intuition of people.

      Furthermore, I agree that AI seems more rule based whereas human intelligence is more context based. But do you think that there are some overarching rules that govern our context-based intelligence? What says that our decisions and intelligence are not composed of quasi-infinitely large conditional statements?

      If we can associate some kind of cause-and-effect relationship between neurons and human behavior, I think we will definitely be able to replicate human behavior with AI. The only question is how long it will take.

      This article talks about a new machine under development that has the capability to learn:

      http://www.forbes.com/sites/rogerkay/2011/12/09/cognitive-computing-when-computers-become-brains/
  • Mar 9 2012: Does dead material have a soul? Well, I personally don't think so. But are there many people here who are true believers in the Shinto religion?

    It's a no-brainer... If I throw a pebble and it shows a random nature when it falls to the ground, does this show that the pebble has a brain? The noise in signal cables: does this show that my stereo has intelligence?

    And if I construct a switch that is complicated enough, does this make the switch on the wall conscious? If I touch it and the room lights up, does this mean that I have a moral responsibility to the switch on the wall?

    And computers, which only make calculations: can different states in a computer memory have feelings? Do I offend my frying pan at home if I cook the wrong dish? Is it the software or the hardware that is aware?

    I guess it's hard to understand computers if you haven't designed CPUs, written in assembly language, and so on. This makes me worry that people will start demanding my dishwasher's ethical rights, as some here seem to think that things are intelligent just because they can react. If these reactions are complex, does this mean the complicated fabric in my clothes has feelings? Does a cloud, or the ocean, or the wind, or a flower have intelligence and conscious awareness? And does the universe, the mightiest of them all, have rights? Or is every atom intelligent? And do the individual cells in our bodies have feelings?

    So: just because we make dead material do complex things does not make it conscious. And if I let a plant control a supercomputer that gets better scores on IQ tests and knows much more than you do, and gave all the right answers, does that make this plant superior to a human being?

    So my question to you is:
    When humanoids and robots start to act intelligent, and you have relations with them... will you also grant these dead materials their ethical rights, to be treated fairly, and forbid using them as slaves and treating them as commodities?
    • thumb
      Mar 9 2012: Now we're talking about what alive means. We have successfully built living micro-organisms one atom at a time. If I build a replica human one atom at a time and it functions normally like the microbes do, does that have a soul? (I don't think anything has a soul, I think humans are just big fancy chemical reactions)
      • Mar 9 2012: Yes, and why can you not call these fancy chemical reactions a soul, then?

        Soul is just a word that has the same meaning in both cases. The only thing that separates us is that you are keen on calling anything that moves and reacts a "whatever word". I, however, only call those things I regard as having a "whatever word". And to me a robot of dead material cannot possess a "whatever word", but you think it automatically has one.

        I call this "whatever word" a soul. What do you call it?
    • thumb
      Mar 9 2012: I agree with Peter. I don't think anything has a soul. I think that we give something we cannot explain the label "soul". How can we prove what a soul is? Dan, you refer to material as dead and alive. What makes the pebble in your statement dead? How do we know without a doubt that the pebble is dead material?

      Would you say a singular neuron is alive? If we break the neuron down to its basic components, all we end up with is inanimate material. They are composed of the same atoms, quarks, and subatomic particles as any other material, dead or alive. What qualities does a neuron have that make it alive?

      But it seems like you do not believe that individual cells have feelings, so where is the soul? If our constituent parts do not have a soul, then from where does the soul originate?
      • Mar 9 2012: Well, if there is no such thing as a soul, then there is no such thing as dead or alive. This breaks it all down to what we could call Shintoism.

        Then all matter is the same. This thinking leads to the conclusion that we have a moral responsibility to humanly created lifeforms with an AI high enough to surpass or equal humans. And yes, today's AIs are then alive.

        I, however, think this is a mystery, and that we do have a soul, as I in my essence know that I'm not a living dead; and as I know and am aware, this gives me rights as a human, according to my own consciousness.

        This is problematic for science, as evolution does not require awareness, but only reactions, to make things replicate and prosper. It can only be one of the two: either we have a soul or we don't. And if there is no such thing as a soul, then this thing that we call 'not having a soul' we still have to call something, so why not call it a soul? Is it because you're not religious that you can't use that word?

        If there is no such thing as dead or alive, then we are back to square one. You are now in many respects equal to the stone, the planet, space, and everything else. This is in many regards the same as Shintoism, and then you are a religious person, but you can't admit it. This is then only a semantic problem, a game of words and philosophical beliefs, as you in essence believe in the same thing as some religions or philosophies.

        If I write a computer program with a superior AI, does this make me a god? Can I demand my software's equal rights as a supreme being? And as this AI has superior intelligence and knowledge, can it be elected as a presidential candidate if we change the law? Probably so. What makes you worth more than my superior AI that was created and acts? What I created, as the god of this higher intelligence, now has equal rights according to your logic; it has a "nothing soul" as much as you do.

        Will you understand or respect this view and standpoint?
        • thumb
          Mar 10 2012: Who's to say that we are not equal to the stone, the planet, space, and everything else? What I find disconcerting is our tendency to focus on the singular instead of the whole. Am I a separate entity from you? Are pieces of matter separate entities? We may focus on singular particles, but in the end we're pure energy. In quantum field theory, particles are standing waves: an attribute of a series of waves, not a separate entity. The waves themselves are a very fluid-like entity; there's no real separation of individual elements.

          Why do we have so many beliefs in which all living beings are born from a large mass of energy? The I-Ching, Hinduism, Buddhism, etc.

          If we believe in wave-particle duality, Heisenberg's uncertainty principle, etc., then I want to present the possibility that material is a waveform of probabilities while it is unobserved; this is very similar to being just a large mass of energy. The moment someone observes it, the waveform collapses and gives it a physical standing. The existence of the stone relies on something observing it; the existence of the stone is not mutually exclusive with the other material that causes the waveform to collapse.

          Humans are humans because of the experiences they receive. A person missing a limb or colorblind since birth lacks that understanding. I want to suggest, by extension, that all the matter around us is important for dictating what it means to be human. Sure, the stone is visually detached from us, but that does not mean it is a separate entity from us. There is some connection beyond our understanding. This connection, I think, is related to what makes consciousness. We seem too content to believe what we see (i.e., the separation between entities) and do not question whether there's a rule/system in place that defines the apparent separation (or relationship).
        • thumb
          Mar 11 2012: Dan, my difficulty lies in where to draw the line between living and non-living, e.g.: 1. hydrogen atom, 2. water molecule, 3. amino acid, 4. protein, 5. nucleic acid, 6. virus, 7. bacterium. I'm guessing that you would consider 1 & 2 not alive and 7 alive, but I can't see a clear line. I see a gradual increase in the complexity of the chemistry. You can even find intermediates between these examples. There really is no clear distinction. We have an arbitrary group of characteristics we use to classify things as alive, but conveniently ignore those things that only show some of them (5 & 6).
      • Mar 10 2012: I hope you feel equal to women. I like Robert Heinlein too. I wonder if technology is affected by the inequality between men and women. What would technology be like if men and women were equal in academia, business and other power places? I wonder.
        • thumb
          Mar 11 2012: I think technology and our understanding of the universe around us are stifled by our inability to treat everyone and everything on equal grounds. This is an extension of my belief that we focus too much on the individual and singular instead of the uniform and the oneness. We only notice differences, not similarities; thus our understanding of truth only comes from what we notice as different from ourselves.

          It's only recently that we've realized that time and space are the same thing, and matter is just energy. Yet research is still forced to focus on separate, isolated fields. When we study biological systems, we devise ways to observe the system. We do not focus on how we could just measure, calculate, and predict the root energies involved to see what the system is doing. This roundabout way of doing things is tedious but, at the same time, necessary for now because of our lack of understanding. But we're now stuck with a chicken-or-egg problem: are we roundabout because we know we lack understanding? Or do we lack understanding because we're consistently going at things the wrong way?
      • Mar 11 2012: Howard, thanks for the provocative discussion. I agree with much of what you say. Life is everywhere. I wonder if vegetarians are not in denial of the food chain. Perhaps a head of cabbage desires to continue living just as much as a head of cattle. Looking at the differences and similarities both seem to me to be important parts of the process; I see no point in ignoring one for the other. Again, I raise what you may think is an irrelevant issue, but I do believe that if women and men were acknowledged as equal and making equal inputs into science, technology, religion, politics, education, construction, design, law, medicine, and everything else, they would all be substantially different and far more likely to be the way you want them to be, i.e., accurate, functional, truthful. I don't think you can ignore the distortions caused by the absence of women in the development of knowledge and society in general. You seem to choose to ignore that factor as though it is inconsequential. Looking at the individual and singular triggers the quest, inspires the pursuit of truth and understanding. We all notice both the differences and similarities. We need to go back and forth. It adds to the process and product to do that. Again, I am confident that our progress toward our shared positive goals will be accelerated as soon as women and men share power within all realms of all societies. The male/female equality issue is at the heart of why so many systems fail to accomplish their stated goals, e.g., medicine, law, war, religion, science, politics. Happy Today.
        • thumb
          Mar 11 2012: The vegetarian point is interesting. If you consider a continuum from 1, "I don't eat red meat", to 10, "I'm an extreme vegan", you are really eliminating food sources based on how closely related you are to them. It's basically the same question as in the discussion with Dan above: how closely related does the food need to be before you start to humanise it and give it a "soul"? How many western people, even meat eaters, are comfortable with eating dog or monkey? (I'm not, but I can't explain why.)
  • thumb
    Mar 9 2012: Does anyone really know how much potential processing power Google could access if it started to make its own decisions? I think Google or something like it will achieve true AI without us even noticing. They say the human brain is the most complex thing in the universe (I know that is extremely anthropocentric), but the human brain is static in its development. The internet doubles in complexity every year? Month? Week? Does anyone really know?
    • thumb
      Mar 14 2012: Peter,

      The work that Google does blows me away. You are definitely right that the internet becomes more and more complex, while the human brain isn't becoming much more complicated. However, you must consider:

      So much time has been spent on understanding the body, the brain, etc., and after all this time we still do not completely understand the mind and body. I find it hard to believe that we will ever fully understand the mind! Even if we are able to make a model of the mind to help us do AI work, it will never be perfect. That is what makes me believe that we will never be able to perfectly model the human brain.
  • thumb
    Mar 9 2012: From a purely theoretical perspective, I see no reason why AI cannot be created.
    Information theory does not exclude it, and from a materialistic point of view, one can see that biological processes can be imitated or duplicated.
  • Mar 8 2012: I think Damon Horowitz's conclusions are on the right track, but miss some very important ingredients.

    If you were to build an AI, you must pay great attention to philosophy and the dark sides of human psychology. I think most research on AI is failing because it does not rest on enough correct knowledge of psychology. If researchers would like to get further, they should study the work of Edward L. Deci. He is, in my view, the greatest source regarding the human motivational issues underlying the self-regulation of behavior in humans.

    Take IQ, for example. IQ is all about pattern matching, and at the same time Damon criticizes the AI community for the fact that it's all about pattern matching. It is no wonder that computers today can get higher IQ scores than the general population. Today there are computer programs that get full scores on IQ tests and have an IQ above 150.

    Some here have criticized AI merely by stating that computers can't achieve great AI because you have to train them like an infant and a child. That criticism is based on poor fundamental reasoning. You only have to train one computer. After that, you can just copy the data from the training and give it to any other copy of the hardware. In other words, you can give that skill to any number of computers.
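
    The "train once, copy everywhere" point can be made concrete with any learned model: the result of training is just data (weights), which can be duplicated into as many machines as you like. A minimal sketch in Python (the tiny perceptron and its OR-learning task are hypothetical, purely for illustration):

```python
# A trained model is just data: train one instance, then copy its
# learned weights into any number of fresh instances.

class Perceptron:
    def __init__(self):
        self.w = [0.0, 0.0]  # learned weights
        self.b = 0.0         # learned bias

    def predict(self, x):
        s = self.w[0] * x[0] + self.w[1] * x[1] + self.b
        return 1 if s > 0 else 0

    def train(self, data, epochs=10):
        # Classic perceptron update rule.
        for _ in range(epochs):
            for x, target in data:
                err = target - self.predict(x)
                self.w[0] += err * x[0]
                self.w[1] += err * x[1]
                self.b += err

# Train a single instance on the logical OR function.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
teacher = Perceptron()
teacher.train(data)

# "Copying the skill": duplicate the learned parameters into a new instance.
student = Perceptron()
student.w = list(teacher.w)
student.b = teacher.b

# The copy behaves identically without ever seeing the training data.
assert all(student.predict(x) == teacher.predict(x) for x, _ in data)
```

    Copying two weights and a bias transfers the entire "skill"; the second instance is never trained at all, which is the asymmetry with human infants the comment points at.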
  • thumb
    Mar 8 2012: An interesting question when used as a tool for investigation, but in terms of by what and how the human race might be replaced, it is irrelevant. Our replacement will be a mixture of both genetically designed 'humanity' and 'machine' technology.
    Nothing lasts forever.
    It's also probably true that something lasts forever.
  • Mar 8 2012: Technology definitely cannot replace human intelligence. I don't think our technology can advance to that level.
    • thumb
      Mar 8 2012: Why do you think it won't be able to advance to that degree? We're consistently breaking barriers that previous generations believed in. We said processing power was approaching an asymptote, but recent papers and studies show that that's not the case anymore. People believed that travelling above 30 mph would kill us, but we created vehicles that travel hundreds of times faster. I think the limitation lies only in our disbelief.
  • thumb
    Mar 8 2012: There is but one circumstance upon which technology would REPLACE human intelligence, and that is if both of the following conditions are met:

    1. Technology will have reached the point whereby it is self-powered, self-replicating, self-maintaining, and self-repairing.
    2. All life forms on earth have been somehow destroyed by a cataclysmic event.

    Barring those conditions, technology would continue to AUGMENT human intelligence but would never replace it.

    For example, if condition 1 is not met, then the technology would cease to function. If condition 2 is not met (or only humans perish) then human intelligence would immediately be replaced by the intelligence of other native life forms.

    The question of whether alien technology could replace human intelligence is another question entirely.
  • Mar 8 2012: If we were intelligent we wouldn't be trying to create an artificial version of it.
  • Mar 8 2012: Is creating AI to simulate or surpass the human brain more important than creating the computing ability to transfer, or upload, the information from a human brain, ourselves, into a computer? Can our consciousness be digitized? Transferred into robots or satellites? Thank you Rudy Rucker.
    • thumb
      Mar 8 2012: I think the questions are very similar. In order to digitize consciousness correctly, we need to understand how it works; we need to be able to accommodate it in our digital systems. If we can do that, then cloning, changing, and adapting that consciousness wouldn't be hard; thus we should be able to make the leap from cloning a consciousness to generating one from scratch.
  • Mar 8 2012: So, what makes human intelligence hard to simulate? If, as so many claim, our brains & awareness are simply electrical & neurochemical responses to stimuli (either instinctual or learned {programmed?}), this should be relatively easy to construct. Some search engines are already programmed for "fuzzy logic", which could approximate intuitive or non-linear thinking. You would just need enough circuits.

    How many circuits is enough? Has anything been postulated?

    If one developed an organic memory rather than a mechanical one, is that creating life?
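
    The "fuzzy logic" mentioned above can be sketched in a few lines: a statement has a degree of truth between 0 and 1 rather than being strictly true or false, and AND/OR are commonly taken as min/max (one convention among several; the "tall" membership ramp below is an arbitrary illustration, not from any real system):

```python
# Minimal fuzzy-logic sketch: truth is a degree in [0, 1] rather than
# a strict True/False, and the logical connectives become min/max.

def fuzzy_and(a, b):
    return min(a, b)

def fuzzy_or(a, b):
    return max(a, b)

def fuzzy_not(a):
    return 1.0 - a

def tall_membership(height_cm):
    """Degree to which a height counts as 'tall' (arbitrary ramp, 160-190 cm)."""
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30.0

# A 175 cm person is 'tall' to degree 0.5, not simply tall or not-tall.
tall = tall_membership(175)              # 0.5
print(fuzzy_and(tall, fuzzy_not(tall)))  # 0.5: "tall AND not tall" is partly true
```

    This is the kind of graded, non-binary inference the comment suggests could approximate intuitive thinking, though it is still rule-following underneath.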
  • Mar 8 2012: I don't think it's necessary, or even beneficial, to model intelligence after the human brain. Whatever concepts we use to evolve new intelligence may be categorized functionally as human-similar, but the underlying mechanisms could just as well be completely alien. Just as many things can be used as a table, we may consider behaviors intelligent even if they are different in construction. This is one reason I feel we are weak at respecting life in general: since the behaviors are not exactly like ours, we have no empathy for them and strive to invalidate them as intelligence.

    Once you start reading Penrose and Hameroff you may believe that hard AI is not ungraspable.
  • thumb
    Mar 8 2012: I think you are forgetting one thing. The technology you are talking about was made by the human brain, so why are you comparing the builder and the building? It is clear that the builder is supreme.

    And the technology may have knowledge, but it lacks wisdom. Nothing is worthwhile without wisdom.

    Our brain has both knowledge and wisdom. It is clear that our brain is stronger than all those circuits and all that artificial intelligence.

    In the end, my answer is:
    1. Technology cannot think through situations.
    2. It cannot replace the human brain.
    3. Technology may be smart, but that smartness was created by the smartest thing of all: OUR BRAIN.
    • thumb
      Mar 8 2012: What is wisdom? What is wisdom comprised of? Can we have an artificial system have what wisdom is comprised of? Why or why not?

      How is a brain stronger than a circuit or computer program? We can emulate neurons with a computer, and with enough processing power we could emulate all of the neurons in a human brain. Does that emulation have this "wisdom" you speak of? If you claim it's different, what makes it different?

      Lastly,
      1. What is "thinking", and what makes it a uniquely human thing? If we consider a human a black-box function, can we not make an artificial system that replaces the human in that black box?
      2. What makes you say that it cannot replace the human brain? We understand how neurons work. We can emulate neurons successfully; what is preventing us from making the leap to a full human brain?
      3. We create tools, and tools do work; we are not better than the tools, because we cannot do the work without them. Being the originator of a tool does not mean we encapsulate the power of that tool. An artist can draw, but he or she may not have the skills to make a pencil. Having the potential to do something does not, by itself, make it kinetic.
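
Since I keep claiming we can emulate neurons, here is what that looks like at its simplest: a toy leaky integrate-and-fire neuron, a standard textbook simplification (all parameters below are illustrative, not biologically calibrated):

```python
# Toy leaky integrate-and-fire neuron. The membrane potential leaks
# toward zero, the input current pushes it up, and crossing the
# threshold fires a spike and resets the potential.

def simulate_lif(input_current, steps, dt=1.0, tau=10.0, threshold=1.0):
    """Integrate a constant input current; return the spike times."""
    v = 0.0
    spikes = []
    for t in range(steps):
        v += dt * (-v / tau + input_current)  # leaky integration
        if v >= threshold:
            spikes.append(t)  # the neuron fires...
            v = 0.0           # ...and resets
    return spikes

print(simulate_lif(0.2, steps=50))  # fires at regular intervals
```

Wire many of these together with weighted connections and you have the ingredients of a spiking neural network; the hard part is scale, not the arithmetic.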
      • Mar 8 2012: I fully agree with you; in the future it is quite possible that new inventions will appear, equipped with more strength and brain power. There is no reason why a robot cannot behave like a human. How do we characterize a human?
        We have five major senses: sight, smell, hearing, touch, and taste, but robots with even greater powers can be designed; they could even be equipped with infrared sensing, ultrasonics, low-frequency hearing, far sharper vision, and so on.
        By wisdom we mean our ability to judge right from wrong; again, even better artificial programs can be designed to replace that intelligence.
        No doubt there might be errors in those programs; there might be malware, just as we have villains in real life. It's here that the problem lies: words like pain, joy, and compassion are purely human, which makes human life unique among all species.
        Development of artificial intelligence without the ability to recognize and respect human values could be fatal to the human race.
  • thumb
    Mar 8 2012: I really feel it is possible to artificially replicate the process that makes human intelligence unique, even if it is a problem that will not realistically be solved within our lifetimes, simply because the work necessary to achieve such a feat is too large to be done in so short a time.

    But regarding the claim that "AI can never approach the intelligence of humans," I feel this is not true. Even if mimicking what makes human intelligence unique is so difficult that it is unfathomable within any set number of years, that does not rule out intelligent behavior in a machine in that time. Many aspects of human intelligence are currently undefinable, but building a computer that responds to its environment, adapts to changes, and does many of the other things often used to describe intelligence is something that happens already.

    We've sent a rover to Mars that is aware of its environment and makes decisions, which is necessary because human-sent commands take so long to reach the rover that it would break if it couldn't adapt to changes such as rocks, hills, shadows, and dust. It has even taken many pictures of its own accord, based on some programmed expectation, in anticipation that the photo sent to Earth would contain something of interest to scientists.

    I do not have a strong expectation about whether mimicking human intelligence is fathomable in the near future or even in my lifetime, but I do believe strongly that it is realistic to use our structured style of coding machines to produce an artificial intelligence that can perhaps even match human intelligence in its own unique way. Ultimately, I feel that judging the worth of artificial intelligence by whether it is similar to human intelligence is only a roadblock on the path to creating a machine capable of adapting and generally being intelligent. AI is possible, regardless of whether it mimics human intelligence.
    • Mar 8 2012: I agree; mimicking human intelligence would be cool, but maybe not that useful. We already see computers and AIs doing things that humans cannot, and this is the field that likely has the most value to us, but it is often overshadowed by the fascination with a potential human-like AI. However, I guess there will always be a fascination with creating something self-sustaining, or planting seeds that evolve into something unknown. We love life, so I'm not surprised we're trying to create it.
      • thumb
        Mar 8 2012: There are uses for being able to mimic human intelligence. I'd say most of them are philosophical, but nonetheless it's still a use.

        Lets start with the inverse problem. We can read brain waves, but say there's a problem we need to diagnose. Right now we can only make guesses by comparing the waves with a data set. The ideal would be to determine the structure of the neurons, understand which pathway is problematic, and suggest a solution. We lack that ability because we've yet to successfully simulate a human brain; we cannot make the concrete association between the structure and signal pathways of the brain, and the complex responses we are able to generate.

        I say most of the uses of a strong AI would be philosophical. Here's one: people still question the origins of creativity; where does it come from? Specifically, people who write question who creates the piece. Foucault defines the entity as a function, the author function. He states that the author function does not necessarily originate from the writer of the written work. Most people may think this is philosophical fluff; however, it becomes a serious concern when we want to determine the factors that affect a piece of work (i.e., is the work original? Who influenced it? etc.). With a strong AI, we could observe how it creates creative works, and we would have the ability to observe and understand the inner workings of its "brain".
  • Mar 8 2012: Can technology replace human intelligence?
    -Yes, it can, and it already does. Can technology replace all human intelligence? Maybe; the thermonuclear bomb has already been invented. Technology will probably outlive the human. Is the same true for animals and bacteria? Probably so.

    What makes human intelligence hard to replicate?
    -The fact that humans are conscious and aware. But is it really our common goal to replicate human intelligence in AI? When AIs start to walk down our streets as humanoids, they will most probably be terminated by real humans. They would have to be guarded by other real humans to "survive".

    -Could we generate human intelligence? Yes, and it will most likely not be conscious, but it will act as if it were. And it will evoke human feelings (as we are genetically evolved to do). Some will start to demand their rights. Humans' own intelligence is not evolved enough to handle this and the AIs' other superior, superhuman capabilities. But humans will evolve past all AI when our DNA is threatened. Real humans will then evolve into a new species that survives these threats, and this is probably the human destiny.

    Can it be simulated?
    -Human intelligence can be simulated, yes; why not? But who shall judge what's human? Are you human enough? Are all humans human? And are all humans intelligent? The right question is not if, but "why simulate?" If we don't take care of the gifts that humans have, why then create human AI?

    What if we created a model of the human brain, would it be able to think?
    -Yes; an "exact replica" would be slow, but a simulation would probably be many billion times faster. But if we really want human intelligence, wouldn't it be easier to genetically engineer a human halfbreed that has no conscious awareness but can still act as you would at the bar? So this is fully possible. Do our children really want human AI? It's not a question of if there will be human AI, but when, and how we will handle the situation.
  • thumb
    Mar 8 2012: I'm curious whether, by projecting ourselves online as much as we have through social media, computer systems have picked up patterns, or a way of better understanding the things they aren't programmed to do: the emotions, the reactions, and the image of self that in the past have been the argument against this sort of thing. Now, that may sound a little sci-fi horror story, and that would be due to my lack of expertise in computer systems. It makes sense to me, however, and I'm looking forward to a response from someone who knows more about the processes and how data is stored and utilized.
    • thumb
      Mar 8 2012: This is a topic I'm very interested in. In the field of neuroscience, we've yet to discover the unifying theory that would explain how very deterministic systems like neurons can give rise to sentience. As with quantum theory and relativity, we can observe the brain macroscopically and microscopically and understand the systems involved at each scale, but we cannot make the connection between the two. I don't think it's a huge stretch to believe that we as individuals act like a complex version of a neuron, each getting well-defined stimuli and each giving some sort of output. Our outputs influence others, becoming their stimuli. Within our scale, we can only see the "microscopic"; we are unable to grasp the entirety of the system. A neuron does not know that it's part of something larger. It is unable to comprehend that it is part of a larger sentient being. I believe the notion of Gaia or Mother Earth is related to this idea. Perhaps the entire Earth is a sentient entity.
  • Mar 8 2012: Human intelligence is hard to replicate because of a number of things:
    * It evolves from a complex system of reinforcement learning/motivation that we know very little about. Without needs (emotions) we would have no reason to act, to learn and to develop. Likewise, an AI needs human-like internal motivation in order to act like a human at all.
    * The human brain does not evolve from scratch. We are born with many inherent abilities, such as memory and sensory skills, that have developed over eons of evolution. This likely requires complex designs and evolutionary simulations before even an artificial infant can be brought to “life”.
    * Human intelligence depends on human physical abilities and flaws. It would not be possible to replicate human behavior without human senses and a human-like body to act with, because an AI could never learn to discuss how things look, feel or taste without the ability to experience this itself. Likewise, it must have a limited energy intake, limited brain capacity, limited senses and so on, or its intelligence would be too different from ours to be considered human.
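
    On the first point, reinforcement learning is the field that treats "motivation" as a reward signal. A minimal sketch of the idea (the two "actions" and their payoff probabilities are invented for illustration):

```python
import random

# A two-armed bandit learner: it comes to prefer the action that
# satisfies its "need" (pays reward) more often. Payoffs are made up.

random.seed(0)

def pull(arm):
    """Arm 1 pays off 80% of the time, arm 0 only 20%."""
    return 1.0 if random.random() < (0.8 if arm == 1 else 0.2) else 0.0

values = [0.0, 0.0]  # running estimate of each action's value
counts = [0, 0]
for step in range(1000):
    if random.random() < 0.1:
        arm = random.randrange(2)                 # explore occasionally
    else:
        arm = 0 if values[0] >= values[1] else 1  # exploit the best estimate
    reward = pull(arm)
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

print(values)  # arm 1's estimated value ends up well above arm 0's
```

    The reward function is the "motivation"; everything the agent does follows from it, which is exactly why choosing it carefully matters.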

    Ultimately, we must ask ourselves: What do we want to design? Do we want a human-like AI? Then we must give it human motivation, abilities and flaws. Do we want a super-human? Then we give it human motivation but greater processing power and additional senses and limbs. Do we want it to do something completely different? Then we could give it other needs.

    But we must be careful. It is impossible to foresee the future, and if we give an AI too much power, such as access to weapons, powerful body parts, or important data systems, there is always the risk that things don't go as we expect. Motivational systems might take twists or turns that we could never imagine, giving birth to new goals within the AI that could threaten all of humanity. Artificial motivation plus power is a dangerous combination, and today this is not sci-fi but a real potential threat.
    • Mar 9 2012: If so, do you think it should be forbidden for cars to drive by themselves? Trains in Japan have done this for almost 20 years now, and such cars already exist. It's just a question of when they will be common on our streets, and when the technology will be trusted enough by the authorities.

      But it will probably only be a problem when companies misuse the technology. And since companies have a long history of misusing information on the internet, I ask: when will we see companies that specialize in deceiving AIs into following their commands? It's already somewhat true in the stock market. But when will we see it elsewhere?
  • Mar 8 2012: Yes it can, and it will.

    When you can grow a brain and give it sensors to experience the world.

    The answer has always been in technology. It's in our DNA, which holds the answer of how the brain is formed and how it should function. We can already grow organs (such as a kidney or a heart)...and a brain should be on its way.
  • Mar 8 2012: What strikes me as curious is how the overreliance on multiple-guess tests in the American education system is, in effect, creating a generation of pattern matchers. If this is the case, then AI is truly not too far off. Maybe another 15-20 years?
    • Mar 9 2012: On multiple-guess tests, "I guess" it would not be hard to make an AI that outperforms any human being.

      And that begs the question...
      Does a computer cheat if it uses the internet? And does it cheat if it reads from its hard disk? :-)

      A computer doesn't even need any response alternatives for an IQ above 150 today...

      "They have integrated a mathematical model that models human-like problem solving. The program that solves progressive matrices scores IQ 100 and has the unique ability of being able to solve the problems without having access to any response alternatives. The group has improved the program that specialises in number sequences to the point where it is now able to ace the tests, implying an IQ of at least 150."

      http://www.sciencedaily.com/releases/2012/02/120214100719.htm
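
The program in that article is far more sophisticated, but a much simpler cousin shows the flavor of mechanical pattern matching on number series: predict the next term by taking repeated differences until they become constant (this toy only works for polynomial-like sequences):

```python
# Predict the next term of a sequence via repeated finite differences.
# Works for polynomial-like sequences (arithmetic, squares, cubes, ...).

def next_term(seq):
    rows = [list(seq)]
    while len(set(rows[-1])) > 1:  # reduce until the differences are constant
        last = rows[-1]
        rows.append([b - a for a, b in zip(last, last[1:])])
    # The next value is the sum of the last entry of every row.
    return sum(row[-1] for row in rows)

print(next_term([1, 4, 9, 16, 25]))  # squares -> 36
print(next_term([2, 4, 6, 8]))       # evens -> 10
```

No understanding of "squares" is involved; it is exactly the kind of dumb pattern matching Damon Horowitz describes, yet it aces a whole class of test questions.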
  • Mar 7 2012: So, first we need to define intelligence, rather than simply programmed responses? Very scary - isn't most of our learning simply repetitive responses?

    I thought that intelligence is defined by awareness of self as a separate entity, and by being aware that one thinks. If one is aware enough of self to be aware that one thinks, that is intelligence.
    • thumb
      Mar 8 2012: Regarding this, I'm curious about the extent to which we consider a cat or dog intelligent. If we consider them intelligent, or find some creature other than humans that we feel has a sense of self, then why not use this as a stepping stone? Find a way to replicate a germ's or plant's "intelligence," then find a way to replicate the intelligence of a rodent, then of a cat or dog, then an ape or dolphin, and maybe by then replicating human intelligence will be a breeze.

      If we can't prove that another natural being can think and is intelligent, how can we expect to prove that an artificial being can do so?
      • thumb
        Mar 8 2012: I should've clarified the word "intelligence". In the field of neuroscience, I'd say there are two classifications of intelligence: the intelligence of fixed, deterministic systems, like an algorithm or the neural responses of animals (not unlike our reflexes), and sentient intelligence (what we have).

        We currently have three requirements for sentience: the ability to recognize oneself, the ability to apply current experiences to future potential events, and sympathy (the ability to think through the eyes of another being). Many animals have one (or two) of the three; however, I think (if I recall correctly) only dolphins have about 2.5 of the three. There are well-defined experiments that test for each of the three; here's one of them: http://en.wikipedia.org/wiki/Mirror_test.

        So, to answer your musing: there is no other creature that we consider to truly have a sense of self, though some come close. But you are headed in the right direction. Many are trying to simulate the brains of lower lifeforms first; we haven't gotten far, because a lot of processing power is required to emulate neurons, but once digital systems become powerful enough, hopefully we can emulate the human brain and then observe how it becomes sentient.
  • Mar 7 2012: "What makes human intelligence hard to replicate?"
    A very poor functional definition.

    "Can it be simulated?"
    Yes, but there has to be an agreement on the definition before there can be an agreement on the accuracy of any model.

    "What if we created a model of the human brain, would it be able to think?"
    The short answer is yes, but it depends on how you define thinking. In order to have a "strong simulation" of a human, we would need a better understanding of how epigenetics affects neuronal development, for example. The amount of science, and in turn funding, required to have even a cursory understanding of how methylation affects depression (http://scholar.google.com/scholar?q=methylation+depression) is far beyond the resources any reasonable person would throw at the problem. This is completely excluding the difficulty of modeling an external environment for the simulated brain to process...

    I will close with a question: Considering making more humans is relatively inexpensive, why would an entity spend the billions of dollars required to accurately simulate an enormous and poorly defined system such as "human thought"?
    • Mar 8 2012: So that we can feel comfortable enslaving it to our demands.
    • thumb
      Mar 8 2012: Why would an entity spend billions of dollars conducting research (like at CERN)? Because we are curious. Why would unknowledgeable people spend that amount on research? Because of social and economic benefits: they can either flaunt their standing because they contributed to research, or the research will result in something profitable.

      Sometimes we do it just because we can. The thought and effort that go into building systems are not unlike art. We do art for self-expression; I choose to build devices instead.
      • Mar 8 2012: I am glad you mentioned CERN; it is a great counterexample. CERN engages in intensive research as defined by P. W. Anderson. "Strong AI", on the other hand, requires the integration of several far-flung branches of intensive research. There are few organizations that promote interdisciplinary natural sciences, in large part due to the complexity of the task.

        The case for "Strong AI" is not helped by the fact that it is not entirely clear what the social benefit of "Strong AI" would be. I don't know why someone would make a robot that can get sad and refuse to work because it misses its girlfriend.

        We see AI techniques applied all day in a wide array of daily tasks. Search is the obvious example; there are also cars that park themselves, a large array of manufacturing processes, the call-center systems you can speak to directly, etc. Some have argued that anyone who uses a smartphone properly is effectively a cyborg. The movement toward "Strong AI" is happening at an alarming rate, but not in a focused, CERN-like manner, because the nature of the questions being asked is so very different. It seems reasonable to assume there will be what you would consider "Strong AI" in the next 100 years (which by similar reasoning could be within your lifetime); I suspect no one will care much or think much about it until well after the fact.
  • Mar 7 2012: A point, perhaps technically off the chart: technology is a scientific way to lead humans to the betterment of life on the planet. However, it can be used the other way around, toward the "worsement" of human life, if we don't put our intelligence to use. But as for how we define human intelligence, it seems people don't intend to share the same explanation(?). So it has to be guided by our consciousness, which we as a species need to be aware of and share for the sake of humanity.
  • Mar 7 2012: What uniquely stands out in human intelligence is our ability to associate what we see with what we have experienced. This is the fundamental thing that both computational neuroscientists and machine-learning folks want to exploit to understand "How do we learn from what we see, hear, and feel?" There are several experiments showing that the basic building blocks for human vision, hearing, and touch are all the same. So, what is this unique model, and how does it learn all the different information coming into our brain? How do we associate these different pieces of information?
    Anyone who wants to build an intelligent machine has to try to answer these questions, and the machines we are building in the field of machine learning, inspired by these ideas, are now capable of doing this to an extent. We are able to make prosthetics, autonomous vehicles, and systems that recognize sounds, objects, actions, etc., all using the same principles!

    My point is, we have a long way to go in understanding how the brain works; it is one of the most complex and magnificent things evolution has produced. The more we understand, the more capable we become of emulating how it works and building a truly intelligent machine.
  • Mar 7 2012: God just left me a Facebook message, and he says, "Not a chance" :)
  • Mar 7 2012: Perhaps this is a bit off topic, but we can also ask whether it will ever be possible to create an artificial simulation of a human brain without a body, but with connections that can simulate all human senses. We could then program it to input all relevant sensory experiences into this brain, thereby simulating a life experience for it. It seems to me to be theoretically possible, way, way into the future; unless, of course, we are already living proof of it in action.
  • thumb
    Mar 7 2012: My only problem with this question is the use of the word "replace". Without a doubt, we will one day create an intelligence to rival our own, perhaps even surpass it in speed and accuracy. However, I could never imagine why we would be replaced by such technology. Or, why there would be a need for replacement. AI combined with more "traditional" human intelligence / thought process is a more realistic view of the future.

    Our intelligence is not just based on neurons and their communication. We also include emotional reaction, environmental perspective, intuition, experience, necessity... Is it possible to re-create these things in a purely artificial environment?
    • Mar 7 2012: It's a question of longevity and durrability; you send A.I.s where humans can't go, but where a human's ability to evaluate and interpret are key. Deep space exploration for example.
      • thumb
        Mar 7 2012: I agree completely. As far as humanity expanding throughout the galaxy, AI is the clear choice. There's a great site all about this: http://futuretimeline.net/the-far-future.htm

        They go into great detail regarding predictions for the evolution of biological and technological humanoid life. I guess the question becomes philosophical regarding "The Human Spirit". If it does indeed exist, can we program it into our AI? Would we want to, or is the cumulative knowledge of humanity enough?
        • Mar 7 2012: Should or shouldn't, that's a different question entirely. We should probably address it as soon as everyone agrees what 'The Human Spirit' is.
      • thumb
        Mar 7 2012: In that case, what exactly is human intelligence?
  • Mar 7 2012: I would think the ability for abstract thought and inductive reasoning to be the key; if a program can produce a line of reasoning not first introduced by the programmer, then yes, the AI would truly be at a human level.
  • Mar 7 2012: Does artificial human intelligence include human flaws? Or are we assuming artificial intelligence will surpass them?
    • thumb
      Mar 7 2012: That's something worth exploring. One issue engineers always deal with is the reliability of systems. If we have one system monitor the state of something (like in a factory), how do we know when that system fails? It may be misreporting without ever stopping. We can use multiple parallel systems, but then we need a way to determine the final outcome (say, if two fail in the same way, we cannot rely on just taking the majority response; maybe a system to monitor the monitoring systems?).

      Does human intelligence rely on human flaws? Are artificial intelligence systems failing because they cannot handle errors in their own computing?
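
For reference, the multiple-parallel-systems idea is classically called triple modular redundancy. A minimal sketch of the voter, including the failure mode I mentioned (two channels failing the same way):

```python
from collections import Counter

# Triple modular redundancy: run three independent channels and take
# the majority. If two channels fail identically, the vote is wrong;
# that is exactly the weakness raised above.

def majority_vote(readings):
    value, count = Counter(readings).most_common(1)[0]
    if count < 2:
        raise ValueError("no majority: all channels disagree")
    return value

print(majority_vote([21.5, 21.5, 99.0]))  # one faulty channel is outvoted: 21.5
print(majority_vote([99.0, 99.0, 21.5]))  # two identical failures win: 99.0
```

The defense in practice is design diversity (independent implementations and sensors), so that channels are unlikely to fail in the same way at the same time.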
      • Mar 8 2012: The biggest flaw that human intelligence relies on is forgetfulness. If we had perfect memory, we would never try different things more than once. This is bad because some actions only work within a window of time. A system with perfect memory would settle into a steady state and never do anything else.
        • thumb
          Mar 8 2012: Based on a recent study, it seems like we never forget. http://www.wired.com/geekdad/2012/01/everything-about-learning/

          The memory is always there, the pathway to that memory was weakened and so it's harder to recall. But once we review the memory, the pathway is reinforced.

          So there's something to your statement that we rely on forgetfulness. I'd like to amend it to: we rely on the reinforcement of memory.
    • Mar 7 2012: On this note I'd like to bring up the idea that creativity has been linked to 'mental illness' - not entirely conclusively, but there's been some research substantiating it and there's a hell of a lot of anecdotal evidence.

      The idea that 'madness' and 'genius' are linked goes back to the days of Plato and Aristotle. Poe, Newton, Beethoven, Mozart, Churchill, Tolstoy, and many other famous people and 'geniuses' are thought to have had bipolar disorder, major depressive disorder, or other conditions, or were described as 'crazy' in their times.

      Bipolar people are strange in that they simultaneously have the highest rates of suicide while a majority of them also claim they wouldn't want to be 'normal'. Many of these people attribute their successes and creativity to their illness.

      Personally I think we're all a little bipolar, and I think our moods and inner conflicts are an important part of the creativity of the human species. Could it be replicated in a machine? I'm not sure.. what if it comes from something related to the fact that we're really a grasslands dwelling bipedal ape living in a world of artificial design far outside the 'natural' range of experiences our bodies are adapted to?

      If our theories about entropy are correct, life is essentially a suicidal adventure anyway.. it can't last forever. It will burn out one way or another. An intelligent machine might realise this straight up and simply not do anything, why bother? We might have to make the machine crazy.. Could we induce some kind of existential angst in our machines to trigger creativity and a drive to exist, out of fear of death? Would it be wise to induce potentially suicidal tendencies in a machine that may have access to vast amounts of mindpower and even weapons?
  • Mar 7 2012: Without human intelligence and creativity there's no technology. Technology augments human intelligence and creativity. Human intelligence will never be replaced, just evolved. Are we as individuals ready to evolve alongside technology?
  • thumb
    Mar 7 2012: Hello fellas,

    I have no idea how I ended up here, but I've got to say it's an intriguing subject. There are many different opinions, and I think it's just a matter of time.

    Our brains operate at a much higher processing capacity than a common laptop nowadays, yet they consume pretty much the same power. As you must know, processing speed has been growing exponentially rather than linearly, and soon enough not just supercomputers but any computer will have the capability to process data faster than a cerebrum.

    That being said, try to imagine in 50 years, if our technology allows it by then: if you could hook one output up to someone's head and another to a computer, and you were only able to see the outputs, do you think you would be able to tell which is the computer and which is the human?

    Concluding, I think that yes, human intelligence can be replicated, just not yet. You might be the one who finds the next step in this quest. Good luck!
  • thumb
    Mar 7 2012: I think technology can augment human intelligence, but never replace it. There are a lot of useful things that computers can and already do help us with (maths and information retention are the big ones -- imagine having Wikipedia instantly searchable and streaming directly into your brain), but the most important part of solving any problem is asking the right question in the first place, and only human intuition and experience can really get you there.
    • Mar 7 2012: There's no reason why we couldn't, in principle, work out how to build a machine capable of 'human intuition and experience'. If push comes to shove, we could do it by simulating an entire human brain. You're mistaking 'not knowing how intuition works' for 'never being able to find out how intuition works'. And even if we can't work it out, past human experience shows that fantastic discoveries and technologies will come out of the attempt.
        Mar 7 2012: How about, for example, spontaneous emotion-based or just random changes of mind? A machine would always either use (pseudo)randomness or calculate the probabilities of how a decision could affect the given problem. I think a machine can evolve an answer to a given problem either by calculating or by being random, but only a human brain has the ability to sometimes be truly randomly random (a paradox): that is, to sometimes decide emotionally, logically, or randomly when to be random and when not to be.
        • thumb
          Mar 7 2012: Are we truly random? I am interested in research showing that the human brain acts randomly. Perhaps our brains are very deterministic, and it's simply the external stimuli that are random. If that's the case, then we can definitely build a (deterministic) machine capable of responding to random inputs.

          Computers already take randomness from external inputs (like a mouse, keyboard, or other attached devices); there's no reason why an AI can't be subjected to the same randomness.
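
To sketch how that works in practice: most programs draw their "real" randomness from the operating system's entropy pool, which the OS feeds from unpredictable external events (device timings, interrupts, and so on):

```python
import os
import random

# Ask the OS for bytes from its entropy pool (fed by external events).
raw = os.urandom(8)
print(raw.hex())  # different on every run

# That entropy can seed an otherwise deterministic generator, so the
# program's "random" choices are driven by external unpredictability.
rng = random.Random(int.from_bytes(raw, "big"))
print(rng.choice(["explore", "exploit"]))
```

The generator itself stays fully deterministic; only the seed comes from outside, which mirrors the "deterministic brain, random stimuli" picture above.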
        • Mar 7 2012: Input from keyboards and other sources is seldom random for a computer, but when it is, what does a computer do with the random input? Now consider what a human would do, or has done, with random input from the environment: the random brush of hair on skin has probably inspired much poetry. Modern art is mostly random, as is its interpretation. You should probably be giving potential AIs 'Rorschach tests' for confirmation.
        • Mar 7 2012: Johannes, your argument seems to be 'We don't know how emotional changes of mind etc. happen, so we never will, and we'll never be able to build machines that do the same'. I'm sure you can see the flaw when I put it like that. Anyway, the bigger mystery is how we come to have conscious experiences: the 'hard problem' of consciousness. I'd put my money on someone coming up with a scientific answer for it eventually, and the rest of us reacting like Thomas Huxley: 'How extremely stupid not to have thought of that!' But we're nowhere near that point right now.
      • thumb
        Mar 7 2012: Except you'd have to treat a machine completely as a human in order for it to have the human experiences necessary for it to be able to know what's important and what can be ignored when it comes to having novel ideas.

        I think any future where we seriously consider treating machines like people before we've gotten around to treating people more like people is one that is ill-considered.
        • Mar 7 2012: If the machine is confirmed to be conscious, I don't see why that shouldn't take place.
  • Mar 7 2012: In my opinion, no; technology cannot replace human intelligence. Why? Has technology persisted since ancient history on its own, or would it somehow carry on mysteriously without us? I really don't think so. As the subject of this conversation is called "artificial" intelligence, technology is just an artificial (let's say) substance. If we scrutinize the lexical meaning of artificial, "made or produced by human beings", it can easily be seen that human intelligence is the creator and producer of technology, isn't it? For instance, if we assume that technology had fully replaced human intelligence, and we decided we needed teleportation, could the technology solve the problem with all the new formulas it would have to develop by itself, and then tell us, "here is what you have been expecting for so long: teleportation"? If it could, why hasn't it done such creative things until now?
    This is the opinion i support.
    Thanks...
    • Mar 7 2012: You're talking as if technology were one thing, with some sort of essence. This is a distressingly common mistake. A piece of technology is just an arrangement of matter. So is a person. There is no reason why we should not, at some point, work out how to produce an arrangement of matter which is both. And the reason why that arrangement of matter would be able to do creative things when previous pieces of technology could not would be that we had not previously worked out how to make things that do that. Please try not to be superstitious about things like 'nature', 'life', and 'technology'. They are just categories we impose on the world.
      • Mar 7 2012: Exactly, technology is one thing, but like every single thing, technology consists of parts and elements. Any newly created machine or artifact interacts with the others, which is an inevitable result of our developing world. So this is a natural arrangement of matter; there is nothing superstitious about it. And about that "arrangement of matter": you seem to hold that matter is technology's whole business, and that once a sufficient level of technology is reached, human intelligence will no longer be needed. Why, then, do software engineers, programmers, and machine, computer, and mechatronics engineers still study? Just to disappear after they have written some code or produced a machine? When an error occurs during its life cycle, will technology sooner or later repair itself? Is that it? Technology is just a tool, and every tool has a user.
        And as for these categories... we live among them, so they cannot be seen as isolated from our lives the way you treat technology.
        • thumb
          Mar 7 2012: I have to agree with Oliver; the technology of circuit boards, processors and software is no different from neurons and cells in the context of AI. Separating the two is merely semantics.

          It comes down to the old Blade Runner scenario, if there was a machine that was advanced enough to fool everyone that it was human, then by any standard it is intelligent.

          Technology is only a word. I think the definitions can cloud the discussion.
        • Mar 7 2012: Warren, the biggest problem with that Blade Runner scenario is that it's entirely possible that we could create machines that are advanced enough to fool everyone into thinking they're people, but which aren't actually conscious. That would mean that these machines' 'friends', 'lovers', in fact anyone who treats them as a person, would be living a lie. Nobody else on this thread seems to have twigged this despite me spamming it all over, and it has me worried. What if we were to end up handing over our civilisation to robotic 'successors' who weren't actually conscious? It would be the end of all meaning and value in the universe.
        • thumb
          Mar 8 2012: @Oliver: Suppose we created them, and they are such convincing deceptions that we, with all of our intuition, logic, and inherent "humanness" are unable to tell them apart from the real thing.

          Of what value, then, are our meanings, values, and civilisations? Not worth a darn, obviously. But if they can fake it better than we can, and are not truly conscious, then what right do we have to say "we" are truly conscious?

          They would be living a lie---but who says we aren't as well? They would just be better at believing and perpetuating their lies than we are. And isn't the ability to consciously lie and deceive, to oneself, and others, as defining a human characteristic as any?
        • thumb
          Mar 8 2012: Or better yet, it's UNCONSCIOUSLY lying to itself, and others. The presence of an unconscious would indicate, somewhere in those circuits is a counterpart CONSCIOUSNESS, wouldn't it?

          You're right; the Universe might (barring the existence of extra-terrestrial intelligence) lose the greatest source of meaning and value it's ever known. But it would have gained a far craftier source, predicated, as it might have been, on a lie.

          Guess what it boils down to is we may have to come to terms with the idea that we're just not as special as we'd like to think we are.
        • thumb
          Mar 8 2012: But, you may argue that there is a difference between unconscious lying, where the truth lurks but is hidden from conscious thought, and a complete lack of any knowledge of the truth at all. Perfectly valid. But that is functionally indistinguishable, and you might be hard-pressed to show someone knows something they have hidden, even from themselves.

          And the rule that the lengths someone goes to in maintaining their self-deception are evidence of their unconscious knowing works JUST AS MUCH for any automaton keen on convincing others it is real, because whether it is a "real" or a "fake" person insisting he or she is real, THEY ARE MAKING THE SAME ARGUMENTS.

          Or in the Blade Runner analogy, he or she is making similar arguments, just not as finely crafted as the machine's argument would have to be in order to fool EVERYONE ALL of the time.

          Which reminds me of an old adage by Abraham Lincoln: "You can fool all the people some of the time, and some of the people all the time, but you cannot fool all the people all the time."
        • thumb
          Mar 8 2012: Going further down the rabbit hole, because it's that time of night for me and I have nothing better to do than sit and ponder things whilst stuffing my face with dry cereal, if I were a soulless automaton trying to convince everyone I was real, I might be tempted to embrace the void and proclaim "I am not conscious, and I am not truly aware! This is the most reasonable facsimile of sentience I could come up with, but alas, it was to no avail. I did the best I could."

          To which there would be much laughter, because, after all, aren't we all just trying to do the best we can? And it's not like you would necessarily believe I wasn't conscious, either, for the very crux of your argument rests as much upon the machine's INCAPABILITY of knowing as it does humanity's CAPABILITY of distinguishing between what is conscious or not, and vice versa, wouldn't you agree?

          In any case, I hope that is sufficiently "twigged", as you say. I have finished the last of my cereal, and proof-read this half a dozen times or more, making additions, redactions, and revisions where I felt it necessary. These two Benadryl are about to hit me like a brick, so good night, and pleasant dreams.
        • thumb
          Mar 9 2012: @ Logan.
          The jury is still out on whether consciousness and the semblance of consciousness are one and the same. It might be a truth like Heisenberg's uncertainty principle, where to seem conscious, one must be truly conscious. It's still heavily debated, and I suppose it's debated because we have yet to really show where consciousness comes from. Like the forward vs. inverse problem, we have tests to tell whether consciousness is present, but not what its source is.

          Also, your comments remind me of a short story in "The Mind's I" called "An Unfortunate Dualist": http://themindi.blogspot.com/2007/02/chapter-23-unfortunate-dualist.html. It's a very interesting read.
        Mar 9 2012: Logan, you seem to have missed my point. Without consciousness there would be no 'they'. There would be only some objects that move and make noises in such a way that we are fooled into thinking there is a 'they'. The idea that they really are conscious just because =from outside= they look for all the world as if they were conscious is absurd. The difference is that, while it appears =for us= as though they are conscious, there is no =for them=. You have a right to say that you are truly conscious just because you have an experience of any kind. This is Descartes' 'Cogito ergo sum', put slightly differently.

          But I'm not being a dualist about this. When I say 'from the outside' I mean 'in day-to-day life'. I expect that once we work out exactly what physical processes go on in our brains we will probably be able to discover what consciousness is, and how it happens, in objective physical terms. We should then be able to construct new consciousnesses artificially. If I'm wrong, it may be that some sort of primitive consciousness is a fundamental part of all existence, and my worries are misplaced.

          'Of what value, then, are our meanings, values, and civilisations' if we can't tell the difference between fakes and the real thing? The value we place in things (including one another) is made 'true' or 'false' not because of how the things appear to us but in virtue of the way they really are. To take an example, this is why we respect the wishes of the dead in the form of wills: we are doing right by them even though by definition they cannot know about it. Or why we would rather know the truth than believe a comforting lie. Our civilisation is of immense value, and an unconscious civilisation would be of very little, because of factors we don't yet know how to detect. That doesn't mean those valuations are mistaken.

          Interesting story, Howard. I'm always dubious of the appeal to absurdity in philosophy. The unthinkable has frequently turned out true in the past.
        • thumb
          Mar 9 2012: Oliver, I feel that, contrary to your assertion, I have captured your point very well.

          What I was driving at, and the scenario I am outlining that may be just as likely as yours, is the possibility that we may not be conscious: there is no "they" because there is no "us". That we are (or might be), as you say about possible machine intelligence, only objects that move and make noises in such a way that we are fooled into thinking there is a 'they' or an 'us'.

          After all, look at our very criterion for determining what is conscious. How very convenient for us! We just *happen* to have all of these characteristics. It's a bit like throwing the arrow down into the ground and painting the bullseye around it.

          So, too, is the way we as a species feel about our civilisation. It only means so much to people in general because it is *our* civilisation; these things only have value and meaning *for us*. I don't believe ours is a civilisation that could be defined in any sense of the word as a tower of conscious thought, or a victory of the rational over the universe.

          By our own broad definitions of humanitarian thought and action, we act positively irrationally towards each other, and have done so for many generations; as a whole we are very much a failure, if not by nature's terms, survival of the fittest, then by our own.

          It is just what happened to have happened. We were (or could have been) bred with a matter-based compulsion to build stuff, and we were left alone long enough in conditions favorable to really accrue lots of stuff. And if a meteor like the one that may have helped the dinosaurs into their graves hits? We will be supplanted by something else, and the universe will not weep, because what was lost only really held meaning for humans anyway.
        • thumb
          Mar 9 2012: And it seems like you are discounting any experience a machine experiences, precisely because it may be of a different quality, or nature, than a human's experience. It seems like you are mistakenly equating "experience" and "awareness" with " *human* experience " and " *human* awareness " as being the only rule. Maybe this is just an evolutionarily-based phenomenon, that we, as a species, only recognize human endeavors as being of any importance.

          In fact, I find your wariness of philosophy humorous, and somewhat surprising. You have, up to this point, been talking as if there is some essence to people, some "real" thing, "true" thing, or "virtue" (words you have used) that makes people "people", that goes beyond mere appearances. And don't get me wrong! I don't necessarily disagree with you.

          But what, pray tell, is this metaphysical construct that we suspect is there, whether or not people can tell the difference or not (as might be the case in the Blade Runner analogy), that goes beyond just what we can see, in day-to-day life?

          While I am not sure, one way or the other, if we truly have some quality that is an unknown quality of or that transcends the physical, you seem to be asserting there is some virtue, a distinct aspect of, if not entirely independent from, the physical.

          All of which sounds very---philosophical. :)
        • thumb
          Mar 10 2012: Logan, I don't think Oliver has been talking "as if there is some essence to people that makes people 'people', that goes beyond appearances". It's the complete opposite. What he's saying, and I agree with him, is that people are caught up with just appearances, and that's short-sighted. The "appearance" has a source that generates it. It's short-sighted to stop at the appearance level and think that if we can replicate the appearance, then we have something we can call "conscious".

          For instance, if I happened to have synesthesia, or was color blind all my life and did not know that other people experience sensory information differently than I do, does that mean my experiences are real and others' are not? If I were tasked to explain what I experience, it would be completely different. We've shown that there are reasons for the evolution of these systems: we have more sensory cells for detecting greens and reds because we needed to differentiate leaves from fruit. These systems have a purpose, so I would say that if one is color blind, something is faulty with one's system. And if one were tasked to replicate this faulty system (without knowing it's faulty), one would believe the replicated system is correct.

          Right now, I see consciousness as something like colorblindness. It's as if we all have this flaw, and we don't have a reference for what isn't colorblindness. Since we do not know, we live thinking that our condition is acceptable. We may even try to make artificial systems that are colorblind. And since it matches what we observe, we are content.
        • thumb
          Mar 10 2012: I think this has developed into a matter of wanting to have your cake and eat it, too.

          "Oh, there's nothing special about people, they're just matter. . . But look beyond just what they look like."

          "Oh, there are mechanisms that are at work beyond appearances. . . But it's just another kind of appearance and we don't really know what it is."

          "Oh, it just looks conscious---it isn't really, because it hasn't truly replicated the "system" just "the appearance of" the system, and moves, acts, receives inputs and produces the proper outputs like the real thing. Not that it's actually the real thing."

          Another way of saying it: "If it looks like a duck, walks like a duck, and quacks like a duck---it's not a duck, it just looks, walks, and quacks like a duck. And sometimes it flies south for the winter due to its inborn instincts, which is strange for a manufactured system that not only won't freeze in the cold, but also, technically, wasn't born, so its instincts, also artificially conceived, are rather out of place with its condition."

          Here is how I look at it.

          There is a series of books called the Discworld series. One such book in the series is called "Hogfather."

          There is a point in the story where Hex ( http://wiki.lspace.org/wiki/Hex ) decides to believe in the Hogfather ( http://wiki.lspace.org/wiki/Hogfather ). When it begins to scribble out the list of presents it wants (which it, as a firm believer in the Hogfather, is fully entitled to do, even though it was merely programmed to believe, and isn't human!!!), Death ( http://wiki.lspace.org/wiki/Death ) stops and reevaluates what it means to be human, and what counts as legitimate human behavior.

          There comes a time when the debate becomes rather---pointless. You can move no nearer without a better definition, and if a better definition is not being offered here, then moving forward with what you have been able to come up with until a better definition becomes available is pretty much your only move.
        • thumb
          Mar 10 2012: Here is a quick overview of the book itself, in case you've never read it.

          http://wiki.lspace.org/wiki/Book:Hogfather
  • Mar 7 2012: Intelligence, definitely. Lewis Smart asked about creativity. Is creativity a form of intelligence? Keep in mind some pretty stupid things have been created.

    I think AI can be creative. The programmer simply programs in error and randomness. You know those random playing-card programs and random number generators?

    So yes, I believe human intelligence can be replaced by tech. As a concrete example, people used to be smart because they knew the definition of words. We have tech now to replace this intelligence: a dictionary.
    • thumb
      Mar 7 2012: Chase,
      Your point about how randomness in programs is not truly random brings up a philosophical question: are humans truly random? There are schools of thought that hold that our actions are very deterministic. In fact, some believe that everything in nature is truly deterministic. If that's the case, then creativity is simply an algorithm we've yet to discover. What you believe is random may actually be very concrete and determined, just seemingly random (like a really good, complex pseudo-random generator).
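      As a small illustration (a toy, not a production generator): a linear congruential generator is nothing but one multiplication and one addition repeated, yet its output looks random unless you know the rule and the seed.

```python
def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
    # Linear congruential generator: completely deterministic,
    # yet the sequence appears random to an observer who doesn't
    # know the constants and the seed.
    out, x = [], seed
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x)
    return out

# The same seed reproduces the same "random" sequence every time.
print(lcg(42, 5))
```

      If human "randomness" worked like this, it would be concrete and determined underneath, and only seemingly random from the outside.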
      • Mar 7 2012: If you want real randomness, you could put together a piece of hardware with a radioisotope in it which spits out numbers on the basis of the number of atoms of it that decay at any given time. Lack of real randomness is not an obstacle to computer intelligence.
      • Mar 8 2012: There is no such thing as randomness, only patterns we do not understand.

        If the odds are complex enough, randomness is achieved. If you can't predict the pattern, then it is effectively random to you...
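        Chaotic systems make "effectively random to you" concrete. The sketch below (illustrative only) iterates the logistic map, a one-line deterministic rule; two starting points differing by one part in a billion become unrecognisably different within a few dozen steps, so without perfect knowledge of the initial state the sequence is unpredictable in practice.

```python
def logistic(x, r=4.0):
    # One step of the logistic map: deterministic, no randomness anywhere.
    return r * x * (1.0 - x)

a, b = 0.2, 0.2 + 1e-9  # two nearly identical starting states
for _ in range(40):
    a, b = logistic(a), logistic(b)
# By now the two trajectories have long since diverged: a pattern
# exists, but an observer who can't measure the starting state
# perfectly can't exploit it.
```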
  • Mar 7 2012: The problem is more than making a machine that can respond to its environment like a person can. There is also the vital, vital question of whether or not it has an inner life like we do. Is it conscious? Does it have experiences? Is there a way the world is like for it? If we're to make machines that are supposed to be 'as intelligent as a human being' we need to know whether they are people or just advanced automata.

    There are two complementary reasons for this. The first is that if we make such machines and they are people, and we treat them like machines, we will be oppressing them in an utterly immoral way. The second is subtler but more unsettling. If we create advanced automata that act like people but have no consciousness, no inner life, then we will enter into meaningful and perhaps intimate relationships with them that will be fundamentally empty and false. Suppose you fall in love with an android, and the android appears to fall in love with you; you do all the things lovers do, you become committed to this android. You stand up and fight for the right to marry, and good people around the world stand by you, and you achieve that right and marry the love of your life. But the thing is, this isn't a triumph, it's a tragedy - because your android spouse doesn't love you. It doesn't have any opinion on you. It is not a person, it is a beefed-up laptop. In a very real sense, the person you love the most does not exist, and the whole emotional centre of your life is hollow in the most comprehensive and stomach-churning way. That's the risk, if we create highly capable machines that are not conscious.

    This is not to say we shouldn't try. But we must find out scientifically exactly how consciousness is produced. That's something we're nowhere near right now - we're much closer to creating androids of the sort I've talked about. So we risk creating these horrifying cargo-cult imitations of consciousness without even knowing about it.
  • Mar 7 2012: Yeah, I mean, obviously if you define technology to include "replicated organic life", then sure, we can make a viable strong AI; see Blade Runner.

    But if you are talking about inorganic AI, it's a bit more challenging. I suspect the answer is that we can do it. Given sufficient understanding of nanotechnology, I think we could design a basic program for self-propagation, development and maintenance, give it the ability to gather and process data independent of users, and ta-da, Cyberdyne Systems is born! It seems trivial in many ways: you just need to be able to design a system that can read, encode and manipulate microchanges in things. We just aren't there yet.
  • Mar 7 2012: My opinion is that whether it can isn't the question. With enough gigs/teras of memory, even a weak AI, pattern matching away, would eventually be able to replicate human intelligence within the field it was designed to operate in, or any field it had been given patterns to match. For me, the point of computers is to be a tool for mankind, not a replacement. So developing an AI that could replace us wouldn't actually serve a function.
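    For what "pattern matching away" can look like in its simplest possible form, here is a toy classifier (entirely illustrative, and far cruder than any real system): it labels text purely by word overlap with labelled examples, with no understanding involved.

```python
def classify(text, examples):
    # Toy "weak AI": pick the label of the example sharing the most
    # words with the input. Pure pattern matching, nothing more.
    words = set(text.lower().split())
    best_label, best_overlap = None, -1
    for sample, label in examples:
        overlap = len(words & set(sample.lower().split()))
        if overlap > best_overlap:
            best_label, best_overlap = label, overlap
    return best_label

examples = [("the match ended in a draw", "sports"),
            ("parliament passed the budget", "politics")]
print(classify("who won the match", examples))
```

    Give it enough labelled patterns and it can cover a field, yet it still has no idea what a match or a budget is.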
  • thumb
    Mar 7 2012: My perspective on this is: No.
    • thumb
      Mar 7 2012: May you elaborate? Why do you say no?
      There're decades of AI and philosophical research that question the strong-ness and weakness of AI. At this point in time, the two camps are thriving, and we have no definitive answer. I would like to know other opinions to try and tease out this subject.

      "The Mind's I" compiled by Douglas R. Hofstadter and Daniel C. Dennett contains a multitude of short stories and essays which are representative of the problem involved with determining the strength of AI. Both sides have very convincing arguments.

      I am very much interested in why you think no.
      • thumb
        Mar 7 2012: Intelligence is determined by the context in which it functions. You subconsciously or consciously make decisions in fractions of a second. You can even weigh things in your deliberations that you hadn't heard of an hour ago. If you find a technology that can adapt to every situation, find its own solutions, and transfer itself to every possible context, then you have found an earthly or extraterrestrial being. If you call your child a technology, then I would say yes, but you don't, and that is why I said no.
        • thumb
          Mar 7 2012: How would you define technology? If it's something artificial, synthesized by humans, then would you consider a synthetic being technology? What if we went ahead and tried simulating the human brain? We write programs to simulate individual neurons; we link them up to form chains of neurons. We need more space, so we move to supercomputers. Silicon technology does not provide the bandwidth and computational power we need, so we go to quantum computers. We still need more. We finally decide to make artificial organic processors. We manipulate DNA from the ground up. We build the cells from the ground up. We build the neurons from the ground up. Soon we have a completely artificially made being with a brain not unlike ours. Is it intelligent? Can it think? What if molecularly it's exactly the same as a human being, but its origin is not some natural process but a completely synthetic one, in which each of its atoms has been manipulated by a machine?
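          To sketch the very first step of that thought experiment (a toy leaky integrate-and-fire chain; every parameter here is illustrative, not biologically calibrated):

```python
def simulate_chain(n, steps, drive, threshold=1.0, leak=0.9, weight=1.5):
    # Each simulated neuron leaks charge, integrates its input, and
    # fires a spike into the next neuron when it crosses threshold.
    v = [0.0] * n            # membrane potentials
    spikes = [False] * n     # which neurons fired on the previous step
    history = []
    for _ in range(steps):
        new_spikes = [False] * n
        for i in range(n):
            inp = drive if i == 0 else (weight if spikes[i - 1] else 0.0)
            v[i] = leak * v[i] + inp
            if v[i] >= threshold:
                new_spikes[i] = True
                v[i] = 0.0   # reset after firing
        spikes = new_spikes
        history.append(tuple(spikes))
    return history

# Driving the first neuron makes a spike travel down the chain.
print(simulate_chain(3, 6, 0.5))
```

          Scaling this from three toy units to billions of realistic ones is exactly where the supercomputers, and eventually the artificial organic processors, come in.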
        • Mar 7 2012: You don't call a child a technology because it's built on DNA through the process of evolution and not by human design. Technology is what we build. From my point of view there's nothing stopping us from replicating a human brain with machinery except a gap in knowledge. There's no special thing that human intelligence possesses that a machine couldn't theoretically replicate. Neurons aren't unexplainable; there's a pattern. It's all much more complex than a robot AI designed to solve one particular puzzle, but I agree with Howard Yee that a strong AI is very possible.
      • thumb
        Mar 7 2012: The latest I heard on this is that those who tried to replicate a human eye failed completely, for its complexity is far too great; that task alone will take a long time. Why not first put all of our energy into real problems and improve the lives of billions through more transparency, supporting the right actions like http://www.youtube.com/watch?v=TABTqmFfe1U&feature=relmfu and http://s3.amazonaws.com/kony2012/kony-4.html, before we move on to those luxury problems, or even going to space while our own planet needs support? But if that is what somebody is interested in and passionate about, then that is totally cool; for me it's just something different ;)