
Artificial Intelligence: Can a machine think?

Is it really possible for a machine to have intelligence?

What constitutes intelligence?

Will a computer program ever be able to learn or have understanding?

  • Apr 19 2011: In a way, we are machines too. Our hardware has adapted to almost anywhere on this planet, and the software is always learning and rewriting itself to build better hardware. Human DNA is the best-written code for survival on Earth; although the resulting operating system lacks qualities that others have, such as sharper hearing or stronger muscles, its capacity for processing information has no equal. Since survival is the main purpose of existence on this planet, it is logical that we want to extend our abilities to exceed and excel in our environment, and through the use of tools we aim to achieve that goal.
    We want not only to run, fly, or swim like the other beings on our planet; we want to be better. Our tools have become so sophisticated that, unsurprisingly, we want to mimic ourselves too.
    Why do we want machines that imitate human thinking? Will this help our survival? Yes, I think they can help us a lot, but they will do exactly what we need them to do for us. Not because we want slaves, but because that is where their most urgent and critical use lies.
    I can envision robots patrolling the bottom of the oceans or the edge of our outer atmosphere, ready to detect and warn of any danger to human life. I can also envision them guarding our homes or monitoring our health, programmed to help us live happy and long lives.
    Yes, there will be machines that can imitate thinking, but they will never have human consciousness with all its implications.
  • Apr 16 2011: As we begin to understand the processes of neuroscience, it is becoming clearer every day that we have a collection of machines running in our brains. To date, we do not have the ability to match the scale of interconnectedness that these machines share (see Dan Dennett's talk). One day we will be able to scale. That is not the interesting question, because the day will come when we can match and exceed the scale of the human brain.

    Conscious biologies in our ecosystem have evolved without direction over millions of years; the architecture of these biochemical machines emerged accidentally along the way. One could assert that the design we have thus far is simply what was best suited to the savanna for a newly upright ape. Once we figure out how to at least mimic the basic machinery that gives rise to the emergence of consciousness, how would we optimally design it? What would it be like to have infinite eidetic memory? Would we wire in emotion? How would we direct such a consciousness without emotion?

    Those are the interesting questions.
  • Apr 11 2011: Intelligence, as a matter of fact, belongs exclusively to human beings. Machines can only recall the ideas that we store in them and use them as a reference to respond to our orders. A machine practically cannot make a decision on its own behalf; it simply cannot make either a positive or a negative decision — only the human being himself can.
  • Apr 11 2011: You should take a look at the conversation that Christopher Cop started.

    All your questions are likely addressed on that thread.