Mitch Smith


This conversation is closed.

What is "intelligence"?

This is related to the kill decision for robots.

There are a few primary words in play here:
"Awareness", "Consciousness", "Morality" and "Intelligence".
None of these seem to have satisfactory definitions - and yet we behave as if they do.

Please discuss?

I am interested in the dynamics of agency and advantage as primary determinants.

  • Jun 17 2013: First of all, intelligence is the ability to solve problems; the harder the problems you can solve, the more intelligent you are.

    Secondly, intelligence alone is not enough to be aware, conscious or moral. Awareness is the ability to explain either internal or external facts, so in order to be aware you need senses or sensors to detect what's going on outside, and an associative memory to record what happens both inside and outside your mind, in order to be able to link causes with effects, and that way be able to explain your actions.

    Thirdly, awareness alone is not enough to have a proper understanding of good and evil. Consciousness is the ability to foresee or predict the consequences of your acts, so if you are intelligent and aware you just need a "prediction engine" and lots of trial-and-error tests (experience) in order to become conscious. If you reach the point at which you can predict with some accuracy the consequences of your actions, then you can also understand concepts like: good, bad, responsibility, etc.

    Finally, morality is the ability to tell what's right and what's wrong. The problem here is that the most obvious way to define morality is in terms of survival, but since you are a conscious being, whose survival is more valuable? Yours, of course. Morality comes in two levels, personal and collective: at the personal level your morality aims to preserve your life, health and property; at the collective level your morality aims to avoid conflicts with the people you like. I don't see a functional reason why this should be different in an artificial being, since as far as I can understand, consciousness is tightly linked to survival, so you cannot suppress the desire for survival without suppressing consciousness.

    If you take a close look at Asimov's rules you will realize it is all about survival, but again the problem is that if you are conscious, then there is nothing to prevent you from questioning your own morality and even changing it.
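The layered definitions in the comment above (awareness as an associative memory linking causes with effects; consciousness as a "prediction engine" trained by trial and error) can be sketched as a toy program. This is only an illustration of the commenter's definitions, not any real AI system; all class and method names are invented.

```python
from collections import defaultdict, Counter

class AwareAgent:
    """Awareness (per the comment): senses plus an associative memory
    that records which effects followed which (state, action) causes."""
    def __init__(self):
        self.memory = defaultdict(Counter)

    def experience(self, state, action, outcome):
        # trial and error: each experience strengthens an association
        self.memory[(state, action)][outcome] += 1

    def explain(self, state, action):
        # awareness: being able to explain what happened and why
        return dict(self.memory[(state, action)])

class ConsciousAgent(AwareAgent):
    """Consciousness (per the comment): a prediction engine on top of
    the associative memory, foreseeing consequences before acting."""
    def predict(self, state, action):
        effects = self.memory[(state, action)]
        return effects.most_common(1)[0][0] if effects else None

agent = ConsciousAgent()
for _ in range(3):
    agent.experience("hand near flame", "touch", "pain")
agent.experience("hand near flame", "withdraw", "no pain")
print(agent.predict("hand near flame", "touch"))     # "pain"
print(agent.predict("hand near flame", "withdraw"))  # "no pain"
```

Once the agent can predict "pain" before touching, it has the raw material the comment says is needed for concepts like good, bad and responsibility.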
  • thumb
    Jun 27 2013: I read about "emotional intelligence" and made several attempts to explore it myself. I must confess it could be a better driving force than cognitive intelligence, maybe because it lies deeper, in the animal part of human behaviour and brain structures. Things people obviously should do, they don't; yet by using emotional intelligence properly they are able to do things right and easily. I don't know whether it borders on cognitive intelligence or stands on its own, but animals seem to have the same thing. Emotional intelligence is used in management now, so we should at least acknowledge it as "intelligence".

    Next... People frequently use the term "intelligence" to make a hierarchy: here is intelligence (with themselves at the top), and there is non-intelligence, mere rubbish. It's just for self-benefit. Sometimes people call "unintelligent" simply the things they don't understand, things that may be beyond their understanding. So the term should be kept out of hierarchies.

    Let's try to define it. Intelligence should have the capacity - an informational structure, maybe a special brain network - to think one step ahead, to make proper adaptations to changes, and to optimize them. Intelligence should accept feedback, even seek it out, to monitor changes and to check its own status: is it still correct? Intelligence should include methods for accepting things unacceptable at the ego level (such as personal death). Intelligence should include the ability to think about society and time as systems. Personally, I prefer intelligence that sacrifices its own mistakes, even beloved, comfortable preferences, for better behaviour - for example, using the best scientific model instead of instinctive behaviour. This is the problem of business: even when the best methodology exists, proven and available, owners prefer to rule instinctively, even to kill their own organization rather than implement models.
    So I connect intelligence with the ability to use one's own resources, and to find (or create) new ones, in order to be effective and efficient in the short and long run.
  • thumb
    Jun 26 2013: Matter conscious of itself.
  • thumb
    Jun 18 2013: If Emanuel Swedenborg is correct in his (following) definition of it then I think robots are without intelligence: QUOTE: "It is no proof of a man's understanding to be able to confirm whatever he pleases; but to be able to discern that what is true is true, and that what is false is false; this is the mark and character of intelligence."
    • thumb
      Jun 26 2013: Hi Edward,

      Thanks for your contribution.
      A thought came to me - according to Alan Turing, described as the father of computer science, you can deem a machine intelligent if you can have an intelligent conversation with it. I don't think we're there. I don't think any of the robots would pass the Turing test (as this is called) right now.

      Robots are not AI.

      I have sometimes used the term AI in TED conversations, but there I referred to the 'intelligence' part of it as agency. Example - British/American etc. intelligence has gathered information about... etc.

  • Jun 18 2013: OK, a few from off the net:

    Intelligence has been defined in many different ways including, but not limited to, abstract thought, understanding, self-awareness, communication, reasoning, learning, having emotional knowledge, retaining, planning, and problem solving. Perhaps more to the point is artificial intelligence.

    Situational Awareness-The perception of elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the future. In generic terms, the three levels of situational awareness are level 1-perception, level 2-comprehension, and level 3-projection. There is both individual and group or team situational awareness.
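The three levels of situational awareness quoted above form a simple pipeline, which a short sketch can make concrete. The sensor data, threat rule and numbers below are invented purely for illustration; only the level names come from the quoted definition.

```python
def perceive(raw):
    # level 1 - perception of elements in the environment
    return {"object": raw["blip"], "speed": raw["speed"]}

def comprehend(percept):
    # level 2 - comprehension of their meaning (a made-up threat rule)
    threat = percept["object"] == "aircraft" and percept["speed"] > 500
    return {**percept, "threat": threat}

def project(assessment, dt):
    # level 3 - projection of their status into the future
    # (distance covered in dt seconds at speed in units/hour)
    return {**assessment, "range_in_dt": assessment["speed"] * dt / 3600}

state = project(comprehend(perceive({"blip": "aircraft", "speed": 600})), dt=60)
print(state["threat"], round(state["range_in_dt"], 1))  # True 10.0
```

Each level consumes the output of the one below it, which is why the definition stresses that comprehension and projection can fail even when raw perception is correct.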

    Seems like we can control the sensory inputs, the interpretation of the sensory inputs, the probability of the interpretation being correct, and the programmed response of the machine. The context in which the responses are used to solve problems becomes an extension of the programmer's philosophy. I think the machine should be able to have its own consciousness as it humanizes its memory, processing speed and decision-making capability by comparison to the humans around it. Depending on hardware, there may be more similarities possible - if the computer has a true understanding of the difference between a biological organism and an electronic consciousness.

    Independent computer morality requires decision making about "right" and "wrong" that transcends initial programming, with correct emotional, political, and social interpretation of sensory inputs. There might not be enough similarity between cultures to determine the intent of an action with a high enough degree of certainty to make the kill decision, nor time to learn all behaviors, interpret merged ones, or predict new behaviors of the humans encountered.

    How important is the recognition of an enemy's change in will for peace?
    • thumb
      Jun 18 2013: That's a key question Robert!

      I tend to classify "empathy" as the ability to model the actions of others. The will for peace would have to be a component of the self before it would have meaning when observed in others.
      On top of that, "peace" would have to demonstrate some advantage - personal, or personal-through-mutual.
      This must be a component of human soldiers - and yet they still kill.
      • thumb
        Jun 23 2013: Re: I tend to classify "empathy" as the ability to model the actions of others.

        This seems to be a limited explanation of empathy. Other animals also display empathy without modeling the actions of another.

        Robert Sapolsky addresses the issues you are discussing in this talk regarding what makes us human.

        Theory of Mind
        The Golden Rule
        Pleasure in anticipation and gratification
        • thumb
          Jun 23 2013: No, Sapolsky is not saying that at all.

          He is actually saying the same as what I have said - that these other animals ARE modelling the actions of others.

          I think the confusion is the semantic fog around the word - and even Sapolsky falls victim to it.

          You both confuse empathy with compassion - these are 2 very separate concepts.
          Empathy is the fundamental register of the theory of mind - compassion is an optional behaviour that might arise after empathy has occurred.

          The work being done on mirror neurons supports my definition.
          When we drop the erroneous usage of these words, we gain access to the causalities that lead on from them.

          Here is some better material on Sapolsky's work that might help:

          Many thanks for pointing this out .. linguistic clarification is annoying but necessary.

          The aggression aspect has to do with advantage - it is a field that seems to ask for a slightly different approach for inquiry .. something to do with the role of status in a tribal scenario. This is the dynamics of totem in the emergent super-organism.. I'm looking into that this year.
        • thumb
          Jun 23 2013: The ability to retrieve information and use it to plan four possible future courses of action, with the most probable one highlighted, and then decide, 24 hours a day. The fuzzy off-air TV-static screen of envisioning five years ahead to an exact moment makes most ignore it as too statistically improbable to strike a correct scenario, but the complete blackness of ten years in the future scares us.
      • thumb
        Jun 23 2013: Do you know this person?

        More conscious or less conscious?

        Baroness Susan Greenfield CBE, is a British scientist, writer, broadcaster and member of the House of Lords. Specialising in the physiology of the brain, Susan researches the impact of 21st century technologies on the mind, how the brain generates consciousness and novel approaches to neurodegenerative diseases such as Alzheimer's and Parkinson's.
        • thumb
          Jun 23 2013: I suppose we had better look at Brian Pollard's work?

          Damasio still has the foundation work where consciousness is concerned.

          I think we would be better employed looking at awareness vs. consciousness.

          (many edits here .. I'm not sure why the link? In any case - it's been entertaining)

      • thumb
        Jun 24 2013: Hmmm, I contemplate this idea of empathy here and find that it is "other"-oriented, and is influenced by the perspective we form of the self, our self. What is our relationship to self? Our mindset can be a fixed mindset or a growth mindset, and this is important to consider in our view of our self and how it extends to the other. This is why there is an empathy issue.
        This video speaks about relationship having fractal structures that start with self.
        • thumb
          Jun 24 2013: Yes - I remember this one .. one tends to digest a lot of these and they all get sublimated into the theme.

          I agree with Cyrus to some extent, but after all this time, I can see a few bits he has yet to understand. Like the nature of truth .. he hasn't got there yet - he will because he already acknowledges the difference between the journey and the destination. Truth is unobtainable - that's the whole point of it.

          With the fractal nature of the self .. yes and no. A self has to have a static factor by which to define itself - it can't be just a strange attractor. This does not stop the fractal nature of the universe imposing on the dynamic .. but it sets certain limitations.
          It all has to do with the word "potential". This is the big one .. what's the difference between this moment and the next? It is potential. And potential is not prescriptive. Otherwise, all the laws of Newton and Einstein would be absolute - and they are demonstrated not to be so. They are useful only in the main - useful none the less - and we do use them to great advantage.

          Now - you have to go off and decide what "advantage" is.

          Have I sent you a copy of my draft thesis on the field theory of self organising systems?
        • thumb
          Jun 24 2013: Just a postscript on the Baroness's talk.

          I got the feeling she was delivering a 101 on neural science - and I, like most of the rest of the audience, nodded off. I did all that field-effect experimentation in my neural modelling back in the '90s .. there is a saturation problem in the synaptic potentials if you don't have a changing input and an adaptive motion output that alters the input outside of the system .. in other words, if the field effect is not engaged with an open system, it burns itself out. One wonders if she has anything to add or is just an apologist for the British aristocracy .. her barony appears to be a military firing range just outside of Oxford - it's a swamp - and her feudal lineage gets her into the House of Lords. I'm not all that impressed.
  • Jun 17 2013: It might be worthwhile to collect all of the terms that defy definition and then determine their similarities. I found one source that states that all languages have a core of indefinable words. Perhaps that is an inherent quality of language. That would explain why we "behave as if" these words have definitions. Words that have no satisfactory definition still have meaning.

    One reason that "intelligence" has no satisfactory definition is because we do not understand what it is. We determine the level of someone's intelligence by observing their behavior. Intelligence is a word for something that we assume is the causative factor of the behavior. If you were to thoroughly examine this observational process, you might come to doubt the existence of intelligence. It is possible that the entire paradigm is a very bad description of the reality, and that a good understanding of the reality will require a new vocabulary.
    • thumb
      Jun 25 2013: Hi Barry,

      Took a bit of time to examine your comment .. and I think you have highlighted a key thing:
      "We determine the level of someone's intelligence by observing their behavior."

      In this thread and others, I'm beginning to get a picture of the recursion of observation.
      Specifically looking at the recursive "depth" that gets called the "theory of mind".

      So there's a couple of things I'd like to develop:

      1. A general acknowledgement of the nature of words and definitions - we use them all the time, but have an erroneous assumption that what we say is what is being heard. Dictionaries help a lot, but they fail in identifying etymological drift - and they often do not offer any true useful definition. So I suppose this thread is to get a definition of intelligence that is a bit sharper than what is found in the dictionary .. I'll have a go at that in the conclusion statement. (for what it's worth ;) it will become my own functional definition for further personal analysis.

      2. The Recursive nature of observation shows up in our capacity to observe behaviour, not only in others, but between others .. to be able to observe observation itself. That is the key to the recursion - and it is potentially infinite (the observation of the observation of the observation of ...). This will form the basis of my definition - a measure of the depth of recursion. Others like to think it's a continuum .. but I am not so sure .. I think that it is a breakthrough effect where an observer can or cannot go to these recursive levels of perception .. certainly, when one meets an exceptionally intelligent person, they seem to have orders of magnitude more capacity to analyse behavioural situations. This definition would require a different word to describe "savant" or "talent" types of skill which do not involve recursion .. so a mathematical genius would not be counted as intelligent.
      Such recursion definition is more helpful in describing social dynamics.
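The "depth of recursion" definition proposed above (intelligence as how many nested levels of "X thinks that Y thinks that..." an observer can track) can be rendered as a toy function. The function and names are invented for illustration; the idea of scoring by nesting depth is the comment's.

```python
def belief_chain(agents, proposition):
    """Nest beliefs: each extra observer adds one level of
    'X thinks that ...' recursion around the proposition."""
    chain = proposition
    for agent in reversed(agents):
        chain = f"{agent} thinks that {chain}"
    return chain

def recursion_depth(chain):
    # the proposed 'measure of intelligence': depth of the nesting
    return chain.count("thinks that")

print(belief_chain(["Ann"], "it will rain"))
# depth 1: plain observation of another mind
print(belief_chain(["Ann", "Bob", "Carol"], "it will rain"))
# depth 3: observing observation of observation
```

The breakthrough effect described in the comment would then correspond to the maximum depth at which an observer can still keep such a chain coherent, rather than a smooth continuum.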
  • thumb
    Jun 17 2013: Do you think people actually think they all understand these abstract concepts in the same way? I am more familiar with people putting forward the definitions they are using in their work- but then, that may just be in the sorts of things I read. I am excluding social conversation on these subjects. Among people talking socially about things, there will inevitably be those who assume there are hard and fast definitions of things that are, in fact, ambiguous in meaning.

    I agree with George below that the ability to solve problems is often considered a component of intelligence and the ability to grasp complex ideas faster than is typical is another. Nobody, I think, believes there is a clean metric for measuring this complex capability. Some people know, and some do not, that intelligence is now considered to be variable for people rather than fixed at birth.

    I agree also with George that morality is considered by most people to mean the distinguishing of right from wrong, but people disagree about what is "right", what is "wrong", and whether there are absolute identifications for particular behaviors or whether there can be a legitimate subjective component.

    The problem with the term "awareness" is that it is so widely used in popular culture, often as a label for people who understand things the same way the labeler does. So when someone uses that term, a little flag should immediately go up. It has come to mean, in many contexts, "gets it."

    I think scholars who study consciousness recognize there is not a tight operational definition.
    • thumb
      Jun 25 2013: Yes.

      In microcosm, it seems that words act as an index to a personal aggregation-set of "meaning-maps".
      These maps might be similar between individuals, but there is no guarantee of it, and unlikely to be exactly matched .. although I can see how the empathy process might achieve that.

      Another variable in operation with "meaning-map-aggregations" is that they can have multiple entry and exit points (framing). So the frame also participates in the understanding process.
      There might be a need to include the capacity for multiple-frame management in the definition of intelligence. Although, at the high end, managing multiple frames simultaneously might become "loose association" .. I'll think on it. .. .. How about this: is recursive observation dependent on frame perception? For instance, is recursion dependent on accurately perceiving the frames in operation at each level of recursion? e.g. "I know what he's thinking about that other person because I know how he thinks" or, another level: "I know what he's thinking about what that other person thinks about yet another because I know how all of them think - and I know what each would think about what is being thought about them, and I can see who is correct and who is going to get it wrong - and what they are likely to do because of all this"
      And this sort of thing is not uncommon - we think in these terms all the time.
      • thumb
        Jun 25 2013: I agree that what you have described here is consistent with what those who study the field put forward, except in a different vocabulary. Assessing/interpreting/building a new observation into one's set of associated ideas is both inductive and deductive - it involves processing from the bottom up (inductive, from the senses) and from the top down (based on what you are calling the frame).
        • thumb
          Jun 26 2013: I think the vocabulary is important.

          The recursion applies to very specific circumstances of social perception - what I call secondary perception .. it does not apply to linear perception - what I call primary perception.

          Extensive linear-primary perceptions were once associated with intelligence, but are now being more associated with autism.

          This is why the vocabulary needs to move along.
          Psychology is no longer such a soft-science.
          Have you seen this guy?

      • thumb
        Jun 26 2013: I will listen to it when I have an hour.

        There are several senses in which thinking is recursive, as we check observations against our theories based on previous observations and then alter our theories based on new observations.

        What you describe in your sample of recursion is part of what neural scientists call having a theory of mind, which is, as you say, where those with autism fall short.

        My thought was only that vocabulary in this discipline does move along continuously. We enhance communication if we share a vocabulary more than if we create new vocabulary for what people in the discipline are already talking about.
        • thumb
          Jun 26 2013: Yes. We all hate jargon .. because it is a language of exclusion. Or so it seems.
          Jargon is the identity of a tribe. The tribe is the locus of understanding.
          We are just humans. We have to make it work as humans or it won't work.
          This is the failure of academia.

          As it turns-out, TED is a tribe. You will notice that there are a couple of TED tribes.

          The other thing is the temporal span of communication. I don't "dis" the etymological drift. I see that drift as important, but if we do not understand the function of it, we get lost in an infinite universe - while our survival reality is strictly local. A moving locality.

          we .. as viable organisms, need to keep a few things from drifting too far. I have identified a couple .. they are:
          1. The local minimum. This is the thing that cements perception into a way of surviving ambient conditions. But for the children of that survival, the act of survival actually changes the frame .. the context, and therefore, the child has the motion, the parent must die. So bang goes all religion and law.
          2. The index - symbols. Symbols are the way by which frame-sets are communicated in social organisms (strength in numbers). There are 2 classes of symbolic index: the static, and the dynamic .. in reality they are both dynamic, but have different "quanta" based upon "grain size". For instance, what is in our faces now has to be dealt with now .. but "policy" has a long term benefit or risk. Both of these temporal sets correspond to perceptional break-points: Primary perception and secondary perception.
          Primary perception has to do with what is in your face - it includes the un-named senses of the visceral stasis. Secondary perception is all about the adaptation .. the gradual motion towards what we are becoming - language, history, geography .. external change.
          We have 2 places. The personal and the societal. Beyond us and beneath us, the granularity of framing extends to others - bacteria, molecules, solar systems and galaxies ..
        • thumb
          Jun 26 2013: (part 2)
          Humans need to attend to the grain-size of humans - beyond that we are incompetent.
          We need to acknowledge that - and we cannot know without that .. we can make telescopes and other sensory extensions such as large hadron colliders, but sensory extension is a minor effect in the grand scheme of the universe .. we make much of it, yet we do little more than reduce the noise component of our perceptive ranges - and those who exist in the other granularities rule them. Let them do their job .. it's nice to know that .. it lets us stop being so lost.
          Chaos happens at boundaries - have some chaos .. but stop calling it "stochastic" start calling it "theirs-not-ours" - "they" will appreciate it..

          Personally .. If I was the Sun .. I would get pissed off if the slime moulds on Earth started butting into my business .. I would naturally observe that they are incompetent. Let them worry about why I snub them (Maunder Minimum) or look hard .. they have no idea - I got better things to do.

          And so we approach a new Maunder minimum .. and we could avoid that by being a little more accepting about what the Sun is up to .. we have our own grain to attend - the more we attend to our grain, the better being-in-it becomes.

          I am tempted to induce that a "self" is the boundary between 2 quanta of granularity - and it is the boundaries of grain that will ultimately define what "life" is.
      • thumb
        Jun 26 2013: It all seems a reasonable perspective except two things. First, why does adaptation necessarily suggest an end to religion and law? Second, why would the sun care and what do you mean by not being accepting of what the sun is up to? Are you referring to efforts to understand or control the environment?
        • thumb
          Jun 27 2013: 1. Religion and law represent a social local minimum that will become destructive as the dynamic frame moves on. There is another aspect - tradition. Tradition is a skeletal remnant of local minima stripped of contextual irrelevancies by the forward motion of context.
          2. We get very concerned when smaller quanta impact us negatively. Thinking of pathogens and termites, for instance. Of course, it is a wild conjecture to assert that the Sun is a part of the life continuum as an active entity (apart from being our local entropic source). I just find it fascinating how the Maunder minimum occurred shortly after methodical records of sunspots were started - and this current minimum shortly after dedicated satellites were placed to monitor the Sun - perhaps a loose association? But coming at the same time as we discover unpreventable global warming .. it's intriguing, at least.
        • thumb
          Jun 27 2013: Religion and law represent a social local minimum that will become destructive as the dynamic frame moves on ...... Mitch, I really enjoy reading the exchange between you and Fritzie ... However, even as the dynamic moves on, isn't there a need to draw a line in the sand that references where we were and where we are going? Otherwise we would be doomed to repeat the errors of the past. History is a great teacher, so both religion and law should be held in high regard by all of those wishing to go forward.

          The society you speak of is IMO exclusive ... the society I dream of is inclusive.

          This is way beyond me ... but as I read I learn. Thanks my friend. Bob.
      • thumb
        Jun 27 2013: I got a chance to listen to the Lakoff video. I had heard him talk about metaphors before in his Edge interview, but he did not in the interview use the word "framing." I am sure I have heard it in other contexts as well. In most neural science contexts the term I have heard is "mental models."
        • thumb
          Jun 27 2013: "mental models" refers to map sets.
          Framing refers to entry/exit points from those map-sets.
          Lakoff does not use the term to indicate how the frame induces entry/exit points. That is something that I am bringing to the table.
          Lakoff uses the example of a coffee-cup to illustrate a frame in terms of static definition. I prefer to widen the example by using water - water has a "mental map" in all brains - we encounter it, it has a map. But it has contextual frames - drinking water is not the same as drowning in it - it's still the same water - different frame. I would use the word "context" except that context does not communicate the boundary effect of framing. This is borne out by the experiments conducted that show how opinions can be influenced by the events immediately prior to the polling of the opinion. Such things fall out of the sensory frame, but remain active in the perceptual frame. This suggests that there are multiple focussing dynamics at work that are akin to the field of attention but operate through transecting "dimensions" of the "mental" mapping schema.
          Metaphor is not the only thing at work - metaphor simply describes associations .. we could say correlations, frame describes the causal mapping - and entry point governed by prior stimulus which navigates, topologically connected by causal mapping, to a specific exit point. If that exit point is unresolved, then the causal mapping will seek exit (curiosity).
          There is not a lot of material available to describe how temporal(causal) mapping occurs, but it's coming in bit-by-bit - exciting stuff!
      • thumb
        Jun 27 2013: In the experimental literature I am used to seeing the introduction of events prior to polling labelled as "priming."
        • thumb
          Jun 27 2013: Yes.

          I choose my word sets as best I can so they can be reduced into a universal framing method.
          It's an analytical tool that allocates meaning across several field definitions.
          It consists of the fields: sensory, primary-perceptional and secondary-perceptional.
          Across that lies the existential loop:
          local energy state-->sense-->perceive-->compare--> adjust-->remember-->field of potential agency-->sortation advantage/disadvantage-->decision-->agency-->changed local energy state .. repeat.

          This is the model for simple organisms which have capacity for any form of memory.
          The words "perception" and "belief" are identical. However the notion of step-wise levels of potentiation explains dogma and trenchant ignorance (local minima - see Minsky).
          For social organisms, the existential loop has a recursive loop embedded in it to evaluate indirect advantage/disadvantage (the field of secondary perception).

          Starting with that guide-set, a lot of things yield to analysis - but also requires some refinement of words which cross the boundaries described by the set.
          It is not empirical per se, but can be reduced to observed network processing dynamics. A lot of it is drawn from Damasio's observations and when his self descriptions are added (proto/core/autobiographical) and Wolpert's work on Bayesian behaviour is also added, the model gains a certain unexpected power.
          Vocabulary comes under significant stress to conform .. but I think the effort can yield advantage. And I don't mind sharing it - one can detect deliberate perceptional blocks .. which is an advantage for all of us.
          In this part of the study, I look at the word "intelligence".
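The "existential loop" listed above (sense, perceive, compare, adjust, remember, sort advantage, decide, act, repeat) reads like pseudocode, so here is a minimal sketch of it. Only the loop order comes from the comment; the energy values, the memory-blending rule and the "advantage" scores are invented for illustration and model nothing published.

```python
def existential_loop(energy, memory, steps=5):
    """One pass per step through the loop described in the comment."""
    for _ in range(steps):
        sensed = energy                            # sense local energy state
        percept = round(sensed, 1)                 # perceive
        expected = memory.get(percept, sensed)     # compare with memory
        memory[percept] = (expected + sensed) / 2  # adjust and remember
        # field of potential agency, sorted by advantage/disadvantage
        options = {"rest": 0.0,
                   "forage": 0.3 if sensed < 2.0 else -0.1}
        action = max(options, key=options.get)     # decision
        energy += options[action]                  # agency changes the state
    return energy, memory

energy, memory = existential_loop(1.0, {})
print(round(energy, 1))  # 2.2 - foraging stops paying off, so it rests
```

The recursive variant for social organisms would nest a second copy of this loop inside the advantage-sorting step, evaluating indirect advantage through the modelled states of others.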
      • thumb
        Jun 27 2013: I know the terminology you choose, Mitch, helps you think, just as others may use a visual schema to help them think. But when the unique language you choose makes what you describe seem more complex than the model you are actually describing, it makes intelligent people like Robert think this is way beyond him. It isn't, but the constructed language can seem to make it so!

        The brain and nervous system are complex but these ideas are not nearly as complex as you make them seem.

        I am used to reading the very same ideas in medical school neuroscience text (circa 2013) and scholarly articles in psychology journals, including some of the people you cite, and these sources are so much less encumbered with jargons, old or new. The language has become cross-disciplinary as the field now is.

        Some jargon is not yours, of course, like Bayesian behavior, but couldn't you define such a term here rather than making people who have no familiarity with conditional probability models look it up?

        It is only your choice of language that excludes people, I believe, who could otherwise engage with this easily.

        I make these suggestions with respect and for the possible benefit of the larger community here.

        You might take interest in how the Nobel laureate Eric Kandel, for example, writes on this same material.

        Above you referred to the use of exclusive jargon as "the failure of academia." But academic use of language in this area is much plainer than your own, without loss of power, meaning, or cross-disciplinary integration.
        • thumb
          Jun 27 2013: The model translates quite well visually. But this interface does not support diagrams.
          As a work in progress, it is necessarily untidy for now.
          Your suggestions are helpful. So I will review the use of "frame" and "context".

          However, some terms are inappropriate without undergoing some questioning. Which is what I am doing here with the question about what "intelligence" is.

          You see, the model aims, in part, at getting stability in symbology itself .. a mapping of "symbol-space" if you like. My method tries to avoid being too influenced by historical linguistics, as I think there is a deeper structure to be had - particularly in regard to recursive perception (theory of mind).
          Jargon is not under attack here - in fact, what I think I was saying is that jargon may be exclusive, but is also part of the tribal totem .. the locus of social identity. And within that locus one gains far richer communication.

          Certainly, there will be an attempt at linguistic unification once the model becomes refined and nearing publication - but it will not be at the expense of the function of the model. At that point, I am quite happy for others to include or exclude themselves based on their own totemic biases - that is how it works - those inside will be inside, those outside will be outside. I have no concern for those outside.

          BTW - I've read a bit of Kandel .. would not hurt to read some more ;)
      • thumb
        Jun 27 2013: I think the words frame and context are clear. "Context" is colloquial English. "Frame" is easy to define by example, as it can be used in different ways and all you need to do is convey yours.

        Expressions like "mapping of symbol space" or "stability in symbology" may appeal to me as a mathematician, but I do not think such language makes for richer communication.

        Clarity of expression will, I expect, be important if you actually look to publish something other than by self-publishing.
        • thumb
          Jun 28 2013: Hmm.

          Would you have a little time to help me sort out some math?
      • thumb
        Jun 28 2013: I can try. I am no longer any good at differential equations, for example. It's been a while.
        • thumb
          Jun 28 2013: It's about mutual Bayesian convergence.

          I have a rough understanding of the empathy process as an open system.
          But if it's abstracted into a closed pair situation conducted in what Damasio calls "autobiographical self" - then we might be able to get some mathematical models to describe it.
          Here's how it goes:
          Let's have person A and person B
          They each have a topological map of .. say .. a rock, a circle, an angle, the colour blue - whatever. However, these maps are divergent between the two - and for the sake of simplicity, each map could be a 2-dimensional array.
          So let's put the pair into a Bayesian series by which each in turn presents their topology to the other, then compares and adjusts, until both of the pair have an identical topological map.
          Here is the structure:
          Each person constructs 2 copies of their starting topology - one for self, one for the other.
          This creates 4 copies of the subject topology, say (Aa,Ab) and (Bb,Ba). Let's call it a "quadrad".
          The convergences between Aa/Ba and Bb/Ab can be independent and yield information arising from disparities between the final convergences - this disparity then becomes the basis of the known difference between the participants .. i.e. the theory of mind.

          The examination would reveal the convergence dynamics - how many iterations are needed, whether an identical map is achievable at all, and whether any chaotic characteristics or regular oscillations emerge if the process is divorced from frame-step and treated asynchronously.
          A tight formula would be nice because it could be plugged into more expansive models.

          Oh! Forgot to add - the convergence is done in a noisy environment - with divergence expressed as a magnitude, regardless of the sign of the actual difference - and this should reduce as convergence occurs.
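          The quadrad procedure above can be sketched numerically. The following is a minimal simulation, not the actual model: the map size, noise level and adjustment rate are invented parameters, and a simple "move toward what you heard" rule stands in for whatever Bayesian update the full model would use.

```python
import numpy as np

rng = np.random.default_rng(0)

SIZE = (8, 8)   # the 2-dimensional "topological map" of the simplification
NOISE = 0.05    # channel noise amplitude (assumed value)
RATE = 0.5      # adjustment step toward a heard presentation (assumed value)

# Each person starts from a divergent map and keeps two copies:
# one for self, one modelling the other - the quadrad (Aa, Ab, Bb, Ba).
A_start = rng.random(SIZE)
B_start = rng.random(SIZE)
Aa, Ab = A_start.copy(), A_start.copy()
Bb, Ba = B_start.copy(), B_start.copy()

def present(topology):
    """Transmit a map through a noisy channel."""
    return topology + rng.normal(0.0, NOISE, topology.shape)

divergence = []
for step in range(200):
    # A presents; B adjusts its self-map and its model of A toward what it heard.
    heard = present(Aa)
    Bb += RATE * (heard - Bb)
    Ba += RATE * (heard - Ba)
    # B presents; A adjusts likewise.
    heard = present(Bb)
    Aa += RATE * (heard - Aa)
    Ab += RATE * (heard - Ab)
    # Divergence recorded as a magnitude only, regardless of sign.
    divergence.append(np.abs(Aa - Bb).mean())

print(f"divergence: first step = {divergence[0]:.3f}, last step = {divergence[-1]:.3f}")
```

          In this toy version the divergence falls quickly to a floor set by the channel noise, so the identical map is never quite reached. Varying RATE (e.g. pushing it above 1) is one way to probe for the oscillatory or chaotic regimes mentioned above.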

      • thumb
        Jun 28 2013: How could you possibly estimate parameters for such a system?
        • thumb
          Jun 28 2013: Estimate ..what they are? Or the measure of them?

          I think it's possible - it's essentially Bayesian noise reduction .. I know there is some vector math that might be applicable .. it was used in GPS noise reduction before the US military stopped scrambling the satellite signals.

          The nature of the noise might be separated as a parameter .. could be Gaussian or chaotic or whatever .. but useful to be able to play with it.
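          The simplest version of that noise reduction looks like this - not the actual GPS technique, just the underlying principle, with invented values: for a Gaussian signal with known noise, the Bayesian posterior mean under a flat prior is the running sample mean, and its uncertainty shrinks as more observations arrive.

```python
import random

random.seed(1)

TRUE_VALUE = 10.0   # the underlying signal to recover (invented for the example)
NOISE_STD = 2.0     # Gaussian channel noise (invented for the example)

# Incremental running mean: after n observations the estimate's
# variance has shrunk to noise_var / n.
estimate, n = 0.0, 0
for _ in range(1000):
    observation = random.gauss(TRUE_VALUE, NOISE_STD)
    n += 1
    estimate += (observation - estimate) / n

print(f"estimate after {n} noisy samples: {estimate:.2f}")
```

          Swapping the Gaussian for a chaotic or heavy-tailed noise source, as suggested above, is exactly the kind of parameter worth playing with.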
        • thumb
          Jun 28 2013: here's what I'm thinking -

          If such a mathematical reduction is possible, then it could be programmed into a NetLogo experiment .. and you could track what happens to a shared topological construct in all sorts of conditional environments. Especially if the construct had cogency to the survival of the turtles (NetLogo agents) .. Minsky had this question in his TED talk - the problem with network processors .. local minima and the need to switch method between network, heuristic and genetic adaptation methods.

          You will see there is a spread of answers to the question "what is intelligence?"
          There are a number of ways people perceive it .. and you can see that net/genetic/heuristic divergence .. and more .. It's going better than I imagined.
  • thumb
    Jun 27 2013: Thank you for the links, I really appreciate it. I'll listen carefully.
    Regarding research on "emotional intelligence" - I have none; I have just read the things you call "opinions" and observed how it applies in real life, whether it is effective or not and who gains the benefits. That was enough to make several practical notes, but of course it could not be science research. My interest in learning what it is has been practical rather than general, so if you have more it will be helpful. Thank you again for such a detailed answer!
  • thumb
    Jun 27 2013: Amazing video, thanks!
    I asked about animals because an anthropomorphic projection could cloud our understanding. Also, humans sometimes look unintelligent - even less so than animals. Maybe a universal definition of intelligence exists, so I am curious whether it could cover all human behaviour or not.
    • thumb
      Jun 27 2013: Hi Anna,

      We are unable to escape our projections - being human. But we can, at least, expand the limits of our perception - and universality will have to be constrained to our human perceptions .. so let's push that constraint out as far as possible?

      So far, in the discussion, there seems to be a splitting of the definitions we have for this word "intelligence".
      There is the static autistic-savant, the dynamic recursive of politic, the inspirational-spiritual and the entrenched dogmatic/pragmatic of heuristics.
      Other analysts mention "emotional intelligence" but I'm not convinced it has an empirical basis - it seems more akin to politics ... but can be kept in mind as a candidate.

      So anything you would like to add is welcome - how do you define intelligence?
  • thumb
    Jun 26 2013: One question first: would you divide intelligence into human and animal, even for higher animals like chimps and dolphins? Are we speaking about exclusively human qualities?
    • thumb
      Jun 27 2013: I think it would be the height of vanity to assume that humans are separate from life (as some religious traditions would have us believe).
      Have you seen how dolphins and belugas can blow air-rings and play with them like hoops?
  • Jun 26 2013: Understanding that you are not God. All else follows from that. The fundamental trait of "non-intelligence" boils down to "I am God"--acting automatically, as if whatever act one takes is the best of all possible acts. Whether this is driven by molecular processes or simple "stupidity", it's the same thing--presumption of omniscience and omnipotence.

    The smartest thought anyone has ever had is "I don't know."
    • thumb
      Jun 27 2013: Agree to an extent.

      I would say "I am not nothing". This is the only honesty.
  • thumb
    Jun 17 2013: It's funny how some people expect human-like intelligence to emerge out of pure computing power.
    A machine is never going to give a s***t unless it's been designed to. And it won't even matter if you program it to pretend to care or to care for real, since both are exactly the same goddamn thing.
    Most of our humanity is rather our primateness, the fact that we haven't been designed by anything, the fact that we're sex machines evolved to be experts at making more sex machines. Self-awareness, language and love are just very specific programs and have nothing to do with the amount of intelligence in an entity.
    • thumb
      Jun 19 2013: That is all true if you assume the old computer model of serial calculation.
      But it's not like that anymore.
      The emergence of adaptive systems changes everything.
      These things such as language and love are not programs - they are emergences.
      The programs we observe mostly result from "attractors" discovered by the adaptive dynamic.
      The computational model described by Wolfram seems closer to the mark. Things such as sex, hunger, fear, sociality .. these things we call "drives" are the computational configuration which might be called programmed - from these configurations, everything else is dynamically emergent.
      The basis of most AI these days depends on guided emergence - network pattern recognition and synaptic association is very good at this, also genetic algorithms .. they all behave in the Bayesian mode of continual adaptation.
      Biological systems have the added advantage of physical evolution - the machines, so far, cannot execute physical redefinition or adaptation .. but that problem is being rapidly dissolved.
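      The "guided emergence" point can be made concrete with a toy genetic algorithm - a sketch only, with an arbitrary target pattern and invented parameters. The population is never programmed with the answer; selection over random variation lets a matching genome emerge, in the sense of an attractor discovered by the adaptive dynamic.

```python
import random

random.seed(0)

TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # an arbitrary "attractor" pattern

def fitness(genome):
    # Count positions that match the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

# Start from a random population; no genome is told the answer.
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    # Keep the fittest half; refill with mutated copies of survivors.
    survivors = population[:15]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

best = max(population, key=fitness)
print(f"generation {generation}: best fitness {fitness(best)}/{len(TARGET)}")
```

      The local-minima problem mentioned above shows up here too: with a deceptive fitness landscape instead of this simple bit-count, the same population can stall, which is one motivation for switching between network, heuristic and genetic methods.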
      • thumb
        Jun 19 2013: Very good point! And very well explained too!
        • thumb
          Jun 20 2013: Marvin Minsky is good value on this point.
          Unfortunately, in his TED talk he blows off his own analysis and tells jokes instead.
          You have to freeze-frame the diagrams and try to read the captions .. there's some interesting insights there.

          Another point is that an AI does not really have to resemble a human.
          Assuming that intelligence is a function of an organism, then it will conform to that organism's existential loop. For instance, some organisms are not at all dependent on sunlight and will not need to have sunlight as a primary element of cogitation.
  • thumb
    Jun 17 2013: Intelligence already exists in machines. There's no question about it.
    What's interesting to me is that a human being is an entity that compensates for a lack of intelligence with gambling (creative space) and taking responsibility. This will be more obvious in the future.