TED Conversations

This conversation is closed.

Computer engineers and technicians are not qualified to conceptualize or design systems that emulate individual human behavior.

There is nothing, absolutely nothing, in engineering or computer programming education that prepares one to accurately emulate human cognition, behavior, the human-computer interface, or the human-to-human interface.

  • Jun 24 2013: Well, computer engineers and technicians are themselves human beings, so they have some preparation.
  • Jul 4 2013: You (George QT) say, “In your opinion and experience, does the responsibility for evaluating the user's performance belong to the user interface or to the application itself?”
    I say that neither the user interface nor the application can have “responsibility”; such a burden is purely human. For that reason, George, I do not understand your question.
    Where to start (assuming that I am correct)
    I earnestly believe that I have presented sufficient information and evidence of a blatantly obvious problem with artificial intelligence, expert systems, and the fantasy of programmers and engineers actually emulating human cognition and the subsequent behavior.
    The next step could be to engage me with a contract sufficiently rewarding to pull me out of retirement for lectures and/or consulting, or to ignore my preaching and argument and continue with the delusional status quo.
    Respectfully,
    Wes
  • Jul 3 2013: You say, "I still have the unsolved puzzle of how to achieve that kind of user data retrieval on a business software interface running on a personal computer; however, your example gave me an idea. I need to do a lot of thinking first, but maybe I have found the key."
    While I admire your positive attitude, and with all due respect, I do not believe that you have "the key", George.
    You say, "...what is you opinion about gamification applied to business software? would it be a good idea to incorporate game elements to business software interfaces, in therms of user performance?"
    Most of my career as a training manager and instructional designer was in support of what is commonly referred to as "critical skills", where failure of the trained could result in loss of life, severe injury, and/or considerable organizational risk. I am a big proponent of simulations as learning devices, and not only ones employing complex control panels but also those involving individuals, interpersonal effectiveness, team building, and so on. These are games complete with very sophisticated challenges and severe individual and organizational risks. Gamification offers many advantages; in many cases it is at least a step toward acknowledging individual knowledge, skills, and attributes, rudimentary as that acknowledgment remains at the hands of engineers and programmers.
    Fini
    • Jul 3 2013: As always your response is very complete, thanks again, but you leave me with a doubt: in your opinion and experience, does the responsibility for evaluating the user's performance belong to the user interface or to the application itself? And why?

      If you are right and, as you say, "this is far outside of the programmer's or engineer's set of knowledge and skills", then there are some unanswered questions: Where to start? How reliable are the most obvious sources of information? Are there some practical examples out there that can be used to learn from?
  • Jul 3 2013: You say, "...so with the appropriate knowledge you can (hopefully) design an interface that enables both the machine and the user to learn from each other, and cooperate in a progressively smoother way."

    Yes, but it must be appropriate knowledge of human performance factors as well as knowledge of computer system capabilities, and that necessary condition is not commonly met, mainly because computer engineers and programmers believe they already know all they need to know about the human side of the performance equation; that belief is so far off base it has kept me shaking my head for many years. What the machine can learn from the human's behavior based upon his or her movement of controls is, at this time, elementary at best. Computer systems and even games can learn about the human, just as the human learns the peculiarities of the game and the interface, but it must be the result of design from a different perspective. Machine and game design is usually based upon a statistical or anecdotal analysis to determine the average user. The average user is an illusion and does not exist. The individual user is the player or operator, and he or she has individual qualities, knowledge, skills, and attributes that must be acquired by the learning machine or application. Again, this is far outside the programmer's or engineer's set of knowledge and skills and, given the arrogance I've witnessed over the years, outside the necessary open-minded attitude.

    Just think of the possibilities and advantages of an application or game that could really get to know the individual player and adjust the challenges, paths, risks, rewards, and punishments for the individual user. It IS possible with today's systems, but not with the parochial attitudes held by engineers and programmers.
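    As a minimal sketch of what such per-user adaptation could look like (the profile fields, the update rule, and every name below are invented for illustration, not a design from this thread):

    ```python
    # Hypothetical sketch: the application models the *individual* player
    # rather than a statistical "average user", and tunes its challenges,
    # risks, and rewards from that model.
    from dataclasses import dataclass

    @dataclass
    class PlayerModel:
        skill: float = 0.5          # running estimate of ability, 0..1
        risk_appetite: float = 0.5  # inferred from the paths the player chooses

        def record_attempt(self, succeeded: bool, chose_risky_path: bool) -> None:
            # Exponential moving averages: recent behavior counts most.
            alpha = 0.2
            self.skill += alpha * ((1.0 if succeeded else 0.0) - self.skill)
            self.risk_appetite += alpha * ((1.0 if chose_risky_path else 0.0) - self.risk_appetite)

        def next_challenge(self) -> dict:
            # Pitch difficulty slightly above current skill; scale the reward
            # of risky branches to the player's demonstrated appetite.
            return {
                "difficulty": min(1.0, self.skill + 0.1),
                "risky_branch_reward": 1.0 + self.risk_appetite,
            }
    ```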
    more---
  • Jul 3 2013: You continue, "For example, with the right sensors, you could know how hard they are pushing a stick, in which direction, and how strong their grip is; with such data you can measure not only the user's performance but also how stressful a particular situation is, directly and in real time..."
    The indicators you use as examples are correct: you can record such actions as is/is-not and how-much values for pressure, direction, time, etc. This will produce a record; however, such behaviors or actions come, as I mentioned, at the end of functional cognitive processes, and since those processes are unobservable, they are not captured. This means we have to use an analytical approach to judging the pilot's or operator's pre-action cognition. To rule out subjectivity as much as possible on our part, there must be standards of worthy performance against which we can compare. These may be acquired via a consensus of opinion among experts.
    Let's say, for instance, at the time of an event (a critical event that may indicate a pending cascading failure: one system's operational condition is failing, causing a dependent system to suffer), the operator is presented with a few options. What button or control would he or she push or adjust first, second, and so on? Would there be alternative paths to correction to consider, one perhaps better than several others? Could an indicator light or a system sensor failure be the cause of the alarm? Let's say such an event never happened before and was never considered by the designers: what standard would be used to judge the accomplishment as worthy performance? Did the operator simply not notice the deteriorating operation in time for reasonable correction? Were there too many indicators competing for the operator's attention (a cause of the Three Mile Island #2 disaster)?
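    To make the "standards of worthy performance" idea concrete, a small illustrative sketch follows; the event, the standard action sequence, and the time limit are all invented for the example:

    ```python
    # Score a recorded sequence of operator actions against an
    # expert-consensus standard for one hypothetical critical event.
    from dataclasses import dataclass

    @dataclass
    class Action:
        control: str
        timestamp: float  # seconds since the alarm

    # Hypothetical expert consensus: which controls to operate, in what
    # order, and within what time limit.
    STANDARD_SEQUENCE = ["feedwater_valve", "turbine_governor", "aux_pump"]
    TIME_LIMIT_S = 90.0

    def judge_against_standard(actions: list[Action]) -> dict:
        observed = [a.control for a in actions]
        return {
            "correct_order": observed[:len(STANDARD_SEQUENCE)] == STANDARD_SEQUENCE,
            "within_time_limit": bool(actions) and actions[-1].timestamp <= TIME_LIMIT_S,
            "extra_actions": max(0, len(observed) - len(STANDARD_SEQUENCE)),
        }
    ```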
    more --
  • Jul 3 2013: You said, “In the case you describe the user (pilot) is presented with a physical user interface, so they interact with the computer through sticks, pedals, buttons, etc. In this case it is not too hard to figure out ways to get information about the user's actions and reactions without interfering with their work flow and train of thought.”
    Just a comment to establish perspective, given the complexity of the human/machine interface: control systems like those of a modern electric generating station or airplane have supervisory sub-systems that follow algorithms to correct or handle a pending system failure, or to effect the control changes needed for simpler, predictable control events, by monitoring system sensor (is/is-not or how-much) indicators. Given that fact, the human operator (pilot, etc.) is there to handle the unpredictable events not covered or accounted for by the supervisory sub-system or sub-routine.
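    A toy version of that division of labor, with sensor names and thresholds invented for illustration (the supervisory routine handles the simple, predictable conditions and escalates anything unrecognized to the human):

    ```python
    # One step of a hypothetical supervisory loop over plant sensors.
    def supervisory_step(sensors: dict[str, float]) -> str:
        # "how much" check against a known standard
        if sensors.get("drum_pressure", 0.0) > 110.0:
            return "open_relief_valve"   # predictable: handled in software
        # "is / is not" check
        if sensors.get("flame_present", 1.0) == 0.0:
            return "trip_burner"         # predictable: handled in software
        # anything the designers did not anticipate is the human's job
        if any(value < 0 for value in sensors.values()):
            return "alert_operator"
        return "no_action"
    ```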
    You are correct regarding acquiring information about a user's actions. However, when you wander into the field of user “reactions” you have increased the difficulty of interpreting or even observing the “reaction”, given that it may be unobservable. That isn't to say it cannot be done, but it cannot be done using direct observation (pushing pedals, buttons, and so on). The “operator action” comes at the end of cognitive functional processes that cannot be directly observed. Still, there is an accomplishment, a product produced at the end of the invisible perception, judgment, decision-making, or problem-solving process. That accomplishment, product, and/or result may only be judged via analysis against a standard: Was the action taken within an acceptable time limit? Was it taken without exceeding a maximum expenditure of resources? Was the action a correct one as judged by comparison with a known standard? Knowing how to set up such a human performance system is not within the capabilities of a programmer or engineer.
    More --
  • Jul 1 2013: From my handbook, "We are all aware of websites and e-learning programs, simulations, and computer games with beautiful graphics and extremely well done interaction features. With such technological advantages, we may seek to fill a performance gap (between existing and desired), and if we do so only by supplying information (knowledge or data), the most dazzling programming and graphics will not directly improve performance. Needless to say, actually measuring such performance will be difficult to impossible. Solid performance measurements enable us to provide the individual trainee with more personal direction. For instance, as a result of good performance measurement, we may realize that the job holder's performance problem is not due to a lack of information or knowledge but due to his or her inability or unwillingness to remain situationally aware. I suspect billions of dollars each year are wasted in trying to improve an employee's bad performance by giving them more information."
  • Jun 30 2013: Personally, I would really appreciate a human-like robot that looks like Debra Winger, Sandra Dee, Jennifer Aniston, or even Jenna Fischer. On the serious side, at this point I don't believe we have to worry about it technologically in this lifetime. Until engineers and programmers actually know the implications of faithfully and completely emulating human perception, beyond elementary human judgment, decision-making, and problem solving, it should not be a concern. Should they actually understand and know how to integrate human cognitive and functional systems, then we likely should begin concerning ourselves with the ethics of it and the risks to humanity.
  • Jun 30 2013: There is a key word missing in this: the word "yet". My thought is not whether or not we could build robots that can emulate humans; it is whether or not we should.
  • Jun 29 2013: End of response to George QT:
    There are:
    1) simple tasks that include a cue (telling the person to begin the process) and an existing procedure guiding the individual through the task, with an existing standard for accomplishment;
    2) tasks where the cue is really determined by the player/operator: no formal cue exists, the player's initiative determines the cue, and an existing standard for the accomplishment applies;
    3) tasks requiring some analysis against an existing standard to determine a starting cue, with an existing standard for accomplishment;
    4) tasks where the starting cue has not been determined and no process exists, but a standard for performance and accomplishment exists;
    5) tasks that have never presented themselves before: they have no cue, analysis or synthesis is the burden of the performer, no process exists (one must be created by the player), and no standard has been established, yet the performance is critical.
    Computers and programming, while able to speedily handle the simple tasks, cannot do the more complex ones. And in complex challenges involving a dialog with a human or a team, dramatic improvement could be made if the points I am making were considered.
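    Restated as data, purely as an illustration (the mapping below is one reading of the five types above, and the field names are invented):

    ```python
    # Classify a task into the five types by which elements exist.
    from dataclasses import dataclass

    @dataclass
    class Task:
        has_external_cue: bool    # is the performer told when to begin?
        cue_needs_analysis: bool  # must the cue be derived by analysis?
        has_procedure: bool       # does a guiding process exist?
        has_standard: bool        # does a standard of accomplishment exist?

    def task_type(t: Task) -> int:
        if t.has_external_cue and t.has_procedure and t.has_standard:
            return 1  # cue + procedure + standard
        if not t.has_external_cue and not t.cue_needs_analysis and t.has_standard:
            return 2  # the performer's initiative supplies the cue
        if t.cue_needs_analysis and t.has_standard:
            return 3  # cue found by analysis against a standard
        if t.has_standard:
            return 4  # no cue and no process, but a standard exists
        return 5      # no cue, no process, no standard, yet critical
    ```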
    I really love the opportunity to share what I know about human performance and am very grateful for your time, courtesy, and patience. I am tired but looking forward to your response or that of anyone interested.
    *the term problem is meant in the mathematical or philosophical sense; that is, the problem is the difference between the desired and the existing condition.
    • Jul 2 2013: First of all, thank you again for your time, I really do appreciate it.

      In the case you describe the user (pilot) is presented with a physical user interface, so they interact with the computer through sticks, pedals, buttons, etc. In this case it is not too hard to figure out ways to get information about the user's actions and reactions without interfering with their work flow and train of thought. For example, with the right sensors, you could know how hard they are pushing a stick, in which direction, and how strong their grip is; with such data you can measure not only the user's performance but also how stressful a particular situation is, directly and in real time... so with the appropriate knowledge you can (hopefully) design an interface that enables both the machine and the user to learn from each other, and cooperate in a progressively smoother way. I still have the unsolved puzzle of how to achieve that kind of user data retrieval on a business software interface running on a personal computer; however, your example gave me an idea. I need to do a lot of thinking first, but maybe I have found the key.

      Like I said, I need to do some thinking before commenting any further on this particular subject, but I'd like to ask you a question: what is your opinion about gamification applied to business software? Would it be a good idea to incorporate game elements into business software interfaces, in terms of user performance?

      Your arguments are very enlightening, and I very much appreciate your time. Hope to hear from you soon.
  • Jun 29 2013: This is precisely why most engineers and programmers are not great salespersons -- and likewise most great salespersons are not great engineers or programmers. Wouldn't presenting complex ideas, concepts, and models in a way that corresponds to the psychological style of the individual be an improvement? I believe that it would. And it can be done, as the needed technology exists. Just think of the advances in games and computer simulation that could be made if the system really understood the player or operator.

    The effective supervisory system threshold is where the known issues requiring system intervention are simple and basic: 1) is-or-is-not conditions; 2) how-much measurements against known standards; 3) whether the problem requires more information or data items to resolve; 4) where to get the necessary input, e.g., from a human monitor, sensors, or another reference. Functionally, such simple computer tasks equate to functions of the individual human psyche. However, to consider the functional human psyche only within such limits is really to underestimate its capabilities. The complex of the psyche includes functions that are broader than anything yet derived by computer programmers and engineers. There are functions and routines that can emulate the human in some respects and in fact can do so faster than the human. There are many functional tasks that are far beyond the capabilities of computer systems and computer programming: for instance, more complex dialog between individuals, individual styles, and individual attitudes.
    Diagrammed, it would appear like this: [perceive (is or is not)] > [judge (logic, object function, value, knowledge, history, etc.)] > [decide or conclude (what has to be done, including engaging a sub-routine or calling or searching for more data)] > [do it].
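    Rendered as a skeleton in code, purely to show the shape of that pipeline (every stage below is an invented stub, not an implementation):

    ```python
    # perceive > judge > decide > do, as a minimal loop.
    def perceive(sensors: dict) -> dict:
        # is / is-not reading of each sensor
        return {name: value is not None for name, value in sensors.items()}

    def judge(percepts: dict, history: list) -> dict:
        # stand-in for logic, object function, value, knowledge, history...
        return {"anomaly": not all(percepts.values())}

    def decide(judgment: dict) -> str:
        # may engage a sub-routine or go searching for more data
        return "investigate" if judgment["anomaly"] else "continue"

    def do_it(decision: str) -> None:
        print(f"doing: {decision}")

    def cognition_cycle(sensors: dict, history: list) -> None:
        do_it(decide(judge(perceive(sensors), history)))
    ```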
    more to follow
  • Jun 29 2013: That being said, why have a pilot and co-pilot in the cockpit at all? (More modern aircraft systems have enabled the elimination of the navigator in the cockpit as a redundant technical position.) The reason to include a human interface even with the most complex supervisory system is to handle the unexpected, unplanned, or unprogrammed events and problems not accounted for in the programmed supervisory system. I know from direct experience that the weakest part of the fighter-plane-to-pilot interface is the communication link, like the heads-up display. I am also quite certain that this problem would be reduced if the communication concept were achieved in collaboration between the technical and the human performance engineers. Done with just the basics, the plane could learn about the pilot and adjust the plane-to-pilot display and communication to the pilot's individual style, compensating for his or her weaknesses or taking advantage of his or her strengths.
    At the point where the unexpected, unforeseen, and/or unprogrammed arises and the computer routine is no longer sufficient to the challenge, the necessity for human interaction begins. This is the point that engineers and programmers should, if only for economic reasons, be considering: for instance, how to include or integrate sound psychological factors in system processes to 1) eliminate, 2) reduce, 3) aid, 4) assist, and 5) improve the human performance part of the process. And this is precisely where my philosophical argument begins. The engineers and programmers are unprepared to wander into that territory. If they do, they are likely drawing conclusions based only upon anecdotal assumptions and not science -- something they would not do in their system design or programming. The human factors and functions territory is full of unknowns and risks, and yet, when properly studied, it includes useful models, designs, and worthwhile elements that are scientifically supported.
    more to follow
  • Jun 29 2013: Thank you, George, for responding to my, what may seem, terse or discourteous responses. I really appreciate your patience and courtesy and the opportunity to continue the dialog.
    To your hypothetical problem:
    General accounting, engineering, and many human-resources challenges or problems* would be responded to by the human based upon existing policy or procedure. For the most part, the human responsible simply has to recall or use reference material or guidance to tell them what to do first, second, and so forth to achieve their goal. More than likely, if such a problem is common enough to warrant a place in existing policies or procedures, a computer supervisory system or program can effectively handle it, and likely handle it better and faster than the human. The technical side of the issues at hand consists of basic "what if... then..." exercises. I believe it is commonly understood that all you have to do to fly a modern passenger plane from Boston to Chicago is place the plane at the beginning of the runway and somehow let the plane know that the pilot has received a proceed command. The plane can pretty much take over from that point. There is little for the pilot to do during most of the flight, as the aircraft can ably handle the common tasks associated with the trip. Early after the introduction of such aircraft supervisory systems, airlines knew they were saving huge amounts of money, if only because the system improved fuel economy over a human pilot's take-off routine.
    more to follow
  • Jun 27 2013: I see no argument of substance from engineers, programmers, etc. regarding my charge, beyond their subjective opinion that what I said was "insulting". I say the facts I highlighted are the insulting thing, not my intent in raising the issue. Given the responding comments, which reflect a lack of knowledge and appreciation of the problem, I see nothing changing, and we will all be left waiting for the delivery of the AI and Expert Systems promises, which were actually more fantasy than sound prediction.
  • Jun 26 2013: Thanks for asking about me. I'd be happy to supply my resume, but I am not looking for a job, though I would consider project work. I consider myself a Human Performance Engineer, and I guess that may be described in my publications, downloadable at http://hallowquest.com/index.htm (a rather primitive, non-commercial academic site).
    I agree completely with your suggestion about collaborative, interdisciplinary development teams. The importance of such a standard goes up considerably with the risk. Unfortunately, the so-called experts or professionals from the human performance side of the equation are also lacking in actual "individual" psychology, more likely than not having been educated in statistics and behaviorism, and that's a big problem.

    Regarding your next statement: all projects of any substance have risk factors to be accounted for, and I have had to justify and defend why I included actual psychology in the development of such programs. Here are a few comments regarding the critical skills programs from Dr. Thomas F. Gilbert (a behaviorist who worked for Skinner at Yerkes):
    (Under standards for decisions to revise and create courses) "No standard in the American training industry is met so poorly. But Penelec Generation Training (completely redeveloped in five years under the direction of Wes Stillwagon) seems to be meeting this standard better than any other training department we have witnessed..."

    "In conducting our study, we had to be quite wary of Penelec Generation Training's (Wes Stillwagon's) assurances that their training really is performance-based, because this is a term much in use these days by people who don't have a good idea of what it means. But we found PGT follows an exacting procedure (developed by Wes Stillwagon) for making their training truly performance-based."

    This was accomplished using the concepts I am preaching.
    • Jun 29 2013: First of all, thanks for such a complete response; I do appreciate your time. You sound like a person with very broad and long experience, so let's try something different. You say I'm giving you excuses; OK, avoiding the excuses, I'm open to learning. Just give me a set of practical parameters I can use as a starting point.

      So let's suppose I have to develop a system, whatever administrative system: general ledger, budget, inventory, whatever... In order to implement any of the features you mention in such a system, I need to perform some metrics on the user, don't I? But that doesn't worry me, since everything can be reduced to research and math; all the knowledge is out there, right?... What does really worry me is: how do I gather those data without interfering with the user's work and train of thought? And also without making him/her do something that is not part of his/her normal work? To me that is quite a challenge, since I can't figure out how to do it. So... please tell me what my strategy should be. How did you solve this problem?
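      One direction I can sketch, strictly as an assumption on my part: record timing signals the user already produces in the course of normal work, so nothing extra is asked of them (every name below is invented for illustration):

      ```python
      # Passive metrics: wrap existing UI event handlers so interactions are
      # timestamped without changing the user's workflow in any way.
      import time

      class PassiveMetrics:
          def __init__(self) -> None:
              self.events: list[tuple[str, float]] = []

          def wrap(self, handler, name: str):
              # Decorate an existing handler; the user sees no difference.
              def wrapped(*args, **kwargs):
                  self.events.append((name, time.monotonic()))
                  return handler(*args, **kwargs)
              return wrapped

          def hesitation_before(self, name: str) -> float:
              # Time gap preceding the first occurrence of an action: a crude,
              # unobtrusive hesitation/doubt cue. Returns -1.0 if unknown.
              for i, (n, t) in enumerate(self.events):
                  if n == name and i > 0:
                      return t - self.events[i - 1][1]
              return -1.0
      ```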
  • Jun 26 2013: With all due respect to your credentials, I do not believe you really grasp the breadth, scope, complexity, and dynamics of the term “cognitive”, i.e., what is actually going on in the individual human psyche under that umbrella term. If you did, we would not be debating in this thread. Even terms such as “become familiar” or “understand”, although both commonly used, have at best fuzzy definitions. No reasonable person with a real psychology background would accept them in scientific dialog. Needless to say, if you were attempting to program a system to “understand”, you'd have darn little to work with to objectively define the goal, wouldn't you?

    Your excuses for not including sound individual psychology concepts, models, terms, and language in your programming objectives beg the question: how do you know whether or not your emulation goals are achieved? Or is the industry satisfied with accepting unfulfilled AI and Expert System promises from engineers and programmers? Perhaps they are, but only because they do not have the necessary knowledge of the terms, concepts, models, or language to judge. They really must be using fuzzy math in their accounting too.
  • Jun 26 2013: The paragraph to which you refer in your brief thread posting, where the term “accuracy” was used, was a quote of your own that I preceded with the qualifier, “You said”. So the use of the term “accurate” that you are objecting to is in your own statement and not mine.
    In your (George QT) latest thread posting, you say, “So perhaps the curricula of computer science related careers ought to include a little bit of physiology. I don't have any problem with that.”
    If we can agree with the Wikipedia definition of “physiology”, “…is the scientific study of function in living systems. This includes how organisms, organ systems, organs and bio-molecules carry out the chemical or physical functions that exist in a living system, ” then,

    Given what I know about the application of actual “psychology”, and not the nearly useless study of “behavior” (Behaviorism, a la B. F. Skinner, et al.), I say that in today's world what is NOT known about individual human psychology by programmers, system analysts, and engineers would fill dozens of volumes. To state, as you have, that the curriculum of computer science “ought to include a little bit of physiology” (a mere anecdotal assumption on your part) tells me that you do not really understand and appreciate the gap between what is currently known about human psychology and what is required to develop AI and Expert System applications, hardware, and interfaces that actually and faithfully emulate individual human communication, perception, judgment, decision making, problem solving, risk taking, and so on. The sad part is that it seems you have concluded that you do not have to learn actual psychology at all. And this is at the core of, and supports, my original statement.
  • Jun 25 2013: You say, “In the field of human computer interface we do the best we can with the knowledge and tools at hand, just like in any other industry.”
    I emphatically say, you do NOT do the best you can and that there is a world of improvement that can be made toward the goal of emulating the human psyche in computer systems and applications.

    With very little work but a huge change in attitude, one could easily expect computer systems to:
    adjust and respond to the individual user's
      attitude
      strengths
      adult maturity
      willingness to take risks
      decision making
      attention to detail
      interpersonal effectiveness
      abstract thinking
      intuitive thinking
      value judgment, and more;
    compensate for abrasive personal styles in machine-human dialog;
    assist in team building because the system understands team/individual weaknesses;
    improve the hiring, training, and promotion of individuals for a job.
    We cannot expect this with existing attitudes within the engineering and programming fields. And the loss is the failure of Artificial Intelligence and Expert Systems to live up to their promises.
    Relative to your “training” comments: I've been engaged in critical skills training for decades, where failure of the trained may result in loss of life, severe injury, or great financial loss to the organization, and I've employed my common-sense concepts in doing so, resulting in exceptional, award-winning programs recognized for their accuracy to on-the-job needs by the New York State Board of College Regents.
    Regarding your opinion that I'm being “unfair”: if what I say is perceived as “unfair” and yet is inarguable, what is your point?
    Respectfully,
    • Jun 26 2013: First of all, why didn't you start the debate by telling us a little bit about your field of expertise, formation, etc.? Personally, I don't know if I'm talking to a physiologist, an engineer, a scientist, or someone from the military; I think that info is necessary in order to establish a friendlier talk.

      Of course, if we are talking about critical systems, features like those you mention easily qualify above the "nice to have" level, but at least to me it is obvious that that kind of system requires an interdisciplinary team to develop and maintain. It is plain dumb to try to develop or maintain a safety-critical application relying on just one kind of expert; even mission-critical systems are developed by interdisciplinary teams. So if critical systems fail to have an adequate human-computer interface, the blame is not on the programmers / engineers but on the management, because it is a management task to gather all the necessary experts and make them work together effectively and efficiently.

      Now, let's be honest: there are few safety-critical systems, and most of them depend on governments, and not all organizations have or require mission-critical systems. So in real life, how many computer engineers / programmers work on critical systems? The majority of us are busy with more trivial tasks, for which an intuitive user interface is just enough. But beyond that, and more importantly, most of us work with limited budget and time, so honestly we do the best we can with the resources available.

      Finally, a point you are overlooking is the fact that as computers become more powerful, the demands on them grow just as fast, so in practical terms we still face the same challenge we have had since the beginning: to make them respond quickly. That is why AI has failed to live up to its promises, and not otherwise.
  • Jun 25 2013: You say, “Human behavior is an area in which I have serious doubts it is a good idea to try to accurately emulate in a computer; in my experience actual human working behavior is mostly partial, subjective, inefficient, biased, in some cases irrational, and even corrupt in extreme cases, so why bother trying to accurately emulate such vices?”
    I fully understand your opinion on this matter because you are an empiricist, and the fact that psychology seems to have no common language or concepts among its educators or professionals supports such an opinion in no small measure. However, it doesn't take much research into works that have been ignored in modern psychology education to understand what went wrong. I suggest that simply reading the two-volume set “The Principles of Psychology” by Harvard professor William James (still available) would awaken the interested reader to psychology far outside “Behaviorism”, the school that has received most of the research funding in the last 100 years. That funding pattern rests on the erroneous belief that statistical studies will serve the greater need, which of course means that the study and consideration of the “individual” human psyche is cast aside in favor of defining the center of the bell curve.
    As I mentioned in other postings in this thread, “behavior” is a product at the end of considerable unobservable cognitive processing, and therefore, in today's more complex world, it is less relevant to the study of human perception, judgment, decision-making, problem-solving, etc. than it was in the days of time-and-motion study in manufacturing. This, however, does not excuse abandoning empirical study or consideration, as you suggest, in attempting to emulate the human within computer systems.
    More to follow
    • Jun 26 2013: OK, on this point I do acknowledge you are right; however, I must point out that you used the word "accurate" in a misleading way.

      So perhaps the curricula of computer science related careers ought to include a little bit of physiology. I don't have any problem with that.
  • Jun 25 2013: You say, “You are right; however, there are a few points you seem to be overlooking. Human cognition is a very broad field, which implies and includes associative memory, senses, pattern recognition, thought, prediction, and other mind functions which are really hard to emulate in both hardware and software.”
    I say it is exponentially more difficult when the individuals charged with the task of emulating those things in a computer system do not have the language, models, and concepts necessary to fulfill the objectives, and they do not, even though the necessary prerequisites exist. I suspect that the first step necessary to overcome this mental block would be to eliminate the arrogant belief that the engineer or programmer knows all they need to know to attend to such tasks.
    You say, “Hard problems require time to solve, harder problems require even more time, and so on, so the reason for us not having that training or knowledge is purely economic (as time is money).”
    With all due respect for your opinion, solving such problems requires the first step of acknowledging their existence, and I believe the postings thus far, at least from other participants in this thread, indicate this hasn't happened yet. Honestly, the knowledge exists, but the walls that keep thinking inside the box prevent progress.
    More to follow
    • Jun 26 2013: Could you please explain how you concluded that computer engineers / programmers have "the arrogant belief that they know all they need"?

      Computer programming is a cognitive task; whenever a computer engineer / programmer faces a challenge in a field in which he/she doesn't have any knowledge, the very first step is to get familiar with the language, models, and concepts (already) developed by the experts in that field. As a programmer / engineer you just don't go trying to solve a problem without understanding it first. Often you start a project knowing nothing of a specific field and end up becoming an expert, perhaps limited to a small part of such a field, but a top expert nonetheless. Even scientists do programming in order to get a better understanding of their own hypotheses and models. The computer forces you to ask, research, and study in such a strict way that it makes programming a learning tool potentially more powerful than formal academia. So we do not know everything we need, but once faced with the problem, the computer will force us to either learn or quit.

      That said, I agree the average engineer / programmer (including myself) doesn't have the knowledge to accurately emulate human cognition. I also know the knowledge is available, but in real life we are struggling to serve hundreds of users at the same time, or to make a phone app respond quickly, so our first concern is to squeeze as many operations as possible into every computational second. Computer cycles are our "currency" and the base of our "economy"; that's why we insist on avoiding anything that seems to waste them. So my point is: unless you give us an economically convincing reason why we should be concerned with acquiring that knowledge, many of us will consider it "nice to have" but never a priority when there are more urgent challenges to solve.

      BTW, to respond please click on the "reply" link and make sure you are typing in the right field; threads become messy and hard to follow otherwise.
  • Jun 25 2013: What I've said thus far, and I suppose how I said it, may be perceived as harsh, but that perception belongs to the reader or to those considering my argument; harshness was not my intent. And if the reader simply considers my points against their own or popular belief, I believe this is a good thing. Such criticism is, I believe, essential to maintaining a healthy scientific atmosphere.

    Respectfully, I suggest that your statistical decision making provided, in my opinion, only a scant understanding of the decision-making and action-taking processes and functions of the individual human being.
  • Jun 25 2013: You are right; however, there are a few points you seem to be overlooking. Human cognition is a very broad field, which implies and includes associative memory, senses, pattern recognition, thought, prediction, and other mind functions which are really hard to emulate in both hardware and software. Hard problems require time to solve, harder problems require even more time, and so on, so the reason for us not having that training or knowledge is purely economic (as time is money). Human behavior is an area in which I have serious doubts it is a good idea to try to accurately emulate in a computer; in my experience actual human working behavior is mostly partial, subjective, inefficient, biased, in some cases irrational, and even corrupt in extreme cases, so why bother trying to accurately emulate such vices?... In the field of human computer interface we do the best we can with the knowledge and tools at hand, just like in any other industry. The human-machine interface has always been a challenge for engineers and technicians of all industries. Training is a key issue in the human-machine interface; there are machines where, if you don't complete the training, you might end up dead, so the problem is not exclusive to the software industry as you seem to imply. Driving a car is something most of us consider an easy task, yet you cannot put a person who has never driven in the driver's seat and hope they will drive away safely; even the bike requires training, so why should a piece of software be the exception?

    I don't know where you are aiming with this debate, but with all due respect, unless I'm missing something, your position sounds a little bit unfair.
  • Jun 24 2013: You said, “If you are saying that these employees need additional training by psychologist, that's fine with me.” That's not what I am suggesting at all. I am saying that selection, promotion, training, and management must be considered from more than the technical system to be operated, and that those who may be totally familiar with the power plant's technical systems (the computer programmers, engineers, and systems analysts) do not have the necessary education and experience to develop and deliver economic and effective human resources development programs suitable to such a complex technical operation. They do not have the experience or education to understand the interface between the system and the human, or the concepts of an operational team. In addition to the technical systems, the training and testing may also include challenges demanding, for instance, improved interpersonal effectiveness, attention to detail, improved ability to synthesize, better employment of logic and value judgment, improved perception, and so on. Such things are not included in the “User's Guide” for the systems. While the training may be best served by a subject matter expert, the training development must be a joint effort between the SME and a human performance specialist. The SME trainer must themselves be trained in the human performance part of safe and effective operation.
    Since you mentioned the Japanese nuclear plant disaster, I will mention the Three Mile Island #2 reactor, which was a human performance disaster: after success with the start-up and operation of TMI #1, the engineering management decided that they didn't really need all of that training to support the start-up and operation of TMI #2. An exact duplicate of the event happened at an Ohio nuclear plant with a well-trained staff, and disaster was avoided.
    • Jun 25 2013: OK, we really don't have any more disagreement. When I said more human resources development, including psychologists, is fine with me, then of course other training advisers are OK too.
      By the way, I have never been a computer engineer or a system programmer. I have a master's degree in Statistics and a PhD in Biostatistics. I took a couple of courses in Statistical Decision Theory, so I understand decision-making strategy based on the risks of errors. I argued for the computer engineers because of your harsh criticism of them, not because of any opposition to the need for additional training.
      Cheers.
  • Jun 24 2013: Let's just use your example. In the current setup of a coal-fired generation plant, what kind of workers and supervisors do they hire for process control and crisis-prevention decision-making? I understand that most of them are engineers or technicians trained in the field of power plant operation. Am I wrong?
    If you are saying that these employees need additional training by psychologists, that's fine with me. But I do not particularly think that's necessary. Take the Japanese nuclear plant disaster: the decisions made by the management and the government bureaucrats were no better than those of the engineers and technical supervisors at all (probably the management even impeded the in-plant decisions for the sake of salvaging the remaining reactors).
    Here you are. You may criticize the current industrial system, but at least it is currently the "normal" operation model.
  • Jun 24 2013: Barry, I have a right to a qualified opinion, and I am open to being proven wrong, but in over forty years in the business, I haven't been proven wrong. I'd sincerely hope that you could do so without being insulted or feeling that I am simply being unfair or unkind. I'd be happy to go up against any systems engineer facing a complex man-to-machine challenge. Let the computer programmer or systems analyst do the work their way and I'll do it mine, and we'll find out which produces the strongest results in economy and effectiveness. We'll see whose work relies on anecdotal evidence and whose does not. We'll see which one produces actual, scientifically defensible results and which does not. Perhaps we can even do a little wagering?
  • Jun 24 2013: You say, “When your saying that computer engineer and programmer are not educated to built such principles into the software is an insult to our education system.”
    I did not mean to insult in any way; I am stating an opinion that is open to argument, and I would accept a reasonable criticism of my point. Simply judging that I insulted computer engineers' and programmers' education is not a reasonable response to my charge. I am prepared to demonstrate that education sufficient to really emulate human cognition and the resulting behavior does NOT exist in such curricula. I'd certainly like to see a case, just one case, that would prove me wrong.
  • Jun 24 2013: I say that the current systems designers are as unfamiliar with human perception, judgment, decision-making functions, problem solving, and their cognitive products as psychologists are with a Schmitt trigger circuit. As such, given the computer programmer's or engineer's current understanding of the individual human psyche, a programmer producing a robot that can make any more than the simplest judgments is unlikely. They quite simply do not have the understandings, models, concepts, and language necessary to the task.
    In your statement you expressed a need to employ psychologists to do “behavior modification” to solve human performance problems like your example. I say that with the proper human performance systems in place, individuals could be selected and trained, without anecdotally defined learning objectives, who would do the correct thing (or at least be so disposed). Once one understands the behavior of an individual within a network or social complex, such as the platoon of guards or any team facing complex tasks or objectives, this offers management and organizational advantages.
    The term “Behavior Modification” tends to run up a red flag for me, as psychologists who follow that concept are beholden to those like B. F. Skinner, who did not recognize any human cognition and worked only with observable behavior. This was useful in the days of time-and-motion studies, but given the amount of energy spent by the individual human in unobservable cognition, “Behaviorism” is today a philosophical dinosaur. It lacks the concepts, models, and language needed to properly support today's more complex work tasks. Behaviorism relies heavily on statistical studies; unfortunately, today's more complex job tasks are performed by individuals who may be far removed from the statistically defined average or norm.
    more to come
  • Jun 24 2013: I say that, while there may be a rare exception, given a complex human/machine interface, the human factors resulting from an analysis by an engineer or programmer will mainly be anecdotal and scientifically unsupportable. I think it is important to distinguish the term “behavior” as the product of the cognition and not the cognition itself. The product of the cognition may be a decision to behave in a certain manner, while the actual behavior may not happen because the individual may not wish to take the necessary risk or incur the loss. The programmers lack the language and concepts necessary to capably handle such a gap.
    I am suggesting a complex systems interface and not robotics as servers. As an example, consider a coal-fired electric generating station control panel with meters, chart recorders, alarms, switches (binary), rheostats, supervisory systems, and communication to outboard employees for human investigation and analysis. Let's say during a shift a condition arises where several system signals and meters, individually not surpassing their critical alarm thresholds, show an unusual combination of increases or reductions that may indicate the possibility of a cascading failure; a capable and experienced control room operator synthesizes an approaching cascading system failure (or not). Most of the cognition, including analysis, perception, problem solving, decision making, attention to detail, synthesis, weighing of options, logic, and value judgment, is unobservable, unlike a behavior. I specified a coal-fired generation plant because they have many more complex systems and sub-systems than a nuclear plant. The scenario may not be too different in a complex computer game or military simulation. More to follow
  • Jun 24 2013: What you are implicitly saying is that computer engineers and programmers are incapable of designing a computer system, based on AI, that mimics human cognitive behavior. First, we must clarify what kind of behavior you are talking about. At the present time, most robots and automatic machines function as servers that take a command and perform a given task. No deviation in this task is allowed. If we continue along this path, we may be able to let the robots make a judgment to act on SEVERAL DEFINED ALTERNATIVES assigned by the software designers. A situation like the Abu Ghraib prison guards will never occur if the robots are programmed to act under certain given moral principles, such as part of the Ten Commandments.
    Let me comment on your specific statements and the referenced talk by Zimbardo. Zimbardo's discussion of the Abu Ghraib case applies to the modification by psychologists of some human (prison guard) "evil" behavior. But by human nature almost all of us are born with an innate capability for partly good and partly evil thoughts and behavior. Here we need the interference or assistance of psychologists to perform behavioral modification for future prevention of such incidents.
    But the computer simulation of such behavior patterns is completely different. Even when robots are given a choice in functions like "if this, then do that", the "do that" part should be strictly limited to well-defined functions. And these functions can be restricted by prohibitions such as no killing, no harm, no lying, etc., built into the software. Your saying that computer engineers and programmers are not educated to build such principles into the software is an insult to our education system. First, the "brain" of the computer is blank, without any evil thought to start with, thus no behavior modification is needed. Morality is taught to the techies through social/literature studies in K-12 classes, which is sufficient for such programming purposes.
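    A literal toy version of that scheme, with every event and action name invented for illustration (each response comes from a fixed table, and a fixed list of prohibitions can never be selected):

    ```python
    # "If this, then do that", strictly limited to well-defined functions,
    # with built-in prohibitions checked before any action is returned.
    PROHIBITED = {"harm_human", "deceive", "destroy_property"}
    ALLOWED_RESPONSES = {
        "intruder_detected": "alert_supervisor",
        "door_blocked": "wait_and_report",
    }

    def respond(event: str) -> str:
        # Unrecognized events never improvise; they defer to a human.
        action = ALLOWED_RESPONSES.get(event, "request_human_guidance")
        if action in PROHIBITED:
            raise RuntimeError("prohibited action can never be executed")
        return action
    ```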
  • Jun 24 2013: I would like to compare the analysis process you employ with my own. I'd like to add that my idea of a human individual does NOT equate to one formed by a statistical analysis, with the individual defined by the center of the bell curve.
    • Jun 24 2013: I do not understand your attitude regarding systems analysis. Systems analysis is a very well established field of study and endeavor, and has been for decades. I entered "systems analysis methodology site:.edu" into Google Scholar and it provided 261,000 results. At least one school provides a master's degree in Systems Analysis. If you sincerely want to compare your process to others, there are commercially available methodologies, and their salesmen would gladly provide you with a plethora of information.