A Neural Net Approach to Better Ethical Decision Making

You generalize conflicts into structures of parameters, e.g., two sisters fighting over who gets to take out the car. You map their relationship, where more detail gets you closer to reality (relationship-->sisters-->caring, memories-->loyalty/sharing, etc.). You map the conflict (conflict over property-->fun vs. necessity, etc.).
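The mapping above can be sketched as code. This is a minimal sketch assuming a nested dict as the parameter structure; all field names (`property_dispute`, `relationship.depth`, and so on) are hypothetical illustrations, not a fixed schema:

```python
def encode_conflict():
    """Return a nested parameter structure for the two-sisters example.
    Deeper nesting = more detail = closer to reality."""
    return {
        "type": "property_dispute",            # conflict over the car
        "stakes": {"a": "fun", "b": "necessity"},
        "relationship": {
            "kind": "sisters",
            "traits": ["caring", "shared_memories", "loyalty", "sharing"],
            "depth": 3,                        # hypothetical detail level
        },
    }

def flatten(struct, prefix=""):
    """Flatten the nested structure into named numeric features a net can consume."""
    features = {}
    for key, value in struct.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            features.update(flatten(value, name + "."))
        elif isinstance(value, list):
            for item in value:                 # one-hot each listed trait
                features[f"{name}.{item}"] = 1.0
        elif isinstance(value, (int, float)):
            features[name] = float(value)
        else:                                  # one-hot a categorical value
            features[f"{name}.{value}"] = 1.0
    return features
```

Flattening is what turns "more detail" into more input dimensions: adding a trait or a deeper sub-structure simply grows the feature vector.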
You train a neural net on the parameters of specific conflicts against what the legal department says, what different philosophers say, what Twitter says, etc. You adapt and refine your input parameters. This treats human decision making as a black box, but it may be used as a tool for better decision making, or perhaps as a looking glass into generalizations.
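One way to sketch this training setup, using a one-layer logistic model as a stand-in for a full neural net. The conflict vectors and the per-source verdicts are made-up illustrations of "the same conflicts, labelled by different arbiters":

```python
import math
import random

class TinyNet:
    """One-layer logistic model: a minimal stand-in for a neural net."""
    def __init__(self, n_features, seed=0):
        rng = random.Random(seed)
        self.w = [rng.uniform(-0.1, 0.1) for _ in range(n_features)]
        self.b = 0.0

    def predict(self, x):
        z = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))      # probability "side A should win"

    def train(self, xs, ys, lr=0.5, epochs=200):
        for _ in range(epochs):
            for x, y in zip(xs, ys):
                err = self.predict(x) - y      # gradient of the log-loss
                for i, xi in enumerate(x):
                    self.w[i] -= lr * err * xi
                self.b -= lr * err

# Hypothetical data: four conflict parameter vectors, and verdicts on the
# SAME conflicts from different labelling sources.
conflicts = [[1, 0, 1], [0, 1, 0], [1, 1, 1], [0, 0, 1]]
verdicts = {
    "legal":        [1, 0, 1, 0],
    "philosophers": [1, 1, 1, 0],
    "crowd":        [0, 0, 1, 1],
}

# One model per source: disagreement between the trained models is itself
# the "looking glass" the text mentions.
models = {}
for source, ys in verdicts.items():
    net = TinyNet(n_features=3)
    net.train(conflicts, ys)
    models[source] = net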
If looking at a crowd's inputs and outputs gives wild, untrainable results, consider giving each person their own neural nets representing how they feel about, say, menu items picked at a restaurant, books drawn from a shelf, etc. You then train a "meta" network to learn which of each person's personality traits give the best results for a conflict with such-and-such parameters.
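A minimal sketch of such a meta network, here reduced to a gating layer: it scores each personal trait-net's relevance to the conflict's parameters, then blends the personal nets' outputs by those scores. The relevance weights and the stand-in personal nets are hypothetical:

```python
import math

def softmax(scores):
    """Normalize relevance scores into blending weights."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

class MetaNetwork:
    """Holds a relevance vector per personal trait-net over the conflict
    parameters; a trained version would learn these vectors."""
    def __init__(self, relevance):
        self.relevance = relevance

    def combine(self, conflict_params, personal_nets):
        names = list(personal_nets)
        scores = [
            sum(w * p for w, p in zip(self.relevance[n], conflict_params))
            for n in names
        ]
        weights = softmax(scores)
        blended = sum(
            w * personal_nets[n](conflict_params)
            for w, n in zip(weights, names)
        )
        return blended, dict(zip(names, weights))

# Hypothetical personal nets: each maps conflict parameters to "how much
# this person cares", standing in for trained networks.
personal_nets = {
    "sister_a_books":   lambda p: 0.9 if p[0] else 0.1,  # reacts to book conflicts
    "sister_b_friends": lambda p: 0.8 if p[1] else 0.2,  # reacts to friend conflicts
}
meta = MetaNetwork({
    "sister_a_books":   [1.0, 0.0],   # books-net is relevant when param 0 is set
    "sister_b_friends": [0.0, 1.0],   # friends-net when param 1 is set
})
score, weights = meta.combine([1, 0], personal_nets)  # a book-related conflict
```

For a book-related conflict, the gate should lean on the book-preference net rather than the friendship net.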

In the case of the two sisters fighting over who gets to take out the car, the neural net sees that, for example, sister A wants to go see a movie rendition of a book she owns, while sister B is meeting friends. The meta network pulls up sister A's book-preference neural net so it can give its influence, as well as sister B's friend's neural net, depending on her transparency settings. The neural net could pull out the fact that sister B's friend is leaving from the inputs. And again, the neural net should respond not just with how the sisters would act, but also with how the legal department would, the philosophers, their grandparents, etc.
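The transparency settings mentioned above could gate which personal nets the meta network is even allowed to pull up. A minimal sketch, with all names hypothetical:

```python
def consult(requester, net_owner, transparency, nets):
    """Respect the owner's transparency settings before pulling up a net.
    Returns the net if the requester is allowed to see it, else None."""
    allowed = transparency.get(net_owner, set())
    if requester not in allowed:
        return None          # net withheld; the meta network must do without it
    return nets[net_owner]

# The friend shares her net only with sister B, not with a neutral mediator.
transparency = {"sister_b_friend": {"sister_b"}}
nets = {"sister_b_friend": lambda params: 0.7}   # stand-in for the friend's net

visible = consult("sister_b", "sister_b_friend", transparency, nets)
hidden  = consult("mediator", "sister_b_friend", transparency, nets)
```

A withheld net simply drops out of the blend, so tighter privacy settings trade away prediction accuracy rather than leaking data.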

If a machine predicts your reaction wrongly and you have the ability to guide it to do better next time, then, since we ourselves are complex machines, we could surely see what led to the mistake, especially if the inputs are details from our lives. Simply put, it must pass the owner test.
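The owner test can be sketched as a bounded correction loop: whenever the machine predicts wrongly, the owner supplies the right answer and the model takes a corrective step; the test passes once no misses remain. The single-weight model below is a hypothetical stand-in for a real net:

```python
def owner_test(model_predict, model_update, cases, max_rounds=100):
    """Return True once the model predicts every case correctly,
    letting the owner correct each miss along the way."""
    for _ in range(max_rounds):
        misses = [(x, y) for x, y in cases if model_predict(x) != y]
        if not misses:
            return True                    # passes the owner test
        for x, y in misses:
            model_update(x, y)             # owner guides it to do better
    return False                           # never converged on the owner's view

# Hypothetical single-weight model: predicts 1 when w * x > 0.
state = {"w": -1.0}                        # starts out wrong about the owner
predict = lambda x: 1 if state["w"] * x > 0 else 0
def update(x, y):
    state["w"] += (1.0 if y else -1.0) * x # nudge toward the owner's answer

passed = owner_test(predict, update, [(1.0, 1), (2.0, 1)])
```

The bounded round count matters: a model that can never be guided to agree with its owner fails the test rather than looping forever.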

  • Jun 14 2011: Fascinating statement ...

    How do we know what the programmers are really doing with a neural net in terms of training?

Where is this technology explained? If I want to experiment with neural nets, how do I do it? Where do I go to learn how they work? Where do I get one?

So, what weight do you give the neural net version of morality? Looking at IBM's Watson: it beats lots of people at Jeopardy, but when it is wrong it is really way off track. What meta-consciousness attaches a weighting or usefulness, or gives the output of a neural net a reality check?

Finally, can a neural net grow or learn? If so, how are the mutations or changes resulting from growth evaluated and/or controlled? What if it grows in a way that seems wrong or bad?