Abstract
In traditional social theory, trust is a function of cooperation across a system of multiple human or artificial agents. Logical positivism assures us that conflict ends with a consensus of facts. Overlooked are the downsides of cooperation (e.g., corruption) and the loss of computational efficiency as the number of agents in an interaction, N, the degree of cooperation, or the costs of communication grow. More importantly, the traditional model is impractical for a system of human or computational agents solving difficult problems. In contrast to logical positivist theories, quantizing the pro-con positions in decision-making produces a robust computational model of argumentation. Unlike the paucity of evidence for logical positivist theories, support exists for the prediction that the optimum solutions of ill-defined problems (IDPs) occur when the collision of incommensurable beliefs before neutral decision makers is managed to permit emotion but preclude violence.
| Original language | English (US) |
|---|---|
| Pages | 1013-1018 |
| Number of pages | 6 |
| State | Published - 2002 |
| Event | Proceedings of the Artificial Neural Networks in Engineering Conference: Smart Engineering System Design - St. Louis, MO, United States. Duration: Nov 10 2002 → Nov 13 2002 |
Conference
| Conference | Proceedings of the Artificial Neural Networks in Engineering Conference: Smart Engineering System Design |
|---|---|
| Country/Territory | United States |
| City | St. Louis, MO |
| Period | 11/10/02 → 11/13/02 |
ASJC Scopus subject areas
- Software