Abstract
The lack of first principles linking organizational theory to empirical evidence puts the future of autonomous multi-agent system (MAS) missions, and their interactions with humans, at risk. As N increases, this issue is characterized by significant trade-offs between the costs of interaction among agents and humans and the computational power that interaction provides. In contrast to the extremes of command or consensus decision-making for managing these trade-offs, quantizing the pro-con positions in decision-making may produce a robust model of interaction that better integrates social theory with experiment and increases computational power with N. We have found that optimum solutions of ill-defined problems (IDPs) occurred when incommensurable beliefs, interacting before neutral decision makers, generated sufficient emotion to process information, I, but not enough to impair the interaction, unexpectedly producing more trust than under the game-theoretic model of cooperation. We have extended our model to a mathematical theory of organizations, especially mergers, and we introduce random exploration into the model with the goal of revising rational theory to achieve autonomy with an MAS.
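To make the quantization idea concrete, the following is a minimal sketch in Python, not the authors' implementation: it assumes agent stances are quantized to {+1, 0, -1} (pro, neutral, con), a neutral decision maker tallies the signed positions, and random exploration occurs at a hypothetical rate epsilon. The ±0.33 cutoffs, the epsilon-greedy exploration style, and all names are illustrative assumptions.

```python
import random

# Hypothetical sketch of quantized pro-con decision-making with random
# exploration. Thresholds, epsilon, and function names are assumptions,
# not taken from the paper.

PRO, NEUTRAL, CON = 1, 0, -1

def quantize(belief: float) -> int:
    """Map a continuous belief in [-1, 1] to a quantized pro-con position."""
    if belief > 0.33:
        return PRO
    if belief < -0.33:
        return CON
    return NEUTRAL

def decide(beliefs: list[float], epsilon: float = 0.1) -> int:
    """Neutral decision maker: tally quantized positions, exploring at rate epsilon."""
    if random.random() < epsilon:
        # Random exploration: occasionally pick a position at random
        # instead of following the debate.
        return random.choice([PRO, NEUTRAL, CON])
    tally = sum(quantize(b) for b in beliefs)  # incommensurable beliefs interact
    return PRO if tally > 0 else CON if tally < 0 else NEUTRAL

# Usage: N agents holding opposed beliefs on an ill-defined problem.
N = 10
beliefs = [random.uniform(-1, 1) for _ in range(N)]
print(decide(beliefs))
```

The epsilon-greedy step here merely stands in for the abstract's "random exploration"; how exploration actually enters the authors' model is not specified in this record.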
Original language | English (US) |
---|---|
Pages | 116-121 |
Number of pages | 6 |
State | Published - 2004 |
Externally published | Yes |
Event | 2004 AAAI Spring Symposium - Stanford, CA, United States |
Duration | Mar 22 2004 → Mar 24 2004 |
Other
Other | 2004 AAAI Spring Symposium |
---|---|
Country/Territory | United States |
City | Stanford, CA |
Period | 3/22/04 → 3/24/04 |
ASJC Scopus subject areas
- Engineering (all)