Multi-Agent Applications with Evolutionary Computation and Biologically Inspired Technologies
TOTAL DOCUMENTS: 18 (five years: 0)
H-INDEX: 3 (five years: 0)

Published by IGI Global
ISBN: 9781605668987, 9781605668994

Author(s): Hiroshi Sato, Masao Kubo, Akira Namatame

In this chapter, we conduct a comparative study of traders following different trading strategies. We design an agent-based artificial stock market consisting of two opposing types of traders: “rational traders” (or “fundamentalists”) and “imitators” (or “chartists”). Rational traders trade by trying to optimize their short-term income, whereas imitators trade by copying the majority behavior of the rational traders. We obtain the wealth distribution for different fractions of rational traders and imitators. When rational traders are in the minority, they can come to dominate imitators in terms of accumulated wealth. Conversely, when rational traders are in the majority and imitators are in the minority, imitators can come to dominate rational traders in terms of accumulated wealth. We show that survival in a financial market is a kind of minority game played between behavioral types: rational traders and imitators. The coexistence of rational traders and imitators in different combinations may explain the market’s complex behavior as well as the success or failure of various trading strategies. We also show that successful rational traders cluster into two groups: traders in one group always buy and accumulate their wealth in stocks, while traders in the other always sell and accumulate their wealth in cash. Successful imitators, by contrast, buy and sell coherently, and their wealth accumulates only in cash.
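The two trader types can be sketched as a toy simulation. The trading rules, fundamental value, endowments, and price-impact coefficient below are illustrative assumptions, not the chapter's actual model:

```python
import random

def simulate_market(n_rational, n_imitator, steps=300, seed=42):
    """Toy artificial stock market (illustrative assumptions throughout):
    rational traders buy when the price is below their noisy estimate of a
    fixed fundamental value; imitators copy the majority action of the
    rational traders; the price moves with aggregate excess demand."""
    rng = random.Random(seed)
    fundamental, price = 100.0, 100.0
    n = n_rational + n_imitator
    cash, stock = [1000.0] * n, [10] * n
    for _ in range(steps):
        # each rational trader compares the price to a private noisy estimate
        rational = [1 if price < fundamental + rng.gauss(0, 5) else -1
                    for _ in range(n_rational)]
        majority = 1 if sum(rational) >= 0 else -1
        actions = rational + [majority] * n_imitator   # imitators copy the majority
        for i, act in enumerate(actions):
            if act == 1 and cash[i] >= price:          # buy one share
                cash[i] -= price
                stock[i] += 1
            elif act == -1 and stock[i] > 0:           # sell one share
                cash[i] += price
                stock[i] -= 1
        price = max(1.0, price * (1 + 0.0005 * sum(actions)))
    # wealth = cash plus stock marked to the final price
    return [c + s * price for c, s in zip(cash, stock)]
```

Comparing mean wealth of the first `n_rational` entries against the rest, for different fractions of the two types, reproduces the kind of wealth-distribution experiment the chapter describes.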


Author(s): Yasushi Kambayashi, Yasuhiro Tsujimura, Hidemi Yamachi, Munehiro Takimoto

This chapter presents a framework that uses novel methods for controlling multiple mobile robots directed by mobile agents over communication networks. Instead of physically moving the robots, mobile software agents migrate from one robot to another so that the robots complete their tasks more efficiently. In some applications, it is desirable for multiple robots to draw themselves together automatically. To avoid excessive energy consumption, we employ mobile software agents to locate robots scattered in a field and let the robots autonomously determine their movements using a clustering algorithm based on Ant Colony Optimization (ACO). ACO is a swarm-intelligence-based method that exploits artificial stigmergy to solve combinatorial optimization problems. Preliminary experiments have produced favorable results. Even though there is much room to improve the collaboration between multiple agents and ACO, the current results suggest a promising direction for the design of control mechanisms for multi-robot systems. In this chapter, we focus on the implementation of the control mechanism of the multi-robot system using mobile agents.
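The ACO-based clustering idea can be illustrated with a minimal pheromone-trail sketch. This is a generic ACO clustering scheme standing in for the chapter's algorithm (the ant count, evaporation rate `rho`, and cost function are assumptions): each "ant" assigns every robot position to a cluster with probability proportional to a pheromone value, and the best assignment found reinforces its trails.

```python
import random

def aco_cluster(points, k=2, ants=20, iters=30, rho=0.3, seed=1):
    """ACO-flavoured clustering of robot positions (illustrative sketch).
    tau[i][j] is the pheromone on 'assign point i to cluster j'."""
    rng = random.Random(seed)
    n = len(points)
    tau = [[1.0] * k for _ in range(n)]
    best, best_cost = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            # probabilistic assignment guided by pheromone
            assign = [rng.choices(range(k), weights=tau[i])[0] for i in range(n)]
            cost = 0.0
            for j in range(k):
                members = [points[i] for i in range(n) if assign[i] == j]
                if not members:
                    cost += 1e6                      # penalize empty clusters
                    continue
                cx = sum(p[0] for p in members) / len(members)
                cy = sum(p[1] for p in members) / len(members)
                cost += sum((p[0] - cx) ** 2 + (p[1] - cy) ** 2 for p in members)
            if cost < best_cost:
                best, best_cost = assign, cost
        # evaporate, then reinforce the best assignment found so far
        for i in range(n):
            for j in range(k):
                tau[i][j] *= (1 - rho)
            tau[i][best[i]] += 1.0
    return best, best_cost
```

Once clusters are found, each robot would move toward its cluster's centroid, which is where the energy saving comes from.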


Author(s): Mak Kaboudan

Successful decision-making by home-owners, lending institutions, and real estate developers, among others, depends on obtaining reasonable forecasts of residential home prices. For decades, home-price forecasts have been produced by agents using academically well-established statistical models. In this chapter, several modeling agents compete and cooperate to produce a single forecast. A cooperative multi-agent system (MAS) is developed and used to obtain monthly forecasts (April 2008 through March 2010) of the S&P/Case-Shiller home price index for Los Angeles, CA (LXXR). Monthly housing-market demand and supply variables, including the conventional 30-year fixed real mortgage rate, real personal income, cash-out loans, homes for sale, change in housing inventory, and a construction-material price index, are used to find different independent models that explain the percentage change in LXXR. An agent then combines the forecasts obtained from the different models into a final prediction.
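The combining step can be sketched with one common combination rule, weighting each model's forecast by the inverse of its historical error. This is an illustrative scheme; the chapter's actual combining agent may use a different rule:

```python
def combine_forecasts(forecasts, past_errors):
    """Combine model forecasts with weights inversely proportional to each
    model's historical mean absolute error (a standard combination rule,
    assumed here for illustration)."""
    weights = [1.0 / max(e, 1e-9) for e in past_errors]  # guard against zero error
    total = sum(weights)
    return sum(w * f for w, f in zip(forecasts, weights)) / total
```

For example, two models with equal past errors are simply averaged, while a model with a much larger past error contributes almost nothing to the final prediction.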


Author(s): Hidenori Kawamura, Keiji Suzuki

Pheromones are important chemical substances that enable social insects to realize cooperative collective behavior. The most famous example of pheromone-based behavior is foraging: real ants use pheromone trails to inform each other where a food source exists, allowing them to reach and forage the food efficiently. This sophisticated yet simple communication method is useful for designing artificial multiagent systems. In this chapter, evolutionary pheromone communication is proposed for a competitive ant environment model, and we show two patterns of pheromone communication that emerge through a co-evolutionary process driven by a genetic algorithm. In addition, these communication patterns are analyzed using Shannon entropy.
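The Shannon-entropy analysis mentioned at the end can be made concrete: treating an agent's sequence of emitted pheromone signals as draws from a discrete distribution, the entropy (in bits) measures how much information the communication pattern carries. The function below is a standard entropy computation, not the chapter's specific instrumentation:

```python
import math
from collections import Counter

def signal_entropy(signals):
    """Shannon entropy (bits) of a sequence of discrete pheromone signals."""
    counts = Counter(signals)
    n = len(signals)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A constant signal has entropy 0, while an agent alternating uniformly between two signals reaches the maximum of 1 bit.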


Author(s): Shu-Heng Chen, Shu G. Wang

Recently, the relation between neuroeconomics and agent-based computational economics (ACE) has become an issue of concern for the agent-based economics community. Neuroeconomics can interest agent-based economists when they inquire into the foundations or principles of software-agent design, normally known as agent engineering. Many studies have shown that the design of software agents is non-trivial and can determine what emerges from the bottom up. The question of whether we can sensibly design these software agents, covering both the choice of agent model, such as reinforcement learning, and the parameter settings associated with the chosen model, such as risk attitude, has therefore been pursued for some time. In this chapter, we begin a formal inquiry by examining the models and parameters used to build software agents.
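To make the two design choices named above tangible, here is a minimal sketch of a reinforcement-learning agent (the model choice) with a risk-attitude parameter (the parameter choice) that curves the utility of payoffs, CRRA-style. The bandit payoffs, learning rate, and utility form are all illustrative assumptions:

```python
import random

def run_bandit(risk_aversion, episodes=2000, alpha=0.1, seed=0):
    """Epsilon-greedy learner choosing between a safe payoff (1.0) and a
    risky gamble (2.2 or 0.1, each with probability 0.5). The risk-attitude
    parameter enters through a concave utility over payoffs."""
    rng = random.Random(seed)
    q = [0.0, 0.0]                        # value estimates: 0 = safe, 1 = risky

    def utility(x):
        return x ** (1 - risk_aversion)   # concave when 0 < risk_aversion < 1

    for _ in range(episodes):
        a = rng.randrange(2) if rng.random() < 0.1 else q.index(max(q))
        payoff = 1.0 if a == 0 else (2.2 if rng.random() < 0.5 else 0.1)
        q[a] += alpha * (utility(payoff) - q[a])   # incremental update
    return q
```

Varying `risk_aversion` shifts which arm the learned values favor, which is exactly the sense in which parameter settings "determine what emerges from the bottom up".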


Author(s): Shu-Heng Chen, Ren-Jie Zeng, Tina Yu, Shu G. Wang

We investigate the dynamics of trader behavior using an agent-based genetic programming system to simulate double-auction markets. The objective of this study is two-fold. First, we evaluate whether, and how, differences in trader rationality/intelligence influence trading behavior. Second, beyond rationality, we analyze whether, and how, the co-evolution of two learnable traders impacts their trading behaviors. We have found that traders with different degrees of rationality may exhibit different behavior depending on the type of market they are in. When the market has a profit zone to explore, the more intelligent trader demonstrates more intelligent behaviors. Also, when the market has two learnable buyers, their co-evolution produces more profitable transactions than when there is only one learnable buyer in the market. We have analyzed the trading strategies and found that the learning behaviors closely resemble human decision-making. We plan to conduct human-subject experiments to validate these results in the near future.
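The double-auction mechanism the traders operate in can be sketched as a simple call market: sort bids descending and asks ascending, then match while the best remaining bid still meets the best remaining ask. The midpoint pricing rule here is one common convention, not necessarily the chapter's:

```python
def clear_double_auction(bids, asks):
    """Match sorted bids and asks; each feasible pair trades at the
    midpoint of the bid and ask prices."""
    bids = sorted(bids, reverse=True)   # highest bid first
    asks = sorted(asks)                 # lowest ask first
    trades = []
    for b, a in zip(bids, asks):
        if b >= a:
            trades.append((b + a) / 2)  # midpoint transaction price
        else:
            break                       # no further pair can trade
    return trades
```

A GP trader's evolved strategy would generate the bid or ask it submits each round; its profit is the gap between its private value/cost and the transaction price.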


Author(s): Yukiko Orito, Yasushi Kambayashi, Yasuhiro Tsujimura, Hisashi Yamamoto

Portfolio optimization is the determination of the weights of the assets to be included in a portfolio so as to achieve the investment objective. It can be viewed as a tight combinatorial optimization problem that has many solutions near the optimal solution in a narrow solution space. To solve such a tight problem, we introduce an agent-based model in this chapter. We employ the Information Ratio, a well-known measure of the performance of actively managed portfolios, as the objective function. Each agent has, as its set of properties, one portfolio, its Information Ratio, and its character. The evolution of agent properties splits the search space into many small subspaces. In the population of each small subspace, there is one leader agent and several follower agents. As the processing of the populations progresses, the agent properties change through interaction between the leader and the followers, and when the iterations are over, we obtain one leader with the highest Information Ratio.
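The objective function itself is standard: the Information Ratio is the mean active return (portfolio minus benchmark) divided by the tracking error (standard deviation of active returns). A straightforward computation, assuming per-period return series:

```python
import math

def information_ratio(portfolio_returns, benchmark_returns):
    """Information Ratio = mean active return / tracking error,
    with the tracking error computed as the sample (n-1) standard
    deviation of the active returns."""
    active = [p - b for p, b in zip(portfolio_returns, benchmark_returns)]
    n = len(active)
    mean = sum(active) / n
    var = sum((a - mean) ** 2 for a in active) / (n - 1)
    return mean / math.sqrt(var)
```

Each agent's fitness in the chapter's model is this quantity evaluated on the returns of the portfolio the agent carries.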


Author(s): Germano Resconi, Boris Kovalerchuk

This chapter models quantum and neural uncertainty using the concept of Agent-based Uncertainty Theory (AUT). The AUT is based on the complex fusion of crisp (non-fuzzy) conflicting judgments of agents. It provides a uniform representation and an operational, empirical interpretation for several uncertainty theories, such as rough set theory, fuzzy set theory, evidence theory, and probability theory. The AUT models conflicting evaluations that are fused in the same evaluation context. This agent approach also gives a novel definition of quantum uncertainty and of quantum computations for quantum gates that are realized by unitary transformations of the state. In the AUT approach, unitary matrices are interpreted as logic operations in logic computations. We show that, by using permutation operators, any type of complex classical logic expression can be generated. With the quantum gate, we introduce classical logic into the quantum domain. This chapter connects the intrinsic irrationality of the quantum system, and its non-classical quantum logic, with agents. We argue that the AUT can help to find meaning in the quantum superposition of non-consistent states. Next, this chapter shows that neural fusion at the synapse can be modeled by the AUT in the same fashion. The neuron is modeled as an operator that transforms classical logic expressions into many-valued logic expressions. The motivation for such a neural network is to provide the high flexibility and logic adaptation of the brain model.
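The claim that permutation (unitary) matrices act as classical logic operations can be illustrated directly: encode a classical bit string as a one-hot basis vector, and a permutation matrix maps basis states to basis states. The NOT gate is the 2x2 Pauli-X permutation, and CNOT is the 4x4 permutation that swaps |10> and |11>, i.e. target := target XOR control. (A generic textbook illustration, not the chapter's derivation.)

```python
def apply_gate(matrix, state):
    """Apply a gate (square matrix) to a state vector by matrix-vector product."""
    return [sum(matrix[i][j] * state[j] for j in range(len(state)))
            for i in range(len(matrix))]

# NOT as a permutation matrix (the Pauli-X unitary):
X = [[0, 1],
     [1, 0]]

# CNOT on two bits, basis order |00>, |01>, |10>, |11>:
# a permutation swapping the last two basis states.
CNOT = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 0]]
```

On one-hot inputs these matrices compute exactly the classical truth tables, which is the sense in which classical logic embeds into the quantum (unitary) domain.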


Author(s): Shu-Heng Chen, Chung-Ching Tai, Tzai-Der Wang, Shu G. Wang

In this chapter, we present agent-based simulations as well as human experiments in double-auction markets. Our idea is to investigate the learning capabilities of human traders by studying learning agents constructed by Genetic Programming (GP); the latter can further serve as a design platform for conducting human experiments. By manipulating the population size of GP traders, we attempt to characterize the innate heterogeneity in human beings’ intellectual abilities. We find that GP traders are efficient in the sense that they can beat other trading strategies even with very limited learning capacity. A series of human experiments and multi-agent simulations are conducted and compared at the end of this chapter.
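The role of population size as a proxy for learning capacity can be sketched with a much-simplified evolutionary trader. Instead of evolving full GP trees, this toy stand-in evolves a single bid "markup" (the fraction of private value the buyer bids); the private value, fixed ask, selection scheme, and mutation scale are all assumptions for illustration:

```python
import random

def evolve_markup(pop_size, gens=30, seed=0):
    """Evolve a buyer's bid markup by truncation selection plus Gaussian
    mutation. pop_size plays the role of the GP trader's search capacity."""
    rng = random.Random(seed)
    value, ask = 10.0, 6.0                  # buyer's private value, fixed best ask

    def profit(m):
        bid = m * value
        return value - bid if bid >= ask else 0.0   # trade only if bid meets ask

    pop = [rng.random() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=profit, reverse=True)
        elite = pop[: max(1, pop_size // 2)]        # keep the fitter half
        pop = elite + [min(1.0, max(0.0, rng.choice(elite) + rng.gauss(0, 0.05)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=profit)
```

The evolved markup approaches 0.6 (bidding just enough to trade), and larger populations tend to find it faster, mirroring how a bigger GP population gives a trader more learning capacity per generation.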

