Amoeba-inspired Tug-of-War algorithms for exploration-exploitation dilemma in extended Bandit Problem
- PMID: 24384066
- DOI: 10.1016/j.biosystems.2013.12.007
Abstract
The true slime mold Physarum polycephalum, a single-celled amoeboid organism, is capable of efficiently allocating a constant amount of intracellular resource to the pseudopod-like branches that best fit an environment in which dynamic light stimuli are applied. Inspired by this resource allocation process, the authors formulated a concurrent search algorithm, called the Tug-of-War (TOW) model, for maximizing the profit in the multi-armed Bandit Problem (BP). A player (gambler) of the BP must decide as quickly and accurately as possible which of N slot machines to invest in, and faces an "exploration-exploitation dilemma": a trade-off between the speed and accuracy of decision making, which are conflicting objectives. The TOW model maintains a constant intracellular resource volume while collecting environmental information by concurrently expanding and shrinking its branches. This conservation law entails a nonlocal correlation among the branches, i.e., a volume increment in one branch is immediately compensated by volume decrement(s) in the other branch(es). Owing to this nonlocal correlation, the TOW model can efficiently manage the dilemma. In this study, we extend the TOW model to apply it to an extended variant of BP, the Extended Bandit Problem (EBP), which is the problem of selecting the best M-tuple out of N machines. We demonstrate that the extended TOW model exhibits better performance for 2-tuple-3-machine and 2-tuple-4-machine instances of EBP compared with the extended versions of well-known algorithms for BP, the ϵ-Greedy and SoftMax algorithms, particularly in terms of its short-term decision-making capability, which is essential for the survival of the amoeba in a hostile environment.
Keywords: Decision making; Multi-armed Bandit Problem; Natural computing; Physarum polycephalum; Resource allocation.
Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
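To make the mechanism in the abstract concrete, the following is a minimal illustrative sketch of the basic TOW idea for the ordinary N-armed bandit (not the authors' extended EBP algorithm). Each arm k keeps a learning estimate Q_k = wins − ω·losses; the "displacement" X_k subtracts the mean of the other arms' estimates, so the drift terms sum to zero across arms, mimicking the resource-conservation law and its nonlocal correlation. The weight `omega`, the oscillation amplitude `amp`, and the uniform fluctuation term are simplified assumptions for illustration, not the parameterization derived in the paper.

```python
import random

def tow_bandit(probs, steps=2000, omega=0.5, amp=1.0, seed=0):
    """Illustrative Tug-of-War (TOW) sketch for the N-armed bandit.

    probs  -- reward probability of each slot machine
    omega  -- learning weight (assumed constant here; the paper derives
              it from the top reward probabilities)
    amp    -- amplitude of the fluctuation added to each branch
    Returns the average reward per play.
    """
    rng = random.Random(seed)
    n = len(probs)
    wins = [0.0] * n
    losses = [0.0] * n
    total = 0.0
    for _ in range(steps):
        q = [wins[k] - omega * losses[k] for k in range(n)]
        s = sum(q)
        # Displacement of branch k: its own estimate minus the mean of
        # the others, plus a fluctuation.  The drift parts sum to zero
        # over k -- the "conserved volume" that couples the branches.
        x = [q[k] - (s - q[k]) / (n - 1) + amp * rng.uniform(-1.0, 1.0)
             for k in range(n)]
        k = max(range(n), key=lambda i: x[i])  # play the longest branch
        if rng.random() < probs[k]:
            wins[k] += 1.0
            total += 1.0
        else:
            losses[k] += 1.0
    return total / steps
```

Because a gain on one arm immediately lowers the relative displacement of every other arm, exploration of the alternatives is maintained even while the currently best arm is exploited, which is the trade-off the abstract refers to.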
Similar articles
- Tug-of-war model for the two-bandit problem: nonlocally-correlated parallel exploration via resource conservation. Biosystems. 2010 Jul;101(1):29-36. doi: 10.1016/j.biosystems.2010.04.002. PMID: 20399248
- Decision-making without a brain: how an amoeboid organism solves the two-armed bandit. J R Soc Interface. 2016 Jun;13(119):20160030. doi: 10.1098/rsif.2016.0030. PMID: 27278359. Free PMC article.
- A universal optimization strategy for ant colony optimization algorithms based on the Physarum-inspired mathematical model. Bioinspir Biomim. 2014 Sep;9(3):036006. doi: 10.1088/1748-3182/9/3/036006. PMID: 24613939
- Does being multi-headed make you better at solving problems? A survey of Physarum-based models and computations. Phys Life Rev. 2019 Jul;29:1-26. doi: 10.1016/j.plrev.2018.05.002. PMID: 29857934. Review.
- Brainless but Multi-Headed: Decision Making by the Acellular Slime Mould Physarum polycephalum. J Mol Biol. 2015 Nov 20;427(23):3734-43. doi: 10.1016/j.jmb.2015.07.007. PMID: 26189159. Review.
Cited by
- Emergence of behaviour in a self-organized living matter network. Elife. 2022 Jan 21;11:e62863. doi: 10.7554/eLife.62863. PMID: 35060901. Free PMC article.
