A Theory of Arbitrage Capital

2013 ◽  
Vol 2 (1) ◽  
pp. 62-97 ◽  
Author(s):  
Viral V. Acharya ◽  
Hyun Song Shin ◽  
Tanju Yorulmazer

We present a model of equilibrium allocation of capital for arbitrage. If asset prices may fall low enough, it is profitable to carry liquid capital to acquire assets in such states. Set against this, keeping capital in liquid form entails costs in terms of foregone profitable investments. This trade-off generates occasional fire sales and limited arbitrage capital as robust phenomena. With learning-by-doing effects, arbitrage capital moves in to acquire assets only if fire sales are steep. However, once arbitrage capital finds it profitable to acquire assets, it requires similar returns elsewhere, inducing contagious fire-sale prices even for unrelated assets. (JEL G21, G28, G38, E58, D62)

Author(s):  
Bruno Biais ◽  
Florian Heider ◽  
Marie Hoerova

In order to share risk, protection buyers trade derivatives with protection sellers. Protection sellers’ actions affect the riskiness of their assets, which can create counterparty risk. Because these actions are unobservable, moral hazard limits risk sharing. To mitigate this problem, privately optimal derivative contracts involve variation margins. When margins are called, protection sellers must liquidate some assets, depressing asset prices. This tightens the incentive constraints of other protection sellers and reduces their ability to provide insurance. Despite this fire-sale externality, equilibrium is information-constrained efficient. Investors, who benefit from buying assets at fire-sale prices, optimally supply insurance against the risk of fire sales.


2021 ◽  
Author(s):  
Henning Piezunka ◽  
Vikas A. Aggarwal ◽  
Hart E. Posen

Organizational decision making that leverages the collective wisdom and knowledge of multiple individuals is ubiquitous in management practice, occurring in settings such as top management teams, corporate boards, and the teams and groups that pervade modern organizations. Decision-making structures employed by organizations shape the effectiveness of knowledge aggregation. We argue that decision-making structures play a second crucial role in that they shape the learning of individuals who participate in organizational decision making. In organizational decision making, individuals do not engage in learning by doing but, rather, in what we call learning by participating, which is distinct in that individuals learn by receiving feedback not on their own choices but, rather, on the choice made by the organization. We examine how learning by participating influences the efficacy of aggregation and learning across alternative decision-making structures and group sizes. Our central insight is that learning by participating leads to an aggregation–learning trade-off in which structures that are effective in aggregating information can be ineffective in fostering individual learning. We discuss implications for research on organizations in the areas of learning, microfoundations, teams, and crowds.


2011 ◽  
Vol 3 (2) ◽  
pp. 1-37 ◽  
Author(s):  
Douglas Gale ◽  
Piero Gottardi

We study a competitive model in which market incompleteness implies that debt-financed firms may default in some states of nature, and default may lead to the sale of the firms' assets at fire sale prices when a finance constraint is binding. The anticipation of such “losses” alone may distort firms' investment decisions. We characterize the conditions under which fire sales occur in equilibrium, and their consequences on firms' investment decisions. We also show that endogenous financial crises may arise in this environment, with asset prices collapsing as a result of pure self-fulfilling beliefs. Finally, we examine alternative interventions to restore the efficiency of equilibria. (JEL D83, G31, G32, G33)


2011 ◽  
Vol 49 (2) ◽  
pp. 287-325 ◽  
Author(s):  
Jean Tirole

The recent crisis was characterized by massive illiquidity. This paper reviews what we know and don't know about illiquidity and all its friends: market freezes, fire sales, contagion, and ultimately insolvencies and bailouts. It first explains why liquidity cannot easily be apprehended through a single statistic, and asks whether liquidity should be regulated given that a capital adequacy requirement is already in place. The paper then analyzes market breakdowns due to either adverse selection or shortages of financial muscle, and explains why such breakdowns are endogenous to balance sheet choices and to information acquisition. It then looks at what economics can contribute to the debate on systemic risk and its containment. Finally, the paper takes a macroeconomic perspective, discusses shortages of aggregate liquidity, and analyzes how market value accounting and capital adequacy should react to asset prices. It concludes with a topical form of liquidity provision, monetary bailouts and recapitalizations, and analyzes optimal combinations thereof; it stresses the need for macro-prudential policies. (JEL E44, G01, G21, G28, G32, L51)


2017 ◽  
Vol 20 (01) ◽  
pp. 1750001 ◽  
Author(s):  
LAKSHITHE WAGALATH

We propose a tool for monitoring fire sales and fund liquidations in financial markets. This liquidation index detects fire-sale episodes in a contemporaneous manner and estimates their magnitude, using only publicly available data (asset prices and volumes). At every date t, it takes as input the movement of asset prices and the realized covariances between dates t − 1 and t, together with the market depth of each asset, and estimates a theoretical magnitude for the fire sales over the period [t − 1, t] that generated such a joint movement of prices and covariances. As such, the liquidation index spikes during fire-sale episodes and can hence be used from a systemic risk management perspective, as it makes it possible to detect fire-sale episodes, even complex liquidation events such as the hedge fund crash of August 2007, which went undetected by commonly used monitoring tools. It can also be useful from a trading and portfolio allocation perspective, as it allows one to distinguish periods of “fundamental” asset behavior from fire-sale periods, which are characterized by crowding and contagion effects and during which diversification benefits are reduced.


Econometrica ◽  
2021 ◽  
Vol 89 (4) ◽  
pp. 1665-1698 ◽  
Author(s):  
Piotr Dworczak ◽  
Scott Duke Kominers ◽  
Mohammad Akbarpour

Policymakers frequently use price regulations as a response to inequality in the markets they control. In this paper, we examine the optimal structure of such policies from the perspective of mechanism design. We study a buyer‐seller market in which agents have private information about both their valuations for an indivisible object and their marginal utilities for money. The planner seeks a mechanism that maximizes agents' total utilities, subject to incentive and market‐clearing constraints. We uncover the constrained Pareto frontier by identifying the optimal trade‐off between allocative efficiency and redistribution. We find that the competitive‐equilibrium allocation is not always optimal. Instead, when there is inequality across sides of the market, the optimal design uses a tax‐like mechanism, introducing a wedge between the buyer and seller prices, and redistributing the resulting surplus to the poorer side of the market via lump‐sum payments. When there is significant same‐side inequality that can be uncovered by market behavior, it may be optimal to impose price controls even though doing so induces rationing.


1982 ◽  
Vol 14 (2) ◽  
pp. 109-113 ◽  
Author(s):  
Suleyman Tufekci

2012 ◽  
Vol 11 (3) ◽  
pp. 118-126 ◽  
Author(s):  
Olive Emil Wetter ◽  
Jürgen Wegge ◽  
Klaus Jonas ◽  
Klaus-Helmut Schmidt

In most work contexts, several performance goals coexist, and conflicts and trade-offs between them can occur. Our paper is the first to contrast a dual goal for speed and accuracy with a single goal for speed on the same task. The Sternberg paradigm (Experiment 1, n = 57) and the d2 test (Experiment 2, n = 19) were used as performance tasks. In both experiments, speed measures and error rates revealed that dual goals as well as single goals increase performance by enhancing memory scanning. However, the single speed goal triggered a speed-accuracy trade-off, favoring speed over accuracy, whereas this was not the case with the dual goal. In difficult trials, dual goals slowed scanning processes down again so that errors could be prevented. This new finding is particularly relevant for security domains, where both aspects have to be managed simultaneously.


2001 ◽  
Vol 56 (11) ◽  
pp. 964-973 ◽  
Author(s):  
Alan M. Lesgold

PsycCRITIQUES ◽  
2008 ◽  
Vol 53 (10) ◽  
Author(s):  
Joseph A. Durlak ◽  
Christine I. Celio
