Dynamic Principal–Agent Models

Research output: Working paper

Published

Standard

Dynamic Principal–Agent Models. / Renner, Philipp; Schmedders, Karl.
Lancaster: Lancaster University, Department of Economics, 2017. (Economics Working Paper Series).

Harvard

Renner, P & Schmedders, K 2017 'Dynamic Principal–Agent Models' Economics Working Paper Series, Lancaster University, Department of Economics, Lancaster.

APA

Renner, P., & Schmedders, K. (2017). Dynamic Principal–Agent Models. (Economics Working Paper Series). Lancaster University, Department of Economics.

Vancouver

Renner P, Schmedders K. Dynamic Principal–Agent Models. Lancaster: Lancaster University, Department of Economics. 2017 Nov. (Economics Working Paper Series).

Author

Renner, Philipp ; Schmedders, Karl. / Dynamic Principal–Agent Models. Lancaster : Lancaster University, Department of Economics, 2017. (Economics Working Paper Series).

Bibtex

@techreport{00270ff5a0d44e52b7f6f1b2891f1d9a,
title = "Dynamic Principal–Agent Models",
abstract = "This paper contributes to the theoretical and numerical analysis of discrete time dynamic principal-agent problems with continuous choice sets. We first provide a new and simplified proof for the recursive reformulation of the sequential dynamic principal-agent relationship. Next we prove the existence of a unique solution for the principal's value function, which solves the dynamic programming problem in the recursive formulation. By showing that the Bellman operator is a contraction mapping, we also obtain a convergence result for the value function iteration. To compute a solution for the problem, we have to solve a collection of static principal-agent problems at each iteration. Under the assumption that the agent's expected utility is a rational function of his action, we can transform the bi-level optimization problem into a standard nonlinear program. The final results of our solution method are numerical approximations of the policy and value functions for the dynamic principal-agent model. We illustrate our solution method by solving variations of two prominent social planning models from the economics literature.",
keywords = "Optimal unemployment tax, principal-agent model, repeated moral hazard",
author = "Philipp Renner and Karl Schmedders",
year = "2017",
month = nov,
language = "English",
series = "Economics Working Paper Series",
publisher = "Lancaster University, Department of Economics",
type = "WorkingPaper",
institution = "Lancaster University, Department of Economics",
}

RIS

TY - UNPB

T1 - Dynamic Principal–Agent Models

AU - Renner, Philipp

AU - Schmedders, Karl

PY - 2017/11

Y1 - 2017/11

N2 - This paper contributes to the theoretical and numerical analysis of discrete time dynamic principal-agent problems with continuous choice sets. We first provide a new and simplified proof for the recursive reformulation of the sequential dynamic principal-agent relationship. Next we prove the existence of a unique solution for the principal's value function, which solves the dynamic programming problem in the recursive formulation. By showing that the Bellman operator is a contraction mapping, we also obtain a convergence result for the value function iteration. To compute a solution for the problem, we have to solve a collection of static principal-agent problems at each iteration. Under the assumption that the agent's expected utility is a rational function of his action, we can transform the bi-level optimization problem into a standard nonlinear program. The final results of our solution method are numerical approximations of the policy and value functions for the dynamic principal-agent model. We illustrate our solution method by solving variations of two prominent social planning models from the economics literature.

AB - This paper contributes to the theoretical and numerical analysis of discrete time dynamic principal-agent problems with continuous choice sets. We first provide a new and simplified proof for the recursive reformulation of the sequential dynamic principal-agent relationship. Next we prove the existence of a unique solution for the principal's value function, which solves the dynamic programming problem in the recursive formulation. By showing that the Bellman operator is a contraction mapping, we also obtain a convergence result for the value function iteration. To compute a solution for the problem, we have to solve a collection of static principal-agent problems at each iteration. Under the assumption that the agent's expected utility is a rational function of his action, we can transform the bi-level optimization problem into a standard nonlinear program. The final results of our solution method are numerical approximations of the policy and value functions for the dynamic principal-agent model. We illustrate our solution method by solving variations of two prominent social planning models from the economics literature.

KW - Optimal unemployment tax

KW - principal-agent model

KW - repeated moral hazard

M3 - Working paper

T3 - Economics Working Paper Series

BT - Dynamic Principal–Agent Models

PB - Lancaster University, Department of Economics

CY - Lancaster

ER -
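
The abstract above rests on a standard dynamic-programming argument: because the Bellman operator is a contraction mapping, value function iteration converges to a unique fixed point. The sketch below illustrates that mechanism on a toy two-state dynamic program with made-up rewards and transitions; it is not the paper's principal-agent algorithm, only a minimal example of the contraction-based iteration the abstract refers to.

```python
import numpy as np

# Hypothetical toy dynamic program: 2 states, 2 actions, discount beta < 1.
# All numbers are illustrative; they are not from the paper.
beta = 0.9
rewards = np.array([[1.0, 0.0],     # rewards[s, a]: payoff in state s under action a
                    [0.0, 2.0]])
# transition[a, s, s']: probability of moving from s to s' under action a
transition = np.array([[[0.8, 0.2], [0.1, 0.9]],
                       [[0.5, 0.5], [0.3, 0.7]]])

def bellman(v):
    """Apply the Bellman operator: (Tv)(s) = max_a [ r(s,a) + beta * E[v(s')] ]."""
    q = rewards + beta * np.einsum("ast,t->sa", transition, v)
    return q.max(axis=1)

# Value function iteration: because T is a beta-contraction in the sup norm,
# the sequence v, Tv, T^2 v, ... converges to the unique fixed point v* = Tv*.
v = np.zeros(2)
for _ in range(1000):
    v_next = bellman(v)
    if np.max(np.abs(v_next - v)) < 1e-10:  # sup-norm stopping criterion
        v = v_next
        break
    v = v_next

print(v)  # numerical approximation of the fixed point
```

The convergence guarantee comes from the Banach fixed-point theorem: the sup-norm distance between successive iterates shrinks by at least the factor beta each step, so the stopping criterion is eventually met from any starting guess.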