Machine learning for dynamic incentive problems

Research output: Working paper

Published

Standard

Machine learning for dynamic incentive problems. / Renner, Philipp; Scheidegger, Simon.
Lancaster: Lancaster University, Department of Economics, 2017. (Economics Working Papers Series).

Research output: Working paper

Harvard

Renner, P & Scheidegger, S 2017 'Machine learning for dynamic incentive problems' Economics Working Papers Series, Lancaster University, Department of Economics, Lancaster.

APA

Renner, P., & Scheidegger, S. (2017). Machine learning for dynamic incentive problems. (Economics Working Papers Series). Lancaster University, Department of Economics.

Vancouver

Renner P, Scheidegger S. Machine learning for dynamic incentive problems. Lancaster: Lancaster University, Department of Economics. 2017 Nov. (Economics Working Papers Series).

Author

Renner, Philipp ; Scheidegger, Simon. / Machine learning for dynamic incentive problems. Lancaster : Lancaster University, Department of Economics, 2017. (Economics Working Papers Series).

BibTeX

@techreport{fb4d8c75892a4221bc93529506f743ee,
title = "Machine learning for dynamic incentive problems",
abstract = "We propose a generic method for solving infinite-horizon, discrete-time dynamic incentive problems with hidden states. We first combine set-valued dynamic programming techniques with Bayesian Gaussian mixture models to determine irregularly shaped equilibrium value correspondences. Second, we generate training data from those pre-computed feasible sets to recursively solve the dynamic incentive problem by a massively parallelized Gaussian process machine learning algorithm. This combination enables us to analyzemodels of a complexity that was previously considered to be intractable. To demonstrate the broad applicability of our framework, we compute solutions for models of repeated agency with history dependence, many types, and varying preferences.",
keywords = "Dynamic Contracts, Principal-Agent Model, Dynamic Programming, Machine Learning, Gaussian Processes, High-performance Computing",
author = "Philipp Renner and Simon Scheidegger",
year = "2017",
month = nov,
language = "English",
series = "Economics Working Papers Series",
publisher = "Lancaster University, Department of Economics",
type = "WorkingPaper",
institution = "Lancaster University, Department of Economics",

}

RIS

TY - UNPB

T1 - Machine learning for dynamic incentive problems

AU - Renner, Philipp

AU - Scheidegger, Simon

PY - 2017/11

Y1 - 2017/11

N2 - We propose a generic method for solving infinite-horizon, discrete-time dynamic incentive problems with hidden states. We first combine set-valued dynamic programming techniques with Bayesian Gaussian mixture models to determine irregularly shaped equilibrium value correspondences. Second, we generate training data from those pre-computed feasible sets to recursively solve the dynamic incentive problem by a massively parallelized Gaussian process machine learning algorithm. This combination enables us to analyze models of a complexity that was previously considered to be intractable. To demonstrate the broad applicability of our framework, we compute solutions for models of repeated agency with history dependence, many types, and varying preferences.

AB - We propose a generic method for solving infinite-horizon, discrete-time dynamic incentive problems with hidden states. We first combine set-valued dynamic programming techniques with Bayesian Gaussian mixture models to determine irregularly shaped equilibrium value correspondences. Second, we generate training data from those pre-computed feasible sets to recursively solve the dynamic incentive problem by a massively parallelized Gaussian process machine learning algorithm. This combination enables us to analyze models of a complexity that was previously considered to be intractable. To demonstrate the broad applicability of our framework, we compute solutions for models of repeated agency with history dependence, many types, and varying preferences.

KW - Dynamic Contracts

KW - Principal-Agent Model

KW - Dynamic Programming

KW - Machine Learning

KW - Gaussian Processes

KW - High-performance Computing

M3 - Working paper

T3 - Economics Working Papers Series

BT - Machine learning for dynamic incentive problems

PB - Lancaster University, Department of Economics

CY - Lancaster

ER -
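
Illustrative code sketch

The second step described in the abstract, recursively solving the dynamic incentive problem with a Gaussian process machine learning algorithm, can be pictured with the minimal Python sketch below. This is not the authors' implementation: the one-dimensional state space, the toy reward and transition functions, and all parameter values are hypothetical stand-ins, and scikit-learn's GaussianProcessRegressor takes the place of the paper's massively parallelized GP solver operating on pre-computed feasible sets.

# Minimal sketch of Gaussian process value function iteration, in the
# spirit of the abstract. NOT the authors' code: reward(), transition(),
# the uniform state sampling, and every constant below are hypothetical
# illustration choices.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
beta = 0.95      # discount factor (hypothetical)
n_train = 64     # GP training points per iteration (hypothetical)

def reward(s, a):
    # Toy stand-in for the period payoff of the incentive problem.
    return -(s - a) ** 2

def transition(s, a):
    # Toy stand-in for the law of motion of the state variable.
    return np.clip(0.9 * s + 0.1 * a, 0.0, 1.0)

def bellman_update(gp, states, actions):
    # One step of value iteration: maximize current reward plus the
    # discounted GP-predicted continuation value over a grid of actions.
    values = np.empty_like(states)
    for i, s in enumerate(states):
        next_states = transition(s, actions).reshape(-1, 1)
        continuation = gp.predict(next_states)
        values[i] = np.max(reward(s, actions) + beta * continuation)
    return values

kernel = ConstantKernel(1.0) * RBF(length_scale=0.2)
actions = np.linspace(0.0, 1.0, 41)

# Initialize the GP surrogate of the value function at zero.
states = rng.uniform(0.0, 1.0, n_train)
gp = GaussianProcessRegressor(kernel=kernel)
gp.fit(states.reshape(-1, 1), np.zeros(n_train))

for it in range(50):
    # Sample states; here simply from [0, 1], whereas the paper samples
    # from irregular feasible sets found via Bayesian Gaussian mixtures.
    states = rng.uniform(0.0, 1.0, n_train)
    targets = bellman_update(gp, states, actions)
    residual = np.max(np.abs(targets - gp.predict(states.reshape(-1, 1))))
    # Refit a fresh GP surrogate to the Bellman-updated values.
    gp = GaussianProcessRegressor(kernel=kernel)
    gp.fit(states.reshape(-1, 1), targets)
    if residual < 1e-4:   # stop once the Bellman residual is small
        break

Each pass fits a fresh GP surrogate to Bellman-updated values at newly sampled states and stops once the Bellman residual is small, which mirrors the recursive structure described in the abstract while omitting the hidden-state and set-valued machinery of the actual paper.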