
Reward is enough

In a recent paper by David Silver, Satinder Singh, Doina Precup, and Richard Sutton, the authors hypothesize that the objective of maximizing reward is enough to drive behaviour exhibiting most, if not all, of the attributes of intelligence studied in natural and artificial intelligence, including knowledge, learning, perception, social intelligence, language, and generalization. This contrasts with the view that each attribute of intelligence requires its own specialized problem formulation, based on separate signals or objectives. The reward-is-enough hypothesis suggests that agents equipped with powerful reinforcement-learning algorithms, when placed in rich environments with simple rewards, could develop the kind of broad, multi-attribute intelligence that would constitute artificial general intelligence.
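To make the core claim concrete, here is a minimal sketch (not from the paper) of an agent whose only learning signal is a scalar reward: a hypothetical tabular Q-learning agent in a toy corridor environment. Everything here, including the environment, the state/action layout, and the hyperparameters, is an illustrative assumption, chosen only to show reward-driven behaviour emerging from a single objective.

```python
import random

# Toy corridor: states 0..5, with state 5 the rewarding goal (hypothetical setup).
N_STATES = 6
ACTIONS = [-1, +1]          # move left or move right
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

# Q-table: estimated return for each (state, action index), initialized to zero.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Environment transition: the only signal returned to the agent is a scalar reward."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

for episode in range(500):
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection from the learned values.
        if random.random() < EPS:
            a_idx = random.randrange(2)
        else:
            a_idx = max(range(2), key=lambda i: Q[s][i])
        s_next, r, done = step(s, ACTIONS[a_idx])
        # Q-learning update: driven entirely by the reward signal.
        target = r + (0.0 if done else GAMMA * max(Q[s_next]))
        Q[s][a_idx] += ALPHA * (target - Q[s][a_idx])
        s = s_next

# Greedy policy after training: the agent has learned to move right toward the goal.
print([max(range(2), key=lambda i: Q[s][i]) for s in range(N_STATES)])
```

The point of the sketch is that no attribute of the resulting behaviour is specified directly; only the reward is. Silver and colleagues argue that, scaled up to sufficiently rich environments, the same single objective could account for far more sophisticated abilities.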
