By Jeremy Nixon [jnixon2@gmail.com]. Nov. 2017. Updated June 2018.

Overview

  1. DeepMind Paper Framing
  2. DeepMind Papers Through the Framing
  3. Current Frontier
  4. Examples of Systems Neuroscience Inspiration

Categories of DeepMind's papers to date:

  1. Transfer Learning
  2. Multi-task Learning
  3. Tools, Environment & Datasets
  4. Intuitive Physics
  5. Reinforcement Learning
    • Model-based RL
    • Exploration in RL
  6. Applications
  7. Safety
  8. Deep Learning
    • RNNs
    • CNNs
  9. Generative Models
  10. Variational Inference
  11. Unsupervised Learning
  12. Representation Learning
  13. Attention
  14. Memory
  15. Multi-Agent Systems
  16. Imitation Learning
  17. Metalearning
    • Neural Programming
  18. Evolution
  19. Game Theory
  20. Natural Language Processing
  21. Multi-Modal Learning
  22. General Machine Learning
  23. Theory
  24. Miscellaneous
  25. Neuroscience

Papers:

  1. Transfer Learning
  2. Multi-Task Learning
  3. Tools, Environments, Evaluation & Datasets
  4. Intuitive Physics
  5. Reinforcement Learning (Papers with a pure RL focus)
  6. Applications
  7. Safety / Security
  8. Deep Learning
  9. Variational Inference
  10. Generative Models
  11. Unsupervised Learning
  12. Representation Learning
  13. Attention
  14. Memory
  15. Multi-Agent Systems
  16. Imitation Learning
  17. Metalearning
  18. Evolution
  19. Game Theory
  20. Natural Language Processing
  21. Multi-Modal
  22. General Machine Learning
  23. Theory
  24. Miscellaneous
  25. Neuroscience

Current Frontier:

  1. Hierarchical planning
  2. Imagination-based planning with generative models
  3. Unsupervised Learning
  4. Memory and one-shot learning
  5. Abstract Concepts
  6. Continual and Transfer Learning

Emphasis on systems neuroscience - using the brain as inspiration for the structure and function of algorithms.

Neuroscience-Inspired Artificial Intelligence

Examples of previous success of neuro-inspiration:

  • Reinforcement Learning
    • Inspired by animal learning
    • TD Learning came out of animal behavior research (a minimal TD(0) sketch appears after this list).
    • Second-order conditioning (Conditional Stimulus) (Sutton and Barto, 1981)
  • Deep Learning
    • Convolutional Neural Networks. Visual Cortex (V1)
      • Uses hierarchical structure (successive processing layers)
      • Neurons in the early visual system respond strongly to specific patterns of light (say, precisely oriented bars) but hardly respond to many other patterns.
      • Gabor functions describe the weights in V1 cells (a small Gabor-filter sketch appears after this list).
      • Nonlinear Transduction
      • Divisive Normalization
    • Word / Sentence Vectors - Distributed Embeddings
      • Parallel Distributed Processing in the brain for representation and computation
    • Dropout
      • Stochasticity in neurons that fire with Poisson-like statistics (Hinton 2012) - see the dropout sketch after this list
  • Attention
    • Applying attention to memory
    • Thought - it doesn’t make much sense to train an attention model over a static image, rather than over a time series. With a time series, bringing attention to changing aspects of the input makes sense.
  • Multiple Memory Systems
    • Episodic Memory
      • Experience Replay (a minimal replay-buffer sketch appears after this list)
      • Especially for one-shot experiences
    • Working Memory
      • LSTM - gating allows for conditioning on current state
    • Long-term Memory
      • External Memory
      • Gating in LSTM
  • Continual Learning
    • Elastic weight consolidation for slowing down learning on weights that are important for previous tasks.
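
For the elastic weight consolidation item above, a minimal numpy sketch of the consolidation penalty. The diagonal Fisher estimate (`fisher_diag`), the previous task's weights (`theta_old`), and the strength `lam` are assumed, illustrative inputs rather than values from the paper.

```python
import numpy as np

def ewc_penalty(theta, theta_old, fisher_diag, lam=1000.0):
    """Quadratic penalty that slows learning on weights important to the old task.

    theta       : current parameters (flat array)
    theta_old   : parameters after training on the previous task
    fisher_diag : diagonal Fisher information, one importance value per weight
    lam         : how strongly old-task weights are anchored
    """
    return 0.5 * lam * np.sum(fisher_diag * (theta - theta_old) ** 2)

def total_loss(task_loss, theta, theta_old, fisher_diag, lam=1000.0):
    # New-task loss plus the consolidation penalty on important old-task weights.
    return task_loss + ewc_penalty(theta, theta_old, fisher_diag, lam)
```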
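
For the TD Learning item in the reinforcement-learning bullet above, a minimal tabular TD(0) sketch. The state indices and step sizes are illustrative only and not tied to any particular agent.

```python
import numpy as np

def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.99):
    """One TD(0) step: move V[s] toward the bootstrapped target r + gamma * V[s_next].

    The TD error (target minus current estimate) plays the role of the
    prediction-error signal studied in conditioning experiments.
    """
    td_error = r + gamma * V[s_next] - V[s]
    V[s] += alpha * td_error
    return V, td_error

# Toy usage: a 5-state value table, updating state 3 after a reward of 1.
V = np.zeros(5)
V, delta = td0_update(V, s=3, r=1.0, s_next=4)
```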
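
For the Gabor-function item above, a small numpy sketch of a Gabor patch of the kind used to describe V1 simple-cell receptive fields (and often compared against learned first-layer CNN filters). The parameter defaults are arbitrary illustrative choices.

```python
import numpy as np

def gabor_filter(size=11, wavelength=4.0, theta=0.0, sigma=2.5, phase=0.0):
    """Return a size x size Gabor patch: a sinusoidal grating under a Gaussian envelope.

    theta sets the preferred orientation and wavelength the preferred spatial
    frequency - the two properties V1 simple cells are selective for.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_rot = x * np.cos(theta) + y * np.sin(theta)
    y_rot = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_rot ** 2 + y_rot ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * x_rot / wavelength + phase)
    return envelope * carrier

vertical_edge_detector = gabor_filter(theta=np.pi / 2)
```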
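
For the dropout item above, a minimal sketch of inverted Bernoulli dropout; the random silencing is a crude stand-in for Poisson-like firing variability. This is illustrative numpy, not any framework's API.

```python
import numpy as np

def dropout(activations, drop_prob=0.5, rng=np.random.default_rng(0)):
    """Inverted dropout: randomly silence units and rescale the survivors.

    The stochastic silencing loosely mirrors the noisy, Poisson-like
    firing of biological neurons.
    """
    keep_prob = 1.0 - drop_prob
    mask = rng.random(activations.shape) < keep_prob
    return activations * mask / keep_prob

h = np.ones((4, 8))   # a batch of hidden activations
h_noisy = dropout(h)  # roughly half the units are zeroed on each call
```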
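
For the experience-replay item above, a minimal replay-buffer sketch. The (state, action, reward, next_state, done) transition format is the usual convention, assumed here for illustration.

```python
import random
from collections import deque

class ReplayBuffer:
    """Stores past transitions so they can be replayed for learning,
    loosely analogous to hippocampal replay of episodic memories."""

    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)  # oldest memories are evicted first

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        # Uniformly resample past experience, breaking temporal correlations.
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))
```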

Examples of future success:

  • Intuitive Understanding of Physics
    • Need to understand space, number, objectness
    • Need to disentangle representations for transfer. (Dude, I feel so stolen from)
  • Efficient Learning (Learning from few examples)
  • Transfer Learning
    • Transferring generalized knowledge gained in one context to novel domains
    • Concept representations for transfer
      • No direct evidence of concept representations in brains
  • Imagination and Planning
    • Toward model-based RL
    • Internal model of the environment
      • Model needs to include compositional / disentangled representations for flexibility
    • Implementing a forecast-based method of action selection
    • Monte Carlo Tree Search as simulation-based planning (a toy rollout-planning sketch appears after this list)
    • In rat brains we observe ‘preplay’, where rats simulate the likely future experience - measured by comparing neural activations during preplay to activations during the actual activity
    • Generalization + Transfer in human planning
    • Hierarchical Planning
  • Virtual Brain Analytics
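
For the forecast-based action selection and simulation-based planning items above, a toy rollout planner: a simplification of MCTS-style planning that simulates futures with a model but keeps no tree statistics. The `model(state, action) -> (next_state, reward)` interface is a hypothetical stand-in for a learned internal model of the environment.

```python
import random

def plan_by_rollout(state, actions, model, depth=10, n_rollouts=50, gamma=0.99):
    """Forecast-based action selection: imagine futures with a model and pick
    the first action whose simulated discounted returns look best."""
    best_action, best_value = None, float("-inf")
    for first_action in actions:
        total = 0.0
        for _ in range(n_rollouts):
            s, ret, discount = state, 0.0, 1.0
            a = first_action
            for _ in range(depth):
                s, r = model(s, a)            # imagined transition
                ret += discount * r
                discount *= gamma
                a = random.choice(actions)    # random rollout policy
            total += ret
        if total / n_rollouts > best_value:
            best_action, best_value = first_action, total / n_rollouts
    return best_action
```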