ICLR 2016 Paper List
4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings.
Super-Resolution with Deep Convolutional Sufficient Statistics.
Sequence Level Training with Recurrent Neural Networks.
Geodesics of learned representations.
Adversarial Manipulation of Deep Representations.
ACDC: A Structured Efficient Linear Layer.
Neural GPUs Learn Algorithms.
Reasoning in Vector Space: An Exploratory Study of Question Answering.
Data-Dependent Path Normalization in Neural Networks.
An Exploration of Softmax Alternatives Belonging to the Spherical Loss Family.
Digging Deep into the Layers of CNNs: In Search of How CNNs Achieve View Invariance.
Learning Representations from EEG with Deep Recurrent-Convolutional Neural Networks.
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks.
Large-Scale Approximate Kernel Canonical Correlation Analysis.
Deep Linear Discriminant Analysis.
Segmental Recurrent Neural Networks.
Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning.
Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs).
Predicting distributions with Linearizing Belief Networks.
Grid Long Short-Term Memory.
Deep multi-scale video prediction beyond mean square error.
High-Dimensional Continuous Control Using Generalized Advantage Estimation.
Order Matters: Sequence to sequence for sets.
Data-dependent Initializations of Convolutional Neural Networks.
8-Bit Approximations for Parallelism in Deep Learning.
Delving Deeper into Convolutional Networks for Learning Video Representations.
Variable Rate Image Compression with Recurrent Neural Networks.
Censoring Representations with an Adversary.
Metric Learning with Adaptive Density Discrimination.
Gated Graph Sequence Neural Networks.
Neural Random-Access Machines.
Policy Distillation.
Auxiliary Image Regularization for Deep CNNs with Noisy Labels.
Modeling Visual Representations: Defining Properties and Deep Approximations.
Recurrent Gaussian Processes.
Continuous control with deep reinforcement learning.
Session-based Recommendations with Recurrent Neural Networks.
Compression of Deep Convolutional Neural Networks for Fast and Low Power Mobile Applications.
A Test of Relative Similarity For Model Selection in Generative Models.
Multi-task Sequence to Sequence Learning.
Distributional Smoothing by Virtual Adversarial Examples.
Better Computer Go Player with Neural Network and Long-term Prediction.
Evaluating Prerequisite Qualities for Learning End-to-End Dialog Systems.
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks.
Learning Visual Predictive Models of Physics for Playing Billiards.
Deep Reinforcement Learning in Parameterized Action Space.
Diversity Networks.
Data Representation and Compression Using Linear-Programming Approximations.
MuProp: Unbiased Backpropagation for Stochastic Neural Networks.
Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks.
SparkNet: Training Deep Networks in Spark.
Neural Programmer: Inducing Latent Programs with Gradient Descent.
When crowds hold privileges: Bayesian unsupervised representation learning with oracle constraints.
All you need is a good init.
Particular object retrieval with integral max-pooling of CNN activations.
Unifying distillation and privileged information.
Convolutional neural networks with low-rank regularization.
Reasoning about Entailment with Neural Attention.
Surpassing Humans in Boundary Detection using Deep Learning.
Reducing Overfitting in Deep Networks by Decorrelating Representations.
Training CNNs with Low-Rank Filters for Efficient Image Classification.
Variational Auto-encoded Deep Gaussian Processes.
Importance Weighted Autoencoders.
Prioritized Experience Replay.
Learning to Diagnose with LSTM Recurrent Neural Networks.
Multi-Scale Context Aggregation by Dilated Convolutions.
Density Modeling of Images using a Generalized Normalization Transformation.
Generating Images from Captions with Attention.
Order-Embeddings of Images and Language.
Neural Networks with Few Multiplications.
Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding.
A note on the evaluation of generative models.
The Variational Fair Autoencoder.
The Variational Gaussian Process.
Net2Net: Accelerating Learning via Knowledge Transfer.
Convergent Learning: Do different neural networks learn the same representations?
Towards Universal Paraphrastic Sentence Embeddings.
The Goldilocks Principle: Reading Children's Books with Explicit Memory Representations.
BlackOut: Speeding up Recurrent Neural Network Language Models With Very Large Vocabularies.
Regularizing RNNs by Stabilizing Activations.
Neural Programmer-Interpreters.