EMNLP 2018 paper list
Proceedings of the Workshop: Analyzing and Interpreting Neural Networks for NLP, BlackboxNLP@EMNLP 2018, Brussels, Belgium, November 1, 2018.
Limitations in learning an interpreted language with recurrent models.
End-to-end Image Captioning Exploits Distributional Similarity in Multimodal Space.
Exploiting Attention to Reveal Shortcomings in Memory Models.
Does Syntactic Knowledge in Multilingual Language Models Transfer Across Languages?
Grammar Induction with Neural Language Models: An Unusual Replication.
Debugging Sequence-to-Sequence Models with Seq2Seq-Vis.
Interpretable Structure Induction via Sparse Attention.
Representation of Word Meaning in the Intermediate Projection Layer of a Neural Language Model.
Language Modeling Teaches You More than Translation Does: Lessons Learned Through Auxiliary Syntactic Task Analysis.
Explicitly modeling case improves neural dependency parsing.
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding.
Portable, layer-wise task performance monitoring for NLP models.
Extracting Syntactic Trees from Transformer Encoder Self-Attentions.
State Gradients for RNN Memory Analysis.
Interpretable Word Embedding Contextualization.
Collecting Diverse Natural Language Inference Problems for Sentence Representation Evaluation.
Probing sentence embeddings for structure-dependent tense.
Predicting and interpreting embeddings for out of vocabulary words in downstream tasks.
Language Models Learn POS First.
Interpretable Textual Neuron Representations for NLP.
Does it care what you asked? Understanding Importance of Verbs in Deep Learning QA System.
How much should you ask? On the question structure in QA systems.
Learning Explanations from Language Data.
Context-Free Transductions with Neural Stacks.
Evaluating Grammaticality in Seq2seq Models with a Broad Coverage HPSG Grammar: A Case Study on Machine Translation.
An Analysis of Encoder Representations in Transformer-Based Machine Translation.
Firearms and Tigers are Dangerous, Kitchen Knives and Zebras are Not: Testing whether Word Embeddings Can Tell.
Importance of Self-Attention for Sentiment Analysis.
Interpreting Word-Level Hidden State Behaviour of Character-Level LSTM Language Models.
Iterative Recursive Attention Model for Interpretable Sequence Classification.
Under the Hood: Using Diagnostic Classifiers to Investigate and Improve how Language Models Track Agreement Information.
Closing Brackets with Recurrent Neural Networks.
Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items.
What do RNN Language Models Learn about Filler-Gap Dependencies?
Learning and Evaluating Sparse Interpretable Sentence Embeddings.
Introspection for convolutional automatic speech recognition.
An Operation Sequence Model for Explainable Neural Machine Translation.
Analysing the potential of seq-to-seq models for incremental interpretation in task-oriented dialogue.
LISA: Explaining Recurrent Neural Network Judgments via Layer-wIse Semantic Accumulation and Example to Pattern Transformation.
'Indicatements' that character language models learn English morpho-syntactic units and regularities.
Interpreting Neural Networks with Nearest Neighbors.
Interpretable Neural Architectures for Attributing an Ad's Performance to its Writing Style.
Evaluating the Ability of LSTMs to Learn Context-Free Grammars.
Rearranging the Familiar: Testing Compositional Generalization in Recurrent Networks.
Can LSTM Learn to Capture Agreement? The Case of Basque.
Rule induction for global explanation of trained models.
Unsupervised Token-wise Alignment to Improve Interpretation of Encoder-Decoder Models.
Linguistic representations in multi-task neural networks for ellipsis resolution.
Understanding Convolutional Neural Networks for Text Classification.
Jump to better conclusions: SCAN both left and right.
On the Role of Text Preprocessing in Neural Network Architectures: An Evaluation Study on Text Categorization and Sentiment Analysis.
Evaluating Textual Representations through Image Generation.
Nightmare at test time: How punctuation prevents parsers from generalizing.
Explaining non-linear Classifier Decisions within Kernel-based Deep Architectures.
Analyzing Learned Representations of a Deep ASR Performance Prediction Model.
When does deep multi-task learning work for loosely related document classification tasks?