Cognitive Plausibility of Representational Transfer - Speaker: Lisa Beinborn
Large-scale representational pre-training is a key ingredient in the success of neural machine learning models. Most pre-training approaches rely on a single information channel, whereas humans integrate information from multiple channels in most application scenarios. Novel approaches for learning multimodal and multilingual representations aim to combine different sources of information to facilitate transfer. However, it remains an open question whether the performance gains obtained by representational transfer stem from an exchange of complementary information or from the reinforcement of redundant cues. In my talk, I present different methods for measuring and analyzing representational transfer from a cognitive perspective.
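The abstract does not specify how redundancy between representation spaces might be measured, but one common family of methods is representational similarity analysis, for instance linear Centered Kernel Alignment (CKA). The sketch below is purely illustrative: it uses synthetic data to show how a similarity index distinguishes a redundant re-encoding of the same information from an independent, complementary signal.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representation matrices of shape
    (n_samples, n_features). Values near 1 indicate largely
    redundant spaces; values near 0 indicate complementary ones."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10))                  # toy representations from one channel
Q, _ = np.linalg.qr(rng.standard_normal((10, 10)))  # orthogonal "re-encoding"
redundant = X @ Q                                   # same information, new basis
complementary = rng.standard_normal((500, 10))      # statistically independent signal

print(linear_cka(X, redundant))      # close to 1: redundant cues
print(linear_cka(X, complementary))  # close to 0: complementary information
```

Linear CKA is invariant to orthogonal transformations, so a rotated copy of the same representations scores near 1, while an independent random space scores near 0; applied to, say, monolingual versus multilingual embeddings, such an index gives one handle on the redundancy-versus-complementarity question raised above.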