Improving NLP Generalization with Emotion and Paraphrase Detection

NLP models often work by autoregressively predicting the next word in a text sequence. This is a highly specific task, and it does not appear to be optimized for generalization. To produce more generalization, I propose two changes. Training Method to Produce More Generalization: Start with a pre-trained model (e.g., GPT-3). Create a strong paraphrase […]
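The excerpt is cut off, but the general idea it gestures at (fine-tuning a pre-trained language model on a paraphrase-detection task) can be sketched. The snippet below is a minimal illustration only, not the post's actual method: it assumes the openly available bert-base-uncased model as a stand-in for GPT-3 (which cannot be fine-tuned locally) and the GLUE MRPC paraphrase dataset, neither of which is named in the excerpt.

```python
# Minimal sketch: fine-tune a pre-trained model to detect paraphrases.
# Assumptions (not from the post): bert-base-uncased as the pre-trained model,
# GLUE MRPC as the paraphrase dataset.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "bert-base-uncased"  # stand-in for the pre-trained model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# MRPC: sentence pairs labeled as paraphrases (1) or not (0).
dataset = load_dataset("glue", "mrpc")

def tokenize(batch):
    return tokenizer(batch["sentence1"], batch["sentence2"],
                     truncation=True, padding="max_length", max_length=128)

encoded = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="paraphrase-model",
                           per_device_train_batch_size=16,
                           num_train_epochs=3),
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
)
trainer.train()
```

The same setup would extend to emotion detection, the other task named in the title, by swapping in an emotion-labeled dataset and adjusting num_labels.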

Google GATO: A Path to AGI?

On May 12, 2022, Google’s DeepMind released Gato, which they described as a “generalist agent.” They used a single transformer architecture to train an agent that could perform 600 different tasks, including captioning images, playing video games, and moving real robotic arms. In my opinion, this does seem like it could be […]