Human-like systematic generalization through a meta-learning neural network
During SCAN testing (an example episode is shown in Extended Data Fig. 7), MLC is evaluated on each query in the test corpus. For each query, 10 study examples are again sampled uniformly from the training corpus (using the test corpus for study examples would inadvertently leak test information). Neither the study nor the query examples are remapped; in other words, the model is asked to infer the original meanings. Finally, for the ‘add jump’ split, one study example is fixed to be ‘jump → JUMP’, ensuring that MLC has access to the basic meaning of ‘jump’ before attempting its compositional uses.
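The episode-assembly step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation; the corpus format (a list of command–output pairs) and the function and split names are assumptions.

```python
import random

def build_test_episode(query, train_corpus, split=None, n_study=10, rng=random):
    """Assemble one evaluation episode for a test query (sketch).

    `train_corpus` is a list of (command, output) pairs; the study
    examples are drawn from it, never from the test corpus, so that no
    test information leaks into the episode context.
    """
    study = rng.sample(train_corpus, n_study)
    if split == "add_jump":
        # Guarantee the model sees the primitive meaning of 'jump'.
        study[0] = ("jump", "JUMP")
    return {"study": study, "query": query}

# Toy SCAN-like corpus for illustration only.
corpus = [("walk", "WALK"), ("run", "RUN"), ("look", "LOOK"),
          ("walk twice", "WALK WALK"), ("run left", "LTURN RUN"),
          ("look right", "RTURN LOOK"), ("run twice", "RUN RUN"),
          ("walk left", "LTURN WALK"), ("look twice", "LOOK LOOK"),
          ("run right", "RTURN RUN")]
episode = build_test_episode("jump twice", corpus, split="add_jump",
                             rng=random.Random(0))
```

The model would then condition on `episode["study"]` when producing an output for `episode["query"]`.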
Thus, sampling a response for the open-ended task proceeded as follows. First, a response y1 is sampled for the initial query x1, with no previously answered queries available as study examples. Second, when sampling y2 in response to query x2, the previously sampled (x1, y1) is now a study example, and so on. The query ordering was chosen arbitrarily (it was also randomized for the human participants).
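The sequential sampling procedure can be sketched as below. The function names are hypothetical, and `sample_fn` is a stand-in for the trained model's sampling step; the point is only the bookkeeping whereby each answered query joins the study set.

```python
import random

def sample_open_ended(queries, sample_fn, query_order=None, rng=random):
    """Sample responses to open-ended queries one at a time (sketch).

    `sample_fn(study, query)` stands in for the trained model: it returns
    a sampled output given the current study examples. Each answered
    query is appended to the study set before the next one, so later
    responses can stay consistent with earlier ones.
    """
    order = (query_order if query_order is not None
             else rng.sample(range(len(queries)), len(queries)))
    study, responses = [], {}
    for i in order:
        x = queries[i]
        y = sample_fn(study, x)   # condition on previously answered queries
        study.append((x, y))      # (x, y) becomes a study example
        responses[x] = y
    return responses

# Toy stand-in model that just uppercases its query.
responses = sample_open_ended(["dax twice", "wif"],
                              lambda study, q: q.upper(),
                              query_order=[0, 1])
```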
In this Article, we provide evidence that neural networks can achieve human-like systematic generalization through MLC—an optimization procedure that we introduce for encouraging systematicity through a series of few-shot compositional tasks (Fig. 1). Our implementation of MLC uses only common neural networks without added symbolic machinery, and without hand-designed internal representations or inductive biases. Instead, MLC provides a means of specifying the desired behaviour through high-level guidance and/or direct human examples; a neural network is then asked to develop the right learning skills through meta-learning21.
Machine learning benchmarks
Beyond predicting human behaviour, MLC can achieve error rates of less than 1% on machine learning benchmarks for systematic generalization. Note that here the examples used for optimization were generated by the benchmark designers through algebraic rules, and there is therefore no direct imitation of human behavioural data. We experiment with two popular benchmarks, SCAN11 and COGS16, focusing on their systematic lexical generalization tasks that probe the handling of new words and word combinations (as opposed to new sentence structures). MLC still used only standard transformer components but, to handle longer sequences, added modularity in how the study examples were processed, as described in the ‘Machine learning benchmarks’ section of the Methods.
COGS is a multi-faceted benchmark that evaluates many forms of systematic generalization. To master the lexical generalization splits, the meta-training procedure targets several lexical classes that participate in particularly challenging compositional generalizations. As in SCAN, the main tool used for meta-learning is a surface-level token permutation that induces changing word meanings across episodes. These permutations are applied within several lexical classes; for example, the 406 input word types categorized as common nouns (‘baby’, ‘backpack’ and so on) are remapped to the same set of 406 types.
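A within-class permutation of this kind can be sketched as below. This is an illustrative reconstruction, not the paper's code; the class dictionary and function names are assumptions.

```python
import random

def permute_within_classes(lexical_classes, rng=random):
    """Build a token remapping that permutes word types within each
    lexical class (sketch of the meta-learning augmentation).

    `lexical_classes` maps a class name (e.g. 'common_noun') to its word
    types; each class is permuted onto the same set of types, so word
    meanings change across episodes while class membership is preserved.
    """
    mapping = {}
    for words in lexical_classes.values():
        shuffled = rng.sample(words, len(words))
        mapping.update(dict(zip(words, shuffled)))
    return mapping

def remap_sentence(sentence, mapping):
    # Tokens outside the permuted classes are left unchanged.
    return " ".join(mapping.get(tok, tok) for tok in sentence.split())

# Tiny example with three common nouns instead of 406.
classes = {"common_noun": ["baby", "backpack", "cookie"]}
mapping = permute_within_classes(classes, rng=random.Random(1))
remapped = remap_sentence("the baby saw a cookie", mapping)
```

Because each class is remapped onto its own set of types, the sentence stays grammatical under the permutation; only the surface meanings of the nouns change.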
Implementation of MLC