Latest Research News on Learnability : Feb 2022
The strength of weak learnability
This paper addresses the problem of improving the accuracy of a hypothesis output by a learning algorithm in the distribution-free (PAC) learning model. A concept class is learnable (or strongly learnable) if, given access to a source of examples of the unknown concept, the learner can, with high probability, output a hypothesis that is correct on all but an arbitrarily small fraction of the instances. The concept class is weakly learnable if the learner can produce a hypothesis that performs only slightly better than random guessing. In this paper, it is shown that these two notions of learnability are equivalent.
A method is described for converting a weak learning algorithm into one that achieves arbitrarily high accuracy. This construction may have practical applications as a tool for efficiently converting a mediocre learning algorithm into one that performs extremely well. In addition, the construction has some interesting theoretical consequences, including a set of general upper bounds on the complexity of any strong learning algorithm as a function of the allowed error ε.
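Schapire's construction is the ancestor of modern boosting. As an illustrative sketch (this is an AdaBoost-style reweighting scheme, not the paper's exact majority-vote construction), the idea can be shown with decision-stump weak learners on an invented one-dimensional dataset: no single stump classifies all four points correctly, yet a weighted vote of stumps does.

```python
import math

# Hypothetical 1-D dataset with labels in {-1, +1}; not separable by any
# single threshold (stump), so a weak learner alone cannot be exact.
X = [1.0, 2.0, 3.0, 4.0]
y = [-1, +1, +1, -1]

def weak_learner(X, y, w):
    """Return the best decision stump under weights w as (weighted error,
    threshold t, sign s), where the stump predicts s if x < t, else -s."""
    best = None
    for t in sorted(set(X)) + [max(X) + 1]:
        for s in (+1, -1):
            err = sum(wi for xi, yi, wi in zip(X, y, w)
                      if (s if xi < t else -s) != yi)
            if best is None or err < best[0]:
                best = (err, t, s)
    return best

def adaboost(X, y, rounds=10):
    """Boost the weak learner: reweight examples each round so that the
    next stump focuses on the points the ensemble still gets wrong."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        err, t, s = weak_learner(X, y, w)
        err = max(err, 1e-10)                      # guard against log(0)
        alpha = 0.5 * math.log((1 - err) / err)    # vote weight of this stump
        ensemble.append((alpha, t, s))
        # Increase the weight of misclassified examples, decrease the rest.
        w = [wi * math.exp(-alpha * yi * (s if xi < t else -s))
             for xi, yi, wi in zip(X, y, w)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    """Weighted vote of all stumps."""
    score = sum(a * (s if x < t else -s) for a, t, s in ensemble)
    return +1 if score >= 0 else -1

ensemble = adaboost(X, y, rounds=10)
print(all(predict(ensemble, xi) == yi for xi, yi in zip(X, y)))  # → True
```

The best single stump here has training error 1/4, i.e. it is only a weak learner; the boosted combination classifies every training point correctly, illustrating the weak-to-strong conversion the paper proves in general.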
Learnability
The study of natural language learnability is necessarily multidisciplinary. Its aim is to devise and evaluate possible psychological mechanisms by which a system bounded by the cognitive capabilities and linguistic exposure of a young child might arrive at rich knowledge of an adult human language. The abstract formal models that launched this discipline have over the years become increasingly responsive to theoretical linguistic discoveries about the properties of natural language grammars, many embracing parameter theory in particular as a systematization of the ways in which grammars may differ. The concept of grammar acquisition as the setting of parameters has inspired a number of recent learning models, whose details are compared and contrasted here. But it has not swept away all learnability problems, as it has become clear that the input cues needed to trigger the correct parameter settings are often ambiguous or opaque.
Learnability in Optimality Theory
In this article we show how Optimality Theory yields a highly general Constraint Demotion principle for grammar learning. The resulting learning procedure specifically exploits the grammatical structure of Optimality Theory, independent of the content of substantive constraints defining any given grammatical module. We decompose the learning problem and present formal results for a central subproblem, deducing the constraint ranking particular to a target language, given structural descriptions of positive examples. The structure imposed on the space of possible grammars by Optimality Theory allows efficient convergence to a correct grammar. We discuss implications for learning from overt data only, as well as other learning issues. We argue that Optimality Theory promotes confluence of the demands of more effective learnability and deeper linguistic explanation.
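A minimal sketch of the batch (Recursive) Constraint Demotion idea follows; the constraint names and violation profiles are invented for illustration. Each datum is a winner-loser pair of candidate violation profiles, and the algorithm builds a stratified ranking by repeatedly installing every constraint that prefers no loser, then discarding the pairs that stratum already accounts for.

```python
def rcd(constraints, pairs):
    """Recursive Constraint Demotion sketch (after Tesar & Smolensky).

    constraints: iterable of constraint names.
    pairs: list of (winner, loser) tuples, each a dict mapping a
           constraint name to its violation count (0 if absent).
    Returns a ranking as a list of strata (sets), highest-ranked first.
    """
    remaining_constraints = set(constraints)
    remaining_pairs = list(pairs)
    strata = []
    while remaining_constraints:
        # A constraint may enter the next stratum only if it prefers no
        # loser (never assesses more violations to the winner).
        stratum = {c for c in remaining_constraints
                   if all(w.get(c, 0) <= l.get(c, 0)
                          for w, l in remaining_pairs)}
        if not stratum:
            raise ValueError("data are inconsistent with any ranking")
        strata.append(stratum)
        remaining_constraints -= stratum
        # Drop pairs now accounted for: some constraint in the new
        # stratum strictly prefers the winner.
        remaining_pairs = [(w, l) for w, l in remaining_pairs
                           if not any(w.get(c, 0) < l.get(c, 0)
                                      for c in stratum)]
    return strata

# Two hypothetical winner-loser pairs over constraints C1, C2, C3:
# pair 1 says C1 must dominate C3; pair 2 says C3 must dominate C2.
pairs = [({"C3": 1}, {"C1": 1}),
         ({"C2": 1}, {"C3": 1})]
print(rcd(["C1", "C2", "C3"], pairs))  # → [{'C1'}, {'C3'}, {'C2'}]
```

Each iteration demotes the remaining loser-preferring constraints below the stratum just placed, which is why, as the article shows, the procedure converges efficiently whenever the positive data are consistent with some ranking.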
Parameters and Learnability in Binding Theory
Modern linguistic theory has provided evidence that universal grammar contains principles of a general, but specifically linguistic, form that apply in all natural languages. A goal of this paper is to extend this notion of principle theory to language acquisition. In such a theory, each choice that the child makes in his or her growing language is determined by a principle of language, by a principle of learning, or by the interaction of these two kinds of principles. The language principles and the learning principles are obviously related (they interact). Nevertheless, it seems a promising approach to see whether the two kinds of principles can be separated to some degree; that is, we attempt a modular approach to language acquisition theory. Some aspects of language and its acquisition seem better stated not in linguistic theory but outside it, in, say, a learning module.
A survey of software learnability: metrics, methodologies and guidelines
It is well-accepted that learnability is an important aspect of usability, yet there is little agreement as to how learnability should be defined, measured, and evaluated. In this paper, we present a survey of the previous definitions, metrics, and evaluation methodologies which have been used for software learnability. Our survey of evaluation methodologies leads us to a new question-suggestion protocol, which, in a user study, was shown to expose a significantly higher number of learnability issues in comparison to a more traditional think-aloud protocol. Based on the issues identified in our study, we present a classification system of learnability issues, and demonstrate how these categories can lead to guidelines for addressing the associated challenges.
Schapire, R.E., 1990. The strength of weak learnability. Machine Learning, 5(2), pp.197-227.
Fodor, J. and Sakas, W., 2017. Learnability. In The Oxford Handbook of Universal Grammar. Oxford University Press.
Tesar, B. and Smolensky, P., 1998. Learnability in Optimality Theory. Linguistic Inquiry, 29(2), pp.229-268.
Wexler, K. and Manzini, M.R., 1987. Parameters and learnability in binding theory. In Parameter Setting (pp. 41-76). Springer, Dordrecht.
Grossman, T., Fitzmaurice, G. and Attar, R., 2009. A survey of software learnability: metrics, methodologies and guidelines. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 649-658).