Abstract: With imperfect and incomplete information, it is quite common to misspecify a model. This problem arises not only in the social and behavioral sciences, where the underlying models are often a mystery, but also in the natural sciences. Traditionally, misspecification deals with the basic issues of model selection (such as the choice of functional form, moment specification, etc.), variable selection, and frequently the choice of likelihood or of the statistical inferential method itself. Within the info-metrics framework – the science of modeling, reasoning, and drawing inferences under conditions of noisy and insufficient information – misspecification may appear in three ways. The first concerns the specification of the constraints (the functional form used, based on the input information). The second concerns the choice of the criterion, or decision, function. Whether specified correctly or not, together they determine the solution. The third concerns misspecification of the priors. In this talk, I am concerned with the first two fundamental misspecifications: the constraints and the criterion. (Note, however, that the empirical problem of variable selection for a specific model is similar across all inferential methods, so I do not discuss it here.)
In my talk, I will discuss these misspecification issues and contrast classical methods with info-metrics. I will demonstrate some of the main issues via a simple example in which I investigate power-law distributions using Shannon entropy and the Empirical Likelihood. I show that although both yield the same prediction, one of them is misspecified. But which one?
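To give a flavor of the entropy side of this example: it is a standard result that maximizing Shannon entropy subject to a fixed logarithmic moment, E[ln x], yields a power-law distribution p(x) ∝ x^(−λ), where λ is the Lagrange multiplier on the constraint. The sketch below (not taken from the talk; the support and the target moment value are arbitrary choices for illustration) solves for λ numerically on a discrete support and confirms the constraint holds.

```python
import numpy as np
from scipy.optimize import brentq

# Discrete support for the illustration (an arbitrary choice).
x = np.arange(1, 101, dtype=float)

def mean_log(lam):
    # E[ln x] under the maximum-entropy solution p(x) ∝ x^(-lam),
    # i.e. the power law implied by a logarithmic moment constraint.
    w = x ** (-lam)
    p = w / w.sum()
    return p @ np.log(x)

# Assumed value of the moment constraint E[ln x] (hypothetical).
target = 1.0

# mean_log is monotone in lam, so a bracketing root-finder recovers
# the Lagrange multiplier that satisfies the constraint exactly.
lam = brentq(lambda l: mean_log(l) - target, 0.01, 10.0)

# The resulting maximum-entropy distribution is a power law.
p = x ** (-lam) / (x ** (-lam)).sum()
print("lambda:", lam)
print("E[ln x]:", p @ np.log(x))
```

The point of the talk's example is that an Empirical Likelihood criterion, with suitably specified moment constraints, can produce the same predicted distribution, which is precisely what makes the misspecification question subtle.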
My talk will be based on my 2018 book 'Foundations of Info-Metrics: Modeling, Inference, and Imperfect Information' (http://info-metrics.org/), in which I develop and examine the theoretical underpinnings of info-metrics and provide extensive interdisciplinary applications.