I ran across this link to Paul Krugman being insightful and thoughtful about the general questions "What is a model?" and "What do we use models for in science?"
It’s about economics and specifically models of development economics, but the general questions of methodology apply to social sciences more broadly.
It is in a way unfortunate that for many of us the image of a successful field of scientific endeavor is basic physics. The objective of the most basic physics is a complete description of what happens. In principle and apparently in practice, quantum mechanics gives a complete account of what goes on inside, say, a hydrogen atom. But most things we want to analyze, even in physical science, cannot be dealt with at that level of completeness. The only exact model of the global weather system is that system itself. Any model of that system is therefore to some degree a falsification: it leaves out some (many) aspects of reality.
How, then, does the meteorological researcher decide what to put into his model? And how does he decide whether his model is a good one? The answer to the first question is that the choice of model represents a mixture of judgement and compromise. The model must be something you know how to make — that is, you are constrained by your modeling techniques. And the model must be something you can construct given your resources — time, money, and patience are not unlimited. There may be a wide variety of models possible given those constraints; which one or ones you choose actually to build depends on educated guessing.
And how do you know that the model is good? It will never be right in the way that quantum electrodynamics is right. At a certain point you may be good enough at predicting that your results can be put to repeated practical use, like the giant weather-forecasting models that run on today’s supercomputers; in that case predictive success can be measured in terms of dollars and cents, and the improvement of models becomes a quantifiable matter. In the early stages of a complex science, however, the criterion for a good model is more subjective: it is a good model if it succeeds in explaining or rationalizing some of what you see in the world in a way that you might not have expected.
There is also a nice description of the "dishpan model" of David Fultz, an example of a hyper-simplified physical model that nevertheless exhibited emergent properties useful to meteorology.
What resonates with me about Krugman's description is a common interest in building the simplest descriptive models that, we hope, illuminate underlying principles of complex processes. In economics, particularly macro, the scientific goal is to understand systems of unmanageable complexity (interactions among all the people and institutions that produce economic activity). In neuroscience and psychology, we attempt to understand the human brain, also a system of unmanageable complexity.
I also prefer simple models with a small handful of parameters to illustrate concepts, while having a lot of admiration and respect for modelers who take on the complexity of building up from individual neurons (each of which has nearly unmanageable complexity itself, fwiw). The simple models also cannot be "right" in the sense Krugman describes above, but they can account for some useful fraction of the variance we aim to explain, and hopefully expose deeper principles that might even eventually direct neural-level modeling.
There's a good question at the other end of the complexity spectrum as well: why is it worth building even simple models with a few parameters, over and above making verbal theoretical statements like "changing x causes a change in y"? Such theoretical statements are the bread and butter of standard approaches to psychological science, especially experimental work, but I'll leave the answer as an exercise, perhaps to be tackled in my graduate seminar the next time I teach modeling (hints: quantification and prediction are important).
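To make the hint concrete, here is a minimal sketch (with entirely hypothetical, synthetic data) of what a few-parameter model buys you over the bare claim "changing x causes a change in y": the fitted model commits to *how much* y changes, makes a quantitative prediction at a new value of x, and can be scored by the fraction of variance it explains.

```python
# Hypothetical example: a two-parameter linear model fit to synthetic data.
# The "theory" says only that x affects y; the model quantifies the effect.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=x.size)  # synthetic observations

# Fit the two free parameters (slope, intercept) by least squares.
slope, intercept = np.polyfit(x, y, 1)

# Quantification: an estimated effect size, not just a direction.
# Prediction: a concrete expected value of y at a new, unobserved x.
y_at_12 = slope * 12.0 + intercept

# Scoring: fraction of variance explained by the model.
residuals = y - (slope * x + intercept)
r_squared = 1.0 - residuals.var() / y.var()
```

None of this is possible with the purely verbal statement; the verbal theory is consistent with any positive slope, while the model can be wrong in a measurable, improvable way.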
Links to sources: