Keynesian Spirits

Chomsky on Artificial Intelligence

Noam Chomsky on Artificial Intelligence: it’s interesting that, like Douglas Hofstadter (who has been all but forgotten and shunted aside in academia), he disagrees with the current data-heavy approach. There are also interesting parallels with economics, where we debate the utility of Randomised Controlled Trials.


Chomsky: I have to say, myself, that I was very skeptical about the original work [in AI]. I thought it was first of all way too optimistic, it was assuming you could achieve things that required real understanding of systems that were barely understood, and you just can’t get to that understanding by throwing a complicated machine at it. If you try to do that you are led to a conception of success, which is self-reinforcing, because you do get success in terms of this conception, but it’s very different from what’s done in the sciences. So for example, take an extreme case, suppose that somebody says he wants to eliminate the physics department and do it the right way. The “right” way is to take endless numbers of videotapes of what’s happening outside the window, and feed them into the biggest and fastest computer, gigabytes of data, and do complex statistical analysis — you know, Bayesian this and that [Editor’s note: A modern approach to analysis of data which makes heavy use of probability theory.] — and you’ll get some kind of prediction about what’s gonna happen outside the window next. In fact, you get a much better prediction than the physics department will ever give. Well, if success is defined as getting a fair approximation to a mass of chaotic unanalyzed data, then it’s way better to do it this way than to do it the way the physicists do, you know, no thought experiments about frictionless planes and so on and so forth. But you won’t get the kind of understanding that the sciences have always been aimed at — what you’ll get at is an approximation to what’s happening.

And that’s done all over the place. Suppose you want to predict tomorrow’s weather. One way to do it is okay I’ll get my statistical priors, if you like, there’s a high probability that tomorrow’s weather here will be the same as it was yesterday in Cleveland, so I’ll stick that in, and where the sun is will have some effect, so I’ll stick that in, and you get a bunch of assumptions like that, you run the experiment, you look at it over and over again, you correct it by Bayesian methods, you get better priors. You get a pretty good approximation of what tomorrow’s weather is going to be. That’s not what meteorologists do — they want to understand how it’s working. And these are just two different concepts of what success means, of what achievement is. In my own field, language fields, it’s all over the place. Like computational cognitive science applied to language, the concept of success that’s used is virtually always this. So if you get more and more data, and better and better statistics, you can get a better and better approximation to some immense corpus of text, like everything in The Wall Street Journal archives — but you learn nothing about the language.
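The “get priors, observe, update, get better priors” loop Chomsky describes can be made concrete. Below is a toy sketch (my illustration, not anything from the transcript): a beta-binomial model of a single yes/no question — “will tomorrow’s weather repeat today’s?” — where each observation nudges the posterior toward the observed frequency, without any model of *why* weather persists:

```python
def update(alpha, beta, repeated):
    """One Bayesian update of a Beta(alpha, beta) prior on the
    probability that tomorrow's weather repeats today's."""
    return (alpha + 1, beta) if repeated else (alpha, beta + 1)

def predict(alpha, beta):
    """Posterior mean: current estimate of the repeat probability."""
    return alpha / (alpha + beta)

# Start from a flat Beta(1, 1) prior, then feed in observations
# (True = tomorrow's weather matched today's). The prediction gets
# better as an approximation while explaining nothing.
alpha, beta = 1.0, 1.0
for repeated in [True, True, False, True, True, True, False, True]:
    alpha, beta = update(alpha, beta, repeated)

print(round(predict(alpha, beta), 2))  # 6 repeats, 2 misses -> 0.7
```

This is exactly the contrast he draws: the estimate converges on a good approximation of the data, which is success by one definition, while the meteorologist’s question — how the weather actually works — never gets asked.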

A very different approach, which I think is the right approach, is to try to see if you can understand what the fundamental principles are that deal with the core properties, and recognize that in the actual usage, there’s going to be a thousand other variables intervening — kind of like what’s happening outside the window, and you’ll sort of tack those on later on if you want better approximations, that’s a different approach. These are just two different concepts of science. The second one is what science has been since Galileo, that’s modern science. The approximating unanalyzed data kind is sort of a new approach, not totally, there’s things like it in the past. It’s basically a new approach that has been accelerated by the existence of massive memories, very rapid processing, which enables you to do things like this that you couldn’t have done by hand. But I think, myself, that it is leading subjects like computational cognitive science into a direction of maybe some practical applicability…

Q: engineering?

Chomsky: …But away from understanding. Yeah, maybe some effective engineering. And it’s kind of interesting to see what happened to engineering. So like when I got to MIT, it was 1950s, this was an engineering school. There was a very good math department, physics department, but they were service departments. They were teaching the engineers tricks they could use. The electrical engineering department, you learned how to build a circuit. Well if you went to MIT in the 1960s, or now, it’s completely different. No matter what engineering field you’re in, you learn the same basic science and mathematics. And then maybe you learn a little bit about how to apply it. But that’s a very different approach. And it resulted maybe from the fact that really for the first time in history, the basic sciences, like physics, had something really to tell engineers. And besides, technologies began to change very fast, so not very much point in learning the technologies of today if it’s going to be different 10 years from now. So you have to learn the fundamental science that’s going to be applicable to whatever comes along next. And the same thing pretty much happened in medicine. So in the past century, again for the first time, biology had something serious to tell to the practice of medicine, so you had to understand biology if you want to be a doctor, and technologies again will change. Well, I think that’s the kind of transition from something like an art, that you learn how to practice — an analog would be trying to match some data that you don’t understand, in some fashion, maybe building something that will work — to science, what happened in the modern period, roughly Galilean science.


Protectionism caused the Great Depression, right?

No and no again!

Intellectual Persecution

While reading John Cassidy’s brilliant history of economic ideas, How Markets Fail, I began thinking about the widespread ‘persecution’ of intellectuals of the left, particularly economists, which started in the 1970s when the Keynesian models broke down in the face of simultaneously rising unemployment and inflation.

Naturally, the process accelerated once Thatcher and Reagan won office and began implementing their then-radical agendas. Many brilliant economists who warned of the dangers of unbridled finance, deregulation, manic privatisation and zero capital-flow controls were shunted aside, ridiculed and rarely got top academic or political posts.

So, I take my hat off to geniuses on the right (!) and the left who stuck to their principles and ideas in spite of regular criticism and intellectual isolation. Thanks to von Hayek, Pigou, Minsky, Baker, Stiglitz, Galbraith Jnr. and many others, we understand better how our economies and societies function. It goes beyond being right or wrong (I may not agree with the policy proposals of von Hayek and Friedman); it’s more about fighting for what one believes to be the truth.


A very good read on Friedman

Notes on Minsky 1 and 2

Galbraith on economists’ mistakes

What Goes Around, Comes Around…

An extract from Jonathan Fenby’s The History of Modern China (talking about the country at the end of the 19th century):

“Growing exports of food exposed farmers to international price fluctuations. The trade deficit rose as other nations competed with China’s traditional sales abroad of tea and silk…Imported coal was sold for less than that mined domestically. Handicrafts were hit by imports, notably cotton goods from Britain…Some 700,000 cotton shirts made in Lancashire were sent up the Yangzi from Shanghai to Sichuan each year, undercutting local production despite the cost of shipping them across the globe.”