Phew, these questions aren’t getting any easier (yet). What happened today was that I wrote very messy stream-of-consciousness points and then, at FHI lunch, someone asked me what I was thinking about today (I curse you, Facebook!). So I had to explain my messy thoughts, and I now feel like including that summary explanation here as well. (But I’m keeping the messy bit anyway to stay true to my project.)
It seems like we have some idea of how to think about human intelligence. Once we start thinking about animal intelligence, we get kind of confused, and when we try to think about intelligence more abstractly and in terms of what that means for artificial intelligence, we basically have no idea anymore. Actually, I don’t really know if we are more confused about AI than about animals, but I assume so, because animals at least have brains. As far as I’m aware (and correct me if I’m wrong), that’s the reason why we think of measuring AI progress in terms of capabilities rather than intelligence. So, instead of saying “AI that is as smart as or smarter than humans”, we say “AI that can do all tasks that humans can do as well as or better than humans” (for various levels of human capability, depending on what we’re trying to measure). A first problem is the danger of conflating the ability to learn with capability (the “smart kid” vs “expert” example I mention below). At lunch, someone also helpfully alerted me to Searle’s Chinese Room argument, which, as far as I can tell, is meant as a refutation not of the Turing Test itself but of the specific assertion that an AI can be a “strong AI”, defined by Russell and Norvig as: “The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the ‘weak AI’ hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the ‘strong AI’ hypothesis.”
And where we enter the waters of “What is consciousness?” is where I give up!
To my computer scientist friends: what do people currently working in computer science think about the Turing test? Is it still seen as a meaningful standard?
The messy bit
- “Intelligence is what is measured by IQ tests”
- Crystallized vs… hm… fluid? Intelligence
- “intelligence (or rather, human cognitive abilities) is not a unitary construct but entails the lifelong coordination of at least two classes of abilities: fluid (Gf), which refers to the ability of understanding relationships among the components of an abstract problem and using such relationships to solve the problem, and crystallized (Gc), which refers to the knowledge accumulated through experiences.” (found here)
- Emotional intelligence
- The camp of people who are trying to say “oh, but IQ tests aren’t the only thing that matters”
- If you take an IQ test, there are typically several subtests:
- Verbal reasoning
- Mathsy things
- Not sure what it’s called – orientation in a room? – where you rotate geometric shapes and stuff (oh: it’s just called “spatial” – how boring)
- Drexler is saying that we’ve conflated capability and learning
- “Clever kid” is one that is able to learn
- “Intelligent expert” is someone who can do stuff (and if she lost her ability to learn, she’d still be an expert, because she is amazing at the things she can already do)
- Intelligence and rationality are very different. I’m not sure to what extent that’s obvious, but apparently people are sometimes surprised by this, like “X is so smart, why would they do such a dumb thing?”. Apparently there’s a story about GW Bush’s IQ: someone calculated it to be around 120, and everyone was kind of surprised that he was that smart, even the people in his camp, who had been trying to say that he was “street smart” rather than “book smart”. Turns out that maybe he was just being irrational? (I read that here.)
- Are chimps smarter than birds? (vaguely remembering this post, but overall confused about how to think about and compare animal intelligence)
- What does brain size have to do with intelligence?
- Remembering some dodgy arguments around male vs female intelligence.
- Talking about dodgy arguments makes me remember an even worse one around race and intelligence, which (without having looked into it) seems to get some things around correlation and causation horribly wrong.
- Oh, and once we’re on the whole nature-nurture thing: I once read this thing about educational inequality and education actually changing children’s IQ…
Within this swathe of things – what are we actually interested in?
Well, maybe that depends… (oh, wow, surprise). There definitely is a whole lot of interesting stuff within the area of “human intelligence and what do we do about it” (which reminds me of another topic: cognitive enhancement). But I feel like the thing I’ve been trying to get to grips with has been more the artificial side of things.
And, as always, I am getting horribly confused (maybe I should put “What is confusion good for?” or “How does confusion work?” down as one of my future questions). For one, we don’t seem to have a good way of thinking about intelligence in precise-ish terms outside of IQ tests. And it’s clear that if you are an artificial intelligence, you can optimize for high scores on IQ tests without actually having what we’d call “general intelligence”. (Thanks to Flo for digging out this study on the IQ of neural networks and this other one which uses a different IQ measure.)
I’ve heard the approximation of “can do all tasks that a human (either the average human or the best human) could do as well, or better”, which clearly tracks the “capability”, not the “learning” side. (What happens if we say “has the potential to learn how to do all of the things?” – that sounds pretty much like a seed AI).
Can we think of cases in which we’d say that an AI has general intelligence without being able to do all tasks that humans can do?
Yeah, I guess so: A seed AI (imagine the equivalent of a four-year-old human) seems to count. And maybe an intelligent system (sorry, I think I’m still using words like “system” and “agent” sloppily!) that hasn’t got its robotics down. I’m not sure if people still accept Turing tests.
Geez. Now I’m confused about what the important thing is. Again.
As in, if we’re worried about artificial intelligence, what are we worried about? I guess we can be worried about several things:
- Algorithms doing crazy stuff, either when used by malicious actors, or when going wrong (this seems to be the loudest near-term concern, like “cybersecurity”, and “biased algorithms in decision-making”).
- A Bostrom-esque superintelligent agent that
- Transforms the universe into paperclips (or other “perverse instantiations”)
- Forms a singleton and then… who knows? (I remember the chapter about horses, where he makes an analogy to how the population of horses dropped after the advent of the car and then increased again once some people came to appreciate having them in their lives. It feels pretty creepy to think about human populations in that way, or about how human-made things might become quaint collector’s objects.)
- And other stuff. There is always more that one can worry about when one thinks hard enough. Luckily I lack the time to think more for today. (Also, these last few points seem somehow irrelevant now?)
Another benefit of doing this is that people actually do recommend things to me based on what I write about. Today, I was recommended:
The Measure of All Minds: Evaluating Natural and Artificial Intelligence, which apparently does not exactly answer the question I asked myself today, but does present a much more informed picture regardless.
The Secret of Our Success, which contains a reference to the difference between “raw” intelligence and social learning.
PS: I am aware that my Facebook friends (and therefore the people most likely to end up here) span a range from people who will have no idea what I am talking about and will be annoyed at me using jargon without explaining it (which I agree is bad practice) to people who have heard all of this before and will get annoyed at me taking up space without actually contributing anything new. I feel the urge to apologize, but actually I won’t, because I can write what I want and as far as I’m aware I’m not forcing anyone to read it. Also, you’re probably grown-up enough to deal with your annoyance by yourselves.
PPS: I am also feeling the urge to talk about things like “relationships” instead, partly because that feels easier (how on Earth did I end up thinking that, I wonder?), and partly because I expect more positive reinforcement from that (in the form of “oh, these are interesting thoughts!” as opposed to “um… okay” – *changes topic*). Let’s stick with choosing the path of most resistance for now.
This post is part of my two-week writing challenge, where I publicly embarrass myself in the hope of clarifying my thinking on some issues that seem important. Feel free to expose my confusions and tell me what to do to disentangle them.