I tend to have a lot of “hot-topic” dinner conversations with people about AI: will robots take our jobs, will software intelligence take over the world, what will be the near-term impacts of big data on everything from science to ecology to law. And it’s not just me: consider all the recent symposia about “the end of work”, the “AI race”, “how to stay human in a robot society”, etc.
While not necessarily shallow, these conversations are invariably speculative. I can’t really respond to people’s concerns about AI because the questions they ask don’t connect with the technical concepts that I study. Talking about AI, for most non-technical people, is a proxy for talking about the place of people in society, present and future. They characterize AI by a set of external variables: how it will displace jobs, how it will make things in their life and work cheaper and faster, how it could make society more or less fair. These are observations one can make without knowing anything about how AI works, which is why I call them “external”. I, on the other hand, study internal variables (a.k.a. technological variables): error bounds on particular learning algorithms, logic programming, and a slew of engineering problems like motion planning, natural language, and domain generalization. Studying these things does not make me obviously qualified to address social-sciency concerns about the place of people. For the same reason, I’m quite skeptical of AI “experts” when they prognosticate about the impact of AI on society.
Still, there must be something that the internal variables of AI can say about the external ones; the question is how to say it. Relating the two sets of variables would clarify non-technical people’s concerns for technical people, let us measure technical developments in terms of non-technical outcomes, and suggest “internal” solutions to “external” concerns (or prove that no such solutions exist). And maybe, just maybe, it would help all of us have better conversations about AI.