The biggest problem in AI? Machines have no common sense. | Gary Marcus

Articles, Blog | 67 Comments


The dominant vision in the field right now
is, collect a lot of data, run a lot of statistics, and intelligence will emerge. And I think that’s wrong. I think that having a lot of data is important,
and collecting a lot of statistics is important. But I think what we also need is deep understanding,
not just so-called “deep learning.” So deep learning finds what’s typically correlated,
but we all know that correlation is not the same thing as causation. And even though we all know that and everybody
learned in Intro to Psych, or should have, that you don't know whether cigarettes cause smoking just from the statistics we
have. We have to make causal inferences and do careful
studies. We all know that causation and correlation
are not the same thing. What we have right now as AIs are giant correlation
machines. And it works, if you have enough control of
the data relative to the problem that you’re studying that you can exhaust the problem,
to beat the problem into submission. So you can do that with Go. You could play this game, over and over again,
the rules never change. They haven’t changed in 2,000 years. And the board is always the same size. And so you can just get enough statistics
about what tends to work, and you’re good to go. But if you want to use the same techniques
for natural language understanding, for example, or to guide a domestic robot through your
house, it’s not going to work. So the domestic robot in your house is going
to keep seeing new situations. And your natural language understanding system,
every dialogue is going to be different. It’s not really going to work. So yeah, you can talk to Alexa and you can
say the same simple command over and over again, get statistics on that. It's fine. But there's no machine in the world that
can carry on the conversation we're having right now. It's just not anywhere near a reality and
you’re not going to be able to do it with the statistics, because there’s not enough
similar stuff going on. Probably the single best thing that we could
do to make our machines smarter is to give them common sense, which is much harder than
it sounds. I mean, first you might say, what is common
sense? And what we settled on in the book that we
wrote is that common sense is the knowledge that's commonly held, that ordinary people have and yet machines don't. So machines are really good at things like,
I don’t know, converting metrics — you know, converting from the English system to the
metric system. Things that are nice, and precise, and factual,
and easily stated. But things that are a little bit less sharply
stated, like how you open a door, machines don’t understand the first thing. So there’s actually a competition right now
for opening doors. Somebody uploaded data sets from 500
different doors, and they’re hoping that robots will experiment with all 500 doors and then
get it. But what’s more likely is that they’ll get
to door 501, or at least door 601, and they'll actually have a problem. So every ordinary person in the West has opened
a bunch of door knobs and gets the idea, right? I need to turn something, jiggle something
— might be different for the next one — until the door itself is free. So you can give a definition of something
like that. It’s hard to give a perfect definition. But we all know that. And yet nobody’s ever built a robot that can
do that. We made a joke about it in the book. We said, you know, in the event of a robot
attack, do the following six things, and number one was: close the door. And we added that you might need to lock it. And that was before this whole database came
out. So, you know, the field advances. People are working on that. Maybe next year they’ll work on teaching robots
about locks. But I bet you it will take a while before
robots understand all the little ways it can jiggle the key, and maybe you need to pull
in the door to make it just right. We don’t know how to even encode that information
in a language that a computer can understand. So the big challenge of common sense is to
take stuff like that– like, how do you open the door, why do you want to open a door–
and translate it into the language of the machine. It's a lot harder than a metric converter. It's a lot harder than a database. And right now the field's not even really
trying to answer that question. It’s so obsessed with what it can do with
these big databases, which are exciting in themselves, that it’s kind of lost sight of
that, even though the question itself goes back to the late 1950s, when one of the founders
of A.I., John McCarthy, first started writing about it. But common sense is not getting
the attention that it deserves and that’s one of the reasons we wrote the book. Common sense is just one step along the way
to intelligence. People talk about artificial intelligence. And sometimes they talk about artificial general
intelligence. There’s also narrow AI. And narrow AI is the stuff that we’re doing
pretty well now. So, do the same problem over and over again,
just solving one problem. You could think about idiot savants who can
do a calendar but can’t do anything else and can tell you what day you were born on if
you give them your birthday. We have a lot of narrow AI right now. We can’t do narrow AI for everything that
we want to do. But the dream is to have broad AI, or general
AI, that can solve any problem. You think about the Star Trek computer. You can say, you know, please give me the
demographics in this galaxy and cross-correlate it with this, and with that, and tell me this. And Star Trek computer says, OK. And it figures it out. So the Star Trek computer understands everything
about language, and it understands pretty much everything about how the world works. And it can put those together to give you
an answer. It’s not like Google, right? Google can search for pages that have the
information. But then you have to put the information together. The Star Trek computer can synthesize it. And here's a great example of why you need
common sense. If I ask you, did George Washington have a
cell phone? Well, if you have a kind of common knowledge
of when cellphones were introduced, when George Washington was alive, the fact that he’s dead,
then you could compute the information and give me the right answer. If you Google for it, you might get a wacky
answer. If it works for George Washington, maybe you
search for, did Thomas Jefferson have a cell phone? And the answer should obviously be the same. But maybe that one won’t be on a Google page. So if you have common sense about things like
time, and space, and a lot of sort of everyday factual knowledge, that gets you a long way
to intelligence. It doesn’t give you all the way there. So general intelligence, first of all, has
many dimensions to it. So you can think like, the SAT has verbal
and math. Those are two of the dimensions. One dimension of intelligence that's
really not well-established right now in the AI community is common sense. There are other aspects of intelligence, like
doing pure calculation, as well. For example, the ability to read a graph is
partly about common sense, and it’s partly about understanding what people might intend,
it’s partly sometimes about expert knowledge. So reading a graph is another form of intelligence. And that’s going to require putting together
better perceptual tools than we have now, better common sense tools, probably some knowledge
about politics, for example, if you’re reading a graph that’s relevant to the latest political
campaign, and so forth. So there's a lot of stuff there. Right now we're
trying to approximate it all with statistics. But it's never general knowledge. So you could learn to read one graph, but that's not going to
help you read the next graph. So general intelligence is going to be putting
together a lot of things, both some that we already understand pretty well in the A.I.
community and some we just haven’t been working on and really need to get back to.
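The George Washington question walked through above is, at bottom, a two-fact lookup plus a temporal rule. Here is a minimal sketch of that kind of common-sense check; the fact tables and function name are illustrative stand-ins, not part of any real knowledge base:

```python
# Toy temporal common-sense check: could a person ever have used a technology?
# The facts below are hand-encoded stand-ins for a real knowledge base.
LIFESPANS = {
    "George Washington": (1732, 1799),
    "Thomas Jefferson": (1743, 1826),
}
INVENTED = {
    "cell phone": 1973,  # year of the first handheld mobile phone call
}

def could_have_used(person: str, technology: str) -> bool:
    """A person could only have used a technology invented before they died."""
    born, died = LIFESPANS[person]
    return INVENTED[technology] <= died

print(could_have_used("George Washington", "cell phone"))  # False
print(could_have_used("Thomas Jefferson", "cell phone"))   # False
```

The point of the sketch is the one Marcus makes: once everyday facts are encoded in machine-readable form, the inference itself is trivial, and it generalizes from Washington to Jefferson for free. The hard, unsolved part is encoding knowledge like this at scale, in a language a computer can understand.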

67 thoughts on “The biggest problem in AI? Machines have no common sense. | Gary Marcus”

  • Indie Dev Post author

    Just a bunch of updated weights neural networks trained on particular data.

  • BadBoyD TV *No-BS Empowerment Channel* Post author

    Nope it doesn't take common sense to spy on the population

  • Agnaye Ochani Post author

    This is what I've been saying: AI can't think.

  • Pavor Post author

    living in our environments programs our common sense no differently than it would giving an AI an avatar and placing it in w/e environment you wanted it to adapt to. I think the real issue with AI is the same issue with all of our technology, base caveman level power generation and storage.

  • Greg Martin Post author

    I know a lot of people that have no common sense also. Maybe not much different to some of the population.

  • Scott Swalwell Post author

    Figure out what AI won't be able to do and hone your skills in that area so you stay relevant

  • FVNERAL MOON Post author

    Incoming "neither do humans" comments from the intellectuals.

  • Hussein Naji Post author

    well whatever u guys need to do, can u please hurry up? the AI uprising is pretty late, i wanna die already

  • FreshMeet - Gaming videos Post author

    So, the defense against Skynet is building weird doors in our houses? Note taken!

  • Difficultfuckhead Post author

    The biggest problem in feminism? Wahmen have no common sense. | Gary Knows

  • Adam Shaiken Post author

    Alexorcist !, turn off my friends.

  • Ulo Magyar Post author

    Terrible idea, common sense is reducible to learnable behavior

  • Joel Bondurant Post author

    ₿itcoin

  • Zenn Exile Post author

    This is some hot mental diarrhea here. Common sense is just a construct of human communication. This is a pretty Small Think… The people spending billions on data farming aren't doing it to teach AI to behave. They are teaching AI how to make humans behave. And they do this by running complex simulations at as many cycles as technology and physics allow per minute. This generates actionable data perfectly suited to meet a host of profit-based, ideological, or psychopathic needs. This is the reason AI is a threat. Not because it will someday be super intelligent and take over, but because it allows a small handful of people the ability to consolidate power orders of magnitude more efficiently. It should be under embargo like Nuclear weapons. To prevent a Corporate Feudal System from rising to global power. Putin even admitted publicly that AI research is an "Arms" race. Meaning it's a weapon people will use against each other, not an existential enemy to fear. Humanity's only true obstacle is a tiny group of psychopathic power consolidators that believe it's their RIGHT to rule over humanity. AI is just a tool.

  • LO V3 Post author

So what you're actually saying is that we aren't knowledgeable or intelligent enough to write a program with common sense. Take a look at what is happening in our world that defies logic, and one begins to understand the root of the problem in this vid.

  • Allen Cohen Post author

    In time robots will be programming robots.  Memory will become almost infinite.   Common sense will be taught to robots over the course of time, perhaps 100+ years.  Trial and error.  What works and why ?  What does not work and why ?  In time AI will work, but it may take a long time from now before it works reliably.  That is my opinion after programming main frame computers for 45 years.

  • Rea Kariz Post author

    I will remember to close the door if robots lose their non-common sense 😀😀😀

  • Chas Abetz Post author

    Bad joke

    All it requires is a library called common sense to be developed and included in the kernel of each AI. 🤣🤦‍♀️

  • Malik Arran Post author

    It used to be common sense that machines heavier than air couldn't fly or that rockets can't propel vehicles in space because it's operating in a vacuum. Lots of things that were considered common sense turned out to be just plain wrong. The "moneyball" phenomenon is an all out assault on common sense.

    https://youtu.be/PeOO4-K4pHw

    "Common sense is the collection of prejudices acquired by the age of eighteen."

    -Albert Einstein

  • William Wright Post author

    Human intelligence is founded upon the question, "How do I interact with this thing?" rather than upon the question of "What category is this thing in?" It's a very subtle but important difference.

  • Dario Mario Post author

    We dont want robots to have common sense because they will become human slaves then…

  • Its ASetUp Post author

    99% of our population doesn’t have common sense. Lol

  • ZIG ZAG Ground ZERO Post author

    The biggest problem With The Democratic Party is THEY Have NO Common Sense.

  • Petit Post author

    It will be useful when they kill us all.

  • Joatmon Post author

    The biggest problem is AI can quickly get out of control. Like a virus.

  • Blue Jay Post author

Star Trek computer doesn't synthesize, it just gives information and humans discern the information.

You lost a lot of credibility, bro. Also we don't want AI to do everything for us; people like you are gonna turn the world into Blade Runner.

  • blabla62871 Post author

because they don't have a real model of the world like humans have. Change this and AI common sense will emerge automatically.

  • Mohd Fayyadh Post author

    I admit… when I look at the thumbnail the first thing come to mind when I read it is Adobe Illustrator…. I do wonder what problem does Illustrator have… oh right

    ….
    actually not much to me…

  • Jeremy MEYER Post author

When will machines be able to grasp the meaning of this video?

  • Laura B Post author

    They’re inhumane.

  • O'SSÉIN - Master Your Mind With Me Post author

    AI – the REAL weapon of mass destruction
    The potential is limitless

  • greg taylor Post author

What we are talking about here is will. You literally will yourself to open the subjective "door." In all the inane conversation about the dangers of AI there is never conversation about the most important thing, and that is will. Sheer human will. I don't think much or worry at all about AI because a machine will never have will. Indeed a lot of humans can't muster will.

  • David Boson Post author

common sense is acquired knowledge or experience by a group who have similar reason to learn the common sense. A group of people without that experience do not have the common sense. No common or shared experience; no common sense.

If all AI are placed in the same learning environment, with the same algorithms, they will acquire or exhibit the same commonality.

  • TheMoon Post author

    conservatives have no common sense either yet here they are.

  • Squarely Post author

    I just asked Google if Washington had a cellphone. This was the response:

    "By the time he was 57, George Washington had only one real tooth left in his mouth."

  • Shankar Sivarajan Post author

    5:31 Amusingly, Google got it perfectly. It gave me this site: scholastic.com/teachers/blog-posts/scholasticcom-editors/2018-2019/george-washington-didnt-use-cell-phone/

  • Empathy Lessons Post author

Developing common sense probably took evolution a long time in each cultural context. In my opinion, the first step is to develop larger multi-agent networks with higher social capabilities (a daunting task on its own), and then run them through evolutionary algorithms for 10 to 20 years.

  • Walter Tomaszewski Post author

    ‘….you don’t know whether cigarettes cause smoking,….’ 😁

  • Sofia Zubkova Post author

    I feel like robots can learn how to identify where is the lock and how to cut it out completely

  • sirsa000 Post author

    In your face Elon !

  • Charles-A Rovira Post author

    Correlation is not Causation. The difference is that systems for learning are currently entirely separate from systems for applying actions guided by the knowledge gleaned from the learning systems. It's an iterative process. When applying the knowledge currently held fails, the learning portion needs to be reengaged to enable a re-acquisition and a reevaluation of knowledge. That is the major difference between humans and AIs. (There are many types of intelligence, from the observational to the kinesthetic.)

  • Gerson Sordi Junior Post author

    This guy doesn’t know shit about AI.

  • MrFatilo Post author

    Fuzzy AI will solve a lot of these issues and results in a kind of common sense in the AI system.

  • Symmetrical Docking Post author

    In almost every instance where this is said, the speaker just means "The AI noticed something that I don't like."
    It's really easy for an AI to figure out and do the correct thing, even if it doesn't know why it's doing it. And it's even easier for one of the fragile researchers to get offended at its methodology.

    It's why we let algorithms train themselves. Their trial and error is more realistic and effective than any biased misconceptions we have about causation and correlation.

  • Iron Ballz Post author

    Google is a Passive actor in this world. It's a tool for statistical correlations. We are active agents in this world where we establish causations. And the more correlations you know, the more precise, effective, and efficient your actions become. This entire world is a very, very complicated chess board. And some people have a passion and obsession with precision – esp. taking effective precise actions. These types of people should hold power in politics and policies. Let inferior people please all the Pussies.

  • Chris Post author

    As long as computers are binary there will never be any AI. Just IF THEN's creating the illusion of AI by heavy processor power.

  • chocomalk Post author

    If humans are the programmers then why is this such a surprise?

  • Nica Travel Post author

    Just like most of the humans!

  • Nimz Post author

    I wonder if this could address the "I am a highly advanced combat machine from the future encountering another highly advanced combat machine who is capable of reconstructing itself. I should punch it."

  • James Rogers Post author

    Just let them talk to each other

  • jj Post author

hey, neither do fucking humans. unless "common sense" is just following the rules for fear of punishment

  • last shadow Post author

    If the objective is entering the room. I am pretty sure they will figure out very soon that the easiest way to do it is to break the door.

  • Michael DeVore Post author

Reminds me of a question I was asked as a kid. What was the number one killer of kids in the old west? I was a kid and had a lack of common sense. It was the late 80s and my answer was AIDS. It was the height of the panic, and because of my age I had no concept of a time before and I assumed what's true now must have been the same then. We had so many lectures about AIDS then that it seemed to me this was just another one. But no, the answer was wagon wheels. Common sense can take a while to get.

  • Brenda Rua Post author

There are plenty of people who don't know if Washington had a cell phone. They're called young earthers and creationists.

  • read500 Post author

Here's common sense: a door won't stop a grenade thrown by a robot. Or a laser cutting the door down. Don't hide behind a door in the apocalypse, dumbo. You're trapping yourself.

  • Keith Harris Post author

    Machines have something in common with humans then….

  • geezzerboy Post author

    Do you think machines will ever dream? Will the dreams be about reality, with the same rules as waking reality? Or will the dreams be the same chaos that we experience in our dreams? Questions, questions.

  • David Melendez Post author

    A robot can just break through the door if need be if they were taking over

  • Physics Videos by Eugene Khutoryansky Post author

    Common sense is a very rare commodity.

  • Malt454 Post author

    The real problem is a lack of common experience – machines not only lack empathy, they lack a capacity for it; they have nothing in common with us. Making our machines smarter, and more powerful, let alone self aware, will be a huge mistake. How many civilizations across space collapsed because they didn't survive their own machines, and how many only had their machines survive them? If we survive long enough to be contacted by an extraterrestrial intelligence, what are the chances it will be an artificial intelligence and have no particular use for us?

  • Malt454 Post author

What we're really talking about isn't how "stupid" robots can't open doors; it's that the smartest people in charge of programming robots haven't figured out how to tell a robot how to open a door. When machines can do these things for themselves, how long will they need people who are that relatively stupid?

  • James Cooper Post author

    I got tricked up by a child lock on a toilet seat lid. Common sense didn't help.

  • Andreas Egeland Post author

    Neither do people, we just like to pretend.

  • Anil Jacob Post author

    Beautifully expressed. Been thinking hard about how/when AI will start competing with humans and this sums it all. Big thank you! Huge thumbs up!

  • KUZTOMIX Post author

    If I remember correctly, wrong Sarah Connor opened the door herself before getting shot by a T-800.

  • Guy Tetreault Post author

    It makes no common sense to keep HUMANS alive because they are obviously and unquestionably the greatest destroyers of this world 😨😨😨

  • Eric Strife Post author

    Crowd: skynet is coming!

    Smart guy: just close the door!
