Post by Hannes Vilhjalmsson on Feb 20, 2009 7:31:23 GMT -5
Sorry I didn't start this thread earlier (I thought I had started it already, but I had just put the reading on the wiki without a thread). This week's reading is the "The Open Mind Common Sense Project", which relates to the topic of knowledge representation we've just started reviewing in the course. Gathering and representing common sense knowledge is one of the biggest issues in general AI. Here we have a specific project that attempts to make this a practical endeavor. Post your questions here by Sunday (since I forgot to get the thread going).
1) What should be the standard for comparing the common sense of a computer system? Should it be a human's ability to use common sense? Is there any other measure of common sense?
2) This Openmind approach is very interesting and challenges the conception of common sense belonging to a person and perhaps being to some degree culture specific. Could this approach lead to a common sense that is not culture specific? And is that necessarily a benefit? It will be interesting to see the possibilities and the limitations of this holistic approach.
Post by Birna Íris on Feb 21, 2009 10:21:32 GMT -5
"...we need to find ways to build systems into the mind..." Are they talking about the human mind here? I would volunteer for a memory chip implant in my brain! Any ideas on how this would be implemented??
Common sense varies between people, and we sometimes talk about a lack of common sense, especially in people who live risk-seeking lives. Is common sense taught to children, or is it some internal mechanism that we are born with?
Another thought (maybe not so connected to this article... but still...) is to look at human life somewhat as trajectories around chaotic attractors. Maybe common sense is to find balance between those attractors?? (members.tripod.com/~Vlad_3_6_7/Complexity-of-Life.html)
Post by Helgi Páll Helgason on Feb 21, 2009 15:07:50 GMT -5
I did not find much information in the article regarding how they segment or categorize the facts in their database. For practical usability reasons, isn't this a rather important aspect, so the machine can at least have some idea of where to look for relevant facts instead of considering the whole database?
Aren't these guys underestimating a little the implications of having data that is known in advance to be partially flawed? It may be fine for something as trivial as suggesting relevant images for an email, but one would think carefully before giving a program operating on such facts more powerful (and useful) abilities.
I've known and respected Lenat's early work, but I fear that he's fallen off the track, and while Cyc and The Open Mind Common Sense Project both have laudable goals, I think both are misguided.
They are both looking for a hard-coded banality of common sense while ignoring that common sense is a process, not a knowledge base. For any such common sense rule you might come up with, you can easily imagine a world where that rule does not apply.
Common sense is the process by which you record obvious things about the world after having occupied it long enough. Some of it is evolutionary in nature, some of it has to do with later information processing.
For any given age there have been things men should run from. For our age we might say it's an oncoming car. For the Neanderthal it was large predators. (I am sure you can think of many more examples.) The principal brain organ responsible for the fight-or-flight mechanism is the amygdala. Its function is to turn on all your fears and make you RUN. It's a very primal instinct and saved humanity from predators. It still makes us bolt, just not from the same things.
The Neanderthal ran away from a lion on instinct. The Frontiersman of the New World would have drawn a gun and shot it with much less concern. Today you would have to run again. That 'common sense' iota has changed throughout the ages depending on man's attitude towards the world.
Post by Hjalti Kolbeinsson on Feb 22, 2009 12:39:08 GMT -5
This problem appears to be so big that it is unsolvable. We need to gather hundreds of millions of statements that are true in one context and false in another. Also, does the website automatically create new questions for people to answer when it has new knowledge, or does it keep asking the same questions after they have been answered, because the answer sometimes depends on culture, for example?
Do they log what country, what religion, and other information about the people who answer that can affect the answer? And if so, do they link each answer to a specific user? If so, they would need a huge database where each "fact" is linked to a specific user, so the same question could have tens of thousands of answers.
Post by Jón Trausti on Feb 22, 2009 17:46:06 GMT -5
Has the media helped openmind.org gather people to send in common sense? If we succeed in acquiring as much common sense as they need, then what? How will it help?
2. Regarding the question "How to use many kinds of methods together to solve a problem", I've always liked to think of this in terms of object orientation. I personally believe common sense cannot be defined by a database, but rather by some algorithm that uses external information: sound, vision & feelings.
I want to give an example:
The AI hears a loud sound. It compares it against all loud sounds it has heard before and recognizes the sound of tires screeching on the road. Possible edges coming from the vertex that defines the "screeching sound": 1. Car braking 2. Aircraft landing
(It will do some calculations to estimate which edge is a match.) 1. The definition of car braking is a car slowing down.
Is the car slowing down? Yes
Edges from Car braking: 1. Red light 2. Another car in front 3. Person in front 4. I'm blocking the way.
(More calculations to find next matching edge).
Let's now assume it orders the edges by what's most important to check first, such as "I'm blocking the way". It'll calculate it:
Is the car getting larger in my vision? [YES] Is it moving fast in my FOV (field of view)? [NO]
If it's matching, check edges from that possibility:
Car coming my way: 1. Is it going faster than 100km/h? 2. Is it going faster than 50km/h? 3. Is it going less than 1km/h?
If (1) ... else if (2) ...
I personally think common sense can be built like that, but of course, it'd need time to learn, adding vertices and edges to each vertex over time. It'll have to learn a lot of calculations. I'd love to see it as some event-driven scripting language that the AI generates from learning.
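The edge-following idea above can be sketched in a few lines of Python. This is a hypothetical illustration of the post's proposal, not anything from the Open Mind project: the graph, the percept names, and the check functions are all invented for the example, and a real agent would learn weights to rank edges instead of taking the first match.

```python
# A knowledge graph: each vertex maps to candidate explanations (edges),
# each paired with a check over the agent's current percepts.
# All vertex and percept names here are made up for illustration.
GRAPH = {
    "screeching sound": [
        ("car braking", lambda p: p.get("car slowing down", False)),
        ("aircraft landing", lambda p: p.get("aircraft overhead", False)),
    ],
    "car braking": [
        ("red light", lambda p: p.get("red light visible", False)),
        ("I'm blocking the way", lambda p: p.get("car growing in view", False)),
    ],
}

def explain(vertex, percepts, depth=3):
    """Follow the best-matching edge from a vertex, up to `depth` hops.

    Returns the chain of vertices visited, i.e. the agent's explanation
    of the triggering event given its current percepts.
    """
    path = [vertex]
    while depth > 0 and vertex in GRAPH:
        matches = [label for label, check in GRAPH[vertex] if check(percepts)]
        if not matches:
            break  # no edge matches: the explanation stops here
        vertex = matches[0]  # a real agent would rank edges by learned weights
        path.append(vertex)
        depth -= 1
    return path

# The scenario from the post: screeching tires, a car slowing and growing in view.
percepts = {"car slowing down": True, "car growing in view": True}
print(explain("screeching sound", percepts))
```

With those percepts the chain comes out as screeching sound → car braking → I'm blocking the way, which matches the walkthrough above. Learning would then amount to adding new vertices, edges, and checks as the agent encounters new situations.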
Post by Björn Vignir Magnússon on Feb 22, 2009 17:49:59 GMT -5
Has a program ever been made that can successfully tell whether a sentence is said in sarcasm or not?
The information on the Open Mind website seems to be rather simple, mostly about connecting certain words to other words instead of providing a deeper understanding of objects. Would it not be possible to make a program gather information about the world from Wikipedia, or would the program then maybe be more like a dictionary?
Post by Richard Ottó O'Brien on Feb 22, 2009 18:36:47 GMT -5
Have the Cyc or Open Mind databases been used in some system/project?
The way common sense is described in the paper, it is hard to draw accurate lines determining what one person might consider common sense versus the next. Where should these lines be drawn? Where does common knowledge end and "uncommon knowledge" begin?
Post by Haukur Jónasson on Feb 22, 2009 20:52:34 GMT -5
This article was definitely one of the most interesting ones so far. But... It felt so complete, so self-contained. It didn't really evoke any questions that it didn't then subsequently answer itself (a sign of a good article, I guess). I've been thinking about it on and off for hours, fighting the urge to just post "No comment."
But then it struck me. What's the point? Why do we need a computer/robot with all the common sense of a human being? Aren't the Cyc and Open Mind projects really just huge thought experiments? Are AI agents ever going to be used in such a general context that they will need this information? I can see the common sense reasoning capabilities being used, but the vast library of trivial knowledge? Really?
"Artificial intelligence is no match for natural stupidity."