Post by Hannes Vilhjalmsson on Feb 15, 2011 20:31:00 GMT -5
As one of you (Þorsteinn) pointed out to me, a historic event is playing out this week: an IBM supercomputer is competing on the long-running and famous US quiz show Jeopardy! Just as IBM previously pitted Deep Blue against chess world champions, its new AI, called Watson, is competing against the most famous Jeopardy players in history. By the time you read this, the match may already be over, but regardless of the outcome, Watson has impressed a lot of people by apparently being extremely good at understanding Jeopardy questions and coming up with answers to even the tough ones.
For this week's discussion preparation, start by watching the following video that introduces Watson (this is a short segment from PBS news):
Towards the end of the paper he mentions how we could teach computers the patterns of our everyday lives. Do you think a lot of people would be ready to have their daily routine stored on a computer?
Last Edit: Feb 19, 2011 17:19:18 GMT -5 by kristjan
- Basically, humans build robots or systems in order to solve extremely hard problems, because many of these problems exceed human capacity. So we want to build entities that are better than humans. My question is: if we want to exceed human limits, should we implement systems based on our own way of thinking (maybe if they think the same way we do, they won't be able to find the right solution either), or should we try to discover another way of thinking?
- If the knowledge is gathered with NLP from a web site, it may contain a lot of mistakes (like Wikipedia does, for example), and what about redundant information? And are a "few" templates really enough to represent any kind of information?
Post by Elín Carstens on Feb 20, 2011 8:16:07 GMT -5
1) Different people generally do not agree on the definitions of concepts and ideas. The author writes that you need a way to compensate for this. How do you determine the value of different definitions of the same concept and then confidently pick the "right" one? What if one definition of an idea is so different from another that there is no way to find common ground between them? Suppose both such definitions have been typed into a common sense system: how does the system determine which definition is the more relevant or valid one? Can a common sense system hold two or more contradictory statements true at the same time?
2) Is the AI community in agreement about what common sense is? Which definition is most commonly used?
3) "We need new kinds of architectures for building systems with common sense"
Have they developed any new architectures since the article was written? If so, how do they perform in comparison to the older ones and what are the prominent features of the new ones?
Also, the paper says humans have hundreds of millions of pieces of commonsense knowledge. Will it really be enough to shove all of that into a database? And will the system ever be able to find the right piece of knowledge for a situation quickly enough for it to be of any use?
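To make my question (and question 1 above) concrete, here is a minimal sketch of what such a store could look like. It is entirely my own toy construction, not anything from the paper: assertions are (concept, relation, concept) triples, contradictory entries sit side by side weighted by how many contributors agree, and an index keeps lookup fast:

```python
from collections import defaultdict

class CommonSenseKB:
    """Toy assertion store: contradictory entries coexist, weighted by
    how many contributors asserted them, and every assertion is indexed
    by its concepts so retrieval is a hash lookup, not a full scan."""

    def __init__(self):
        self.weights = defaultdict(int)   # (subj, rel, obj) -> vote count
        self.index = defaultdict(set)     # concept -> triples mentioning it

    def add(self, subj, rel, obj):
        triple = (subj, rel, obj)
        self.weights[triple] += 1         # repeated contributions raise confidence
        self.index[subj].add(triple)
        self.index[obj].add(triple)

    def lookup(self, concept):
        """All assertions about a concept, most-agreed-upon first.
        Contradictions are not resolved, just ranked."""
        return sorted(self.index.get(concept, ()),
                      key=lambda t: -self.weights[t])

kb = CommonSenseKB()
kb.add("tomato", "IsA", "fruit")          # two contributors say fruit...
kb.add("tomato", "IsA", "fruit")
kb.add("tomato", "IsA", "vegetable")      # ...one says vegetable; both are kept
print(kb.lookup("tomato"))
```

So the system never has to declare one definition "right": it can hand back a ranked list and leave the final choice to context.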
Post by Þorgeir Karlsson on Feb 20, 2011 9:23:30 GMT -5
In the paper they discuss 'baby machines' that could learn like a child. However, this problem seems incredibly broad, and I'm wondering how it would be designed. It seems to me that this wouldn't work unless the computer knew how to program; that is, for every new thing it learns, it would add it to its own programming in much the same way we do when we learn new things or adapt to changes. Is this a feasible option?
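One possible answer (my own sketch, not something from the paper) is that the machine wouldn't literally rewrite its own program. Learning would mean appending rules to a data structure that a fixed interpreter consults, so the "programming" it adds to is really just data:

```python
class BabyMachine:
    """Toy 'baby machine': the program itself never changes. Learning
    appends condition -> conclusion rules to a rule base that a fixed
    forward-chaining interpreter consults."""

    def __init__(self):
        self.rules = []                   # learned knowledge lives here, as data

    def learn(self, condition, conclusion):
        self.rules.append((frozenset(condition), conclusion))

    def infer(self, facts):
        """Forward-chain: keep applying rules until no new facts appear."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for condition, conclusion in self.rules:
                if condition <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

baby = BabyMachine()
baby.learn({"it_is_raining"}, "ground_is_wet")
baby.learn({"ground_is_wet"}, "shoes_get_dirty")
print(baby.infer({"it_is_raining"}))
# {'it_is_raining', 'ground_is_wet', 'shoes_get_dirty'}
```

Of course, the hard part this toy skips is where the conditions and conclusions come from in the first place.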
"Could the problem of giving computers common sense be cast in the same way, where thousands of people with no special training in computer science or artificial intelligence could participate in building the bulk of the database? Over a hundred million people have access to the web. If each of them contributed just one piece of knowledge, the problem could be solved!".
Aren't we kind of doing this now through Wikipedia? Could we implement some sort of reasoning through Wikipedia by adding, for example, a "How to reason with this" section to its pages?
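Incidentally, the way the Open Mind project reportedly avoids free-form text is with fill-in-the-blank templates that untrained volunteers complete. Something in this spirit, though the patterns below are my own toy examples rather than the project's real ones:

```python
import re

# Fill-in-the-blank patterns in the spirit of Open Mind's templates
# (illustrative only -- not the project's actual template set).
TEMPLATES = [
    (re.compile(r"^(?:a|an) (.+) is (?:a|an) (.+)\.$", re.I), "IsA"),
    (re.compile(r"^you use (?:a|an) (.+) to (.+)\.$", re.I), "UsedFor"),
    (re.compile(r"^(?:a|an) (.+) is part of (?:a|an) (.+)\.$", re.I), "PartOf"),
]

def parse_contribution(sentence):
    """Turn one volunteer's plain-English sentence into a relation
    triple, or None if it matches no known template."""
    for pattern, relation in TEMPLATES:
        m = pattern.match(sentence.strip())
        if m:
            return (m.group(1).lower(), relation, m.group(2).lower())
    return None

print(parse_contribution("A hammer is a tool."))
# ('hammer', 'IsA', 'tool')
print(parse_contribution("You use a hammer to drive nails."))
# ('hammer', 'UsedFor', 'drive nails')
```

A "How to reason with this" section on Wikipedia pages would essentially be asking contributors to fill in templates like these.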
If a problem consists of a series of smaller problems, is implementing common-sense reasoning nothing but creating a tiny problem solver for every subproblem of a bigger problem?
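If so, a crude picture of it might be a dispatch loop like the one below (entirely my own sketch): each tiny specialist either answers, declines, or splits the problem into subproblems that are solved recursively.

```python
def solve(problem, solvers):
    """Tiny-solver dispatch: each (can_handle, handle) specialist either
    answers, declines, or decomposes the problem into subproblems."""
    for can_handle, handle in solvers:
        if can_handle(problem):
            outcome = handle(problem)
            if isinstance(outcome, list):          # decomposed further
                return [solve(sub, solvers) for sub in outcome]
            return outcome
    return f"stuck on: {problem}"

# Two toy specialists: one decomposes, one handles primitive steps.
solvers = [
    (lambda p: p == "make dinner",
     lambda p: ["buy ingredients", "cook ingredients"]),
    (lambda p: p in ("buy ingredients", "cook ingredients"),
     lambda p: f"done: {p}"),
]
print(solve("make dinner", solvers))
# ['done: buy ingredients', 'done: cook ingredients']
```

The catch, of course, is that knowing which solver applies, and how to decompose, is itself a common-sense problem.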
Post by thorsteinnth on Feb 20, 2011 11:23:10 GMT -5
Don't computers lack that "feeling" for a piece of common sense knowledge? Sure, they may know the definition of a certain piece, but they seem to lack the "feeling" for it. Isn't that what makes common sense "common sense"?
From the description, the Open Mind project is collecting a dataset that contains discrete entities of knowledge, which collectively might represent the common-sense consensus on a bunch of things. When we apply intelligent methods to this dataset, such as inference, heuristic evaluation, etc., we might end up with sets of knowledge that contain more information than the original entities, but this expansion would still leave us with discrete objects.
In the paper there are several scenarios where, if the system could provide a solution, it would imply that it was capable of a continuous kind of "reasoning": the knowledge (solution) exists in some abstract form that is generic enough to provide an answer across different scenarios (associative reasoning) while still being specific enough to constitute a piece of the common-sense consensus.
For example, a hammer can exist as a physical object, but the set of its possible utility functions across scenarios is infinite (building something, combat utility, etc.). So my question would be: is it possible (or feasible) to map discrete data into some associative process that could function continuously?
"I consider that a man's brain originally is like a little empty attic, and you have to stock it with such furniture as you choose. A fool takes in all the lumber of every sort that he comes across, so that the knowledge which might be useful to him gets crowded out, or at best is jumbled up with a lot of other things, so that he has a difficulty in laying his hands upon it." -Sherlock Holmes
"A man should keep his little brain attic stocked with all the furniture that he is likely to use, and the rest he can put away in the lumber-room of his library, where he can get it if he wants it." -Sherlock Holmes
Post by Ásgeir Jónasson on Feb 20, 2011 16:10:00 GMT -5
1. Will we ever really need a machine that has the entire common sense database within it, except for search engines? Most agents work in a specific domain, so domain knowledge is much more useful. Would this, for example, really work in a continuous environment? How is the agent/robot supposed to know when to apply common sense and when to use other knowledge it possesses? Would it be plan B to search the database if it can't come to a conclusion using other methods? (A sketch of what I mean by plan B is at the end of this post.)
2."But in our view, unless we can acquire some experience in manually engineering systems with common sense, we will not be able to build learning machines that can automatically learn common sense. So we must bite the bullet, and make a manual attempt to build a system with common sense."
This needs some further explanation. Wouldn't it be more efficient to try to build a machine that can construct its own common sense by observing its environment, rather than spending all this time manually building this database? Common sense can be subject to change, for example if the agent's environment changes radically, and having a database that doesn't respond to that change doesn't sound very intelligent.
3. "estimating the brain’s storage capacity by counting neurons and guessing at how many bits each neuron can store." Does anyone know the outcome of this experiment ?