|
Post by stephan on Jan 26, 2012 12:23:13 GMT -5
Post the stats of your algorithms here for comparison. Of course, you can also use this board to ask questions regarding the assignment and discuss any problems you might have.
|
|
|
Post by Baldur on Jan 31, 2012 12:56:00 GMT -5
I'm working on implementing the data structures. I'm wondering how much information a state is supposed to have? Right now I have the following:
- position
- cell contents (that is, if the cell contains dirt)
- orientation
- dirty cell count
- score
Is this OK? Should a state have all this information? Or should it have even more? (Not sure what that might be)
|
|
|
Post by Baldur on Jan 31, 2012 15:09:33 GMT -5
Also, I don't understand the score calculations on the wiki page. Is it possible there is some kind of formatting issue (because of the DocuWiki syntax)?
After reading through the gdl file I gathered the following:
You start with 25 points. You get:
- -5 for SUCK if there is no dirt
- +15 for SUCK if there is dirt
- -1 for shutting off at home
- -25 for shutting off not at home
- -1 for every other action (or no action)
Is this correct? Also, we don't get the initial score from the initial information. Should we set it to 25 in our code, or use the score rules only and just set the initial score to 0?
|
|
|
Post by stephan on Feb 2, 2012 10:27:38 GMT -5
> I'm working on implementing the data structures. I'm wondering how much information a state is supposed to have?

A state should be as small as possible to reduce the size of the state space (less information = fewer possible states) and the memory requirements of the search. Of course, you should still have all the necessary information in the state. Just think about which information you actually need in your search to compute legal moves, successor states and the goal test.

> Right now I have the following: position, cell contents (that is, if the cell contains dirt), orientation, dirty cell count, score. Is this OK? Should a state have all this information? Or should it have even more? (Not sure what that might be)

1. Don't forget the obstacles!
2. The score is not really part of the state. You don't have a score, but path costs, and path costs can differ depending on the path you take to reach the state. So the score/cost is not part of the state (but maybe part of the node in the search tree, depending on the search algorithm).
3. Ask yourself if you really need the dirty cell count. The information is also included in the contents of the cells. However, it may be useful to keep it anyway, if it makes the functions for computing legal moves, state updates or path costs a lot easier or faster.
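Putting stephan's three points together, a state could look roughly like the sketch below. This is only a minimal illustration, not the assignment's required layout: the class and field names are my own, obstacles are assumed to live in a separate fixed set outside the state, and I use `java.awt.Point` for cells only because it already defines `equals`/`hashCode` (a small x/y class like the one Baldur describes later in the thread does the same job).

```java
import java.awt.Point;
import java.util.Objects;
import java.util.Set;

// Minimal vacuum-cleaner state: position + orientation + remaining dirt.
// Obstacles never change, so they can be stored once outside the state;
// cost/score belongs to the search node, not the state.
class State {
    enum Orientation { NORTH, EAST, SOUTH, WEST }

    final Point position;          // robot cell
    final Orientation orientation; // robot heading
    final Set<Point> dirt;         // remaining dirty cells

    State(Point position, Orientation orientation, Set<Point> dirt) {
        this.position = position;
        this.orientation = orientation;
        this.dirt = dirt;
    }

    // equals/hashCode are what make duplicate detection ("already visited") work.
    @Override public boolean equals(Object o) {
        if (!(o instanceof State)) return false;
        State s = (State) o;
        return position.equals(s.position)
            && orientation == s.orientation
            && dirt.equals(s.dirt);
    }

    @Override public int hashCode() {
        return Objects.hash(position, orientation, dirt);
    }
}
```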
|
|
|
Post by stephan on Feb 2, 2012 11:34:49 GMT -5
> Also, I don't understand the score calculations on the wiki page. Is it possible there is some kind of formatting issue (because of the DocuWiki syntax)?

Yes, there was. Sorry. I fixed it.

> After reading through the gdl file I gathered the following: ...

Don't be misled by the information in the gdl file. The score there is computed a bit differently than the costs of the actions (cost is roughly the same as negative score, and there is an initial score of >0 because there are no negative numbers). You don't have a score in your search problem. Each action has an associated cost, and the cost of a solution (i.e., a path from the initial state to a goal state) is the sum of the action costs. The costs of the actions are:
- 1 + 15*D, for TURN_OFF, if the robot is in the home location and there are D dirty cells left
- 100 + 15*D, for TURN_OFF, if the robot is not in the home location and there are D dirty cells left
- 5, for SUCK if the current location does not contain dirt
- 1, for SUCK if the current location contains dirt
- 1, for each other action (GO, TURN_RIGHT, TURN_LEFT, TURN_ON)
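These rules transcribe directly into a single cost function. The class, enum and parameter names below are mine, not from the assignment:

```java
// Action costs for the vacuum-cleaner world, as listed above.
class Costs {
    enum Action { TURN_OFF, SUCK, GO, TURN_RIGHT, TURN_LEFT, TURN_ON }

    static int cost(Action a, boolean atHome, boolean onDirt, int dirtyCellsLeft) {
        switch (a) {
            case TURN_OFF:
                // 1 + 15*D at home, 100 + 15*D elsewhere
                return (atHome ? 1 : 100) + 15 * dirtyCellsLeft;
            case SUCK:
                // cheap if it actually removes dirt, expensive otherwise
                return onDirt ? 1 : 5;
            default:
                // GO, TURN_RIGHT, TURN_LEFT, TURN_ON
                return 1;
        }
    }
}
```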
|
|
|
Post by heidar on Feb 2, 2012 16:46:15 GMT -5
Hi, I've implemented the DFS, BFS and Iterative Deepening Search algorithms using my State and Node representations. However, they're not completing their search before running out of memory. I looked at the initial state in vacuumcleaner_obstacles_1.gdl and found a solution in around 25 steps.
The branching factor of the state space is 5 (for most states, right?) and so my solution wouldn't be found until I've reached a depth of 25, having to go through a massive amount of nodes in the state space... Now, I haven't implemented Uniform Cost Search or A* yet, but I'm wondering if I'm modelling this in a wrong way or if there is no way to reach a goal using the DFS and BFS algorithms?
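To put a rough number on "a massive amount of nodes": with branching factor 5 and a solution at depth 25, the last layer of the search tree alone has 5^25 nodes, on the order of 3*10^17 (before any duplicate detection). A one-liner makes the magnitude concrete:

```java
import java.math.BigInteger;

// Worst-case node count at a given depth of a uniform search tree:
// b^d nodes on the deepest layer (exact, using BigInteger).
class TreeSize {
    static BigInteger nodesAtDepth(int branchingFactor, int depth) {
        return BigInteger.valueOf(branchingFactor).pow(depth);
    }

    public static void main(String[] args) {
        // prints 298023223876953125, i.e. about 3*10^17 nodes
        System.out.println(nodesAtDepth(5, 25));
    }
}
```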
|
|
|
Post by stephan on Feb 3, 2012 10:11:20 GMT -5
> Hi, I've implemented the DFS, BFS and Iterative Deepening Search algorithms using my State and Node representations. However they're not completing their search before running out of memory. I looked at the initial state in vacuumcleaner_obstacles_1.gdl and found a solution in around 25 steps. The branching factor of the state space is 5 (for most states, right?)

Yes, TURN_RIGHT, TURN_LEFT, TURN_OFF, GO and SUCK are always possible, once the agent is turned on. However, if you remove nonsense actions like sucking when there is no dirt or going against walls, you can probably reduce that to around 4 on average.

> and so my solution wouldn't be found until I've reached a depth of 25, having to go through a massive amount of nodes in the state space... Now, I haven't implemented Uniform Cost Search or A* yet, but I'm wondering if I'm modelling this in a wrong way or if there is no way to reach a goal using the DFS and BFS algorithms?

Most likely, your modelling is fine, but the algorithms are just not good enough for solving the problem. What do you know about DFS and BFS with respect to completeness, optimality, space complexity and time complexity? You already started to estimate the size of the search tree. Now, think of the properties of the algorithms and of your specific implementation. Try to estimate how long it will take and how much memory you will need to find a solution in the given environment, in the worst case and in the best case. Then you could maybe add some output (e.g., current search depth, number of expanded states, current size of the frontier, ...) to your code to see the progress. If the search doesn't finish, then just post the stats from after a fixed amount of time (say, 5 minutes). What is the problem of DFS and what is the problem of BFS with respect to the vacuum cleaner world? Is there anything one could do to fix it? Will this be better with Uniform Cost Search and/or A*?
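The suggested progress output can be hooked into the expansion loop with a small counter object. This is just a sketch under my own naming; call `onExpand` once per expanded node from whatever search loop you have:

```java
// Simple instrumentation for a search loop: counts expansions, tracks the
// largest frontier seen, and periodically prints progress.
class SearchStats {
    long expandedStates = 0;
    int maxFrontierSize = 0;

    void onExpand(int currentFrontierSize) {
        expandedStates++;
        maxFrontierSize = Math.max(maxFrontierSize, currentFrontierSize);
        if (expandedStates % 100_000 == 0) {
            System.out.println("expanded=" + expandedStates
                + " maxFrontier=" + maxFrontierSize);
        }
    }
}
```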
|
|
|
Post by Baldur on Feb 10, 2012 15:36:20 GMT -5
I managed to find a solution using Uniform Cost search. Here are the results:
State Expansions: 190, Maximum Frontier Size: 44, Path Cost to Solution: 43, Search Time: 13 ms, Final Score: 57
There is one problem though. I'm not really sure how I should keep track of the best path (I never implemented BFS in GGP last semester). It's easy when using DFS, because then you know that the most current path is the solution. But using BFS or Uniform Cost the path is broken during search. How should I do that?
What I'm doing now is when I find a solution I just put the current node as the "best node" in the parent and work my way to the first node. Then I simply play out the move that is connected to the best node. However, for some reason the "SUCK" moves are skipped! So what I do is I just play out the best nodes unless there is a "DIRT" percept, then I play a "SUCK" move. I think this is kind of a dirty solution...
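A common alternative to marking a "best node" in each parent: give every search node a link to its parent plus the move that produced it, and when you reach a goal, walk the chain backwards. No percept-based patching is needed; the moves fall out in order, SUCK included, as long as successor states are generated correctly. Field and method names below are illustrative, not from any particular framework:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// A search-tree node that remembers how it was reached.
class Node {
    final Node parent;  // null for the root
    final String move;  // action that produced this node (null for the root)

    Node(Node parent, String move) {
        this.parent = parent;
        this.move = move;
    }

    // Rebuild the move sequence from the root to this (goal) node by
    // following parent links and prepending each move.
    List<String> pathFromRoot() {
        Deque<String> moves = new ArrayDeque<>();
        for (Node n = this; n.parent != null; n = n.parent) {
            moves.addFirst(n.move);
        }
        return new ArrayList<>(moves);
    }
}
```

This works unchanged for BFS, Uniform Cost Search and A*, because each node carries its own history regardless of the order in which the frontier hands out nodes.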
|
|
|
Post by Baldur on Feb 10, 2012 16:20:19 GMT -5
I also found a solution using DFS:
State Expansions: 144, Maximum Frontier Size: 244, Path Cost to Solution: 125, Search Time: 12 ms, Final Score: 0
The search ends when it finds the first path to a goal state. The path is, however, too long to execute so the final score is zero. Should the search keep on going until it finds the shortest solution?
|
|
|
Post by Baldur on Feb 10, 2012 16:46:22 GMT -5
BFS:
State Expansions: 5208, Maximum Frontier Size: 457, Path Cost to Solution: 76, Search Time: 67 ms, Final Score: 57
Same problem here as in Uniform Cost Search: SUCK moves are somehow dismissed. Will look into it tomorrow.
|
|
|
Post by Baldur on Feb 11, 2012 5:34:26 GMT -5
I figured out what the problem was. I forgot to clone the HashMaps storing the positions of the dirt, so I was changing the HashMaps of the parent states and the dirt would magically disappear without a SUCK move.
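The bug in a nutshell: if parent and child states share one dirt collection, a SUCK in the child also "cleans" the parent. The fix is to copy before mutating. A minimal sketch (using a `Set` of positions for the dirt; the same idea applies to a HashMap, and a shallow copy suffices as long as the keys themselves are never mutated):

```java
import java.awt.Point;
import java.util.HashSet;
import java.util.Set;

// Successor generation for SUCK: copy the parent's dirt set, then remove
// the robot's cell from the copy only. The parent's set stays untouched.
class Successor {
    static Set<Point> afterSuck(Set<Point> parentDirt, Point robot) {
        Set<Point> childDirt = new HashSet<>(parentDirt); // copy, don't share
        childDirt.remove(robot);
        return childDirt;
    }
}
```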
|
|
|
Post by Baldur on Feb 11, 2012 6:00:00 GMT -5
I ran the searches again and the numbers changed a bit.
vacuumcleaner_obstacles_1.gdl
Uniform: State Expansions: 201, Maximum Frontier Size: 57, Path Cost to Solution: 48, Search Time: 15 ms, Final Score: 57
DFS: State Expansions: 144, Maximum Frontier Size: 244, Path Cost to Solution: 125, Search Time: 8 ms, Final Score: 0
BFS: State Expansions: 19939, Maximum Frontier Size: 2082, Path Cost to Solution: 76, Search Time: 198 ms, Final Score: 57
vacuumcleaner_obstacles_2.gdl
Uniform: State Expansions: 178, Maximum Frontier Size: 41, Path Cost to Solution: 48, Search Time: 15 ms, Final Score: 57
DFS: State Expansions: 148, Maximum Frontier Size: 250, Path Cost to Solution: 117, Search Time: 9 ms, Final Score: 0
BFS: State Expansions: 51506, Maximum Frontier Size: 8066, Path Cost to Solution: 76, Search Time: 475 ms, Final Score: 57
BFS got much more expensive after the changes I made. It's probably because before the changes, there were more states recognized as "already visited" because they were pointing to the same instance of the dirt positions hash map.
Will try to implement A* next.
|
|
|
Post by gudmundurs10 on Feb 11, 2012 9:03:24 GMT -5
We are having trouble with how much time it takes to complete a run of an algorithm and are wondering how people are storing things like the positions of dirt and obstacles. We are currently using lists and looping through them, but that seems to be a very inefficient solution.
|
|
|
Post by Baldur on Feb 11, 2012 11:13:56 GMT -5
I made classes and enums for basically everything. I have a class for position which has only two variables, x and y. I did this so that the same position would give the same hash value. I tried using int[], but int[]{1,2} is not equal to int[]{1,2}, so when you try to use int[] as a key in a HashMap you won't get the object you're trying to fetch. Then I have a HashMap for the dirt positions so I don't have to loop through anything. Hope this helps.
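The int[] problem Baldur describes comes from arrays inheriting identity-based `equals`/`hashCode` from `Object`: two arrays with the same contents are different keys. A tiny value class with both methods overridden fixes the lookup. This is my own sketch of such a class, not Baldur's actual code:

```java
// A position usable as a HashMap key: equal coordinates imply equal
// objects and equal hash codes, unlike int[].
class Position {
    final int x, y;

    Position(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override public boolean equals(Object o) {
        if (!(o instanceof Position)) return false;
        Position p = (Position) o;
        return x == p.x && y == p.y;
    }

    @Override public int hashCode() {
        return 31 * x + y; // standard two-field hash combination
    }
}
```

With this, `map.get(new Position(1, 2))` finds an entry stored under a different but equal `Position` instance, which is exactly what the int[] version failed to do.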
|
|
|
Post by gudmundurs10 on Feb 11, 2012 11:22:36 GMT -5
Could you elaborate on what you mean by "I did this so the same position would give the same hash value"? Are you using a hash function?
|
|