In this report, we will look at solving air cargo problems with planning searches. Planning solvers built on state-space search have been around since at least 1971, when STRIPS was first released.

Since then the field has seen many iterations; in this paper we will be using the Graphplan algorithm, made popular in 1995.

### Problem 1

Problem 1 involves loading two pieces of cargo, using two airplanes, between two airports.

Optimal Plan for Problem 1:

```
Load(C1, P1, SFO)
Fly(P2, JFK, SFO)
Fly(P1, SFO, JFK)
Unload(C1, P1, JFK)
```

This generated sequence came from our breadth_first_search (BFS), which also finished quickest, in just under 0.2 seconds. BFS remains the fastest of the guaranteed-optimal methods across all three problems. BFS is also both complete, in that it will find a solution if one exists, and guaranteed optimal when all step costs are equal.
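The mechanics behind BFS can be sketched as follows. This is a minimal illustration, not the project's actual implementation: `breadth_first_search` and `successors` are hypothetical names, and `successors(state)` is assumed to yield `(action, next_state)` pairs.

```python
from collections import deque

def breadth_first_search(initial, goal_test, successors):
    """Explore states level by level, so the first goal found
    lies at the end of a shortest plan (BFS optimality when all
    steps cost the same)."""
    if goal_test(initial):
        return []
    frontier = deque([(initial, [])])   # FIFO queue of (state, plan-so-far)
    explored = {initial}
    while frontier:
        state, plan = frontier.popleft()
        for action, nxt in successors(state):
            if nxt in explored:
                continue
            if goal_test(nxt):
                return plan + [action]
            explored.add(nxt)
            frontier.append((nxt, plan + [action]))
    return None  # frontier exhausted: no plan exists

# Toy example: move a token from 0 to 3 using +1 / +2 steps.
plan = breadth_first_search(
    0,
    lambda s: s == 3,
    lambda s: [("+1", s + 1), ("+2", s + 2)] if s < 3 else [],
)
print(plan)  # prints ['+1', '+2']
```

Because every node at depth d is generated before any node at depth d+1, the first plan returned is as short as any plan that exists.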

As you can see below, several search methods were able to generate optimal plans. Our depth-first searches (DFS), however, did not, though they did finish quickly! Depth-first search is neither complete (it is not guaranteed to find a solution in infinite state spaces) nor optimal (it is not guaranteed to find an optimal plan), but it will be included in all tables for completeness.

| Search | Expansions | Goal Tests | New Nodes | Plan Length | Time (s) | Optimal |
| --- | --- | --- | --- | --- | --- | --- |
| astar_search-h1 | 55 | 57 | 224 | 6 | 0.24 | Yes |
| astar_search-h_ignore_preconditions | 43 | 45 | 180 | 6 | 0.35 | Yes |
| astar_search-h_pg_levelsum | 11 | 13 | 50 | 6 | 3.30 | Yes |
| breadth_first_search | 43 | 56 | 180 | 6 | 0.19 | Yes |
| breadth_first_tree_search | 1458 | 1459 | 5960 | 6 | 3.37 | Yes |
| depth_first_graph_search | 21 | 22 | 84 | 20 | 0.12 | No |
| depth_limited_search | 101 | 271 | 414 | 50 | 0.43 | No |
| greedy_best_first_graph_search-h1 | 7 | 9 | 28 | 6 | 0.04 | Yes |
| recursive_best_first_search-h1 | 4229 | 4230 | 17023 | 6 | 8.95 | Yes |
| uniform_cost_search | 55 | 57 | 224 | 6 | 0.24 | Yes |

### Problem 2

Problem 2 expands both our cargo pieces and airplanes to three, with three airports.

Optimal Plan for Problem 2:

```
Load(C1, P1, SFO)
Fly(P1, SFO, JFK)
Fly(P2, JFK, SFO)
Fly(P3, ATL, SFO)
Unload(C1, P1, JFK)
```

This solution comes to us from our A\*-levelsum algorithm. While this algorithm boasts the fewest new nodes by a large margin (85% fewer than the next-lowest algorithm), it also finishes painfully slowly. Levelsum guides the search using the sum of the planning-graph levels at which each goal literal first appears. However, the slowdown might be due to the implementation rather than the algorithm itself.
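The idea can be sketched roughly as follows. This is a simplification of the real heuristic, assuming actions are reduced to `(preconditions, effects)` pairs of literals and ignoring delete effects and mutexes entirely; `h_levelsum` and the literal strings are illustrative, not the project's actual PlanningGraph API.

```python
def h_levelsum(state, goals, actions):
    """Grow a relaxed planning graph level by level, note the first
    level at which each goal literal appears, and return the sum of
    those levels as the heuristic estimate."""
    level, literals = 0, set(state)
    first_seen = {g: 0 for g in goals if g in literals}
    while len(first_seen) < len(goals):
        new = set(literals)
        for pre, eff in actions:
            if pre <= literals:          # action applicable at this level
                new |= eff               # relaxed: effects only accumulate
        if new == literals:              # graph levelled off: goal unreachable
            return float("inf")
        level += 1
        literals = new
        for g in goals:
            if g in literals and g not in first_seen:
                first_seen[g] = level
    return sum(first_seen.values())

# Tiny illustration with one cargo and one plane (hypothetical literals):
acts = [
    (frozenset({"At(C1,SFO)", "At(P1,SFO)"}), frozenset({"In(C1,P1)"})),
    (frozenset({"At(P1,SFO)"}), frozenset({"At(P1,JFK)"})),
    (frozenset({"In(C1,P1)", "At(P1,JFK)"}), frozenset({"At(C1,JFK)"})),
]
print(h_levelsum({"At(C1,SFO)", "At(P1,SFO)"}, {"At(C1,JFK)"}, acts))  # prints 2
```

Each call rebuilds the graph from the node's state, which hints at why the heuristic is expensive per node even though it prunes the search dramatically.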

Our other A\* search, A\*-ignore-preconditions, works by relaxing the problem: it drops the preconditions from all actions. This keeps the heuristic admissible, since it never overestimates the true plan length. From *Artificial Intelligence: A Modern Approach*: "There are two ways we can relax this problem to make it easier: by adding more edges to the graph, making it strictly easier to find a path, or by grouping multiple nodes together, forming an abstraction of the state space that has fewer states and thus is easier to search."

A\*-ignore-preconditions uses the first approach of adding more edges to the graph.
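Under the further simplifying assumption that each action achieves at most one goal literal, the relaxed heuristic reduces to counting unmet goals, since with no preconditions any unmet goal can be achieved in one step. A minimal sketch (`h_ignore_preconditions` is an illustrative name, not the project's exact signature):

```python
def h_ignore_preconditions(state, goals):
    """With preconditions dropped, each unmet goal literal costs at
    least one action, so the count of unmet goals is an admissible
    lower bound on the remaining plan length."""
    return sum(1 for g in goals if g not in state)

print(h_ignore_preconditions({"At(C1,SFO)"}, {"At(C1,JFK)", "At(C2,SFO)"}))  # prints 2
```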

| Search | Expansions | Goal Tests | New Nodes | Plan Length | Time (s) | Optimal |
| --- | --- | --- | --- | --- | --- | --- |
| astar_search-h1 | 4852 | 4854 | 44030 | 9 | 168.62 | Yes |
| astar_search-h_ignore_preconditions | 1548 | 1550 | 14214 | 9 | 86.53 | Yes |
| astar_search-h_pg_levelsum | 86 | 88 | 841 | 9 | 448.41 | Yes |
| breadth_first_search | 3343 | 4609 | 30509 | 9 | 78.54 | Yes |
| breadth_first_tree_search | n/a | n/a | n/a | n/a | n/a | Did not finish |
| depth_first_graph_search | 624 | 625 | 5602 | 619 | 17.43 | No |
| depth_limited_search | n/a | n/a | n/a | n/a | n/a | Did not finish |
| greedy_best_first_graph_search-h1 | 990 | 992 | 8910 | 15 | 37.05 | No |
| recursive_best_first_search-h1 | n/a | n/a | n/a | n/a | n/a | Did not finish |
| uniform_cost_search | 4852 | 4854 | 44030 | 9 | 167.80 | Yes |

### Problem 3

Problem 3 expands our cargo pieces and airports to four, but reduces our planes to only two. Problem 3 is the closest to a real-world problem, so we will use it to finish our report and draw some final conclusions about our algorithms.

Optimal Plan for Problem 3:

```
Load(C1, P1, SFO)
Fly(P1, SFO, ATL)
Fly(P2, JFK, ORD)
Fly(P2, ORD, SFO)
Fly(P1, ATL, JFK)
Unload(C1, P1, JFK)
```

This optimal plan comes from our uniform_cost_search, which is guaranteed to be complete and optimal provided every step cost is at least some ε > 0, so that path cost always increases along a path. Our uniform_cost_search actually records the highest Goal Tests count of every algorithm tested on Problem 3. Despite this, it finishes around the middle of the pack time-wise, making it a good choice among the guaranteed-optimal algorithms for balancing memory and runtime.
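Uniform-cost search can be sketched as a best-first search ordered purely by path cost. As before, this is an illustrative sketch, not the project's implementation; `successors(state)` is assumed to yield `(action, next_state, step_cost)` triples.

```python
import heapq
from itertools import count

def uniform_cost_search(initial, goal_test, successors):
    """Always expand the cheapest frontier node, so the first goal
    popped has minimal path cost. Optimality and completeness rely on
    every step cost being at least some epsilon > 0."""
    tie = count()  # tie-breaker so heapq never compares states or plans
    frontier = [(0, next(tie), initial, [])]
    best_cost = {initial: 0}
    while frontier:
        cost, _, state, plan = heapq.heappop(frontier)
        if goal_test(state):
            return cost, plan
        if cost > best_cost.get(state, float("inf")):
            continue  # stale queue entry: a cheaper path was found later
        for action, nxt, step in successors(state):
            new_cost = cost + step
            if new_cost < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = new_cost
                heapq.heappush(frontier, (new_cost, next(tie), nxt, plan + [action]))
    return None

# Toy weighted graph: the direct A->C edge (cost 4) loses to A->B->C (cost 2).
graph = {"A": [("A->B", "B", 1), ("A->C", "C", 4)],
         "B": [("B->C", "C", 1)],
         "C": []}
print(uniform_cost_search("A", lambda s: s == "C", lambda s: graph[s]))
# prints (2, ['A->B', 'B->C'])
```

Note that the goal test happens when a node is popped, not when it is generated; testing on generation would let the expensive direct edge win, which is exactly the bug UCS avoids relative to naive BFS on weighted graphs.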

| Search | Expansions | Goal Tests | New Nodes | Plan Length | Time (s) | Optimal |
| --- | --- | --- | --- | --- | --- | --- |
| astar_search-h1 | 18235 | 18235 | 159716 | 12 | 1280.23 | Yes |
| astar_search-h_ignore_preconditions | 5152 | 5154 | 45972 | 12 | 533.18 | Yes |
| astar_search-h_pg_levelsum | 404 | 406 | 3718 | 12 | 3129.27 | Yes |
| breadth_first_search | 14663 | 18098 | 129631 | 12 | 522.54 | Yes |
| breadth_first_tree_search | n/a | n/a | n/a | n/a | n/a | Did not finish |
| depth_first_graph_search | 408 | 409 | 3364 | 392 | 10.85 | No |
| depth_limited_search | n/a | n/a | n/a | n/a | n/a | Did not finish |
| greedy_best_first_graph_search-h1 | 5614 | 5616 | 49429 | 22 | 513.76 | No |
| recursive_best_first_search-h1 | n/a | n/a | n/a | n/a | n/a | Did not finish |
| uniform_cost_search | 18235 | 18237 | 159716 | 12 | 1263.06 | Yes |

# Conclusions

In this paper we looked at how different searches solve our planning problems, comparing informed searches, where we provide heuristics to guide the search, against uninformed searches, which traverse the state space without guidance.

As we have seen, our informed A\* searches, while optimal and complete when the heuristic is admissible, often took much longer to finish than our uninformed searches. Part of this may be due to our implementation, but computing a heuristic at every node also has a real cost, so informed searches can simply require more computation per expansion.

Some uninformed searches generated incredibly bad plans, like depth_first_graph_search, which produced a 392-move plan for a problem that only required 12. Others, like breadth_first_search, give us both guaranteed optimality and completeness, all with the speed of an uninformed search. The downside to the BFS family of searches is, of course, memory: a breadth-first search typically expands O(b^d) nodes, where b is the branching factor and d is the depth of the shallowest solution, and all of them, frontier and explored set alike, must reside in memory.
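A quick back-of-the-envelope calculation shows how fast that memory bill grows (the branching factor of 10 below is an arbitrary illustrative value, not measured from our problems):

```python
def bfs_nodes(b, d):
    """Total nodes in a uniform tree of branching factor b down to
    depth d: sum of b**k for levels 0..d. A rough proxy for BFS's
    worst-case memory footprint."""
    return sum(b ** k for k in range(d + 1))

# A branching factor of 10 with a 6-step optimal plan already means
# holding on the order of a million nodes in memory:
print(bfs_nodes(10, 6))  # prints 1111111
```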

In the end, the choice of search algorithm should depend on the problem: how large it is, how much memory is available, and how quickly the solution needs to be generated.