I was comparing the efficiency of breadth-first search (BFS) and the Floyd–Warshall algorithm for solving the all-pairs shortest path problem in Python. Although the complexity of Floyd–Warshall is much larger than that of BFS, BFS seems to be taking more time.
def BFS_ForNode(query_node, parents):
    result = {}
    queue = []
    queue.append((query_node, 0))
    while queue:
        node, dist = queue.pop(0)
        result[node] = dist
        if node in parents:
            for parent in parents[node]:
                queue_members = [x[0] for x in queue]
                if parent not in result and parent not in queue_members:
                    queue.append((parent, dist + 1))
    return result
def BFS(Graph, vertix_num):
    for i in range(vertix_num):
        BFS_ForNode(i, Graph)
def floydWarshall(graph, V):
    # copy the matrix; the original used nested map(), which is lazy
    # (and not subscriptable) in Python 3
    dist = [row[:] for row in graph]
    for k in range(V):
        for i in range(V):
            for j in range(i):
                dist[i][j] = min(dist[i][j],
                                 dist[i][k] + dist[k][j])
                dist[j][i] = dist[i][j]
    return dist
Is there some data structure I am using for BFS that makes it very slow, and is there any way to optimize it?
Assuming you start the search by calling BFS(), your BFS routine actually takes more than linear time. It treats every vertex in the graph as a source vertex and restarts a BFS from each one, which makes the total time quadratic even if the body of the loop took constant time (it doesn't). Additionally, queue.pop(0) on a Python list shifts every remaining element and is linear in the queue length, and queue_members = [x[0] for x in queue] rebuilds and scans a list for every edge examined, which is also linear in the worst case. Together these overheads make your worst-case runtime approximately equal to, or even greater than, Floyd–Warshall's.
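A minimal sketch of one way to remove those per-edge linear costs, assuming the same parents-dict graph representation as in the question: use collections.deque for O(1) pops from the front, and mark vertices as visited by recording their distance in the result dict at enqueue time, so the membership test is an O(1) dict lookup instead of a scan of the queue.

```python
from collections import deque

def bfs_for_node(query_node, parents):
    """BFS distances from query_node over a parents-dict graph."""
    result = {query_node: 0}      # doubles as the visited set: O(1) lookups
    queue = deque([query_node])   # deque.popleft() is O(1), list.pop(0) is O(n)
    while queue:
        node = queue.popleft()
        dist = result[node]
        for parent in parents.get(node, ()):
            if parent not in result:      # O(1) dict check replaces the queue scan
                result[parent] = dist + 1 # record distance when enqueuing
                queue.append(parent)
    return result
```

With this version each vertex is enqueued at most once and each edge is inspected once, so a single BFS is O(V + E) and running it from every source is O(V * (V + E)), which for sparse graphs is well below Floyd–Warshall's O(V^3).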