Breadth-first search - a BFS question from studying Web Scraping with Python
伊谢尔伦 2017-04-17 17:22:37

A question: while working through the book Web Scraping with Python, I reached the part (p. 128) where, to solve the Six Degrees of Wikipedia problem, the author searches through crawled data stored in MySQL.
The data is recorded in two tables:

pages:

+---------+--------------+------+-----+-------------------+----------------+
| Field   | Type         | Null | Key | Default           | Extra          |
+---------+--------------+------+-----+-------------------+----------------+
| id      | int(11)      | NO   | PRI | NULL              | auto_increment |
| url     | varchar(255) | NO   |     | NULL              |                |
| created | timestamp    | NO   |     | CURRENT_TIMESTAMP |                |
+---------+--------------+------+-----+-------------------+----------------+

links:

+------------+-----------+------+-----+-------------------+----------------+
| Field      | Type      | Null | Key | Default           | Extra          |
+------------+-----------+------+-----+-------------------+----------------+
| id         | int(11)   | NO   | PRI | NULL              | auto_increment |
| fromPageID | int(11)   | YES  |     | NULL              |                |
| toPageID   | int(11)   | YES  |     | NULL              |                |
| created    | timestamp | NO   |     | CURRENT_TIMESTAMP |                |
+------------+-----------+------+-----+-------------------+----------------+
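
In this schema, each row of pages is a node and each row of links is a directed edge from fromPageID to toPageID; the search below works purely on page ids. For reference, something like this (using the cur cursor created in the code below; getUrl is just an illustrative helper, not from the book) turns an id back into a readable URL:

def getUrl(pageId):
    # Look up the URL stored for a page id; None if the id is unknown
    cur.execute("SELECT url FROM pages WHERE id = %s", (pageId,))
    row = cur.fetchone()
    return row[0] if row else None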

The search algorithm is as follows:

import pymysql

conn = pymysql.connect(host = '127.0.0.1', user = 'root', passwd = 'pai433832',
        db = 'mysql', charset = 'utf8')
cur = conn.cursor()
cur.execute("USE wikipedia")

class SolutionFound(RuntimeError):
    def __init__(self, message):
        super().__init__(message)  # keep str(e) and e.args consistent
        self.message = message

def getLinks(fromPageId):
    cur.execute("SELECT toPageId FROM links WHERE fromPageID = %s",
            (fromPageId))
    if cur.rowcount == 0:
        return None
    else:
        return [x[0] for x in cur.fetchall()]

def constructDict(currentPageId):
    links = getLinks(currentPageId)
    if links:
        # Build one fresh empty dict per link; the book's
        # dict(zip(links, [{}] * len(links))) would make every value
        # reference the same dict object
        return {link: {} for link in links}
    return {}

# The link tree may either be empty or contain multiple links
def searchDepth(targetPageId, currentPageId, linkTree, depth):
#    print(str(depth) + '\t' + str(currentPageId))        # debug
    if depth == 0:
        # Stop recursing and return, regardless
        return linkTree
    if not linkTree:
        linkTree = constructDict(currentPageId)
        if not linkTree:
            # No links found. Cannot continue at this node
            return {}
    if targetPageId in linkTree.keys():
        print("TARGET " + str(targetPageId) + " FOUND!")
        raise SolutionFound("PAGE: " + str(currentPageId))

    for branchKey, branchValue in linkTree.items():
        try:
            # Recurse here to continue building the tree
            linkTree[branchKey] = searchDepth(targetPageId, branchKey,
                    branchValue, depth - 1)
        except SolutionFound as e:
            print(e.message)
            raise SolutionFound("PAGE: " + str(currentPageId))
    return linkTree

try:
    searchDepth(14000, 1, {}, 4)
    print("No solution found")
except SolutionFound as e:
    print(e.message)

The author says this uses breadth-first search, but since searchDepth calls itself recursively, the traversal is necessarily depth-first. So is the code above in fact depth-first search rather than breadth-first search? Also, is breadth-first search really the better choice for this problem? And if the original code is not breadth-first, how should a breadth-first version be written?


Answers (1)
大家讲道理

The code is indeed depth-first search (DFS).

To turn it into breadth-first search (BFS), keep the URLs waiting to be visited in a queue: each step, take a URL from the front of the queue, and append the newly extracted set of URLs to the back of the queue.
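
A minimal sketch of that idea, reusing the getLinks function from the code above (bfsSearch and the parents map are names I made up for illustration). Since BFS explores level by level, the first chain it finds is also a shortest one, which is exactly what Six Degrees of Wikipedia asks for:

from collections import deque

def bfsSearch(targetPageId, startPageId):
    # parents maps each discovered page to its predecessor; it doubles as
    # the visited set and lets us reconstruct the chain at the end
    parents = {startPageId: None}
    queue = deque([startPageId])
    while queue:
        currentPageId = queue.popleft()        # take from the front
        for linkId in getLinks(currentPageId) or []:
            if linkId in parents:              # already seen, skip
                continue
            parents[linkId] = currentPageId
            if linkId == targetPageId:
                # Walk the parents map backwards to build the id chain
                chain = [linkId]
                while parents[chain[-1]] is not None:
                    chain.append(parents[chain[-1]])
                return list(reversed(chain))
            queue.append(linkId)               # add to the back
    return None                                # target unreachable

print(bfsSearch(14000, 1))

Each page id is enqueued at most once thanks to the parents map, so the search terminates even though the link graph is full of cycles.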

There is no inherent winner between depth-first and breadth-first search. If the target sits at a shallow level, BFS finds a solution quickly; if the target lies very deep, BFS must expand a great many nodes first, and DFS may actually reach a solution faster.
