Given a string of words concatenated without spaces, this article presents an efficient algorithm for segmenting it into individual words, guided by the words' relative frequencies.
Input: "tableapplechairtablecupboard..."
Output: ["table", "apple", "chair", "table", ["cupboard", ["cup", "board"]], ...]
Rather than greedily accepting the first or longest dictionary match at each position, the algorithm weighs candidate words by frequency. Assuming words occur independently and their frequencies follow Zipf's law, it uses dynamic programming to find the most likely sequence of words overall.
<code class="python">from math import log words = open("words-by-frequency.txt").read().split() wordcost = dict((k, log((i+1)*log(len(words)))) for i,k in enumerate(words)) maxword = max(len(x) for x in words) def infer_spaces(s): cost = [0] for i in range(1,len(s)+1): c,k = best_match(i) cost.append(c) out = [] i = len(s) while i>0: c,k = best_match(i) out.append(s[i-k:i]) i -= k return " ".join(reversed(out)) def best_match(i): candidates = enumerate(reversed(cost[max(0, i-maxword):i])) return min((c + wordcost.get(s[i-k-1:i], 9e999), k+1) for k,c in candidates) s = 'thumbgreenappleactiveassignmentweeklymetaphor' print(infer_spaces(s))</code>
The algorithm relies on a dictionary file listing words in descending order of frequency. Under Zipf's law, the word at rank r in a vocabulary of N words has probability roughly 1/(r log N), so its cost is taken as the negative log of that probability, log(r · log N). Words not in the dictionary receive an effectively infinite cost (9e999 in the code), so they are chosen only when no dictionary words fit.
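To make the cost model concrete, here is a minimal sketch of the Zipf-based cost; the 125,000-word vocabulary size in the example calls is purely illustrative:

```python
from math import log

# Zipf's law: the word at rank r (1-based) in a vocabulary of N words has
# probability roughly 1/(r * log(N)). Using cost = -log(probability)
# turns "most probable segmentation" into "lowest total cost".
def zipf_cost(rank, n_words):
    return log(rank * log(n_words))

# Words absent from the dictionary get a prohibitively large cost,
# so they are used only when no dictionary words fit.
UNSEEN_COST = 9e999  # effectively infinity

# Example with a hypothetical 125,000-word list: a top-ranked word is
# far cheaper than a rare one.
print(zipf_cost(1, 125_000))        # ~2.46
print(zipf_cost(100_000, 125_000))  # ~13.98
```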
For each prefix length i, the algorithm considers every dictionary word that could end at position i and selects the candidate minimizing cost[i-k] + wordcost(word), where k is the candidate's length. Because the optimal cost of each prefix depends only on shorter prefixes, dynamic programming yields the globally most likely segmentation with roughly n · maxword dictionary lookups.
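The recurrence can be seen in isolation in the following self-contained sketch, which uses a tiny hand-assigned cost table (the toy words and cost values are illustrative, not drawn from a real frequency list):

```python
# Toy dictionary mapping words to costs; real code derives these from
# frequency rank as shown above.
toy_cost = {"cup": 4.0, "board": 5.0, "cupboard": 8.5, "up": 3.5}
MAXWORD = max(len(w) for w in toy_cost)

def segment(s):
    # cost[i] = cheapest total cost of segmenting the first i characters.
    # Assumes at least one full segmentation of s exists.
    cost = [0.0] + [float("inf")] * len(s)
    back = [0] * (len(s) + 1)  # back[i] = length of the last word ending at i
    for i in range(1, len(s) + 1):
        for k in range(1, min(i, MAXWORD) + 1):
            w = s[i - k:i]
            if w in toy_cost and cost[i - k] + toy_cost[w] < cost[i]:
                cost[i] = cost[i - k] + toy_cost[w]
                back[i] = k
    # Backtrack from the end to recover the best segmentation.
    out, i = [], len(s)
    while i > 0:
        out.append(s[i - back[i]:i])
        i -= back[i]
    return list(reversed(out))

print(segment("cupboard"))  # ['cupboard']: cost 8.5 beats 'cup'+'board' (9.0)
```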
For very long inputs, the algorithm can be made faster and more memory-efficient by splitting the text into blocks and segmenting each block independently. Handled carefully at the block boundaries, this reduces peak memory usage without significantly affecting accuracy.
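A minimal sketch of this block-wise strategy, reusing the infer_spaces function defined above; the 10,000-character block size and the carry-over heuristic for words that straddle block boundaries are illustrative choices, not part of the original algorithm:

```python
def infer_spaces_blocked(s, block_size=10_000):
    # Segment s in fixed-size blocks. A block's final word may be cut
    # off mid-word, so it is carried over and re-segmented together
    # with the next block.
    words, carry = [], ""
    for start in range(0, len(s), block_size):
        chunk = carry + s[start:start + block_size]
        segmented = infer_spaces(chunk).split()
        if start + block_size < len(s):
            carry = segmented.pop() if segmented else ""
        else:
            carry = ""
        words.extend(segmented)
    return " ".join(words)
```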