<p>You can use the nltk library:</p>
<pre><code>from nltk.tokenize import word_tokenize  # may require a one-time nltk.download('punkt')

def sample(string, keyword, n):
    """Return the n tokens that follow each occurrence of keyword."""
    output = []
    word_list = word_tokenize(string.lower())
    indices = [i for i, x in enumerate(word_list) if x == keyword]
    for index in indices:
        output.append(word_list[index + 1:index + n + 1])
    return output

>>> print(sample(string, 'populations', 3))
[['of', '20,000', 'people']]
>>> print(sample(string, 'tables', 3))
[['consist', 'of', '59'], ['tabulated', 'on', 'the']]
</code></pre>