<p>OK, in implementing Warren's solution I had to make a few changes, which I've listed below. It's essentially the same, so he gets all the credit, but the realities of numerical approximation with numpy and scipy required some massaging, which I think will be helpful to anyone else trying to do this in the future. I also changed the variable names to be super noob-friendly.</p>
<p>Let me know if I got anything wrong, or if you have further suggestions for improvement (e.g. for speed).</p>
<pre><code>import numpy as np
import scipy.linalg

# in this case my Markov model is a weighted directed graph, so convert that nx.graph (G) into its transition matrix
M = transitionMatrix( G )
# create a list of the left eigenvalues and a separate array of the left eigenvectors
theEigenvalues, leftEigenvectors = scipy.linalg.eig(M, right=False, left=True)
# for stationary distribution the eigenvalues and vectors are always real, and this speeds it up a bit
theEigenvalues = theEigenvalues.real
leftEigenvectors = leftEigenvectors.real
# set how close to 1 is acceptable as being an eigenvalue of exactly 1...1e-15 was too strict to find one of the actual eigenvalues
tolerance = 1e-10
# create a filter to collect the eigenvalues that are near enough to 1
mask = abs(theEigenvalues - 1) < tolerance
# apply that filter
theEigenvalues = theEigenvalues[mask]
# keep only the eigenvectors whose eigenvalue is (approximately) 1
leftEigenvectors = leftEigenvectors[:, mask]
# eig returns eigenvectors with an arbitrary overall sign; flip any all-negative columns positive so the zeroing step below doesn't wipe them out
leftEigenvectors = leftEigenvectors * np.sign(leftEigenvectors.sum(axis=0))
# convert all the tiny and negative values to zero to isolate the actual stationary distributions
leftEigenvectors[leftEigenvectors < tolerance] = 0
# normalize each distribution by the sum of the eigenvector columns
attractorDistributions = leftEigenvectors / leftEigenvectors.sum(axis=0, keepdims=True)
# this multiplies the vectors back through M to check that they really are left eigenvectors, but it isn't needed in normal use
#attractorDistributions = np.dot(attractorDistributions.T, M).T
# convert the column vectors into row vectors (lists) for each attractor (the standard output for this kind of analysis)
attractorDistributions = attractorDistributions.T
# for each state, its approximate stationary probability within its own attractor (zero for states outside every attractor), e.g. for graph coloring
theSteadyStates = np.sum(attractorDistributions, axis=0)
</code></pre>
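<p>As a quick sanity check, here is a minimal, self-contained sketch of the same left-eigenvector approach on a made-up 2-state chain (the toy matrix and the exact answer are mine, not from the model above), where the stationary distribution can be worked out by hand as (5/6, 1/6):</p>

```python
import numpy as np
import scipy.linalg

# toy row-stochastic transition matrix: stay in state 0 with p=0.9, etc.
M = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# left eigenvectors satisfy w @ M = eigenvalue * w
theEigenvalues, leftEigenvectors = scipy.linalg.eig(M, right=False, left=True)
theEigenvalues = theEigenvalues.real
leftEigenvectors = leftEigenvectors.real

# pick the (unique, for this chain) eigenvector with eigenvalue 1
mask = abs(theEigenvalues - 1) < 1e-10
pi = leftEigenvectors[:, mask][:, 0]

# dividing by the sum normalizes and also fixes eig's arbitrary overall sign
pi = pi / pi.sum()
print(pi)  # ≈ [0.8333 0.1667], i.e. the hand-computed (5/6, 1/6)
```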
<p>Analysis of the generated Markov model found one attractor (of three) with a steady-state distribution of 0.19835218 and 0.80164782, versus the mathematically exact values of 0.2 and 0.8. So that's off by more than 0.1%, which is a big error for science. It's not a real problem, though, because if precision matters, then once the individual attractors have been identified, the behaviour inside each attractor can be analysed more precisely using a subset of the matrix.</p>
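<p>For completeness, here is one way that per-attractor refinement might look. This is a sketch, not code from the answer: the function name, the toy matrix, and the choice of states are all illustrative. It assumes the attractor is closed (so its submatrix is itself row-stochastic) and solves the stationarity equations directly with a least-squares solve, which sidesteps the eigensolver tolerance issue entirely:</p>

```python
import numpy as np

def attractor_stationary(M, states):
    # restrict the full transition matrix to the attractor's states
    P = M[np.ix_(states, states)]
    n = len(states)
    # stationarity: pi @ P = pi, i.e. (P.T - I) @ pi = 0, plus sum(pi) = 1
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    # the stacked system is consistent, so least squares recovers pi exactly
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# hypothetical 3-state chain whose only attractor is states {1, 2}
M = np.array([[0.2, 0.4, 0.4],
              [0.0, 0.5, 0.5],
              [0.0, 0.25, 0.75]])
print(attractor_stationary(M, [1, 2]))  # ≈ [0.3333 0.6667]
```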