ValueError: arange: cannot compute length

Published 2024-09-28 03:20:37


I need to compute metrics for a saliency map. The inputs are a predicted saliency map (a heatmap) and a fixation map (a binary map).

The function worked fine until I added the following post-processing:

predictions[predictions < 0] = 0
scaled_predictions = (predictions - np.min(predictions)) / (np.max(predictions) - np.min(predictions))
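Note that this min-max scaling divides by (max - min). If the clipping step leaves the array constant, for instance when every prediction was negative and got set to 0, that denominator is 0 and the 0/0 division produces NaN. A minimal sketch of this failure mode, using a made-up predictions array:

```python
import numpy as np

# Hypothetical predictions that are all negative,
# so the clipping step zeroes out the entire array.
predictions = np.array([-0.5, -0.2, -0.9])
predictions[predictions < 0] = 0

# max == min == 0, so the denominator is 0 and 0/0 yields NaN.
with np.errstate(invalid='ignore', divide='ignore'):
    scaled = (predictions - np.min(predictions)) / (np.max(predictions) - np.min(predictions))

print(scaled)                  # [nan nan nan]
print(np.isnan(scaled).any())  # True
```

Any such NaNs then propagate into AUC_Borji, which is exactly where the traceback below points.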

Now I get this error and don't know how to fix it:

<ipython-input-17-4d18bbcabb28> in run_metrics(model_type, trained_weights, vid_number)
---> 72             auc_borji_score.append( AUC_Borji(scaled_predictions[i], fixation[i]) )

<ipython-input-5-0f936e3c9c4f> in AUC_Borji(saliency_map, fixation_map, n_rep, step_size, rand_sampler)
    210     for rep in range(n_rep):
--> 211         thresholds = np.r_[0:np.max(np.r_[S_fix, S_rand[:,rep]]):step_size][::-1]
    212         tp = np.zeros(len(thresholds)+2)
    213         fp = np.zeros(len(thresholds)+2)

/usr/local/lib/python3.7/dist-packages/numpy/lib/index_tricks.py in __getitem__(self, key)
    349                     newobj = linspace(start, stop, num=size)
    350                 else:
--> 351                     newobj = _nx.arange(start, stop, step)
    352                 if ndmin > 1:
    353                     newobj = array(newobj, copy=False, ndmin=ndmin)

ValueError: arange: cannot compute length

The full AUC_Borji code:

def AUC_Borji(saliency_map, fixation_map, n_rep=100, step_size=0.1, rand_sampler=None):
    '''
    This measures how well the saliency map of an image predicts the ground truth human fixations on the image.
    ROC curve created by sweeping through threshold values at fixed step size
    until the maximum saliency map value.
    True positive (tp) rate corresponds to the ratio of saliency map values above threshold
    at fixation locations to the total number of fixation locations.
    False positive (fp) rate corresponds to the ratio of saliency map values above threshold
    at random locations to the total number of random locations
    (as many random locations as fixations, sampled uniformly from ALL IMAGE PIXELS),
    averaging over n_rep number of selections of random locations.
    Parameters
    ----------
    saliency_map : real-valued matrix
    fixation_map : binary matrix
        Human fixation map.
    n_rep : int, optional
        Number of repeats for random sampling of non-fixated locations.
    step_size : float, optional
        Step size for sweeping through saliency map.
    rand_sampler : callable
        S_rand = rand_sampler(S, F, n_rep, n_fix)
        Sample the saliency map at random locations to estimate false positive.
        Return the sampled saliency values, S_rand.shape=(n_fix,n_rep)
    Returns
    -------
    AUC : float, between [0,1]
    '''
    saliency_map = np.array(saliency_map, copy=False)
    fixation_map = np.array(fixation_map, copy=False) > 0.5
    # If there are no fixation to predict, return NaN
    if not np.any(fixation_map):
        print('no fixation to predict')
        return np.nan
    # Make the saliency_map the size of the fixation_map
    if saliency_map.shape != fixation_map.shape:
        saliency_map = resize(saliency_map, fixation_map.shape, order=3, mode='nearest')
    # Normalize saliency map to have values between [0,1]
    saliency_map = normalize(saliency_map, method='range')

    S = saliency_map.ravel()
    F = fixation_map.ravel()
    S_fix = S[F] # Saliency map values at fixation locations
    n_fix = len(S_fix)
    n_pixels = len(S)
    # For each fixation, sample n_rep values from anywhere on the saliency map
    if rand_sampler is None:
        r = random.randint(0, n_pixels, [n_fix, n_rep])
        S_rand = S[r] # Saliency map values at random locations (including fixated locations!? underestimated)
    else:
        S_rand = rand_sampler(S, F, n_rep, n_fix)
    # Calculate AUC per random split (set of random locations)
    auc = np.zeros(n_rep) * np.nan
    for rep in range(n_rep):
        thresholds = np.r_[0:np.max(np.r_[S_fix, S_rand[:,rep]]):step_size][::-1]
        tp = np.zeros(len(thresholds)+2)
        fp = np.zeros(len(thresholds)+2)
        tp[0] = 0; tp[-1] = 1
        fp[0] = 0; fp[-1] = 1
        for k, thresh in enumerate(thresholds):
            tp[k+1] = np.sum(S_fix >= thresh) / float(n_fix)
            fp[k+1] = np.sum(S_rand[:,rep] >= thresh) / float(n_fix)
        auc[rep] = np.trapz(tp, fp)
    return np.mean(auc) # Average across random splits

Since the post-processing should only change the value of each pixel, not the number of pixels, I don't understand why this error occurs.


2 Answers

According to the traceback, the error occurs in

np.r_[0:np.max(np.r_[S_fix, S_rand[:,rep]]):step_size]

np.r_ uses np.arange to convert the slice into an array of numbers.

The parts of the problematic line are:

np.r_[S_fix, S_rand[:,rep]]   # What is `S_fix`?  `S_rand`?
np.max(_)
np.r_[0:_:step_size]          # this uses `np.arange`

A web search and some experimentation show that np.arange(np.nan) produces this error message:

ValueError: arange: cannot compute length
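This is easy to reproduce: a NaN bound makes the slice length undefined, so np.arange (which np.r_ calls internally, as the traceback shows) raises a ValueError rather than returning an array. A quick check (the exact message wording can vary between NumPy versions):

```python
import numpy as np

# A NaN stop value makes the length of the range undefined,
# so np.r_ (via np.arange) raises ValueError.
try:
    np.r_[0:np.nan:0.1]
    print('no error')
except ValueError as e:
    print('ValueError:', e)  # e.g. "arange: cannot compute length"
```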

That means np.max(_) must be producing nan, which in turn means S_fix or S_rand[:,rep] contains nan.

So saliency_map must contain some np.nan values.

Try printing the values right after predictions[predictions < 0] = 0. Perhaps you need to assign the result to a variable, for example: someVariable = predictions[predictions < 0] = 0

I had the same error and solved it by dropping the NaN values: df.dropna()
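Rather than dropping NaNs after the fact, you can guard the normalization itself so it never produces them. This is a sketch of a NaN-safe replacement for the post-processing in the question (safe_minmax is a hypothetical helper name, not part of the original code):

```python
import numpy as np

def safe_minmax(predictions):
    """Min-max scale to [0, 1], but return an all-zero map instead of
    NaNs when the input is constant (max == min after clipping)."""
    predictions = np.asarray(predictions, dtype=float).copy()
    predictions[predictions < 0] = 0
    lo, hi = predictions.min(), predictions.max()
    if hi == lo:  # constant map: avoid 0/0 -> NaN
        return np.zeros_like(predictions)
    return (predictions - lo) / (hi - lo)

print(safe_minmax([-1.0, -2.0, -3.0]))  # all clipped to 0 -> zeros, no NaN
print(safe_minmax([0.0, 1.0, 2.0]))     # scales to [0, 0.5, 1]
```

With this guard, AUC_Borji receives a valid (if uninformative) all-zero map instead of NaNs, and the arange error disappears.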
