I am trying to understand the syntax of scipy.optimize.minimize. I have read the documentation over and over and searched Google, but I just cannot figure out how to do this. Hopefully someone can point me in the right direction:
I have a signal (a numpy array) that I want to match against a template signal (a numpy array of the same length), to determine which baseline adjustment and amplitude multiplication factor give the smallest chi-square error. I think I need to minimize the sum over the arrays of

(baselineadjust + amplitudefactor * templatearray - signalarray)**2

I then want to use baselineadjust and amplitudefactor to modify the template and subtract the modified template from the signal, i.e.

newsignal = baselineadjust + amplitudefactor * templatearray - signalarray

The purpose of this is to effectively remove the template signal from the actual signal, leaving the residual signal/noise. Not sure whether this makes sense; happy to elaborate.
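A minimal sketch of how scipy.optimize.minimize could be applied here. The function name `chi_square` and the synthetic `templatearray`/`signalarray` data are my own illustration (the real arrays would come from the data described below), but the residual matches the expression above:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic stand-ins for the real arrays: a template, and a noisy,
# offset, rescaled copy of it (true baseline 0.5, true amplitude 1.8).
rng = np.random.default_rng(0)
templatearray = np.sin(np.linspace(0, 4 * np.pi, 400))
signalarray = 0.5 + 1.8 * templatearray + rng.normal(scale=0.05, size=400)

def chi_square(params, template, signal):
    """Sum of squared residuals for a given baseline and amplitude."""
    baselineadjust, amplitudefactor = params
    residual = baselineadjust + amplitudefactor * template - signal
    return np.sum(residual ** 2)

# x0 is the initial guess: no baseline shift, unit amplitude.
# Extra arrays are passed through via args, not as free parameters.
result = minimize(chi_square, x0=[0.0, 1.0],
                  args=(templatearray, signalarray))
baselineadjust, amplitudefactor = result.x
```

After the fit, `result.x` holds the two parameters and `result.success` indicates convergence; the fitted values should land close to the true 0.5 and 1.8 used to generate the synthetic signal.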
# data is a (very) long list of floats containing a noisy signal plus a
# repetitive signal of a specific length, with slight variations in amplitude
# (partly due to the noise, partly inherent). I wish to remove the repetitive
# signal from the data.
# Prior to this, I detect signal positions in data (the indices in `signals`):
import numpy as np

sigwindows = []
# Gather signals
for sig in signals:
    sigwindow = list(data[sig-200:sig+200])
    sigwindows += [sigwindow]
# Construct an averaged signal (the template)
avgsig = np.mean(sigwindows, axis=0)
for sig in signals:
    sigwindow = list(data[sig-200:sig+200])
    # First, I subtract the average signal
    data[sig-200:sig+200] = list(avgsig - np.array(sigwindow))
# This removes some of the signal, but due to the inherent variation in
# amplitude, some of it remains.
# Instead I would like to do this:
#   data[sig-200:sig+200] = list(baselineadjust + amplitudefactor * avgsig
#                                - np.array(sigwindow))
# ... having obtained 'baselineadjust' and 'amplitudefactor' as described
# above, by minimizing chi-square.
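One alternative worth noting: because the model is linear in the two parameters, the same fit also has a closed-form linear least-squares solution, with no iterative optimizer required. A sketch using np.polyfit on synthetic stand-ins for the template (`avgsig`) and one signal window (the test data and true parameters 0.3/2.0 are my own illustration):

```python
import numpy as np

# Synthetic stand-ins for avgsig (template) and one signal window
# (true baseline 0.3, true amplitude 2.0 plus noise).
rng = np.random.default_rng(1)
avgsig = np.sin(np.linspace(0, 4 * np.pi, 400))
sigwindow = 0.3 + 2.0 * avgsig + rng.normal(scale=0.05, size=400)

# Fit sigwindow ≈ amplitudefactor * avgsig + baselineadjust,
# i.e. a degree-1 polynomial in avgsig, by ordinary least squares.
amplitudefactor, baselineadjust = np.polyfit(avgsig, sigwindow, 1)

# Remove the fitted template from the window, leaving only the noise.
residual = np.asarray(sigwindow) - (baselineadjust + amplitudefactor * avgsig)
```

This is numerically equivalent to minimizing the chi-square above, but deterministic and faster per window, which matters when the loop over `signals` is long.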