I am analysing spectrograms of .wav files, but after the code finally ran I hit a small problem. After saving spectrograms for 700+ .wav files, I realised they all look essentially the same! This is not because they are the same audio file, but because I don't know how to make the scale of the plot smaller (so that I can tell the differences between them apart).
I have already tried to solve this by looking at this Stack Overflow post: Changing plot scale by a factor in matplotlib
I will show the graphs of two different .wav files below.
Believe it or not, these are two different .wav files, yet they look very similar. If the range of these plots stays this wide, the computer won't be able to recognise the difference between the two .wav files.
My code is below:
import numpy
import matplotlib.pyplot as plt
from scipy.io import wavfile

def individualWavToSpectrogram(myAudio, fileNameToSaveTo):
    print(myAudio)
    # Read file and get sampling freq [usually 44100 Hz] and sound object
    samplingFreq, mySound = wavfile.read(myAudio)
    # Check if wave file is 16 bit or 32 bit. 24 bit is not supported
    mySoundDataType = mySound.dtype
    # We can convert our sound array to floating point values ranging from -1 to 1 as follows
    mySound = mySound / (2.**15)
    # Check sample points and sound channel: (5060, 2) for dual channel or (5060,) for mono
    mySoundShape = mySound.shape
    samplePoints = float(mySound.shape[0])
    # Get duration of sound file
    signalDuration = mySound.shape[0] / samplingFreq
    # If two channels, then select only one channel
    #mySoundOneChannel = mySound[:,0]
    # If one channel, index like a 1d array; if 2 channels, index into a 2-dimensional array
    if len(mySound.shape) > 1:
        mySoundOneChannel = mySound[:,0]
    else:
        mySoundOneChannel = mySound
    # Plotting the tone
    # We can represent sound by plotting the pressure values against the time axis.
    # Create an array of sample points in one dimension
    timeArray = numpy.arange(0, samplePoints, 1)
    timeArray = timeArray / samplingFreq
    # Scale to milliseconds
    timeArray = timeArray * 1000
    plt.rcParams['agg.path.chunksize'] = 100000
    # Plot the tone
    plt.plot(timeArray, mySoundOneChannel, color='Black')
    #plt.xlabel('Time (ms)')
    #plt.ylabel('Amplitude')
    print("trying to save")
    plt.savefig('/Users/BillyBobJoe/Desktop/' + fileNameToSaveTo + '.jpg')
    print("saved")
    #plt.show()
    #plt.close()
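One thing worth checking, based on the code above: plt.savefig is called while plt.close() is commented out, so when this function runs in a loop over 700+ files each new waveform is drawn on top of all the previous ones in the same figure, which by itself would make every saved image look nearly identical. A minimal sketch of the clean-figure-per-file pattern (file names and signals here are synthetic, for illustration only):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless backend, suitable for batch saving
import matplotlib.pyplot as plt

def save_waveform(signal, sampling_freq, out_path):
    # Start from a fresh figure so earlier calls don't accumulate in this plot
    plt.figure()
    time_ms = np.arange(len(signal)) / sampling_freq * 1000
    plt.plot(time_ms, signal, color='Black')
    plt.savefig(out_path)
    plt.close()  # release the figure and prevent over-plotting on the next call

# Two synthetic "files": a 440 Hz tone and an 880 Hz tone, one second each
fs = 44100
t = np.arange(fs) / fs
save_waveform(np.sin(2 * np.pi * 440 * t), fs, 'tone_a.png')
save_waveform(np.sin(2 * np.pi * 880 * t), fs, 'tone_b.png')
```

With a figure created and closed per call, each saved image contains only its own waveform.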
How can I modify this code to increase the sensitivity of the plot, so that the differences between two .wav files become more apparent?
Thanks!
[更新]
I have tried using plt.xlim((0, 16000)), but I need a way to change the scale of each unit, so that when I change the x-axis to 0-16000 the plot fills out.
If the question is how to limit the range on the x-axis to between 0 and 1000, you can do the following:

plt.xlim((0, 1000))