ValueError: Cannot feed value of shape (5, 15) for Tensor 'one_hot:0', which has shape '(5, 15, 2)'

Posted on 2024-09-21 03:17:49


Here is the code:

num_epochs = 100
total_series_length = 50000
truncated_backprop_length = 15
state_size = 4
num_classes = 2
echo_step = 3
batch_size = 5
num_batches = total_series_length//batch_size//truncated_backprop_length
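
(With these values, num_batches = 50000 // 5 // 15 = 666, so each epoch runs 666 truncated-backprop windows of 15 time steps over 5 sequences.)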

Generating the data {…}

Some optimization code {….} and then building the graph:

#Step 3 Training the network
with tf.Session() as sess:
    #we stupidly have to do this every time, it should just know
    #that we initialized these vars. v2 guys, v2..
    sess.run(tf.initialize_all_variables())
    #interactive mode
    plt.ion()
    #initialize the figure
    plt.figure()
    #show the graph
    plt.show()
    #to show the loss decrease
    loss_list = []

    for epoch_idx in range(num_epochs):
        #generate data at every epoch, batches run in epochs
        x,y = generateData()
        #initialize an empty hidden state
        _current_state = np.zeros((batch_size, state_size))

        print("New data, epoch", epoch_idx)
        #each batch
        for batch_idx in range(num_batches):
            #starting and ending point per batch
            #since weights recur at every layer through time
            #These layers will not be unrolled to the beginning of time, 
            #that would be too computationally expensive, and are therefore truncated 
            #at a limited number of time-steps
            start_idx = batch_idx * truncated_backprop_length
            end_idx = start_idx + truncated_backprop_length

            batchX = x[:,start_idx:end_idx]
            batchY = y[:,start_idx:end_idx]

            #run the computation graph, give it the values
            #we calculated earlier
            _total_loss, _train_step, _final_state, _predictions_series = sess.run(
                [total_loss, train_step, final_state, predictions],
                feed_dict={
                    batchX_placeholder:batchX,
                    batchY_placeholder:batchY,
                    init_state:_current_state
                })

            loss_list.append(_total_loss)

            if batch_idx%100 == 0:
                print("Step",batch_idx, "Loss", _total_loss)
                plot(loss_list, _predictions_series, batchX, batchY)

plt.ioff()
plt.show()

The error is:

ValueError                                Traceback (most recent call last)
<ipython-input-9-7c3d1289d16b> in <module>()
     40                     batchX_placeholder:batchX,
     41                     batchY_placeholder:batchY,
---> 42                     init_state:_current_state
     43                 })
     44 

/home/pranshu_44/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
    765     try:
    766       result = self._run(None, fetches, feed_dict, options_ptr,
--> 767                          run_metadata_ptr)
    768       if run_metadata:
    769         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

/home/pranshu_44/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
    942                 'Cannot feed value of shape %r for Tensor %r, '
    943                 'which has shape %r'
--> 944                 % (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
    945           if not self.graph.is_feedable(subfeed_t):
    946             raise ValueError('Tensor %s may not be fed.' % subfeed_t)

ValueError: Cannot feed value of shape (5, 15) for Tensor 'one_hot:0', which has shape '(5, 15, 2)'

I have looked at the docs, but they were no help at all. If there is some other, simpler way to do this, that would also be helpful.


1 Answer

Answered on 2024-09-21 03:17:49

You are converting the placeholder variable to a one-hot representation, but not the data that is actually fed into the network during training. Try converting batchX to a one-hot representation before feeding it in. This snippet will convert the matrix to a one-hot representation:

# Assuming that batchX contains values from 0 to N-1, where N is the number of classes
batchX = (np.arange(max(batchX.flatten()) + 1) == batchX[:, :, None]).astype(int)
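
As a quick sanity check, here is a standalone NumPy sketch (the 5×15 batch of 0/1 class indices below is made up for illustration) showing that this conversion turns a (5, 15) index matrix into the (5, 15, 2) one-hot array that the 'one_hot:0' tensor expects:

import numpy as np

num_classes = 2  # matches num_classes in the question
# hypothetical stand-in for batchX: 5 sequences of 15 class indices in {0, 1}
batch = np.random.randint(0, num_classes, size=(5, 15))

# same idea as the snippet above: compare each index against the range of classes
one_hot_batch = (np.arange(num_classes) == batch[:, :, None]).astype(int)

print(batch.shape)          # (5, 15)
print(one_hot_batch.shape)  # (5, 15, 2) -- the shape expected by 'one_hot:0'

After this conversion the batch can be fed without the shape mismatch, since the last axis now enumerates the two classes.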
