Unable to build a binary classifier

Published 2024-10-01 22:32:34


I don't know if this is the right place to ask, but I've been stuck for two days, so I'm giving it a try. I followed the MNIST feedforward neural network model and tried to apply it to my binary classification problem.

Two days ago I had a problem with the data type of my tensors, and I asked about it here on Stack Overflow: binary classification, xentropy mismatch, invalid argument (Received a label value of 1 which is outside the valid range of [0, 1)). I didn't get any response, so I found a "solution": casting the tensor y with tf.to_float(y).
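For context, that earlier error comes from how sparse cross-entropy interprets labels: with logits of shape (batch, n_outputs), valid label values are the integers in [0, n_outputs). A minimal NumPy sketch of that range check (the helper below is illustrative, not TensorFlow's actual implementation):

```python
import numpy as np

def check_sparse_labels(logits, labels):
    """Mimic the label range check done by sparse cross-entropy ops.

    With logits of shape (batch, num_classes), each label must be an
    integer class index in [0, num_classes).
    """
    num_classes = logits.shape[1]
    bad = [int(l) for l in labels if not (0 <= l < num_classes)]
    return num_classes, bad

# One output unit => only the label 0 is valid, so any label 1 is rejected.
logits = np.zeros((4, 1), dtype=np.float32)
labels = np.array([0, 1, 0, 1])
num_classes, bad = check_sparse_labels(logits, labels)
print(num_classes, bad)  # 1 [1, 1]
```

So with n_outputs = 1, any positive example (label 1) is outside the valid range, which matches the message quoted above.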

Now I get an error at the very end:

InvalidArgumentError                      Traceback (most recent call last)
~\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\client\session.py in _do_call(self, fn, *args)
   1333     try:
-> 1334       return fn(*args)
   1335     except errors.OpError as e:

~\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\client\session.py in _run_fn(feed_dict, fetch_list, target_list, options, run_metadata)
   1318       return self._call_tf_sessionrun(
-> 1319           options, feed_dict, fetch_list, target_list, run_metadata)
   1320 

~\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\client\session.py in _call_tf_sessionrun(self, options, feed_dict, fetch_list, target_list, run_metadata)
   1406         self._session, options, feed_dict, fetch_list, target_list,
-> 1407         run_metadata)
   1408 

InvalidArgumentError: targets[0] is out of range
     [[{{node in_top_k_2/InTopKV2}}]]

targets is the second argument of the function: tf.nn.in_top_k(logits, y, 1)
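The in_top_k error appears to have the same root cause: tf.nn.in_top_k(logits, y, 1) treats each row of logits as scores over num_classes classes and each entry of y as a class index, so with a single output unit only the target 0 is in range. A hedged NumPy sketch of the equivalent check (in_top_k_like is an illustrative re-implementation, not the TF op), plus the usual single-logit alternative of thresholding the sigmoid at 0.5:

```python
import numpy as np

def in_top_k_like(logits, targets, k=1):
    """Illustrative re-implementation of tf.nn.in_top_k semantics.

    Rejects targets outside [0, num_classes), as the TF op does.
    """
    num_classes = logits.shape[1]
    if np.any((targets < 0) | (targets >= num_classes)):
        raise ValueError("targets out of range [0, %d)" % num_classes)
    top_k = np.argsort(-logits, axis=1)[:, :k]          # indices of top-k scores per row
    return np.array([t in row for t, row in zip(targets, top_k)])

logits = np.array([[-0.2], [0.9]])                      # shape (batch, 1): one class only
try:
    in_top_k_like(logits, np.array([0, 1]))
except ValueError as e:
    print(e)                                            # targets out of range [0, 1)

# Single-logit binary accuracy: threshold sigmoid(logit) at 0.5 instead.
probs = 1.0 / (1.0 + np.exp(-logits[:, 0]))
preds = (probs >= 0.5).astype(int)
print(preds)                                            # [0 1]
```

In other words, with one output neuron, in_top_k is not a meaningful accuracy metric; comparing the thresholded prediction against y is one common workaround.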

Here is my full code:

I'd be grateful if someone could tell me where my mistake is and what I'm doing wrong, because I'm about to give up.

import tensorflow as tf
import numpy as np
n_inputs = 28 
n_hidden1 = 15
n_hidden2 = 5
n_outputs = 1
reset_graph()
X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")  # placeholder fed values via feed_dict
y = tf.placeholder(tf.int32, shape=(None), name="y")   # None => any batch size

def neuron_layer(X, n_neurons, name, activation=None):
    with tf.name_scope(name):
        n_inputs = int(X.shape[1])
        stddev = 2 / np.sqrt(n_inputs) 
        init = tf.truncated_normal((n_inputs, n_neurons), stddev=stddev)  # n_inputs x n_neurons matrix, values close to 0
        W = tf.Variable(init, name="kernel")  # random weights
        b = tf.Variable(tf.zeros([n_neurons]), name="bias")
        Z = tf.matmul(X, W) + b
        tf.cast(Z,tf.int32)
        if activation is not None:
            return activation(Z)
        else:
            return Z

hidden1 = neuron_layer(X, n_hidden1, name="hidden1",
                           activation=tf.nn.relu)
hidden2 = neuron_layer(hidden1, n_hidden2, name="hidden2",
                           activation=tf.nn.relu)
logits = neuron_layer(hidden2, n_outputs, name="outputs")

xentropy = tf.keras.backend.binary_crossentropy(tf.to_float(y),logits)  
loss = tf.reduce_mean(xentropy)
learning_rate = 0.01
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
correct = tf.nn.in_top_k(logits,y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))


init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_epochs = 40
batch_size = 50

def shuffle_batch(X, y, batch_size):
    rnd_idx = np.random.permutation(len(X))
    n_batches = len(X) // batch_size
    for batch_idx in np.array_split(rnd_idx, n_batches):
        X_batch, y_batch = X[batch_idx], y[batch_idx]
        yield X_batch, y_batch

#until here, no errors ...
with tf.Session() as sess:
    init.run()
    for epoch in range(n_epochs):
        for X_batch, y_batch in shuffle_batch(X_train, Y_train, batch_size):
            sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
        acc_batch = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
        acc_val = accuracy.eval(feed_dict={X: X_valid, y: y_valid})
        print(epoch, "Batch accuracy:", acc_batch, "Val accuracy:", acc_val)

    save_path = saver.save(sess, "./my_model_final.ckpt")
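One more thing worth checking in the code above: tf.keras.backend.binary_crossentropy by default expects probabilities, not raw logits (it takes a from_logits argument for the logit case). A small NumPy sketch of the difference, assuming the standard definitions of sigmoid and binary cross-entropy:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def binary_xent(y_true, y_prob, eps=1e-7):
    """Standard binary cross-entropy on probabilities in (0, 1)."""
    y_prob = np.clip(y_prob, eps, 1 - eps)  # avoid log(0)
    return -(y_true * np.log(y_prob) + (1 - y_true) * np.log(1 - y_prob))

y_true = np.array([1.0, 0.0])
logits = np.array([2.0, -1.0])

wrong = binary_xent(y_true, logits)           # treating raw logits as probabilities
right = binary_xent(y_true, sigmoid(logits))  # squash through sigmoid first
print(wrong.round(3), right.round(3))
```

Because logits outside [0, 1] get clipped, the "wrong" version collapses to near-zero loss here, so the network would barely learn even once the in_top_k error is fixed.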

I really need some help... thanks.


