TensorFlow: taking the max over a tensor dimension

Posted 2024-10-01 05:00:57


So I have a tensor h_in of shape (50, ?, 1, 100), and I would like to turn it into shape (50, 1, 1, 100) by taking the max over dimension 1.

How do I do that?

I tried

h_out = max_pool(h_in)


but that does not seem to reduce the size.

Runnable example:

import tensorflow as tf
import numpy as np
import numpy.random as nprand

def _weight_variable(shape,name):
    initial = tf.truncated_normal(shape,stddev=0.1)
    v = tf.Variable(initial,name=name)
    return v

def _bias_variable(shape,name):
    initial = tf.constant(0.1,shape=shape)
    v = tf.Variable(initial,name=name)
    return v

def _embedding_variable(shape,name):
    initial = tf.truncated_normal(shape)
    v = tf.Variable(initial,name=name)
    return v

def conv2d(x,W,strides=[1,1,1,1],padding='VALID'):
    return tf.nn.conv2d(x,W,strides=strides,padding=padding)

def max_pool(h,ksize=[1,-1,1,1],strides=[1,1,1,1],padding='VALID'):
    # the -1 in ksize is the attempt to pool over the whole (variable) dimension
    return tf.nn.max_pool(h,ksize=ksize,strides=strides,padding=padding)

nof_embeddings= 55000
dim_embeddings = 300

batch_size = 50
filter_size = 100
x_input = tf.placeholder(tf.int32, shape=[batch_size, None])

def _model():

    embeddings = _embedding_variable([nof_embeddings,dim_embeddings],'embeddings')

    h_lookup = tf.nn.embedding_lookup(embeddings,x_input)
    h_embed = tf.reshape(h_lookup,[batch_size,-1,dim_embeddings,1])

    f = 3

    W_conv1f = _weight_variable([f,dim_embeddings,1,filter_size],f'W_conv1_{f}')
    b_conv1f = _bias_variable([filter_size],f'b_conv1_{f}')
    h_conv1f = tf.nn.relu(conv2d(h_embed,W_conv1f) + b_conv1f)

    h_pool1f = max_pool(h_conv1f)

    print("h_embed:",h_embed.get_shape())
    print()
    print(f'h_conv1_{f}:',h_conv1f.get_shape())
    print(f'h_pool1_{f}:',h_pool1f.get_shape())
    print()

    return tf.shape(h_pool1f)

if __name__ == '__main__':

    tensor_length = 35

    model = _model()
    with tf.Session() as sess:
        tf.global_variables_initializer().run()
        batch = nprand.randint(0,nof_embeddings,size=[batch_size,tensor_length])
        shape = sess.run(model,
                         feed_dict ={
                                 x_input : batch
                                 })
        print('result:',shape)

which outputs

h_embed: (50, ?, 300, 1)

h_conv1_3: (50, ?, 1, 100)
h_pool1_3: (50, ?, 1, 100)

result: [ 50  35   1 100]

Suppose I hard-code the size I want:

h_pool1f = max_pool(h_conv1f,ksize=[1,35-f+1,1,1])

This works. But it breaks as soon as I change tensor_length (which is determined at runtime, so no, I cannot hard-code it).

One "solution" would be to blow the input up to some fixed maximum length by padding, but that introduces unnecessary computation and an artificial upper bound, both of which I would very much like to avoid.

So, is there

  • a way to make tensorflow interpret the -1 "correctly"?
  • or another way to compute the max?
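For reference, the reduction being asked for can be sketched in NumPy (shapes only; h stands in for the pooled activations, and 35 is just the example length from the run above):

```python
import numpy as np

# Stand-in for h_conv1f: batch 50, variable length 35, height 1, 100 filters
h = np.random.rand(50, 35, 1, 100)

# Max over axis 1, with keepdims so the rank stays 4
h_max = h.max(axis=1, keepdims=True)

print(h_max.shape)  # (50, 1, 1, 100)
```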

1 answer

#1 · Posted 2024-10-01 05:00:57

I think tf.reduce_max is what you are looking for: https://www.tensorflow.org/api_docs/python/tf/reduce_max

Usage:

tens = some tensorflow.Tensor
ax = some non-negative integer, -1, or None
red_m = tf.reduce_max(tens, axis=ax)

If tens has shape [shape_0, shape_1, shape_2], the resulting tensor red_m will have shape [shape_1, shape_2] for ax=0, shape [shape_0, shape_2] for ax=1, and so on. If ax=-1, the last axis is reduced; if ax=None, the reduction runs over all axes.
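The shape behaviour described above can be checked with the NumPy analogue of reduce_max (np.max follows the same axis convention; note that to get the (50, 1, 1, 100) shape from the question you would additionally pass keepdims=True, otherwise the reduced axis is dropped):

```python
import numpy as np

# Stands in for tens with shape [shape_0, shape_1, shape_2] = [4, 5, 6]
t = np.random.rand(4, 5, 6)

print(np.max(t, axis=0).shape)   # (5, 6) -- ax=0 drops shape_0
print(np.max(t, axis=1).shape)   # (4, 6) -- ax=1 drops shape_1
print(np.max(t, axis=-1).shape)  # (4, 5) -- ax=-1 reduces the last axis
print(np.max(t).shape)           # ()     -- ax=None reduces to a scalar
```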
