Invalid argument: Not enough time for target transition sequence

Posted 2024-06-25 23:19:13


I am trying to run the HTR model at https://github.com/arthurflor23/handwritten-text-recognition, but it gives me the error Invalid argument: Not enough time for target transition sequence. I think the problem is in {}. My image size is (137518) and the maximum text length is 137. Do you know how I can fix this?


Tags: text, https, model, github, com, time, error, not
1 Answer
User
#1 · Posted 2024-06-25 23:19:13

I fixed this issue; it was caused by the input size. Here is the model summary for my setup:

Layer (type)                 Output Shape              Param #   
=================================================================
input (InputLayer)           [(None, 1024, 128, 1)]    0         
_________________________________________________________________
conv2d (Conv2D)              (None, 1024, 64, 16)      160       
_________________________________________________________________
p_re_lu (PReLU)              (None, 1024, 64, 16)      16        
_________________________________________________________________
batch_normalization (BatchNo (None, 1024, 64, 16)      112       
_________________________________________________________________
full_gated_conv2d (FullGated (None, 1024, 64, 16)      4640      
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 1024, 64, 32)      4640      
_________________________________________________________________
p_re_lu_1 (PReLU)            (None, 1024, 64, 32)      32        
_________________________________________________________________
batch_normalization_1 (Batch (None, 1024, 64, 32)      224       
_________________________________________________________________
full_gated_conv2d_1 (FullGat (None, 1024, 64, 32)      18496     
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 512, 16, 40)       10280     
_________________________________________________________________
p_re_lu_2 (PReLU)            (None, 512, 16, 40)       40        
_________________________________________________________________
batch_normalization_2 (Batch (None, 512, 16, 40)       280       
_________________________________________________________________
full_gated_conv2d_2 (FullGat (None, 512, 16, 40)       28880     
_________________________________________________________________
dropout (Dropout)            (None, 512, 16, 40)       0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 512, 16, 48)       17328     
_________________________________________________________________
p_re_lu_3 (PReLU)            (None, 512, 16, 48)       48        
_________________________________________________________________
batch_normalization_3 (Batch (None, 512, 16, 48)       336       
_________________________________________________________________
full_gated_conv2d_3 (FullGat (None, 512, 16, 48)       41568     
_________________________________________________________________
dropout_1 (Dropout)          (None, 512, 16, 48)       0         
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 256, 4, 56)        21560     
_________________________________________________________________
p_re_lu_4 (PReLU)            (None, 256, 4, 56)        56        
_________________________________________________________________
batch_normalization_4 (Batch (None, 256, 4, 56)        392       
_________________________________________________________________
full_gated_conv2d_4 (FullGat (None, 256, 4, 56)        56560     
_________________________________________________________________
dropout_2 (Dropout)          (None, 256, 4, 56)        0         
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 256, 4, 64)        32320     
_________________________________________________________________
p_re_lu_5 (PReLU)            (None, 256, 4, 64)        64        
_________________________________________________________________
batch_normalization_5 (Batch (None, 256, 4, 64)        448       
_________________________________________________________________
reshape (Reshape)            (None, 256, 256)          0         
_________________________________________________________________
bidirectional (Bidirectional (None, 256, 256)          296448    
_________________________________________________________________
dense (Dense)                (None, 256, 256)          65792     
_________________________________________________________________
bidirectional_1 (Bidirection (None, 256, 256)          296448    
_________________________________________________________________
dense_1 (Dense)              (None, 256, 332)          85324     
=================================================================
Look at the final layer (dense_1): its second dimension, 256, is the number of CTC time steps, so your text labels must be no longer than 256 characters (slightly fewer if a label contains repeated consecutive characters, since CTC needs a blank step between them). That is where the problem comes from.
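
A minimal sketch of how you could verify this before training, assuming a built Keras model (here called model) whose output shape is (batch, time_steps, vocab_size) and a list of ground-truth strings (here called labels); both names are placeholders, not part of the repository's API:

def check_ctc_capacity(model, labels):
    # Output shape is (None, time_steps, vocab_size); axis 1 is the CTC time axis.
    time_steps = model.output_shape[1]
    for text in labels:
        # CTC needs one time step per character, plus one extra step for the
        # blank it must insert between identical consecutive characters.
        repeats = sum(1 for a, b in zip(text, text[1:]) if a == b)
        required = len(text) + repeats
        if required > time_steps:
            raise ValueError(
                f"label of length {len(text)} needs {required} time steps, "
                f"but the model only provides {time_steps}"
            )

With the summary above, time_steps is 256 (the 1024-pixel-wide input is downsampled by a factor of 4 along the width axis before the Reshape), so labels of up to 137 characters, as in the question, fit once the input size matches this architecture.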
