How to implement the functionality of a trained Keras model with barebones code

I am trying to use the weights from a Keras Sequential model I trained to predict values, but I need the prediction step to be as lightweight as possible. What are the mathematical steps involved in using these weights?

I have a general understanding of machine learning models and the math behind them, but I don't seem to fully understand how Keras stores or applies its weights in a Dense layer. I have a Sequential model with three layers after the input layer, structured as follows (a sketch of how such a model might have been defined follows the summary):

Layer (type)                 Output Shape              Param      Activation
=============================================================================
dense_1 (Dense)              (None, 32)                128           selu
_____________________________________________________________________________
dense_2 (Dense)              (None, 16)                528           selu
_____________________________________________________________________________
dense_3 (Dense)              (None, 1)                 17            None
=============================================================================
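
For reference, a model with this exact summary could have been built along the following lines; this is only a guess at the original definition (three inputs, two SELU hidden layers, one linear output) using tensorflow.keras, not the code that was actually used:

from tensorflow import keras

# Hypothetical reconstruction of the architecture implied by the summary above:
# 3 inputs -> Dense(32, selu) -> Dense(16, selu) -> Dense(1, linear)
model = keras.Sequential([
    keras.layers.Dense(32, activation='selu', input_shape=(3,)),  # 3*32 + 32 = 128 params
    keras.layers.Dense(16, activation='selu'),                    # 32*16 + 16 = 528 params
    keras.layers.Dense(1),                                        # 16*1 + 1  = 17 params
])
model.save_weights('file.h5')  # writes the HDF5 layout inspected below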

In the HDF5 file, dense_1 looks like this:

GROUP "dense_1" {
        DATASET "bias:0" {
           DATATYPE  H5T_IEEE_F32LE
           DATASPACE  SIMPLE { ( 32 ) / ( 32 ) }
           DATA {
           (0): -1.94451, 1.01336, -0.331715, -0.113848, 1.62048,
           (5): -0.048518, -0.354262, 0.0446166, -0.589025, 0.52462,
           (10): -0.65492, -1.31858, 0.655622, 0.105934, -0.211289,
           (15): -0.707541, 0.136989, -0.780716, 0.848544, -0.243017,
           (20): 0.573721, -0.486272, -0.341983, -1.13598, 0.319228,
           (25): -0.251834, -2.36574, -0.162145, -1.39944, -0.121933,
           (30): -0.31354, -2.03999
           }
        }
        DATASET "kernel:0" {
           DATATYPE  H5T_IEEE_F32LE
           DATASPACE  SIMPLE { ( 3, 32 ) / ( 3, 32 ) }
           DATA {
           (0,0): -0.0363489, -0.710178, 0.235509, -1.05998, -0.219359,
           (0,5): 0.409596, -0.979901, -0.956889, -0.760354, -0.666067,
           (0,10): -0.984245, -0.0354558, -0.349002, 0.00105414,
           (0,14): 0.116752, -1.15233, 0.0275432, 0.215353, -0.678382,
           (0,19): -1.30734, -0.607154, 0.339789, -1.15283, 0.264189,
           (0,24): -0.6365, -1.06603, 0.626251, 0.511415, 0.211209,
           (0,29): 0.570231, -1.12093, 0.0427848,
           (1,0): 0.499443, 0.919749, 0.211407, -0.650205, 0.0994867,
           (1,5): -0.193579, -0.927313, -0.663942, -0.971027, 0.646805,
           (1,10): -0.891204, 0.432123, 0.459246, 0.0648564, -0.338207,
           (1,15): -0.926523, -0.0511654, 0.18302, 0.676123, -0.394053,
           (1,20): 0.520645, 0.289941, -0.760597, -0.309942, 0.477148,
           (1,25): -0.769285, -0.885901, 0.370343, -0.111789, -0.117417,
           (1,30): -0.534311, 0.466348,
           (2,0): -0.635124, 0.0613677, 0.10305, -0.635547, -0.00577374,
           (2,5): 0.0560383, -0.496999, -1.10678, -1.1172, 0.686428,
           (2,10): -1.04004, 0.569042, -0.285685, 1.24021, -0.602061,
           (2,15): 0.628718, 0.213643, -0.552893, 0.648553, -0.864449,
           (2,20): 1.21918, 0.185123, -0.415593, -0.187327, 1.11334,
           (2,25): -0.0186234, -0.192697, 0.3651, -0.964619, 0.201066,
           (2,30): -1.03416, 0.514029
           }
        }
     }
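
To see exactly how Keras lays the weights out on disk, it can help to walk the full HDF5 tree first; a minimal h5py sketch (using the same file.h5 name as the code further down):

import h5py

with h5py.File('file.h5', 'r') as f:
    # print every group/dataset path in the file along with dataset shapes,
    # e.g. paths ending in .../dense_1/bias:0 and .../dense_1/kernel:0
    f.visititems(lambda name, obj: print(name, getattr(obj, 'shape', '')))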

My understanding of machine learning models leads me to believe that to calculate the output of node 0 in the first layer, I would do the following: bias(0) * ((Input0 * kernel(0,0)) + (Input1 * kernel(1,0)) + (Input2 * kernel(2,0)))
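For reference, the Keras documentation describes a Dense layer as computing output = activation(dot(input, kernel) + bias), with the kernel shaped (inputs, units), so a whole layer can be evaluated with a single matrix product. A minimal numpy sketch of that convention (dense_forward is just an illustrative helper, and the random kernel/bias stand in for the arrays read from the HDF5 file):

import numpy as np

def dense_forward(inputs, kernel, bias, activation=None):
    # inputs: (n_inputs,), kernel: (n_inputs, n_units), bias: (n_units,)
    z = np.dot(inputs, kernel) + bias
    return activation(z) if activation is not None else z

# toy example with a (3, 32)-shaped kernel like dense_1's
rng = np.random.default_rng(0)
kernel = rng.standard_normal((3, 32))
bias = rng.standard_normal(32)
print(dense_forward(np.array([9.0, 5.0, -0.3]), kernel, bias).shape)  # (32,)

My attempt below loads the weights with h5py and walks the layers by hand: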

import h5py
import numpy as np

# Example inputs (these could instead be read from the command line, e.g. sys.argv[1:4])
T, D, Y = 9, 5, -0.3

def selu(vals):
    # SELU: scale * x for x >= 0, scale * alpha * (exp(x) - 1) for x < 0
    alpha = 1.6732632423543772848170429916717
    scale = 1.0507009873554804934193349852946
    vals = np.array(vals, dtype=np.float64)
    for i in range(len(vals)):
        if vals[i] < 0:
            vals[i] = alpha * (np.exp(vals[i]) - 1)
    return scale * vals

file = h5py.File('file.h5', 'r')  # load the HDF5 weights file

layers = list(file.keys())  # top-level keys

model = file[layers[0]]  # the first top-level group holds the model weights

layers = list(model.keys())  # layer group names (dense_1, dense_2, dense_3)

biases, kernels = [], []
for layer in layers:
    layer_weights = model[layer][layer]  # each layer group contains a subgroup of the same name
    sub = list(layer_weights.keys())     # ['bias:0', 'kernel:0']
    bias = layer_weights[sub[0]][:]      # bias vector, shape (units,)
    kernel = layer_weights[sub[1]][:]    # kernel matrix, shape (inputs, units)
    biases.append(bias)
    kernels.append(kernel)

_in = [T, D, Y]
for i in range(len(kernels)):
    # weighted sum over the inputs: row j of the kernel scales input j
    _sum = np.zeros(len(biases[i]))
    for j in range(len(kernels[i])):
        _cross = float(_in[j]) * kernels[i][j].astype(np.float64)
        _sum += _cross

    # SELU on the hidden layers, no activation on the output layer
    if i < len(kernels) - 1:
        _vals = selu(_sum)
    else:
        _vals = _sum
    _out = _vals * biases[i]  # combine with the bias by multiplication, per the formula above
    _in = _out
    print(_out)
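
To check whether the barebones loop reproduces the original network, the same inputs can be run through Keras itself for comparison (this assumes the full model was saved, e.g. as model.h5; that filename is a placeholder):

from tensorflow import keras
import numpy as np

# Hypothetical check: load the original model and compare its prediction
# with the output of the hand-rolled loop above for the same inputs
reference = keras.models.load_model('model.h5')  # placeholder filename
print(reference.predict(np.array([[9, 5, -0.3]])))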

If someone could explain this to me, or point me to a description of the path values take as they pass through such a model, I would greatly appreciate it.

