The "jumping" cost of a simple, non-object-oriented neural network


I am sketching out a neural network in Python 3.4, using numpy and matrices, to learn a simple XOR. My notation is as follows (a sketch of how these symbols combine in the forward pass follows the list):

A is the activity of a neuron

Z is the input of a neuron

W is a weight matrix, W ∈ R^{(neurons in previous layer) × (neurons in next layer)}

B is the vector of bias values
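With this notation, and assuming the sigmoid activation σ used in the code below, the forward pass of one layer is

Z^{(l)} = (W^{(l)})^T A^{(l-1)} + B^{(l)},        A^{(l)} = σ(Z^{(l)})

i.e. each column of W^{(l)} maps the previous layer's activities to the input of one neuron in the next layer.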

After implementing a very simple network in Python, everything works correctly when training on only a single input vector. However, when training on all four XOR training examples, the error function shows very strange behaviour (see figures) and the network's output is always roughly 0.5. Changing the network size, the learning rate, or the number of training iterations does not seem to help.

Figure: Cost J while only training on one training example

Figure: Cost J while training with all training examples

Here is the code for the network:

import numpy as np
import time
import matplotlib.pyplot as plt


Js = []
start = time.time()
np.random.seed(2)


#Sigmoid        
def activation(x, derivative = False):
    if(derivative):
        a = activation(x)
        return a * (1 - a)
    else:
        return 1/(1+np.exp(-x))

def cost(output, target):
    return (1/2) * np.sum((target - output)**2)


INPUTS = np.array([
    [0, 1],
    [1, 0],
    [0, 0],
    [1, 1],
])
TARGET = np.array([
    [1],
    [1],
    [0],
    [0],
])

"Hyper-Parameters"
# Layer Structure
LAYER = [2, 3, 1]
LEARNING_RATE = 0.1
ITERATIONS = int(1e3)

# Init Weights
W1 = np.random.rand(LAYER[0], LAYER[1])
W2 = np.random.rand(LAYER[1], LAYER[2])

# Init Biases
B1 = np.random.rand(LAYER[1], 1)
B2 = np.random.rand(LAYER[2], 1)

for i in range(0, ITERATIONS):
    exampleIndex = i % len(INPUTS)
    #exampleIndex = 2
    "Forward Pass"
    # Layer One Activity (Input layer)
    A0 = np.transpose(INPUTS[exampleIndex:exampleIndex+1])

    # Layer Two Activity (Hidden Layer)
    Z1 = np.dot(np.transpose(W1), A0) + B1
    A1 = activation(Z1)

    # Layer Three Activity (Output Layer)
    Z2 = np.dot(np.transpose(W2), A1) + B2
    A2 = activation(Z2)

    # Output
    O = A2

    # Cost J

    # Target Vector T
    T = np.transpose(TARGET[exampleIndex:exampleIndex+1])
    J = cost(O, T)
    Js.append(J)

    print("J = {}".format(J))
    print("I = {}, O = {}".format(A0, O))

    "Backward Pass"

    # Calculate Delta of output layer
    D2 = (O - T) * activation(Z2, True)

    # Calculate Delta of hidden layer
    D1 = np.dot(W2, D2) * activation(Z1, True)

    # Calculate Derivatives w.r.t. W2
    DerW2 = np.dot(A1, np.transpose(D2))
    # Calculate Derivatives w.r.t. W1
    DerW1 = np.dot(A0, np.transpose(D1))

    # Calculate Derivatives w.r.t. B2
    DerB2 = D2
    # Calculate Derivatives w.r.t. B1
    DerB1 = D1

    "Update Weights and Biases"

    W1 -= LEARNING_RATE * DerW1
    B1 -= LEARNING_RATE * DerB1

    W2 -= LEARNING_RATE * DerW2
    B2 -= LEARNING_RATE * DerB2

# Show prediction

print("Time elapsed {}s".format(time.time() - start))    
plt.plot(Js)
plt.ylabel("Cost J")
plt.xlabel("Iterations")
plt.show()
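For reference, the backward pass above implements the usual gradient of the squared-error cost with a sigmoid activation, written in the same notation (⊙ denotes element-wise multiplication, η = LEARNING_RATE):

D^{(2)} = (O − T) ⊙ σ'(Z^{(2)})
D^{(1)} = W^{(2)} D^{(2)} ⊙ σ'(Z^{(1)})
∂J/∂W^{(l)} = A^{(l-1)} (D^{(l)})^T,        ∂J/∂B^{(l)} = D^{(l)}
W^{(l)} ← W^{(l)} − η ∂J/∂W^{(l)},        B^{(l)} ← B^{(l)} − η ∂J/∂B^{(l)}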

What could be the reason for this strange behaviour in my implementation? Any ideas?


1 Answer

I think your cost function jumps because you perform the weight update after every single sample. Nevertheless, your network is in fact learning the correct behaviour:

479997
J = 4.7222501603409765e-05
I = [[1]
 [0]], O = [[ 0.99028172]]
T = [[1]]
479998
J = 7.3205311398742e-05
I = [[0]
 [0]], O = [[ 0.01210003]]
T = [[0]]
479999
J = 4.577485181547362e-05
I = [[1]
 [1]], O = [[ 0.00956816]]
T = [[0]]
480000
J = 4.726257702199439e-05
I = [[0]
 [1]], O = [[ 0.9902776]]
T = [[1]]

The cost function shows some interesting behaviour: the training process reaches a point where the jumps in the cost function become very small. You can reproduce this with the code below (I only made minor changes; note that I trained for many more iterations):

import numpy as np
import time
import matplotlib.pyplot as plt


Js = []
start = time.time()
np.random.seed(2)


#Sigmoid        
def activation(x, derivative = False):
    if(derivative):
        a = activation(x)
        return a * (1 - a)
    else:
        return 1/(1+np.exp(-x))

def cost(output, target):
    return (1/2) * np.sum((target - output)**2)


INPUTS = np.array([[0, 1],[1, 0],[0, 0],[1, 1]])
TARGET = np.array([[1],[1],[0],[0]])

"Hyper-Parameters"
# Layer Structure
LAYER = [2, 3, 1]
LEARNING_RATE = 0.1
ITERATIONS = int(5e5)

# Init Weights
W1 = np.random.rand(LAYER[0], LAYER[1])
W2 = np.random.rand(LAYER[1], LAYER[2])

# Init Biases
B1 = np.random.rand(LAYER[1], 1)
B2 = np.random.rand(LAYER[2], 1)

for i in range(0, ITERATIONS):
    exampleIndex = i % len(INPUTS)
    # exampleIndex = 2
    "Forward Pass"
    # Layer One Activity (Input layer)
    A0 = np.transpose(INPUTS[exampleIndex:exampleIndex+1])

    # Layer Two Activity (Hidden Layer)
    Z1 = np.dot(np.transpose(W1), A0) + B1
    A1 = activation(Z1)

    # Layer Three Activity (Output Layer)
    Z2 = np.dot(np.transpose(W2), A1) + B2
    A2 = activation(Z2)

    # Output
    O = A2

    # Cost J

    # Target Vector T
    T = np.transpose(TARGET[exampleIndex:exampleIndex+1])
    J = cost(O, T)
    Js.append(J)

    # print("J = {}".format(J))
    # print("I = {}, O = {}".format(A0, O))
    # print("T = {}".format(T))
    # Print iteration 20000*k and the three iterations just before it,
    # i.e. one pass over all four XOR examples around every 20000th step
    if (i % 20000) in (19997, 19998, 19999, 0):
        print(i)
        print("J = {}".format(J))
        print("I = {}, O = {}".format(A0, O))
        print("T = {}".format(T))

    "Backward Pass"

    # Calculate Delta of output layer
    D2 = (O - T) * activation(Z2, True)

    # Calculate Delta of hidden layer
    D1 = np.dot(W2, D2) * activation(Z1, True)

    # Calculate Derivatives w.r.t. W2
    DerW2 = np.dot(A1, np.transpose(D2))
    # Calculate Derivatives w.r.t. W1
    DerW1 = np.dot(A0, np.transpose(D1))

    # Calculate Derivatives w.r.t. B2
    DerB2 = D2
    # Calculate Derivatives w.r.t. B1
    DerB1 = D1

    "Update Weights and Biases"

    W1 -= LEARNING_RATE * DerW1
    B1 -= LEARNING_RATE * DerB1

    W2 -= LEARNING_RATE * DerW2
    B2 -= LEARNING_RATE * DerB2

# Show prediction

print("Time elapsed {}s".format(time.time() - start))    
plt.plot(Js)
plt.ylabel("Cost J")
plt.xlabel("Iterations")
plt.savefig('cost.pdf')
plt.show()
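Because J is logged for a single example per iteration, the raw curve cycles through the four XOR cases, which is exactly what produces the jumps. Averaging it over a window of four iterations (one full pass over the training set) makes the underlying trend easier to see. A minimal plotting sketch, assuming the Js list collected by the loop above:

import numpy as np
import matplotlib.pyplot as plt

# Smooth the per-example cost over a window of four iterations,
# i.e. one full pass over the four XOR training examples
window = 4
smoothed = np.convolve(np.asarray(Js), np.ones(window) / window, mode="valid")

plt.plot(Js, alpha=0.3, label="per-example cost J")
plt.plot(smoothed, label="moving average over {} examples".format(window))
plt.xlabel("Iterations")
plt.ylabel("Cost J")
plt.legend()
plt.show()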

To reduce the fluctuations in the cost function, one usually uses more than one data sample before performing an update (some averaged update), but I find that difficult with a set containing only four different training events. So, to sum up this rather long answer: your cost function jumps because it is computed for every single example and not for an average over several examples. However, the network's output follows the distribution of the XOR function quite well, so you don't need to change anything.
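If you do want to try the averaged update mentioned above, the same network can be trained on all four examples at once by stacking them as columns and averaging the gradients over the batch. This is not the original poster's code, just a minimal full-batch sketch in the same notation; whether and how fast it converges still depends on the random initialisation and the number of iterations:

import numpy as np

np.random.seed(2)

# Sigmoid and its derivative, same as in the original code
def activation(x, derivative=False):
    if derivative:
        a = activation(x)
        return a * (1 - a)
    return 1 / (1 + np.exp(-x))

INPUTS = np.array([[0, 1], [1, 0], [0, 0], [1, 1]])
TARGET = np.array([[1], [1], [0], [0]])

LAYER = [2, 3, 1]
LEARNING_RATE = 0.1
ITERATIONS = int(1e5)

W1 = np.random.rand(LAYER[0], LAYER[1])
W2 = np.random.rand(LAYER[1], LAYER[2])
B1 = np.random.rand(LAYER[1], 1)
B2 = np.random.rand(LAYER[2], 1)

A0 = np.transpose(INPUTS)   # (2, 4): all four examples as columns
T = np.transpose(TARGET)    # (1, 4)
m = A0.shape[1]             # number of training examples

for i in range(ITERATIONS):
    # Forward pass for the whole batch at once
    Z1 = np.dot(np.transpose(W1), A0) + B1
    A1 = activation(Z1)
    Z2 = np.dot(np.transpose(W2), A1) + B2
    A2 = activation(Z2)

    # Backward pass, gradients averaged over the batch
    D2 = (A2 - T) * activation(Z2, True)
    D1 = np.dot(W2, D2) * activation(Z1, True)

    W2 -= LEARNING_RATE * np.dot(A1, np.transpose(D2)) / m
    B2 -= LEARNING_RATE * np.mean(D2, axis=1, keepdims=True)
    W1 -= LEARNING_RATE * np.dot(A0, np.transpose(D1)) / m
    B1 -= LEARNING_RATE * np.mean(D1, axis=1, keepdims=True)

# Network output for all four XOR inputs after training
print(activation(np.dot(np.transpose(W2), activation(np.dot(np.transpose(W1), A0) + B1)) + B2))

Computed this way, the cost changes smoothly from one iteration to the next, because every update sees all four examples.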
