Neural Networks with Malignant Neurons: Robust Models for Smart Manufacturing

(Code and results of experiments related to the article submitted to ISM-2025: “International Conference on Industry of the Future and Smart Manufacturing”)

 

 

ABSTRACT

In smart manufacturing, robust and reliable neural networks are critical for ensuring seamless operations, particularly in adversarial or noisy environments. While regularization techniques like Dropout are widely used to improve generalization, their efficacy in adversarial settings remains limited. In this paper, we propose Adversarial Regularization (AR), a novel technique that enhances network robustness by designating certain neurons as “malignant” during training. These malignant neurons are trained not to minimize the loss but to maximize it, introducing probabilistic adversarial effects that actively challenge the network. Unlike Dropout, which deactivates neurons to prevent co-adaptation, AR simulates adversarial contributions to improve resilience against perturbations. We position AR within the broader landscape of regularization approaches, providing theoretical insights and justifications for its scaling factors. Additionally, we introduce a hybrid approach that combines Dropout and AR for enhanced flexibility. Experimental evaluations on public datasets for classification tasks show that AR achieves competitive performance on clean data and outperforms Dropout under adversarial noise, either alone or as part of a hybrid technique. Comparative analyses of accuracy and loss dynamics further demonstrate AR’s robustness and generalization capabilities. These findings establish AR as a promising technique for developing resilient neural networks tailored to the demands of smart manufacturing applications.
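
The abstract also mentions a hybrid approach that combines Dropout and AR. The article's exact hybrid formulation is not reproduced in the script below, so the following is only a minimal sketch of the idea, assuming standard nn.Dropout in the forward pass and the same per-weight gradient inversion used in the implementation below. The names HybridDropoutARNet, invert_malignant_gradients, and the p_drop value are illustrative, not taken from the article.

import torch
import torch.nn as nn

class HybridDropoutARNet(nn.Module):
    """Sketch of a hybrid Dropout + AR model: Dropout acts in the forward pass,
    while AR acts on the gradients in the training loop (see helper below)."""
    def __init__(self, input_size=784, hidden_size=128, output_size=10, p_drop=0.2):
        super().__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()
        self.dropout = nn.Dropout(p_drop)  # standard Dropout regularization
        self.fc2 = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        x = x.view(x.size(0), -1)  # flatten the input image
        x = self.dropout(self.relu(self.fc1(x)))
        return self.fc2(x)

def invert_malignant_gradients(model, p_malignant):
    """AR step: after loss.backward() and before optimizer.step(), flip the sign of the
    gradients of a random subset of weights so that they ascend (maximize) the loss."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            if 'weight' in name and param.grad is not None:
                malignant_mask = torch.rand_like(param) < p_malignant
                param.grad[malignant_mask] *= -1

In a training loop this sketch would be used exactly like the straightforward implementation below: compute the loss, call loss.backward(), call invert_malignant_gradients(model, p_malignant), then optimizer.step().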

 

 

 

 

 

 

 

=================================================================

 

0. “FASHION-MNIST” DATASET EXPERIMENT https://www.kaggle.com/datasets/zalando-research/fashionmnist

 

Straightforward Implementation of Adversarial Regularization

(not included in the article)

 

=================================================================

 

import torch

import torch.nn as nn

import torch.optim as optim

from torch.utils.data import DataLoader

from torchvision import datasets, transforms

import matplotlib.pyplot as plt

 

# Device configuration

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

 

# Data loading and preprocessing for Fashion-MNIST

transform = transforms.Compose([

    transforms.ToTensor(),

    transforms.Normalize((0.5,), (0.5,))

])

 

train_dataset = datasets.FashionMNIST(root='./data', train=True, download=True, transform=transform)

test_dataset = datasets.FashionMNIST(root='./data', train=False, download=True, transform=transform)

 

batch_size = 64

train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)

test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle=False)

 

# Model definition

class RegularizedNN(nn.Module):

    def __init__(self, input_size=784, hidden_size=128, output_size=10, p_malignant=0.0):

        super(RegularizedNN, self).__init__()

        self.fc1 = nn.Linear(input_size, hidden_size)

        self.relu = nn.ReLU()

        self.fc2 = nn.Linear(hidden_size, output_size)

        self.p_malignant = p_malignant

        # Note: Scaling factor is removed from the model definition

 

    def forward(self, x):

        x = x.view(x.size(0), -1)  # Flatten the input

        x = self.fc1(x)

        x = self.relu(x)

        x = self.fc2(x)

        return x

 

# Train and evaluate function with adversarial regularization

def train_and_evaluate_with_ar(num_epochs=50, lr=0.001, p_malignant=0.0, use_output_scaling=False):

    model = RegularizedNN(p_malignant=p_malignant).to(device)

    criterion = nn.CrossEntropyLoss()

    optimizer = optim.Adam(model.parameters(), lr=lr)

 

    train_acc_list = []

    test_acc_list = []

    train_loss_list = []  # Store train losses

    test_loss_list = [] # Store test losses

 

    for epoch in range(num_epochs):

        model.train()

        correct, total = 0, 0

        epoch_train_loss = 0.0

 

        for inputs, labels in train_loader:

            inputs, labels = inputs.to(device), labels.to(device)

 

            outputs = model(inputs)

            loss = criterion(outputs, labels)  # No scaling before the loss calculation

 

            optimizer.zero_grad()

            loss.backward()

 

            with torch.no_grad():

                for name, param in model.named_parameters():

                    if 'weight' in name and param.grad is not None:

                        # Sample a fresh malignant mask over this weight tensor at each training step

                        malignant_mask = (torch.rand(param.shape, device=device) < p_malignant)

                        # Invert the gradients of malignant weights only; healthy weight gradients are left untouched

                        param.grad[malignant_mask] *= -1

 

            optimizer.step()

           

            epoch_train_loss += loss.item()

 

            _, predicted = torch.max(outputs, 1)

            total += labels.size(0)

            correct += (predicted == labels).sum().item()

       

        avg_train_loss = epoch_train_loss / len(train_loader) # compute average loss

        train_loss_list.append(avg_train_loss)  # Store average train loss

        train_acc = correct / total

        train_acc_list.append(train_acc)

        print(f"Epoch {epoch+1}: Train Accuracy = {train_acc:.4f}, Train Loss = {avg_train_loss:.4f}")

 

        model.eval()

        correct, total = 0, 0

        epoch_test_loss = 0.0

        with torch.no_grad():

            for inputs, labels in test_loader:

                inputs, labels = inputs.to(device), labels.to(device)

                outputs = model(inputs)

                # Optionally rescale the logits at test time by (1 - 2 * p_malignant): each weight's
                # gradient is inverted with probability p_malignant during training, so the expected
                # gradient is shrunk by (1 - p_malignant) - p_malignant = 1 - 2 * p_malignant
                # (analogous to Dropout's inference-time rescaling)

                if use_output_scaling:

                    outputs = outputs * (1 - 2 * p_malignant)

                loss = criterion(outputs, labels)

                epoch_test_loss += loss.item()

                _, predicted = torch.max(outputs, 1)

                total += labels.size(0)

                correct += (predicted == labels).sum().item()

       

        avg_test_loss = epoch_test_loss / len(test_loader)

        test_loss_list.append(avg_test_loss)

        test_acc = correct / total

        test_acc_list.append(test_acc)

        print(f"Epoch {epoch+1}: Test Accuracy = {test_acc:.4f}, Test Loss = {avg_test_loss:.4f}")

 

    return train_acc_list, test_acc_list, train_loss_list, test_loss_list

 

# Run experiment

num_epochs = 50

p_malignant_values = [0.0, 0.1, 0.2, 0.3, 0.4]

use_output_scaling_values = [True, False]  # Evaluate both with and without scaling for each p_malignant

 

for p_malignant in p_malignant_values:

    for use_output_scaling in use_output_scaling_values:

          print(f"\nTraining with p_malignant = {p_malignant}, Output Scaling = {use_output_scaling}")

          train_acc, test_acc, train_loss, test_loss = train_and_evaluate_with_ar(num_epochs=num_epochs, p_malignant=p_malignant, use_output_scaling=use_output_scaling)

         

          # Print final results

          print(f"Final Train Accuracy: {train_acc[-1]:.4f}")

          print(f"Final Test Accuracy: {test_acc[-1]:.4f}")

 

          # Plot results

          plt.figure(figsize=(12, 6))

         

          # Plotting Accuracy

          plt.subplot(1, 2, 1) # 1 row, 2 columns, first plot

          plt.plot(range(1, num_epochs + 1), train_acc, label="Train Accuracy")

          plt.plot(range(1, num_epochs + 1), test_acc, label="Test Accuracy")

          plt.title(f"Accuracy (p={p_malignant}, scale={use_output_scaling})")

          plt.xlabel("Epoch")

          plt.ylabel("Accuracy")

          plt.legend()

          plt.grid()

 

          # Plotting Loss

          plt.subplot(1, 2, 2)  # 1 row, 2 columns, second plot

          plt.plot(range(1, num_epochs + 1), train_loss, label="Train Loss")

          plt.plot(range(1, num_epochs + 1), test_loss, label="Test Loss")

          plt.title(f"Loss (p={p_malignant}, scale={use_output_scaling})")

          plt.xlabel("Epoch")

          plt.ylabel("Loss")

          plt.legend()

          plt.grid()

         

          plt.tight_layout()

          plt.show()
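
# The abstract reports that AR outperforms Dropout under adversarial noise, but the script above
# only evaluates on clean test data. The helper below is a minimal, illustrative sketch of such a
# robustness probe under the assumption of additive Gaussian input noise (the noise model and the
# sigma value are assumptions, not the article's evaluation protocol); it reuses the device
# variable defined above.

def evaluate_under_noise(model, loader, sigma=0.3):
    """Accuracy when zero-mean Gaussian noise with std sigma is added to the normalized inputs."""
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for inputs, labels in loader:
            inputs, labels = inputs.to(device), labels.to(device)
            noisy_inputs = inputs + sigma * torch.randn_like(inputs)
            outputs = model(noisy_inputs)
            _, predicted = torch.max(outputs, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    return correct / total

# Example usage (the trained model would first have to be exposed by
# train_and_evaluate_with_ar, e.g. by also returning it):
# noisy_acc = evaluate_under_noise(model, test_loader, sigma=0.3)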

 

 

=================================================================

OUTCOME (run with 50 epochs):

=================================================================

 

Training with p_malignant = 0.0, Output Scaling = True

Epoch 1: Train Accuracy = 0.8224, Train Loss = 0.4944

Epoch 1: Test Accuracy = 0.8404, Test Loss = 0.4382

Epoch 2: Train Accuracy = 0.8621, Train Loss = 0.3779

Epoch 2: Test Accuracy = 0.8474, Test Loss = 0.4318

Epoch 3: Train Accuracy = 0.8742, Train Loss = 0.3438

Epoch 3: Test Accuracy = 0.8565, Test Loss = 0.3985

Epoch 4: Train Accuracy = 0.8839, Train Loss = 0.3169

Epoch 4: Test Accuracy = 0.8643, Test Loss = 0.3717

Epoch 5: Train Accuracy = 0.8894, Train Loss = 0.3007

Epoch 5: Test Accuracy = 0.8691, Test Loss = 0.3712

Epoch 6: Train Accuracy = 0.8946, Train Loss = 0.2852

Epoch 6: Test Accuracy = 0.8633, Test Loss = 0.3694

Epoch 7: Train Accuracy = 0.8996, Train Loss = 0.2737

Epoch 7: Test Accuracy = 0.8680, Test Loss = 0.3667

Epoch 8: Train Accuracy = 0.9048, Train Loss = 0.2609

Epoch 8: Test Accuracy = 0.8766, Test Loss = 0.3582

Epoch 9: Train Accuracy = 0.9069, Train Loss = 0.2509

Epoch 9: Test Accuracy = 0.8803, Test Loss = 0.3421

Epoch 10: Train Accuracy = 0.9094, Train Loss = 0.2433

Epoch 10: Test Accuracy = 0.8795, Test Loss = 0.3466

Epoch 11: Train Accuracy = 0.9126, Train Loss = 0.2349

Epoch 11: Test Accuracy = 0.8817, Test Loss = 0.3453

Epoch 12: Train Accuracy = 0.9166, Train Loss = 0.2250

Epoch 12: Test Accuracy = 0.8756, Test Loss = 0.3597

Epoch 13: Train Accuracy = 0.9182, Train Loss = 0.2177

Epoch 13: Test Accuracy = 0.8727, Test Loss = 0.3825

Epoch 14: Train Accuracy = 0.9208, Train Loss = 0.2132

Epoch 14: Test Accuracy = 0.8774, Test Loss = 0.3547

Epoch 15: Train Accuracy = 0.9224, Train Loss = 0.2055

Epoch 15: Test Accuracy = 0.8844, Test Loss = 0.3471

Epoch 16: Train Accuracy = 0.9242, Train Loss = 0.1998

Epoch 16: Test Accuracy = 0.8820, Test Loss = 0.3553

Epoch 17: Train Accuracy = 0.9277, Train Loss = 0.1937

Epoch 17: Test Accuracy = 0.8832, Test Loss = 0.3605

Epoch 18: Train Accuracy = 0.9291, Train Loss = 0.1881

Epoch 18: Test Accuracy = 0.8797, Test Loss = 0.3692

Epoch 19: Train Accuracy = 0.9322, Train Loss = 0.1815

Epoch 19: Test Accuracy = 0.8806, Test Loss = 0.3646

Epoch 20: Train Accuracy = 0.9339, Train Loss = 0.1786

Epoch 20: Test Accuracy = 0.8817, Test Loss = 0.3760

Epoch 21: Train Accuracy = 0.9345, Train Loss = 0.1733

Epoch 21: Test Accuracy = 0.8735, Test Loss = 0.3917

Epoch 22: Train Accuracy = 0.9370, Train Loss = 0.1680

Epoch 22: Test Accuracy = 0.8826, Test Loss = 0.3820

Epoch 23: Train Accuracy = 0.9375, Train Loss = 0.1644

Epoch 23: Test Accuracy = 0.8733, Test Loss = 0.4102

Epoch 24: Train Accuracy = 0.9416, Train Loss = 0.1587

Epoch 24: Test Accuracy = 0.8784, Test Loss = 0.3999

Epoch 25: Train Accuracy = 0.9417, Train Loss = 0.1568

Epoch 25: Test Accuracy = 0.8796, Test Loss = 0.3976

Epoch 26: Train Accuracy = 0.9432, Train Loss = 0.1523

Epoch 26: Test Accuracy = 0.8810, Test Loss = 0.3890

Epoch 27: Train Accuracy = 0.9450, Train Loss = 0.1484

Epoch 27: Test Accuracy = 0.8849, Test Loss = 0.3993

Epoch 28: Train Accuracy = 0.9466, Train Loss = 0.1435

Epoch 28: Test Accuracy = 0.8823, Test Loss = 0.4096

Epoch 29: Train Accuracy = 0.9457, Train Loss = 0.1441

Epoch 29: Test Accuracy = 0.8900, Test Loss = 0.4034

Epoch 30: Train Accuracy = 0.9493, Train Loss = 0.1360

Epoch 30: Test Accuracy = 0.8814, Test Loss = 0.4372

Epoch 31: Train Accuracy = 0.9508, Train Loss = 0.1327

Epoch 31: Test Accuracy = 0.8862, Test Loss = 0.4111

Epoch 32: Train Accuracy = 0.9507, Train Loss = 0.1335

Epoch 32: Test Accuracy = 0.8782, Test Loss = 0.4469

Epoch 33: Train Accuracy = 0.9516, Train Loss = 0.1287

Epoch 33: Test Accuracy = 0.8826, Test Loss = 0.4485

Epoch 34: Train Accuracy = 0.9547, Train Loss = 0.1222

Epoch 34: Test Accuracy = 0.8847, Test Loss = 0.4486

Epoch 35: Train Accuracy = 0.9521, Train Loss = 0.1266

Epoch 35: Test Accuracy = 0.8803, Test Loss = 0.4789

Epoch 36: Train Accuracy = 0.9566, Train Loss = 0.1183

Epoch 36: Test Accuracy = 0.8785, Test Loss = 0.4739

Epoch 37: Train Accuracy = 0.9560, Train Loss = 0.1167

Epoch 37: Test Accuracy = 0.8875, Test Loss = 0.4501

Epoch 38: Train Accuracy = 0.9571, Train Loss = 0.1169

Epoch 38: Test Accuracy = 0.8865, Test Loss = 0.4657

Epoch 39: Train Accuracy = 0.9577, Train Loss = 0.1137

Epoch 39: Test Accuracy = 0.8860, Test Loss = 0.4640

Epoch 40: Train Accuracy = 0.9604, Train Loss = 0.1071

Epoch 40: Test Accuracy = 0.8798, Test Loss = 0.4983

Epoch 41: Train Accuracy = 0.9599, Train Loss = 0.1063

Epoch 41: Test Accuracy = 0.8789, Test Loss = 0.4931

Epoch 42: Train Accuracy = 0.9624, Train Loss = 0.1036

Epoch 42: Test Accuracy = 0.8814, Test Loss = 0.5005

Epoch 43: Train Accuracy = 0.9617, Train Loss = 0.1022

Epoch 43: Test Accuracy = 0.8850, Test Loss = 0.5076

Epoch 44: Train Accuracy = 0.9623, Train Loss = 0.1016

Epoch 44: Test Accuracy = 0.8842, Test Loss = 0.5017

Epoch 45: Train Accuracy = 0.9651, Train Loss = 0.0953

Epoch 45: Test Accuracy = 0.8860, Test Loss = 0.4986

Epoch 46: Train Accuracy = 0.9645, Train Loss = 0.0957

Epoch 46: Test Accuracy = 0.8837, Test Loss = 0.5244

Epoch 47: Train Accuracy = 0.9643, Train Loss = 0.0970

Epoch 47: Test Accuracy = 0.8808, Test Loss = 0.5458

Epoch 48: Train Accuracy = 0.9659, Train Loss = 0.0919

Epoch 48: Test Accuracy = 0.8843, Test Loss = 0.5408

Epoch 49: Train Accuracy = 0.9681, Train Loss = 0.0875

Epoch 49: Test Accuracy = 0.8852, Test Loss = 0.5248

Epoch 50: Train Accuracy = 0.9669, Train Loss = 0.0888

Epoch 50: Test Accuracy = 0.8794, Test Loss = 0.5548

Final Train Accuracy:    0.9669

Final Test Accuracy:     0.8794

 

 

Training with p_malignant = 0.0, Output Scaling = False

Epoch 1: Train Accuracy = 0.8189, Train Loss = 0.5014

Epoch 1: Test Accuracy = 0.8461, Test Loss = 0.4227

Epoch 2: Train Accuracy = 0.8608, Train Loss = 0.3794

Epoch 2: Test Accuracy = 0.8625, Test Loss = 0.3841

Epoch 3: Train Accuracy = 0.8744, Train Loss = 0.3413

Epoch 3: Test Accuracy = 0.8651, Test Loss = 0.3714

Epoch 4: Train Accuracy = 0.8839, Train Loss = 0.3156

Epoch 4: Test Accuracy = 0.8691, Test Loss = 0.3597

Epoch 5: Train Accuracy = 0.8912, Train Loss = 0.2957

Epoch 5: Test Accuracy = 0.8716, Test Loss = 0.3608

Epoch 6: Train Accuracy = 0.8952, Train Loss = 0.2845

Epoch 6: Test Accuracy = 0.8778, Test Loss = 0.3405

Epoch 7: Train Accuracy = 0.9000, Train Loss = 0.2706

Epoch 7: Test Accuracy = 0.8731, Test Loss = 0.3464

Epoch 8: Train Accuracy = 0.9039, Train Loss = 0.2580

Epoch 8: Test Accuracy = 0.8716, Test Loss = 0.3572

Epoch 9: Train Accuracy = 0.9064, Train Loss = 0.2510

Epoch 9: Test Accuracy = 0.8760, Test Loss = 0.3572

Epoch 10: Train Accuracy = 0.9112, Train Loss = 0.2393

Epoch 10: Test Accuracy = 0.8770, Test Loss = 0.3557

Epoch 11: Train Accuracy = 0.9144, Train Loss = 0.2314

Epoch 11: Test Accuracy = 0.8652, Test Loss = 0.3803

Epoch 12: Train Accuracy = 0.9156, Train Loss = 0.2243

Epoch 12: Test Accuracy = 0.8755, Test Loss = 0.3533

Epoch 13: Train Accuracy = 0.9192, Train Loss = 0.2175

Epoch 13: Test Accuracy = 0.8807, Test Loss = 0.3412

Epoch 14: Train Accuracy = 0.9217, Train Loss = 0.2094

Epoch 14: Test Accuracy = 0.8810, Test Loss = 0.3606

Epoch 15: Train Accuracy = 0.9238, Train Loss = 0.2041

Epoch 15: Test Accuracy = 0.8793, Test Loss = 0.3622

Epoch 16: Train Accuracy = 0.9263, Train Loss = 0.1983

Epoch 16: Test Accuracy = 0.8768, Test Loss = 0.3630

Epoch 17: Train Accuracy = 0.9291, Train Loss = 0.1906

Epoch 17: Test Accuracy = 0.8809, Test Loss = 0.3634

Epoch 18: Train Accuracy = 0.9312, Train Loss = 0.1855

Epoch 18: Test Accuracy = 0.8857, Test Loss = 0.3594

Epoch 19: Train Accuracy = 0.9316, Train Loss = 0.1812

Epoch 19: Test Accuracy = 0.8835, Test Loss = 0.3574

Epoch 20: Train Accuracy = 0.9337, Train Loss = 0.1776

Epoch 20: Test Accuracy = 0.8839, Test Loss = 0.3646

Epoch 21: Train Accuracy = 0.9351, Train Loss = 0.1731

Epoch 21: Test Accuracy = 0.8824, Test Loss = 0.3717

Epoch 22: Train Accuracy = 0.9373, Train Loss = 0.1676

Epoch 22: Test Accuracy = 0.8821, Test Loss = 0.3739

Epoch 23: Train Accuracy = 0.9395, Train Loss = 0.1620

Epoch 23: Test Accuracy = 0.8833, Test Loss = 0.3858

Epoch 24: Train Accuracy = 0.9404, Train Loss = 0.1599

Epoch 24: Test Accuracy = 0.8800, Test Loss = 0.4016

Epoch 25: Train Accuracy = 0.9430, Train Loss = 0.1545

Epoch 25: Test Accuracy = 0.8805, Test Loss = 0.4092

Epoch 26: Train Accuracy = 0.9434, Train Loss = 0.1510

Epoch 26: Test Accuracy = 0.8805, Test Loss = 0.4121

Epoch 27: Train Accuracy = 0.9454, Train Loss = 0.1455

Epoch 27: Test Accuracy = 0.8839, Test Loss = 0.4026

Epoch 28: Train Accuracy = 0.9464, Train Loss = 0.1441

Epoch 28: Test Accuracy = 0.8854, Test Loss = 0.3939

Epoch 29: Train Accuracy = 0.9474, Train Loss = 0.1416

Epoch 29: Test Accuracy = 0.8843, Test Loss = 0.4101

Epoch 30: Train Accuracy = 0.9498, Train Loss = 0.1349

Epoch 30: Test Accuracy = 0.8877, Test Loss = 0.4126

Epoch 31: Train Accuracy = 0.9496, Train Loss = 0.1337

Epoch 31: Test Accuracy = 0.8818, Test Loss = 0.4367

Epoch 32: Train Accuracy = 0.9527, Train Loss = 0.1286

Epoch 32: Test Accuracy = 0.8761, Test Loss = 0.4555

Epoch 33: Train Accuracy = 0.9522, Train Loss = 0.1272

Epoch 33: Test Accuracy = 0.8873, Test Loss = 0.4194

Epoch 34: Train Accuracy = 0.9518, Train Loss = 0.1286

Epoch 34: Test Accuracy = 0.8797, Test Loss = 0.4586

Epoch 35: Train Accuracy = 0.9546, Train Loss = 0.1212

Epoch 35: Test Accuracy = 0.8790, Test Loss = 0.4549

Epoch 36: Train Accuracy = 0.9560, Train Loss = 0.1189

Epoch 36: Test Accuracy = 0.8801, Test Loss = 0.4605

Epoch 37: Train Accuracy = 0.9565, Train Loss = 0.1162

Epoch 37: Test Accuracy = 0.8840, Test Loss = 0.4569

Epoch 38: Train Accuracy = 0.9578, Train Loss = 0.1142

Epoch 38: Test Accuracy = 0.8837, Test Loss = 0.4636

Epoch 39: Train Accuracy = 0.9585, Train Loss = 0.1122

Epoch 39: Test Accuracy = 0.8871, Test Loss = 0.4683

Epoch 40: Train Accuracy = 0.9596, Train Loss = 0.1084

Epoch 40: Test Accuracy = 0.8805, Test Loss = 0.4860

Epoch 41: Train Accuracy = 0.9615, Train Loss = 0.1045

Epoch 41: Test Accuracy = 0.8841, Test Loss = 0.4910

Epoch 42: Train Accuracy = 0.9599, Train Loss = 0.1061

Epoch 42: Test Accuracy = 0.8816, Test Loss = 0.4999

Epoch 43: Train Accuracy = 0.9628, Train Loss = 0.0999

Epoch 43: Test Accuracy = 0.8831, Test Loss = 0.5012

Epoch 44: Train Accuracy = 0.9612, Train Loss = 0.1026

Epoch 44: Test Accuracy = 0.8826, Test Loss = 0.5062

Epoch 45: Train Accuracy = 0.9645, Train Loss = 0.0963

Epoch 45: Test Accuracy = 0.8856, Test Loss = 0.5015

Epoch 46: Train Accuracy = 0.9639, Train Loss = 0.0961

Epoch 46: Test Accuracy = 0.8846, Test Loss = 0.5248

Epoch 47: Train Accuracy = 0.9646, Train Loss = 0.0952

Epoch 47: Test Accuracy = 0.8814, Test Loss = 0.5352

Epoch 48: Train Accuracy = 0.9659, Train Loss = 0.0915

Epoch 48: Test Accuracy = 0.8791, Test Loss = 0.5346

Epoch 49: Train Accuracy = 0.9674, Train Loss = 0.0887

Epoch 49: Test Accuracy = 0.8799, Test Loss = 0.5464

Epoch 50: Train Accuracy = 0.9663, Train Loss = 0.0908

Epoch 50: Test Accuracy = 0.8778, Test Loss = 0.5737

Final Train Accuracy:    0.9663

Final Test Accuracy:     0.8778

 

Training with p_malignant = 0.1, Output Scaling = True

Epoch 1: Train Accuracy = 0.8178, Train Loss = 0.5085

Epoch 1: Test Accuracy = 0.8448, Test Loss = 0.4460

Epoch 2: Train Accuracy = 0.8601, Train Loss = 0.3836

Epoch 2: Test Accuracy = 0.8528, Test Loss = 0.4143

Epoch 3: Train Accuracy = 0.8738, Train Loss = 0.3460

Epoch 3: Test Accuracy = 0.8584, Test Loss = 0.3871

Epoch 4: Train Accuracy = 0.8814, Train Loss = 0.3247

Epoch 4: Test Accuracy = 0.8588, Test Loss = 0.3845

Epoch 5: Train Accuracy = 0.8877, Train Loss = 0.3054

Epoch 5: Test Accuracy = 0.8695, Test Loss = 0.3657

Epoch 6: Train Accuracy = 0.8938, Train Loss = 0.2894

Epoch 6: Test Accuracy = 0.8643, Test Loss = 0.3647

Epoch 7: Train Accuracy = 0.8962, Train Loss = 0.2788

Epoch 7: Test Accuracy = 0.8736, Test Loss = 0.3451

Epoch 8: Train Accuracy = 0.9000, Train Loss = 0.2691

Epoch 8: Test Accuracy = 0.8773, Test Loss = 0.3425

Epoch 9: Train Accuracy = 0.9045, Train Loss = 0.2588

Epoch 9: Test Accuracy = 0.8804, Test Loss = 0.3373

Epoch 10: Train Accuracy = 0.9074, Train Loss = 0.2495

Epoch 10: Test Accuracy = 0.8775, Test Loss = 0.3352

Epoch 11: Train Accuracy = 0.9110, Train Loss = 0.2403

Epoch 11: Test Accuracy = 0.8780, Test Loss = 0.3407

Epoch 12: Train Accuracy = 0.9135, Train Loss = 0.2328

Epoch 12: Test Accuracy = 0.8767, Test Loss = 0.3437

Epoch 13: Train Accuracy = 0.9158, Train Loss = 0.2268

Epoch 13: Test Accuracy = 0.8767, Test Loss = 0.3377

Epoch 14: Train Accuracy = 0.9170, Train Loss = 0.2198

Epoch 14: Test Accuracy = 0.8797, Test Loss = 0.3365

Epoch 15: Train Accuracy = 0.9211, Train Loss = 0.2121

Epoch 15: Test Accuracy = 0.8787, Test Loss = 0.3406

Epoch 16: Train Accuracy = 0.9216, Train Loss = 0.2087

Epoch 16: Test Accuracy = 0.8754, Test Loss = 0.3497

Epoch 17: Train Accuracy = 0.9249, Train Loss = 0.2009

Epoch 17: Test Accuracy = 0.8788, Test Loss = 0.3451

Epoch 18: Train Accuracy = 0.9264, Train Loss = 0.1957

Epoch 18: Test Accuracy = 0.8777, Test Loss = 0.3441

Epoch 19: Train Accuracy = 0.9281, Train Loss = 0.1914

Epoch 19: Test Accuracy = 0.8819, Test Loss = 0.3338

Epoch 20: Train Accuracy = 0.9301, Train Loss = 0.1850

Epoch 20: Test Accuracy = 0.8803, Test Loss = 0.3365

Epoch 21: Train Accuracy = 0.9315, Train Loss = 0.1814

Epoch 21: Test Accuracy = 0.8785, Test Loss = 0.3443

Epoch 22: Train Accuracy = 0.9331, Train Loss = 0.1768

Epoch 22: Test Accuracy = 0.8831, Test Loss = 0.3420

Epoch 23: Train Accuracy = 0.9347, Train Loss = 0.1736

Epoch 23: Test Accuracy = 0.8805, Test Loss = 0.3524

Epoch 24: Train Accuracy = 0.9365, Train Loss = 0.1706

Epoch 24: Test Accuracy = 0.8857, Test Loss = 0.3428

Epoch 25: Train Accuracy = 0.9392, Train Loss = 0.1631

Epoch 25: Test Accuracy = 0.8825, Test Loss = 0.3449

Epoch 26: Train Accuracy = 0.9395, Train Loss = 0.1620

Epoch 26: Test Accuracy = 0.8757, Test Loss = 0.3632

Epoch 27: Train Accuracy = 0.9417, Train Loss = 0.1569

Epoch 27: Test Accuracy = 0.8805, Test Loss = 0.3536

Epoch 28: Train Accuracy = 0.9412, Train Loss = 0.1550

Epoch 28: Test Accuracy = 0.8771, Test Loss = 0.3754

Epoch 29: Train Accuracy = 0.9446, Train Loss = 0.1496

Epoch 29: Test Accuracy = 0.8822, Test Loss = 0.3661

Epoch 30: Train Accuracy = 0.9446, Train Loss = 0.1479

Epoch 30: Test Accuracy = 0.8835, Test Loss = 0.3559

Epoch 31: Train Accuracy = 0.9465, Train Loss = 0.1427

Epoch 31: Test Accuracy = 0.8835, Test Loss = 0.3706

Epoch 32: Train Accuracy = 0.9466, Train Loss = 0.1411

Epoch 32: Test Accuracy = 0.8850, Test Loss = 0.3655

Epoch 33: Train Accuracy = 0.9477, Train Loss = 0.1388

Epoch 33: Test Accuracy = 0.8819, Test Loss = 0.3730

Epoch 34: Train Accuracy = 0.9510, Train Loss = 0.1328

Epoch 34: Test Accuracy = 0.8816, Test Loss = 0.3789

Epoch 35: Train Accuracy = 0.9507, Train Loss = 0.1311

Epoch 35: Test Accuracy = 0.8792, Test Loss = 0.4032

Epoch 36: Train Accuracy = 0.9525, Train Loss = 0.1270

Epoch 36: Test Accuracy = 0.8816, Test Loss = 0.3870

Epoch 37: Train Accuracy = 0.9527, Train Loss = 0.1253

Epoch 37: Test Accuracy = 0.8880, Test Loss = 0.3750

Epoch 38: Train Accuracy = 0.9536, Train Loss = 0.1236

Epoch 38: Test Accuracy = 0.8779, Test Loss = 0.4020

Epoch 39: Train Accuracy = 0.9543, Train Loss = 0.1207

Epoch 39: Test Accuracy = 0.8811, Test Loss = 0.4004

Epoch 40: Train Accuracy = 0.9563, Train Loss = 0.1190

Epoch 40: Test Accuracy = 0.8812, Test Loss = 0.4081

Epoch 41: Train Accuracy = 0.9573, Train Loss = 0.1145

Epoch 41: Test Accuracy = 0.8812, Test Loss = 0.3989

Epoch 42: Train Accuracy = 0.9565, Train Loss = 0.1153

Epoch 42: Test Accuracy = 0.8826, Test Loss = 0.4158

Epoch 43: Train Accuracy = 0.9590, Train Loss = 0.1094

Epoch 43: Test Accuracy = 0.8779, Test Loss = 0.4211

Epoch 44: Train Accuracy = 0.9587, Train Loss = 0.1103

Epoch 44: Test Accuracy = 0.8816, Test Loss = 0.4181

Epoch 45: Train Accuracy = 0.9608, Train Loss = 0.1073

Epoch 45: Test Accuracy = 0.8793, Test Loss = 0.4271

Epoch 46: Train Accuracy = 0.9614, Train Loss = 0.1047

Epoch 46: Test Accuracy = 0.8833, Test Loss = 0.4220

Epoch 47: Train Accuracy = 0.9606, Train Loss = 0.1059

Epoch 47: Test Accuracy = 0.8826, Test Loss = 0.4446

Epoch 48: Train Accuracy = 0.9626, Train Loss = 0.1018

Epoch 48: Test Accuracy = 0.8842, Test Loss = 0.4311

Epoch 49: Train Accuracy = 0.9629, Train Loss = 0.0992

Epoch 49: Test Accuracy = 0.8874, Test Loss = 0.4201

Epoch 50: Train Accuracy = 0.9627, Train Loss = 0.0986

Epoch 50: Test Accuracy = 0.8817, Test Loss = 0.4400

Final Train Accuracy:    0.9627

Final Test Accuracy:     0.8817

 

Training with p_malignant = 0.1, Output Scaling = False

Epoch 1: Train Accuracy = 0.8194, Train Loss = 0.5066

Epoch 1: Test Accuracy = 0.8463, Test Loss = 0.4316

Epoch 2: Train Accuracy = 0.8584, Train Loss = 0.3894

Epoch 2: Test Accuracy = 0.8583, Test Loss = 0.3973

Epoch 3: Train Accuracy = 0.8716, Train Loss = 0.3494

Epoch 3: Test Accuracy = 0.8609, Test Loss = 0.3866

Epoch 4: Train Accuracy = 0.8811, Train Loss = 0.3228

Epoch 4: Test Accuracy = 0.8667, Test Loss = 0.3739

Epoch 5: Train Accuracy = 0.8869, Train Loss = 0.3051

Epoch 5: Test Accuracy = 0.8726, Test Loss = 0.3571

Epoch 6: Train Accuracy = 0.8929, Train Loss = 0.2919

Epoch 6: Test Accuracy = 0.8715, Test Loss = 0.3589

Epoch 7: Train Accuracy = 0.8973, Train Loss = 0.2775

Epoch 7: Test Accuracy = 0.8646, Test Loss = 0.3668

Epoch 8: Train Accuracy = 0.9007, Train Loss = 0.2658

Epoch 8: Test Accuracy = 0.8691, Test Loss = 0.3692

Epoch 9: Train Accuracy = 0.9043, Train Loss = 0.2575

Epoch 9: Test Accuracy = 0.8773, Test Loss = 0.3467

Epoch 10: Train Accuracy = 0.9091, Train Loss = 0.2466

Epoch 10: Test Accuracy = 0.8802, Test Loss = 0.3392

Epoch 11: Train Accuracy = 0.9109, Train Loss = 0.2401

Epoch 11: Test Accuracy = 0.8807, Test Loss = 0.3497

Epoch 12: Train Accuracy = 0.9144, Train Loss = 0.2309

Epoch 12: Test Accuracy = 0.8810, Test Loss = 0.3510

Epoch 13: Train Accuracy = 0.9163, Train Loss = 0.2253

Epoch 13: Test Accuracy = 0.8773, Test Loss = 0.3589

Epoch 14: Train Accuracy = 0.9208, Train Loss = 0.2137

Epoch 14: Test Accuracy = 0.8769, Test Loss = 0.3687

Epoch 15: Train Accuracy = 0.9214, Train Loss = 0.2109

Epoch 15: Test Accuracy = 0.8765, Test Loss = 0.3645

Epoch 16: Train Accuracy = 0.9236, Train Loss = 0.2047

Epoch 16: Test Accuracy = 0.8807, Test Loss = 0.3715

Epoch 17: Train Accuracy = 0.9256, Train Loss = 0.2001

Epoch 17: Test Accuracy = 0.8862, Test Loss = 0.3448

Epoch 18: Train Accuracy = 0.9269, Train Loss = 0.1934

Epoch 18: Test Accuracy = 0.8802, Test Loss = 0.3679

Epoch 19: Train Accuracy = 0.9290, Train Loss = 0.1872

Epoch 19: Test Accuracy = 0.8763, Test Loss = 0.3820

Epoch 20: Train Accuracy = 0.9308, Train Loss = 0.1824

Epoch 20: Test Accuracy = 0.8843, Test Loss = 0.3505

Epoch 21: Train Accuracy = 0.9329, Train Loss = 0.1761

Epoch 21: Test Accuracy = 0.8823, Test Loss = 0.3699

Epoch 22: Train Accuracy = 0.9357, Train Loss = 0.1712

Epoch 22: Test Accuracy = 0.8797, Test Loss = 0.3847

Epoch 23: Train Accuracy = 0.9365, Train Loss = 0.1689

Epoch 23: Test Accuracy = 0.8784, Test Loss = 0.3878

Epoch 24: Train Accuracy = 0.9374, Train Loss = 0.1657

Epoch 24: Test Accuracy = 0.8858, Test Loss = 0.3871

Epoch 25: Train Accuracy = 0.9400, Train Loss = 0.1586

Epoch 25: Test Accuracy = 0.8841, Test Loss = 0.3867

Epoch 26: Train Accuracy = 0.9412, Train Loss = 0.1551

Epoch 26: Test Accuracy = 0.8842, Test Loss = 0.3924

Epoch 27: Train Accuracy = 0.9428, Train Loss = 0.1523

Epoch 27: Test Accuracy = 0.8804, Test Loss = 0.4160

Epoch 28: Train Accuracy = 0.9437, Train Loss = 0.1499

Epoch 28: Test Accuracy = 0.8797, Test Loss = 0.4252

Epoch 29: Train Accuracy = 0.9452, Train Loss = 0.1458

Epoch 29: Test Accuracy = 0.8864, Test Loss = 0.4095

Epoch 30: Train Accuracy = 0.9458, Train Loss = 0.1430

Epoch 30: Test Accuracy = 0.8833, Test Loss = 0.4149

Epoch 31: Train Accuracy = 0.9477, Train Loss = 0.1394

Epoch 31: Test Accuracy = 0.8825, Test Loss = 0.4202

Epoch 32: Train Accuracy = 0.9476, Train Loss = 0.1366

Epoch 32: Test Accuracy = 0.8835, Test Loss = 0.4219

Epoch 33: Train Accuracy = 0.9492, Train Loss = 0.1338

Epoch 33: Test Accuracy = 0.8841, Test Loss = 0.4438

Epoch 34: Train Accuracy = 0.9521, Train Loss = 0.1290

Epoch 34: Test Accuracy = 0.8856, Test Loss = 0.4247

Epoch 35: Train Accuracy = 0.9522, Train Loss = 0.1278

Epoch 35: Test Accuracy = 0.8817, Test Loss = 0.4477

Epoch 36: Train Accuracy = 0.9536, Train Loss = 0.1247

Epoch 36: Test Accuracy = 0.8802, Test Loss = 0.4693

Epoch 37: Train Accuracy = 0.9547, Train Loss = 0.1211

Epoch 37: Test Accuracy = 0.8808, Test Loss = 0.4520

Epoch 38: Train Accuracy = 0.9553, Train Loss = 0.1195

Epoch 38: Test Accuracy = 0.8813, Test Loss = 0.4631

Epoch 39: Train Accuracy = 0.9553, Train Loss = 0.1179

Epoch 39: Test Accuracy = 0.8787, Test Loss = 0.4824

Epoch 40: Train Accuracy = 0.9565, Train Loss = 0.1156

Epoch 40: Test Accuracy = 0.8828, Test Loss = 0.4803

Epoch 41: Train Accuracy = 0.9589, Train Loss = 0.1114

Epoch 41: Test Accuracy = 0.8827, Test Loss = 0.4690

Epoch 42: Train Accuracy = 0.9588, Train Loss = 0.1116

Epoch 42: Test Accuracy = 0.8835, Test Loss = 0.4645

Epoch 43: Train Accuracy = 0.9597, Train Loss = 0.1073

Epoch 43: Test Accuracy = 0.8829, Test Loss = 0.4637

Epoch 44: Train Accuracy = 0.9605, Train Loss = 0.1064

Epoch 44: Test Accuracy = 0.8838, Test Loss = 0.4748

Epoch 45: Train Accuracy = 0.9621, Train Loss = 0.1042

Epoch 45: Test Accuracy = 0.8832, Test Loss = 0.4876

Epoch 46: Train Accuracy = 0.9634, Train Loss = 0.0988

Epoch 46: Test Accuracy = 0.8798, Test Loss = 0.5034

Epoch 47: Train Accuracy = 0.9612, Train Loss = 0.1027

Epoch 47: Test Accuracy = 0.8812, Test Loss = 0.4960

Epoch 48: Train Accuracy = 0.9635, Train Loss = 0.0976

Epoch 48: Test Accuracy = 0.8780, Test Loss = 0.5129

Epoch 49: Train Accuracy = 0.9643, Train Loss = 0.0957

Epoch 49: Test Accuracy = 0.8829, Test Loss = 0.5147

Epoch 50: Train Accuracy = 0.9657, Train Loss = 0.0915

Epoch 50: Test Accuracy = 0.8811, Test Loss = 0.5242

Final Train Accuracy:    0.9657

Final Test Accuracy:     0.8811

 

Training with p_malignant = 0.2, Output Scaling = True

Epoch 1: Train Accuracy = 0.8133, Train Loss = 0.5270

Epoch 1: Test Accuracy = 0.8390, Test Loss = 0.5200

Epoch 2: Train Accuracy = 0.8568, Train Loss = 0.3987

Epoch 2: Test Accuracy = 0.8522, Test Loss = 0.4763

Epoch 3: Train Accuracy = 0.8685, Train Loss = 0.3613

Epoch 3: Test Accuracy = 0.8678, Test Loss = 0.4392

Epoch 4: Train Accuracy = 0.8764, Train Loss = 0.3365

Epoch 4: Test Accuracy = 0.8660, Test Loss = 0.4308

Epoch 5: Train Accuracy = 0.8837, Train Loss = 0.3184

Epoch 5: Test Accuracy = 0.8724, Test Loss = 0.4004

Epoch 6: Train Accuracy = 0.8893, Train Loss = 0.3032

Epoch 6: Test Accuracy = 0.8603, Test Loss = 0.4129

Epoch 7: Train Accuracy = 0.8932, Train Loss = 0.2912

Epoch 7: Test Accuracy = 0.8686, Test Loss = 0.3864

Epoch 8: Train Accuracy = 0.8972, Train Loss = 0.2808

Epoch 8: Test Accuracy = 0.8660, Test Loss = 0.3924

Epoch 9: Train Accuracy = 0.9005, Train Loss = 0.2693

Epoch 9: Test Accuracy = 0.8746, Test Loss = 0.3714

Epoch 10: Train Accuracy = 0.9030, Train Loss = 0.2627

Epoch 10: Test Accuracy = 0.8643, Test Loss = 0.3848

Epoch 11: Train Accuracy = 0.9062, Train Loss = 0.2532

Epoch 11: Test Accuracy = 0.8773, Test Loss = 0.3595

Epoch 12: Train Accuracy = 0.9098, Train Loss = 0.2454

Epoch 12: Test Accuracy = 0.8719, Test Loss = 0.3530

Epoch 13: Train Accuracy = 0.9104, Train Loss = 0.2410

Epoch 13: Test Accuracy = 0.8831, Test Loss = 0.3452

Epoch 14: Train Accuracy = 0.9142, Train Loss = 0.2323

Epoch 14: Test Accuracy = 0.8769, Test Loss = 0.3489

Epoch 15: Train Accuracy = 0.9159, Train Loss = 0.2258

Epoch 15: Test Accuracy = 0.8734, Test Loss = 0.3550

Epoch 16: Train Accuracy = 0.9181, Train Loss = 0.2204

Epoch 16: Test Accuracy = 0.8804, Test Loss = 0.3447

Epoch 17: Train Accuracy = 0.9185, Train Loss = 0.2149

Epoch 17: Test Accuracy = 0.8790, Test Loss = 0.3439

Epoch 18: Train Accuracy = 0.9218, Train Loss = 0.2092

Epoch 18: Test Accuracy = 0.8844, Test Loss = 0.3330

Epoch 19: Train Accuracy = 0.9230, Train Loss = 0.2034

Epoch 19: Test Accuracy = 0.8773, Test Loss = 0.3386

Epoch 20: Train Accuracy = 0.9246, Train Loss = 0.2014

Epoch 20: Test Accuracy = 0.8843, Test Loss = 0.3303

Epoch 21: Train Accuracy = 0.9266, Train Loss = 0.1953

Epoch 21: Test Accuracy = 0.8797, Test Loss = 0.3353

Epoch 22: Train Accuracy = 0.9285, Train Loss = 0.1915

Epoch 22: Test Accuracy = 0.8787, Test Loss = 0.3399

Epoch 23: Train Accuracy = 0.9303, Train Loss = 0.1867

Epoch 23: Test Accuracy = 0.8804, Test Loss = 0.3297

Epoch 24: Train Accuracy = 0.9320, Train Loss = 0.1820

Epoch 24: Test Accuracy = 0.8827, Test Loss = 0.3322

Epoch 25: Train Accuracy = 0.9332, Train Loss = 0.1789

Epoch 25: Test Accuracy = 0.8824, Test Loss = 0.3319

Epoch 26: Train Accuracy = 0.9344, Train Loss = 0.1736

Epoch 26: Test Accuracy = 0.8836, Test Loss = 0.3318

Epoch 27: Train Accuracy = 0.9352, Train Loss = 0.1720

Epoch 27: Test Accuracy = 0.8766, Test Loss = 0.3464

Epoch 28: Train Accuracy = 0.9360, Train Loss = 0.1706

Epoch 28: Test Accuracy = 0.8788, Test Loss = 0.3365

Epoch 29: Train Accuracy = 0.9393, Train Loss = 0.1631

Epoch 29: Test Accuracy = 0.8835, Test Loss = 0.3268

Epoch 30: Train Accuracy = 0.9390, Train Loss = 0.1628

Epoch 30: Test Accuracy = 0.8840, Test Loss = 0.3314

Epoch 31: Train Accuracy = 0.9405, Train Loss = 0.1588

Epoch 31: Test Accuracy = 0.8839, Test Loss = 0.3310

Epoch 32: Train Accuracy = 0.9420, Train Loss = 0.1547

Epoch 32: Test Accuracy = 0.8804, Test Loss = 0.3375

Epoch 33: Train Accuracy = 0.9438, Train Loss = 0.1499

Epoch 33: Test Accuracy = 0.8853, Test Loss = 0.3309

Epoch 34: Train Accuracy = 0.9443, Train Loss = 0.1500

Epoch 34: Test Accuracy = 0.8795, Test Loss = 0.3426

Epoch 35: Train Accuracy = 0.9467, Train Loss = 0.1445

Epoch 35: Test Accuracy = 0.8839, Test Loss = 0.3379

Epoch 36: Train Accuracy = 0.9452, Train Loss = 0.1450

Epoch 36: Test Accuracy = 0.8830, Test Loss = 0.3403

Epoch 37: Train Accuracy = 0.9477, Train Loss = 0.1407

Epoch 37: Test Accuracy = 0.8795, Test Loss = 0.3494

Epoch 38: Train Accuracy = 0.9485, Train Loss = 0.1363

Epoch 38: Test Accuracy = 0.8803, Test Loss = 0.3450

Epoch 39: Train Accuracy = 0.9483, Train Loss = 0.1361

Epoch 39: Test Accuracy = 0.8821, Test Loss = 0.3439

Epoch 40: Train Accuracy = 0.9503, Train Loss = 0.1329

Epoch 40: Test Accuracy = 0.8768, Test Loss = 0.3525

Epoch 41: Train Accuracy = 0.9519, Train Loss = 0.1285

Epoch 41: Test Accuracy = 0.8853, Test Loss = 0.3421

Epoch 42: Train Accuracy = 0.9516, Train Loss = 0.1268

Epoch 42: Test Accuracy = 0.8820, Test Loss = 0.3507

Epoch 43: Train Accuracy = 0.9544, Train Loss = 0.1227

Epoch 43: Test Accuracy = 0.8794, Test Loss = 0.3480

Epoch 44: Train Accuracy = 0.9541, Train Loss = 0.1217

Epoch 44: Test Accuracy = 0.8788, Test Loss = 0.3595

Epoch 45: Train Accuracy = 0.9548, Train Loss = 0.1195

Epoch 45: Test Accuracy = 0.8824, Test Loss = 0.3546

Epoch 46: Train Accuracy = 0.9564, Train Loss = 0.1164

Epoch 46: Test Accuracy = 0.8802, Test Loss = 0.3674

Epoch 47: Train Accuracy = 0.9564, Train Loss = 0.1170

Epoch 47: Test Accuracy = 0.8825, Test Loss = 0.3618

Epoch 48: Train Accuracy = 0.9578, Train Loss = 0.1135

Epoch 48: Test Accuracy = 0.8784, Test Loss = 0.3708

Epoch 49: Train Accuracy = 0.9585, Train Loss = 0.1114

Epoch 49: Test Accuracy = 0.8749, Test Loss = 0.3787

Epoch 50: Train Accuracy = 0.9578, Train Loss = 0.1132

Epoch 50: Test Accuracy = 0.8798, Test Loss = 0.3672

Final Train Accuracy:    0.9578

Final Test Accuracy:     0.8798

 

Training with p_malignant = 0.2, Output Scaling = False

Epoch 1: Train Accuracy = 0.8163, Train Loss = 0.5218

Epoch 1: Test Accuracy = 0.8266, Test Loss = 0.4667

Epoch 2: Train Accuracy = 0.8606, Train Loss = 0.3922

Epoch 2: Test Accuracy = 0.8508, Test Loss = 0.4151

Epoch 3: Train Accuracy = 0.8693, Train Loss = 0.3575

Epoch 3: Test Accuracy = 0.8554, Test Loss = 0.4058

Epoch 4: Train Accuracy = 0.8785, Train Loss = 0.3324

Epoch 4: Test Accuracy = 0.8645, Test Loss = 0.3770

Epoch 5: Train Accuracy = 0.8841, Train Loss = 0.3139

Epoch 5: Test Accuracy = 0.8709, Test Loss = 0.3650

Epoch 6: Train Accuracy = 0.8910, Train Loss = 0.2985

Epoch 6: Test Accuracy = 0.8751, Test Loss = 0.3575

Epoch 7: Train Accuracy = 0.8946, Train Loss = 0.2863

Epoch 7: Test Accuracy = 0.8739, Test Loss = 0.3556

Epoch 8: Train Accuracy = 0.8991, Train Loss = 0.2738

Epoch 8: Test Accuracy = 0.8711, Test Loss = 0.3608

Epoch 9: Train Accuracy = 0.9028, Train Loss = 0.2649

Epoch 9: Test Accuracy = 0.8604, Test Loss = 0.3789

Epoch 10: Train Accuracy = 0.9053, Train Loss = 0.2579

Epoch 10: Test Accuracy = 0.8770, Test Loss = 0.3515

Epoch 11: Train Accuracy = 0.9086, Train Loss = 0.2495

Epoch 11: Test Accuracy = 0.8715, Test Loss = 0.3603

Epoch 12: Train Accuracy = 0.9110, Train Loss = 0.2405

Epoch 12: Test Accuracy = 0.8795, Test Loss = 0.3545

Epoch 13: Train Accuracy = 0.9130, Train Loss = 0.2347

Epoch 13: Test Accuracy = 0.8778, Test Loss = 0.3568

Epoch 14: Train Accuracy = 0.9159, Train Loss = 0.2282

Epoch 14: Test Accuracy = 0.8837, Test Loss = 0.3470

Epoch 15: Train Accuracy = 0.9185, Train Loss = 0.2205

Epoch 15: Test Accuracy = 0.8819, Test Loss = 0.3511

Epoch 16: Train Accuracy = 0.9204, Train Loss = 0.2158

Epoch 16: Test Accuracy = 0.8754, Test Loss = 0.3672

Epoch 17: Train Accuracy = 0.9211, Train Loss = 0.2103

Epoch 17: Test Accuracy = 0.8866, Test Loss = 0.3448

Epoch 18: Train Accuracy = 0.9239, Train Loss = 0.2051

Epoch 18: Test Accuracy = 0.8856, Test Loss = 0.3489

Epoch 19: Train Accuracy = 0.9255, Train Loss = 0.2006

Epoch 19: Test Accuracy = 0.8850, Test Loss = 0.3578

Epoch 20: Train Accuracy = 0.9274, Train Loss = 0.1944

Epoch 20: Test Accuracy = 0.8826, Test Loss = 0.3686

Epoch 21: Train Accuracy = 0.9291, Train Loss = 0.1908

Epoch 21: Test Accuracy = 0.8809, Test Loss = 0.3667

Epoch 22: Train Accuracy = 0.9303, Train Loss = 0.1866

Epoch 22: Test Accuracy = 0.8831, Test Loss = 0.3747

Epoch 23: Train Accuracy = 0.9316, Train Loss = 0.1829

Epoch 23: Test Accuracy = 0.8853, Test Loss = 0.3649

Epoch 24: Train Accuracy = 0.9344, Train Loss = 0.1781

Epoch 24: Test Accuracy = 0.8847, Test Loss = 0.3703

Epoch 25: Train Accuracy = 0.9352, Train Loss = 0.1739

Epoch 25: Test Accuracy = 0.8851, Test Loss = 0.3825

Epoch 26: Train Accuracy = 0.9374, Train Loss = 0.1705

Epoch 26: Test Accuracy = 0.8851, Test Loss = 0.3763

Epoch 27: Train Accuracy = 0.9387, Train Loss = 0.1651

Epoch 27: Test Accuracy = 0.8809, Test Loss = 0.3897

Epoch 28: Train Accuracy = 0.9393, Train Loss = 0.1627

Epoch 28: Test Accuracy = 0.8794, Test Loss = 0.4143

Epoch 29: Train Accuracy = 0.9416, Train Loss = 0.1586

Epoch 29: Test Accuracy = 0.8880, Test Loss = 0.3857

Epoch 30: Train Accuracy = 0.9425, Train Loss = 0.1546

Epoch 30: Test Accuracy = 0.8829, Test Loss = 0.4012

Epoch 31: Train Accuracy = 0.9437, Train Loss = 0.1521

Epoch 31: Test Accuracy = 0.8822, Test Loss = 0.4109

Epoch 32: Train Accuracy = 0.9443, Train Loss = 0.1491

Epoch 32: Test Accuracy = 0.8755, Test Loss = 0.4270

Epoch 33: Train Accuracy = 0.9450, Train Loss = 0.1456

Epoch 33: Test Accuracy = 0.8887, Test Loss = 0.3998

Epoch 34: Train Accuracy = 0.9474, Train Loss = 0.1423

Epoch 34: Test Accuracy = 0.8884, Test Loss = 0.4116

Epoch 35: Train Accuracy = 0.9477, Train Loss = 0.1397

Epoch 35: Test Accuracy = 0.8871, Test Loss = 0.4131

Epoch 36: Train Accuracy = 0.9494, Train Loss = 0.1367

Epoch 36: Test Accuracy = 0.8860, Test Loss = 0.4107

Epoch 37: Train Accuracy = 0.9488, Train Loss = 0.1356

Epoch 37: Test Accuracy = 0.8807, Test Loss = 0.4458

Epoch 38: Train Accuracy = 0.9497, Train Loss = 0.1332

Epoch 38: Test Accuracy = 0.8857, Test Loss = 0.4320

Epoch 39: Train Accuracy = 0.9512, Train Loss = 0.1298

Epoch 39: Test Accuracy = 0.8839, Test Loss = 0.4449

Epoch 40: Train Accuracy = 0.9525, Train Loss = 0.1273

Epoch 40: Test Accuracy = 0.8888, Test Loss = 0.4433

Epoch 41: Train Accuracy = 0.9530, Train Loss = 0.1242

Epoch 41: Test Accuracy = 0.8802, Test Loss = 0.4483

Epoch 42: Train Accuracy = 0.9539, Train Loss = 0.1242

Epoch 42: Test Accuracy = 0.8812, Test Loss = 0.4543

Epoch 43: Train Accuracy = 0.9555, Train Loss = 0.1187

Epoch 43: Test Accuracy = 0.8860, Test Loss = 0.4465

Epoch 44: Train Accuracy = 0.9558, Train Loss = 0.1174

Epoch 44: Test Accuracy = 0.8754, Test Loss = 0.4977

Epoch 45: Train Accuracy = 0.9567, Train Loss = 0.1160

Epoch 45: Test Accuracy = 0.8794, Test Loss = 0.4924

Epoch 46: Train Accuracy = 0.9566, Train Loss = 0.1151

Epoch 46: Test Accuracy = 0.8837, Test Loss = 0.4760

Epoch 47: Train Accuracy = 0.9574, Train Loss = 0.1140

Epoch 47: Test Accuracy = 0.8826, Test Loss = 0.4887

Epoch 48: Train Accuracy = 0.9579, Train Loss = 0.1109

Epoch 48: Test Accuracy = 0.8829, Test Loss = 0.4905

Epoch 49: Train Accuracy = 0.9578, Train Loss = 0.1105

Epoch 49: Test Accuracy = 0.8880, Test Loss = 0.4691

Epoch 50: Train Accuracy = 0.9593, Train Loss = 0.1070

Epoch 50: Test Accuracy = 0.8840, Test Loss = 0.4911

Final Train Accuracy:    0.9593

Final Test Accuracy:     0.8840

 

Training with p_malignant = 0.3, Output Scaling = True

Epoch 1: Train Accuracy = 0.8076, Train Loss = 0.5544

Epoch 1: Test Accuracy = 0.8323, Test Loss = 0.7426

Epoch 2: Train Accuracy = 0.8541, Train Loss = 0.4097

Epoch 2: Test Accuracy = 0.8472, Test Loss = 0.6486

Epoch 3: Train Accuracy = 0.8652, Train Loss = 0.3750

Epoch 3: Test Accuracy = 0.8479, Test Loss = 0.6227

Epoch 4: Train Accuracy = 0.8738, Train Loss = 0.3510

Epoch 4: Test Accuracy = 0.8564, Test Loss = 0.5917

Epoch 5: Train Accuracy = 0.8799, Train Loss = 0.3328

Epoch 5: Test Accuracy = 0.8655, Test Loss = 0.5636

Epoch 6: Train Accuracy = 0.8841, Train Loss = 0.3180

Epoch 6: Test Accuracy = 0.8676, Test Loss = 0.5536

Epoch 7: Train Accuracy = 0.8878, Train Loss = 0.3077

Epoch 7: Test Accuracy = 0.8749, Test Loss = 0.5232

Epoch 8: Train Accuracy = 0.8919, Train Loss = 0.2961

Epoch 8: Test Accuracy = 0.8710, Test Loss = 0.5228

Epoch 9: Train Accuracy = 0.8946, Train Loss = 0.2879

Epoch 9: Test Accuracy = 0.8673, Test Loss = 0.5240

Epoch 10: Train Accuracy = 0.8985, Train Loss = 0.2798

Epoch 10: Test Accuracy = 0.8755, Test Loss = 0.4821

Epoch 11: Train Accuracy = 0.9005, Train Loss = 0.2717

Epoch 11: Test Accuracy = 0.8779, Test Loss = 0.4803

Epoch 12: Train Accuracy = 0.9023, Train Loss = 0.2660

Epoch 12: Test Accuracy = 0.8783, Test Loss = 0.4633

Epoch 13: Train Accuracy = 0.9051, Train Loss = 0.2593

Epoch 13: Test Accuracy = 0.8748, Test Loss = 0.4669

Epoch 14: Train Accuracy = 0.9079, Train Loss = 0.2535

Epoch 14: Test Accuracy = 0.8790, Test Loss = 0.4539

Epoch 15: Train Accuracy = 0.9096, Train Loss = 0.2480

Epoch 15: Test Accuracy = 0.8771, Test Loss = 0.4544

Epoch 16: Train Accuracy = 0.9117, Train Loss = 0.2411

Epoch 16: Test Accuracy = 0.8794, Test Loss = 0.4336

Epoch 17: Train Accuracy = 0.9125, Train Loss = 0.2366

Epoch 17: Test Accuracy = 0.8796, Test Loss = 0.4388

Epoch 18: Train Accuracy = 0.9155, Train Loss = 0.2311

Epoch 18: Test Accuracy = 0.8774, Test Loss = 0.4357

Epoch 19: Train Accuracy = 0.9158, Train Loss = 0.2285

Epoch 19: Test Accuracy = 0.8768, Test Loss = 0.4323

Epoch 20: Train Accuracy = 0.9181, Train Loss = 0.2220

Epoch 20: Test Accuracy = 0.8757, Test Loss = 0.4324

Epoch 21: Train Accuracy = 0.9204, Train Loss = 0.2172

Epoch 21: Test Accuracy = 0.8773, Test Loss = 0.4167

Epoch 22: Train Accuracy = 0.9202, Train Loss = 0.2159

Epoch 22: Test Accuracy = 0.8791, Test Loss = 0.4107

Epoch 23: Train Accuracy = 0.9206, Train Loss = 0.2119

Epoch 23: Test Accuracy = 0.8765, Test Loss = 0.4089

Epoch 24: Train Accuracy = 0.9217, Train Loss = 0.2083

Epoch 24: Test Accuracy = 0.8805, Test Loss = 0.4058

Epoch 25: Train Accuracy = 0.9250, Train Loss = 0.2043

Epoch 25: Test Accuracy = 0.8732, Test Loss = 0.4001

Epoch 26: Train Accuracy = 0.9262, Train Loss = 0.2006

Epoch 26: Test Accuracy = 0.8812, Test Loss = 0.3911

Epoch 27: Train Accuracy = 0.9259, Train Loss = 0.1976

Epoch 27: Test Accuracy = 0.8789, Test Loss = 0.3933

Epoch 28: Train Accuracy = 0.9277, Train Loss = 0.1938

Epoch 28: Test Accuracy = 0.8838, Test Loss = 0.3885

Epoch 29: Train Accuracy = 0.9273, Train Loss = 0.1939

Epoch 29: Test Accuracy = 0.8768, Test Loss = 0.3976

Epoch 30: Train Accuracy = 0.9294, Train Loss = 0.1885

Epoch 30: Test Accuracy = 0.8805, Test Loss = 0.3855

Epoch 31: Train Accuracy = 0.9322, Train Loss = 0.1840

Epoch 31: Test Accuracy = 0.8825, Test Loss = 0.3742

Epoch 32: Train Accuracy = 0.9318, Train Loss = 0.1830

Epoch 32: Test Accuracy = 0.8801, Test Loss = 0.3754

Epoch 33: Train Accuracy = 0.9345, Train Loss = 0.1770

Epoch 33: Test Accuracy = 0.8832, Test Loss = 0.3703

Epoch 34: Train Accuracy = 0.9332, Train Loss = 0.1769

Epoch 34: Test Accuracy = 0.8794, Test Loss = 0.3763

Epoch 35: Train Accuracy = 0.9349, Train Loss = 0.1748

Epoch 35: Test Accuracy = 0.8796, Test Loss = 0.3810

Epoch 36: Train Accuracy = 0.9356, Train Loss = 0.1728

Epoch 36: Test Accuracy = 0.8779, Test Loss = 0.3795

Epoch 37: Train Accuracy = 0.9360, Train Loss = 0.1689

Epoch 37: Test Accuracy = 0.8789, Test Loss = 0.3668

Epoch 38: Train Accuracy = 0.9377, Train Loss = 0.1670

Epoch 38: Test Accuracy = 0.8857, Test Loss = 0.3610

Epoch 39: Train Accuracy = 0.9399, Train Loss = 0.1618

Epoch 39: Test Accuracy = 0.8780, Test Loss = 0.3720

Epoch 40: Train Accuracy = 0.9397, Train Loss = 0.1620

Epoch 40: Test Accuracy = 0.8821, Test Loss = 0.3580

Epoch 41: Train Accuracy = 0.9394, Train Loss = 0.1596

Epoch 41: Test Accuracy = 0.8791, Test Loss = 0.3610

Epoch 42: Train Accuracy = 0.9400, Train Loss = 0.1570

Epoch 42: Test Accuracy = 0.8834, Test Loss = 0.3606

Epoch 43: Train Accuracy = 0.9426, Train Loss = 0.1537

Epoch 43: Test Accuracy = 0.8805, Test Loss = 0.3609

Epoch 44: Train Accuracy = 0.9434, Train Loss = 0.1519

Epoch 44: Test Accuracy = 0.8777, Test Loss = 0.3670

Epoch 45: Train Accuracy = 0.9430, Train Loss = 0.1522

Epoch 45: Test Accuracy = 0.8824, Test Loss = 0.3573

Epoch 46: Train Accuracy = 0.9447, Train Loss = 0.1478

Epoch 46: Test Accuracy = 0.8807, Test Loss = 0.3563

Epoch 47: Train Accuracy = 0.9455, Train Loss = 0.1469

Epoch 47: Test Accuracy = 0.8799, Test Loss = 0.3601

Epoch 48: Train Accuracy = 0.9451, Train Loss = 0.1450

Epoch 48: Test Accuracy = 0.8805, Test Loss = 0.3576

Epoch 49: Train Accuracy = 0.9470, Train Loss = 0.1417

Epoch 49: Test Accuracy = 0.8840, Test Loss = 0.3555

Epoch 50: Train Accuracy = 0.9478, Train Loss = 0.1405

Epoch 50: Test Accuracy = 0.8774, Test Loss = 0.3652

Final Train Accuracy:    0.9478

Final Test Accuracy:     0.8774

 

Training with p_malignant = 0.3, Output Scaling = False

Epoch 1: Train Accuracy = 0.8089, Train Loss = 0.5505

Epoch 1: Test Accuracy = 0.8348, Test Loss = 0.4629

Epoch 2: Train Accuracy = 0.8540, Train Loss = 0.4060

Epoch 2: Test Accuracy = 0.8521, Test Loss = 0.4198

Epoch 3: Train Accuracy = 0.8654, Train Loss = 0.3725

Epoch 3: Test Accuracy = 0.8599, Test Loss = 0.3965

Epoch 4: Train Accuracy = 0.8755, Train Loss = 0.3480

Epoch 4: Test Accuracy = 0.8598, Test Loss = 0.3921

Epoch 5: Train Accuracy = 0.8808, Train Loss = 0.3281

Epoch 5: Test Accuracy = 0.8653, Test Loss = 0.3841

Epoch 6: Train Accuracy = 0.8825, Train Loss = 0.3177

Epoch 6: Test Accuracy = 0.8668, Test Loss = 0.3751

Epoch 7: Train Accuracy = 0.8884, Train Loss = 0.3040

Epoch 7: Test Accuracy = 0.8686, Test Loss = 0.3659

Epoch 8: Train Accuracy = 0.8917, Train Loss = 0.2957

Epoch 8: Test Accuracy = 0.8749, Test Loss = 0.3544

Epoch 9: Train Accuracy = 0.8946, Train Loss = 0.2872

Epoch 9: Test Accuracy = 0.8739, Test Loss = 0.3578

Epoch 10: Train Accuracy = 0.8978, Train Loss = 0.2799

Epoch 10: Test Accuracy = 0.8735, Test Loss = 0.3530

Epoch 11: Train Accuracy = 0.9010, Train Loss = 0.2710

Epoch 11: Test Accuracy = 0.8736, Test Loss = 0.3565

Epoch 12: Train Accuracy = 0.9036, Train Loss = 0.2628

Epoch 12: Test Accuracy = 0.8763, Test Loss = 0.3529

Epoch 13: Train Accuracy = 0.9056, Train Loss = 0.2575

Epoch 13: Test Accuracy = 0.8702, Test Loss = 0.3716

Epoch 14: Train Accuracy = 0.9081, Train Loss = 0.2512

Epoch 14: Test Accuracy = 0.8735, Test Loss = 0.3586

Epoch 15: Train Accuracy = 0.9076, Train Loss = 0.2476

Epoch 15: Test Accuracy = 0.8765, Test Loss = 0.3570

Epoch 16: Train Accuracy = 0.9112, Train Loss = 0.2411

Epoch 16: Test Accuracy = 0.8648, Test Loss = 0.3728

Epoch 17: Train Accuracy = 0.9126, Train Loss = 0.2365

Epoch 17: Test Accuracy = 0.8745, Test Loss = 0.3723

Epoch 18: Train Accuracy = 0.9127, Train Loss = 0.2335

Epoch 18: Test Accuracy = 0.8742, Test Loss = 0.3560

Epoch 19: Train Accuracy = 0.9162, Train Loss = 0.2268

Epoch 19: Test Accuracy = 0.8761, Test Loss = 0.3589

Epoch 20: Train Accuracy = 0.9168, Train Loss = 0.2240

Epoch 20: Test Accuracy = 0.8757, Test Loss = 0.3749

Epoch 21: Train Accuracy = 0.9180, Train Loss = 0.2215

Epoch 21: Test Accuracy = 0.8760, Test Loss = 0.3707

Epoch 22: Train Accuracy = 0.9202, Train Loss = 0.2150

Epoch 22: Test Accuracy = 0.8789, Test Loss = 0.3702

Epoch 23: Train Accuracy = 0.9209, Train Loss = 0.2128

Epoch 23: Test Accuracy = 0.8789, Test Loss = 0.3664

Epoch 24: Train Accuracy = 0.9231, Train Loss = 0.2076

Epoch 24: Test Accuracy = 0.8758, Test Loss = 0.3815

Epoch 25: Train Accuracy = 0.9238, Train Loss = 0.2056

Epoch 25: Test Accuracy = 0.8774, Test Loss = 0.3844

Epoch 26: Train Accuracy = 0.9251, Train Loss = 0.2034

Epoch 26: Test Accuracy = 0.8843, Test Loss = 0.3752

Epoch 27: Train Accuracy = 0.9256, Train Loss = 0.1996

Epoch 27: Test Accuracy = 0.8780, Test Loss = 0.3933

Epoch 28: Train Accuracy = 0.9268, Train Loss = 0.1963

Epoch 28: Test Accuracy = 0.8773, Test Loss = 0.3840

Epoch 29: Train Accuracy = 0.9288, Train Loss = 0.1914

Epoch 29: Test Accuracy = 0.8797, Test Loss = 0.3789

Epoch 30: Train Accuracy = 0.9282, Train Loss = 0.1898

Epoch 30: Test Accuracy = 0.8722, Test Loss = 0.4026

Epoch 31: Train Accuracy = 0.9307, Train Loss = 0.1869

Epoch 31: Test Accuracy = 0.8785, Test Loss = 0.3895

Epoch 32: Train Accuracy = 0.9315, Train Loss = 0.1826

Epoch 32: Test Accuracy = 0.8809, Test Loss = 0.3791

Epoch 33: Train Accuracy = 0.9323, Train Loss = 0.1801

Epoch 33: Test Accuracy = 0.8823, Test Loss = 0.3837

Epoch 34: Train Accuracy = 0.9327, Train Loss = 0.1779

Epoch 34: Test Accuracy = 0.8785, Test Loss = 0.4054

Epoch 35: Train Accuracy = 0.9339, Train Loss = 0.1760

Epoch 35: Test Accuracy = 0.8760, Test Loss = 0.4083

Epoch 36: Train Accuracy = 0.9355, Train Loss = 0.1755

Epoch 36: Test Accuracy = 0.8769, Test Loss = 0.3961

Epoch 37: Train Accuracy = 0.9347, Train Loss = 0.1702

Epoch 37: Test Accuracy = 0.8795, Test Loss = 0.4092

Epoch 38: Train Accuracy = 0.9367, Train Loss = 0.1688

Epoch 38: Test Accuracy = 0.8745, Test Loss = 0.4070

Epoch 39: Train Accuracy = 0.9374, Train Loss = 0.1653

Epoch 39: Test Accuracy = 0.8793, Test Loss = 0.4034

Epoch 40: Train Accuracy = 0.9393, Train Loss = 0.1634

Epoch 40: Test Accuracy = 0.8732, Test Loss = 0.4188

Epoch 41: Train Accuracy = 0.9397, Train Loss = 0.1615

Epoch 41: Test Accuracy = 0.8798, Test Loss = 0.4074

Epoch 42: Train Accuracy = 0.9402, Train Loss = 0.1571

Epoch 42: Test Accuracy = 0.8703, Test Loss = 0.4452

Epoch 43: Train Accuracy = 0.9410, Train Loss = 0.1563

Epoch 43: Test Accuracy = 0.8793, Test Loss = 0.4214

Epoch 44: Train Accuracy = 0.9424, Train Loss = 0.1542

Epoch 44: Test Accuracy = 0.8773, Test Loss = 0.4321

Epoch 45: Train Accuracy = 0.9414, Train Loss = 0.1527

Epoch 45: Test Accuracy = 0.8788, Test Loss = 0.4252

Epoch 46: Train Accuracy = 0.9440, Train Loss = 0.1486

Epoch 46: Test Accuracy = 0.8753, Test Loss = 0.4361

Epoch 47: Train Accuracy = 0.9446, Train Loss = 0.1476

Epoch 47: Test Accuracy = 0.8761, Test Loss = 0.4401

Epoch 48: Train Accuracy = 0.9456, Train Loss = 0.1442

Epoch 48: Test Accuracy = 0.8757, Test Loss = 0.4501

Epoch 49: Train Accuracy = 0.9460, Train Loss = 0.1433

Epoch 49: Test Accuracy = 0.8758, Test Loss = 0.4598

Epoch 50: Train Accuracy = 0.9469, Train Loss = 0.1425

Epoch 50: Test Accuracy = 0.8720, Test Loss = 0.4689

Final Train Accuracy:    0.9469

Final Test Accuracy:     0.8720

 

Training with p_malignant = 0.4, Output Scaling = True

Epoch 1: Train Accuracy = 0.7898, Train Loss = 0.6208

Epoch 1: Test Accuracy = 0.8150, Test Loss = 1.2925

Epoch 2: Train Accuracy = 0.8446, Train Loss = 0.4390

Epoch 2: Test Accuracy = 0.8403, Test Loss = 1.1939

Epoch 3: Train Accuracy = 0.8573, Train Loss = 0.4035

Epoch 3: Test Accuracy = 0.8472, Test Loss = 1.1281

Epoch 4: Train Accuracy = 0.8628, Train Loss = 0.3837

Epoch 4: Test Accuracy = 0.8465, Test Loss = 1.0892

Epoch 5: Train Accuracy = 0.8685, Train Loss = 0.3688

Epoch 5: Test Accuracy = 0.8393, Test Loss = 1.0664

Epoch 6: Train Accuracy = 0.8721, Train Loss = 0.3560

Epoch 6: Test Accuracy = 0.8615, Test Loss = 1.0358

Epoch 7: Train Accuracy = 0.8750, Train Loss = 0.3460

Epoch 7: Test Accuracy = 0.8614, Test Loss = 1.0162

Epoch 8: Train Accuracy = 0.8783, Train Loss = 0.3379

Epoch 8: Test Accuracy = 0.8634, Test Loss = 1.0065

Epoch 9: Train Accuracy = 0.8812, Train Loss = 0.3301

Epoch 9: Test Accuracy = 0.8625, Test Loss = 0.9985

Epoch 10: Train Accuracy = 0.8836, Train Loss = 0.3218

Epoch 10: Test Accuracy = 0.8632, Test Loss = 0.9723

Epoch 11: Train Accuracy = 0.8859, Train Loss = 0.3158

Epoch 11: Test Accuracy = 0.8680, Test Loss = 0.9559

Epoch 12: Train Accuracy = 0.8863, Train Loss = 0.3124

Epoch 12: Test Accuracy = 0.8629, Test Loss = 0.9321

Epoch 13: Train Accuracy = 0.8883, Train Loss = 0.3084

Epoch 13: Test Accuracy = 0.8645, Test Loss = 0.9211

Epoch 14: Train Accuracy = 0.8898, Train Loss = 0.3027

Epoch 14: Test Accuracy = 0.8644, Test Loss = 0.9317

Epoch 15: Train Accuracy = 0.8905, Train Loss = 0.3001

Epoch 15: Test Accuracy = 0.8628, Test Loss = 0.9150

Epoch 16: Train Accuracy = 0.8925, Train Loss = 0.2968

Epoch 16: Test Accuracy = 0.8623, Test Loss = 0.8847

Epoch 17: Train Accuracy = 0.8942, Train Loss = 0.2931

Epoch 17: Test Accuracy = 0.8656, Test Loss = 0.8658

Epoch 18: Train Accuracy = 0.8938, Train Loss = 0.2905

Epoch 18: Test Accuracy = 0.8652, Test Loss = 0.8622

Epoch 19: Train Accuracy = 0.8953, Train Loss = 0.2871

Epoch 19: Test Accuracy = 0.8666, Test Loss = 0.8473

Epoch 20: Train Accuracy = 0.8962, Train Loss = 0.2840

Epoch 20: Test Accuracy = 0.8653, Test Loss = 0.8364

Epoch 21: Train Accuracy = 0.8966, Train Loss = 0.2815

Epoch 21: Test Accuracy = 0.8677, Test Loss = 0.8556

Epoch 22: Train Accuracy = 0.8980, Train Loss = 0.2799

Epoch 22: Test Accuracy = 0.8602, Test Loss = 0.8434

Epoch 23: Train Accuracy = 0.9001, Train Loss = 0.2748

Epoch 23: Test Accuracy = 0.8691, Test Loss = 0.8278

Epoch 24: Train Accuracy = 0.8991, Train Loss = 0.2734

Epoch 24: Test Accuracy = 0.8641, Test Loss = 0.8066

Epoch 25: Train Accuracy = 0.9011, Train Loss = 0.2699

Epoch 25: Test Accuracy = 0.8621, Test Loss = 0.8185

Epoch 26: Train Accuracy = 0.9025, Train Loss = 0.2666

Epoch 26: Test Accuracy = 0.8690, Test Loss = 0.7944

Epoch 27: Train Accuracy = 0.9023, Train Loss = 0.2650

Epoch 27: Test Accuracy = 0.8655, Test Loss = 0.7906

Epoch 28: Train Accuracy = 0.9050, Train Loss = 0.2627

Epoch 28: Test Accuracy = 0.8652, Test Loss = 0.7879

Epoch 29: Train Accuracy = 0.9051, Train Loss = 0.2600

Epoch 29: Test Accuracy = 0.8649, Test Loss = 0.7787

Epoch 30: Train Accuracy = 0.9065, Train Loss = 0.2588

Epoch 30: Test Accuracy = 0.8666, Test Loss = 0.7717

Epoch 31: Train Accuracy = 0.9050, Train Loss = 0.2591

Epoch 31: Test Accuracy = 0.8624, Test Loss = 0.7509

Epoch 32: Train Accuracy = 0.9062, Train Loss = 0.2545

Epoch 32: Test Accuracy = 0.8658, Test Loss = 0.7562

Epoch 33: Train Accuracy = 0.9078, Train Loss = 0.2524

Epoch 33: Test Accuracy = 0.8684, Test Loss = 0.7490

Epoch 34: Train Accuracy = 0.9085, Train Loss = 0.2488

Epoch 34: Test Accuracy = 0.8680, Test Loss = 0.7446

Epoch 35: Train Accuracy = 0.9095, Train Loss = 0.2465

Epoch 35: Test Accuracy = 0.8660, Test Loss = 0.7444

Epoch 36: Train Accuracy = 0.9092, Train Loss = 0.2461

Epoch 36: Test Accuracy = 0.8659, Test Loss = 0.7247

Epoch 37: Train Accuracy = 0.9094, Train Loss = 0.2471

Epoch 37: Test Accuracy = 0.8706, Test Loss = 0.7246

Epoch 38: Train Accuracy = 0.9094, Train Loss = 0.2467

Epoch 38: Test Accuracy = 0.8669, Test Loss = 0.6958

Epoch 39: Train Accuracy = 0.9090, Train Loss = 0.2452

Epoch 39: Test Accuracy = 0.8680, Test Loss = 0.7062

Epoch 40: Train Accuracy = 0.9116, Train Loss = 0.2410

Epoch 40: Test Accuracy = 0.8637, Test Loss = 0.7136

Epoch 41: Train Accuracy = 0.9119, Train Loss = 0.2404

Epoch 41: Test Accuracy = 0.8673, Test Loss = 0.6872

Epoch 42: Train Accuracy = 0.9117, Train Loss = 0.2377

Epoch 42: Test Accuracy = 0.8688, Test Loss = 0.7002

Epoch 43: Train Accuracy = 0.9131, Train Loss = 0.2359

Epoch 43: Test Accuracy = 0.8637, Test Loss = 0.6913

Epoch 44: Train Accuracy = 0.9143, Train Loss = 0.2332

Epoch 44: Test Accuracy = 0.8651, Test Loss = 0.6762

Epoch 45: Train Accuracy = 0.9140, Train Loss = 0.2334

Epoch 45: Test Accuracy = 0.8600, Test Loss = 0.6796

Epoch 46: Train Accuracy = 0.9134, Train Loss = 0.2344

Epoch 46: Test Accuracy = 0.8643, Test Loss = 0.6873

Epoch 47: Train Accuracy = 0.9136, Train Loss = 0.2319

Epoch 47: Test Accuracy = 0.8608, Test Loss = 0.6616

Epoch 48: Train Accuracy = 0.9146, Train Loss = 0.2324

Epoch 48: Test Accuracy = 0.8577, Test Loss = 0.6621

Epoch 49: Train Accuracy = 0.9152, Train Loss = 0.2289

Epoch 49: Test Accuracy = 0.8659, Test Loss = 0.6524

Epoch 50: Train Accuracy = 0.9160, Train Loss = 0.2281

Epoch 50: Test Accuracy = 0.8661, Test Loss = 0.6520

Final Train Accuracy:    0.9160

Final Test Accuracy:     0.8661

 

Final Test Accuracy (p = 0.4) - Scaled:      0.8661

Final Test Accuracy (p = 0.3) - Scaled:      0.8774

Final Test Accuracy (p = 0.2) - Scaled:      0.8798

Final Test Accuracy (p = 0.1) - Scaled:      0.8817

Final Test Accuracy (p = 0.0) - Scaled:      0.8794

 

 

=================================================================

OUTCOME (run with 10 epochs):

=================================================================

 

Training with p = 0.0, Output Scaling = True

Epoch 1: Train Accuracy = 0.8212, Train Loss = 0.4992

Epoch 1: Test Accuracy = 0.8398, Test Loss = 0.4408

Epoch 2: Train Accuracy = 0.8617, Train Loss = 0.3819

Epoch 2: Test Accuracy = 0.8590, Test Loss = 0.3914

Epoch 3: Train Accuracy = 0.8750, Train Loss = 0.3405

Epoch 3: Test Accuracy = 0.8651, Test Loss = 0.3751

Epoch 4: Train Accuracy = 0.8830, Train Loss = 0.3207

Epoch 4: Test Accuracy = 0.8727, Test Loss = 0.3661

Epoch 5: Train Accuracy = 0.8904, Train Loss = 0.3003

Epoch 5: Test Accuracy = 0.8674, Test Loss = 0.3706

Epoch 6: Train Accuracy = 0.8935, Train Loss = 0.2863

Epoch 6: Test Accuracy = 0.8768, Test Loss = 0.3402

Epoch 7: Train Accuracy = 0.8991, Train Loss = 0.2722

Epoch 7: Test Accuracy = 0.8732, Test Loss = 0.3572

Epoch 8: Train Accuracy = 0.9029, Train Loss = 0.2634

Epoch 8: Test Accuracy = 0.8780, Test Loss = 0.3442

Epoch 9: Train Accuracy = 0.9056, Train Loss = 0.2524

Epoch 9: Test Accuracy = 0.8759, Test Loss = 0.3568

Epoch 10: Train Accuracy = 0.9087, Train Loss = 0.2426

Epoch 10: Test Accuracy = 0.8780, Test Loss = 0.3493

Final Train Accuracy:    0.9087

Final Test Accuracy:     0.8780

 

Training with p = 0.0, Output Scaling = False

Epoch 1: Train Accuracy = 0.8207, Train Loss = 0.4972

Epoch 1: Test Accuracy = 0.8428, Test Loss = 0.4356

Epoch 2: Train Accuracy = 0.8608, Train Loss = 0.3832

Epoch 2: Test Accuracy = 0.8565, Test Loss = 0.3929

Epoch 3: Train Accuracy = 0.8760, Train Loss = 0.3396

Epoch 3: Test Accuracy = 0.8640, Test Loss = 0.3799

Epoch 4: Train Accuracy = 0.8833, Train Loss = 0.3186

Epoch 4: Test Accuracy = 0.8686, Test Loss = 0.3651

Epoch 5: Train Accuracy = 0.8894, Train Loss = 0.2999

Epoch 5: Test Accuracy = 0.8674, Test Loss = 0.3584

Epoch 6: Train Accuracy = 0.8938, Train Loss = 0.2854

Epoch 6: Test Accuracy = 0.8792, Test Loss = 0.3396

Epoch 7: Train Accuracy = 0.9004, Train Loss = 0.2706

Epoch 7: Test Accuracy = 0.8738, Test Loss = 0.3544

Epoch 8: Train Accuracy = 0.9041, Train Loss = 0.2599

Epoch 8: Test Accuracy = 0.8729, Test Loss = 0.3772

Epoch 9: Train Accuracy = 0.9073, Train Loss = 0.2499

Epoch 9: Test Accuracy = 0.8792, Test Loss = 0.3501

Epoch 10: Train Accuracy = 0.9103, Train Loss = 0.2413

Epoch 10: Test Accuracy = 0.8720, Test Loss = 0.3761

Final Train Accuracy:    0.9103

Final Test Accuracy:     0.8720

 

Training with p = 0.1, Output Scaling = True

Epoch 1: Train Accuracy = 0.8184, Train Loss = 0.5108

Epoch 1: Test Accuracy = 0.8372, Test Loss = 0.4587

Epoch 2: Train Accuracy = 0.8586, Train Loss = 0.3888

Epoch 2: Test Accuracy = 0.8462, Test Loss = 0.4285

Epoch 3: Train Accuracy = 0.8715, Train Loss = 0.3508

Epoch 3: Test Accuracy = 0.8600, Test Loss = 0.3882

Epoch 4: Train Accuracy = 0.8819, Train Loss = 0.3255

Epoch 4: Test Accuracy = 0.8701, Test Loss = 0.3699

Epoch 5: Train Accuracy = 0.8860, Train Loss = 0.3083

Epoch 5: Test Accuracy = 0.8657, Test Loss = 0.3689

Epoch 6: Train Accuracy = 0.8927, Train Loss = 0.2913

Epoch 6: Test Accuracy = 0.8682, Test Loss = 0.3639

Epoch 7: Train Accuracy = 0.8969, Train Loss = 0.2802

Epoch 7: Test Accuracy = 0.8678, Test Loss = 0.3574

Epoch 8: Train Accuracy = 0.8998, Train Loss = 0.2694

Epoch 8: Test Accuracy = 0.8698, Test Loss = 0.3553

Epoch 9: Train Accuracy = 0.9038, Train Loss = 0.2597

Epoch 9: Test Accuracy = 0.8765, Test Loss = 0.3484

Epoch 10: Train Accuracy = 0.9082, Train Loss = 0.2487

Epoch 10: Test Accuracy = 0.8803, Test Loss = 0.3344

Final Train Accuracy:    0.9082

Final Test Accuracy:     0.8803

 

Training with p = 0.1, Output Scaling = False

Epoch 1: Train Accuracy = 0.8166, Train Loss = 0.5119

Epoch 1: Test Accuracy = 0.8384, Test Loss = 0.4407

Epoch 2: Train Accuracy = 0.8609, Train Loss = 0.3870

Epoch 2: Test Accuracy = 0.8496, Test Loss = 0.4169

Epoch 3: Train Accuracy = 0.8726, Train Loss = 0.3492

Epoch 3: Test Accuracy = 0.8568, Test Loss = 0.4002

Epoch 4: Train Accuracy = 0.8808, Train Loss = 0.3255

Epoch 4: Test Accuracy = 0.8658, Test Loss = 0.3691

Epoch 5: Train Accuracy = 0.8871, Train Loss = 0.3050

Epoch 5: Test Accuracy = 0.8725, Test Loss = 0.3597

Epoch 6: Train Accuracy = 0.8921, Train Loss = 0.2889

Epoch 6: Test Accuracy = 0.8605, Test Loss = 0.3738

Epoch 7: Train Accuracy = 0.8964, Train Loss = 0.2762

Epoch 7: Test Accuracy = 0.8755, Test Loss = 0.3482

Epoch 8: Train Accuracy = 0.9016, Train Loss = 0.2654

Epoch 8: Test Accuracy = 0.8755, Test Loss = 0.3482

Epoch 9: Train Accuracy = 0.9056, Train Loss = 0.2547

Epoch 9: Test Accuracy = 0.8740, Test Loss = 0.3558

Epoch 10: Train Accuracy = 0.9074, Train Loss = 0.2467

Epoch 10: Test Accuracy = 0.8775, Test Loss = 0.3556

Final Train Accuracy:    0.9074

Final Test Accuracy:     0.8775

 

Training with p = 0.2, Output Scaling = True

Epoch 1: Train Accuracy = 0.8131, Train Loss = 0.5250

Epoch 1: Test Accuracy = 0.8360, Test Loss = 0.5437

Epoch 2: Train Accuracy = 0.8554, Train Loss = 0.3979

Epoch 2: Test Accuracy = 0.8532, Test Loss = 0.4775

Epoch 3: Train Accuracy = 0.8685, Train Loss = 0.3621

Epoch 3: Test Accuracy = 0.8576, Test Loss = 0.4367

Epoch 4: Train Accuracy = 0.8773, Train Loss = 0.3377

Epoch 4: Test Accuracy = 0.8685, Test Loss = 0.4150

Epoch 5: Train Accuracy = 0.8839, Train Loss = 0.3175

Epoch 5: Test Accuracy = 0.8689, Test Loss = 0.4098

Epoch 6: Train Accuracy = 0.8887, Train Loss = 0.3028

Epoch 6: Test Accuracy = 0.8689, Test Loss = 0.3908

Epoch 7: Train Accuracy = 0.8934, Train Loss = 0.2910

Epoch 7: Test Accuracy = 0.8657, Test Loss = 0.3897

Epoch 8: Train Accuracy = 0.8980, Train Loss = 0.2790

Epoch 8: Test Accuracy = 0.8789, Test Loss = 0.3749

Epoch 9: Train Accuracy = 0.9008, Train Loss = 0.2702

Epoch 9: Test Accuracy = 0.8691, Test Loss = 0.3762

Epoch 10: Train Accuracy = 0.9046, Train Loss = 0.2601

Epoch 10: Test Accuracy = 0.8738, Test Loss = 0.3725

Final Train Accuracy:    0.9046

Final Test Accuracy:     0.8738

 

 

Training with p = 0.2, Output Scaling = False

Epoch 1: Train Accuracy = 0.8153, Train Loss = 0.5214

Epoch 1: Test Accuracy = 0.8316, Test Loss = 0.4566

Epoch 2: Train Accuracy = 0.8575, Train Loss = 0.3940

Epoch 2: Test Accuracy = 0.8551, Test Loss = 0.4108

Epoch 3: Train Accuracy = 0.8700, Train Loss = 0.3591

Epoch 3: Test Accuracy = 0.8655, Test Loss = 0.3817

Epoch 4: Train Accuracy = 0.8785, Train Loss = 0.3330

Epoch 4: Test Accuracy = 0.8658, Test Loss = 0.3842

Epoch 5: Train Accuracy = 0.8847, Train Loss = 0.3145

Epoch 5: Test Accuracy = 0.8679, Test Loss = 0.3706

Epoch 6: Train Accuracy = 0.8904, Train Loss = 0.3014

Epoch 6: Test Accuracy = 0.8682, Test Loss = 0.3625

Epoch 7: Train Accuracy = 0.8931, Train Loss = 0.2866

Epoch 7: Test Accuracy = 0.8660, Test Loss = 0.3660

Epoch 8: Train Accuracy = 0.8972, Train Loss = 0.2760

Epoch 8: Test Accuracy = 0.8782, Test Loss = 0.3508

Epoch 9: Train Accuracy = 0.9025, Train Loss = 0.2647

Epoch 9: Test Accuracy = 0.8768, Test Loss = 0.3464

Epoch 10: Train Accuracy = 0.9044, Train Loss = 0.2561

Epoch 10: Test Accuracy = 0.8723, Test Loss = 0.3678

Final Train Accuracy:    0.9044

Final Test Accuracy:     0.8723

 

Training with p = 0.3, Output Scaling = True

Epoch 1: Train Accuracy = 0.8081, Train Loss = 0.5513

Epoch 1: Test Accuracy = 0.8344, Test Loss = 0.7299

Epoch 2: Train Accuracy = 0.8523, Train Loss = 0.4112

Epoch 2: Test Accuracy = 0.8407, Test Loss = 0.6752

Epoch 3: Train Accuracy = 0.8664, Train Loss = 0.3732

Epoch 3: Test Accuracy = 0.8557, Test Loss = 0.6123

Epoch 4: Train Accuracy = 0.8736, Train Loss = 0.3532

Epoch 4: Test Accuracy = 0.8572, Test Loss = 0.6031

Epoch 5: Train Accuracy = 0.8783, Train Loss = 0.3351

Epoch 5: Test Accuracy = 0.8560, Test Loss = 0.5746

Epoch 6: Train Accuracy = 0.8845, Train Loss = 0.3195

Epoch 6: Test Accuracy = 0.8646, Test Loss = 0.5364

Epoch 7: Train Accuracy = 0.8879, Train Loss = 0.3097

Epoch 7: Test Accuracy = 0.8734, Test Loss = 0.5195

Epoch 8: Train Accuracy = 0.8910, Train Loss = 0.2985

Epoch 8: Test Accuracy = 0.8732, Test Loss = 0.5320

Epoch 9: Train Accuracy = 0.8942, Train Loss = 0.2906

Epoch 9: Test Accuracy = 0.8714, Test Loss = 0.5060

Epoch 10: Train Accuracy = 0.8969, Train Loss = 0.2820

Epoch 10: Test Accuracy = 0.8724, Test Loss = 0.4974

Final Train Accuracy:    0.8969

Final Test Accuracy:     0.8724

 

Training with p = 0.3, Output Scaling = False

Epoch 1: Train Accuracy = 0.8079, Train Loss = 0.5502

Epoch 1: Test Accuracy = 0.8218, Test Loss = 0.4939

Epoch 2: Train Accuracy = 0.8521, Train Loss = 0.4149

Epoch 2: Test Accuracy = 0.8492, Test Loss = 0.4286

Epoch 3: Train Accuracy = 0.8642, Train Loss = 0.3772

Epoch 3: Test Accuracy = 0.8549, Test Loss = 0.4018

Epoch 4: Train Accuracy = 0.8727, Train Loss = 0.3548

Epoch 4: Test Accuracy = 0.8631, Test Loss = 0.3848

Epoch 5: Train Accuracy = 0.8783, Train Loss = 0.3367

Epoch 5: Test Accuracy = 0.8678, Test Loss = 0.3747

Epoch 6: Train Accuracy = 0.8835, Train Loss = 0.3228

Epoch 6: Test Accuracy = 0.8698, Test Loss = 0.3704

Epoch 7: Train Accuracy = 0.8877, Train Loss = 0.3104

Epoch 7: Test Accuracy = 0.8682, Test Loss = 0.3707

Epoch 8: Train Accuracy = 0.8905, Train Loss = 0.2992

Epoch 8: Test Accuracy = 0.8698, Test Loss = 0.3566

Epoch 9: Train Accuracy = 0.8931, Train Loss = 0.2894

Epoch 9: Test Accuracy = 0.8632, Test Loss = 0.3814

Epoch 10: Train Accuracy = 0.8957, Train Loss = 0.2850

Epoch 10: Test Accuracy = 0.8760, Test Loss = 0.3482

Final Train Accuracy:    0.8957

Final Test Accuracy:     0.8760

 

Training with p = 0.4, Output Scaling = True

Epoch 1: Train Accuracy = 0.7909, Train Loss = 0.6192

Epoch 1: Test Accuracy = 0.8208, Test Loss = 1.2870

Epoch 2: Train Accuracy = 0.8431, Train Loss = 0.4445

Epoch 2: Test Accuracy = 0.8371, Test Loss = 1.1847

Epoch 3: Train Accuracy = 0.8556, Train Loss = 0.4085

Epoch 3: Test Accuracy = 0.8437, Test Loss = 1.1238

Epoch 4: Train Accuracy = 0.8620, Train Loss = 0.3896

Epoch 4: Test Accuracy = 0.8453, Test Loss = 1.0998

Epoch 5: Train Accuracy = 0.8661, Train Loss = 0.3739

Epoch 5: Test Accuracy = 0.8557, Test Loss = 1.0888

Epoch 6: Train Accuracy = 0.8704, Train Loss = 0.3614

Epoch 6: Test Accuracy = 0.8556, Test Loss = 1.0446

Epoch 7: Train Accuracy = 0.8719, Train Loss = 0.3527

Epoch 7: Test Accuracy = 0.8556, Test Loss = 1.0376

Epoch 8: Train Accuracy = 0.8763, Train Loss = 0.3435

Epoch 8: Test Accuracy = 0.8583, Test Loss = 1.0285

Epoch 9: Train Accuracy = 0.8788, Train Loss = 0.3369

Epoch 9: Test Accuracy = 0.8586, Test Loss = 1.0070

Epoch 10: Train Accuracy = 0.8810, Train Loss = 0.3301

Epoch 10: Test Accuracy = 0.8580, Test Loss = 0.9968

Final Train Accuracy: 0.8810

Final Test Accuracy: 0.8580

 

Training with p = 0.4, Output Scaling = False

Epoch 1: Train Accuracy = 0.7892, Train Loss = 0.6242

Epoch 1: Test Accuracy = 0.8199, Test Loss = 0.4981

Epoch 2: Train Accuracy = 0.8435, Train Loss = 0.4419

Epoch 2: Test Accuracy = 0.8417, Test Loss = 0.4499

Epoch 3: Train Accuracy = 0.8550, Train Loss = 0.4072

Epoch 3: Test Accuracy = 0.8465, Test Loss = 0.4282

Epoch 4: Train Accuracy = 0.8641, Train Loss = 0.3843

Epoch 4: Test Accuracy = 0.8511, Test Loss = 0.4167

Epoch 5: Train Accuracy = 0.8683, Train Loss = 0.3693

Epoch 5: Test Accuracy = 0.8496, Test Loss = 0.4107

Epoch 6: Train Accuracy = 0.8729, Train Loss = 0.3571

Epoch 6: Test Accuracy = 0.8552, Test Loss = 0.3995

Epoch 7: Train Accuracy = 0.8759, Train Loss = 0.3480

Epoch 7: Test Accuracy = 0.8582, Test Loss = 0.3945

Epoch 8: Train Accuracy = 0.8779, Train Loss = 0.3403

Epoch 8: Test Accuracy = 0.8598, Test Loss = 0.3934

Epoch 9: Train Accuracy = 0.8814, Train Loss = 0.3307

Epoch 9: Test Accuracy = 0.8657, Test Loss = 0.3864

Epoch 10: Train Accuracy = 0.8819, Train Loss = 0.3260

Epoch 10: Test Accuracy = 0.8616, Test Loss = 0.3918

Final Train Accuracy: 0.8819

Final Test Accuracy: 0.8616

 

Training with p = 0.0, Output Scaling = True

Final Train Accuracy: 0.9087

Final Test Accuracy: 0.8780

 

 

Training with p = 0.0, Output Scaling = False

Final Train Accuracy: 0.9103

Final Test Accuracy: 0.8720

 

 

Training with p = 0.1, Output Scaling = True

Final Train Accuracy: 0.9082

Final Test Accuracy: 0.8803

 

 

Training with p = 0.1, Output Scaling = False

Final Train Accuracy: 0.9074

Final Test Accuracy: 0.8775

 

 

Training with p = 0.2, Output Scaling = True

Final Train Accuracy: 0.9046

Final Test Accuracy: 0.8738

 

 

Training with p = 0.2, Output Scaling = False

Final Train Accuracy: 0.9044

Final Test Accuracy: 0.8723

 

 

Training with p = 0.3, Output Scaling = True

Final Train Accuracy: 0.8969

Final Test Accuracy: 0.8724

 

 

Training with p = 0.3, Output Scaling = False

Final Train Accuracy: 0.8957

Final Test Accuracy: 0.8760

 

 

Training with p = 0.4, Output Scaling = True

Final Train Accuracy: 0.8810

Final Test Accuracy: 0.8580

 

 

Training with p = 0.4, Output Scaling = False

Final Train Accuracy: 0.8819

Final Test Accuracy: 0.8616

 

=================================================================

 

END OF “FASHION-MNIST” DATASET EXPERIMENT https://www.kaggle.com/datasets/zalando-research/fashionmnist

 

Straightforward Implementation of Adversarial Regularization

(not included in the article)

 

=================================================================

 

 

=================================================================

 

ADVERSARIAL REGULARIZATION VS DROPOUT ON THE “FASHION-MNIST” DATASET (CLEAN DATA) EXPERIMENT https://www.kaggle.com/datasets/zalando-research/fashionmnist

 

Straightforward Implementation of Adversarial Regularization vs. Dropout

(not included in the article)

 

=================================================================

 

Training with p = 0.0

 

Training with p = 0.1

 

 

Training with p = 0.2

 

 

Training with p = 0.3

 

 

Training with p = 0.4

 

=================================================================

 

END OF ADVERSARIAL REGULARIZATION VS DROPOUT ON THE “FASHION-MNIST” DATASET (CLEAN DATA) EXPERIMENT https://www.kaggle.com/datasets/zalando-research/fashionmnist

 

Straightforward Implementation of Adversarial Regularization vs. Dropout

(not included in the article)

 

=================================================================

 

 

=================================================================

 

ADVERSARIAL REGULARIZATION VS DROPOUT ON THE “FASHION-MNIST” DATASET (NOISY DATA) EXPERIMENT https://www.kaggle.com/datasets/zalando-research/fashionmnist

 

Straightforward Implementation of Adversarial Regularization vs. Dropout

(not included in the article)

 

=================================================================

 

 

import torch

import torch.nn as nn

import torch.optim as optim

from torch.utils.data import DataLoader, TensorDataset

from torchvision import datasets, transforms

import matplotlib.pyplot as plt

import numpy as np

 

# Device configuration

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

 

# Data loading and preprocessing for Fashion-MNIST

transform = transforms.Compose([

    transforms.ToTensor(),

    transforms.Normalize((0.5,), (0.5,))

])

 

train_dataset = datasets.FashionMNIST(root='./data', train=True, download=True, transform=transform)

test_dataset = datasets.FashionMNIST(root='./data', train=False, download=True, transform=transform)

 

batch_size = 64

 

# Function to add noise to the dataset

def add_noise_to_dataset(dataset, noise_factor):
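    # Build a noisy copy of the dataset: add zero-mean Gaussian noise scaled by noise_factor
    # to every image, clamp the result to [0, 1], and keep the original labels.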

    noisy_images = []

    labels = []

    for image, label in dataset:

        noise = torch.randn_like(image) * noise_factor

        noisy_image = image + noise

        noisy_image = torch.clamp(noisy_image, 0, 1)

        noisy_images.append(noisy_image)

        labels.append(label)

   

    noisy_images = torch.stack(noisy_images)

    labels = torch.tensor(labels)

    noisy_dataset = TensorDataset(noisy_images, labels)

    return noisy_dataset

 

# Model definition for both AR and Dropout

class RegularizedNN(nn.Module):

    def __init__(self, input_size=784, hidden_size=128, output_size=10, p_dropout=0.0):

        super(RegularizedNN, self).__init__()

        self.fc1 = nn.Linear(input_size, hidden_size)

        self.relu = nn.ReLU()

        self.dropout = nn.Dropout(p_dropout)

        self.fc2 = nn.Linear(hidden_size, output_size)

 

    def forward(self, x):

        x = x.view(x.size(0), -1)

        x = self.fc1(x)

        x = self.relu(x)

        x = self.dropout(x)

        x = self.fc2(x)

        return x

 

# Train and evaluate function for AR

def train_and_evaluate_ar(model, train_loader, test_loader, num_epochs=50, lr=0.001, p_malignant=0.0):

    criterion = nn.CrossEntropyLoss()

    optimizer = optim.Adam(model.parameters(), lr=lr)

 

    train_acc_list = []

    test_acc_list = []

    train_loss_list = []

    test_loss_list = []

 

    for epoch in range(num_epochs):

        model.train()

        correct, total = 0, 0

        epoch_train_loss = 0.0

 

        for inputs, labels in train_loader:

            inputs, labels = inputs.to(device), labels.to(device)

 

            outputs = model(inputs)

            loss = criterion(outputs, labels)

 

            optimizer.zero_grad()

            loss.backward()
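            # Adversarial Regularization (straightforward form): after backpropagation, each
            # weight-gradient entry is sign-flipped with probability p_malignant, so the affected
            # weights take a gradient-ascent step and act as "malignant" connections.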

 

            with torch.no_grad():

                for name, param in model.named_parameters():

                    if 'weight' in name and param.grad is not None:

                        malignant_mask = (torch.rand(param.shape) < p_malignant).to(device)

                        param.grad[malignant_mask] *= -1

 

            optimizer.step()

            epoch_train_loss += loss.item()

 

            _, predicted = torch.max(outputs, 1)

            total += labels.size(0)

            correct += (predicted == labels).sum().item()

       

        avg_train_loss = epoch_train_loss / len(train_loader)

        train_loss_list.append(avg_train_loss)

        train_acc = correct / total

        train_acc_list.append(train_acc)

 

        model.eval()

        correct, total = 0, 0

        epoch_test_loss = 0.0

        with torch.no_grad():

            for inputs, labels in test_loader:

                inputs, labels = inputs.to(device), labels.to(device)

                outputs = model(inputs)

                loss = criterion(outputs, labels)

                epoch_test_loss += loss.item()

                _, predicted = torch.max(outputs, 1)

                total += labels.size(0)

                correct += (predicted == labels).sum().item()

       

        avg_test_loss = epoch_test_loss / len(test_loader)

        test_loss_list.append(avg_test_loss)

        test_acc = correct / total

        test_acc_list.append(test_acc)

 

    return train_acc_list, test_acc_list, train_loss_list, test_loss_list

 

# Train and evaluate function for Dropout

def train_and_evaluate_dropout(model, train_loader, test_loader, num_epochs=50, lr=0.001):

    criterion = nn.CrossEntropyLoss()

    optimizer = optim.Adam(model.parameters(), lr=lr)

 

    train_acc_list = []

    test_acc_list = []

    train_loss_list = []

    test_loss_list = []

 

    for epoch in range(num_epochs):

        model.train()

        correct, total = 0, 0

        epoch_train_loss = 0.0

 

        for inputs, labels in train_loader:

            inputs, labels = inputs.to(device), labels.to(device)

 

            outputs = model(inputs)

            loss = criterion(outputs, labels)

 

            optimizer.zero_grad()

            loss.backward()

            optimizer.step()

            epoch_train_loss += loss.item()

 

            _, predicted = torch.max(outputs, 1)

            total += labels.size(0)

            correct += (predicted == labels).sum().item()

 

        avg_train_loss = epoch_train_loss / len(train_loader)

        train_loss_list.append(avg_train_loss)

        train_acc = correct / total

        train_acc_list.append(train_acc)

 

        model.eval()

        correct, total = 0, 0

        epoch_test_loss = 0.0

        with torch.no_grad():

            for inputs, labels in test_loader:

                inputs, labels = inputs.to(device), labels.to(device)

                outputs = model(inputs)

                loss = criterion(outputs, labels)

                epoch_test_loss += loss.item()

                _, predicted = torch.max(outputs, 1)

                total += labels.size(0)

                correct += (predicted == labels).sum().item()

       

        avg_test_loss = epoch_test_loss / len(test_loader)

        test_loss_list.append(avg_test_loss)

        test_acc = correct / total

        test_acc_list.append(test_acc)

 

    return train_acc_list, test_acc_list, train_loss_list, test_loss_list

 

# Run experiment

num_epochs = 20

p_values = [0.0, 0.1, 0.2, 0.3, 0.4]

noise_factors = [0.3, 0.5, 1.0]

 

for noise_factor in noise_factors:

    print(f"\nRunning with noise factor = {noise_factor}")

   

    # Add noise to train and test data

    noisy_train_dataset = add_noise_to_dataset(train_dataset, noise_factor)

    noisy_test_dataset = add_noise_to_dataset(test_dataset, noise_factor)

 

    noisy_train_loader = DataLoader(noisy_train_dataset, batch_size=batch_size, shuffle=True)

    noisy_test_loader = DataLoader(noisy_test_dataset, batch_size=batch_size, shuffle=False)

 

    for p in p_values:

        print(f"\nTraining with p = {p}")

        # Initialize models

        ar_model = RegularizedNN(p_dropout=0).to(device)

        dropout_model = RegularizedNN(p_dropout=p).to(device)

 

        # Train and evaluate both methods

        ar_train_acc, ar_test_acc, ar_train_loss, ar_test_loss = train_and_evaluate_ar(

            ar_model, noisy_train_loader, noisy_test_loader, num_epochs=num_epochs, p_malignant=p

        )

        dropout_train_acc, dropout_test_acc, dropout_train_loss, dropout_test_loss = train_and_evaluate_dropout(

            dropout_model, noisy_train_loader, noisy_test_loader, num_epochs=num_epochs

        )

 

         # Plotting results

        plt.figure(figsize=(14, 8))

 

        # Plotting Accuracy

        plt.subplot(1, 2, 1)

        plt.plot(range(1, num_epochs + 1), ar_train_acc, label=f"AR Train Acc")

        plt.plot(range(1, num_epochs + 1), ar_test_acc, label=f"AR Test Acc")

        plt.plot(range(1, num_epochs + 1), dropout_train_acc, label=f"Dropout Train Acc")

        plt.plot(range(1, num_epochs + 1), dropout_test_acc, label=f"Dropout Test Acc")

        plt.title(f"Accuracy (p={p}, noise={noise_factor})")

        plt.xlabel("Epoch")

        plt.ylabel("Accuracy")

        plt.legend()

        plt.grid()

 

        # Plotting Loss

        plt.subplot(1, 2, 2)

        plt.plot(range(1, num_epochs + 1), ar_train_loss, label=f"AR Train Loss")

        plt.plot(range(1, num_epochs + 1), ar_test_loss, label=f"AR Test Loss")

        plt.plot(range(1, num_epochs + 1), dropout_train_loss, label=f"Dropout Train Loss")

        plt.plot(range(1, num_epochs + 1), dropout_test_loss, label=f"Dropout Test Loss")

        plt.title(f"Loss (p={p}, noise={noise_factor})")

        plt.xlabel("Epoch")

        plt.ylabel("Loss")

        plt.legend()

        plt.grid()

 

        plt.tight_layout()

        plt.show()

 

 

 

 

 

 

 

Running with noise factor = 0.3

 

 

Training with p = 0.0

 

 

 

 

Training with p = 0.1

 

 

 

Training with p = 0.2

 

 

 

 

 

 

Training with p = 0.3

 

 

 

Training with p = 0.4

 

 

 

 

 

Running with noise factor = 0.5

 

Training with p = 0.0

 

 

 

Training with p = 0.1

 

 

 

 

 

 

Training with p = 0.2

 

 

 

Training with p = 0.3

 

 

 

 

 

 

 

Training with p = 0.4

 

 

 

 

Running with noise factor = 1.0

 

 

Training with p = 0.0

 

 

 

 

Training with p = 0.1

 

 

 

 

Training with p = 0.2

 

 

 

 

 

Training with p = 0.3

 

Training with p = 0.4

 

=================================================================

 

END OF ADVERSARIAL REGULARIZATION VS DROPOUT ON THE “FASHION-MNIST” DATASET (NOISY DATA) EXPERIMENT https://www.kaggle.com/datasets/zalando-research/fashionmnist

Straightforward Implementation of Adversarial Regularization vs. Dropout

(not included in the article)

 

=================================================================

I. CODE FOR THE EXPERIMENT ON “ADULT INCOME” DATASET WITH HIGHER NOISE LEVELS (included in the article) AND A SPECIAL IMPLEMENTATION OF ADVERSARIAL REGULARIZATION

=======================================================================

import torch

import torch.nn as nn

import torch.optim as optim

from torch.utils.data import DataLoader, TensorDataset

from sklearn.model_selection import train_test_split

from sklearn.preprocessing import StandardScaler, LabelEncoder

import pandas as pd

import numpy as np

import matplotlib.pyplot as plt

 

# Load and preprocess the Adult Income dataset

url = "https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data"

columns = [

    "age", "workclass", "fnlwgt", "education", "education-num", "marital-status", "occupation",

    "relationship", "race", "sex", "capital-gain", "capital-loss", "hours-per-week", "native-country", "income"

]

data = pd.read_csv(url, header=None, names=columns, na_values=" ?")

data = data.dropna()

 

# Encode categorical features and target

categorical_columns = data.select_dtypes(include=["object"]).columns[:-1]

label_encoders = {}

for col in categorical_columns:

    le = LabelEncoder()

    data[col] = le.fit_transform(data[col])

    label_encoders[col] = le

 

# Encode target variable
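# Note: fields in the raw adult.data file carry a leading space (e.g. " >50K"), hence the comparison string below.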

data["income"] = (data["income"] == " >50K").astype(int)

 

# Split features and target

X = data.drop("income", axis=1).values

y = data["income"].values

 

# Normalize features

X = StandardScaler().fit_transform(X)

 

# Train-test split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

 

# Convert to PyTorch tensors

X_train, X_test = map(torch.tensor, (X_train, X_test))

y_train, y_test = map(torch.tensor, (y_train, y_test))

X_train, X_test = X_train.float(), X_test.float()

y_train, y_test = y_train.long(), y_test.long()

 

# Create data loaders

batch_size = 64

train_loader = DataLoader(TensorDataset(X_train, y_train), batch_size=batch_size, shuffle=True)

test_loader = DataLoader(TensorDataset(X_test, y_test), batch_size=batch_size)

 

# Model with different regularization approaches

class RegularizedNN(nn.Module):

    def __init__(self, input_size=14, hidden_size=64, output_size=2, p_malignant=0.0, p_dropout=0.0):

        super(RegularizedNN, self).__init__()

        self.p_malignant = p_malignant

        self.p_dropout = p_dropout

 

        # Define layers

        self.fc1 = nn.Linear(input_size, hidden_size)

        self.relu = nn.ReLU()

        self.dropout = nn.Dropout(p_dropout)

        self.fc2 = nn.Linear(hidden_size, output_size)

 

        # Define malignant neurons mask

        if p_malignant > 0.0:

            self.malignant_mask = torch.zeros(hidden_size, dtype=torch.bool)

            num_malignant = int(hidden_size * p_malignant)

            self.malignant_mask[:num_malignant] = True

        else:

            self.malignant_mask = None

 

    def forward(self, x):

        x = self.fc1(x)

        x = self.relu(x)

        if self.malignant_mask is not None:
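            # Malignant neurons: scale their activations by fresh Gaussian factors on every forward
            # pass, randomly flipping the sign and magnitude of their contribution to the next layer.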

            x = x.clone()  # Avoid in-place operation

            scaling_factor = torch.randn_like(x[:, self.malignant_mask])  # Random noise

            x[:, self.malignant_mask] *= scaling_factor

        if self.p_dropout > 0.0:

            x = self.dropout(x)

        x = self.fc2(x)

        return x

 

# Train and evaluate function

def train_and_evaluate(p_malignant, p_dropout, num_epochs=50, lr=0.001, noise_level=0.0):

    model = RegularizedNN(p_malignant=p_malignant, p_dropout=p_dropout).to(device)

    criterion = nn.CrossEntropyLoss()

    optimizer = optim.Adam(model.parameters(), lr=lr)

 

    train_acc_list = []

    test_acc_list = []

 

    for epoch in range(num_epochs):

        # Training phase

        model.train()

        correct, total = 0, 0

 

        for inputs, labels in train_loader:

            inputs, labels = inputs.to(device), labels.to(device)

 

            # Add noise to inputs

            noisy_inputs = inputs + noise_level * torch.randn_like(inputs)

 

            # Forward pass

            outputs = model(noisy_inputs)

            loss = criterion(outputs, labels)

 

            # Backward pass

            optimizer.zero_grad()

            loss.backward()

            optimizer.step()

 

            # Accuracy

            _, predicted = torch.max(outputs, 1)

            total += labels.size(0)

            correct += (predicted == labels).sum().item()

 

        train_acc = correct / total

        train_acc_list.append(train_acc)

 

        # Testing phase

        model.eval()

        correct, total = 0, 0

 

        with torch.no_grad():

            for inputs, labels in test_loader:

                inputs, labels = inputs.to(device), labels.to(device)

                noisy_inputs = inputs + noise_level * torch.randn_like(inputs)

                outputs = model(noisy_inputs)

                _, predicted = torch.max(outputs, 1)

                total += labels.size(0)

                correct += (predicted == labels).sum().item()

 

        test_acc = correct / total

        test_acc_list.append(test_acc)

 

    return train_acc_list, test_acc_list

 

# Define experiment cases
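# Each case is labeled {p_malignant, p_dropout}.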

cases = [

    {"title": "No regularization {0.0, 0.0}", "p_malignant": 0.0, "p_dropout": 0.0},

    {"title": "Dropout {0.0, 0.2}", "p_malignant": 0.0, "p_dropout": 0.2},

    {"title": "Dropout {0.0, 0.4}", "p_malignant": 0.0, "p_dropout": 0.4},

    {"title": "Adversarial {0.2, 0.0}", "p_malignant": 0.2, "p_dropout": 0.0},

    {"title": "Adversarial {0.4, 0.0}", "p_malignant": 0.4, "p_dropout": 0.0},

    {"title": "Hybrid {0.2, 0.2}", "p_malignant": 0.2, "p_dropout": 0.2},

    {"title": "Hybrid {0.2, 0.4}", "p_malignant": 0.2, "p_dropout": 0.4},

    {"title": "Hybrid {0.4, 0.2}", "p_malignant": 0.4, "p_dropout": 0.2},

    {"title": "Hybrid {0.4, 0.4}", "p_malignant": 0.4, "p_dropout": 0.4},

]

 

# Noise levels

noise_levels = [0.0, 0.5, 1.0, 1.5, 2.0]

 

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

num_epochs = 50

 

# Run experiments

all_train_results = {}

all_test_results = {}

 

for noise_level in noise_levels:

    print(f"Running experiments with noise level: {noise_level}")

    train_results = []

    test_results = []

 

    for i, case in enumerate(cases, start=1):

        print(f"  Running Case {i}: {case['title']}")

        train_acc, test_acc = train_and_evaluate(case['p_malignant'], case['p_dropout'], num_epochs=num_epochs, noise_level=noise_level)

        train_results.append(train_acc)

        test_results.append(test_acc)

        print(f"  Case {i}: Final Train Accuracy = {train_acc[-1]:.4f}, Final Test Accuracy = {test_acc[-1]:.4f}")

 

    all_train_results[noise_level] = train_results

    all_test_results[noise_level] = test_results

 

    # Plot results for current noise level

    def plot_results(results, title, ylabel):

        plt.figure(figsize=(10, 6))

        for i, acc in enumerate(results):

            plt.plot(range(1, num_epochs + 1), acc, label=f"Case {i + 1}: {cases[i]['title']}")

        plt.title(f"{title} (Noise Level: {noise_level})")

        plt.xlabel("Epoch")

        plt.ylabel(ylabel)

        plt.legend()

        plt.grid()

        plt.show()

 

    plot_results(train_results, "Train Accuracy Across Cases", "Train Accuracy")

    plot_results(test_results, "Test Accuracy Across Cases", "Test Accuracy")

 

# Visualize accuracy trends across noise levels

def plot_accuracy_trends(results_dict, title, ylabel):

    plt.figure(figsize=(12, 8))

    for noise_level, results in results_dict.items():

        avg_acc = [np.mean([res[epoch] for res in results]) for epoch in range(num_epochs)]

        plt.plot(range(1, num_epochs + 1), avg_acc, label=f"Noise Level: {noise_level}")

    plt.title(title)

    plt.xlabel("Epoch")

    plt.ylabel(ylabel)

    plt.legend()

    plt.grid()

    plt.show()

 

plot_accuracy_trends(all_train_results, "Average Train Accuracy Trends Across Noise Levels", "Train Accuracy")

plot_accuracy_trends(all_test_results, "Average Test Accuracy Trends Across Noise Levels", "Test Accuracy")

=======================================================================

RESULTS OF THE EXPERIMENT ON “ADULT INCOME” DATASET WITH HIGHER NOISE LEVELS (included in the article):

=======================================================================

Running experiments with noise level: 0.0

  Running Case 1: No regularization {0.0, 0.0}

  Case 1: Final Train Accuracy = 0.8535, Final Test Accuracy = 0.8482

  Running Case 2: Dropout {0.0, 0.2}

  Case 2: Final Train Accuracy = 0.8484, Final Test Accuracy = 0.8512

  Running Case 3: Dropout {0.0, 0.4}

  Case 3: Final Train Accuracy = 0.8464, Final Test Accuracy = 0.8521

  Running Case 4: Adversarial {0.2, 0.0}

  Case 4: Final Train Accuracy = 0.8533, Final Test Accuracy = 0.8444

  Running Case 5: Adversarial {0.4, 0.0}

  Case 5: Final Train Accuracy = 0.8496, Final Test Accuracy = 0.8503

  Running Case 6: Hybrid {0.2, 0.2}

  Case 6: Final Train Accuracy = 0.8463, Final Test Accuracy = 0.8508

  Running Case 7: Hybrid {0.2, 0.4}

  Case 7: Final Train Accuracy = 0.8452, Final Test Accuracy = 0.8526

  Running Case 8: Hybrid {0.4, 0.2}

  Case 8: Final Train Accuracy = 0.8452, Final Test Accuracy = 0.8523

  Running Case 9: Hybrid {0.4, 0.4}

  Case 9: Final Train Accuracy = 0.8407, Final Test Accuracy = 0.8503

Running experiments with noise level: 0.5

  Running Case 1: No regularization {0.0, 0.0}

  Case 1: Final Train Accuracy = 0.8245, Final Test Accuracy = 0.8212

  Running Case 2: Dropout {0.0, 0.2}

  Case 2: Final Train Accuracy = 0.8206, Final Test Accuracy = 0.8248

  Running Case 3: Dropout {0.0, 0.4}

  Case 3: Final Train Accuracy = 0.8149, Final Test Accuracy = 0.8256

  Running Case 4: Adversarial {0.2, 0.0}

  Case 4: Final Train Accuracy = 0.8184, Final Test Accuracy = 0.8261

  Running Case 5: Adversarial {0.4, 0.0}

  Case 5: Final Train Accuracy = 0.8213, Final Test Accuracy = 0.8230

  Running Case 6: Hybrid {0.2, 0.2}

  Case 6: Final Train Accuracy = 0.8206, Final Test Accuracy = 0.8185

  Running Case 7: Hybrid {0.2, 0.4}

  Case 7: Final Train Accuracy = 0.8133, Final Test Accuracy = 0.8225

  Running Case 8: Hybrid {0.4, 0.2}

  Case 8: Final Train Accuracy = 0.8182, Final Test Accuracy = 0.8245

  Running Case 9: Hybrid {0.4, 0.4}

  Case 9: Final Train Accuracy = 0.8150, Final Test Accuracy = 0.8168

Running experiments with noise level: 1.0

  Running Case 1: No regularization {0.0, 0.0}

  Case 1: Final Train Accuracy = 0.7881, Final Test Accuracy = 0.7837

  Running Case 2: Dropout {0.0, 0.2}

  Case 2: Final Train Accuracy = 0.7860, Final Test Accuracy = 0.7837

  Running Case 3: Dropout {0.0, 0.4}

  Case 3: Final Train Accuracy = 0.7830, Final Test Accuracy = 0.7790

  Running Case 4: Adversarial {0.2, 0.0}

  Case 4: Final Train Accuracy = 0.7867, Final Test Accuracy = 0.7853

  Running Case 5: Adversarial {0.4, 0.0}

  Case 5: Final Train Accuracy = 0.7882, Final Test Accuracy = 0.7862

  Running Case 6: Hybrid {0.2, 0.2}

  Case 6: Final Train Accuracy = 0.7837, Final Test Accuracy = 0.7749

  Running Case 7: Hybrid {0.2, 0.4}

  Case 7: Final Train Accuracy = 0.7790, Final Test Accuracy = 0.7890

  Running Case 8: Hybrid {0.4, 0.2}

  Case 8: Final Train Accuracy = 0.7845, Final Test Accuracy = 0.7857

  Running Case 9: Hybrid {0.4, 0.4}

  Case 9: Final Train Accuracy = 0.7831, Final Test Accuracy = 0.7897

Running experiments with noise level: 1.5

  Running Case 1: No regularization {0.0, 0.0}

  Case 1: Final Train Accuracy = 0.7665, Final Test Accuracy = 0.7660

  Running Case 2: Dropout {0.0, 0.2}

  Case 2: Final Train Accuracy = 0.7657, Final Test Accuracy = 0.7658

  Running Case 3: Dropout {0.0, 0.4}

  Case 3: Final Train Accuracy = 0.7685, Final Test Accuracy = 0.7621

  Running Case 4: Adversarial {0.2, 0.0}

  Case 4: Final Train Accuracy = 0.7667, Final Test Accuracy = 0.7714

  Running Case 5: Adversarial {0.4, 0.0}

  Case 5: Final Train Accuracy = 0.7683, Final Test Accuracy = 0.7651

  Running Case 6: Hybrid {0.2, 0.2}

  Case 6: Final Train Accuracy = 0.7707, Final Test Accuracy = 0.7686

  Running Case 7: Hybrid {0.2, 0.4}

  Case 7: Final Train Accuracy = 0.7680, Final Test Accuracy = 0.7686

  Running Case 8: Hybrid {0.4, 0.2}

  Case 8: Final Train Accuracy = 0.7676, Final Test Accuracy = 0.7641

  Running Case 9: Hybrid {0.4, 0.4}

  Case 9: Final Train Accuracy = 0.7668, Final Test Accuracy = 0.7625

Running experiments with noise level: 2.0

  Running Case 1: No regularization {0.0, 0.0}

  Case 1: Final Train Accuracy = 0.7622, Final Test Accuracy = 0.7592

  Running Case 2: Dropout {0.0, 0.2}

  Case 2: Final Train Accuracy = 0.7617, Final Test Accuracy = 0.7600

  Running Case 3: Dropout {0.0, 0.4}

  Case 3: Final Train Accuracy = 0.7628, Final Test Accuracy = 0.7573

  Running Case 4: Adversarial {0.2, 0.0}

  Case 4: Final Train Accuracy = 0.7616, Final Test Accuracy = 0.7575

  Running Case 5: Adversarial {0.4, 0.0}

  Case 5: Final Train Accuracy = 0.7618, Final Test Accuracy = 0.7582

  Running Case 6: Hybrid {0.2, 0.2}

  Case 6: Final Train Accuracy = 0.7615, Final Test Accuracy = 0.7603

  Running Case 7: Hybrid {0.2, 0.4}

  Case 7: Final Train Accuracy = 0.7604, Final Test Accuracy = 0.7580

  Running Case 8: Hybrid {0.4, 0.2}

  Case 8: Final Train Accuracy = 0.7626, Final Test Accuracy = 0.7611

  Running Case 9: Hybrid {0.4, 0.4}

  Case 9: Final Train Accuracy = 0.7592, Final Test Accuracy = 0.7548

 

 

 

=======================================================================

END OF THE RESULTS OF THE EXPERIMENT ON “ADULT INCOME” DATASET WITH HIGHER NOISE LEVELS (included in the article)

=======================================================================

 

 

=======================================================================

ANALYSIS OF THE RESULTS OF THE EXPERIMENT ON “ADULT INCOME” DATASET WITH HIGHER NOISE LEVELS (included in the article)

=======================================================================

Table: Train and test accuracies of NNs trained on data from the “Adult Income” dataset with different regularization techniques {p_malignant, p_dropout} and various noise levels.

Regularization             | Noise levels (training)              | Noise levels (testing)
{p_malignant, p_dropout}   | 0      0.5    1.0    1.5    2.0      | 0      0.5    1.0    1.5    2.0
Case 1 {0.0, 0.0}          | 0.8535 0.8245 0.7881 0.7665 0.7622   | 0.8482 0.8212 0.7837 0.7660 0.7592
Case 2 {0.0, 0.2}          | 0.8484 0.8206 0.7860 0.7657 0.7617   | 0.8512 0.8248 0.7837 0.7658 0.7600
Case 3 {0.0, 0.4}          | 0.8464 0.8149 0.7830 0.7685 0.7628   | 0.8521 0.8256 0.7790 0.7621 0.7573
Case 4 {0.2, 0.0}          | 0.8533 0.8184 0.7867 0.7667 0.7616   | 0.8444 0.8261 0.7853 0.7714 0.7575
Case 5 {0.4, 0.0}          | 0.8496 0.8213 0.7882 0.7683 0.7618   | 0.8503 0.8230 0.7862 0.7651 0.7582
Case 6 {0.2, 0.2}          | 0.8463 0.8206 0.7837 0.7707 0.7615   | 0.8508 0.8185 0.7749 0.7686 0.7603
Case 7 {0.2, 0.4}          | 0.8452 0.8133 0.7790 0.7680 0.7604   | 0.8526 0.8225 0.7890 0.7686 0.7580
Case 8 {0.4, 0.2}          | 0.8452 0.8182 0.7845 0.7676 0.7626   | 0.8523 0.8245 0.7857 0.7641 0.7611
Case 9 {0.4, 0.4}          | 0.8407 0.8150 0.7831 0.7688 0.7592   | 0.8503 0.8168 0.7897 0.7625 0.7548

Analysis of the results (“Adult Income” dataset) suggests the following:

I. No regularization (summing accuracies separately for training and testing over noise levels):

Case 1 {0.0, 0.0}: Sum (training): 3.9948 | Sum (testing): 3.9783 | Place in training: 1 | Place in testing: 6

II. Dropout regularization (summing accuracies separately for training and testing over noise levels):

Case 2 {0.0, 0.2}: Sum (training): 3.9824 | Sum (testing): 3.9855 | Place in training: 5 | Place in testing: 3

Case 3 {0.0, 0.4}: Sum (training): 3.9756 | Sum (testing): 3.9761 | Place in training: 7 | Place in testing: 7

III. AR (summing accuracies separately for training and testing over noise levels):

Case 4 {0.2, 0.0}: Sum (training): 3.9867 | Sum (testing): 3.9847 | Place in training: 3 | Place in testing: 4

Case 5 {0.4, 0.0}: Sum (training): 3.9892 | Sum (testing): 3.9828 | Place in training: 2 | Place in testing: 5

IV. Hybrid regularization (summing accuracies separately for training and testing over noise levels):

Case 6 {0.2, 0.2}: Sum (training): 3.9828 | Sum (testing): 3.9731 | Place in training: 4 | Place in testing: 9

Case 7 {0.2, 0.4}: Sum (training): 3.9659 | Sum (testing): 3.9907 | Place in training: 9 | Place in testing: 1

Case 8 {0.4, 0.2}: Sum (training): 3.9781 | Sum (testing): 3.9877 | Place in training: 6 | Place in testing: 2

Case 9 {0.4, 0.4}: Sum (training): 3.9668 | Sum (testing): 3.9741 | Place in training: 8 | Place in testing: 8

V. Final scoring for the regularization techniques competition (training):

Overall “no reg.” score (training): 1/min_place_in_training = 1/1 = 1.0;                 Place: 1 (Winner).

Overall Dropout regularization score (training):                       1/5 = 0.20;               Place: 4.

Overall AR score (training):                                                      1/2 = 0.5;               Place: 2.

Overall hybrid regularization score (training):                          1/4 = 0.25;               Place: 3.

VI. Final scoring for the regularization techniques competition (testing):

Overall “no reg.” score (testing): 1/min_place_in_testing     = 1/6 = 0.1(6);            Place: 4.

Overall Dropout regularization score (testing):                         1/3 = 0.(3);              Place: 2.

Overall AR score (testing):                                                        1/4 = 0.25;               Place: 3.

Overall hybrid regularization score (testing):                            1/1 = 1.0;                 Place: 1 (Winner).
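
The sums, places, and scores above follow directly from the table: each case's accuracies are summed over the five noise levels, the nine cases are ranked by that sum (place 1 = largest), and each technique family is scored by the reciprocal of the best place among its cases. The snippet below is a minimal sketch of that computation for the testing half of the “Adult Income” table; the names test_acc and families are illustrative only and are not part of the experiment code.

# Minimal sketch: reproduce the "sum -> place -> 1/best-place" scoring used above
# (testing accuracies copied from the table; dictionary names are illustrative only).
test_acc = {
    1: [0.8482, 0.8212, 0.7837, 0.7660, 0.7592],  # No regularization {0.0, 0.0}
    2: [0.8512, 0.8248, 0.7837, 0.7658, 0.7600],  # Dropout {0.0, 0.2}
    3: [0.8521, 0.8256, 0.7790, 0.7621, 0.7573],  # Dropout {0.0, 0.4}
    4: [0.8444, 0.8261, 0.7853, 0.7714, 0.7575],  # Adversarial {0.2, 0.0}
    5: [0.8503, 0.8230, 0.7862, 0.7651, 0.7582],  # Adversarial {0.4, 0.0}
    6: [0.8508, 0.8185, 0.7749, 0.7686, 0.7603],  # Hybrid {0.2, 0.2}
    7: [0.8526, 0.8225, 0.7890, 0.7686, 0.7580],  # Hybrid {0.2, 0.4}
    8: [0.8523, 0.8245, 0.7857, 0.7641, 0.7611],  # Hybrid {0.4, 0.2}
    9: [0.8503, 0.8168, 0.7897, 0.7625, 0.7548],  # Hybrid {0.4, 0.4}
}
families = {"No reg.": [1], "Dropout": [2, 3], "AR": [4, 5], "Hybrid": [6, 7, 8, 9]}

sums = {case: sum(vals) for case, vals in test_acc.items()}   # sum over noise levels
ranked = sorted(sums, key=sums.get, reverse=True)             # case numbers, best first
place = {case: i + 1 for i, case in enumerate(ranked)}        # place 1 = largest sum

for name, members in families.items():
    best = min(place[c] for c in members)                     # best place within the family
    print(f"{name}: best place = {best}, score = {1 / best:.3f}")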

=======================================================================

END OF THE ANALYSIS OF THE RESULTS OF THE EXPERIMENT ON “ADULT INCOME” DATASET WITH HIGHER NOISE LEVELS (included in the article)

=======================================================================

=======================================================================

END OF THE EXPERIMENT ON “ADULT INCOME” DATASET WITH HIGHER NOISE LEVELS (included in the article)

=======================================================================

 

 

=======================================================================

II. CODE FOR THE EXPERIMENT ON “WINE” DATASET WITH HIGHER NOISE LEVELS (included in the article)

=======================================================================

import torch

import torch.nn as nn

import torch.optim as optim

from torch.utils.data import DataLoader, TensorDataset

from sklearn.model_selection import train_test_split

from sklearn.preprocessing import StandardScaler

import pandas as pd

import numpy as np

import matplotlib.pyplot as plt

 

# Load and preprocess the Wine Quality dataset

url = "https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv"

data = pd.read_csv(url, delimiter=';')

 

# Prepare features and target

X = data.drop("quality", axis=1).values

y = (data["quality"] >= 6).astype(int).values  # Binary classification: good (>=6) vs not good (<6)

 

# Normalize features

X = StandardScaler().fit_transform(X)

 

# Train-test split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

 

# Convert to PyTorch tensors

X_train, X_test = map(torch.tensor, (X_train, X_test))

y_train, y_test = map(torch.tensor, (y_train, y_test))

X_train, X_test = X_train.float(), X_test.float()

y_train, y_test = y_train.long(), y_test.long()

 

# Create data loaders

batch_size = 64

train_loader = DataLoader(TensorDataset(X_train, y_train), batch_size=batch_size, shuffle=True)

test_loader = DataLoader(TensorDataset(X_test, y_test), batch_size=batch_size)

 

# Regularized Neural Network

class RegularizedNN(nn.Module):

    def __init__(self, input_size=11, hidden_size=64, output_size=2, p_malignant=0.0, p_dropout=0.0):

        super(RegularizedNN, self).__init__()

        self.p_malignant = p_malignant

        self.p_dropout = p_dropout

 

        self.fc1 = nn.Linear(input_size, hidden_size)

        self.relu = nn.ReLU()

        self.dropout = nn.Dropout(p_dropout)

        self.fc2 = nn.Linear(hidden_size, output_size)

 

        if p_malignant > 0.0:

            self.malignant_mask = torch.zeros(hidden_size, dtype=torch.bool)

            num_malignant = int(hidden_size * p_malignant)

            self.malignant_mask[:num_malignant] = True

        else:

            self.malignant_mask = None

 

    def forward(self, x):

        x = self.fc1(x)

        x = self.relu(x)

        if self.malignant_mask is not None:

            x = x.clone()

            scaling_factor = torch.randn_like(x[:, self.malignant_mask])

            x[:, self.malignant_mask] *= scaling_factor

        if self.p_dropout > 0.0:

            x = self.dropout(x)

        x = self.fc2(x)

        return x

 

# Training and evaluation logic (unchanged)

def train_and_evaluate(p_malignant, p_dropout, num_epochs=50, lr=0.001, noise_level=0.0):

    model = RegularizedNN(p_malignant=p_malignant, p_dropout=p_dropout).to(device)

    criterion = nn.CrossEntropyLoss()

    optimizer = optim.Adam(model.parameters(), lr=lr)

 

    train_acc_list = []

    test_acc_list = []

 

    for epoch in range(num_epochs):

        model.train()

        correct, total = 0, 0

 

        for inputs, labels in train_loader:

            inputs, labels = inputs.to(device), labels.to(device)

            noisy_inputs = inputs + noise_level * torch.randn_like(inputs)

            outputs = model(noisy_inputs)

            loss = criterion(outputs, labels)

 

            optimizer.zero_grad()

            loss.backward()

            optimizer.step()

 

            _, predicted = torch.max(outputs, 1)

            total += labels.size(0)

            correct += (predicted == labels).sum().item()

 

        train_acc = correct / total

        train_acc_list.append(train_acc)

 

        model.eval()

        correct, total = 0, 0

 

        with torch.no_grad():

            for inputs, labels in test_loader:

                inputs, labels = inputs.to(device), labels.to(device)

                noisy_inputs = inputs + noise_level * torch.randn_like(inputs)

                outputs = model(noisy_inputs)

                _, predicted = torch.max(outputs, 1)

                total += labels.size(0)

                correct += (predicted == labels).sum().item()

 

        test_acc = correct / total

        test_acc_list.append(test_acc)

 

    return train_acc_list, test_acc_list

 

# Define experiment cases (same as original)

cases = [

    {"title": "No regularization {0.0, 0.0}", "p_malignant": 0.0, "p_dropout": 0.0},

    {"title": "Dropout {0.0, 0.2}", "p_malignant": 0.0, "p_dropout": 0.2},

    {"title": "Dropout {0.0, 0.4}", "p_malignant": 0.0, "p_dropout": 0.4},

    {"title": "Adversarial {0.2, 0.0}", "p_malignant": 0.2, "p_dropout": 0.0},

    {"title": "Adversarial {0.4, 0.0}", "p_malignant": 0.4, "p_dropout": 0.0},

    {"title": "Hybrid {0.2, 0.2}", "p_malignant": 0.2, "p_dropout": 0.2},

    {"title": "Hybrid {0.2, 0.4}", "p_malignant": 0.2, "p_dropout": 0.4},

    {"title": "Hybrid {0.4, 0.2}", "p_malignant": 0.4, "p_dropout": 0.2},

    {"title": "Hybrid {0.4, 0.4}", "p_malignant": 0.4, "p_dropout": 0.4},

]

 

noise_levels = [0.0, 0.5, 1.0, 1.5, 2.0]

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

num_epochs = 50

 

# Run experiments

all_train_results = {}

all_test_results = {}

 

for noise_level in noise_levels:

    print(f"Running experiments with noise level: {noise_level}")

    train_results = []

    test_results = []

 

    for i, case in enumerate(cases, start=1):

        print(f"  Running Case {i}: {case['title']}")

        train_acc, test_acc = train_and_evaluate(case['p_malignant'], case['p_dropout'], num_epochs=num_epochs, noise_level=noise_level)

        train_results.append(train_acc)

        test_results.append(test_acc)

        print(f"  Case {i}: Final Train Accuracy = {train_acc[-1]:.4f}, Final Test Accuracy = {test_acc[-1]:.4f}")

 

    all_train_results[noise_level] = train_results

    all_test_results[noise_level] = test_results

 

=======================================================================

 

=======================================================================

RESULTS OF THE EXPERIMENT ON “WINE” DATASET WITH HIGHER NOISE LEVELS (included in the article):

=======================================================================

 

Running experiments with noise level: 0.0

  Running Case 1: No regularization {0.0, 0.0}

  Case 1: Final Train Accuracy = 0.7826, Final Test Accuracy = 0.7500

  Running Case 2: Dropout {0.0, 0.2}

  Case 2: Final Train Accuracy = 0.7772, Final Test Accuracy = 0.7531

  Running Case 3: Dropout {0.0, 0.4}

  Case 3: Final Train Accuracy = 0.7623, Final Test Accuracy = 0.7469

  Running Case 4: Adversarial {0.2, 0.0}

  Case 4: Final Train Accuracy = 0.7772, Final Test Accuracy = 0.7531

  Running Case 5: Adversarial {0.4, 0.0}

  Case 5: Final Train Accuracy = 0.7780, Final Test Accuracy = 0.7438

  Running Case 6: Hybrid {0.2, 0.2}

  Case 6: Final Train Accuracy = 0.7709, Final Test Accuracy = 0.7469

  Running Case 7: Hybrid {0.2, 0.4}

  Case 7: Final Train Accuracy = 0.7701, Final Test Accuracy = 0.7438

  Running Case 8: Hybrid {0.4, 0.2}

  Case 8: Final Train Accuracy = 0.7780, Final Test Accuracy = 0.7500

  Running Case 9: Hybrid {0.4, 0.4}

  Case 9: Final Train Accuracy = 0.7639, Final Test Accuracy = 0.7375

 

Running experiments with noise level: 0.5

  Running Case 1: No regularization {0.0, 0.0}

  Case 1: Final Train Accuracy = 0.7428, Final Test Accuracy = 0.7250

  Running Case 2: Dropout {0.0, 0.2}

  Case 2: Final Train Accuracy = 0.7420, Final Test Accuracy = 0.7219

  Running Case 3: Dropout {0.0, 0.4}

  Case 3: Final Train Accuracy = 0.7342, Final Test Accuracy = 0.7344

  Running Case 4: Adversarial {0.2, 0.0}

  Case 4: Final Train Accuracy = 0.7467, Final Test Accuracy = 0.7188

  Running Case 5: Adversarial {0.4, 0.0}

  Case 5: Final Train Accuracy = 0.7412, Final Test Accuracy = 0.7375

  Running Case 6: Hybrid {0.2, 0.2}

  Case 6: Final Train Accuracy = 0.7295, Final Test Accuracy = 0.6969

  Running Case 7: Hybrid {0.2, 0.4}

  Case 7: Final Train Accuracy = 0.7287, Final Test Accuracy = 0.7063

  Running Case 8: Hybrid {0.4, 0.2}

  Case 8: Final Train Accuracy = 0.7522, Final Test Accuracy = 0.7156

  Running Case 9: Hybrid {0.4, 0.4}

  Case 9: Final Train Accuracy = 0.7185, Final Test Accuracy = 0.7344

 

Running experiments with noise level: 1.0

  Running Case 1: No regularization {0.0, 0.0}

  Case 1: Final Train Accuracy = 0.6810, Final Test Accuracy = 0.7031

  Running Case 2: Dropout {0.0, 0.2}

  Case 2: Final Train Accuracy = 0.6794, Final Test Accuracy = 0.6969

  Running Case 3: Dropout {0.0, 0.4}

  Case 3: Final Train Accuracy = 0.6873, Final Test Accuracy = 0.6844

  Running Case 4: Adversarial {0.2, 0.0}

  Case 4: Final Train Accuracy = 0.6873, Final Test Accuracy = 0.6750

  Running Case 5: Adversarial {0.4, 0.0}

  Case 5: Final Train Accuracy = 0.6974, Final Test Accuracy = 0.6844

  Running Case 6: Hybrid {0.2, 0.2}

  Case 6: Final Train Accuracy = 0.6685, Final Test Accuracy = 0.6781

  Running Case 7: Hybrid {0.2, 0.4}

  Case 7: Final Train Accuracy = 0.6787, Final Test Accuracy = 0.6844

  Running Case 8: Hybrid {0.4, 0.2}

  Case 8: Final Train Accuracy = 0.6755, Final Test Accuracy = 0.7188

  Running Case 9: Hybrid {0.4, 0.4}

  Case 9: Final Train Accuracy = 0.6740, Final Test Accuracy = 0.6906

 

Running experiments with noise level: 1.5

  Running Case 1: No regularization {0.0, 0.0}

  Case 1: Final Train Accuracy = 0.6482, Final Test Accuracy = 0.6562

  Running Case 2: Dropout {0.0, 0.2}

  Case 2: Final Train Accuracy = 0.6396, Final Test Accuracy = 0.6813

  Running Case 3: Dropout {0.0, 0.4}

  Case 3: Final Train Accuracy = 0.6568, Final Test Accuracy = 0.6469

  Running Case 4: Adversarial {0.2, 0.0}

  Case 4: Final Train Accuracy = 0.6630, Final Test Accuracy = 0.6531

  Running Case 5: Adversarial {0.4, 0.0}

  Case 5: Final Train Accuracy = 0.6466, Final Test Accuracy = 0.6438

  Running Case 6: Hybrid {0.2, 0.2}

  Case 6: Final Train Accuracy = 0.6536, Final Test Accuracy = 0.6531

  Running Case 7: Hybrid {0.2, 0.4}

  Case 7: Final Train Accuracy = 0.6513, Final Test Accuracy = 0.6094

  Running Case 8: Hybrid {0.4, 0.2}

  Case 8: Final Train Accuracy = 0.6325, Final Test Accuracy = 0.6250

  Running Case 9: Hybrid {0.4, 0.4}

  Case 9: Final Train Accuracy = 0.6435, Final Test Accuracy = 0.6125

 

Running experiments with noise level: 2.0

  Running Case 1: No regularization {0.0, 0.0}

  Case 1: Final Train Accuracy = 0.6294, Final Test Accuracy = 0.6125

  Running Case 2: Dropout {0.0, 0.2}

  Case 2: Final Train Accuracy = 0.6216, Final Test Accuracy = 0.6000

  Running Case 3: Dropout {0.0, 0.4}

  Case 3: Final Train Accuracy = 0.6278, Final Test Accuracy = 0.6125

  Running Case 4: Adversarial {0.2, 0.0}

  Case 4: Final Train Accuracy = 0.6380, Final Test Accuracy = 0.6625

  Running Case 5: Adversarial {0.4, 0.0}

  Case 5: Final Train Accuracy = 0.6302, Final Test Accuracy = 0.5906

  Running Case 6: Hybrid {0.2, 0.2}

  Case 6: Final Train Accuracy = 0.6169, Final Test Accuracy = 0.5906

  Running Case 7: Hybrid {0.2, 0.4}

  Case 7: Final Train Accuracy = 0.6138, Final Test Accuracy = 0.6531

  Running Case 8: Hybrid {0.4, 0.2}

  Case 8: Final Train Accuracy = 0.6224, Final Test Accuracy = 0.6719

  Running Case 9: Hybrid {0.4, 0.4}

  Case 9: Final Train Accuracy = 0.6145, Final Test Accuracy = 0.6406

 

=======================================================================

END OF THE RESULTS OF THE EXPERIMENT ON “WINE” DATASET WITH HIGHER NOISE LEVELS (included in the article)

=======================================================================

 

 

=======================================================================

ANALYSIS OF THE RESULTS OF THE EXPERIMENT ON “WINE” DATASET WITH HIGHER NOISE LEVELS (included in the article)

=======================================================================

Table: Train and test accuracies of NNs trained on data from “Wine” dataset with different regularization techniques and various noise levels.

------------------------------------------------------------------------------------------------------------------------
Regularization                          Noise levels (training)                         Noise levels (testing)
{p_malignant, p_dropout}         0      0.5     1.0     1.5     2.0             0      0.5     1.0     1.5     2.0
------------------------------------------------------------------------------------------------------------------------
Case 1 {0.0, 0.0}             0.7826  0.7428  0.6810  0.6482  0.6294         0.7500  0.7250  0.7031  0.6562  0.6125
Case 2 {0.0, 0.2}             0.7772  0.7420  0.6794  0.6396  0.6216         0.7531  0.7219  0.6969  0.6813  0.6000
Case 3 {0.0, 0.4}             0.7623  0.7342  0.6873  0.6568  0.6278         0.7469  0.7344  0.6844  0.6469  0.6125
Case 4 {0.2, 0.0}             0.7772  0.7467  0.6873  0.6630  0.6380         0.7531  0.7188  0.6750  0.6531  0.6625
Case 5 {0.4, 0.0}             0.7780  0.7412  0.6974  0.6466  0.6302         0.7438  0.7375  0.6844  0.6438  0.5906
Case 6 {0.2, 0.2}             0.7709  0.7295  0.6685  0.6536  0.6169         0.7469  0.6969  0.6781  0.6531  0.5906
Case 7 {0.2, 0.4}             0.7701  0.7287  0.6787  0.6513  0.6138         0.7438  0.7063  0.6844  0.6094  0.6531
Case 8 {0.4, 0.2}             0.7780  0.7522  0.6755  0.6325  0.6224         0.7500  0.7156  0.7188  0.6250  0.6719
Case 9 {0.4, 0.4}             0.7639  0.7185  0.6740  0.6435  0.6145         0.7375  0.7344  0.6906  0.6125  0.6406
------------------------------------------------------------------------------------------------------------------------

Analysis of the results (“Wine” dataset) suggests the following:

I. No regularization (summing accuracies separately for training and testing over noise levels):

Case 1 {0.0, 0.0}: Sum (training): 3.4840 | Sum (testing): 3.4468 | Place in training: 3 | Place in testing: 4.

II. Dropout regularization (two cases):

Case 2 {0.0, 0.2}: Sum (training): 3.4598 | Sum (testing): 3.4532 | Place in training: 6 | Place in testing: 3.

Case 3 {0.0, 0.4}: Sum (training): 3.4684 | Sum (testing): 3.4251 | Place in training: 4 | Place in testing: 5.

III. AR (summing accuracies separately for training and testing over noise levels):

Case 4 {0.2, 0.0}: Sum (training): 3.5122 | Sum (testing): 3.4625 | Place in training: 1 | Place in testing: 2.

Case 5 {0.4, 0.0}: Sum (training): 3.4934 | Sum (testing): 3.4001 | Place in training: 2 | Place in testing: 7.

IV. Hybrid regularization (summing accuracies separately for training and testing over noise levels):

Case 6 {0.2, 0.2}: Sum (training): 3.4394 | Sum (testing): 3.3656 | Place in training: 8 | Place in testing: 9.

Case 7 {0.2, 0.4}: Sum (training): 3.4426 | Sum (testing): 3.3970 | Place in training: 7 | Place in testing: 8.

Case 8 {0.4, 0.2}: Sum (training): 3.4606 | Sum (testing): 3.4813 | Place in training: 5 | Place in testing: 1.

Case 9 {0.4, 0.4}: Sum (training): 3.4144 | Sum (testing): 3.4156 | Place in training: 9 | Place in testing: 6.

V. Final scoring for the regularization techniques competition (training):

Overall “no reg.” score (training): 1/min_place_in_training = 1/3 = 0.(3); Place: 2.

Overall Dropout regularization score (training): 1/4 = 0.25; Place: 3.

Overall AR score (training): 1/1 = 1.0; Place: 1 (Winner).

Overall hybrid regularization score (training): 1/5 = 0.2; Place: 4.

VI. Final scoring for the regularization techniques competition (testing):

Overall “no reg.” score (testing): 1/min_place_in_testing = 1/4 = 0.25; Place: 4.

Overall Dropout regularization score (testing): 1/3 = 0.(3); Place: 3.

Overall AR score (testing): 1/2 = 0.5; Place: 2.

Overall hybrid regularization score (testing): 1/1 = 1.0; Place: 1 (Winner).
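For reproducibility, the 1/min_place scoring above can be recomputed with a short Python sketch (this sketch is not part of the article's experiment code). The accuracy lists below simply restate the training columns of the table; applying the same procedure to the testing columns yields the scores in item VI.

# Minimal sketch reproducing the scoring in items V and VI from the per-case accuracies.
train_acc = {
    "Case 1": [0.7826, 0.7428, 0.6810, 0.6482, 0.6294],
    "Case 2": [0.7772, 0.7420, 0.6794, 0.6396, 0.6216],
    "Case 3": [0.7623, 0.7342, 0.6873, 0.6568, 0.6278],
    "Case 4": [0.7772, 0.7467, 0.6873, 0.6630, 0.6380],
    "Case 5": [0.7780, 0.7412, 0.6974, 0.6466, 0.6302],
    "Case 6": [0.7709, 0.7295, 0.6685, 0.6536, 0.6169],
    "Case 7": [0.7701, 0.7287, 0.6787, 0.6513, 0.6138],
    "Case 8": [0.7780, 0.7522, 0.6755, 0.6325, 0.6224],
    "Case 9": [0.7639, 0.7185, 0.6740, 0.6435, 0.6145],
}

# Group the cases by regularization technique, as in items V and VI.
groups = {
    "No reg.": ["Case 1"],
    "Dropout": ["Case 2", "Case 3"],
    "AR":      ["Case 4", "Case 5"],
    "Hybrid":  ["Case 6", "Case 7", "Case 8", "Case 9"],
}

# Sum accuracies over noise levels and rank the cases (place 1 = largest sum).
sums = {case: sum(acc) for case, acc in train_acc.items()}
ranked = sorted(sums, key=sums.get, reverse=True)
place = {case: i + 1 for i, case in enumerate(ranked)}

# Each technique is scored as 1 / (best place among its cases).
for name, cases in groups.items():
    best_place = min(place[c] for c in cases)
    print(f"{name}: best place {best_place}, score {1 / best_place:.3f}")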

 

=======================================================================

END OF THE ANALYSIS OF THE RESULTS OF THE EXPERIMENT ON “WINE” DATASET WITH HIGHER NOISE LEVELS (included in the article)

=======================================================================

=======================================================================

END OF THE EXPERIMENT ON “WINE” DATASET WITH HIGHER NOISE LEVELS (included in the article)

=======================================================================

 

 

 

=======================================================================

PRELIMINARY OR DRAFT EXPERIMENTS

NOT INCLUDED IN THE ARTICLE

=======================================================================

III. CODE FOR THE EXPERIMENT ON “BREAST CANCER” DATASET

(not included in the article)

=======================================================================

import torch

import torch.nn as nn

import torch.optim as optim

from sklearn.datasets import fetch_openml

from sklearn.model_selection import train_test_split

from sklearn.preprocessing import StandardScaler, OneHotEncoder

import pandas as pd

import numpy as np

import matplotlib.pyplot as plt

 

# Load and preprocess the dataset

def load_dataset():

    # Load the breast_cancer dataset

    data = fetch_openml(name="breast-cancer", version=1, as_frame=True)

    df = data.frame

 

    # One-hot encode categorical features

    df = pd.get_dummies(df, drop_first=True)

    X = df.drop(columns=["Class_recurrence-events"])

    y = df["Class_recurrence-events"]

 

    # Split the dataset

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

 

    # Standardize the features

    scaler = StandardScaler()

    X_train = scaler.fit_transform(X_train)

    X_test = scaler.transform(X_test)

 

    return (torch.tensor(X_train, dtype=torch.float32),

            torch.tensor(X_test, dtype=torch.float32),

            torch.tensor(y_train.values, dtype=torch.float32).unsqueeze(1),

            torch.tensor(y_test.values, dtype=torch.float32).unsqueeze(1))

 

# Define a neural network matching the AR code configuration

class DropoutNN(nn.Module):

    def __init__(self, input_size, dropout_rate=0.0):

        super(DropoutNN, self).__init__()

        self.fc1 = nn.Linear(input_size, 64)

        self.relu1 = nn.ReLU()

        self.dropout1 = nn.Dropout(dropout_rate)

        self.fc2 = nn.Linear(64, 32)

        self.relu2 = nn.ReLU()

        self.dropout2 = nn.Dropout(dropout_rate)

        self.fc3 = nn.Linear(32, 1)

        self.sigmoid = nn.Sigmoid()

 

    def forward(self, x):

        x = self.fc1(x)

        x = self.relu1(x)

        x = self.dropout1(x)

        x = self.fc2(x)

        x = self.relu2(x)

        x = self.dropout2(x)

        x = self.fc3(x)

        x = self.sigmoid(x)

        return x

 

# Define training function

def train_model(model, criterion, optimizer, X_train, y_train, X_test, y_test, epochs):

    train_loss, test_loss = [], []

    train_acc, test_acc = [], []

 

    for epoch in range(epochs):

        model.train()

        optimizer.zero_grad()

        outputs = model(X_train)

        loss = criterion(outputs, y_train)

        loss.backward()

        optimizer.step()

 

        # Record training metrics

        train_loss.append(loss.item())

        train_acc.append(((outputs > 0.5) == y_train).float().mean().item())

 

        # Evaluate on test data

        model.eval()

        with torch.no_grad():

            test_outputs = model(X_test)

            test_loss.append(criterion(test_outputs, y_test).item())

            test_acc.append(((test_outputs > 0.5) == y_test).float().mean().item())

 

    return train_loss, test_loss, train_acc, test_acc

 

# Main experiment setup

def run_experiment():

    X_train, X_test, y_train, y_test = load_dataset()

    input_size = X_train.shape[1]

    epochs = 50

    learning_rate = 0.01

    dropout_rates = [0.0, 0.1, 0.2, 0.3, 0.4, 0.49]

 

    results = {}

 

    for dropout_rate in dropout_rates:

        model = DropoutNN(input_size, dropout_rate=dropout_rate)

        optimizer = optim.Adam(model.parameters(), lr=learning_rate)

        criterion = nn.BCELoss()

 

        print(f"Training with dropout_rate = {dropout_rate}")

        train_loss, test_loss, train_acc, test_acc = train_model(

            model, criterion, optimizer, X_train, y_train, X_test, y_test, epochs)

 

        results[dropout_rate] = {

            "train_loss": train_loss,

            "test_loss": test_loss,

            "train_acc": train_acc,

            "test_acc": test_acc

        }

 

    # Plotting results

    for metric in ["train_acc", "test_acc"]:

        plt.figure(figsize=(10, 6))

        for dropout_rate, metrics in results.items():

            plt.plot(metrics[metric], label=f"dropout_rate={dropout_rate}")

        plt.title(f"{metric.replace('_', ' ').capitalize()} Over Epochs")

        plt.xlabel("Epochs")

        plt.ylabel(metric.replace('_', ' ').capitalize())

        plt.legend()

        plt.grid()

        plt.show()

 

    return results

 

# Run the experiment

results = run_experiment()

 

# Output numerical results

for dropout_rate, metrics in results.items():

    print(f"dropout_rate = {dropout_rate}")

    print(f"  Final Train Acc: {metrics['train_acc'][-1]:.4f}")

    print(f"  Final Test Acc: {metrics['test_acc'][-1]:.4f}")

    print(f"  Final Train Loss: {metrics['train_loss'][-1]:.4f}")

    print(f"  Final Test Loss: {metrics['test_loss'][-1]:.4f}")

 

=======================================================================

RESULTS OF THE EXPERIMENT ON “BREAST CANCER” DATASET

(not included in the article)

=======================================================================

Running experiment with p_malignant = 0.0

Epoch [1/40] Train Acc: 0.8308 Test Acc: 0.9474

Epoch [2/40] Train Acc: 0.9451 Test Acc: 0.9649

Epoch [3/40] Train Acc: 0.9538 Test Acc: 0.9737

Epoch [4/40] Train Acc: 0.9648 Test Acc: 0.9825

Epoch [5/40] Train Acc: 0.9758 Test Acc: 0.9825

Epoch [6/40] Train Acc: 0.9780 Test Acc: 0.9825

Epoch [7/40] Train Acc: 0.9780 Test Acc: 0.9825

Epoch [8/40] Train Acc: 0.9802 Test Acc: 0.9825

Epoch [9/40] Train Acc: 0.9802 Test Acc: 0.9825

Epoch [10/40] Train Acc: 0.9824 Test Acc: 0.9825

Epoch [11/40] Train Acc: 0.9824 Test Acc: 0.9912

Epoch [12/40] Train Acc: 0.9868 Test Acc: 0.9912

Epoch [13/40] Train Acc: 0.9868 Test Acc: 0.9825

Epoch [14/40] Train Acc: 0.9868 Test Acc: 0.9825

Epoch [15/40] Train Acc: 0.9868 Test Acc: 0.9825

p_malignant = 0.0 Final Train Acc: 0.9934 Final Test Acc: 0.9825

 

Running experiment with p_malignant = 0.1

Epoch [1/40] Train Acc: 0.8703 Test Acc: 0.9649

Epoch [2/40] Train Acc: 0.9363 Test Acc: 0.9649

Epoch [3/40] Train Acc: 0.9385 Test Acc: 0.9737

Epoch [4/40] Train Acc: 0.9582 Test Acc: 0.9912

Epoch [5/40] Train Acc: 0.9714 Test Acc: 0.9825

Epoch [6/40] Train Acc: 0.9780 Test Acc: 0.9825

Epoch [7/40] Train Acc: 0.9802 Test Acc: 0.9912

Epoch [8/40] Train Acc: 0.9846 Test Acc: 0.9825

Epoch [9/40] Train Acc: 0.9868 Test Acc: 0.9825

Epoch [10/40] Train Acc: 0.9868 Test Acc: 0.9912

Epoch [11/40] Train Acc: 0.9846 Test Acc: 0.9825

Epoch [12/40] Train Acc: 0.9868 Test Acc: 0.9912

Epoch [13/40] Train Acc: 0.9868 Test Acc: 0.9825

Epoch [14/40] Train Acc: 0.9868 Test Acc: 0.9825

Epoch [15/40] Train Acc: 0.9868 Test Acc: 0.9825

p_malignant = 0.1 Final Train Acc: 0.9934 Final Test Acc: 0.9825

 

Running experiment with p_malignant = 0.2

Epoch [1/40] Train Acc: 0.6242 Test Acc: 0.9649

Epoch [2/40] Train Acc: 0.9319 Test Acc: 0.9737

Epoch [3/40] Train Acc: 0.9495 Test Acc: 0.9737

Epoch [4/40] Train Acc: 0.9560 Test Acc: 0.9737

Epoch [5/40] Train Acc: 0.9648 Test Acc: 0.9912

Epoch [6/40] Train Acc: 0.9736 Test Acc: 0.9912

Epoch [7/40] Train Acc: 0.9802 Test Acc: 0.9825

Epoch [8/40] Train Acc: 0.9780 Test Acc: 0.9825

Epoch [9/40] Train Acc: 0.9824 Test Acc: 0.9912

Epoch [10/40] Train Acc: 0.9824 Test Acc: 0.9912

Epoch [11/40] Train Acc: 0.9824 Test Acc: 0.9912

Epoch [12/40] Train Acc: 0.9868 Test Acc: 0.9912

Epoch [13/40] Train Acc: 0.9868 Test Acc: 0.9912

Epoch [14/40] Train Acc: 0.9868 Test Acc: 0.9912

Epoch [15/40] Train Acc: 0.9868 Test Acc: 0.9912

p_malignant = 0.2 Final Train Acc: 0.9934 Final Test Acc: 0.9912

 

Running experiment with p_malignant = 0.3

Epoch [1/40] Train Acc: 0.8264 Test Acc: 0.9561

Epoch [2/40] Train Acc: 0.9341 Test Acc: 0.9561

Epoch [3/40] Train Acc: 0.9516 Test Acc: 0.9737

Epoch [4/40] Train Acc: 0.9626 Test Acc: 0.9737

Epoch [5/40] Train Acc: 0.9670 Test Acc: 0.9737

Epoch [6/40] Train Acc: 0.9758 Test Acc: 0.9737

Epoch [7/40] Train Acc: 0.9824 Test Acc: 0.9737

Epoch [8/40] Train Acc: 0.9846 Test Acc: 0.9737

Epoch [9/40] Train Acc: 0.9868 Test Acc: 0.9825

Epoch [10/40] Train Acc: 0.9868 Test Acc: 0.9912

Epoch [11/40] Train Acc: 0.9868 Test Acc: 0.9912

Epoch [12/40] Train Acc: 0.9868 Test Acc: 0.9825

Epoch [13/40] Train Acc: 0.9868 Test Acc: 0.9825

Epoch [14/40] Train Acc: 0.9890 Test Acc: 0.9912

Epoch [15/40] Train Acc: 0.9890 Test Acc: 0.9825

p_malignant = 0.3 Final Train Acc: 0.9934 Final Test Acc: 0.9825

 

Running experiment with p_malignant = 0.4

Epoch [1/40] Train Acc: 0.7934 Test Acc: 0.9474

Epoch [2/40] Train Acc: 0.9407 Test Acc: 0.9561

Epoch [3/40] Train Acc: 0.9604 Test Acc: 0.9561

Epoch [4/40] Train Acc: 0.9626 Test Acc: 0.9737

Epoch [5/40] Train Acc: 0.9648 Test Acc: 0.9737

Epoch [6/40] Train Acc: 0.9670 Test Acc: 0.9737

Epoch [7/40] Train Acc: 0.9802 Test Acc: 0.9737

Epoch [8/40] Train Acc: 0.9824 Test Acc: 0.9737

Epoch [9/40] Train Acc: 0.9846 Test Acc: 0.9825

Epoch [10/40] Train Acc: 0.9846 Test Acc: 0.9825

Epoch [11/40] Train Acc: 0.9868 Test Acc: 0.9825

Epoch [12/40] Train Acc: 0.9846 Test Acc: 0.9825

Epoch [13/40] Train Acc: 0.9846 Test Acc: 0.9912

Epoch [14/40] Train Acc: 0.9846 Test Acc: 0.9912

Epoch [15/40] Train Acc: 0.9846 Test Acc: 0.9912

Epoch [16/40] Train Acc: 0.9868 Test Acc: 0.9825

Epoch [17/40] Train Acc: 0.9868 Test Acc: 0.9825

Epoch [18/40] Train Acc: 0.9846 Test Acc: 0.9825

Epoch [19/40] Train Acc: 0.9868 Test Acc: 0.9825

Epoch [20/40] Train Acc: 0.9868 Test Acc: 0.9825

Epoch [21/40] Train Acc: 0.9890 Test Acc: 0.9825

Epoch [22/40] Train Acc: 0.9890 Test Acc: 0.9825

Epoch [23/40] Train Acc: 0.9890 Test Acc: 0.9825

Epoch [24/40] Train Acc: 0.9912 Test Acc: 0.9825

Epoch [25/40] Train Acc: 0.9912 Test Acc: 0.9825

Epoch [26/40] Train Acc: 0.9912 Test Acc: 0.9825

Epoch [27/40] Train Acc: 0.9934 Test Acc: 0.9825

Epoch [28/40] Train Acc: 0.9934 Test Acc: 0.9825

Epoch [29/40] Train Acc: 0.9934 Test Acc: 0.9825

Epoch [30/40] Train Acc: 0.9934 Test Acc: 0.9825

Epoch [31/40] Train Acc: 0.9934 Test Acc: 0.9825

Epoch [32/40] Train Acc: 0.9934 Test Acc: 0.9825

Epoch [33/40] Train Acc: 0.9934 Test Acc: 0.9825

Epoch [34/40] Train Acc: 0.9934 Test Acc: 0.9825

Epoch [35/40] Train Acc: 0.9934 Test Acc: 0.9825

Epoch [36/40] Train Acc: 0.9934 Test Acc: 0.9825

Epoch [37/40] Train Acc: 0.9934 Test Acc: 0.9825

Epoch [38/40] Train Acc: 0.9934 Test Acc: 0.9825

Epoch [39/40] Train Acc: 0.9934 Test Acc: 0.9825

Epoch [40/40] Train Acc: 0.9934 Test Acc: 0.9825

p_malignant = 0.4 Final Train Acc: 0.9934 Final Test Acc: 0.9825

=======================================================================

------------------------------------------------------------------------------------------------------------------------
Experiment               Train Accuracy (AR)    Test Accuracy (AR)    Train Accuracy (Dropout)    Test Accuracy (Dropout)
Baseline (0.0)                 0.9799                 0.9883                  0.9750                      0.7500
Experiment 1 (0.1)             0.9824                 0.9942                  0.9625                      0.7625
Experiment 2 (0.2)             0.9799                 0.9942                  0.9400                      0.7550
Experiment 3 (0.3)             0.9849                 0.9883                  0.9100                      0.7674
Experiment 4 (0.4)             0.9824                 0.9883                  0.8750                      0.7450
Experiment 5 (0.49)            0.9799                 0.9766                  0.7950                      0.7300
------------------------------------------------------------------------------------------------------------------------

 

------------------------------------------------------------------------------------------------------------------------
Experiment               Train Accuracy (AR)    Test Accuracy (AR)    Train Accuracy (Dropout)    Test Accuracy (Dropout)
Baseline (0.0)                 0.9750                 0.9683                  0.9799                      0.9600
Experiment 1 (0.1)             0.9824                 0.9942                  0.9625                      0.8825
Experiment 2 (0.2)             0.9799                 0.9942                  0.9400                      0.8750
Experiment 3 (0.3)             0.9849                 0.9883                  0.9100                      0.8874
Experiment 4 (0.4)             0.9824                 0.9883                  0.8750                      0.8650
Experiment 5 (0.49)            0.9799                 0.9766                  0.7950                      0.8500
------------------------------------------------------------------------------------------------------------------------

 

=======================================================================

END OF THE “BREAST CANCER” EXPERIMENT

=======================================================================

 

=======================================================================

IV. CODE FOR THE PRELIMINARY EXPERIMENT ON “ADULT INCOME” DATASET (not included in the article)

=======================================================================

import torch

import torch.nn as nn

import torch.optim as optim

from torch.utils.data import DataLoader, TensorDataset

from sklearn.model_selection import train_test_split

from sklearn.preprocessing import StandardScaler, LabelEncoder

import pandas as pd

import numpy as np

import matplotlib.pyplot as plt

 

# Load and preprocess the Adult Income dataset

url = "https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data"

columns = [

    "age", "workclass", "fnlwgt", "education", "education-num", "marital-status", "occupation",

    "relationship", "race", "sex", "capital-gain", "capital-loss", "hours-per-week", "native-country", "income"

]

data = pd.read_csv(url, header=None, names=columns, na_values=" ?")

data = data.dropna()

 

# Encode categorical features and target

categorical_columns = data.select_dtypes(include=["object"]).columns[:-1]

label_encoders = {}

for col in categorical_columns:

    le = LabelEncoder()

    data[col] = le.fit_transform(data[col])

    label_encoders[col] = le

 

# Encode target variable

data["income"] = (data["income"] == " >50K").astype(int)

 

# Split features and target

X = data.drop("income", axis=1).values

y = data["income"].values

 

# Normalize features

X = StandardScaler().fit_transform(X)

 

# Train-test split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

 

# Convert to PyTorch tensors

X_train, X_test = map(torch.tensor, (X_train, X_test))

y_train, y_test = map(torch.tensor, (y_train, y_test))

X_train, X_test = X_train.float(), X_test.float()

y_train, y_test = y_train.long(), y_test.long()

 

# Create data loaders

batch_size = 64

train_loader = DataLoader(TensorDataset(X_train, y_train), batch_size=batch_size, shuffle=True)

test_loader = DataLoader(TensorDataset(X_test, y_test), batch_size=batch_size)

 

# Model with different regularization approaches

class RegularizedNN(nn.Module):

    def __init__(self, input_size=14, hidden_size=64, output_size=2, p_malignant=0.0, p_dropout=0.0):

        super(RegularizedNN, self).__init__()

        self.p_malignant = p_malignant

        self.p_dropout = p_dropout

 

        # Define layers

        self.fc1 = nn.Linear(input_size, hidden_size)

        self.relu = nn.ReLU()

        self.dropout = nn.Dropout(p_dropout)

        self.fc2 = nn.Linear(hidden_size, output_size)

 

        # Define malignant neurons mask

        if p_malignant > 0.0:

            self.malignant_mask = torch.zeros(hidden_size, dtype=torch.bool)

            num_malignant = int(hidden_size * p_malignant)

            self.malignant_mask[:num_malignant] = True

        else:

            self.malignant_mask = None

 

    def forward(self, x):

        x = self.fc1(x)

        x = self.relu(x)

        if self.malignant_mask is not None:

            x = x.clone()  # Avoid in-place operation

            scaling_factor = torch.randn_like(x[:, self.malignant_mask])  # Random noise

            x[:, self.malignant_mask] *= scaling_factor

        if self.p_dropout > 0.0:

            x = self.dropout(x)

        x = self.fc2(x)

        return x

 

# Train and evaluate function

def train_and_evaluate(p_malignant, p_dropout, num_epochs=50, lr=0.001):

    model = RegularizedNN(p_malignant=p_malignant, p_dropout=p_dropout).to(device)

    criterion = nn.CrossEntropyLoss()

    optimizer = optim.Adam(model.parameters(), lr=lr)

 

    train_acc_list = []

    test_acc_list = []

 

    for epoch in range(num_epochs):

        # Training phase

        model.train()

        correct, total = 0, 0

 

        for inputs, labels in train_loader:

            inputs, labels = inputs.to(device), labels.to(device)

 

            # Forward pass

            outputs = model(inputs)

            loss = criterion(outputs, labels)

 

            # Backward pass

            optimizer.zero_grad()

            loss.backward()

            optimizer.step()

 

            # Accuracy

            _, predicted = torch.max(outputs, 1)

            total += labels.size(0)

            correct += (predicted == labels).sum().item()

 

        train_acc = correct / total

        train_acc_list.append(train_acc)

 

        # Testing phase

        model.eval()

        correct, total = 0, 0

 

        with torch.no_grad():

            for inputs, labels in test_loader:

                inputs, labels = inputs.to(device), labels.to(device)

                outputs = model(inputs)

                _, predicted = torch.max(outputs, 1)

                total += labels.size(0)

                correct += (predicted == labels).sum().item()

 

        test_acc = correct / total

        test_acc_list.append(test_acc)

 

    return train_acc_list, test_acc_list

 

# Define experiment cases

cases = [

    {"title": "No regularization {0.0, 0.0}", "p_malignant": 0.0, "p_dropout": 0.0},

    {"title": "Dropout {0.0, 0.2}", "p_malignant": 0.0, "p_dropout": 0.2},

    {"title": "Dropout {0.0, 0.4}", "p_malignant": 0.0, "p_dropout": 0.4},

    {"title": "Adversarial {0.2, 0.0}", "p_malignant": 0.2, "p_dropout": 0.0},

    {"title": "Adversarial {0.4, 0.0}", "p_malignant": 0.4, "p_dropout": 0.0},

    {"title": "Hybrid {0.2, 0.2}", "p_malignant": 0.2, "p_dropout": 0.2},

    {"title": "Hybrid {0.2, 0.4}", "p_malignant": 0.2, "p_dropout": 0.4},

    {"title": "Hybrid {0.4, 0.2}", "p_malignant": 0.4, "p_dropout": 0.2},

    {"title": "Hybrid {0.4, 0.4}", "p_malignant": 0.4, "p_dropout": 0.4},

]

 

# Run experiments

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

num_epochs = 50

 

train_results = []

test_results = []

 

for i, case in enumerate(cases, start=1):

    print(f"Running Case {i}: {case['title']}")

    train_acc, test_acc = train_and_evaluate(case['p_malignant'], case['p_dropout'], num_epochs=num_epochs)

    train_results.append(train_acc)

    test_results.append(test_acc)

    print(f"Case {i}: Final Train Accuracy = {train_acc[-1]:.4f}, Final Test Accuracy = {test_acc[-1]:.4f}\n")

 

# Plot results

def plot_results(results, title, ylabel):

    plt.figure(figsize=(10, 6))

    for i, acc in enumerate(results):

        plt.plot(range(1, num_epochs + 1), acc, label=f"Case {i + 1}: {cases[i]['title']}")

    plt.title(title)

    plt.xlabel("Epoch")

    plt.ylabel(ylabel)

    plt.legend()

    plt.grid()

    plt.show()

 

plot_results(train_results, "Train Accuracy Across Cases", "Train Accuracy")

plot_results(test_results, "Test Accuracy Across Cases", "Test Accuracy")

 

 

=======================================================================

RESULTS OF THE PRELIMINARY EXPERIMENT ON “ADULT INCOME” DATASET (not included in the article)

=======================================================================

Running Case 1: No regularization {0.0, 0.0}

Case 1: Final Train Accuracy = 0.8557, Final Test Accuracy = 0.8470

 

Running Case 2: Dropout {0.0, 0.2}

Case 2: Final Train Accuracy = 0.8483, Final Test Accuracy = 0.8508

 

Running Case 3: Dropout {0.0, 0.4}

Case 3: Final Train Accuracy = 0.8444, Final Test Accuracy = 0.8533

 

Running Case 4: Adversarial {0.2, 0.0}

Case 4: Final Train Accuracy = 0.8547, Final Test Accuracy = 0.8507

 

Running Case 5: Adversarial {0.4, 0.0}

Case 5: Final Train Accuracy = 0.8507, Final Test Accuracy = 0.8538

 

Running Case 6: Hybrid {0.2, 0.2}

Case 6: Final Train Accuracy = 0.8481, Final Test Accuracy = 0.8516

 

Running Case 7: Hybrid {0.2, 0.4}

Case 7: Final Train Accuracy = 0.8438, Final Test Accuracy = 0.8492

 

Running Case 8: Hybrid {0.4, 0.2}

Case 8: Final Train Accuracy = 0.8456, Final Test Accuracy = 0.8525

 

Running Case 9: Hybrid {0.4, 0.4}

Case 9: Final Train Accuracy = 0.8427, Final Test Accuracy = 0.8502

 

=======================================================================

ANALYSIS OF THE RESULTS OF THE PRELIMINARY EXPERIMENT ON “ADULT INCOME” DATASET (not included in the article)

=======================================================================

Goal and structure of the experiment: The experiment aims to evaluate the performance of a neural network model using different regularization techniques, including adversarial regularization and dropout, on the Adult Income dataset. Specifically, we compare the following approaches:

·         No regularization.

·         Dropout with varying probabilities.

·         Adversarial Regularization with varying proportions of “malignant neurons”.

·         Hybrid combinations of dropout and adversarial regularization.

The primary metrics for evaluation are training accuracy and test accuracy, observed over 50 epochs.

Dataset description: The “Adult Income” dataset is a well-known dataset used for predicting whether an individual's income exceeds $50,000 per year. It contains 48,842 rows and 14 attributes, including both continuous (e.g., age, hours-per-week) and categorical features (e.g., workclass, education).

The dataset was preprocessed as follows:

·         Handling missing values: rows with missing data were dropped.

·         Encoding categorical features: Label encoding was applied to all categorical variables.

·         Target variable: The binary income variable was encoded as 1 for >$50K and 0 otherwise.

·         Normalization: All continuous variables were normalized using StandardScaler.

Neural network configuration:

Architecture: A simple feedforward neural network:

·         Input layer: 14 features.

·         Hidden layer: 64 neurons with ReLU activation.

·         Output layer: 2 neurons (binary classification).

Regularization:

·         Dropout probabilities: 0.2 and 0.4.

·         Adversarial regularization with 20% and 40% of “malignant neurons”.

Training parameters:

·         Optimizer: Adam with a learning rate of 0.001.

·         Loss function: Cross-entropy loss.

·         Batch Size: 64.

·         Epochs: 50.

Experiment results:

The performance of the network across different regularization strategies is summarized below:

------------------------------------------------------------------------------------------------------------------------
Case        Regularization (p_malignant, p_dropout)        Final Train Accuracy        Final Test Accuracy
Case 1      No Regularization (0.0, 0.0)                          0.8557                      0.8470
Case 2      Dropout (0.0, 0.2)                                    0.8483                      0.8508
Case 3      Dropout (0.0, 0.4)                                    0.8444                      0.8533
Case 4      Adversarial (0.2, 0.0)                                0.8547                      0.8507
Case 5      Adversarial (0.4, 0.0)                                0.8507                      0.8538
Case 6      Hybrid (0.2, 0.2)                                     0.8481                      0.8516
Case 7      Hybrid (0.2, 0.4)                                     0.8438                      0.8492
Case 8      Hybrid (0.4, 0.2)                                     0.8456                      0.8525
Case 9      Hybrid (0.4, 0.4)                                     0.8427                      0.8502
------------------------------------------------------------------------------------------------------------------------

Observations and interpretations:

·         No Regularization (serves as a baseline for comparison): Achieves the highest training accuracy (0.8557) but the lowest test accuracy (0.8470), suggesting mild overfitting.

·         Dropout Regularization: Adding dropout improves test accuracy compared to no regularization. Increasing the dropout probability from 0.2 to 0.4 slightly reduces training accuracy but enhances test accuracy, indicating better generalization.

·         Adversarial Regularization: Adversarial regularization with 20% or 40% "malignant neurons" yields competitive test accuracy. The performance indicates that adversarial regularization effectively regularizes the model without significant loss in training accuracy.

·         Hybrid Regularization: Combining dropout and adversarial regularization achieves balanced performance. Hybrid regularization with (p_malignant = 0.4, p_dropout = 0.2) achieves a good balance between train and test accuracy, indicating robustness.

Performance insights: Models using Dropout or Adversarial Regularization demonstrate improved test accuracy, confirming their ability to reduce overfitting. Adversarial Regularization alone (Case 5) performs slightly better than Dropout alone (Case 3) in terms of test accuracy. Hybrid approaches provide flexible control over regularization but may require fine-tuning for optimal results.

Conclusion: The results demonstrate that adversarial regularization is a viable alternative to dropout for improving model generalization on clean data. The hybrid regularization approach also shows promise, allowing flexibility in controlling different regularization strategies. Future work could extend this evaluation to noisy or adversarially perturbed datasets to assess robustness further.
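As a possible next step (not run in this preliminary experiment), robustness could be probed by evaluating a trained model on test inputs perturbed with Gaussian noise, mirroring the noise levels used in the “Wine” experiment. A minimal sketch, assuming the X_test, y_test tensors and device from the code above and a trained model instance (e.g., returned from a modified train_and_evaluate):

# Hypothetical robustness probe (not part of the experiment above): accuracy on
# test inputs perturbed with additive Gaussian noise of increasing scale.
import torch

def accuracy_under_noise(model, X, y, noise_level):
    model.eval()
    with torch.no_grad():
        X_noisy = X + noise_level * torch.randn_like(X)  # additive Gaussian noise on standardized features
        outputs = model(X_noisy.to(device))
        predicted = outputs.argmax(dim=1).cpu()
        return (predicted == y).float().mean().item()

for noise_level in [0.0, 0.5, 1.0, 1.5, 2.0]:
    acc = accuracy_under_noise(model, X_test, y_test, noise_level)
    print(f"noise = {noise_level}: test accuracy = {acc:.4f}")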

=======================================================================

END OF THE PRELIMINARY EXPERIMENT ON “ADULT INCOME” DATASET (not included in the article)

=======================================================================

 

======================================================================

SOLO ADVERSARIAL REGULARIZATION EXPERIMENTS

======================================================================

CODE FOR EXPERIMENT ON “ADULT INCOME”

=================================================================

import torch

import torch.nn as nn

import torch.optim as optim

from torch.utils.data import DataLoader, TensorDataset

from sklearn.model_selection import train_test_split

from sklearn.preprocessing import StandardScaler, LabelEncoder

import pandas as pd

import numpy as np

import matplotlib.pyplot as plt

 

# Load and preprocess the Adult Income dataset

url = "https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data"

columns = [

    "age", "workclass", "fnlwgt", "education", "education-num", "marital-status", "occupation",

    "relationship", "race", "sex", "capital-gain", "capital-loss", "hours-per-week", "native-country", "income"

]

data = pd.read_csv(url, header=None, names=columns, na_values=" ?")

data = data.dropna()

 

# Encode categorical features and target

categorical_columns = data.select_dtypes(include=["object"]).columns[:-1]

label_encoders = {}

for col in categorical_columns:

    le = LabelEncoder()

    data[col] = le.fit_transform(data[col])

    label_encoders[col] = le

 

# Encode target variable

data["income"] = (data["income"] == " >50K").astype(int)

 

# Split features and target

X = data.drop("income", axis=1).values

y = data["income"].values

 

# Normalize features

X = StandardScaler().fit_transform(X)

 

# Train-test split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

 

# Convert to PyTorch tensors

X_train, X_test = map(torch.tensor, (X_train, X_test))

y_train, y_test = map(torch.tensor, (y_train, y_test))

X_train, X_test = X_train.float(), X_test.float()

y_train, y_test = y_train.long(), y_test.long()

 

# Create data loaders

batch_size = 64

train_loader = DataLoader(TensorDataset(X_train, y_train), batch_size=batch_size, shuffle=True)

test_loader = DataLoader(TensorDataset(X_test, y_test), batch_size=batch_size)

 

# Model definition

class RegularizedNN(nn.Module):

    def __init__(self, input_size=14, hidden_size=64, output_size=2, p_malignant=0.0):

        super(RegularizedNN, self).__init__()

        self.fc1 = nn.Linear(input_size, hidden_size)

        self.relu = nn.ReLU()

        self.fc2 = nn.Linear(hidden_size, output_size)

        self.p_malignant = p_malignant

        self.scaling_factor = 1.0 - 2.0 * self.p_malignant # Initialize scaling factor for inference

 

    def forward(self, x):

        x = self.fc1(x)

        x = self.relu(x)

        x = self.fc2(x)

        return x

   

    def get_scaling_factor(self):

      return self.scaling_factor



# Train and evaluate function with adversarial regularization

def train_and_evaluate_with_ar(num_epochs=50, lr=0.001, p_malignant=0.0):

    model = RegularizedNN(p_malignant=p_malignant).to(device)

    criterion = nn.CrossEntropyLoss()

    optimizer = optim.Adam(model.parameters(), lr=lr)

   

    train_acc_list = []

    test_acc_list = []

 

    for epoch in range(num_epochs):

        # Training phase

        model.train()

        correct, total = 0, 0

 

        for inputs, labels in train_loader:

            inputs, labels = inputs.to(device), labels.to(device)

 

            # Forward pass

            outputs = model(inputs)

           

            # Scale the output during training

            outputs = outputs * (1 - 2 * p_malignant)

            loss = criterion(outputs, labels)

           

            # Weight updates based on the expected update rule

            optimizer.zero_grad()

            loss.backward()

            with torch.no_grad():

                for param in model.parameters():

                   param.grad = param.grad * (1 - 2 * p_malignant)

            optimizer.step()

 

            # Accuracy calculation

            _, predicted = torch.max(outputs, 1)

            total += labels.size(0)

            correct += (predicted == labels).sum().item()

 

        train_acc = correct / total

        train_acc_list.append(train_acc)

        print(f"Epoch {epoch+1}: Train Accuracy = {train_acc:.4f}")

       

 

        # Testing phase

        model.eval()

        correct, total = 0, 0

 

        with torch.no_grad():

            for inputs, labels in test_loader:

                inputs, labels = inputs.to(device), labels.to(device)

                outputs = model(inputs)

                # Scale the output during inference by the factor computed in the model

                outputs = outputs * model.get_scaling_factor()

                _, predicted = torch.max(outputs, 1)

                total += labels.size(0)

                correct += (predicted == labels).sum().item()

 

        test_acc = correct / total

        test_acc_list.append(test_acc)

        print(f"Epoch {epoch+1}: Test Accuracy = {test_acc:.4f}")

 

    return train_acc_list, test_acc_list

 

# Device configuration

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

num_epochs = 50

p_malignant = 0.3 # Example value for p_malignant

 

# Run experiments with adversarial regularization

train_acc, test_acc = train_and_evaluate_with_ar(num_epochs=num_epochs, p_malignant=p_malignant)

 

# Print final results

print(f"\nFinal Train Accuracy: {train_acc[-1]:.4f}")

print(f"Final Test Accuracy: {test_acc[-1]:.4f}")

 

# Plot results

plt.figure(figsize=(10, 6))

plt.plot(range(1, num_epochs + 1), train_acc, label="Train Accuracy")

plt.plot(range(1, num_epochs + 1), test_acc, label="Test Accuracy")

plt.title(f"Adversarial Regularization (p_malignant={p_malignant})")

plt.xlabel("Epoch")

plt.ylabel("Accuracy")

plt.legend()

plt.grid()

plt.show()

 

=================================================================

=================================================================

RESULTS (“ADULT INCOME” DATASET)

=================================================================

p_malignant = 0.0

Epoch 1: Train Accuracy = 0.7879

Epoch 1: Test Accuracy = 0.8329

Epoch 2: Train Accuracy = 0.8346

Epoch 2: Test Accuracy = 0.8468

Epoch 3: Train Accuracy = 0.8420

Epoch 3: Test Accuracy = 0.8483

Epoch 4: Train Accuracy = 0.8428

Epoch 4: Test Accuracy = 0.8458

Epoch 5: Train Accuracy = 0.8441

Epoch 5: Test Accuracy = 0.8478

Epoch 6: Train Accuracy = 0.8443

Epoch 6: Test Accuracy = 0.8463

Epoch 7: Train Accuracy = 0.8448

Epoch 7: Test Accuracy = 0.8449

Epoch 8: Train Accuracy = 0.8460

Epoch 8: Test Accuracy = 0.8455

Epoch 9: Train Accuracy = 0.8454

Epoch 9: Test Accuracy = 0.8470

Epoch 10: Train Accuracy = 0.8466

Epoch 10: Test Accuracy = 0.8498

Epoch 11: Train Accuracy = 0.8467

Epoch 11: Test Accuracy = 0.8463

Epoch 12: Train Accuracy = 0.8468

Epoch 12: Test Accuracy = 0.8450

Epoch 13: Train Accuracy = 0.8467

Epoch 13: Test Accuracy = 0.8468

Epoch 14: Train Accuracy = 0.8476

Epoch 14: Test Accuracy = 0.8437

Epoch 15: Train Accuracy = 0.8482

Epoch 15: Test Accuracy = 0.8478

Epoch 16: Train Accuracy = 0.8488

Epoch 16: Test Accuracy = 0.8468

Epoch 17: Train Accuracy = 0.8482

Epoch 17: Test Accuracy = 0.8458

Epoch 18: Train Accuracy = 0.8464

Epoch 18: Test Accuracy = 0.8457

Epoch 19: Train Accuracy = 0.8488

Epoch 19: Test Accuracy = 0.8457

Epoch 20: Train Accuracy = 0.8492

Epoch 20: Test Accuracy = 0.8460

Epoch 21: Train Accuracy = 0.8483

Epoch 21: Test Accuracy = 0.8449

Epoch 22: Train Accuracy = 0.8512

Epoch 22: Test Accuracy = 0.8478

Epoch 23: Train Accuracy = 0.8488

Epoch 23: Test Accuracy = 0.8477

Epoch 24: Train Accuracy = 0.8506

Epoch 24: Test Accuracy = 0.8475

Epoch 25: Train Accuracy = 0.8502

Epoch 25: Test Accuracy = 0.8458

Epoch 26: Train Accuracy = 0.8509

Epoch 26: Test Accuracy = 0.8465

Epoch 27: Train Accuracy = 0.8509

Epoch 27: Test Accuracy = 0.8483

Epoch 28: Train Accuracy = 0.8510

Epoch 28: Test Accuracy = 0.8462

Epoch 29: Train Accuracy = 0.8525

Epoch 29: Test Accuracy = 0.8475

Epoch 30: Train Accuracy = 0.8498

Epoch 30: Test Accuracy = 0.8467

Epoch 31: Train Accuracy = 0.8520

Epoch 31: Test Accuracy = 0.8462

Epoch 32: Train Accuracy = 0.8516

Epoch 32: Test Accuracy = 0.8482

Epoch 33: Train Accuracy = 0.8518

Epoch 33: Test Accuracy = 0.8450

Epoch 34: Train Accuracy = 0.8523

Epoch 34: Test Accuracy = 0.8420

Epoch 35: Train Accuracy = 0.8520

Epoch 35: Test Accuracy = 0.8480

Epoch 36: Train Accuracy = 0.8512

Epoch 36: Test Accuracy = 0.8488

Epoch 37: Train Accuracy = 0.8515

Epoch 37: Test Accuracy = 0.8444

Epoch 38: Train Accuracy = 0.8532

Epoch 38: Test Accuracy = 0.8475

Epoch 39: Train Accuracy = 0.8525

Epoch 39: Test Accuracy = 0.8482

Epoch 40: Train Accuracy = 0.8529

Epoch 40: Test Accuracy = 0.8475

Epoch 41: Train Accuracy = 0.8522

Epoch 41: Test Accuracy = 0.8467

Epoch 42: Train Accuracy = 0.8530

Epoch 42: Test Accuracy = 0.8468

Epoch 43: Train Accuracy = 0.8540

Epoch 43: Test Accuracy = 0.8472

Epoch 44: Train Accuracy = 0.8525

Epoch 44: Test Accuracy = 0.8485

Epoch 45: Train Accuracy = 0.8534

Epoch 45: Test Accuracy = 0.8468

Epoch 46: Train Accuracy = 0.8536

Epoch 46: Test Accuracy = 0.8492

Epoch 47: Train Accuracy = 0.8539

Epoch 47: Test Accuracy = 0.8492

Epoch 48: Train Accuracy = 0.8529

Epoch 48: Test Accuracy = 0.8510

Epoch 49: Train Accuracy = 0.8525

Epoch 49: Test Accuracy = 0.8483

Epoch 50: Train Accuracy = 0.8537

Epoch 50: Test Accuracy = 0.8483

 

Final Train Accuracy:        0.8537

Final Test Accuracy:           0.8483

=================================================================

=================================================================

p_malignant = 0.1

Epoch 1: Train Accuracy = 0.7835

Epoch 1: Test Accuracy = 0.8341

Epoch 2: Train Accuracy = 0.8346

Epoch 2: Test Accuracy = 0.8452

Epoch 3: Train Accuracy = 0.8392

Epoch 3: Test Accuracy = 0.8475

Epoch 4: Train Accuracy = 0.8426

Epoch 4: Test Accuracy = 0.8500

Epoch 5: Train Accuracy = 0.8435

Epoch 5: Test Accuracy = 0.8468

Epoch 6: Train Accuracy = 0.8450

Epoch 6: Test Accuracy = 0.8470

Epoch 7: Train Accuracy = 0.8452

Epoch 7: Test Accuracy = 0.8480

Epoch 8: Train Accuracy = 0.8463

Epoch 8: Test Accuracy = 0.8454

Epoch 9: Train Accuracy = 0.8454

Epoch 9: Test Accuracy = 0.8467

Epoch 10: Train Accuracy = 0.8472

Epoch 10: Test Accuracy = 0.8473

Epoch 11: Train Accuracy = 0.8472

Epoch 11: Test Accuracy = 0.8478

Epoch 12: Train Accuracy = 0.8479

Epoch 12: Test Accuracy = 0.8463

Epoch 13: Train Accuracy = 0.8482

Epoch 13: Test Accuracy = 0.8460

Epoch 14: Train Accuracy = 0.8475

Epoch 14: Test Accuracy = 0.8485

Epoch 15: Train Accuracy = 0.8479

Epoch 15: Test Accuracy = 0.8482

Epoch 16: Train Accuracy = 0.8494

Epoch 16: Test Accuracy = 0.8500

Epoch 17: Train Accuracy = 0.8481

Epoch 17: Test Accuracy = 0.8463

Epoch 18: Train Accuracy = 0.8479

Epoch 18: Test Accuracy = 0.8473

Epoch 19: Train Accuracy = 0.8496

Epoch 19: Test Accuracy = 0.8497

Epoch 20: Train Accuracy = 0.8498

Epoch 20: Test Accuracy = 0.8488

Epoch 21: Train Accuracy = 0.8496

Epoch 21: Test Accuracy = 0.8482

Epoch 22: Train Accuracy = 0.8494

Epoch 22: Test Accuracy = 0.8495

Epoch 23: Train Accuracy = 0.8504

Epoch 23: Test Accuracy = 0.8478

Epoch 24: Train Accuracy = 0.8487

Epoch 24: Test Accuracy = 0.8488

Epoch 25: Train Accuracy = 0.8497

Epoch 25: Test Accuracy = 0.8487

Epoch 26: Train Accuracy = 0.8506

Epoch 26: Test Accuracy = 0.8473

Epoch 27: Train Accuracy = 0.8494

Epoch 27: Test Accuracy = 0.8482

Epoch 28: Train Accuracy = 0.8502

Epoch 28: Test Accuracy = 0.8510

Epoch 29: Train Accuracy = 0.8510

Epoch 29: Test Accuracy = 0.8500

Epoch 30: Train Accuracy = 0.8511

Epoch 30: Test Accuracy = 0.8467

Epoch 31: Train Accuracy = 0.8509

Epoch 31: Test Accuracy = 0.8487

Epoch 32: Train Accuracy = 0.8520

Epoch 32: Test Accuracy = 0.8465

Epoch 33: Train Accuracy = 0.8509

Epoch 33: Test Accuracy = 0.8460

Epoch 34: Train Accuracy = 0.8514

Epoch 34: Test Accuracy = 0.8488

Epoch 35: Train Accuracy = 0.8513

Epoch 35: Test Accuracy = 0.8482

Epoch 36: Train Accuracy = 0.8528

Epoch 36: Test Accuracy = 0.8478

Epoch 37: Train Accuracy = 0.8521

Epoch 37: Test Accuracy = 0.8477

Epoch 38: Train Accuracy = 0.8528

Epoch 38: Test Accuracy = 0.8477

Epoch 39: Train Accuracy = 0.8513

Epoch 39: Test Accuracy = 0.8460

Epoch 40: Train Accuracy = 0.8520

Epoch 40: Test Accuracy = 0.8493

Epoch 41: Train Accuracy = 0.8531

Epoch 41: Test Accuracy = 0.8497

Epoch 42: Train Accuracy = 0.8530

Epoch 42: Test Accuracy = 0.8473

Epoch 43: Train Accuracy = 0.8523

Epoch 43: Test Accuracy = 0.8518

Epoch 44: Train Accuracy = 0.8541

Epoch 44: Test Accuracy = 0.8478

Epoch 45: Train Accuracy = 0.8519

Epoch 45: Test Accuracy = 0.8457

Epoch 46: Train Accuracy = 0.8548

Epoch 46: Test Accuracy = 0.8483

Epoch 47: Train Accuracy = 0.8552

Epoch 47: Test Accuracy = 0.8465

Epoch 48: Train Accuracy = 0.8530

Epoch 48: Test Accuracy = 0.8477

Epoch 49: Train Accuracy = 0.8528

Epoch 49: Test Accuracy = 0.8465

Epoch 50: Train Accuracy = 0.8537

Epoch 50: Test Accuracy = 0.8493

 

Final Train Accuracy:            0.8537

Final Test Accuracy:              0.8493

 

=================================================================

p_malignant = 0.2

Epoch 1: Train Accuracy = 0.7962

Epoch 1: Test Accuracy = 0.8342

Epoch 2: Train Accuracy = 0.8374

Epoch 2: Test Accuracy = 0.8450

Epoch 3: Train Accuracy = 0.8423

Epoch 3: Test Accuracy = 0.8508

Epoch 4: Train Accuracy = 0.8436

Epoch 4: Test Accuracy = 0.8452

Epoch 5: Train Accuracy = 0.8442

Epoch 5: Test Accuracy = 0.8460

Epoch 6: Train Accuracy = 0.8450

Epoch 6: Test Accuracy = 0.8450

Epoch 7: Train Accuracy = 0.8442

Epoch 7: Test Accuracy = 0.8452

Epoch 8: Train Accuracy = 0.8457

Epoch 8: Test Accuracy = 0.8485

Epoch 9: Train Accuracy = 0.8465

Epoch 9: Test Accuracy = 0.8490

Epoch 10: Train Accuracy = 0.8471

Epoch 10: Test Accuracy = 0.8468

Epoch 11: Train Accuracy = 0.8460

Epoch 11: Test Accuracy = 0.8473

Epoch 12: Train Accuracy = 0.8457

Epoch 12: Test Accuracy = 0.8485

Epoch 13: Train Accuracy = 0.8483

Epoch 13: Test Accuracy = 0.8498

Epoch 14: Train Accuracy = 0.8472

Epoch 14: Test Accuracy = 0.8480

Epoch 15: Train Accuracy = 0.8464

Epoch 15: Test Accuracy = 0.8470

Epoch 16: Train Accuracy = 0.8475

Epoch 16: Test Accuracy = 0.8472

Epoch 17: Train Accuracy = 0.8472

Epoch 17: Test Accuracy = 0.8477

Epoch 18: Train Accuracy = 0.8477

Epoch 18: Test Accuracy = 0.8472

Epoch 19: Train Accuracy = 0.8483

Epoch 19: Test Accuracy = 0.8487

Epoch 20: Train Accuracy = 0.8483

Epoch 20: Test Accuracy = 0.8480

Epoch 21: Train Accuracy = 0.8489

Epoch 21: Test Accuracy = 0.8483

Epoch 22: Train Accuracy = 0.8490

Epoch 22: Test Accuracy = 0.8483

Epoch 23: Train Accuracy = 0.8494

Epoch 23: Test Accuracy = 0.8490

Epoch 24: Train Accuracy = 0.8509

Epoch 24: Test Accuracy = 0.8495

Epoch 25: Train Accuracy = 0.8489

Epoch 25: Test Accuracy = 0.8478

Epoch 26: Train Accuracy = 0.8495

Epoch 26: Test Accuracy = 0.8490

Epoch 27: Train Accuracy = 0.8499

Epoch 27: Test Accuracy = 0.8487

Epoch 28: Train Accuracy = 0.8500

Epoch 28: Test Accuracy = 0.8475

Epoch 29: Train Accuracy = 0.8510

Epoch 29: Test Accuracy = 0.8498

Epoch 30: Train Accuracy = 0.8515

Epoch 30: Test Accuracy = 0.8503

Epoch 31: Train Accuracy = 0.8505

Epoch 31: Test Accuracy = 0.8467

Epoch 32: Train Accuracy = 0.8514

Epoch 32: Test Accuracy = 0.8502

Epoch 33: Train Accuracy = 0.8516

Epoch 33: Test Accuracy = 0.8478

Epoch 34: Train Accuracy = 0.8504

Epoch 34: Test Accuracy = 0.8472

Epoch 35: Train Accuracy = 0.8518

Epoch 35: Test Accuracy = 0.8470

Epoch 36: Train Accuracy = 0.8512

Epoch 36: Test Accuracy = 0.8497

Epoch 37: Train Accuracy = 0.8513

Epoch 37: Test Accuracy = 0.8503

Epoch 38: Train Accuracy = 0.8516

Epoch 38: Test Accuracy = 0.8490

Epoch 39: Train Accuracy = 0.8520

Epoch 39: Test Accuracy = 0.8478

Epoch 40: Train Accuracy = 0.8530

Epoch 40: Test Accuracy = 0.8502

Epoch 41: Train Accuracy = 0.8514

Epoch 41: Test Accuracy = 0.8495

Epoch 42: Train Accuracy = 0.8519

Epoch 42: Test Accuracy = 0.8490

Epoch 43: Train Accuracy = 0.8524

Epoch 43: Test Accuracy = 0.8502

Epoch 44: Train Accuracy = 0.8527

Epoch 44: Test Accuracy = 0.8505

Epoch 45: Train Accuracy = 0.8518

Epoch 45: Test Accuracy = 0.8500

Epoch 46: Train Accuracy = 0.8529

Epoch 46: Test Accuracy = 0.8465

Epoch 47: Train Accuracy = 0.8536

Epoch 47: Test Accuracy = 0.8462

Epoch 48: Train Accuracy = 0.8535

Epoch 48: Test Accuracy = 0.8488

Epoch 49: Train Accuracy = 0.8527

Epoch 49: Test Accuracy = 0.8490

Epoch 50: Train Accuracy = 0.8523

Epoch 50: Test Accuracy = 0.8507

 

Final Train Accuracy:            0.8523

Final Test Accuracy:              0.8507

 

=================================================================

=================================================================

p_malignant = 0.3

Epoch 1: Train Accuracy = 0.7962

Epoch 1: Test Accuracy = 0.8298

Epoch 2: Train Accuracy = 0.8329

Epoch 2: Test Accuracy = 0.8460

Epoch 3: Train Accuracy = 0.8411

Epoch 3: Test Accuracy = 0.8485

Epoch 4: Train Accuracy = 0.8436

Epoch 4: Test Accuracy = 0.8468

Epoch 5: Train Accuracy = 0.8432

Epoch 5: Test Accuracy = 0.8485

Epoch 6: Train Accuracy = 0.8457

Epoch 6: Test Accuracy = 0.8510

Epoch 7: Train Accuracy = 0.8456

Epoch 7: Test Accuracy = 0.8497

Epoch 8: Train Accuracy = 0.8452

Epoch 8: Test Accuracy = 0.8482

Epoch 9: Train Accuracy = 0.8456

Epoch 9: Test Accuracy = 0.8470

Epoch 10: Train Accuracy = 0.8457

Epoch 10: Test Accuracy = 0.8492

Epoch 11: Train Accuracy = 0.8453

Epoch 11: Test Accuracy = 0.8498

Epoch 12: Train Accuracy = 0.8464

Epoch 12: Test Accuracy = 0.8497

Epoch 13: Train Accuracy = 0.8472

Epoch 13: Test Accuracy = 0.8482

Epoch 14: Train Accuracy = 0.8463

Epoch 14: Test Accuracy = 0.8495

Epoch 15: Train Accuracy = 0.8478

Epoch 15: Test Accuracy = 0.8502

Epoch 16: Train Accuracy = 0.8464

Epoch 16: Test Accuracy = 0.8497

Epoch 17: Train Accuracy = 0.8472

Epoch 17: Test Accuracy = 0.8492

Epoch 18: Train Accuracy = 0.8465

Epoch 18: Test Accuracy = 0.8470

Epoch 19: Train Accuracy = 0.8482

Epoch 19: Test Accuracy = 0.8487

Epoch 20: Train Accuracy = 0.8477

Epoch 20: Test Accuracy = 0.8483

Epoch 21: Train Accuracy = 0.8472

Epoch 21: Test Accuracy = 0.8508

Epoch 22: Train Accuracy = 0.8476

Epoch 22: Test Accuracy = 0.8475

Epoch 23: Train Accuracy = 0.8485

Epoch 23: Test Accuracy = 0.8490

Epoch 24: Train Accuracy = 0.8486

Epoch 24: Test Accuracy = 0.8487

Epoch 25: Train Accuracy = 0.8496

Epoch 25: Test Accuracy = 0.8493

Epoch 26: Train Accuracy = 0.8484

Epoch 26: Test Accuracy = 0.8508

Epoch 27: Train Accuracy = 0.8484

Epoch 27: Test Accuracy = 0.8473

Epoch 28: Train Accuracy = 0.8484

Epoch 28: Test Accuracy = 0.8493

Epoch 29: Train Accuracy = 0.8484

Epoch 29: Test Accuracy = 0.8503

Epoch 30: Train Accuracy = 0.8488

Epoch 30: Test Accuracy = 0.8483

Epoch 31: Train Accuracy = 0.8496

Epoch 31: Test Accuracy = 0.8492

Epoch 32: Train Accuracy = 0.8503

Epoch 32: Test Accuracy = 0.8512

Epoch 33: Train Accuracy = 0.8501

Epoch 33: Test Accuracy = 0.8493

Epoch 34: Train Accuracy = 0.8499

Epoch 34: Test Accuracy = 0.8513

Epoch 35: Train Accuracy = 0.8500

Epoch 35: Test Accuracy = 0.8492

Epoch 36: Train Accuracy = 0.8491

Epoch 36: Test Accuracy = 0.8495

Epoch 37: Train Accuracy = 0.8512

Epoch 37: Test Accuracy = 0.8485

Epoch 38: Train Accuracy = 0.8510

Epoch 38: Test Accuracy = 0.8510

Epoch 39: Train Accuracy = 0.8500

Epoch 39: Test Accuracy = 0.8507

Epoch 40: Train Accuracy = 0.8513

Epoch 40: Test Accuracy = 0.8500

Epoch 41: Train Accuracy = 0.8520

Epoch 41: Test Accuracy = 0.8492

Epoch 42: Train Accuracy = 0.8527

Epoch 42: Test Accuracy = 0.8487

Epoch 43: Train Accuracy = 0.8511

Epoch 43: Test Accuracy = 0.8498

Epoch 44: Train Accuracy = 0.8522

Epoch 44: Test Accuracy = 0.8490

Epoch 45: Train Accuracy = 0.8509

Epoch 45: Test Accuracy = 0.8487

Epoch 46: Train Accuracy = 0.8514

Epoch 46: Test Accuracy = 0.8485

Epoch 47: Train Accuracy = 0.8515

Epoch 47: Test Accuracy = 0.8495

Epoch 48: Train Accuracy = 0.8512

Epoch 48: Test Accuracy = 0.8502

Epoch 49: Train Accuracy = 0.8521

Epoch 49: Test Accuracy = 0.8488

Epoch 50: Train Accuracy = 0.8517

Epoch 50: Test Accuracy = 0.8505

 

Final Train Accuracy:            0.8517

Final Test Accuracy:              0.8505

=================================================================

=================================================================

p_malignant = 0.4

Epoch 1: Train Accuracy = 0.7886

Epoch 1: Test Accuracy = 0.8255

Epoch 2: Train Accuracy = 0.8301

Epoch 2: Test Accuracy = 0.8372

Epoch 3: Train Accuracy = 0.8359

Epoch 3: Test Accuracy = 0.8465

Epoch 4: Train Accuracy = 0.8403

Epoch 4: Test Accuracy = 0.8450

Epoch 5: Train Accuracy = 0.8428

Epoch 5: Test Accuracy = 0.8468

Epoch 6: Train Accuracy = 0.8429

Epoch 6: Test Accuracy = 0.8487

Epoch 7: Train Accuracy = 0.8441

Epoch 7: Test Accuracy = 0.8488

Epoch 8: Train Accuracy = 0.8438

Epoch 8: Test Accuracy = 0.8488

Epoch 9: Train Accuracy = 0.8445

Epoch 9: Test Accuracy = 0.8483

Epoch 10: Train Accuracy = 0.8456

Epoch 10: Test Accuracy = 0.8483

Epoch 11: Train Accuracy = 0.8459

Epoch 11: Test Accuracy = 0.8477

Epoch 12: Train Accuracy = 0.8448

Epoch 12: Test Accuracy = 0.8493

Epoch 13: Train Accuracy = 0.8457

Epoch 13: Test Accuracy = 0.8490

Epoch 14: Train Accuracy = 0.8460

Epoch 14: Test Accuracy = 0.8475

Epoch 15: Train Accuracy = 0.8460

Epoch 15: Test Accuracy = 0.8480

Epoch 16: Train Accuracy = 0.8456

Epoch 16: Test Accuracy = 0.8480

Epoch 17: Train Accuracy = 0.8459

Epoch 17: Test Accuracy = 0.8505

Epoch 18: Train Accuracy = 0.8458

Epoch 18: Test Accuracy = 0.8470

Epoch 19: Train Accuracy = 0.8459

Epoch 19: Test Accuracy = 0.8488

Epoch 20: Train Accuracy = 0.8464

Epoch 20: Test Accuracy = 0.8488

Epoch 21: Train Accuracy = 0.8467

Epoch 21: Test Accuracy = 0.8503

Epoch 22: Train Accuracy = 0.8464

Epoch 22: Test Accuracy = 0.8487

Epoch 23: Train Accuracy = 0.8463

Epoch 23: Test Accuracy = 0.8485

Epoch 24: Train Accuracy = 0.8471

Epoch 24: Test Accuracy = 0.8502

Epoch 25: Train Accuracy = 0.8466

Epoch 25: Test Accuracy = 0.8478

Epoch 26: Train Accuracy = 0.8473

Epoch 26: Test Accuracy = 0.8490

Epoch 27: Train Accuracy = 0.8476

Epoch 27: Test Accuracy = 0.8490

Epoch 28: Train Accuracy = 0.8473

Epoch 28: Test Accuracy = 0.8498

Epoch 29: Train Accuracy = 0.8476

Epoch 29: Test Accuracy = 0.8498

Epoch 30: Train Accuracy = 0.8469

Epoch 30: Test Accuracy = 0.8488

Epoch 31: Train Accuracy = 0.8482

Epoch 31: Test Accuracy = 0.8495

Epoch 32: Train Accuracy = 0.8476

Epoch 32: Test Accuracy = 0.8473

Epoch 33: Train Accuracy = 0.8477

Epoch 33: Test Accuracy = 0.8483

Epoch 34: Train Accuracy = 0.8480

Epoch 34: Test Accuracy = 0.8512

Epoch 35: Train Accuracy = 0.8476

Epoch 35: Test Accuracy = 0.8495

Epoch 36: Train Accuracy = 0.8483

Epoch 36: Test Accuracy = 0.8503

Epoch 37: Train Accuracy = 0.8484

Epoch 37: Test Accuracy = 0.8492

Epoch 38: Train Accuracy = 0.8485

Epoch 38: Test Accuracy = 0.8495

Epoch 39: Train Accuracy = 0.8484

Epoch 39: Test Accuracy = 0.8500

Epoch 40: Train Accuracy = 0.8477

Epoch 40: Test Accuracy = 0.8485

Epoch 41: Train Accuracy = 0.8474

Epoch 41: Test Accuracy = 0.8513

Epoch 42: Train Accuracy = 0.8478

Epoch 42: Test Accuracy = 0.8477

Epoch 43: Train Accuracy = 0.8484

Epoch 43: Test Accuracy = 0.8507

Epoch 44: Train Accuracy = 0.8477

Epoch 44: Test Accuracy = 0.8497

Epoch 45: Train Accuracy = 0.8480

Epoch 45: Test Accuracy = 0.8492

Epoch 46: Train Accuracy = 0.8478

Epoch 46: Test Accuracy = 0.8492

Epoch 47: Train Accuracy = 0.8486

Epoch 47: Test Accuracy = 0.8521

Epoch 48: Train Accuracy = 0.8482

Epoch 48: Test Accuracy = 0.8500

Epoch 49: Train Accuracy = 0.8482

Epoch 49: Test Accuracy = 0.8518

Epoch 50: Train Accuracy = 0.8484

Epoch 50: Test Accuracy = 0.8495

 

Final Train Accuracy:            0.8484

Final Test Accuracy:              0.8495

=================================================================

=================================================================

p_malignant = 0.45

Epoch 1: Train Accuracy = 0.7782

Epoch 1: Test Accuracy = 0.8238

Epoch 2: Train Accuracy = 0.8254

Epoch 2: Test Accuracy = 0.8339

Epoch 3: Train Accuracy = 0.8323

Epoch 3: Test Accuracy = 0.8415

Epoch 4: Train Accuracy = 0.8377

Epoch 4: Test Accuracy = 0.8442

Epoch 5: Train Accuracy = 0.8398

Epoch 5: Test Accuracy = 0.8467

Epoch 6: Train Accuracy = 0.8419

Epoch 6: Test Accuracy = 0.8470

Epoch 7: Train Accuracy = 0.8425

Epoch 7: Test Accuracy = 0.8480

Epoch 8: Train Accuracy = 0.8421

Epoch 8: Test Accuracy = 0.8483

Epoch 9: Train Accuracy = 0.8438

Epoch 9: Test Accuracy = 0.8477

Epoch 10: Train Accuracy = 0.8445

Epoch 10: Test Accuracy = 0.8477

Epoch 11: Train Accuracy = 0.8446

Epoch 11: Test Accuracy = 0.8475

Epoch 12: Train Accuracy = 0.8445

Epoch 12: Test Accuracy = 0.8475

Epoch 13: Train Accuracy = 0.8455

Epoch 13: Test Accuracy = 0.8473

Epoch 14: Train Accuracy = 0.8446

Epoch 14: Test Accuracy = 0.8462

Epoch 15: Train Accuracy = 0.8457

Epoch 15: Test Accuracy = 0.8487

Epoch 16: Train Accuracy = 0.8446

Epoch 16: Test Accuracy = 0.8483

Epoch 17: Train Accuracy = 0.8457

Epoch 17: Test Accuracy = 0.8478

Epoch 18: Train Accuracy = 0.8456

Epoch 18: Test Accuracy = 0.8458

Epoch 19: Train Accuracy = 0.8460

Epoch 19: Test Accuracy = 0.8490

Epoch 20: Train Accuracy = 0.8463

Epoch 20: Test Accuracy = 0.8492

Epoch 21: Train Accuracy = 0.8470

Epoch 21: Test Accuracy = 0.8502

Epoch 22: Train Accuracy = 0.8458

Epoch 22: Test Accuracy = 0.8493

Epoch 23: Train Accuracy = 0.8458

Epoch 23: Test Accuracy = 0.8503

Epoch 24: Train Accuracy = 0.8459

Epoch 24: Test Accuracy = 0.8488

Epoch 25: Train Accuracy = 0.8467

Epoch 25: Test Accuracy = 0.8487

Epoch 26: Train Accuracy = 0.8466

Epoch 26: Test Accuracy = 0.8478

Epoch 27: Train Accuracy = 0.8464

Epoch 27: Test Accuracy = 0.8490

Epoch 28: Train Accuracy = 0.8468

Epoch 28: Test Accuracy = 0.8493

Epoch 29: Train Accuracy = 0.8470

Epoch 29: Test Accuracy = 0.8498

Epoch 30: Train Accuracy = 0.8462

Epoch 30: Test Accuracy = 0.8498

Epoch 31: Train Accuracy = 0.8471

Epoch 31: Test Accuracy = 0.8485

Epoch 32: Train Accuracy = 0.8475

Epoch 32: Test Accuracy = 0.8495

Epoch 33: Train Accuracy = 0.8470

Epoch 33: Test Accuracy = 0.8483

Epoch 34: Train Accuracy = 0.8469

Epoch 34: Test Accuracy = 0.8497

Epoch 35: Train Accuracy = 0.8473

Epoch 35: Test Accuracy = 0.8500

Epoch 36: Train Accuracy = 0.8476

Epoch 36: Test Accuracy = 0.8488

Epoch 37: Train Accuracy = 0.8475

Epoch 37: Test Accuracy = 0.8497

Epoch 38: Train Accuracy = 0.8465

Epoch 38: Test Accuracy = 0.8498

Epoch 39: Train Accuracy = 0.8474

Epoch 39: Test Accuracy = 0.8498

Epoch 40: Train Accuracy = 0.8477

Epoch 40: Test Accuracy = 0.8505

Epoch 41: Train Accuracy = 0.8474

Epoch 41: Test Accuracy = 0.8483

Epoch 42: Train Accuracy = 0.8487

Epoch 42: Test Accuracy = 0.8483

Epoch 43: Train Accuracy = 0.8478

Epoch 43: Test Accuracy = 0.8495

Epoch 44: Train Accuracy = 0.8469

Epoch 44: Test Accuracy = 0.8500

Epoch 45: Train Accuracy = 0.8479

Epoch 45: Test Accuracy = 0.8497

Epoch 46: Train Accuracy = 0.8474

Epoch 46: Test Accuracy = 0.8497

Epoch 47: Train Accuracy = 0.8480

Epoch 47: Test Accuracy = 0.8478

Epoch 48: Train Accuracy = 0.8486

Epoch 48: Test Accuracy = 0.8480

Epoch 49: Train Accuracy = 0.8481

Epoch 49: Test Accuracy = 0.8495

Epoch 50: Train Accuracy = 0.8489

Epoch 50: Test Accuracy = 0.8480

 

Final Train Accuracy:            0.8489

Final Test Accuracy:              0.8480

 

=================================================================

=================================================================

p_malignant = 0.499

Epoch 1: Train Accuracy = 0.7611

Epoch 1: Test Accuracy = 0.7974

Epoch 2: Train Accuracy = 0.8091

Epoch 2: Test Accuracy = 0.8152

Epoch 3: Train Accuracy = 0.8126

Epoch 3: Test Accuracy = 0.8160

Epoch 4: Train Accuracy = 0.8109

Epoch 4: Test Accuracy = 0.8157

Epoch 5: Train Accuracy = 0.8127

Epoch 5: Test Accuracy = 0.8167

Epoch 6: Train Accuracy = 0.8151

Epoch 6: Test Accuracy = 0.8180

Epoch 7: Train Accuracy = 0.8169

Epoch 7: Test Accuracy = 0.8205

Epoch 8: Train Accuracy = 0.8197

Epoch 8: Test Accuracy = 0.8223

Epoch 9: Train Accuracy = 0.8213

Epoch 9: Test Accuracy = 0.8256

Epoch 10: Train Accuracy = 0.8225

Epoch 10: Test Accuracy = 0.8274

Epoch 11: Train Accuracy = 0.8237

Epoch 11: Test Accuracy = 0.8293

Epoch 12: Train Accuracy = 0.8249

Epoch 12: Test Accuracy = 0.8288

Epoch 13: Train Accuracy = 0.8255

Epoch 13: Test Accuracy = 0.8303

Epoch 14: Train Accuracy = 0.8273

Epoch 14: Test Accuracy = 0.8304

Epoch 15: Train Accuracy = 0.8280

Epoch 15: Test Accuracy = 0.8319

Epoch 16: Train Accuracy = 0.8294

Epoch 16: Test Accuracy = 0.8334

Epoch 17: Train Accuracy = 0.8297

Epoch 17: Test Accuracy = 0.8342

Epoch 18: Train Accuracy = 0.8303

Epoch 18: Test Accuracy = 0.8346

Epoch 19: Train Accuracy = 0.8312

Epoch 19: Test Accuracy = 0.8349

Epoch 20: Train Accuracy = 0.8318

Epoch 20: Test Accuracy = 0.8357

Epoch 21: Train Accuracy = 0.8320

Epoch 21: Test Accuracy = 0.8356

Epoch 22: Train Accuracy = 0.8327

Epoch 22: Test Accuracy = 0.8351

Epoch 23: Train Accuracy = 0.8325

Epoch 23: Test Accuracy = 0.8357

Epoch 24: Train Accuracy = 0.8325

Epoch 24: Test Accuracy = 0.8362

Epoch 25: Train Accuracy = 0.8327

Epoch 25: Test Accuracy = 0.8366

Epoch 26: Train Accuracy = 0.8330

Epoch 26: Test Accuracy = 0.8372

Epoch 27: Train Accuracy = 0.8329

Epoch 27: Test Accuracy = 0.8376

Epoch 28: Train Accuracy = 0.8333

Epoch 28: Test Accuracy = 0.8382

Epoch 29: Train Accuracy = 0.8337

Epoch 29: Test Accuracy = 0.8377

Epoch 30: Train Accuracy = 0.8336

Epoch 30: Test Accuracy = 0.8394

Epoch 31: Train Accuracy = 0.8344

Epoch 31: Test Accuracy = 0.8395

Epoch 32: Train Accuracy = 0.8343

Epoch 32: Test Accuracy = 0.8402

Epoch 33: Train Accuracy = 0.8344

Epoch 33: Test Accuracy = 0.8402

Epoch 34: Train Accuracy = 0.8350

Epoch 34: Test Accuracy = 0.8409

Epoch 35: Train Accuracy = 0.8347

Epoch 35: Test Accuracy = 0.8407

Epoch 36: Train Accuracy = 0.8347

Epoch 36: Test Accuracy = 0.8407

Epoch 37: Train Accuracy = 0.8348

Epoch 37: Test Accuracy = 0.8407

Epoch 38: Train Accuracy = 0.8346

Epoch 38: Test Accuracy = 0.8405

Epoch 39: Train Accuracy = 0.8353

Epoch 39: Test Accuracy = 0.8404

Epoch 40: Train Accuracy = 0.8349

Epoch 40: Test Accuracy = 0.8399

Epoch 41: Train Accuracy = 0.8350

Epoch 41: Test Accuracy = 0.8394

Epoch 42: Train Accuracy = 0.8351

Epoch 42: Test Accuracy = 0.8395

Epoch 43: Train Accuracy = 0.8353

Epoch 43: Test Accuracy = 0.8399

Epoch 44: Train Accuracy = 0.8357

Epoch 44: Test Accuracy = 0.8399

Epoch 45: Train Accuracy = 0.8354

Epoch 45: Test Accuracy = 0.8399

Epoch 46: Train Accuracy = 0.8358

Epoch 46: Test Accuracy = 0.8399

Epoch 47: Train Accuracy = 0.8358

Epoch 47: Test Accuracy = 0.8402

Epoch 48: Train Accuracy = 0.8362

Epoch 48: Test Accuracy = 0.8400

Epoch 49: Train Accuracy = 0.8358

Epoch 49: Test Accuracy = 0.8402

Epoch 50: Train Accuracy = 0.8362

Epoch 50: Test Accuracy = 0.8397

 

Final Train Accuracy:            0.8362

Final Test Accuracy:              0.8397
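
Note: under the expected-update rule used in the “Breast Cancer” script below, p_malignant = 0.499 gives a scaling factor of 1 - 2 * 0.499 = 0.002, which nearly cancels the scaled outputs and gradients; this may explain the slower convergence and slightly lower final accuracy here compared with p_malignant = 0.45.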

 

=================================================================

=================================================================

p_malignant = 0.5

Epoch 1: Train Accuracy = 0.7522

Epoch 1: Test Accuracy = 0.7464

Epoch 2: Train Accuracy = 0.7522

Epoch 2: Test Accuracy = 0.7464

Epoch 3: Train Accuracy = 0.7522

Epoch 3: Test Accuracy = 0.7464

Epoch 4: Train Accuracy = 0.7522

Epoch 4: Test Accuracy = 0.7464

Epoch 5: Train Accuracy = 0.7522

Epoch 5: Test Accuracy = 0.7464

Epoch 6: Train Accuracy = 0.7522

Epoch 6: Test Accuracy = 0.7464

Epoch 7: Train Accuracy = 0.7522

Epoch 7: Test Accuracy = 0.7464

Epoch 8: Train Accuracy = 0.7522

Epoch 8: Test Accuracy = 0.7464

Epoch 9: Train Accuracy = 0.7522

Epoch 9: Test Accuracy = 0.7464

Epoch 10: Train Accuracy = 0.7522

Epoch 10: Test Accuracy = 0.7464

Epoch 11: Train Accuracy = 0.7522

Epoch 11: Test Accuracy = 0.7464

Epoch 12: Train Accuracy = 0.7522

Epoch 12: Test Accuracy = 0.7464

Epoch 13: Train Accuracy = 0.7522

Epoch 13: Test Accuracy = 0.7464

Epoch 14: Train Accuracy = 0.7522

Epoch 14: Test Accuracy = 0.7464

Epoch 15: Train Accuracy = 0.7522

Epoch 15: Test Accuracy = 0.7464

Epoch 16: Train Accuracy = 0.7522

Epoch 16: Test Accuracy = 0.7464

Epoch 17: Train Accuracy = 0.7522

Epoch 17: Test Accuracy = 0.7464

Epoch 18: Train Accuracy = 0.7522

Epoch 18: Test Accuracy = 0.7464

Epoch 19: Train Accuracy = 0.7522

Epoch 19: Test Accuracy = 0.7464

Epoch 20: Train Accuracy = 0.7522

Epoch 20: Test Accuracy = 0.7464

Epoch 21: Train Accuracy = 0.7522

Epoch 21: Test Accuracy = 0.7464

Epoch 22: Train Accuracy = 0.7522

Epoch 22: Test Accuracy = 0.7464

Epoch 23: Train Accuracy = 0.7522

Epoch 23: Test Accuracy = 0.7464

Epoch 24: Train Accuracy = 0.7522

Epoch 24: Test Accuracy = 0.7464

Epoch 25: Train Accuracy = 0.7522

Epoch 25: Test Accuracy = 0.7464

Epoch 26: Train Accuracy = 0.7522

Epoch 26: Test Accuracy = 0.7464

Epoch 27: Train Accuracy = 0.7522

Epoch 27: Test Accuracy = 0.7464

Epoch 28: Train Accuracy = 0.7522

Epoch 28: Test Accuracy = 0.7464

Epoch 29: Train Accuracy = 0.7522

Epoch 29: Test Accuracy = 0.7464

Epoch 30: Train Accuracy = 0.7522

Epoch 30: Test Accuracy = 0.7464

Epoch 31: Train Accuracy = 0.7522

Epoch 31: Test Accuracy = 0.7464

Epoch 32: Train Accuracy = 0.7522

Epoch 32: Test Accuracy = 0.7464

Epoch 33: Train Accuracy = 0.7522

Epoch 33: Test Accuracy = 0.7464

Epoch 34: Train Accuracy = 0.7522

Epoch 34: Test Accuracy = 0.7464

Epoch 35: Train Accuracy = 0.7522

Epoch 35: Test Accuracy = 0.7464

Epoch 36: Train Accuracy = 0.7522

Epoch 36: Test Accuracy = 0.7464

Epoch 37: Train Accuracy = 0.7522

Epoch 37: Test Accuracy = 0.7464

Epoch 38: Train Accuracy = 0.7522

Epoch 38: Test Accuracy = 0.7464

Epoch 39: Train Accuracy = 0.7522

Epoch 39: Test Accuracy = 0.7464

Epoch 40: Train Accuracy = 0.7522

Epoch 40: Test Accuracy = 0.7464

Epoch 41: Train Accuracy = 0.7522

Epoch 41: Test Accuracy = 0.7464

Epoch 42: Train Accuracy = 0.7522

Epoch 42: Test Accuracy = 0.7464

Epoch 43: Train Accuracy = 0.7522

Epoch 43: Test Accuracy = 0.7464

Epoch 44: Train Accuracy = 0.7522

Epoch 44: Test Accuracy = 0.7464

Epoch 45: Train Accuracy = 0.7522

Epoch 45: Test Accuracy = 0.7464

Epoch 46: Train Accuracy = 0.7522

Epoch 46: Test Accuracy = 0.7464

Epoch 47: Train Accuracy = 0.7522

Epoch 47: Test Accuracy = 0.7464

Epoch 48: Train Accuracy = 0.7522

Epoch 48: Test Accuracy = 0.7464

Epoch 49: Train Accuracy = 0.7522

Epoch 49: Test Accuracy = 0.7464

Epoch 50: Train Accuracy = 0.7522

Epoch 50: Test Accuracy = 0.7464

 

Final Train Accuracy:            0.7522

Final Test Accuracy:              0.7464
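
Note: at p_malignant = 0.5 the accuracies do not change over all 50 epochs. Under the expected-update rule used in the “Breast Cancer” script below, the scaling factor 1 - 2 * p_malignant is exactly zero at this value, so the scaled outputs and the rescaled gradients vanish and the weights cannot be updated. A minimal sketch of this degenerate case (illustrative only, not part of any experiment script):

import torch
import torch.nn.functional as F

p_malignant = 0.5
scaling = 1 - 2 * p_malignant                      # exactly 0.0 at p_malignant = 0.5

logits = torch.randn(4, 10, requires_grad=True)    # dummy batch: 4 samples, 10 classes
loss = F.cross_entropy(logits * scaling, torch.tensor([0, 1, 2, 3]))
loss.backward()

print(scaling)                      # 0.0
print(logits.grad.abs().max())      # tensor(0.) -- no gradient signal survives the scaling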

 

=================================================================

 

=================================================================

CODE FOR EXPERIMENT ON “BREAST CANCER”

=================================================================

 

import torch

import torch.nn as nn

import torch.optim as optim

from torch.utils.data import DataLoader, TensorDataset

from sklearn.model_selection import train_test_split

from sklearn.preprocessing import StandardScaler

from sklearn.datasets import load_breast_cancer

import numpy as np

import matplotlib.pyplot as plt

 

# Load and preprocess the Breast Cancer dataset

data = load_breast_cancer()

X = data.data

y = data.target

 

# Normalize features

X = StandardScaler().fit_transform(X)

 

# Train-test split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

 

# Convert to PyTorch tensors

X_train, X_test = map(torch.tensor, (X_train, X_test))

y_train, y_test = map(torch.tensor, (y_train, y_test))

X_train, X_test = X_train.float(), X_test.float()

y_train, y_test = y_train.long(), y_test.long()

 

# Create data loaders

batch_size = 64

train_loader = DataLoader(TensorDataset(X_train, y_train), batch_size=batch_size, shuffle=True)

test_loader = DataLoader(TensorDataset(X_test, y_test), batch_size=batch_size)

 

# Model definition

class RegularizedNN(nn.Module):

    def __init__(self, input_size=30, hidden_size=64, output_size=2, p_malignant=0.0):

        super(RegularizedNN, self).__init__()

        self.fc1 = nn.Linear(input_size, hidden_size)

        self.relu = nn.ReLU()

        self.fc2 = nn.Linear(hidden_size, output_size)

        self.p_malignant = p_malignant

        self.scaling_factor = 1.0 - 2.0 * self.p_malignant # Initialize scaling factor for inference

 

    def forward(self, x):

        x = self.fc1(x)

        x = self.relu(x)

        x = self.fc2(x)

        return x

   

    def get_scaling_factor(self):

      return self.scaling_factor



# Train and evaluate function with adversarial regularization

def train_and_evaluate_with_ar(num_epochs=50, lr=0.001, p_malignant=0.0):

    model = RegularizedNN(p_malignant=p_malignant).to(device)

    criterion = nn.CrossEntropyLoss()

    optimizer = optim.Adam(model.parameters(), lr=lr)

   

    train_acc_list = []

    test_acc_list = []

 

    for epoch in range(num_epochs):

        # Training phase

        model.train()

        correct, total = 0, 0

 

        for inputs, labels in train_loader:

            inputs, labels = inputs.to(device), labels.to(device)

 

            # Forward pass

            outputs = model(inputs)

           

            # Scale the output during training

            outputs = outputs * (1 - 2 * p_malignant)

            loss = criterion(outputs, labels)

           

            # Scale the gradients by (1 - 2 * p_malignant) so the weight update
            # follows the expected update rule of Adversarial Regularization

            optimizer.zero_grad()

            loss.backward()

            with torch.no_grad():

                for param in model.parameters():

                    param.grad = param.grad * (1 - 2 * p_malignant)

            optimizer.step()

 

            # Accuracy calculation

            _, predicted = torch.max(outputs, 1)

            total += labels.size(0)

            correct += (predicted == labels).sum().item()

 

        train_acc = correct / total

        train_acc_list.append(train_acc)

        print(f"Epoch {epoch+1}: Train Accuracy = {train_acc:.4f}")

       

 

        # Testing phase

        model.eval()

        correct, total = 0, 0

 

        with torch.no_grad():

            for inputs, labels in test_loader:

                inputs, labels = inputs.to(device), labels.to(device)

                outputs = model(inputs)

                # Scale the output during inference by the factor computed in the model

                outputs = outputs * model.get_scaling_factor()

                _, predicted = torch.max(outputs, 1)

                total += labels.size(0)

                correct += (predicted == labels).sum().item()

 

        test_acc = correct / total

        test_acc_list.append(test_acc)

        print(f"Epoch {epoch+1}: Test Accuracy = {test_acc:.4f}")

 

    return train_acc_list, test_acc_list

 

# Device configuration

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

num_epochs = 50

p_malignant = 0.3 # Example value for p_malignant

 

# Run experiments with adversarial regularization

train_acc, test_acc = train_and_evaluate_with_ar(num_epochs=num_epochs, p_malignant=p_malignant)

 

# Print final results

print(f"\nFinal Train Accuracy: {train_acc[-1]:.4f}")

print(f"Final Test Accuracy: {test_acc[-1]:.4f}")

 

# Plot results

plt.figure(figsize=(10, 6))

plt.plot(range(1, num_epochs + 1), train_acc, label="Train Accuracy")

plt.plot(range(1, num_epochs + 1), test_acc, label="Test Accuracy")

plt.title(f"Adversarial Regularization (p_malignant={p_malignant})")

plt.xlabel("Epoch")

plt.ylabel("Accuracy")

plt.legend()

plt.grid()

plt.show()
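
# A small driver loop (a sketch, not part of the original script) for sweeping
# several p_malignant values in one run, as in the result listings; the values
# below are illustrative.
for p in [0.0, 0.1, 0.3, 0.45, 0.499]:
    tr_acc, te_acc = train_and_evaluate_with_ar(num_epochs=num_epochs, p_malignant=p)
    print(f"p_malignant = {p}: final train = {tr_acc[-1]:.4f}, final test = {te_acc[-1]:.4f}")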

 

 

=================================================================

 

=================================================================

RESULTS (“BREAST CANCER” DATASET)

=================================================================

p_malignant = 0.3

Epoch 1: Train Accuracy = 0.7033

Epoch 1: Test Accuracy = 0.9298

Epoch 2: Train Accuracy = 0.9055

Epoch 2: Test Accuracy = 0.9561

Epoch 3: Train Accuracy = 0.9187

Epoch 3: Test Accuracy = 0.9561

Epoch 4: Train Accuracy = 0.9187

Epoch 4: Test Accuracy = 0.9649

Epoch 5: Train Accuracy = 0.9297

Epoch 5: Test Accuracy = 0.9649

Epoch 6: Train Accuracy = 0.9297

Epoch 6: Test Accuracy = 0.9649

Epoch 7: Train Accuracy = 0.9385

Epoch 7: Test Accuracy = 0.9649

Epoch 8: Train Accuracy = 0.9407

Epoch 8: Test Accuracy = 0.9649

Epoch 9: Train Accuracy = 0.9429

Epoch 9: Test Accuracy = 0.9649

Epoch 10: Train Accuracy = 0.9473

Epoch 10: Test Accuracy = 0.9649

Epoch 11: Train Accuracy = 0.9495

Epoch 11: Test Accuracy = 0.9649

Epoch 12: Train Accuracy = 0.9560

Epoch 12: Test Accuracy = 0.9649

Epoch 13: Train Accuracy = 0.9604

Epoch 13: Test Accuracy = 0.9649

Epoch 14: Train Accuracy = 0.9648

Epoch 14: Test Accuracy = 0.9737

Epoch 15: Train Accuracy = 0.9692

Epoch 15: Test Accuracy = 0.9737

Epoch 16: Train Accuracy = 0.9714

Epoch 16: Test Accuracy = 0.9737

Epoch 17: Train Accuracy = 0.9802

Epoch 17: Test Accuracy = 0.9825

Epoch 18: Train Accuracy = 0.9802

Epoch 18: Test Accuracy = 0.9825

Epoch 19: Train Accuracy = 0.9802

Epoch 19: Test Accuracy = 0.9825

Epoch 20: Train Accuracy = 0.9824

Epoch 20: Test Accuracy = 0.9825

Epoch 21: Train Accuracy = 0.9824

Epoch 21: Test Accuracy = 0.9912

Epoch 22: Train Accuracy = 0.9824

Epoch 22: Test Accuracy = 0.9825

Epoch 23: Train Accuracy = 0.9824

Epoch 23: Test Accuracy = 0.9825

Epoch 24: Train Accuracy = 0.9824

Epoch 24: Test Accuracy = 0.9825

Epoch 25: Train Accuracy = 0.9846

Epoch 25: Test Accuracy = 0.9825

Epoch 26: Train Accuracy = 0.9846

Epoch 26: Test Accuracy = 0.9825

Epoch 27: Train Accuracy = 0.9846

Epoch 27: Test Accuracy = 0.9825

Epoch 28: Train Accuracy = 0.9868

Epoch 28: Test Accuracy = 0.9912

Epoch 29: Train Accuracy = 0.9868

Epoch 29: Test Accuracy = 0.9912

Epoch 30: Train Accuracy = 0.9868

Epoch 30: Test Accuracy = 0.9912

Epoch 31: Train Accuracy = 0.9868

Epoch 31: Test Accuracy = 0.9912

Epoch 32: Train Accuracy = 0.9846

Epoch 32: Test Accuracy = 0.9912

Epoch 33: Train Accuracy = 0.9846

Epoch 33: Test Accuracy = 0.9912

Epoch 34: Train Accuracy = 0.9868

Epoch 34: Test Accuracy = 0.9912

Epoch 35: Train Accuracy = 0.9868

Epoch 35: Test Accuracy = 0.9912

Epoch 36: Train Accuracy = 0.9846

Epoch 36: Test Accuracy = 0.9912

Epoch 37: Train Accuracy = 0.9846

Epoch 37: Test Accuracy = 0.9912

Epoch 38: Train Accuracy = 0.9868

Epoch 38: Test Accuracy = 0.9912

Epoch 39: Train Accuracy = 0.9868

Epoch 39: Test Accuracy = 0.9912

Epoch 40: Train Accuracy = 0.9868

Epoch 40: Test Accuracy = 0.9912

Epoch 41: Train Accuracy = 0.9868

Epoch 41: Test Accuracy = 0.9912

Epoch 42: Train Accuracy = 0.9868

Epoch 42: Test Accuracy = 0.9912

Epoch 43: Train Accuracy = 0.9868

Epoch 43: Test Accuracy = 0.9912

Epoch 44: Train Accuracy = 0.9868

Epoch 44: Test Accuracy = 0.9912

Epoch 45: Train Accuracy = 0.9868

Epoch 45: Test Accuracy = 0.9912

Epoch 46: Train Accuracy = 0.9868

Epoch 46: Test Accuracy = 0.9912

Epoch 47: Train Accuracy = 0.9868

Epoch 47: Test Accuracy = 0.9912

Epoch 48: Train Accuracy = 0.9868

Epoch 48: Test Accuracy = 0.9912

Epoch 49: Train Accuracy = 0.9868

Epoch 49: Test Accuracy = 0.9912

Epoch 50: Train Accuracy = 0.9868

Epoch 50: Test Accuracy = 0.9825

 

Final Train Accuracy:            0.9868

Final Test Accuracy:              0.9825

 

=================================================================

=================================================================