2-Node Neural Network

A Simple Explanation of a 2-Node Neural Network Architecture

Imagine a tiny brain with only 2 brain cells (neurons). Each neuron receives all of the input information and produces one output, so the network produces 2 outputs in total.

Input Features
[ x₁ ]
[ x₂ ] ──► Neuron 1: weights W₁₁, W₂₁, …, Wₙ₁ and bias b₁ ──► [ Output₁ ]  ← Node 1 output (after activation)
[ x₃ ]
[ x₄ ] ──► Neuron 2: weights W₁₂, W₂₂, …, Wₙ₂ and bias b₂ ──► [ Output₂ ]  ← Node 2 output (after activation)
  ⋮
[ xₙ ]

Every input feeds into both neurons; each neuron has its own set of weights and its own bias.


Real Example We Used (4 inputs → 2 nodes)

Input (4 features)        2-Node Layer                                       Final Output

x₁ = 1.0                  weight matrix W (4×2), biases [b₁, b₂]
x₂ = 0.5  ──────────────► Linear: Z = X·W + b  ──► Activation: ReLU(Z) ──►  [ y₁ ]
x₃ = 0.3                                                                    [ y₂ ]
x₄ = 0.8
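In matrix form, one sample is a row vector X of shape (1, 4). Multiplying by the (4, 2) weight matrix W and adding the bias vector b gives Z = X·W + b of shape (1, 2): one pre-activation value per node.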

Step by Step: What Happens Inside

For every input sample (one row of data):

  1. Linear combination for Node 1:
     Z₁ = x₁·W₁₁ + x₂·W₂₁ + x₃·W₃₁ + x₄·W₄₁ + b₁
  2. Linear combination for Node 2:
     Z₂ = x₁·W₁₂ + x₂·W₂₂ + x₃·W₃₂ + x₄·W₄₂ + b₂
  3. Activation function (we used ReLU), as shown in the sketch right after this list:
     y₁ = ReLU(Z₁) = max(0, Z₁)
     y₂ = ReLU(Z₂) = max(0, Z₂)

Here Wᵢⱼ is the weight from input i to node j, matching the rows and columns of the 4×2 weight matrix in the implementation below.
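To make these three steps concrete, here is a minimal NumPy sketch (separate from the full implementation below) that pushes the example input from the diagram above through the same weights and biases the implementation sets by hand:

import numpy as np

# Example input from the diagram: x₁=1.0, x₂=0.5, x₃=0.3, x₄=0.8
x = np.array([1.0, 0.5, 0.3, 0.8])

# The same 4×2 weights and 2 biases that the implementation below sets manually
W = np.array([[ 0.5, -0.8],
              [ 0.3,  0.6],
              [-0.4,  0.9],
              [ 0.7, -0.2]])
b = np.array([0.1, -0.3])

Z = x @ W + b           # steps 1 and 2: both linear combinations at once
y = np.maximum(0.0, Z)  # step 3: ReLU

print(Z)  # ≈ [ 1.19 -0.69]
print(y)  # ≈ [ 1.19  0.  ]

Both values match the step-by-step formulas: Z₁ = 1.19 stays positive through ReLU, while Z₂ = −0.69 is clipped to 0.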

That’s it! The whole network has only these 10 learnable parameters:


Parameter    Shape    Count    Meaning
Weights W    (4, 2)   8        4 inputs × 2 nodes = 8 connections
Biases b     (2,)     2        1 bias per node
Total                 10       Exactly what model.summary() shows
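A quick sanity check of that count (a small sketch; model.count_params() refers to the Keras model built in the implementation below):

# Parameter count by hand
n_params = 4 * 2 + 2    # weights (4 inputs × 2 nodes) + one bias per node
print(n_params)         # 10

# Keras reports the same number once the model is built:
# print(model.count_params())  # → 10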

Implementation

# =============================================================================
# 2-NODE NEURAL NETWORK – FULLY EXPLAINED & TRANSPARENT IMPLEMENTATION
#  Architecture: 4 Inputs → 2 Nodes (Neurons) → 2 Outputs (ReLU activation)
# =============================================================================

import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Dense

# -------------------------- 1. Build the Model --------------------------
model = Sequential([
    Input(shape=(4,), name="Input_4_Features"),           # 4 input values
    Dense(units=2,
          activation='relu',
          use_bias=True,
          name="Two_Node_Layer")                    # Exactly 2 neurons
], name="My_2_Node_Network")

# Compile (only needed for training; prediction works without it)
model.compile(optimizer='adam', loss='mse')

# -------------------------- 2. Model Architecture Summary --------------------------
print("="*60)
print("           2-NODE NEURAL NETWORK ARCHITECTURE")
print("="*60)
model.summary()
print("="*60)

# -------------------------- 3. Manually Set Weights & Biases (for Full Control) --------------------------
# We'll use nice, easy-to-follow numbers

# Weight matrix: 4 rows (inputs) × 2 columns (nodes)
W = np.array([
    [ 0.5, -0.8],   # ← weights from input 1
    [ 0.3,  0.6],   # ← weights from input 2
    [-0.4,  0.9],   # ← weights from input 3
    [ 0.7, -0.2]    # ← weights from input 4
], dtype=np.float32)

# Bias for each node
b = np.array([0.1, -0.3], dtype=np.float32)

# Set them into the model
model.get_layer("Two_Node_Layer").set_weights([W, b])

print("MANUALLY SET WEIGHTS (4×2):")
print(W)
print("\nMANUALLY SET BIASES (2,):")
print(b)
print("-"*60)

# -------------------------- 4. Input Data (3 examples) --------------------------
X = np.array([
    [1.0, 0.0, 1.0, 0.0],   # Sample 1
    [0.0, 1.0, 0.0, 1.0],   # Sample 2
    [0.5, 0.5, 0.5, 0.5]    # Sample 3
], dtype=np.float32)

print("INPUT DATA (3 samples × 4 features):")
print(X)
print("-"*60)

# -------------------------- 5. Keras Prediction --------------------------
keras_output = model.predict(X, verbose=0)

print("KERAS OUTPUT (from 2 nodes, after ReLU):")
print(keras_output)

# -------------------------- 6. Manual Calculation (Step-by-Step) --------------------------
print("\n" + "="*60)
print("MANUAL CALCULATION – LET'S VERIFY STEP BY STEP")
print("="*60)

for i in range(len(X)):
    sample = X[i]
    print(f"\nSample {i+1}: {sample}")

    # Step 1: Linear part Z = X · W + b
    z1 = sample[0]*W[0,0] + sample[1]*W[1,0] + sample[2]*W[2,0] + sample[3]*W[3,0] + b[0]
    z2 = sample[0]*W[0,1] + sample[1]*W[1,1] + sample[2]*W[2,1] + sample[3]*W[3,1] + b[1]

    print(f"   Z₁ (Node 1 before activation) = {z1:.4f}")
    print(f"   Z₂ (Node 2 before activation) = {z2:.4f}")

    # Step 2: Apply ReLU
    y1 = max(0, z1)
    y2 = max(0, z2)

    print(f"   → Output Node 1 (ReLU) = {y1:.4f}")
    print(f"   → Output Node 2 (ReLU) = {y2:.4f}")

    # Step 3: Compare with the Keras prediction
    print(f"   Match Keras? → {np.allclose([y1, y2], keras_output[i])}")

print("\n" + "="*60)
print("FINAL RESULT: Both methods give SAME output!")
print("This is exactly how a 2-node neural network works.")
print("="*60)
