Posts

Showing posts from December, 2025

Implementing Single-Layer Perceptron for Binary Classification

Mathematical Formulation

For input vector x, the perceptron computes:

  Linear combination:  z = w·x + b
  Activation:          a = σ(z), where σ is the sigmoid function
  Prediction:          ŷ = 1 if a ≥ 0.5, else 0
  Loss:                binary cross-entropy

The network learns by minimizing the loss through gradient descent, updating the weights as:

  w = w - η * ∂L/∂w
  b = b - η * ∂L/∂b

This implementation provides a complete, working single-layer neural network for binary classification that can learn linear decision boundaries.

How to use

  # Create and train perceptron
  perceptron = SingleLayerPerceptron(input_size=2, learning_rate=0.1, epochs=500)
  perceptron.fit(X_train, y_train)

  # Make predictions
  predictions = perceptron.predict(X_test)
  probabilities = perceptron.predict_proba(X_test)

Implementation

  import numpy as np
  import matplotlib.pyplot as plt
  from sklearn.datasets import make_classification
  from sklearn.model_selection import train_test_split
  from sklearn.metrics import ac...
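Since the implementation excerpt is cut off above, here is a minimal sketch of a class matching the usage shown (the class name and method signatures come from the "How to use" snippet; the internals are assumptions consistent with the formulas, not the post's exact code):

```python
import numpy as np

class SingleLayerPerceptron:
    """Single-layer perceptron: sigmoid activation, binary cross-entropy
    loss, trained by batch gradient descent (a sketch, not the original)."""

    def __init__(self, input_size, learning_rate=0.1, epochs=500):
        self.w = np.zeros(input_size)
        self.b = 0.0
        self.lr = learning_rate
        self.epochs = epochs

    @staticmethod
    def _sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def fit(self, X, y):
        n = len(y)
        for _ in range(self.epochs):
            a = self._sigmoid(X @ self.w + self.b)  # forward pass: a = sigma(z)
            error = a - y                           # dL/dz for BCE + sigmoid
            self.w -= self.lr * (X.T @ error) / n   # w = w - eta * dL/dw
            self.b -= self.lr * error.mean()        # b = b - eta * dL/db
        return self

    def predict_proba(self, X):
        return self._sigmoid(X @ self.w + self.b)

    def predict(self, X):
        return (self.predict_proba(X) >= 0.5).astype(int)
```

For a linearly separable problem such as logical AND, this converges to a correct decision boundary after a few thousand epochs.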

2 Node Neural Network

Simple Explanation of a 2-Node Neural Network

Architecture: imagine a tiny brain with only 2 brain cells (neurons) that receive information and produce 2 outputs. Every input feature x₁, x₂, …, xₙ feeds both neurons. Neuron 1 has its own weights W₁₁, W₁₂, … and bias b₁; Neuron 2 has its own weights W₂₁, W₂₂, … and bias b₂. Each neuron's value after activation becomes one of the two outputs (Output₁ and Output₂).

Real Example We Used (4 inputs → 2 nodes): the input is a vector of 4 features (x₁ = 1.0, x₂ = 0.5, …), combined with a weight matrix W of shape 4×2 and a bias vector b. The layer computes the linear step Z = X·W + b, applies the ReLU activation ReLU(Z), and produces the final output [ y...
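The 4-inputs-to-2-nodes example above can be sketched in a few lines of NumPy (the specific weight, bias, and x₃/x₄ values here are made-up placeholders; only x₁ = 1.0, x₂ = 0.5 and the 4×2 shape come from the post):

```python
import numpy as np

def two_node_forward(X, W, b):
    """Forward pass of a 2-node layer: Z = X.W + b, then ReLU(Z)."""
    Z = X @ W + b            # linear combination, shape (1, 2)
    return np.maximum(Z, 0)  # ReLU clips negative activations to 0

# 4 input features -> 2 nodes, matching the example above
X = np.array([[1.0, 0.5, 0.2, 0.8]])      # x1 = 1.0, x2 = 0.5; x3, x4 assumed
W = np.array([[ 0.4, -0.3],
              [ 0.2,  0.5],
              [-0.1,  0.6],
              [ 0.3, -0.2]])              # weight matrix W (4x2), assumed values
b = np.array([0.1, 0.0])                  # biases b1, b2

y = two_node_forward(X, W, b)             # shape (1, 2): one value per node
```

With these numbers the first node stays positive while the second is driven negative and clipped to 0 by ReLU, which illustrates why a node can be entirely "off" for a given input.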