Posts

Implementing a Single-Layer Perceptron for Binary Classification

Mathematical Formulation

For an input vector x, the perceptron computes:

- Linear combination: z = w·x + b
- Activation: a = σ(z), where σ is the sigmoid function
- Prediction: ŷ = 1 if a ≥ 0.5, else 0
- Loss: binary cross-entropy

The network learns by minimizing the loss through gradient descent, updating the weights and bias as:

w = w − η · ∂L/∂w
b = b − η · ∂L/∂b

This implementation provides a complete, working single-layer neural network for binary classification that can learn linear decision boundaries.

How to use

```python
# Create and train the perceptron
perceptron = SingleLayerPerceptron(input_size=2, learning_rate=0.1, epochs=500)
perceptron.fit(X_train, y_train)

# Make predictions
predictions = perceptron.predict(X_test)
probabilities = perceptron.predict_proba(X_test)
```

Implementation

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import ac...
```
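
The excerpt cuts off mid-implementation, so here is a minimal sketch of what a SingleLayerPerceptron matching the formulation above could look like: sigmoid activation and batch gradient descent on the mean binary cross-entropy loss. The class name and constructor signature come from the usage snippet; everything else is an assumption.

```python
import numpy as np

class SingleLayerPerceptron:
    """Minimal sigmoid perceptron trained by batch gradient descent on BCE loss."""

    def __init__(self, input_size, learning_rate=0.1, epochs=500):
        self.w = np.zeros(input_size)   # weight vector w
        self.b = 0.0                    # bias b
        self.lr = learning_rate        # learning rate η
        self.epochs = epochs

    @staticmethod
    def _sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def predict_proba(self, X):
        # a = σ(w·x + b) for every row of X
        return self._sigmoid(X @ self.w + self.b)

    def predict(self, X):
        # ŷ = 1 if a ≥ 0.5, else 0
        return (self.predict_proba(X) >= 0.5).astype(int)

    def fit(self, X, y):
        n = X.shape[0]
        for _ in range(self.epochs):
            a = self.predict_proba(X)
            # For sigmoid + BCE, dL/dz = (a - y), so
            # dL/dw = Xᵀ(a - y) / n and dL/db = mean(a - y).
            error = a - y
            self.w -= self.lr * (X.T @ error) / n
            self.b -= self.lr * error.mean()
        return self
```

A design note on the gradient step: with sigmoid activation and binary cross-entropy, the derivative of the loss with respect to z simplifies to (a − y), which is why no explicit derivative of σ appears in fit.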

2 Node Neural Network

Simple Explanation of a 2-Node Neural Network Architecture

Imagine a tiny brain with only 2 brain cells (neurons) that receive information and produce 2 outputs. Every input feature x₁, x₂, …, xₙ is connected to both neurons. Neuron 1 combines the inputs using its weights W₁₁, W₁₂, … and its bias b₁; Neuron 2 does the same with its weights W₂₁, W₂₂, … and bias b₂. Each neuron then applies an activation function and emits one value, giving Output₁ (Node 1) and Output₂ (Node 2).

Real Example We Used (4 inputs → 2 nodes)

With 4 input features (x₁ = 1.0, x₂ = 0.5, …), a 4×2 weight matrix W, and a bias vector b, the layer first computes the linear step Z = X·W + b and then applies the activation ReLU(Z) to produce the final output [ y...
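
As a concrete sketch of the 4-input → 2-node forward pass just described: x₁ = 1.0 and x₂ = 0.5 are taken from the example above, while the remaining inputs, the weights, and the biases are made-up illustration values.

```python
import numpy as np

# Inputs: 1 sample with 4 features. x1 = 1.0 and x2 = 0.5 come from
# the post; the last two values are illustration values.
X = np.array([[1.0, 0.5, -0.3, 0.8]])

# Weight matrix W (4x2): column 1 feeds Neuron 1, column 2 feeds Neuron 2.
W = np.array([[ 0.2, -0.5],
              [ 0.7,  0.1],
              [-0.4,  0.9],
              [ 0.3, -0.2]])
b = np.array([0.1, -0.1])     # biases b1, b2

Z = X @ W + b                 # linear step: Z = X·W + b
output = np.maximum(0.0, Z)   # activation: ReLU(Z)

print(output)                 # the two node outputs (Output1, Output2)
```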

Python Packages for NN & DL Models

Up-to-date (2025) comparison of all major Python packages you can use to build and run Neural Networks & Deep Learning models — ranked by popularity and real-world usage.

| Rank | Package | Best For | Difficulty | Speed | Production Ready? | 2025 Status & Recommendation |
|------|---------|----------|------------|-------|-------------------|------------------------------|
| 1 | TensorFlow + Keras | Everything (beginners → Google-scale production) | Easy → Medium | Very Fast (XLA, GPU/TPU) | Yes (Google, Uber, Airbnb) | #1 Choice in 2025 – Most jobs, best ecosystem, Keras = easiest API |
| 2 | PyTorch | Research, flexibility, dynamic graphs | Medium | Very Fast (especially with torch.compile) | Yes (Meta, Tesla, OpenAI) | #2 – Dominant in research & startups |
| 3 | JAX + Flax / Equinox | Cutting-edge research, super fast on TPUs | Hard | Fastest on accelerators | Growing (Go... | |
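
To make the "Keras = easiest API" entry concrete, here is a minimal sketch of a small binary classifier in Keras; the layer sizes, optimizer, and input width are arbitrary illustration choices, not from the table.

```python
import tensorflow as tf

# A binary classifier in a few lines: one hidden ReLU layer,
# one sigmoid output. All sizes are illustration values.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

# model.fit(X_train, y_train, epochs=10)  # train once data is available
```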

Activation Functions

What is an activation function?

An activation function is a small mathematical function applied to the output of each neuron (node) in a neural network. It decides: "Given the weighted sum of inputs this neuron received, what should it finally output?"

- Without an activation function → the neuron's output is just a linear combination (weighted sum + bias).
- With an activation function → the neuron's output is something more intelligent (usually non-linear).

Why Do We Need Activation Functions? (The Real Reason)

There are two big reasons.

Reason 1: To Introduce Non-Linearity (The Most Important Reason)

Real-world data is not linear. Examples:

- Is this a cat or a dog in the photo? → The decision boundary is curved/complex.
- Will the stock price go up or down? → Highly non-linear.
- Will a customer click the ad? → Non-linear patterns.

Fact: If you use only linear functions (or no activation), then even a 1000-layer neural network collapses into one single linear equation. → Your deep network bec...
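
A minimal numerical check of the collapse fact above, using NumPy: two stacked linear layers with no activation reduce exactly to one linear layer, while inserting a ReLU between them breaks the equivalence. All weights here are random illustration values.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))                 # 5 samples, 3 features

# Two stacked linear layers (no activation), illustration weights.
W1, b1 = rng.normal(size=(3, 4)), rng.normal(size=4)
W2, b2 = rng.normal(size=(4, 2)), rng.normal(size=2)

deep = (x @ W1 + b1) @ W2 + b2              # "two-layer" network

# The same mapping as ONE linear layer: W = W1·W2, b = b1·W2 + b2.
W, b = W1 @ W2, b1 @ W2 + b2
shallow = x @ W + b

print(np.allclose(deep, shallow))           # True: the depth collapsed

# Insert a ReLU between the layers and the collapse no longer holds.
nonlinear = np.maximum(0.0, x @ W1 + b1) @ W2 + b2
print(np.allclose(nonlinear, shallow))      # False
```

The algebra behind the first check: (xW₁ + b₁)W₂ + b₂ = x(W₁W₂) + (b₁W₂ + b₂), which is again of the form xW + b, no matter how many linear layers you stack.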