
Candidate Elimination
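The Candidate-Elimination algorithm learns a concept from labelled training examples by maintaining the version space between two boundaries: S, the most specific hypotheses consistent with the data seen so far, and G, the most general ones. Positive examples generalize S and prune G; negative examples specialize G (and prune S). The code below is a simplified sketch that keeps a single hypothesis in S and uses that hypothesis to guide the specialization of G.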

def candidate_elimination(attributes, target):
    """
    Implements the Candidate-Elimination algorithm for concept learning.
    Parameters:
    - attributes: List of examples (list of lists).
    - target: List of target values (list of strings, e.g., 'Yes' or 'No').
    Returns:
    - S: Most specific boundary.
    - G: Most general boundary.
    """
    # Step 1: Initialize S (most specific) and G (most general)
    num_attributes = len(attributes[0])
    S = ["Φ"] * num_attributes
    G = [["?"] * num_attributes]
    # Step 2: Process each training example
    for i, example in enumerate(attributes):
        if target[i] == "Yes":  # Positive example
            # Remove inconsistent hypotheses from G
            G = [g for g in G if is_consistent(g, example)]
            # Update S: Generalize it to include the current example
            for j in range(num_attributes):
                if S[j] == "Φ":
                    S[j] = example[j]  # Initialize
                elif S[j] != example[j]:
                    S[j] = "?"  # Generalize
        elif target[i] == "No":  # Negative example
            # If S covers the negative example, no single conjunctive
            # hypothesis fits the data; reset S to the empty hypothesis.
            if is_consistent(S, example):
                S = ["Φ"] * num_attributes
            # Update G: specialize each hypothesis just enough to exclude
            # the negative example, guided by the current S boundary.
            G = specialize_hypotheses(G, example, S)
    return S, G

def is_consistent(hypothesis, example):
    """
    Checks if a hypothesis is consistent with an example.
    Parameters:
    - hypothesis: A hypothesis (list of strings).
    - example: An example (list of strings).
    Returns:
    - True if consistent, False otherwise.
    """
    for h, e in zip(hypothesis, example):
        if h != "?" and h != e:
            return False
    return True

def specialize_hypotheses(general_hypotheses, example, S):
    """
    Specializes the general hypotheses to exclude the given negative example.
    Parameters:
    - general_hypotheses: List of general hypotheses.
    - example: The negative example to exclude.
    - S: The current specific boundary, used to pick admissible values.
    Returns:
    - A new list of specialized hypotheses.
    """
    specialized_hypotheses = []
    for g in general_hypotheses:
        # A hypothesis that already excludes the negative example is kept as is.
        if not is_consistent(g, example):
            if g not in specialized_hypotheses:
                specialized_hypotheses.append(g)
            continue
        for i in range(len(g)):
            # Only a "?" can be specialized, and only at positions where the
            # specific boundary disagrees with the negative example.
            if g[i] == "?" and S[i] != "?" and S[i] != example[i]:
                new_hypothesis = g[:]
                new_hypothesis[i] = S[i]
                if new_hypothesis not in specialized_hypotheses:
                    specialized_hypotheses.append(new_hypothesis)
    return specialized_hypotheses

# Example Dataset
attributes = [
    ["Sunny", "Warm", "Normal", "Strong", "Warm", "Same"],
    ["Sunny", "Warm", "High", "Strong", "Warm", "Same"],
    ["Rainy", "Cold", "High", "Strong", "Warm", "Change"],
    ["Sunny", "Warm", "High", "Weak", "Warm", "Same"]
]
target = ["Yes", "Yes", "No", "Yes"]
# Run Candidate-Elimination Algorithm
specific_boundary, general_boundary = candidate_elimination(attributes, target)
# Display Results
print("Most Specific Boundary (S):", specific_boundary)
print("Most General Boundary (G):", general_boundary)
