5th Semester CSE Notes and Topics

Theory of Computation (Automata Theory)

Unit 1.0: Formal Language and Regular Expressions

Languages and Regular Expressions:

A language is a set of strings formed from an alphabet. A regular expression defines a language using symbols and operators like concatenation, union (|), and Kleene star (*).

Finite Automata:

  1. Deterministic Finite Automaton (DFA): A finite state machine where for each state and input, there is exactly one transition.

  2. Non-deterministic Finite Automaton (NFA): A finite state machine where a state may have zero, one, or several transitions on the same input symbol (and possibly ε-transitions).

Conversions:

  1. Regular Expression to NFA: Example: Regular expression a(b|c)* is converted to NFA using Thompson’s construction.

  2. NFA to DFA: Example: Convert the NFA with states {q0, q1} and transitions {(q0, a) -> q1, (q1, b) -> q1} into DFA using the subset construction method.
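
To make the subset construction concrete, here is a minimal Python sketch (ε-transitions omitted for simplicity). The NFA used is a hypothetical one, slightly richer than the example above, accepting strings over {a, b} that end in "ab", so that DFA states genuinely become sets of NFA states.

from collections import deque

# Hypothetical NFA accepting strings over {a, b} that end in "ab".
# delta maps (state, symbol) -> set of possible next states (nondeterministic).
nfa_delta = {
    ("q0", "a"): {"q0", "q1"},
    ("q0", "b"): {"q0"},
    ("q1", "b"): {"q2"},
}
nfa_start, nfa_accept, alphabet = "q0", {"q2"}, {"a", "b"}

def subset_construction(delta, start, accept, alphabet):
    """Build DFA transitions whose states are frozensets of NFA states."""
    dfa_start = frozenset({start})
    dfa_delta, dfa_accept = {}, set()
    queue, seen = deque([dfa_start]), {dfa_start}
    while queue:
        S = queue.popleft()
        if S & accept:                      # a DFA state accepts if it contains an NFA accept state
            dfa_accept.add(S)
        for sym in alphabet:
            # The DFA successor is the union of NFA successors of every state in S.
            T = frozenset(q for s in S for q in delta.get((s, sym), set()))
            dfa_delta[(S, sym)] = T
            if T not in seen:
                seen.add(T)
                queue.append(T)
    return dfa_delta, dfa_start, dfa_accept

dfa_delta, dfa_start, dfa_accept = subset_construction(nfa_delta, nfa_start, nfa_accept, alphabet)
for (S, sym), T in dfa_delta.items():
    print(sorted(S), "--", sym, "->", sorted(T))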

Unit 2.0: Applications of Finite Automata & Parsing

Lexical Analysis & Lex Tools:

  • Lexical analysis breaks input into tokens.

  • Lex is a tool that generates lexical analyzers.
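
Lex itself reads a specification of regular expressions and generates a scanner in C; the snippet below is only a hand-written Python sketch of the same idea, with a made-up token set (NUMBER, IDENT, OP) for illustration.

import re

# Hypothetical token specification: (token name, regular expression), tried in order.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=;]"),
    ("SKIP",   r"\s+"),                 # whitespace is matched but not reported
]
MASTER_RE = re.compile("|".join(f"(?P<{name}>{pattern})" for name, pattern in TOKEN_SPEC))

def tokenize(source):
    """Yield (token_name, lexeme) pairs by scanning the source left to right."""
    for match in MASTER_RE.finditer(source):
        if match.lastgroup != "SKIP":
            yield match.lastgroup, match.group()

print(list(tokenize("a = b + c * 42;")))
# [('IDENT', 'a'), ('OP', '='), ('IDENT', 'b'), ('OP', '+'),
#  ('IDENT', 'c'), ('OP', '*'), ('NUMBER', '42'), ('OP', ';')]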

Context-Free Grammars (CFGs) & Parsing:

  • CFG: Defined as G = (V, Σ, P, S), where V is variables, Σ is terminals, P is production rules, and S is the start symbol.

  • Derivations & Parse Trees: A derivation shows how a string is generated from the start symbol; a parse tree records the structure of that derivation.

  • LL(k) Grammars & LL(1) Parsing: Predictive top-down parsing; an LL(k) parser uses k tokens of lookahead, so an LL(1) parser uses exactly one.

Example: CFG for balanced parentheses:

S → (S)S | ε
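
As a sketch of how LL(1) predictive parsing works, the recursive-descent parser below implements exactly this grammar: a single token of lookahead decides between S → (S)S (on '(') and S → ε (on anything else). Error handling is deliberately minimal.

def parse_balanced(s):
    """Return True iff s is a balanced-parentheses string per S -> (S)S | epsilon."""
    pos = 0
    def S():
        nonlocal pos
        # LL(1) decision: one lookahead token selects the production.
        if pos < len(s) and s[pos] == "(":
            pos += 1                       # consume '('
            S()                            # inner S
            if pos >= len(s) or s[pos] != ")":
                raise SyntaxError(f"expected ')' at position {pos}")
            pos += 1                       # consume ')'
            S()                            # trailing S
        # otherwise apply S -> epsilon and consume nothing
    try:
        S()
    except SyntaxError:
        return False
    return pos == len(s)                   # reject leftover input such as ")("

print(parse_balanced("(()())"))            # True
print(parse_balanced("(()"))               # False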

Unit 3.0: Bottom-Up Parsing & Semantics

Parsing Techniques:

  1. Bottom-Up Parsing: Constructs a parse tree from leaves to root.

  2. LR Grammars & LALR Parsing: Bottom-up parsing methods that accept a larger class of grammars than LL(1); LALR(1) is the variant most compiler parser generators produce.

  3. YACC (Yet Another Compiler Compiler): Used to generate parsers.

Semantics & Intermediate Code Generation:

  1. Syntax-Directed Translation: Attaches semantic rules to grammar productions; the rules are evaluated during parsing or over the parse tree.

  2. Intermediate Code (Three-Address Code, Abstract Syntax Tree): Simplifies further code generation.

Example: For a = b + c * d;, the AST has the assignment = at the root with a and + as its children; + is the parent of b and *, and * has c and d as children, reflecting operator precedence.
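
A minimal sketch of both intermediate forms for this statement, using hypothetical tuple-based AST nodes: the tree below nests * under +, and a post-order walk emits three-address code with generated temporaries t1 and t2.

from itertools import count

# AST as nested tuples (operator, left, right); leaves are bare variable names.
ast = ("=", "a", ("+", "b", ("*", "c", "d")))

def emit_three_address_code(root):
    code, temps = [], count(1)
    def walk(node):
        if isinstance(node, str):          # leaf: a variable name
            return node
        op, left, right = node
        l, r = walk(left), walk(right)
        if op == "=":
            code.append(f"{l} = {r}")
            return l
        t = f"t{next(temps)}"              # fresh temporary for this subexpression
        code.append(f"{t} = {l} {op} {r}")
        return t
    walk(root)
    return code

print("\n".join(emit_three_address_code(ast)))
# t1 = c * d
# t2 = b + t1
# a = t2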

Unit 4.0: Context-Sensitive Features

Chomsky Hierarchy of Languages:

  1. Type 0: Unrestricted grammars.

  2. Type 1: Context-sensitive grammars.

  3. Type 2: Context-free grammars.

  4. Type 3: Regular grammars.

Type Checking & Type Conversions:

  1. Type equivalence: Determines if two types are the same.

  2. Function overloading: Allows multiple functions with the same name but different parameters.

Example: Function overloading in C++:

int add(int a, int b);        // First declaration
float add(float a, float b);  // Second declaration

Unit 5.0: Runtime Storage & Code Optimization

Storage Organization & Allocation Strategies:

  1. Static Allocation: Memory allocated at compile time.

  2. Stack Allocation: Used for function calls.

  3. Heap Allocation: Used for dynamic memory.

Code Optimization Techniques:

  1. Peephole Optimization: Examines a small window of adjacent instructions and removes or replaces redundant ones.

  2. Flow Graphs & Data Flow Analysis: Used for optimizing loops and variables.

Example: Peephole Optimization:

MOV R1, A
ADD R1, 0   // Can be removed as it does nothing.
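
A tiny sketch of such a pass in Python, using a made-up two-operand instruction format (the third instruction is invented so something survives the optimization):

def peephole(instructions):
    """Drop instructions that have no effect, e.g. adding 0 to a register."""
    optimized = []
    for instr in instructions:
        op, *operands = instr.replace(",", "").split()
        if op == "ADD" and operands[-1] == "0":
            continue                       # x + 0 == x, so the instruction is redundant
        optimized.append(instr)
    return optimized

print(peephole(["MOV R1, A", "ADD R1, 0", "MOV B, R1"]))
# ['MOV R1, A', 'MOV B, R1']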

Unit 6.0: Code Generation

Code Generation & Optimization:

  1. Machine-Dependent Code Generation: Converts intermediate code into machine code.

  2. Register Allocation: Assigns variables to CPU registers.

  3. DAG Representation: Exposes common subexpressions so each is computed only once.

Example: For x = a + b; y = a + b;, the DAG will store a + b once instead of duplicating the computation.
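
A minimal sketch of that idea via local value numbering: each distinct (operator, operand, operand) triple becomes one DAG node, so the second a + b simply reuses the node built for the first.

def build_dag(statements):
    """Map assignments to DAG nodes, reusing nodes for repeated subexpressions."""
    nodes = {}     # (op, left, right) -> node id; this dictionary is the DAG
    var_node = {}  # variable name -> node id it currently holds
    def node_for(operand):
        return var_node.get(operand, operand)    # a leaf is just its own name
    for target, left, op, right in statements:
        key = (op, node_for(left), node_for(right))
        if key not in nodes:
            nodes[key] = f"n{len(nodes) + 1}"    # new interior node
        var_node[target] = nodes[key]            # otherwise reuse the existing node
    return nodes, var_node

# x = a + b; y = a + b;
nodes, var_node = build_dag([("x", "a", "+", "b"), ("y", "a", "+", "b")])
print(nodes)      # {('+', 'a', 'b'): 'n1'}   -- a + b is stored once
print(var_node)   # {'x': 'n1', 'y': 'n1'}    -- x and y share the same node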

This document provides a structured explanation of formal languages, parsing, compilation techniques, and code generation with examples.

📌 UNIT 1.0: INTRODUCTION TO AI

Overview of Artificial Intelligence (AI)

Artificial Intelligence (AI) refers to the simulation of human intelligence in machines. It enables computers to perform tasks that typically require human intelligence, such as learning, reasoning, problem-solving, and decision-making. AI can be classified into:
Weak AI – Designed for specific tasks (e.g., chatbots, recommendation systems).
Strong AI – Capable of generalized human-like intelligence (e.g., AGI, futuristic AI).

Turing Test

Proposed by Alan Turing (1950), this test evaluates a machine’s ability to exhibit intelligent behavior indistinguishable from a human. If a human cannot differentiate between a human and a machine in conversation, the machine "passes" the Turing Test.

Intelligent Agents

An agent is any system that perceives its environment and takes actions to maximize its performance.
Simple Reflex Agents – React to specific conditions but have no memory.
Model-Based Reflex Agents – Maintain an internal state of the world.
Goal-Based Agents – Take actions to achieve predefined goals.
Utility-Based Agents – Aim to maximize performance based on a utility function.
Learning Agents – Continuously improve based on past experiences.

📌 UNIT 2.0: PROBLEM SOLVING BY SEARCHING

AI uses search algorithms to find solutions efficiently.

1️⃣ Uninformed (Blind) Search Techniques

These algorithms explore solutions without prior knowledge of the problem domain.

Depth First Search (DFS) – Explores the deepest node first, using a stack (LIFO).
Breadth First Search (BFS) – Explores all neighboring nodes first, using a queue (FIFO).
Depth First Iterative Deepening (DFID) – Combines DFS and BFS by gradually increasing depth limits.
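
The two strategies differ only in the frontier data structure, as this sketch on a small hypothetical graph shows: DFS pops from the end of the frontier (stack), BFS pops from the front (queue).

from collections import deque

# Hypothetical search graph as an adjacency list.
graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"], "D": [], "E": ["F"], "F": []}

def search(start, goal, use_stack):
    """Generic graph search: a LIFO frontier gives DFS, a FIFO frontier gives BFS."""
    frontier, visited, order = deque([start]), {start}, []
    while frontier:
        node = frontier.pop() if use_stack else frontier.popleft()
        order.append(node)
        if node == goal:
            return order                     # order in which nodes were expanded
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(neighbor)
    return None

print("DFS:", search("A", "F", use_stack=True))    # ['A', 'C', 'F']
print("BFS:", search("A", "F", use_stack=False))   # ['A', 'B', 'C', 'D', 'E', 'F']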

2️⃣ Heuristic (Informed) Search Techniques

These algorithms use domain-specific knowledge (heuristics) to speed up searching.

Generate and Test – Generates possible solutions and tests them iteratively.
Best First Search – Expands the most promising node based on a heuristic function.
Beam Search – Like Best First Search but limits the number of best nodes considered.
Hill Climbing – Iteratively moves toward the best solution but may get stuck in local optima.
A* Algorithm – Uses f(n) = g(n) (cost so far) + h(n) (estimated future cost) for optimal pathfinding.
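
A short sketch of A* on a hypothetical weighted graph; the heuristic values h(n) are made up but admissible for this example.

import heapq

graph = {"S": {"A": 1, "B": 4}, "A": {"B": 2, "G": 5}, "B": {"G": 1}, "G": {}}
h = {"S": 4, "A": 3, "B": 1, "G": 0}               # assumed admissible heuristic

def a_star(start, goal):
    """Always expand the frontier node with the smallest f(n) = g(n) + h(n)."""
    frontier = [(h[start], 0, start, [start])]     # (f, g, node, path so far)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for neighbor, cost in graph[node].items():
            new_g = g + cost
            if new_g < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = new_g
                heapq.heappush(frontier, (new_g + h[neighbor], new_g, neighbor, path + [neighbor]))
    return None

print(a_star("S", "G"))    # (4, ['S', 'A', 'B', 'G'])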

📌 UNIT 3.0: PROBLEM REDUCTION & STOCHASTIC SEARCH

1️⃣ Problem Reduction Search

AND/OR Graphs – Used when problems can be broken into sub-problems (AND nodes) or multiple alternative paths (OR nodes).
AO Algorithm* – Solves AND/OR graphs optimally using a heuristic approach.
Constraint Satisfaction Problems (CSP) – Problems like Sudoku, where solutions must satisfy constraints.
Means-End Analysis – Identifies the difference between the current state and the goal and applies actions to reduce this difference.

2️⃣ Stochastic (Randomized) Search Methods

These algorithms involve randomness to explore solutions.

Simulated Annealing – Inspired by annealing in metallurgy, this algorithm avoids local minima by occasionally accepting worse solutions (see the sketch after this list).
Particle Swarm Optimization (PSO) – Inspired by the movement of bird flocks, particles (solutions) adjust their positions based on the best solution found.
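
A small sketch of simulated annealing minimizing the toy function f(x) = x²: worse moves are accepted with probability exp(-Δ/T), and the temperature T is lowered every step (all parameters here are made up).

import math
import random

def simulated_annealing(f, x, temperature=10.0, cooling=0.95, steps=1000):
    """Minimize f from start x; sometimes accept worse moves to escape local minima."""
    best = x
    for _ in range(steps):
        candidate = x + random.uniform(-1, 1)          # random neighboring solution
        delta = f(candidate) - f(x)
        # Always accept improvements; accept worse moves with probability exp(-delta / T).
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            x = candidate
            if f(x) < f(best):
                best = x
        temperature *= cooling                         # cooling schedule
    return best

random.seed(0)
print(simulated_annealing(lambda x: x * x, x=8.0))     # a value close to 0, the global minimum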

3️⃣ Game Playing Algorithms

Minimax Algorithm – Used in two-player games (e.g., Chess). Assumes both players play optimally.
Alpha-Beta Pruning – Optimizes Minimax by ignoring unnecessary branches, reducing computation time.
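
A compact sketch of minimax with alpha-beta pruning over a tiny hypothetical game tree, represented as nested lists whose leaves are static evaluation scores.

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Return the minimax value, pruning branches that cannot affect the result."""
    if isinstance(node, (int, float)):      # leaf: static evaluation of the position
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:               # beta cutoff: MIN will never allow this branch
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:               # alpha cutoff: MAX will never choose this branch
                break
        return value

tree = [[3, 5], [2, 9], [0, 1]]             # MAX to move at the root
print(alphabeta(tree, maximizing=True))     # 3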

📌 UNIT 4.0: KNOWLEDGE & REASONING

1️⃣ Knowledge Representation

AI systems need to represent knowledge in a way that allows reasoning and decision-making.

Propositional Logic – Uses simple statements (propositions) that can be true or false. Example:

  • "It is raining" (True/False).
First Order Logic (FOL) – Uses objects, predicates, and quantifiers for more expressive reasoning. Example:

  • "All humans are mortal" → ∀x (Human(x) → Mortal(x))

2️⃣ Inference Mechanisms

Resolution and Refutation Proofs – Logical deduction methods to infer new knowledge.
Theorem Proving – Used in AI-based expert systems to validate logical statements.

3️⃣ Planning & Decision Making

Partial Order Planning – Plans actions that may occur in any order, as long as dependencies are maintained.
Uncertain Knowledge and Reasoning – Used when outcomes are not certain (e.g., AI in medicine).
Bayesian Networks – Probabilistic models that represent uncertain relationships between variables. Example:

  • If it is cloudy, there is a 70% chance of rain.
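
A minimal numeric sketch of that two-node network (Cloudy → Rain) with assumed probabilities P(Cloudy) = 0.4, P(Rain | Cloudy) = 0.7, and P(Rain | ¬Cloudy) = 0.1:

p_cloudy = 0.4
p_rain_given_cloudy = 0.7        # the 70% figure from the example above
p_rain_given_clear = 0.1         # assumed probability of rain when it is not cloudy

# Marginalize over Cloudy: P(Rain) = P(Rain|Cloudy)P(Cloudy) + P(Rain|not Cloudy)P(not Cloudy)
p_rain = p_rain_given_cloudy * p_cloudy + p_rain_given_clear * (1 - p_cloudy)

# Bayes' rule: P(Cloudy | Rain) = P(Rain | Cloudy) * P(Cloudy) / P(Rain)
p_cloudy_given_rain = p_rain_given_cloudy * p_cloudy / p_rain

print(f"P(Rain) = {p_rain:.2f}")                        # 0.34
print(f"P(Cloudy | Rain) = {p_cloudy_given_rain:.2f}")  # 0.82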

📌 UNIT 5.0: LEARNING (MACHINE LEARNING)

1️⃣ Types of Machine Learning

Supervised Learning – Uses labeled data (e.g., email spam classification).
Unsupervised Learning – Finds patterns in unlabeled data (e.g., customer segmentation).
Semi-Supervised Learning – Combines labeled and unlabeled data for training.

2️⃣ Important Learning Algorithms

K-Means Clustering – Groups similar data points into clusters (unsupervised learning).
Decision Trees – Uses a tree-like structure for decision-making (supervised learning).
Neural Networks – Mimics the human brain using layers of neurons to learn patterns.
Deep Learning – Uses multiple hidden layers in neural networks for advanced AI (e.g., facial recognition).
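
As a concrete example of an unsupervised algorithm from this list, here is a minimal k-means sketch on made-up one-dimensional data: assign each point to its nearest centroid, recompute centroids as cluster means, and repeat.

def k_means(points, k, iterations=10):
    """Cluster 1-D points by alternating assignment and centroid-update steps."""
    centroids = points[:k]                                # naive initialization: first k points
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:                                  # assignment step
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        for i, cluster in enumerate(clusters):            # update step
            if cluster:
                centroids[i] = sum(cluster) / len(cluster)
    return centroids, clusters

data = [1.0, 1.5, 2.0, 8.0, 8.5, 9.0]                     # made-up data with two obvious groups
centroids, clusters = k_means(data, k=2)
print(centroids)   # [1.5, 8.5]
print(clusters)    # [[1.0, 1.5, 2.0], [8.0, 8.5, 9.0]]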

📌 UNIT 6.0: ADVANCED AI TOPICS

1️⃣ Computer Vision

✔ AI algorithms process and analyze images (e.g., face recognition, self-driving cars).

2️⃣ Natural Language Processing (NLP)

✔ AI understands human language (e.g., chatbots, Google Translate, voice assistants).

3️⃣ Expert Systems

✔ AI-based decision-making systems that mimic human experts (e.g., medical diagnosis software).

4️⃣ Robotics

✔ AI-driven robots perform human-like tasks (e.g., industrial automation, space exploration).

5️⃣ Genetic Algorithm (GA)

✔ Inspired by Darwin’s theory of evolution, GA optimizes solutions by simulating natural selection.
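
A small sketch of a GA maximizing a toy fitness function (the number of 1-bits in a fixed-length bit string) through selection, crossover, and mutation; every parameter here is made up.

import random

LENGTH = 12

def fitness(individual):
    return sum(individual)                               # toy objective: count the 1-bits

def crossover(parent_a, parent_b):
    point = random.randrange(1, LENGTH)                  # single-point crossover
    return parent_a[:point] + parent_b[point:]

def mutate(individual, rate=0.05):
    return [bit ^ 1 if random.random() < rate else bit for bit in individual]

def genetic_algorithm(population_size=20, generations=50):
    population = [[random.randint(0, 1) for _ in range(LENGTH)]
                  for _ in range(population_size)]
    for _ in range(generations):
        # Selection: keep the fitter half as parents (truncation selection).
        parents = sorted(population, key=fitness, reverse=True)[:population_size // 2]
        # Reproduction: next generation = mutated offspring of randomly paired parents.
        population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                      for _ in range(population_size)]
    return max(population, key=fitness)

random.seed(1)
best = genetic_algorithm()
print(best, fitness(best))   # typically all or nearly all 1s after 50 generations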

📌 STUDY STRATEGY FOR AI