CSci 4511w: In Class Activities

Each week (except for the weeks when there is an exam) there will be an in-class activity. Activities will vary from discussion in small groups to practical exercises and problem solving.
Participating will help you stay on top of the material and make sure you understand it. It will also help me understand which parts of the material you find most difficult.
Participation in each activity is worth 1% of the class grade. If you miss class you cannot make up the missing points, but there will be a few opportunities for extra credit during the semester.
  1. Thursday January 21
    This is part of question 2.2 from the textbook. We will examine the rationality of vacuum-cleaner agent functions given the following assumptions from the textbook:
    1. You are given this reflex-vacuum-agent function:
      function Reflex-Vacuum-Agent([location, status]) returns an action
        if status = Dirty then return Suck
        else if location = A then return Right
        else if location = B then return Left
      
      Prove that this vacuum-agent function is rational. One way of doing it is to show that for all situations, i.e., all distributions of dirt and all initial locations, there is no reflex agent that can do better. (A small simulation sketch is given after question 2 below.)
    2. Suppose you change the performance measure so that one point is deducted for each movement. Should the agent function change? Does the agent need internal state?
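    To experiment with this, here is a minimal Python sketch of the agent and a simulation loop; the two-square world and the performance measure (one point per clean square per time step) are the textbook's, but the simulation code itself is our own and only illustrative.

      # Reflex vacuum agent for the two-square world (a sketch, not the
      # textbook's code).
      def reflex_vacuum_agent(location, status):
          if status == "Dirty":
              return "Suck"
          return "Right" if location == "A" else "Left"

      def simulate(dirt, location, steps=10):
          """dirt: dict like {"A": True, "B": False}; returns the total
          score, awarding one point per clean square per time step."""
          score = 0
          for _ in range(steps):
              status = "Dirty" if dirt[location] else "Clean"
              action = reflex_vacuum_agent(location, status)
              if action == "Suck":
                  dirt[location] = False
              else:
                  location = "B" if action == "Right" else "A"
              score += sum(1 for square in dirt if not dirt[square])
          return score

      # Score the agent on every initial configuration:
      for a in (True, False):
          for b in (True, False):
              for loc in ("A", "B"):
                  print(loc, a, b, simulate({"A": a, "B": b}, loc))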

  2. Tuesday January 26
    Reading of writing #1 and discussion in small groups.
  3. Tuesday February 2
    You are given the following graph, where each node has an identifier (a letter) and an h value. A number along an arc indicates the cost of the arc.
    [figure to be added]
    1. Show in what order A* expands nodes from Start to Goal. For each node expanded during the search, show its f and g values. If a node is reached on multiple paths, show its f and g values each time it is reached, and indicate its parent node.
    2. What is the solution path found?
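    Since the figure is not yet included, here is a minimal A* sketch in Python you can use to check your work; the graph, arc costs, and h values below are made up for illustration and are not the ones from the figure.

      import heapq

      # Hypothetical graph: {node: [(successor, arc cost), ...]}.
      graph = {"Start": [("A", 2), ("B", 5)],
               "A": [("Goal", 6)],
               "B": [("Goal", 1)],
               "Goal": []}
      h = {"Start": 5, "A": 4, "B": 1, "Goal": 0}   # heuristic values

      def astar(start, goal):
          # Frontier entries are (f, g, node, path); a node reached on
          # multiple paths gets one entry per path, as in the exercise.
          frontier = [(h[start], 0, start, [start])]
          while frontier:
              f, g, node, path = heapq.heappop(frontier)
              print(f"expand {node}: g={g}, f={f}")
              if node == goal:
                  return path, g
              for succ, cost in graph[node]:
                  g2 = g + cost
                  heapq.heappush(frontier, (g2 + h[succ], g2, succ, path + [succ]))

      print(astar("Start", "Goal"))   # -> (['Start', 'B', 'Goal'], 6)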

  4. Thursday February 11
    Prove each of the following statements, or give a counterexample:
    1. breadth-first search is a special case of A*
    2. uniform-cost search is a special case of greedy best-first search
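    This is not a proof, but a quick way to build intuition for statement 1: with h(n) = 0 everywhere and unit step costs, A*'s f(n) = g(n) = depth(n), so nodes leave the frontier in breadth-first order. A tiny Python check (the four-node graph is made up):

      import heapq

      graph = {"S": ["A", "B"], "A": ["C"], "B": ["C"], "C": []}  # unit-cost arcs

      frontier = [(0, "S")]   # entries are (f, node), with f = g since h = 0
      order = []
      while frontier:
          f, node = heapq.heappop(frontier)
          order.append(node)
          for succ in graph[node]:
              heapq.heappush(frontier, (f + 1, succ))
      # C appears twice because it is reached on two paths.
      print(order)   # -> ['S', 'A', 'B', 'C', 'C']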

  5. Tuesday February 16
    Play a tic-tac-toe game with another student on a 4x4 board, with the objective of placing 3 consecutive pieces in a row, column, or diagonal.
    Write down the rules you use when deciding where to put your piece. Be as precise as possible, as if you were to describe an algorithm.
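    If you want to check your rules against a program, here is a minimal Python sketch of the board and the win test; the representation below is our own, not part of the assignment.

      # 4x4 board; a player wins with 3 consecutive pieces in a row,
      # column, or diagonal. The board maps (row, col) -> "X" or "O";
      # a missing key means the square is empty.
      N, K = 4, 3   # board size and pieces in a row needed to win

      def wins(board, player):
          # Check every K-long line in four directions: right, down,
          # down-right, and down-left.
          for r in range(N):
              for c in range(N):
                  for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
                      cells = [(r + i * dr, c + i * dc) for i in range(K)]
                      if all(0 <= rr < N and 0 <= cc < N and
                             board.get((rr, cc)) == player for rr, cc in cells):
                          return True
          return False

      board = {(0, 0): "X", (1, 1): "X", (2, 2): "X"}
      print(wins(board, "X"))   # True: a down-right diagonal of length 3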
  6. Tuesday February 23
    Show the backed-up values for all the nodes in the following game tree and show the branches that are pruned by alpha-beta pruning. For each branch pruned, write down the condition that is used to do the pruning. Follow the convention used in the textbook to examine the branches in the tree from left to right.
    [figure to be added]
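    Until the figure is added, here is a minimal alpha-beta sketch in Python; the game tree is given as nested lists with utilities at the leaves, and the example tree at the bottom is made up for illustration.

      def alphabeta(node, alpha, beta, maximizing):
          """node: nested lists with numbers at the leaves; children are
          examined left to right, as in the textbook's convention."""
          if isinstance(node, (int, float)):   # leaf: return its utility
              return node
          value = float("-inf") if maximizing else float("inf")
          for i, child in enumerate(node):
              v = alphabeta(child, alpha, beta, not maximizing)
              if maximizing:
                  value, alpha = max(value, v), max(alpha, v)
              else:
                  value, beta = min(value, v), min(beta, v)
              if alpha >= beta:                # cutoff: prune what is left
                  if i + 1 < len(node):
                      print(f"pruned {len(node) - i - 1} branch(es): "
                            f"alpha={alpha} >= beta={beta}")
                  break
          return value

      # Made-up tree: MAX at the root, MIN one level down.
      tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
      print(alphabeta(tree, float("-inf"), float("inf"), True))   # -> 3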
  7. Thursday March 11
    [This is question 7.10 from the 3rd edition of the textbook and 7.8 from the 2nd edition] Decide whether each of the following sentences is valid, unsatisfiable, or neither. Verify your decisions using truth tables or equivalence rules. (A small truth-table checker is sketched after the list.)
    1. Smoke → Smoke
    2. Smoke → Fire
    3. (Smoke → Fire) → (¬ Smoke → ¬ Fire)
    4. Smoke ∨ Fire ∨ ¬ Fire
    5. ((Smoke ∧ Heat) → Fire) ↔ ((Smoke → Fire) ∨ (Heat → Fire))
    6. (Smoke → Fire) → ((Smoke ∧ Heat) → Fire)
    7. Big ∨ Dumb ∨ (Big → Dumb)
    8. (Big ∧ Dumb) ∨ ¬ Dumb
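    Here is a small Python sketch for brute-force truth-table checking; the helper is our own, with each sentence written as a Python function of a truth assignment.

      from itertools import product

      def classify(formula, symbols):
          """formula maps a truth assignment (a dict) to True/False.
          Returns 'valid', 'unsatisfiable', or 'neither'."""
          results = [formula(dict(zip(symbols, values)))
                     for values in product([True, False], repeat=len(symbols))]
          if all(results):
              return "valid"
          if not any(results):
              return "unsatisfiable"
          return "neither"

      implies = lambda p, q: (not p) or q

      # Sentence 1: Smoke -> Smoke
      print(classify(lambda m: implies(m["Smoke"], m["Smoke"]), ["Smoke"]))
      # Sentence 2: Smoke -> Fire
      print(classify(lambda m: implies(m["Smoke"], m["Fire"]), ["Smoke", "Fire"]))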

  8. Thursday March 25
    For each of the following sentences, decide if the logic sentence given is a correct translation of the English sentence or not. If not, correct it and explain briefly why not.
    1. No one owns a car
      ¬ (∀ x ¬ Own(x, y) ∧ Car(y))
    2. One purple mushroom is poisonous.
      ∃ x Mushroom(x) ∧ Purple(x) ∧ Poisonous(x)
    3. An object is clear if nothing is on it.
      ∃ x ∀ y Clear(x) → ¬ On(y,x)
    4. John loves his dog.
      ∃ x Dog(x) ∧ Owns(John, x) → Loves(John, x)
    5. John loves all his dogs.
      ∀ x Dog(x) ∧ Owns(John, x) ∧ Loves(John, x)
    6. Every city has a dogcatcher who has been bitten by every dog in town.
      ∀ x ∀ y ∀ z City(z) ∧ DogCatcher(y) ∧ Dog(z) ∧ LivesIn(z,x) → BittenBy(y,z)

  9. Thursday April 1

  10. Thursday April 8

  11. Tuesday April 20
    It is autumn and you need to rake the leaves out of your yard. Assume the yard is divided into a linear sequence of 5 squares, each one meter on a side. The square at one end (S_0) is identified as the collection point. The goal is to get all the leaves into the collection point. Assume you have a leaf blower that can move all the leaves from square S_i to an adjacent square S_j. (Inspired by a problem used by Drew McDermott.)
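    One possible state representation, as a minimal Python sketch (this formulation is our own, assuming a state records the amount of leaves in each square and an action blows all the leaves from one square to an adjacent one):

      # State: tuple of leaf amounts for squares S_0..S_4, where S_0 is
      # the collection point. Blow(i, j) moves all leaves from S_i to an
      # adjacent square S_j.
      def successors(state):
          n = len(state)
          for i in range(n):
              if state[i] == 0:
                  continue
              for j in (i - 1, i + 1):
                  if 0 <= j < n:
                      new = list(state)
                      new[j] += new[i]
                      new[i] = 0
                      yield (f"Blow({i},{j})", tuple(new))

      def is_goal(state):
          return sum(state[1:]) == 0   # all leaves are in S_0

      start = (0, 1, 1, 1, 1)          # one unit of leaves per square
      for action, result in successors(start):
          print(action, result)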
  12. Thursday April 29
    Learn a decision tree using information gain with the following data set (from Yoh-Han Pao, Adaptive Pattern Recognition and Neural Networks, 1989):
    height  hair   eyes   class
    tall    dark   blue   yes
    short   dark   blue   yes
    tall    blond  blue   no
    tall    red    blue   no
    tall    blond  brown  yes
    short   blond  blue   no
    short   blond  brown  yes
    tall    dark   brown  yes
    1. Compute the entropy of the set S:
      Entropy(S) = - p log2 p - n log2 n
      where p is the fraction of positive examples and n the fraction of negative examples.
    2. for each attribute A (i.e. for height, hair, and eyes)
      1. Split the examples into disjoint subsets, one subset for each value v of the attribute A. Each subset corresponds to a branch in the decision tree from node A. For each subset compute its entropy:
        Entropy(S, A=v) = - p_v log2 p_v - n_v log2 n_v
        where p_v and n_v are the fractions of positive and negative examples in the subset with A = v.
        Entropy(S, height=tall) =
        Entropy(S, height=short) =
        Entropy(S, hair=dark) =
        Entropy(S, hair=blond) =
        Entropy(S, hair=red) =
        Entropy(S, eyes=blue) =
        Entropy(S, eyes=brown) =
      2. After you have computed the entropy of each subset for each value of attribute A, compute the Gain, which is the expected reduction in entropy due to sorting on A, as follows:
        Gain(S, A) = Entropy(S) - sum over v in Values(A) of (|Sv| / |S|) Entropy(Sv)
        where Sv is the subset for which A has value v and Values(A) is the set of all possible values for A.
    3. Select the attribute that has the largest Gain and repeat the process until all subsets have 0 entropy or you run out of attributes.
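    Here is a minimal Python sketch that computes these entropies and gains for the data set above (our own helper code, directly following the formulas):

      from math import log2
      from collections import Counter

      # The eight examples from the table: (height, hair, eyes, class).
      data = [("tall", "dark", "blue", "yes"), ("short", "dark", "blue", "yes"),
              ("tall", "blond", "blue", "no"), ("tall", "red", "blue", "no"),
              ("tall", "blond", "brown", "yes"), ("short", "blond", "blue", "no"),
              ("short", "blond", "brown", "yes"), ("tall", "dark", "brown", "yes")]
      attrs = {"height": 0, "hair": 1, "eyes": 2}

      def entropy(examples):
          counts = Counter(ex[3] for ex in examples)
          total = len(examples)
          return -sum(c / total * log2(c / total) for c in counts.values())

      def gain(examples, attr):
          i = attrs[attr]
          remainder = 0.0
          for v in {ex[i] for ex in examples}:
              subset = [ex for ex in examples if ex[i] == v]
              remainder += len(subset) / len(examples) * entropy(subset)
          return entropy(examples) - remainder

      print("Entropy(S) =", entropy(data))          # about 0.954
      for a in attrs:
          print(f"Gain(S, {a}) =", gain(data, a))   # pick the largest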

  13. Thursday May 6
Copyright: © 2010 by the Regents of the University of Minnesota
Department of Computer Science and Engineering. All rights reserved.
Comments to: Maria Gini