Revised 08/2024

ITD 240 - Machine Learning II (3 CR.)

Course Description

Examines theory, algorithms, applications, and issues within the subfield of pattern recognition and machine learning, including feature engineering and extraction, supervised and unsupervised learning. Focuses on theory and practice, with coverage of underlying mathematical and heuristic concepts. Part II of II. Lecture 3 hours per week.

General Course Purpose

Provides advanced-level instruction in artificial intelligence to give the student competence in describing, choosing, training, testing, and evaluating various machine learning algorithms and methods, and in judging their efficacy and applicability, using case studies and real-world applications.

Course Prerequisites/Corequisites

Prerequisite: ITD 140 Machine Learning I or division approval.
Recommended: ITP 150 Python I or intermediate programming experience.

Course Objectives

Upon completing the course, the student will be able to:

  • Describe and identify basic artificial intelligence approaches.
  • Describe and differentiate between supervised and unsupervised learning techniques.
  • Identify and explain basic types of machine learning algorithms for both supervised and unsupervised machine learning.
  • Describe, explain, and apply feature extraction and engineering techniques.
  • Explain scenarios where supervised or unsupervised learning would be appropriate.
  • Apply supervised and unsupervised algorithms to problems in the form of case studies.
  • Demonstrate basic proficiency in data preparation and machine learning training and testing, both by using GUI applications and tools and by writing code in Python (a brief illustrative sketch follows this list).
  • Explain and demonstrate basic proficiency in evaluating and tuning models.
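
To indicate the expected level of hands-on proficiency, the sketch below shows a minimal Python workflow covering data preparation, a train/test split, model training, and evaluation. It is illustrative only: the libraries used (pandas and scikit-learn), the made-up dataset, and the column names are assumptions of this sketch, not requirements of this outline.

    # Minimal illustrative sketch; pandas/scikit-learn, the dataset, and the
    # column names are assumptions, not requirements of this outline.
    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.impute import SimpleImputer
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import OneHotEncoder
    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical dataset: one numeric attribute (with a missing value), one
    # nominal attribute, and a binary class label.
    df = pd.DataFrame({
        "hours_studied": [2.0, 5.0, None, 7.5, 1.0, 6.0, 3.5, 8.0],
        "major": ["cs", "math", "cs", "bio", "math", "cs", "bio", "cs"],
        "passed": [0, 1, 0, 1, 0, 1, 0, 1],
    })
    X, y = df[["hours_studied", "major"]], df["passed"]

    # Data preparation: impute the missing numeric value and one-hot encode
    # the nominal attribute.
    prep = ColumnTransformer([
        ("num", SimpleImputer(strategy="mean"), ["hours_studied"]),
        ("cat", OneHotEncoder(handle_unknown="ignore"), ["major"]),
    ])

    # Train/test split, training, and evaluation on held-out data.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)
    model = Pipeline([("prep", prep), ("clf", DecisionTreeClassifier(max_depth=3))])
    model.fit(X_train, y_train)
    print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))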

Major Topics to Be Included

  • Artificial intelligence; overview, history
  • Machine learning generally
  • Supervised learning – regression and classification
  • Unsupervised learning – clustering, dimensionality reduction
  • Feature engineering and extraction
  • Optimization and loss functions
  • Decision trees
  • Neural networks
  • Noise, overfitting, bias
  • Model validation and evaluation metrics
  • Model and hyperparameter tuning

Student Learning Outcomes

  • Identify artificial intelligence terminology
    • Define and explain the purpose of Artificial Intelligence (AI); define AI Winter
    • Define and explain the purpose of Artificial Narrow Intelligence (ANI)
    • Define and explain the purpose of Artificial General Intelligence (AGI)
    • Explain the capabilities of Artificial General Intelligence (AGI)
  • Identify, define and explain the purpose of machine learning
  • Identify feature engineering terminology
    • Define regression imputation
    • Define multiple imputation
    • Define nominal attributes
    • Define ordinal attributes
  • Define supervised learning terminology 
    • Define and explain the purpose of supervised learning
    • Demonstrate ability to identify supervised learning algorithms and identify appropriate applications
    • Define regression as a supervised learning prediction task
    • Demonstrate the ability to identify when it is appropriate to use regression
    • Define and explain the purpose of decision tree learning
    • Define classification as a supervised learning prediction task
    • Demonstrate the ability to identify appropriate applications of classification
    • Define k-nearest neighbors method for classification prediction tasks
    • Define bias within linear models
    • Define least squares for linear regression modeling
    • Define entropy
    • Define and explain the purpose of stochastic gradient descent (SGD)
    • Define and explain the purpose of deep learning
    • Define and explain the purpose of support vector machines (SVMs)
    • Define and explain the purpose of a Bayesian Network
    • Define bias, variance, overfitting, underfitting
    • Define bias-variance tradeoff
    • Define and explain the purpose of hyperparameters
    • Demonstrate the ability to explain the impact of hyperparameters on a model
    • Understand the impact of hyperparameters on complexity
    • Define and explain the purpose of weights
    • Apply supervised learning to analyze and solve real-world problems through case studies (a brief illustrative sketch follows this list)
  • Define unsupervised learning terminology
    • Define unsupervised learning
    • Define clustering
    • Define and explain the purpose of anomaly detection
    • Define and explain the purpose of dimensionality reduction
    • Demonstrate the ability to identify when dimensionality reduction is appropriate
    • Define and explain the purpose of k-means clustering
      • Define inter-cluster distance
      • Define centroids
    • Define and explain the purpose of tuning
    • Apply unsupervised learning to analyze and solve real-world problems through case studies (a second brief sketch follows this list)
  • Apply machine learning theory and applications to analyze and solve real-world problems through case studies
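
To suggest the level at which the "apply" outcomes above are addressed, two brief Python sketches follow. The first illustrates least squares for linear regression, fitting a weight and a bias (intercept) term by minimizing squared error; NumPy and the made-up data are assumptions of the sketch, not requirements of this outline.

    # Illustrative least squares sketch; NumPy and the data are assumptions.
    import numpy as np

    # Hypothetical training data: one input feature x and a numeric target y.
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 4.3, 6.2, 7.9, 10.1])

    # Prepend a column of ones so the model also learns a bias (intercept) term.
    X = np.column_stack([np.ones_like(x), x])

    # Least squares chooses the weights that minimize the sum of squared errors
    # ||Xw - y||^2; np.linalg.lstsq solves this directly.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    bias, weight = w
    print(f"bias = {bias:.2f}, weight = {weight:.2f}")  # about 0.24 and 1.96 here
    print("predictions:", X @ w)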
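
The second sketch illustrates k-means clustering on unlabeled data, reporting the learned centroids and the inter-cluster (centroid-to-centroid) distance; again, NumPy, scikit-learn, and the example points are assumptions of the sketch rather than requirements of this outline.

    # Illustrative k-means sketch; NumPy/scikit-learn and the points are assumptions.
    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical unlabeled two-dimensional points forming two loose groups.
    points = np.array([
        [1.0, 1.1], [1.2, 0.9], [0.8, 1.0], [1.1, 1.2],
        [5.0, 5.1], [5.2, 4.9], [4.8, 5.0], [5.1, 5.2],
    ])

    # Fit k-means with k = 2; each cluster is summarized by its centroid.
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
    print("cluster labels:", km.labels_)
    print("centroids:\n", km.cluster_centers_)

    # Inter-cluster distance here: Euclidean distance between the two centroids.
    c0, c1 = km.cluster_centers_
    print("inter-cluster distance:", np.linalg.norm(c0 - c1))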

Required Time Allocation

To standardize the core topics of this course, the following student contact hours per topic are required. Each syllabus should be created to adhere as closely as possible to these allocations. Topics are not necessarily to be taught in the order shown.

There are normally 45 student contact hours per semester for a three-credit course (approximately 14 weeks of instruction, excluding final exam week, at roughly 3.2 contact hours per week). Sections of the course offered in alternative formats (i.e., not the standard 15-week semester) still meet for the same total number of contact hours. The final exam is not included in the table below.

The quickly evolving nature of artificial intelligence, and of machine learning in particular, means that some content noted in this document may be superseded or become obsolete. Instructors should reflect any such changes in their individual syllabi. Additionally, time is allocated for additional and optional topics to give instructors flexibility in tailoring the course to special needs or resources.

Topics | Hours | Percentage
Artificial intelligence; overview, history; Machine learning generally | 1 | 2%
Supervised learning – regression and classification | 9 | 20%
Unsupervised learning – clustering, dimensionality reduction | 9 | 20%
Feature engineering and extraction | 10 | 22%
Optimization and loss functions | 3 | 7%
Decision trees; Neural networks | 3 | 7%
Noise, overfitting, bias; Model validation and evaluation metrics | 3 | 7%
Model and hyperparameter tuning | 2 | 4%
Testing, to include quizzes, tests, and exams (excluding final exam) | 3 | 7%
Other Optional Topics | 2 | 4%
Total | 45 | 100%