AI Testing – ISTQB Software Testing Certification Training

Course Description

Why choose ALPI for ISTQB AI Testing certification training?
  • ALPI's training is accredited by ASTQB, the U.S. Board for ISTQB certification, so you can be certain you are getting excellent training quality.
  • We use certified live instructors in both our in-person and virtual classes so you can ask questions and get answers right away.
  • We focus on real-world examples.
  • We teach using interactive, hands-on exercises.
  • This ALPI course comes with a free refresher guarantee, so you can re-take the course within 4 months at no additional charge. Plus, if you meet the criteria, you could also re-take your exam for free. Contact us for information about this unique benefit that gives you peace of mind.

The ISTQB AI Testing Certificate Course is a five-day course explaining the fundamentals of testing on AI projects. This course addresses the ISTQB AI Testing Syllabus.

The course includes exercises and practice exams to highlight key aspects of the syllabus and to help participants understand and practice the concepts and methods presented.

This course provides participants with the knowledge and skills necessary to become an effective member of an AI testing team. It explains the fundamental concepts of testing on AI projects, including methods and practices around machine learning (ML). We suggest that attendees hold the ISTQB Foundation Level certificate, especially if they intend to take the ISTQB AI Testing exam, but non-certificate holders can also benefit from this course.

By the end of this course, an attendee should be able to:

  • Understand the current state and expected trends of AI
  • Experience the implementation and testing of an ML model and recognize where testers can best influence its quality
  • Understand the challenges associated with testing AI-based systems, such as their self-learning capabilities, bias, ethics, complexity, non-determinism, transparency, and explainability
  • Contribute to the test strategy for an AI-Based system
  • Design and execute test cases for AI-based systems
  • Recognize the special requirements for the test infrastructure to support the testing of AI-based systems
  • Understand how AI can be used to support software testing

This course prepares you for the ISTQB AI Testing exam. You have the option to add the ISTQB exam for $199 when registering for class. Passing the exam grants you the ISTQB Certified Tester AI Testing (CT-AI) certification. Non-native English speakers who need extended exam time should request it at least two weeks before the class starts.
  • For participants attending class remotely (Virtual Live), the exam can be scheduled online from home or office, or taken at a test center. Visit ISTQB Online Exam Information and Locate a Test Center for details.
  • For participants attending class in Chevy Chase, MD, the exam will be administered on the last day of class and will end by 5:00 PM, so please plan your travel accordingly.


Duration

5 day(s)

Time

9:00 AM - 3:00 PM ET

Price

$2,700

Labs

Hands-on exercises reinforcing the learning objectives help participants understand and apply the topics covered in the course.


Intended Audience

The target audience for this course includes:
  • Software testers
  • Senior testers
  • Test analysts
  • Test leads
  • Managers, including test managers, project managers, and quality managers


Prerequisites

You must have obtained an ISTQB Foundation Level Certification (CTFL) to be eligible for the AI Testing Certification.

Prior to attending class, please download and review the following document: AI Testing Syllabus


Outline

Introduction to AI

  • Definition of AI and AI Effect
  • Narrow, General and Super AI
  • AI-Based and Conventional Systems
  • AI Technologies
  • AI Development Frameworks
  • Hardware for AI-Based Systems
  • AI as a Service (AIaaS)
    • Contracts for AI as a Service
    • AIaaS Examples
  • Pre-Trained Models
    • Introduction to Pre-Trained Models
    • Transfer Learning
    • Risks of using Pre-Trained Models and Transfer Learning
  • Standards, Regulations and AI

Quality Characteristics for AI-Based Systems

  • Flexibility and Adaptability
  • Autonomy
  • Evolution
  • Bias
  • Ethics
  • Side Effects and Reward Hacking
  • Transparency, Interpretability and Explainability
  • Safety and AI

Machine Learning (ML) – Overview

  • Forms of ML
    • Supervised Learning
    • Unsupervised Learning
    • Reinforcement Learning
  • ML Workflow
  • Selecting a Form of ML
  • Factors Involved in ML Algorithm Selection
  • Overfitting and Underfitting
    • Overfitting
    • Underfitting
    • Hands-On Exercise: Demonstrate Overfitting and Underfitting
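
A minimal sketch of the kind of demonstration this exercise involves, assuming Python with NumPy and scikit-learn (a tooling choice not specified by the course): polynomials of increasing degree are fit to noisy data, and the gap between training and test error reveals underfitting at low degree and overfitting at high degree.

    # Illustrative only: underfitting vs. overfitting, controlled by polynomial degree.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(200, 1))                                 # one input feature
    y = np.sin(2 * np.pi * X).ravel() + rng.normal(scale=0.2, size=200)  # noisy target
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    for degree in (1, 4, 15):  # underfit, reasonable fit, overfit
        model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
        model.fit(X_train, y_train)
        train_mse = mean_squared_error(y_train, model.predict(X_train))
        test_mse = mean_squared_error(y_test, model.predict(X_test))
        print(f"degree={degree:2d}  train MSE={train_mse:.3f}  test MSE={test_mse:.3f}")

Training and test errors that are both high indicate underfitting; a low training error with a much higher test error is the overfitting signal the exercise looks for.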

ML - Data

  • Data Preparation as Part of the ML Workflow
    • Challenges in Data Preparation
    • Hands-On Exercise: Data Preparation for ML
  • Training, Validation and Test Datasets in the ML Workflow
    • Hands-On Exercise: Identify Training and Test Data and Create an ML Model (see the sketch at the end of this module)
  • Dataset Quality Issues
  • Data Quality and its Effect on the ML Model
  • Data Labelling for Supervised Learning
    • Approaches to Data Labelling
    • Mislabeled Data in Datasets
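
A minimal sketch, assuming Python with scikit-learn (a tooling choice not specified by the course), of the training/validation/test split discussed in this module, using the bundled Iris dataset purely as a stand-in:

    # Illustrative only: carving a labelled dataset into training, validation and test sets.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)

    # Hold out 20% as the test set, then split the remainder 75/25 into training/validation.
    X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)
    X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.25, random_state=0, stratify=y_rest)

    print(f"train={len(X_train)}  validation={len(X_val)}  test={len(X_test)}")  # 90 / 30 / 30

The training set fits the model, the validation set tunes it, and the test set is kept back for a final, unbiased evaluation.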

ML Functional Performance Metrics

  • Confusion Matrix
  • Additional ML Functional Performance Metrics for Classification, Regression and Clustering
  • Limitations of ML Functional Performance Metrics
  • Selecting ML Functional Performance Metrics
    • Hands-On Exercise: Evaluate the Created ML Model (see the sketch below)
  • Benchmark Suites for ML
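
A minimal sketch, assuming Python with scikit-learn (a tooling choice not specified by the course), of how the confusion matrix relates to the classification metrics covered in this module; the labels below are invented for illustration:

    # Illustrative only: accuracy, precision, recall and F1 derived from a confusion matrix.
    from sklearn.metrics import (confusion_matrix, accuracy_score,
                                 precision_score, recall_score, f1_score)

    y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]  # actual classes
    y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]  # classes predicted by the model

    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    print(f"TP={tp}  FP={fp}  FN={fn}  TN={tn}")

    # Each metric computed from the raw counts, then confirmed with the scikit-learn helper.
    print("accuracy :", (tp + tn) / (tp + tn + fp + fn), accuracy_score(y_true, y_pred))
    print("precision:", tp / (tp + fp), precision_score(y_true, y_pred))
    print("recall   :", tp / (tp + fn), recall_score(y_true, y_pred))
    print("f1       :", 2 * tp / (2 * tp + fp + fn), f1_score(y_true, y_pred))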

ML - Neural Networks and Testing

  • Neural Networks
    • Hands-On Exercise: Implement a Simple Perceptron (see the sketch below)
  • Coverage Measures for Neural Networks
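
A minimal sketch of the perceptron exercise, assuming Python with NumPy (a tooling choice not specified by the course): a single perceptron with a step activation is trained with the classic perceptron learning rule to learn logical AND.

    # Illustrative only: one perceptron learning the AND function.
    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # inputs
    y = np.array([0, 0, 0, 1])                      # AND of the two inputs

    weights, bias, learning_rate = np.zeros(2), 0.0, 0.1
    for epoch in range(20):  # AND is linearly separable, so the rule converges quickly
        for xi, target in zip(X, y):
            prediction = 1 if np.dot(weights, xi) + bias > 0 else 0  # step activation
            error = target - prediction
            weights += learning_rate * error * xi                    # perceptron update rule
            bias += learning_rate * error

    print("weights:", weights, "bias:", bias)
    print("predictions:", [1 if np.dot(weights, xi) + bias > 0 else 0 for xi in X])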

Testing AI-Based Systems Overview

  • Specification of AI-Based Systems
  • Test Levels for AI-Based Systems
    • Input Data Testing
    • ML Model Testing
    • Component Testing
    • Component Integration Testing
    • System Testing
    • Acceptance Testing
  • Test Data for Testing AI-based Systems
  • Testing for Automation Bias in AI-Based Systems
  • Documenting an AI Component
  • Testing for Concept Drift
  • Selecting a Test Approach for an ML System

Testing AI-Specific Quality Characteristics

  • Challenges Testing Self-Learning Systems
  • Testing Autonomous AI-Based Systems
  • Testing for Algorithmic, Sample and Inappropriate Bias
  • Challenges Testing Probabilistic and Non-Deterministic AI-Based Systems
  • Challenges Testing Complex AI-Based Systems
  • Testing the Transparency, Interpretability and Explainability of AI-Based Systems
    • Hands-On Exercise: Model Explainability
  • Test Oracles for AI-Based Systems
  • Test Objectives and Acceptance Criteria

Methods and Techniques for the Testing of AI-Based Systems

  • Adversarial Attacks and Data Poisoning
    • Adversarial Attacks
    • Data Poisoning
  • Pairwise Testing
    • Hands-On Exercise: Pairwise Testing
  • Back-to-Back Testing
  • A/B Testing
  • Metamorphic Testing (MT)
    • Hands-On Exercise: Metamorphic Testing (see the sketch at the end of this list)
  • Experience-Based Testing of AI-Based Systems
    • Hands-On Exercise: Exploratory Testing and Exploratory Data Analysis (EDA)
  • Selecting Test Techniques for AI-Based Systems
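
A minimal sketch of a metamorphic test, assuming Python with scikit-learn (a tooling choice not specified by the course): where no oracle exists for individual predictions, a k-nearest-neighbour classifier can still be checked against the relation that scaling every feature by the same positive constant must not change its predictions, because all Euclidean distances scale equally.

    # Illustrative only: a metamorphic relation for a k-NN classifier.
    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)
    scale = 2.0  # a power of two keeps the scaled distances exact in floating point

    source_model = KNeighborsClassifier(n_neighbors=5).fit(X, y)
    followup_model = KNeighborsClassifier(n_neighbors=5).fit(X * scale, y)

    # Metamorphic relation: predictions on the scaled follow-up inputs must match the originals.
    assert np.array_equal(source_model.predict(X), followup_model.predict(X * scale)), \
        "Metamorphic relation violated: uniform scaling changed the predictions"
    print("Metamorphic relation holds for", len(X), "source/follow-up pairs")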

Test Environments for AI-Based Systems

  • Test Environments for AI-Based Systems
  • Virtual Test Environments for Testing AI-Based Systems

Using AI for Testing

  • AI Technologies for Testing
    • Hands-On Exercise: The Use of AI in Testing
  • Using AI to Analyze Reported Defects
  • Using AI for Test Case Generation
  • Using AI for the Optimization of Regression Test Suites
  • Using AI for Defect Prediction
    • Hands-On Exercise: Build a Defect Prediction System (see the sketch at the end of this outline)
  • Using AI for Testing User Interfaces
    • Using AI to Test Through the Graphical User Interface (GUI)
    • Using AI to Test the GUI
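
A minimal sketch of the kind of defect-prediction system the hands-on exercise builds, assuming Python with NumPy and scikit-learn (a tooling choice not specified by the course); the per-module metrics and defect labels below are synthetic stand-ins for the code metrics and defect history a real project would supply.

    # Illustrative only: a toy defect-prediction model trained on synthetic code metrics.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    rng = np.random.default_rng(42)
    n = 500

    # Synthetic per-module metrics: lines of code, cyclomatic complexity, recent churn.
    loc = rng.integers(50, 2000, n)
    complexity = rng.integers(1, 60, n)
    churn = rng.integers(0, 40, n)
    X = np.column_stack([loc, complexity, churn])

    # Synthetic label: large, complex, frequently changed modules are more often defective.
    risk = 0.001 * loc + 0.05 * complexity + 0.08 * churn + rng.normal(0, 1, n)
    y = (risk > np.median(risk)).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                        random_state=0, stratify=y)
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
    print(classification_report(y_test, model.predict(X_test),
                                target_names=["clean", "defective"]))

In practice the features would come from static analysis and version-control history, and the predictions would be used to focus testing effort on the riskiest modules.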