
Subject: System Testing and Verification

The following software has been developed and is ready for testing. It is now your responsibility to design and implement the testing activities.

Definition of Developed Software: This software will be integrated into the college student records database and will be used for preparing transcripts. It calculates a student’s final grade based on four input scores: Quiz Average, Midterm Exam Score, Assignments Average, and Final Exam Score. The final grade is the average of these four scores, and the software assigns a letter grade based on the percentage: A (90-100%), B (80-89%), C (70-79%), D (60-69%), and F (0-59%).

Input: Four grades (each based on 100%) for each student in the course.

Output: Final Letter Grade for each student in the course.

Question 1: Explain the Testing Activities for the given software: Test Design, Test Automation, Test Execution, and Test Evaluation.

Test Design: In the test design phase, the primary goal is to create test cases that verify the correctness and robustness of the software. This involves defining input conditions, expected outcomes, and boundary conditions. For this software, test design requires understanding the input data range (0-100%), the calculation logic for the average, and the criteria for letter grade assignment. It is important to include boundary testing, using inputs whose average lands exactly at the grade cutoffs (e.g., 59, 60, 69, 70, 79, 80, 89, 90), to confirm accurate grade assignment at threshold values. Equivalence partitioning can be used to split input scores into valid and invalid categories, ensuring comprehensive test coverage.
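As one way to make these design choices concrete, the sketch below (Python) lists boundary-value cases and equivalence partitions derived from the grade cutoffs. The tuple layout and the idea of using four equal scores to force the average onto a cutoff are illustrative assumptions, not part of the specification.

```python
# Boundary-value test data for the grade calculator. Each tuple is
# (quiz, midterm, assignments, final_exam, expected_letter); using four
# equal scores forces the average onto the chosen cutoff value.
boundary_cases = [
    (90, 90, 90, 90, "A"),      # lower boundary of A
    (89, 89, 89, 89, "B"),      # upper boundary of B
    (80, 80, 80, 80, "B"),      # lower boundary of B
    (79, 79, 79, 79, "C"),      # upper boundary of C
    (70, 70, 70, 70, "C"),      # lower boundary of C
    (69, 69, 69, 69, "D"),      # upper boundary of D
    (60, 60, 60, 60, "D"),      # lower boundary of D
    (59, 59, 59, 59, "F"),      # upper boundary of F
    (0, 0, 0, 0, "F"),          # minimum valid input
    (100, 100, 100, 100, "A"),  # maximum valid input
]

# Equivalence partitions for a single score:
#   valid:   0 <= score <= 100
#   invalid: score < 0, score > 100, non-numeric input
invalid_cases = [
    (-5, 80, 80, 80),     # negative score
    (105, 80, 80, 80),    # score above 100
    ("abc", 80, 80, 80),  # non-numeric score
]
```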

Test Automation: Automating test cases involves creating scripts or using testing frameworks that simulate user input and verify output consistency. For this software, automation can be achieved with scripting languages such as Python or with C++ testing frameworks like Google Test. The automation suite should include test scripts that feed in various score combinations and check whether the final letter grades match expectations. Automated testing allows repeated execution without manual intervention, increasing testing efficiency and reliability, especially when verifying many input scenarios or re-running regression tests after modifications.
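A minimal automation sketch using Python's standard unittest framework is shown below. The module name grade_calculator, the function compute_final_grade, and the assumption that invalid scores raise ValueError are placeholders for the delivered software's actual interface.

```python
import unittest

# Assumed interface of the software under test:
# compute_final_grade(quiz, midterm, assignments, final_exam) -> letter grade.
from grade_calculator import compute_final_grade


class TestGradeCalculator(unittest.TestCase):
    def test_typical_b_student(self):
        # (85 + 88 + 80 + 90) / 4 = 85.75 -> 'B'
        self.assertEqual(compute_final_grade(85, 88, 80, 90), "B")

    def test_lower_boundary_of_a(self):
        # An average of exactly 90 must map to 'A', not 'B'.
        self.assertEqual(compute_final_grade(90, 90, 90, 90), "A")

    def test_upper_boundary_of_f(self):
        # An average of exactly 59 must map to 'F'.
        self.assertEqual(compute_final_grade(59, 59, 59, 59), "F")

    def test_rejects_out_of_range_score(self):
        # Assumes invalid input is signalled with an exception; adjust to
        # the software's actual error-handling contract once it is known.
        with self.assertRaises(ValueError):
            compute_final_grade(-5, 80, 80, 80)


if __name__ == "__main__":
    unittest.main()
```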

Test Execution: Test execution entails running the automated or manual test cases against the developed software. During execution, each set of input scores is fed into the program, and the output (final letter grade) is recorded. Any discrepancies between the expected and actual output are noted as failures. The execution should be systematic, covering boundary conditions, typical cases, and invalid inputs (e.g., negative scores or scores over 100) to evaluate how well the software handles different scenarios. Maintaining logs of test results is essential for tracking issues and analyzing the software’s behavior under various conditions.
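The sketch below shows one way to run a mixed set of typical, boundary, and invalid cases and keep a log of the outcomes. It reuses the assumed compute_final_grade interface from the automation sketch; writing the log as a CSV file is an illustrative choice, not a prescribed format.

```python
import csv

from grade_calculator import compute_final_grade  # assumed interface

# (quiz, midterm, assignments, final_exam, expected_letter); None marks
# cases where the expected behaviour is rejection rather than a grade.
test_cases = [
    (85, 88, 80, 90, "B"),
    (70, 75, 72, 78, "C"),
    (59, 58, 55, 60, "F"),
    (92, 95, 94, 96, "A"),
    (90, 90, 90, 90, "A"),   # boundary case
    (-5, 80, 80, 80, None),  # invalid input
]

with open("test_log.csv", "w", newline="") as log_file:
    writer = csv.writer(log_file)
    writer.writerow(["inputs", "expected", "actual", "verdict"])
    for quiz, midterm, assignments, final_exam, expected in test_cases:
        try:
            actual = compute_final_grade(quiz, midterm, assignments, final_exam)
        except ValueError as exc:
            actual = f"rejected ({exc})"
        verdict = "PASS" if expected is not None and actual == expected else "REVIEW"
        writer.writerow([(quiz, midterm, assignments, final_exam), expected, actual, verdict])
```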

Test Evaluation: Evaluating test results involves comparing actual outputs with expected outcomes for each test case. A test is considered passed if the software correctly calculates the average and assigns the appropriate letter grade according to the specified criteria. Failures indicate issues that require debugging and correction. Once all test cases pass successfully, confidence increases that the software functions correctly. The software can be deemed ready for integration if it consistently produces correct outputs, handles invalid data gracefully, and meets performance expectations. Final assessment involves reviewing test documentation, defect reports, and verifying adherence to initial requirements.
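Continuing the same illustration, a short evaluation step might read the execution log written above and summarize how many cases passed and which need debugging; the column names match the assumed CSV layout from the execution sketch.

```python
import csv

# Summarize the results recorded by the execution sketch above.
passed, to_review = [], []
with open("test_log.csv", newline="") as log_file:
    for row in csv.DictReader(log_file):
        (passed if row["verdict"] == "PASS" else to_review).append(row)

print(f"{len(passed)} passed, {len(to_review)} need review")
for row in to_review:
    print(f"  inputs={row['inputs']} expected={row['expected']} actual={row['actual']}")
```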

Question 2: Model-Driven Test Design for Grade Calculation Software

Model-driven test design uses a mathematical model of the expected behavior to derive test cases systematically. For this grade calculator, the core model is the arithmetic mean of the four scores and the mapping of percentage ranges to letter grades.

Mathematical Model and Input Values

The model involves the following formula:

Final Percentage = (Quiz Average + Midterm Score + Assignments Average + Final Exam Score) / 4

Based on this, the output grade is mapped to letter grades as follows:

  • 90-100% : 'A'
  • 80-89% : 'B'
  • 70-79% : 'C'
  • 60-69% : 'D'
  • 0-59% : 'F'
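A compact reference implementation of this model can serve as a test oracle against which the delivered software is checked. The sketch below is one such oracle; the function name and the convention of raising ValueError for out-of-range scores are assumptions, not part of the specification.

```python
def expected_final_grade(quiz, midterm, assignments, final_exam):
    """Test oracle: the letter grade the model says the software should produce."""
    scores = (quiz, midterm, assignments, final_exam)
    if any(not 0 <= s <= 100 for s in scores):
        # Assumed convention: out-of-range scores are rejected.
        raise ValueError("each score must be between 0 and 100")
    percentage = sum(scores) / 4
    if percentage >= 90:
        return "A"
    if percentage >= 80:
        return "B"
    if percentage >= 70:
        return "C"
    if percentage >= 60:
        return "D"
    return "F"
```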

Suitable input values should include maximum and minimum bounds (0, 100), typical scores within each grade range, and invalid scores (e.g., negative values or scores over 100). This ensures validation of boundary conditions, perfect scores, and error handling.

Test Input Cases for Students

  1. Student 1: 85, 88, 80, 90 (Expected grade: B)
  2. Student 2: 70, 75, 72, 78 (Expected grade: C)
  3. Student 3: 59, 58, 55, 60 (Expected grade: F; the average is 58, as verified in the calculation after this list)
  4. Student 4: 92, 95, 94, 96 (Expected grade: A)
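The expected letters follow directly from the averaging model. The short, self-contained check below works through the arithmetic for each student:

```python
# Hand verification of the expected grades using the averaging model.
students = [
    ("Student 1", (85, 88, 80, 90), "B"),
    ("Student 2", (70, 75, 72, 78), "C"),
    ("Student 3", (59, 58, 55, 60), "F"),
    ("Student 4", (92, 95, 94, 96), "A"),
]

for name, scores, expected in students:
    average = sum(scores) / 4  # 85.75, 73.75, 58.00, 94.25
    print(f"{name}: ({' + '.join(map(str, scores))}) / 4 = {average:.2f} -> expected {expected}")
```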

Executing the Testing

Execution involves inputting the predefined test cases into the program, either manually or through automation scripts. The program’s output will be the final letter grade for each student, which will be compared to the expected results based on the model. Automation can streamline this process by running multiple test cases rapidly and collecting results systematically.
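Under the same interface assumption as before (compute_final_grade imported from the delivered software), a small driver for these four students might look like the following sketch; the expected letters are the ones derived from the model above.

```python
from grade_calculator import compute_final_grade  # assumed interface of the software under test

# Expected letters were derived from the averaging model worked out above.
cases = [
    ("Student 1", (85, 88, 80, 90), "B"),
    ("Student 2", (70, 75, 72, 78), "C"),
    ("Student 3", (59, 58, 55, 60), "F"),
    ("Student 4", (92, 95, 94, 96), "A"),
]

failures = 0
for name, scores, expected in cases:
    actual = compute_final_grade(*scores)
    if actual != expected:
        failures += 1
        print(f"FAIL {name}: expected {expected}, got {actual}")
print(f"{len(cases) - failures}/{len(cases)} model-derived cases passed")
```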

Evaluating Test Results

Test evaluation requires comparing actual outputs with expected grades. A test passes if the output matches expectations, confirming correct implementation of calculation logic and grade thresholds. Any discrepancies highlight logical errors in the code, grade boundary misassignments, or invalid input handling. The software is considered ready for deployment if all test cases pass reliably, including boundary and invalid cases, demonstrating correctness, robustness, and compliance with specifications.

Readiness for Integration

The decision about whether the software is ready for integration hinges on successful test execution, validation against all predefined cases, and adherence to integration requirements. The software should also demonstrate error handling capabilities for invalid inputs and have performance metrics satisfactory for the system environment. Complete documentation and traceability of test results are critical to support the approval process.
