Thursday, January 8, 2026

SOFTWARE ENGINEERING UNIT- 4 (TESTING OBJECTIVES)

 




Copyright © by Dr. Ajay Kumar Pathak

B.Sc. IT SEMESTER 5 NOTES BASED ON NEP

SUBJECT : MJ–11 (Th): SOFTWARE ENGINEERING



 

 ****  NOTES   *****

OBJECTIVES: THE OBJECTIVE OF THE COURSE IS TO ENABLE STUDENTS –

·         To understand the Software Engineering Practice and the Software Engineering Process Models

·         To understand Design Engineering, Web applications

·         To gain knowledge of software testing

·         To understand Software Project Management





UNIT- 4    :- TESTING    OBJECTIVES


-:        NOTES READ FROM HERE           :-

 

TESTING OBJECTIVES:-

Within the software development life cycle, software testing is a very critical phase that checks whether a software application's functionality, reliability, and performance meet the specified requirements. It is the execution of the application under controlled conditions to trace existing software bugs or defects, if any. The main aim of software testing is to find errors, gaps, or missing requirements when the software is compared against the actual requirements.

Software testing includes different types: functional testing evaluates specific functions or features of the application; integration testing evaluates how the various components of the software interact; and system testing assesses the application in an environment approaching production. Testing also helps ensure that new changes do not affect the existing functionality of the software in any condition. Software testing can be performed manually or with the assistance of automated tools. In manual testing, human testers act as end users, conduct tests, and report issues; in automated testing, predefined test cases are executed with the help of specialized tools.

The Main Objectives of Software Testing:-

(1)      Verification and Validation:- Testing verifies that the product is being built correctly, according to its specifications, and at the same time validates that it is fit for its intended use and meets the expectations of the stakeholders.

(2)      Identification of Defects:- Another basic aim of software testing is to detect as many defects, bugs, and errors in the software as possible, at the earliest stage possible. Defects found early are far less costly and time-consuming to fix, which has a clear positive impact on quality and stability.

(3)      Defects Prevention:- In very simple words, the purpose of software testing, whether it is mobile application testing or web application testing, is not only to find defects but also to prevent them. Systematic software testing and result analysis let the development team identify the causes of defects and take corrective actions so that they do not recur in the future, ensuring a better quality standard in software development practice.

(4)      Ensuring Quality Attributes in the Product:- Quality attributes checked by software testing include functionality, performance, usability, security, compatibility, and scalability, among others. The testing process ensures that the software product meets the quality standards defined by the development team or by the industry, providing a smooth user experience.

(5)      Risk Management:- Software testing, whether automated or manual, largely helps in managing the risks linked to software failures. Testing identifies problems that could badly affect users and, consequently, the operation of the business, and it allows time for fixing them; hence, it reduces the risks associated with deployment.

(6)      Enable Confident Releases:-  Give stakeholders the assurance that the product is ready for launch with verified quality standards.

(7)      Validate Performance Standards:-  Check that the system meets speed, scalability, and stability benchmarks.


TYPES OF SOFTWARE TESTING:-

 

UNIT TESTING or COMPONENT or MODULE TESTING:-

Unit testing is the process where you test the smallest functional unit of code. Software testing helps ensure code quality, and it's an integral part of software development. It's a software development best practice to write software as small, functional units and then write a unit test for each code unit. You can first write unit tests as code, then run that test code automatically every time you make changes to the software. This way, if a test fails, you can quickly isolate the area of the code that has the bug or error. Unit testing enforces modular thinking and improves test coverage and quality. Automated unit testing ensures you or your developers have more time to concentrate on coding.

A unit test is a block of code that verifies the accuracy of a smaller, isolated block of application code, typically a function or method. The unit test is designed to check that the block of code runs as expected, according to the developer's logic behind it. The unit test can interact with the block of code only through inputs and asserted (true or false) outputs. A single block of code may also have a set of unit tests, known as test cases. A complete set of test cases covers the full expected behavior of the code block, but it is not always necessary to define the full set of test cases.
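For illustration, here is a minimal sketch of a unit test, assuming a small hypothetical function `apply_discount` (not from any particular codebase), written with Python's built-in `unittest` module:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    """A set of test cases covering the expected behavior of the block."""

    def test_typical_discount(self):
        # 10% off 200.0 should be 180.0
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount(self):
        # A 0% discount leaves the price unchanged
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        # Percentages outside 0-100 are invalid inputs
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the test cases and report the result
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestApplyDiscount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

If any assertion fails, the runner reports exactly which test case broke, which quickly isolates the faulty block of code.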



UNIT TESTING STRATEGIES:- To create unit tests, you can follow some basic techniques to ensure coverage of all test cases.

(a)      Logic checks:- Does the system perform the right calculations and follow the right path through the code given a correct, expected input? Are all paths through the code covered by the given inputs?

(b)      Boundary checks:- How does the system respond to typical inputs, edge cases, and invalid inputs?

(c)      Error handling:- When there are errors in inputs, how does the system respond? Is the user prompted for another input? Does the software crash?

(d)      Object-oriented checks :- If the state of any persistent objects is changed by running the code, is the object updated correctly?
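As a sketch, the first three strategies above can be exercised against a small hypothetical helper, `safe_divide`:

```python
def safe_divide(a: float, b: float) -> float:
    """Divide a by b, raising a clear error for a zero divisor."""
    if b == 0:
        raise ZeroDivisionError("divisor must be non-zero")
    return a / b

# Logic check: correct calculation for a correct, expected input
assert safe_divide(10, 2) == 5.0

# Boundary check: an edge-case input (fractional divisor) still works
assert safe_divide(1, 0.5) == 2.0

# Error handling: an invalid input raises a clear error instead of crashing
handled = False
try:
    safe_divide(1, 0)
except ZeroDivisionError:
    handled = True
assert handled
```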

Advantages of unit testing

(a)      Improves Code Quality:-  Identifies potential defects early.

(b)      Enhances Maintainability:- Supports refactoring and continuous improvements.

(c )     Speeds Up Development:- Automated tests reduce manual testing efforts.

(d)      Supports Compliance:- Helps meet industry standards and regulations.

Disadvantages of unit testing

(a)      Not a Substitute for Other Testing:- Cannot detect integration or system-level defects.

(b)      Requires Time and Effort:- Writing effective tests can be time-consuming.

(c )     Limited Cross-Platform Coverage:- Unit tests alone do not validate the application across browsers, devices, and environments.

(d)      Complex for Large Applications: Maintaining a vast test suite can be challenging.

 

INTEGRATION TESTING:-

Integration testing is known as the second level of the software testing process. It is a form of software testing in which multiple software components, modules, or services are tested together to verify they work as expected when combined. This type of testing examines how various software application modules interact and operate cohesively. The program is divided into smaller components, known as modules or units, each responsible for a specific task. The real challenge comes when we combine these components to develop the entire software system.

At this stage, testers carefully examine the connections between modules to discover any problems arising when individual units interact. When integration testing is complete, end-to-end testing is conducted to assess the application's functionality from start to finish, considering the whole user journey from the initial input to the final output.

Types of Software Integration Testing:-

It can be divided into two subtypes:

(1)      Incremental testing     (2)        Non-Incremental testing

(1)      Incremental testing:- Incremental testing involves testing software modules in small increments. Testing starts with smaller pieces and works its way up to the entire system, with each test cycle integrating additional modules. Compared with testing the complete system all at once, this offers advantages including early feedback, more straightforward troubleshooting, and decreased complexity.

Incremental testing provides two main types:

(a)      Top-Down integration:- With the top-down approach, testers start with the highest-level modules and then gradually move to lower-level modules, hence the term "top-down". In top-down integration testing, the high-level units are tested first, followed by the lower-level units; once this process is complete, the integration is considered finished and the software is checked to confirm it works as expected. Because the lower-level modules are not yet integrated, dummy programs called stubs are used to simulate their behavior. A stub is a temporary replacement for a module that produces the same kind of output as the actual component. With the main module serving as the test driver, stubs are substituted for all components directly under the main control; the stubs are then replaced with the actual components one by one.

Example:- Let’s consider an E-commerce application with the following modules:-

User Interface (UI), Order Processing System, Payment Gateway, Inventory Management
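A hedged sketch of what this looks like in code, assuming the module names above and a stub standing in for the not-yet-integrated Payment Gateway:

```python
class PaymentGatewayStub:
    """Temporary stand-in for the real Payment Gateway module.

    It does no real processing; it simply returns a canned result
    so the higher-level Order Processing System can be tested."""
    def charge(self, order_id: str, amount: float) -> dict:
        return {"order_id": order_id, "status": "approved", "amount": amount}

class OrderProcessingSystem:
    """Higher-level module under test; depends on a payment gateway."""
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, order_id: str, amount: float) -> str:
        result = self.gateway.charge(order_id, amount)
        return "confirmed" if result["status"] == "approved" else "failed"

# Top-down test: the real gateway is not ready, so the stub is injected.
ops = OrderProcessingSystem(PaymentGatewayStub())
assert ops.place_order("A-101", 499.0) == "confirmed"
```

Once the real Payment Gateway module is ready, it replaces the stub and the same test is run again.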



(b) Bottom-Up integration:- Bottom-up integration testing is an approach where testers begin by testing the lowest-level modules first and then progressively move to the higher-level modules, hence the name "bottom-up." Individual components, or modules, are tested starting from the lowest levels of the hierarchy; these tested modules are then integrated step by step, building upward until the entire system is complete. The main principle behind this method is to ensure that the units are thoroughly validated before being integrated into more complex layers.

The idea of bottom-up integration testing is to start small and grow big. Low-level modules are tested first, often representing an application’s core functionalities or utility functions. After these components are verified, they are integrated to form higher-level subsystems.

Components of Bottom-up Integration Testing:-

(1)      Drivers (dummy programs):-  They are used because higher-level modules are not developed in the early stages of the software development life cycle.

(2)      Unit testing:- Unit testing of low-level modules is the foundation of Bottom-Up Integration Testing. It includes testing individual units before the integration phase.

(3)      Integration testing:- Modules are gradually grouped and tested in bottom-up testing.

(4)      Error localization:- Testing low-level modules first makes identifying and fixing issues at the root level more efficient. This makes debugging easier as modules are integrated sequentially.
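The role of a driver can be sketched as follows; a hypothetical `calculate_tax` function stands in for a real low-level module whose caller does not yet exist:

```python
def calculate_tax(amount: float, rate: float = 0.18) -> float:
    """Low-level module under test: compute tax on an amount."""
    return round(amount * rate, 2)

def tax_driver():
    """Test driver: stands in for the not-yet-built billing module.

    It calls the unit under test with sample data and checks the results."""
    cases = [(100.0, 18.0), (250.0, 45.0), (0.0, 0.0)]
    for amount, expected in cases:
        actual = calculate_tax(amount)
        assert actual == expected, f"{amount}: expected {expected}, got {actual}"
    return "all driver checks passed"

print(tax_driver())
```

When the real billing module is developed, it replaces the driver and the integration moves one level up.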

 

(2)      Non-Incremental testing:- Non-incremental testing involves testing all the software modules together rather than in increments. In this type, testing takes place after all the modules are developed and ready for integration, and the whole software is tested at one time. Non-incremental testing is often known as the big bang integration approach.

(a) Big Bang Integration Testing:- It is performed after the individual components of the software are ready, and all of them are merged at one time. It is done to test whether all the units of the software work in conjunction (combination) with each other. Big bang testing is taken up for projects where there is a strict deadline for delivery, or in circumstances where individual units cannot be combined incrementally. It is an optional testing approach and generally not suitable for complex projects.

Let us take an example of software having the components X1, X2, X3, X4, and X5. Once all of them are ready, they can be blended together logically all at once. The complete software is then tested to check whether all the components X1, X2, X3, X4, and X5 are working correctly as a single unit. This is called the big bang testing.

 

Advantages of Integration Testing

(1) Integration testing ensures that every integrated module functions correctly

(2) Integration testing uncovers interface errors

(3) Testers can initiate integration testing once a module is completed, without waiting for every other module to be done and ready for testing

(4) Testers can detect bugs, defects, and security issues

(5) Integration testing provides testers with a comprehensive analysis of the whole system, dramatically reducing the possibility of severe connectivity issues

Challenges of Integration Testing

(1) If testing involves dealing with two different systems created by two different vendors, there will be questions about how these components will affect and interact with each other

(2) Integrating new and legacy systems demands many testing efforts and potential changes

(3) Integration testing becomes complex due to the variety of components involved (e.g., platforms, environments, databases)

(4) Integration testing requires testing not only the integration links but the environment itself, adding another layer of complexity to the process


ACCEPTANCE TESTING:-

Acceptance testing is the final check in software development to ensure the product meets business goals and user expectations before release.

Acceptance testing is like the final boss that software must face before being handed over to its intended users. Acceptance testing validates that software meets business requirements and is ready for production deployment. It's not just about checking for bugs or glitches; it's about ensuring the software delivers on its promises. At its core, acceptance testing is a quality assurance (QA) process designed to verify that an application meets both business requirements and end-user needs. It's the stage where functionality, usability, and performance are put under the microscope to ensure the software is ready for the real world. Unlike technical testing performed by QA engineers, acceptance testing is owned by business users, operations teams, and customers who evaluate the software from real-world usage perspectives.

TYPES OF ACCEPTANCE TESTING ARE:-

(1)      User Acceptance Tests (UAT) :- It’s carried out by end-users or client representatives to verify if the software meets their business requirements and expectations. UAT focuses on ensuring the system behaves as anticipated in real-world scenarios, making it ideal for validating the product’s readiness for deployment. If your primary concern is ensuring that the software fulfills the needs of the actual users, UAT is the way to go.

(2)      Business acceptance testing (BAT):- Business acceptance testing (BAT) goes beyond user expectations to check whether the software aligns with the broader business goals. This type of testing helps determine if the software can support the organization’s financial objectives and operational requirements. It's especially important in cases where market conditions or business processes frequently change. If your goal is to ensure that the software serves the company's strategic interests, BAT is crucial. To perform effective BAT, the testing team must have a deep understanding of both the domain and business context.

(3)      Regulatory acceptance testing (RAT):- Regulatory acceptance testing (RAT) is essential for software that must comply with legal or industry-specific regulations. This includes ensuring that the product adheres to the standards of governing authorities in different regions or industries, such as finance, healthcare, or government. RAT helps prevent costly mistakes like releasing software that violates laws or regulations. If your product will be deployed in a regulated industry or across various regions with differing rules, RAT is non-negotiable.

(4)      Contract acceptance testing (CAT):- Contract acceptance testing (CAT) is closely tied to the terms of a contract. It ensures that the software meets the specifications set out in the agreement, including functionality and performance benchmarks. This type of testing is particularly important when the product's acceptance is tied to payment or further action according to a service-level agreement (SLA). If the software delivery is governed by contractual obligations, CAT should be conducted to confirm that all requirements are met before signing off.



(5)      Operational acceptance testing (OAT):- Operational acceptance testing (OAT) focuses on the operational aspects of the software, such as system maintenance, recovery processes, and overall stability. It ensures that the software is operationally ready, meaning it can handle live conditions like backups, disaster recovery, and security monitoring. OAT is crucial if you want to confirm that the system will function smoothly in a production environment. OAT provides the confidence that your infrastructure can handle operational demands.

(6)      Alpha testing:- Alpha testing is generally carried out by an internal team before the software reaches a broader audience. It’s designed to catch critical bugs and issues early on, allowing the development team to address them before the product moves to Beta testing. If you're in the early stages of development and want to catch major issues before getting feedback from real users, Alpha testing is a good fit.

(7)      Beta testing:- Beta testing takes place after Alpha testing and involves releasing the software to a limited group of external users who test the application in real-world environments. Their feedback helps identify any remaining issues and refine the product before a broader release. Beta testing is an essential phase if you want to assess how real users interact with the product, discover potential problems, and gather insights for final improvements.

 

Advantages of Acceptance Testing:-

(1)      It is easier for the user to describe their requirement.

(2)      It covers only the Black-Box testing process and hence the entire functionality of the product will be tested.

(3)      Automated test execution.

(4)      This testing helps the project team to know the further requirements from the users directly as it involves the users for testing.

(5)      It brings confidence and satisfaction to the clients as they are directly involved in the testing process.

Disadvantages of Acceptance Testing:-

(1)      The development team does not participate in this testing process.

(2)      Sometimes, users don't want to participate in the testing process.

(3)      It is a time-consuming process to collect all the feedback from the customers. Moreover, the feedback may vary from one user to another.

 

REGRESSION TESTING:-

Regression testing in software engineering is the process of re-running tests on existing software after code changes (like bug fixes, new features, or updates) to ensure these modifications haven't broken previously working functionality or introduced new defects. As modern applications grow more complex and interconnected, even a small update can create unexpected side effects across the system. Regression testing verifies that updates do not break existing features. A defect found during this process is known as a regression.

A regression is a specific type of bug or issue that occurs when new code changes, like software enhancements, patches, or configuration changes, introduce unintended side effects or break existing functionality that was working correctly before. This can happen when new code conflicts with existing code. Regression testing helps identify and fix these bugs and issues so that the reliability of the software and the quality of the product can be maintained.

Regression Testing Example:- Login functionality.

A user can log into an app using either their username and password or their Gmail account via Google integration. A new feature, LinkedIn integration, is added to enable users to log into the app using their LinkedIn account. While it is vital to verify that the LinkedIn login functions as expected, it is equally necessary to verify that the other login methods continue to function (form login and Google integration).
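The scenario can be sketched in code with a toy login dispatcher (all names and credentials illustrative): the existing tests are re-run unchanged after the new feature is added.

```python
def login(method: str, credentials: dict) -> bool:
    """Toy login dispatcher; 'linkedin' is the newly added method."""
    if method == "form":
        return (credentials.get("username") == "alice"
                and credentials.get("password") == "s3cret")
    if method == "google":
        return credentials.get("token") == "google-ok"
    if method == "linkedin":  # new feature
        return credentials.get("token") == "linkedin-ok"
    return False

# Existing regression suite: these tests existed BEFORE the new feature
# and are re-run to confirm the change broke nothing.
assert login("form", {"username": "alice", "password": "s3cret"})
assert login("google", {"token": "google-ok"})

# New test covering the newly added feature
assert login("linkedin", {"token": "linkedin-ok"})
```

If a change to the dispatcher ever made the form or Google checks fail, that failure would be a regression.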

Types of regression testing techniques:-

(1)      Non-functional regression testing:-     Testing non-functional aspects of the system like performance and usability after changes have been made.

(2)      Partial regression testing:-  Testing certain parts of the system that have been changed, along with any directly related components.

(3)      Unit regression testing:-  Retesting a specific unit of code (like a function or method) after modifications have been made to ensure it still works as expected.

(4)      Selective regression testing:- Selecting specific test cases from the test suite that are likely to be affected by the change, rather than running the entire test suite.

(5)      Complete regression testing:- Retesting the entire system. This is typically more time-consuming and is often used when a significant change has been made to the system.

 

TESTING FOR FUNCTIONALITY AND TESTING FOR PERFORMANCE:-

TESTING FOR FUNCTIONALITY (also called Functional Testing):- Functional testing is a type of testing that seeks to establish whether each application feature works as per the software requirements. Each function is compared to the corresponding requirement to determine whether its output is consistent with the end user's expectations. The testing is done by providing sample inputs, capturing the resulting outputs, and verifying that actual outputs match expected outputs.

In functional testing, testers evaluate an application's basic functionalities against a predetermined set of specifications. Using black-box testing techniques (meaning the tester does not need to know the internal code structure), functional tests measure whether a given input returns the desired output, regardless of any other details. Results are binary: tests pass or fail.

It focuses on: inputs given by the user, processing of data, output produced by the system, and the behavior of the system under different conditions.

Functional testing is important because it ensures software meets user requirements, finds missing, incorrect, or incomplete functions, improves software reliability and quality, and reduces errors before deployment.



Example of Functional Testing:- Login functionality. The user should be able to log in using a valid username and password.

Test Cases:-

(a)      Enter correct username and correct password, then login is successful

(b)      Enter correct username and wrong password, then an error message is shown

(c )     Leave username blank, then a warning message is shown

(d)      Leave password blank, then a warning message is shown
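The four test cases can be expressed as executable checks against a toy `validate_login` function (hypothetical; a real implementation would differ):

```python
def validate_login(username: str, password: str) -> str:
    """Toy login validator returning a user-facing outcome string."""
    if not username:
        return "warning: username required"
    if not password:
        return "warning: password required"
    if username == "student" and password == "pass123":
        return "login successful"
    return "error: invalid credentials"

# (a) correct username and correct password
assert validate_login("student", "pass123") == "login successful"
# (b) correct username, wrong password
assert validate_login("student", "oops").startswith("error")
# (c) blank username
assert validate_login("", "pass123").startswith("warning")
# (d) blank password
assert validate_login("student", "").startswith("warning")
```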

 

Types of Functional Testing:- (most of these are explained in detail above)

(1) Unit Testing    (2) Integration Testing

(3) System Testing:- Tests the entire system as a whole and ensures all modules work together correctly. Example:- Complete online exam system testing (login, exam, result).

(4) Smoke Testing:- Testers perform smoke testing to verify that the most critical parts of the application work as proposed. It's a first pass through the testing process and isn't meant to be in-depth. Smoke tests ensure that the application is operational on a basic level; if it's not, there's no need to progress to more detailed testing, and the application can go right back to the development team for review. Example:- Checking whether the application launches and the main buttons work.

(5) Sanity Testing:- Sanity testing complements smoke testing, verifying basic functionality to potentially bypass detailed testing on broken software. Unlike smoke tests, sanity tests occur later in the process to confirm whether a new code change achieves its proposed effect. This 'sanity check' ensures the new code roughly performs as expected.

(6) Regression Testing    (7) User Acceptance Testing (UAT)    (8) Alpha Testing    (9) Beta Testing

 

TESTING FOR PERFORMANCE:- Performance testing is a testing technique that determines the speed (how quickly the application responds), scalability (the maximum user load the application can handle), and stability (the condition of the application under varying loads) of an application under a given workload. Performance testing is a form of software testing that focuses on how a system performs under a particular load. This type of test is not about finding software bugs or defects; the different performance testing types measure the system against benchmarks and standards. Performance testing gives developers the diagnostic information they need to eliminate bottlenecks.

 

Types of Performance Testing:-

(1)      Load Testing:- Load testing measures system performance as the workload increases. That workload could mean concurrent users or transactions. The system is monitored to measure response time and system staying power as workloads increase and test whether workload parameters fall within normal working conditions.

(2)      Stress Testing:- Unlike load testing, stress testing, also known as fatigue (weakness) testing, is meant to measure system performance outside the parameters of normal working conditions. The software is given more users or transactions than it can handle. The goal of stress testing is to measure software stability: at what point does the software fail, and how does it recover from failure?

(3)      Volume Testing:- A significant amount of data is populated in a database during Volume Testing, and the overall behaviour of the software system is observed. The goal is to test the software application’s performance on various database volumes.

(4)      Endurance Testing:- It is performed to ensure that the software can bear the projected load for an extended length of time.

(5)      Spike Testing:- Spike testing examines the software’s response to sudden huge increases in user load.

(6)      Scalability testing:- Scalability testing determines how your system adapts to an increasing number of users or transactions over time, making it critical for applications with growth potential.
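As a minimal sketch, a load test can be simulated locally with concurrent workers; here a local function stands in for the system under test (in practice a dedicated load-testing tool would drive a real server):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(request_id: int) -> float:
    """Stand-in for the system under test: simulate work, return latency."""
    start = time.perf_counter()
    time.sleep(0.01)  # pretend processing time
    return time.perf_counter() - start

def run_load(concurrent_users: int, requests_per_user: int) -> dict:
    """Fire requests from simulated concurrent users and summarize latency."""
    total = concurrent_users * requests_per_user
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        latencies = list(pool.map(handle_request, range(total)))
    return {
        "requests": total,
        "avg_latency_s": sum(latencies) / len(latencies),
        "max_latency_s": max(latencies),
    }

report = run_load(concurrent_users=5, requests_per_user=4)
print(report)
```

Increasing `concurrent_users` while watching the latency figures is the essence of load testing; pushing it far beyond normal conditions turns the same setup into a stress test.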

 

TOP- DOWN AND BOTTOM-UP TESTING:-

TOP-DOWN TESTING:- (NOTE:- Top-Down Testing and Top-Down Integration Testing are related, but they are not exactly the same thing. Top-Down Integration Testing is a specific type of integration testing that follows the top-down approach. Key idea:- it is top-down testing applied specifically during integration testing; the focus is on the interaction between modules, and stubs (a stub is a temporary placeholder or a simplified substitute for a software module or component that is not yet developed, unavailable, or difficult to incorporate into the test environment) are used to replace lower modules.)

TOP-DOWN TESTING :- Top-Down Testing is a software testing approach in which testing starts from the top-level (main or control) modules of the system and then proceeds step by step to lower-level modules. In this method, higher-level modules are tested first, even if lower-level modules are not fully developed.

To handle missing modules, STUBS are used:- A stub is a temporary program that replaces a lower-level module that is not yet developed. A stub provides dummy data and simple output, with no real processing.


How Top-Down Testing Works (Step-by-Step)

(1) Start testing from the main module

(2) Replace missing lower modules with stubs

(3) Test interaction between modules

(4) Gradually replace stubs with real modules

(5) Continue until the entire system is tested

Example of Online Banking System:-




Testing Process:

1.      Test Main Banking System

2.      Use stubs for:

·         Deposit

·         Withdraw

·         Fund Transfer

3.      Replace stubs one by one with real modules

4.      Test full integration
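Steps 2 and 3 (use a stub, then replace it with the real module) can be sketched for the Withdraw module; all names here are illustrative:

```python
def withdraw_stub(balance: float, amount: float) -> float:
    """Stub: no real processing, simply returns the balance unchanged."""
    return balance

def withdraw_real(balance: float, amount: float) -> float:
    """Real module: validates and applies the withdrawal."""
    if amount <= 0 or amount > balance:
        raise ValueError("invalid withdrawal amount")
    return balance - amount

def banking_main(withdraw, balance: float, amount: float) -> float:
    """Top-level module under test; the withdraw dependency is injected."""
    return withdraw(balance, amount)

# Step 2: test the main module with the stub in place
assert banking_main(withdraw_stub, 1000.0, 200.0) == 1000.0

# Step 3: replace the stub with the real module and re-test
assert banking_main(withdraw_real, 1000.0, 200.0) == 800.0
```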




TYPES OF TOP-DOWN TESTING:-

(1)      Depth-First Top-Down Testing ,  Meaning:- Testing follows one complete path from top to bottom, One branch is tested fully before moving to another.

Testing Order:- (a) Main System, Login, Authentication, Database

(b) Then move to Dashboard

(2)      Breadth-First Top-Down Testing ,   Meaning:- All modules at the same level are tested first, Testing moves level by level.

Testing Order:- (a) Main System, Login, Dashboard, Settings (same level)

(b) Then lower modules

 

Advantages of Top-Down Testing:-

(a) Early testing of main system logic

(b) Early detection of design flaws

(c )No need for drivers

(d) Interface testing is easy

(e) Useful for large systems

 

Disadvantages of Top-Down Testing:-

(a) Requires many stubs

(b) Lower-level modules tested late

(c ) Stubs may not represent real behavior

(d) Detailed testing is delayed

 

BOTTOM-UP TESTING:-

(NOTE:- Bottom-Up Testing and Bottom-Up Integration Testing are related, but they are not exactly the same. Bottom-Up Integration Testing is a specific type of integration testing that follows the bottom-up approach: it focuses on the integration of modules and tests the interaction between them.

Uses drivers (a test driver is a piece of code that simulates the behavior of higher-level modules that are not yet developed; it calls the component or module under test and passes the necessary data to it). It is performed during the integration testing phase.)

BOTTOM-UP TESTING:- Bottom-Up Testing is a software testing approach in which testing starts from the lowest-level (leaf) modules and then moves upward to the higher-level modules.

In this method, lower-level modules are tested first. Since higher-level modules are not yet available, DRIVERS are used to simulate their behavior.

Driver:- A Driver is a temporary program that replaces a higher-level module which calls the lower-level module.

Driver performs:- Calls the lower-level module, Sends test data, Receives output, Displays results.

How Bottom-Up Testing Works (Step-by-Step):-

(1) Identify lowest-level modules

(2) Write drivers to call these modules

(3) Test each module individually

(4) Integrate tested modules upward

(5) Replace drivers with real higher-level modules

(6) Continue until the complete system is tested

Example: Online Shopping System:-



Types of Bottom-Up Testing:-

(1) Cluster (or Incremental) Bottom-Up Testing:- Meaning:- Related low-level modules are grouped into clusters, Each cluster is tested using a driver, Gradually integrated into higher modules.

Example:- Database Operations, then Insert Data, then Update Data, then Delete Data

(2)      Traditional (Pure) Bottom-Up Testing:- Meaning:- Each low-level module is tested individually, Drivers are written for each module, Integration is done step by step

Advantages of Bottom-Up Testing:-

(a)Low-level modules tested thoroughly

(b) No need for stubs

(c )Errors found early in critical functions

(d) Suitable for utility-based systems

Disadvantages of Bottom-Up Testing

(a) High-level logic tested late

(b) Many drivers required

(c ) System behavior visible late

(d) Early prototype not available


SOFTWARE TESTING STRATEGIES:-

A test strategy is a high-level plan or roadmap that outlines the approach, scope, resources, and schedule for testing a software application. It defines the testing objectives, criteria for success, testing scope, and methodologies to be used. The primary aim of a test strategy is to ensure that the software meets its functional requirements, performs as expected, and is robust enough to handle real-world scenarios. In other words, a test strategy is like a guide to make sure the software is good and strong: it helps testers know what to check, how to check, and when to check to make sure the software works correctly.

A well-structured strategy ensures that testing aligns with business goals, identifies defects early in the development cycle, and helps deliver a product that performs as expected. Without a strategy, testing becomes reactive, inconsistent, and expensive.

 

TYPES OF SOFTWARE TESTING STRATEGIES:-

(1)      Static Testing Strategy:- A static test evaluates the quality of a system without actually running the system. While that may appear impossible, it can be accomplished in a few ways.

(a) The static test looks at portions or system elements to detect problems as early as possible. Example 1:- Developers review their code after writing and before pushing it. This is called desk-checking, a form of static testing.

Example 2:-   static test would be a review meeting to evaluate requirements, design, and code.

(b)      Static tests offer a decided advantage: If a problem is detected in the requirements before it develops into a bug in the system, it will save time and money.

(c)      If a preliminary code review leads to bug detection, it saves the trouble of building, installing, and running a system to find and fix the bug.

(d)      Static tests must be performed at the right time. For example, requirements should be reviewed before developers begin coding; a review performed only after the entire software has been coded comes too late to prevent defects, although it can still help testers design test cases.

(2)      Structural Testing Strategy:- While static tests are attractive and helpful, they are not sufficient on their own. The software needs to be operated on real devices, and the system has to be run in its entirety to find all bugs. Structural tests are one of the techniques under unit testing.

(a)      It is also called white-box testing because it is carried out by testers with thorough knowledge of the internal structure of the code and of the devices and systems it runs on.

(b)      It is often run on individual components and interfaces to identify localized errors in data flows.

(3)      Behavioral Testing Strategy:- Behavioral Testing focuses on how a system acts rather than the mechanism behind its functions. It focuses on workflows, configurations, performance, and all elements of the user journey. The point of these tests, often called “black box” tests, is to test a website or app from an end-user’s perspective.

(a) It must cover multiple user profiles as well as usage scenarios.

(b) Focus on fully integrated systems rather than individual components. This is because it is possible to measure system behavior from a user’s eyes only after it has been assembled and integrated to a significant level.

(c) Behavioral tests are run manually, though some can be automated.

(d) Manual testing requires careful planning, design, and meticulous checking of results to detect what goes wrong.

(4)      Front-End Testing Strategy:- Front-end refers to the user-facing part of an app, which is the primary interface for content consumption and business transactions. Front End Testing is a key part of any SDLC as it validates GUI elements are functioning as expected.



TEST DRIVERS:-

A test driver is a piece of code that simulates the behavior of higher-level modules that are not yet developed. It calls the component or module under test and passes the necessary data to it. Test drivers are particularly useful when you want to test a lower-level module in isolation before the upper-level modules are fully implemented.

Example 1: imagine you are developing a function that processes user input. The higher-level modules responsible for gathering and sending this input may not be ready yet. A test driver can simulate these actions, allowing you to test your function independently. Drivers are used when high-level modules are missing; when lower-level modules are missing, stubs are used instead.
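The steps of Example 1 can be sketched in code (a minimal sketch; the `normalize_username` function and its inputs are hypothetical, standing in for "a function that processes user input"):

```python
# Lower-level function under test: processes raw user input.
def normalize_username(raw):
    """Trim surrounding whitespace and lower-case a username string."""
    return raw.strip().lower()

# Test driver: simulates the not-yet-ready UI layer that would normally
# gather the input and call normalize_username().
def input_driver():
    simulated_inputs = ["  Alice ", "BOB", "carol"]
    return [normalize_username(raw) for raw in simulated_inputs]
```

The driver supplies test data and collects the outputs, so the function can be verified before any real input-gathering module exists.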

 

Example 2: suppose you are told to test a website whose primary modules, each dependent on the others, are as follows:

Module-A : Login page website,

Module-B : Home page of the website

Module-C : Profile setting

Module-D : Sign-out page

 

Types of Drivers

(1)      Top-Down Drivers:-

(a) Used in the top-down integration testing approach.

(b) Simulate calling modules to test higher-level modules.

(c) Allow testing of higher-level components independently.

(2) Bottom-Up Drivers:-

(a) Used in the bottom-up integration testing approach.

(b) Simulate calling modules to test lower-level modules.

(c) Enable testing of lower-level components independently.

 

Advantages of Using Drivers:-

(1) Parallel Development and Testing:- Drivers enable concurrent development and testing of different parts of the software, enhancing efficiency.

(2) Isolated Testing:- Drivers facilitate the testing of individual modules in isolation, even when dependent modules are not fully developed.

(3) Early Issue Detection:- Drivers help uncover interface issues and potential problems before full integration.

Limitations of Using Drivers:-

(1) Accuracy:- Drivers might not perfectly replicate the actual behavior of the calling modules, leading to potential discrepancies in test results.

(2) Maintenance:- Managing drivers alongside the actual code can introduce additional maintenance overhead.

 

WHAT IS A STUB?:-

In software testing, a stub is a small, simple program or module that temporarily replaces a part of the system that is not yet available or is still under development. In other words, a stub is dummy code that simulates the behavior of a lower-level component required for testing higher-level components. Stubs are used primarily in integration testing, when a higher-level module under test depends on lower-level modules or components that are not ready for testing.

For example, in a system with components A, B, C, and D, suppose B (which receives the output of A) and C (which provides the input for D) are not developed yet. A stub simulates component B so that A can be tested, while a driver simulates component C so that D can be tested.
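A minimal sketch of the stub idea (the tax-lookup scenario, function names, and the fixed 10% rate are all hypothetical): the real lower-level component B is not built yet, so a stub returns a canned value in its place while the higher-level component A is tested.

```python
# Stub for component B: the real tax-rate lookup is not developed yet,
# so the stub returns a fixed, predictable value.
def tax_rate_stub(region):
    return 0.10  # canned response standing in for the real lookup

# Component A under test: computes a gross price by calling B.
def price_with_tax(net, region, tax_lookup=tax_rate_stub):
    return round(net * (1 + tax_lookup(region)), 2)
```

Because the stub's behavior is fixed and known, any failure observed here must lie in `price_with_tax` itself, which is exactly the isolation stubs are meant to provide.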

Types of Stubs:-

(1)      Top-Down Stubs:-

(a) Used in the top-down integration testing approach.

(b) Simulate lower-level modules or components.

(c) Enable testing of higher-level modules independently.

(2)      Bottom-Up Stubs:-

(a) Used in the bottom-up integration testing approach.

(b) Simulate higher-level modules or components.

(c) Enable testing of lower-level modules independently.

Advantages of Using Stubs:-

(1) Parallel Development and Testing:- Stubs enable different parts of the software to be developed and tested simultaneously, improving efficiency.

(2) Isolation of Testing:- Stubs allow testing of individual modules in isolation, even if dependent modules are incomplete.

(3) Early Detection of Issues:- Stubs help identify interface problems and potential issues before all components are fully integrated.

Limitations of Using Stubs:-

(1) Accuracy:- Stubs might not fully replicate the actual behavior of the missing modules, potentially leading to inaccurate test results.

(2) Risk of False Positives:- Stubs can generate false positives during testing. Since they don't always replicate the exact behavior of the real components, they may report errors that don't exist in the final product, wasting developers' time in investigating non-issues.

(3) Limited Functionality:- Stubs are simplified versions of real components or modules and often lack the full functionality of the actual components. This can lead to incomplete testing and may miss critical issues that are only visible in the real implementation.

(4) Maintenance Overhead:- Stubs need to be maintained alongside the actual code. As the real code evolves, stubs may become outdated, leading to synchronization challenges. This can result in additional overhead and complexity in the development process.



DIFFERENCE BETWEEN STUBS AND DRIVERS:-


(1) Stubs are known as "called programs" and are used in top-down integration testing, whereas drivers are "calling programs" and are used in bottom-up integration testing.

(2) Stubs stand in for modules of the software that are still under development, whereas drivers are used to invoke the component that needs to be tested.

(3) Stubs are used when low-level modules are unavailable, whereas drivers are mainly used in place of high-level modules (and, in some situations, for low-level modules as well).

(4) Stubs are used to test the features and functionality of the modules above them, whereas drivers are used when the main module of the software has not yet been developed.

(5) Stubs come into play when the upper-level modules are being tested and the lower-level modules are still under development, whereas drivers come into play when the lower-level modules are being tested and the upper-level modules are still under development.

(6) In short: stubs replace missing or partially developed lower-level modules so that the main module can be tested, whereas drivers replace missing or partially developed higher-level modules so that the lower (sub-)modules can be tested.


STRUCTURAL TESTING (WHITE BOX TESTING):-

Structural testing is a synonym for white-box testing, which is also known as clear box testing, glass box testing, transparent box testing, or open box testing.

White box testing is a way of testing software by looking inside the code to ensure it works correctly. Testers check how the code is written, how it runs, and whether each part does what it’s supposed to. This helps find bugs early and makes the software more reliable and secure.

In white-box testing, an internal perspective of the system is used to design test cases. The tester chooses inputs to exercise paths through the code and determines the expected outputs. This is equivalent to testing nodes in a circuit, e.g. in-circuit testing (ICT). White-box testing can be applied at the unit, integration, and system levels of the software testing process. Although traditional testers tended to think of white-box testing as being done at the unit level, today it is used for integration and system testing as well. It can test paths within a unit, paths between units during integration, and paths between subsystems during a system-level test. Though this method of test design can uncover many errors or problems, it has the potential to miss unimplemented parts of the specification or missing requirements. However, where white-box testing is design-driven, that is, driven exclusively by agreed specifications of how each component of the software is required to behave (as in ISO 26262 processes), white-box test techniques can also assess unimplemented or missing requirements.






Types of White Box Testing:-

(1)      Conditional Testing:- In this type of testing, the logical conditions for every value are checked, whether it is true or false. This means that both the if and else conditions are verified, in the case of an IF-ELSE conditional statement.

(2)      Loop Testing:- Loops are one of the fundamental concepts that are implemented in a large number of algorithms. Loop Testing is concerned with determining the loop validity of these algorithms.

(3)      Path Testing:- Path Testing is a white-box testing approach based on a program’s control structure. A control flow graph is created using the structure, and the different pathways in the graph are tested as part of the process. Because this testing is dependent on the program’s control structure, it involves a thorough understanding of the program’s structure.
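As an illustrative sketch of path testing (the `classify` function is hypothetical), a function with two independent decisions has four distinct paths through its control flow graph, so four inputs are needed to cover them all:

```python
# A function with two decision points; path testing aims to exercise
# every distinct path through it at least once.
def classify(n):
    if n < 0:            # decision 1
        kind = "negative"
    else:
        kind = "non-negative"
    if n % 2 == 0:       # decision 2
        parity = "even"
    else:
        parity = "odd"
    return kind, parity

# Two decisions => 2 x 2 = 4 paths => four test inputs, one per path:
path_inputs = [-2, -1, 2, 1]
```

Note how path coverage demands more cases than branch coverage here: two inputs could cover all four branches, but four are needed to cover all four paths.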

(4)      Unit Testing:- A unit test is a method of testing a unit, which is the smallest piece of code in a system that can be logically separated. Unit testing ensures that each component performs as intended.

(5)      Integration Testing:- Integration testing is performed to check that modules / components operate as intended when combined, i.e. to ensure that modules that performed fine independently do not have difficulties when merged.

(6)      Mutation Testing:- Mutation testing evaluates the effectiveness of test cases by introducing small modifications or “mutations” into the codebase. These mutations simulate potential faults or defects. The objective is to determine if the existing test suite can detect and identify these changes, thereby improving the robustness of the test cases.
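A minimal sketch of the mutation idea (the `is_adult` function and its hand-written mutant are hypothetical; real mutation tools generate mutants automatically): a test suite that checks the boundary value detects, or "kills", a mutant that weakens the comparison.

```python
# Original function.
def is_adult(age):
    return age >= 18

# A "mutant": one small change (>= mutated to >) simulating a fault.
def is_adult_mutant(age):
    return age > 18

# A test suite strong enough to kill the mutant must probe the boundary.
def suite_passes(fn):
    return fn(18) is True and fn(17) is False and fn(30) is True
```

The original passes the suite while the mutant fails it, which is evidence that the suite is sensitive to this class of fault; a suite that skipped `fn(18)` would let the mutant survive.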

(7)      Testing based on Memory Perspective:- The size of the code can increase due to factors such as a lack of code reuse. Consider the following scenario: we have four different blocks of code written for the software, and the first 10 lines of each block are identical. These 10 lines could be written once as a function and made available to all four blocks. Furthermore, if a defect exists, we need only change one line of code in that function rather than every copy. Code size also varies with the programmer: where one programmer produces code with a file size of up to 250 KB, another may develop equivalent code with different logic in a file of only 100 KB.

(8)      Test Performance of the Program:- An application might be slow due to several factors, and a developer or tester cannot go through each line of code to detect and verify a bug. Tools like Rational Quantify are used to overcome this issue; other tools available in the industry for the same purpose include WebLOAD, LoadNinja, LoadView, and StresStimulus. A typical performance test using Rational Quantify is carried out as follows: once the code for the application is complete, the tool goes through the entire code while executing it, and the outcome is displayed in the form of thick and thin lines on a result sheet.

The thick line indicates which part of the code is time-consuming and when the lines would appear as thin, this means that the program’s efficiency has been improved.

Rather than doing this manually, developers usually run such white-box performance checks automatically, since it saves time.


WHITE BOX TESTING TECHNIQUES:-

(1)      Statement Coverage:- One of the main objectives of white box testing is to cover as much of the source code as possible. Code coverage is a measure that indicates how much of an application’s code contains unit tests that validate its functioning.

Using concepts such as statement coverage, branch coverage, and path coverage, it is possible to check how much of an application’s logic is really executed and verified by the unit test suite. These different white box testing techniques are explained below.




(2)      Branch Coverage:- In programming, “branch” is equivalent to, say, an “IF statement” where True and False are the two branches of an IF statement. As a result, in Branch coverage, we check if each branch is processed at least once. There will be two test conditions in the event of an “IF statement”: One is used to validate the “true” branch, while the other is used to validate the “false” branch.
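A minimal sketch of the two branch-coverage test cases, using a hypothetical `grade` function with a single IF statement:

```python
# One IF statement => two branches; branch coverage requires that
# both the "true" branch and the "false" branch execute at least once.
def grade(score):
    if score >= 40:
        return "pass"   # true branch
    else:
        return "fail"   # false branch

# Two test cases, one per branch:
branch_tests = [(75, "pass"), (20, "fail")]
results = [grade(score) == expected for score, expected in branch_tests]
```

With both cases passing, every branch of the decision has been exercised, which is exactly the 100% branch-coverage criterion for this function.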




(3)      Path Coverage:- Path coverage examines all the paths in a given program. This is a thorough strategy that ensures all program paths are explored at least once. Path coverage is more effective than branch coverage; however, for complicated applications the number of paths grows rapidly, which can make full path coverage impractical.

(4)      Decision Coverage:- Decision Coverage is a white box testing methodology that reports the true or false results of each boolean expression present in the source code. The purpose of decision coverage testing is to cover and validate all available source code by guaranteeing that each branch of each potential decision point is traversed at least once.

Decision coverage applies when a control-flow statement, such as an if statement, a do-while statement, or a switch-case statement, can produce two or more outcomes. Expressions in this coverage can become complicated at times; as a result, achieving 100% coverage is quite difficult.





(5)      Multiple Condition Coverage:- In this testing technique, all the different combinations of conditions for each decision are evaluated. For example, we have the following expression,

if (A or B)

then

print C

So, in this case, the test cases would be as given below:

TEST CASE1: A=TRUE, B=TRUE

TEST CASE2: A=TRUE, B=FALSE

TEST CASE3: A=FALSE, B=TRUE

TEST CASE4: A=FALSE, B=FALSE
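The four test cases above can be expressed as a small runnable check; the `decision` helper below is a stand-in for the pseudocode's `if (A or B)` condition:

```python
# Decision under test: the condition of "if (A or B) then print C".
def decision(a, b):
    return a or b

# Multiple condition coverage: every combination of the two conditions.
cases = [
    (True, True),    # TEST CASE 1
    (True, False),   # TEST CASE 2
    (False, True),   # TEST CASE 3
    (False, False),  # TEST CASE 4
]
outcomes = [decision(a, b) for a, b in cases]
```

Only the last combination makes the decision false, so all four cases are needed to show that each condition independently affects the outcome.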

(6)      Control Flow Testing:- This testing technique aims to establish the program’s execution order by use of a control structure. To construct a test case for the program, the control structure of the programme is used. The tester selects a specific section of a programme to build the testing path. It is used mostly in unit testing. The test cases are represented using the control graph of the program. The control Flow Graph consists of the node, edge, decision node, and junction node for all execution paths.

Advantages of White Box Testing;-

(1) Optimization of code by the revelation of hidden faults.

(2) Transparency of the internal code structure helps to derive the type of input data needed to adequately test an application.

(3) This incorporates all conceivable code paths, enabling a software engineering team to carry out comprehensive application testing.

(4) Knowledge of the source code also has the side benefit of supporting more thorough testing.

(5) Provides traceability of tests from the source, thereby allowing future changes to the source to be easily captured in the newly added or modified tests

Disadvantages of White Box Testing:-

(1) A complicated and expensive process that involves the skill of an experienced professional, programming ability and knowledge of the underlying code structure.

(2) A new test script is necessary whenever the implementation changes, and implementations often change frequently.

(3) Detailed testing with the white box testing approach is significantly more demanding if the application covers many different areas, such as the Gojek Super App.

(4) It is not realistic to test every single condition of the application, so some conditions will remain untested.

(5) White-box testing brings complexity to testing because the tester must have knowledge of the program, or the test team needs to have at least one very good programmer who can understand the program at the code level.



FUNCTIONAL TESTING (BLACK BOX TESTING):-

Black box testing is a software testing technique where the internal workings or code structure of the system being tested are not known to the tester.

 

Black box testing involves evaluating the functionality of software without looking into its internal structures or workings. The term "black box" refers to a system whose internal mechanics are unknown, so testing focuses only on the output generated for a given input. When conducting black box testing, the tester doesn't need knowledge of the internal structure of the software; the test is conducted from a user's perspective. This type of testing can be applied at every level of software testing, including unit testing, integration testing, acceptance testing, and security testing. The primary advantage of black box testing lies in its focus on the user perspective, ensuring that the software meets user requirements and expectations.

TYPES OF BLACK BOX TESTING:-

(1)      Functional Testing:- Functional testing is a type of black box testing that focuses on validating the software against functional requirements and specifications. It ensures that the software behaves as expected in response to specific inputs. Functional testing is conducted at all levels and includes techniques like unit testing, integration testing, system testing, and acceptance testing.

(2)      Non-Functional Testing:- While functional testing focuses on what the software does, non-functional testing is concerned with how the software performs. It evaluates aspects like the performance, usability, reliability, and compatibility. Black-box non-functional testing checks these criteria from the end-user’s perspective. For example, a black-box performance test of a website might simulate a user session and measure the actual page load time.

(3)      Security Testing:- Security testing is a type of black box testing that checks the software for any potential weaknesses or security risks. It aims to ensure that the software is secure from any threats and that the data and resources of the system are protected from breaches.

 

BLACK BOX FUNCTIONAL TESTING TECHNIQUES:-

(1)      Equivalence Partitioning:- Divides the input data into equivalent partitions, with each partition treated the same by the program. Testing one representative from each partition is usually enough to cover all potential scenarios.

Example: For a form that accepts age input between 18 and 65, equivalence partitions might include:-

Valid partition: 18-65 (e.g., age 25)

Invalid partition: Below 18 (e.g., age 15)

Invalid partition: Above 65 (e.g., age 70)

(2)      Decision Table Testing:- Decision table testing is a systematic and organized black box testing technique used to deal with complex systems. This technique is beneficial when the system’s behavior is different for different combinations of inputs. It’s often used when there are multiple inputs that can have different values and can result in different outputs.

(3)      Boundary Value Analysis:- At specific boundary values, testers observe how a system responds uniquely. This black box testing technique tests the limits of valid and invalid partitions. It focuses on points where errors are likely to occur.
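As a minimal sketch, assuming the same age rule as the equivalence-partitioning example above (valid ages 18 to 65 inclusive; the `age_valid` validator is hypothetical), boundary value analysis probes each boundary and its immediate neighbours:

```python
# Hypothetical validator for the age field (valid range 18-65 inclusive).
def age_valid(age):
    return 18 <= age <= 65

# Boundary value analysis: each boundary and the value just outside it.
boundary_cases = {17: False, 18: True, 65: True, 66: False}
checks = [age_valid(v) == expected for v, expected in boundary_cases.items()]
```

An off-by-one mistake such as `18 < age` would pass the mid-partition value 25 but fail the boundary case 18, which is why these edge values are where errors are most likely to surface.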

(4)      Error Guessing:- Based on experience, testers anticipate common mistakes developers may make when building similar systems.

(5)      State Transition Testing:- Tests the system’s behaviour in various states and transitions between them. It ensures that the system functions properly when transitioning from one state to another. Example: For a user login system, states might include:

Logged Out, Logged In, Suspended
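The three states above can be sketched as a tiny transition table (the event names and the rule that invalid transitions leave the state unchanged are assumptions for illustration):

```python
# Allowed state transitions for the hypothetical login example.
TRANSITIONS = {
    ("Logged Out", "login_ok"): "Logged In",
    ("Logged In", "logout"): "Logged Out",
    ("Logged In", "violation"): "Suspended",
}

def next_state(state, event):
    # Assumed behavior: an event with no defined transition is ignored.
    return TRANSITIONS.get((state, event), state)
```

State transition tests then check both valid moves (Logged Out plus a successful login yields Logged In) and invalid ones (a Suspended user attempting to log in stays Suspended).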

Advantages of Black Box Testing:-

(1)      Independence from Internal Implementation:- Testers do not need to have access to the source code or knowledge of the internal implementation, making it suitable for non-technical team members.

(2)      User-Centric Testing:- Black box testing focuses on the software’s external behavior, ensuring that it meets user requirements and expectations.

(3)      Testing from End-User Perspective:- It simulates real user scenarios, helping to identify usability issues and ensuring the software meets user needs.

(4)      Effective for Requirement Validation:- Black box testing helps validate that the software meets the specified requirements.

(5)      Suitable for Large Projects:- It can be applied at different testing levels, from unit testing to acceptance testing, making it scalable for large projects.


Disadvantages of Black Box Testing:-

(1)      Limited Code Coverage:- Black box testing may not explore all possible code paths or internal logic, potentially leaving certain defects undetected.

(2)      Redundant Testing:- Some test cases may overlap, leading to redundant testing efforts and less optimal test coverage.

(3)      Dependency on Requirements:- Test cases are heavily dependent on the accuracy and completeness of the provided requirements. Incomplete or ambiguous requirements can result in incomplete testing.

(4)      Difficulty in Error Localization:- Identifying the root cause of defects detected in black box testing can be challenging, as testers lack access to internal code.

(5)      Limited Security Testing:- While black box testing can identify certain security weaknesses, it may not comprehensively address all potential security issues.

TESTING CONVENTIONAL APPLICATIONS:-

Testing of conventional applications is an important activity in software engineering that focuses on verifying and validating traditional software systems developed using structured or procedural approaches. Conventional applications are generally standalone or client–server based systems such as payroll systems, banking systems, library management systems, and inventory control systems. These applications have fixed functionality, well-defined inputs and outputs, and follow a linear flow of execution. Testing ensures that such applications work correctly according to specified requirements and perform reliably under normal and abnormal conditions.

The main purpose of testing conventional applications is to identify defects, errors, and missing requirements before the software is delivered to users. During development, programmers may introduce logical mistakes, calculation errors, or incorrect control flows. Testing helps in discovering these issues early, which reduces the cost of fixing defects later. It also ensures that the software behaves as expected when used by real users and produces accurate results.

 

Example of Testing a Conventional Application:-

Consider a Library Management System, which is a conventional application. This system includes functions such as adding books, issuing books, returning books, and calculating fines. During testing, each function is checked to verify its correctness. For example, when a valid book ID is entered, the system should successfully issue the book. If an invalid book ID is entered, the system should display an appropriate error message. Similarly, when a book is returned after the due date, the system must calculate the fine correctly. Testing ensures that all these operations work accurately and reliably.



Need for Testing Conventional Applications:-

Testing is required for conventional applications because even small errors can lead to serious failures. For example, in a banking application, a minor calculation error may result in incorrect interest amounts, causing financial loss and loss of customer trust. Testing ensures correctness, reliability, and stability of the software. It also helps in checking whether the application satisfies functional requirements, handles invalid inputs properly, and recovers gracefully from unexpected situations.

 

TESTING OF CONVENTIONAL APPLICATIONS IN OBJECT-ORIENTED APPLICATIONS:-

Testing of conventional applications in object-oriented (OO) applications refers to the process of verifying and validating software systems that are developed using object-oriented concepts such as classes, objects, inheritance, encapsulation, and polymorphism. Although the development approach is object-oriented, the application itself performs traditional business functions such as payroll processing, banking transactions, library management, and student information handling. Testing ensures that each object, class, and their interactions work correctly and that the complete system satisfies user requirements.

In object-oriented applications, testing is more complex than in purely procedural systems because functionality is distributed across multiple objects. Objects interact through method calls, share data using relationships, and reuse behavior through inheritance. Therefore, testing must not only check individual methods but also verify object collaborations, class hierarchies, and dynamic behavior during execution.

 

Why Testing is Important in Object-Oriented Applications:-

Testing is essential in object-oriented applications because errors can occur due to incorrect object interaction, improper inheritance usage, or incorrect overriding of methods. A defect in one class may affect many other classes because of reuse and relationships. Testing helps in identifying such errors early and ensures that the system remains reliable, maintainable, and scalable.

For example, in a banking system developed using object-oriented programming, an error in the Account base class can affect SavingsAccount and CurrentAccount subclasses. Proper testing ensures that such shared functionality works correctly for all derived classes.

Example of Testing an Object-Oriented Conventional Application:-

Consider a Library Management System developed using object-oriented concepts.

Classes involved:- Book, Member, Librarian, Transaction etc.

Each class contains attributes and methods. Testing involves checking whether:

·         A Book object correctly updates availability

·         A Member object can issue and return books

·         The Transaction class correctly calculates fine

·         Interaction between objects works properly

For example, when a Member issues a book, the Transaction object should update the Book object’s status correctly. Testing verifies this object interaction.

Types of Testing in Object-Oriented Conventional Applications:-

(1)      Class Testing (Unit Testing in OO):- Class testing focuses on testing individual classes and their methods. It is similar to unit testing in procedural systems but is applied to classes instead of functions. The goal is to ensure that each method performs its intended task correctly.

Example:- Testing the calculateFine() method of the Transaction class by passing different return dates to verify correct fine calculation.

Class testing also checks:- Correct initialization of objects, Valid input handling, Method output correctness.
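A minimal class-testing sketch for the Transaction example (the class layout and the fine of 2 currency units per day late are hypothetical assumptions):

```python
from datetime import date

# Hypothetical Transaction class: fine accrues per day past the due date.
class Transaction:
    FINE_PER_DAY = 2  # assumed rate for illustration

    def __init__(self, due_date):
        self.due_date = due_date

    def calculate_fine(self, return_date):
        days_late = (return_date - self.due_date).days
        return max(0, days_late) * self.FINE_PER_DAY

# Class test: different return dates exercise on-time and late paths.
t = Transaction(due_date=date(2026, 1, 10))
on_time_fine = t.calculate_fine(date(2026, 1, 10))  # returned on due date
late_fine = t.calculate_fine(date(2026, 1, 15))     # returned 5 days late
```

Passing different return dates to one object checks correct initialization, valid input handling, and method output correctness, which are the three class-testing concerns listed above.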

(2)        Integration Testing in Object-Oriented Applications:- Integration testing in OO applications focuses on testing interactions between classes and objects. Since objects collaborate to perform tasks, this type of testing ensures correct message passing and data sharing between objects.

Example:- Testing interaction between Member, Book, and Transaction classes during book issue and return operations.

Common OO integration approaches include:-

(a) Thread-based integration testing

(b) Use-based integration testing

(3)        System Testing in Object-Oriented Applications:-System testing verifies the complete object-oriented system as a whole. All classes and objects are integrated and tested against functional and non-functional requirements.

Example:- Testing the entire library system including login, book management, issue/return, and report generation.

System testing checks:- (a) End-to-end functionality (b) Performance (c) Security (d) Reliability

(4)        Acceptance Testing:- Acceptance testing is performed by end users or clients to confirm that the object-oriented application meets business requirements. It ensures that the system behaves correctly in real-world scenarios.

Example:- Library staff testing the system to confirm that daily operations can be performed easily and correctly.

TESTING OF CONVENTIONAL APPLICATIONS IN WEB APPLICATIONS:-

Testing of conventional applications in web applications refers to the systematic process of verifying and validating traditional web-based software systems to ensure that they function correctly, securely, and reliably over the internet. Conventional web applications usually follow a client–server architecture, where the client is a web browser and the server hosts application logic and databases. Examples include online library systems, college portals, banking websites, e-commerce sites, and government service portals.

Unlike desktop conventional applications, web applications run in a distributed environment, which means they must be tested for browser compatibility, server response, network issues, and user interaction through web pages. Testing ensures that the web application meets functional requirements, handles multiple users, processes data correctly, and provides a smooth user experience.

 

Need for Testing Conventional Web Applications:-

Testing is essential for web applications because they are accessed by many users simultaneously and are exposed to security threats. A small error in a web application can affect thousands of users at the same time. Testing helps detect functional errors, interface issues, performance problems, and security weaknesses before deployment.

For example, in an online banking web application, an incorrect transaction or security loophole can lead to financial loss and data theft. Proper testing ensures data integrity, accuracy, and trustworthiness of the system.

Example of a Conventional Web Application:- Consider a College Management Web Application.

·         Features include:-

·         Student login

·         Course registration

·         Fee payment

·         Result viewing

·         Testing checks whether:-

✓  Students can log in with valid credentials

✓  Invalid login attempts are rejected

✓  Fee payment updates the database correctly

✓  Results displayed are accurate

Testing verifies both front-end (web pages) and back-end (server and database) behavior.



Types of Testing in Conventional Web Applications:-

(1)        Functional Testing:- Functional testing checks whether all functions of the web application work according to requirements. It focuses on links, forms, buttons, navigation, and database operations.

Example:- Testing whether submitting a registration form with valid data successfully creates a new student record in the database.

Functional testing ensures that:-

·         Input validation works correctly

·         Correct outputs are generated

·         Business logic is implemented properly

(2)        Unit Testing (Web Context):- Unit testing in web applications focuses on testing individual components such as server-side functions, APIs, or modules. It is usually performed by developers.

Example:- Testing a server-side function that calculates total fees based on selected courses.

Unit testing ensures correctness at the code level before integration.
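As an illustration of the fee-calculation example, a developer-level unit test might be sketched in Python (the function `calculate_total_fees` and the fee table are hypothetical, not from any real portal):

```python
# Hypothetical server-side function: total fee for the selected courses.
COURSE_FEES = {"CS101": 5000, "MA102": 4000, "IT103": 4500}  # assumed fee table

def calculate_total_fees(course_codes):
    """Sum the fees of the selected courses; unknown codes raise an error."""
    total = 0
    for code in course_codes:
        if code not in COURSE_FEES:
            raise ValueError(f"Unknown course: {code}")
        total += COURSE_FEES[code]
    return total

# Unit tests exercise the function in isolation, before integration.
assert calculate_total_fees(["CS101", "MA102"]) == 9000
assert calculate_total_fees([]) == 0
```

Such tests are run by developers on each component before the component is integrated with the rest of the application.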

(3)        Integration Testing:- Integration testing verifies interaction between different layers of a web application, such as:

Front-end ↔ Server

Server ↔ Database

API ↔ External services

Example:- Testing whether data entered in a web form is correctly stored in the database and retrieved when requested. This type of testing ensures smooth communication between components.

(4)        System Testing:- System testing evaluates the complete web application as a whole. All modules are integrated and tested in an environment similar to real usage.

Example:- Testing the entire college portal including login, course registration, fee payment, and result generation. System testing checks functionality, usability, and reliability.

(5)        Acceptance Testing:- Acceptance testing is performed by end users or clients to confirm that the web application meets business requirements and is ready for deployment.

Example: College administrators testing the portal to ensure it supports daily academic operations.

 

FORMAL MODELING (or FORMAL METHODS) AND VERIFICATION:-

(1). Meaning of Formal Modeling or Formal Methods:- Formal modeling is the process of describing a software system using mathematical techniques instead of natural language. In normal software development, requirements are written in English, which may be ambiguous, incomplete, or misunderstood. Formal modeling removes this problem by using logic, sets, relations, and state machines to clearly define system behavior. In formal modeling, every operation, input, output, and system state is precisely defined, leaving no scope for confusion. This model becomes the exact reference for developers and testers.

Example:- Instead of saying “User can withdraw money if balance is sufficient”, formal modeling mathematically defines:- Account balance, Withdrawal amount, Conditions under which withdrawal is allowed etc.

Characteristics of Formal Modeling:-

Mathematical Foundation:-  Formal modeling is based on set theory, predicate logic, relations and functions, and state machines. Because of this, every behavior of the system can be mathematically analyzed.

When Formal Modeling Is Used:-

Formal modeling is used when:- Software is complex, System is safety-critical, High reliability is required, Failure cost is very high.

Examples include:- Aircraft control systems, Railway signaling, Medical devices, Nuclear plant software etc.

Step-by-Step Example of Formal Modeling:-

Example:- Bank Account System

Informal Requirement

“User can withdraw money if sufficient balance is available.”

Formal Model

Balance ≥ 0

Withdraw amount ≤ Balance

New Balance = Balance − Withdraw amount

This ensures no negative balance.
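The formal model above can be illustrated as executable pre- and post-conditions (a minimal Python sketch for intuition, not a real formal-methods tool):

```python
def withdraw(balance, amount):
    """Withdrawal under the formal model's conditions."""
    assert balance >= 0             # invariant: Balance >= 0
    assert 0 < amount <= balance    # precondition: Withdraw amount <= Balance
    new_balance = balance - amount  # postcondition: New Balance = Balance - amount
    assert new_balance >= 0         # invariant preserved: no negative balance
    return new_balance

assert withdraw(100, 40) == 60
```

If any condition is violated (e.g. withdrawing more than the balance), the operation is rejected, exactly as the formal model specifies.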

Advantages of Formal Modeling:-

1.      Removes ambiguity

2.      Detects errors early

3.      Improves reliability

4.      Ensures consistency

5.      Reduces maintenance cost

Limitations of Formal Modeling

1.       Requires mathematical knowledge

2.       Time-consuming

3.       Not suitable for small projects

4.       Difficult for non-technical stakeholders



FORMAL VERIFICATION :-

Introduction to Formal Verification:- Formal verification is a mathematical technique used in software engineering to prove that a software system is correct according to its specifications. Instead of executing the program with some test cases, formal verification examines all possible behaviors of the system using logic and mathematics.

In simple words:- Formal verification proves correctness for all possible cases, while testing only checks correctness for selected cases.

Formal verification is mainly used in systems where failure is unacceptable, such as aircraft control systems, medical devices, railway signaling, and banking software.

Definition of Formal Verification:- Formal verification is the process of using mathematical logic and formal methods to demonstrate that a system satisfies its formal specification for all possible inputs and states.

Why Formal Verification Is Needed:-

Traditional software testing has limitations:

·         It checks only selected test cases

·         It cannot cover all input combinations

·         Bugs may remain hidden

Formal verification is needed because:

·         Software systems are highly complex

·         Errors in critical systems can cause financial loss or death

·         Some bugs appear only in rare conditions

Formal verification provides strong confidence in correctness.

Example:- Step-by-Step: ATM System

Formal Specification:-

1.      Card must be inserted before PIN entry

2.      PIN must be verified before withdrawal

3.      Balance must be sufficient

4.      ATM must return to idle state
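The four rules above can be illustrated as a small state machine that rejects any out-of-order event (a hypothetical sketch; the state and event names are assumptions made for this example):

```python
# Allowed transitions of the hypothetical ATM specification.
TRANSITIONS = {
    ("idle", "insert_card"): "card_inserted",     # rule 1: card before PIN
    ("card_inserted", "enter_pin"): "pin_verified",  # rule 2: PIN before withdrawal
    ("pin_verified", "withdraw"): "dispensing",      # rule 3 checked elsewhere
    ("dispensing", "eject_card"): "idle",            # rule 4: return to idle
}

def step(state, event):
    """Reject any event the specification does not allow in the current state."""
    if (state, event) not in TRANSITIONS:
        raise ValueError(f"Illegal event '{event}' in state '{state}'")
    return TRANSITIONS[(state, event)]

state = "idle"
for event in ["insert_card", "enter_pin", "withdraw", "eject_card"]:
    state = step(state, event)
assert state == "idle"   # the ATM returns to the idle state
```

A formal verification tool would prove that no sequence of events can violate the specification; this sketch merely makes the specified ordering concrete.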

Advantages of Formal Verification:-

1.      Mathematical proof of correctness

2.      Finds hidden and rare bugs

3.      Improves system reliability

4.      Reduces long-term cost

Limitations of Formal Verification:-

1.      Requires advanced mathematics

2.      Time-consuming

3.      Expensive

4.      Not suitable for small systems


SOFTWARE CONFIGURATION MANAGEMENT (or SCM) :-

Software configuration management is a process that ensures that everyone working on a software development project follows a specific set of standards, so that the outcomes are consistent in quality and each person’s work maintains integrity, traceability, and accountability. This discipline covers the management of computer programs and scripts, file storage, change tracking, and tests and revisions, while ensuring that no changes are made without proper authorization and documentation.

Software Configuration Management (SCM) is a discipline of Software Engineering that provides a better process for handling, organizing, and controlling the changes in requirements, codes, teams, and other elements in the software project development life cycle. Whenever software is built, there is always scope for improvement, as the improvements add to the final product and bring change to the overall functionality.

Software engineers make required changes (modifying or updating an existing solution, or creating a new solution for a problem) to enhance the product. Requirements keep changing, for example as unit testing is performed, so the system must keep being upgraded to meet the current requirements and produce the desired outputs.

Changes need to be analyzed before they are made to the existing system, recorded before they are implemented, reported with details of the state before and after, and controlled in a manner that improves quality and reduces error, as the entire system remains at stake. This is where the need for Software Configuration Management comes in, to handle these nuances and bring the appropriate changes to the software.

It is crucial to control changes because unchecked changes may end up undermining well-functioning software. In this way, SCM is an essential part of all engineering project management activities, and the primary goal of SCM is to increase productivity with minimal mistakes. SCM is part of the cross-disciplinary field of configuration management, and it can accurately determine who made which revision, so it is easier for the team to coordinate and work with accountability.

 

IMPORTANCE OF CONFIGURATION MANAGEMENT IN PROJECTS:-

Configuration management is essential for both small-scale and large-scale projects, especially when multiple teams work on different components simultaneously.

Here are key reasons why CM is critical in system and software engineering and integral to effective systems engineering services:-

(1) Maintaining Consistency:- As software evolves, managing multiple versions and configurations can become untidy. CM helps maintain consistency across different environments (development, testing, production) and ensures that all team members are working with the correct version of the code or system. 

(2)        Enabling Collaboration:-  In projects where multiple teams work on different modules, configuration management ensures that changes from one team do not inadvertently break another team’s work. Version control systems like Git help track who made changes, what changes were made, and why. 

(3)        Minimizing Errors:-  Without proper CM, unauthorized or incorrect changes can lead to bugs, system crashes, or security weaknesses. CM ensures that all changes are reviewed, tested, and documented, reducing the possibility of errors. 

(4)        Supporting Automation:-  Configuration management tools integrate well with Continuous Integration (CI) and Continuous Deployment (CD) pipelines, allowing for automated builds, tests, and deployments. This integration leads to faster delivery cycles and more reliable software releases. 

(5)        Handling Complexity:- As projects grow in complexity, managing dependencies between different software components and systems becomes challenging. CM helps manage this complexity by documenting configurations and ensuring smooth integration of components. 



ELEMENTS OF CONFIGURATION MANAGEMENT:-

(1)        Version Control :- Version Control Systems (VCS) are essential tools in configuration management. They allow developers to track changes to the codebase, collaborate on projects, and revert to previous versions if necessary.

(2)        Change Management:- In any software project, changes to the system or codebase are unavoidable. Change Management ensures that all changes are tracked, approved, and tested before they are integrated into the project.

(3)        Release Management:- Release Management involves planning, scheduling, and controlling the movement of releases to test and production environments. Proper release management ensures that releases are smooth, consistent, and free from unnecessary disruptions.

With the help of CI (Continuous Integration) / CD (Continuous Deployment) pipelines, release management is often automated. When developers push changes to the main branch, automated scripts can compile, test, and deploy the new code automatically.

(4)        Configuration Audits :- A configuration audit verifies that a system’s configuration is consistent with the intended baseline. It ensures that all components are up to date and that no unauthorized changes have been made.

 

WHAT ARE SOFTWARE METRICS? (OUT OF SYLLABUS FROM B. SC IT) :-

Software Metrics are quantitative (measurable) measures used to assess various aspects of a software product, the development process, and the overall project. These metrics provide valuable information to evaluate the quality, performance, and progress of software development.

Software metrics provide the ability to measure the progress, quality, and efficiency of software development projects. They help project managers make better decisions, manage risk, and improve software quality continuously. Understanding and effectively measuring these metrics is critical to creating high-quality software that meets deadlines and stays within budget.

Multiple software metrics are interrelated within the software development process. Software metrics incorporate FOUR management functions: organization, improvement, planning, and control.

(1)        Capability Maturity Model Integration (CMMI), developed by the Software Engineering Institute (SEI), is an industry-standard model, like ISO 9000, that helps in utilizing software metrics to monitor, manage, understand, and predict software projects, products, and processes.

(2)        Software metrics offer management and engineers the necessary information to make technical decisions.

(3)        For software metrics to offer helpful information, everyone involved in selecting, designing, implementing, collecting, and utilizing them must understand their definition and purpose.

(4)        Software metrics are chosen on the basis of project, organizational, and task objectives that must be determined early. Software metrics programs must be created to provide accurate data to enhance software engineering processes and services, as well as manage software projects.

 

Importance of Metrics and Measurement in Software Engineering:-

Software metrics and measurement play a critical role in software engineering and serve as the building block for efficient project management, continuous improvement, and quality assurance.

Importance of Metrics are:-

(1)        Project Planning and Estimation:-   Software metrics such as size measurement, effort estimation, and complexity assessment enable realistic project budgets and schedules. Precise planning minimizes risks associated with overcommitment or underutilization and ensures resource optimization.

(2)        Quality Assurance:-   Software metrics like defect density, test coverage, and mean time to failure help teams assess software quality at each stage. Software metrics and measurements offer insights into where enhancement is required while ensuring reliable and robust software delivery.

(3)        Performance Tracking:- Monitoring productivity metrics helps software development teams track progress and adjust workflows to meet deadlines. Software performance metrics ensure alignment with project goals and customer expectations.

(4)        Risk Management:- Code churn and requirement volatility are software metrics used to identify potential risks in the project. Addressing these risks proactively reduces project delays and cost overruns.

(5)        Informed Decision-Making:- Objective data supports decision-making processes such as selecting tools and technologies and allocating resources. Data-driven strategies are more likely to succeed than those based on intuition.

Advantages of Software Metrics:-

1.      Reduction in cost or budget.

2.      It helps to identify particular areas for improvement.

3.      It helps to increase the product quality.

4.      Managing the workloads and teams.

5.      Reduction in overall time to produce the product.

6.      It helps to determine the complexity of the code and to test the code with resources.

7.      It helps in providing effective planning, controlling and managing of the entire product.

Disadvantages of Software Metrics:-

1.      It is expensive and difficult to implement the metrics in some cases.

2.      The performance of the entire team or of an individual team member can't be determined directly; only the performance of the product is measured.

3.      Sometimes the measured quality of the product does not match expectations.

4.      It may lead to measuring unwanted data, which is a waste of time.

5.      Measuring incorrect data leads to wrong decision-making.

TYPES OF SOFTWARE TESTING METRICS:-

(1)        Process Metrics (OUT OF SYLLABUS FROM B. SC IT):- Process Metrics focus on measuring the effectiveness and efficiency of the software development process itself. They help in identifying areas where the development process can be improved for better productivity and quality. Some common process metrics include:

1. Cycle Time:- It measures the time taken to complete a specific task or user story from the beginning to the end. Shorter cycle times indicate better efficiency in the development process.

2. Defect Removal Efficiency (DRE):- This metric evaluates how effective the development process is at finding and fixing defects during testing. Higher DRE implies a more robust testing process.
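DRE is commonly computed as the defects removed before release divided by the total defects (including those found after release), expressed as a percentage. A minimal sketch:

```python
def defect_removal_efficiency(found_before_release, found_after_release):
    """DRE = defects removed before release / total defects, as a percentage."""
    total = found_before_release + found_after_release
    return 100.0 * found_before_release / total

# 90 defects caught during testing, 10 escaped to production -> DRE = 90%
assert defect_removal_efficiency(90, 10) == 90.0
```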

3. Code Review Effectiveness:- It assesses how well code reviews catch defects and improve code quality. Regular and effective code reviews lead to better code and reduced errors.

 

(2)        Project Metrics (OUT OF SYLLABUS FROM B. SC IT):- Project Metrics focus on measuring the progress and success of the entire software development project. They help in tracking the project's status, budget, and overall health. Some common project metrics include

1. Schedule Variance (SV):- It measures the difference between the planned schedule and the actual progress of the project. Positive SV indicates the project is ahead of schedule, while negative SV indicates delays.

2. Cost Performance Index (CPI):- This metric assesses the cost efficiency of the project by comparing the budget spent with the work completed.
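CPI is commonly computed as the earned value (budgeted cost of work performed) divided by the actual cost; a value above 1 indicates cost efficiency. A minimal sketch (the figures are illustrative):

```python
def cost_performance_index(earned_value, actual_cost):
    """CPI = budgeted cost of work performed / actual cost spent."""
    return earned_value / actual_cost

# Work worth 120,000 completed at an actual cost of 100,000 -> under budget
assert cost_performance_index(120000, 100000) == 1.2
```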

3. Defect Arrival Rate:- It measures the rate at which defects are identified during the testing phase. Tracking this metric helps in understanding the defect trend over time.

 

(3)        Size Estimation (OUT OF SYLLABUS FROM B. SC IT) :- Size estimation in software engineering refers to the process of determining the size of a software project before it is developed. It helps in planning and managing the project effectively.

 

(4)        Line of Code (OUT OF SYLLABUS FROM B. SC IT):- One common metric used for size estimation is "Lines of Code" (LOC). Lines of Code represent the total number of lines in the source code of a software program. To calculate the Lines of Code (LOC), follow these steps:

Step 1: Count all the lines of code in your program, including code lines, comments, and blank lines.

Step 2: Sum up all the lines, and that will be your Lines of Code (LOC) for the software project.

However, it's important to note that Lines of Code is just one metric for size estimation and may not always provide an accurate representation of a project's complexity or effort required.
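The two steps above can be sketched as a tiny counter (a simplified illustration; real LOC tools usually distinguish code, comment, and blank lines):

```python
def count_loc(source_text):
    """Step 1 + Step 2: count every line, including comments and blank lines."""
    return len(source_text.splitlines())

program = "# add two numbers\n\ndef add(a, b):\n    return a + b\n"
assert count_loc(program) == 4   # 1 comment + 1 blank + 2 code lines
```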

 

(5)        Function Count (OUT OF SYLLABUS FROM B. SC IT):- Function count is another important aspect of size estimation in software engineering. It involves counting the number of functions or subroutines present in a software program. Functions are blocks of code that perform specified tasks within the program. By counting functions, we can estimate the approximate complexity and size of the software. To calculate the Function Count, follow these steps:

Step 1: Identify and list down all the functions or subroutines in your software program.

Step 2: Count the total number of functions listed. Function count helps developers and project managers to understand the modularity and maintainability of the codebase.
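For a Python codebase, the two steps can be automated with the standard-library `ast` module (a minimal illustration; it counts only ordinary `def` statements):

```python
import ast

def count_functions(source_text):
    """Step 1 + Step 2: find and count all function definitions in the source."""
    tree = ast.parse(source_text)
    return sum(isinstance(node, ast.FunctionDef) for node in ast.walk(tree))

program = "def login():\n    pass\n\ndef pay_fees():\n    pass\n"
assert count_functions(program) == 2
```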

 

(6)        Product Metrics:- Product Metrics focus on measuring the characteristics and quality of the software product itself. They help us understand how well the software performs and whether it meets the desired requirements.

Some standard product metrics include:-

1. Defect Density:- This metric indicates the number of defects (bugs or errors) found in the software per thousand lines of code (KLOC) or per function point. It helps indicate the software's reliability and stability.

2. Code Coverage:- It measures the percentage of code that has been tested by the software's automated tests. Better code coverage leads to improved performance and reduces the chances of untested code causing issues.

3. Response Time:- This metric evaluates how quickly the software responds to user inputs or requests. Lower response time indicates better performance and user experience.


TEAM ANALYSIS IN METRICS CALCULATION:-

Meaning of Team Analysis:- Team analysis in metrics calculation means measuring, evaluating, and improving the performance of a software development team using quantitative metrics. Instead of judging individuals subjectively, team analysis uses data (numbers, ratios, trends) to understand:-

·         How efficiently the team works

·         How good the quality of the software is

·         How well the team collaborates and communicates

·         Whether the project is on time and within cost.

Why Team Analysis is Needed:-

Team analysis is important because:

1.      It helps project managers take correct decisions

2.      It identifies team strengths and weaknesses

3.      It improves productivity and quality

4.      It helps in future project estimation

5.      It avoids overloading or under-utilizing team members

Types of Metrics Used in Team Analysis:-

(1)        Productivity Metrics:- These metrics show how much work the team produces.

Examples:- Lines of Code (LOC) per developer, Function Points (FP) per person-month, User stories completed per sprint.

Formula:-    Productivity = Total Output / Total Effort

Example:

Team size = 5 developers

Total Function Points delivered = 250

Total effort = 5 person-months

Productivity = 250 / 5 =50 FP (function point) / Person Month
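The worked example above can be expressed directly in code (a minimal sketch of the formula):

```python
def productivity(total_output, total_effort):
    """Productivity = Total Output / Total Effort."""
    return total_output / total_effort

# 250 function points delivered with 5 person-months of effort
assert productivity(250, 5) == 50.0   # FP per person-month
```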

 

( Productivity:- Productivity means the amount of useful software produced per unit of effort. It shows how efficient the team is: higher productivity = more work done with less effort; lower productivity = less output with more effort.

The unit of productivity depends on how output and effort are measured.

Examples:- LOC per person-month, Function Points per person-month. )

(Total Output:- Total Output refers to the quantity of software work delivered by the team.)

(Function Point:- FP metrics measure the size of the software based on the logical design and the functions performed, as per the requirements of the user.

Example:- Login module = 10 FP, Payment module = 20 FP, Report module = 15 FP; Total Output = 45 FP)

 

(Person-Month:- A person-month (or man-month) is a unit of effort: one person working full-time for one month. It is used to express the effort required to execute a software project.)

( Total Effort:- Total Effort means the total human work spent to produce the software.

How is Effort Measured?; Effort is usually measured in:

(a) Person-Hours:- 1 person working for 1 hour = 1 person-hour

Example:- 5 developers × 8 hours × 20 days,  Effort = 800 person-hours

(b) Person-Days:- 1 person working for 1 day = 1 person-day

Example:- 4 developers × 25 days, Effort = 100 person-days

(c) Person-Months (Most common)

1 person working full-time for 1 month

Example:-5 developers × 3 months, Effort = 15 person-months

[ Applying the Formula (Step-by-Step Example)

Example 1: Using Function Points:- Total Output = 300 Function Points, Total Effort = 10 person-months.

Productivity = 300 / 10 =30 FP / person month, Meaning: Each team member produces 30 FP per month. ]

 [  Example 2: Using LOC

Total Output = 15,000 LOC

Total Effort = 5 person-months

Productivity = 15000 / 5 =3000 LOC / person month, Meaning: One person produces 3000 lines of code per month.  ]



(2)        Quality Metrics:- These metrics measure software quality produced by the team.

Common Quality Metrics:- Defect Density, Defects per developer, Defect Removal Efficiency (DRE)

Defect Density Formula:- Defect Density = Number of Defects / Size of Software (in KLOC)

Example:- Defects found = 40, Size = 20,000 LOC

Defect Density = 40 / 20 = 2 defects/KLOC (kilo lines of code) ( Lower value = better team quality performance )
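A minimal sketch of the defect-density calculation, using the same figures:

```python
def defect_density(defects, size_kloc):
    """Defect Density = Number of Defects / Size in KLOC."""
    return defects / size_kloc

# 40 defects found in 20,000 LOC (= 20 KLOC)
assert defect_density(40, 20) == 2.0   # defects per KLOC
```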

(3)        Schedule Metrics:- These metrics show how well the team meets deadlines.

Examples:- Schedule Variance (SV),  Planned vs Actual Time

Formula:-

Schedule Variance = Actual Time − Planned Time

Example:- Planned duration = 6 months, Actual duration = 7 months

SV = 7 − 6 = +1 month (delay) (Positive value = delay, Negative value = ahead of schedule)
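The document's convention (Actual − Planned, where positive means delay) can be sketched as:

```python
def schedule_variance(actual_months, planned_months):
    """SV = Actual Time - Planned Time (positive = delay, negative = ahead)."""
    return actual_months - planned_months

# Planned 6 months, took 7 months -> one month of delay
assert schedule_variance(7, 6) == 1
```

Note that some project-management texts define SV the other way around (Earned Value − Planned Value, with positive meaning ahead of schedule); this sketch follows the convention used in these notes.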

(4)        Effort Metrics:- These metrics measure team workload and effort.

Examples:-  Person-hours,  Effort variance,  Average effort per task

Example:- Total effort = 2000 person-hours, Tasks completed = 100

Average Effort per Task = 2000 / 100 = 20 hours

(5)        Collaboration and Communication Metrics:- These metrics analyze team coordination and teamwork.

Examples:- Number of task handovers, Code review participation rate, Stand-up meeting attendance.

Example:- Code reviews done = 90, Code changes = 100

Review Coverage = (90 / 100) × 100 = 90% (High coverage indicates good teamwork and collaboration.)

(6)        Risk and Stability Metrics:- These metrics show team stability and risk level.

Examples:- Team turnover rate, Skill coverage ratio

Turnover Rate Formula:- Turnover Rate = Team Members Left / Total Team Members

Example:- Team size = 10, Members left = 2

Turnover Rate = 2 / 10 = 0.2 = 20% ( High turnover = high project risk )

COMPLETE REAL-LIFE EXAMPLE (TEAM ANALYSIS):-

Project Details:- Project:- Online Shopping Website, Team Size:- 6 developers, Duration:- 4 months

Collected Metrics:- Function Points delivered:-  480, Effort:- 24 person-months, Defects found:- 36,   Size:-  30 KLOC, Planned time:-  4 months,  Actual time:- 4.5 months

Calculations:

Productivity:- 480 / 24 = 20 FP/person-month

Defect Density:- 36 / 30 = 1.2 defects/KLOC

Schedule Variance:- 4.5 − 4 = 0.5 month delay

Analysis Result:- Productivity: Average, Quality: Good, Schedule: Slight delay, Overall team performance: Satisfactory
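The three calculations in this real-life example can be verified with a short script (a minimal sketch re-using the collected figures):

```python
# Collected metrics for the Online Shopping Website project.
fp_delivered, effort_pm = 480, 24   # function points, person-months
defects, size_kloc = 36, 30         # defects found, size in KLOC
planned, actual = 4, 4.5            # months

productivity = fp_delivered / effort_pm   # FP per person-month
defect_density = defects / size_kloc      # defects per KLOC
schedule_variance = actual - planned      # months of delay (positive = late)

assert productivity == 20.0
assert defect_density == 1.2
assert schedule_variance == 0.5
```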

 

UNIT 4 IS OVER


















