Manual Testing

Types of Testing

1. Manual Testing

Manual testing is a type of software testing in which test cases are executed by hand rather than with an automated tool. The tester runs every test case from the perspective of the end user and checks whether the application meets the requirements specified in the requirement document. Test cases are designed and executed to cover as close to 100% of the application as is practical, and test case reports are also produced manually.

Manual testing is one of the most fundamental testing methods because it can detect both visible and hidden software defects. A defect is a difference between the expected output and the output the software actually produces. The developer fixes the defects and hands the build back to the tester for retesting.

Any newly developed software must undergo manual testing before automated testing begins. Manual testing takes considerable time and effort, but it gives confidence that the product is free of obvious defects. It requires familiarity with manual testing techniques, but not with automation tools.

One of the fundamentals of software testing is that "100% automation is not possible," which is why manual testing remains necessary.

Why we need manual testing

Whenever an application is released to the market, it may turn out to be unstable, contain bugs, or cause problems for end users. To avoid these issues, we perform at least one round of testing to make the application stable and bug-free and to deliver a high-quality product to the client, because a bug-free application is far more convenient for the end user.

When a test engineer performs manual testing, he or she tests the application from the perspective of an end user and gains a deeper understanding of the product, which helps in writing the right test cases for the application and providing timely feedback.
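A manual test case is usually recorded as a structured document that the tester fills in by hand during execution. The sketch below illustrates one possible record structure; the field names and the login scenario are illustrative assumptions, not a formal standard.

```python
# A minimal sketch of how a manual test case might be recorded.
# Field names and the scenario are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ManualTestCase:
    case_id: str
    title: str
    steps: list               # ordered actions the tester performs by hand
    expected_result: str      # what the requirement document says should happen
    actual_result: str = ""   # filled in by the tester during execution
    status: str = "Not Run"   # "Pass" or "Fail" after comparison

login_case = ManualTestCase(
    case_id="TC-001",
    title="Login with valid credentials",
    steps=["Open the login page",
           "Enter a registered username and password",
           "Click the Login button"],
    expected_result="User is redirected to the dashboard",
)

# After executing the steps by hand, the tester records the outcome
# and compares it against the expected result:
login_case.actual_result = "User is redirected to the dashboard"
login_case.status = ("Pass" if login_case.actual_result == login_case.expected_result
                     else "Fail")
```

In practice such records often live in a spreadsheet or test management tool; the comparison of expected and actual output is the core of every manual test.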

Types of Manual Testing

Manual testing can be carried out in several ways, each applied according to its own testing requirements. The types of manual testing are listed below:

  • White Box Testing
  • Black Box Testing
  • Gray Box Testing


Note: Each type is explained in detail in the respective modules.



2. Automation Testing

Now, we are going to learn about the following topics of automation testing:

  • Introduction to Automation testing
  • Why do we need to perform automation testing?
  • Different approaches used in automation testing
  • Automation testing process
  • What are different challenges faced during the automation testing process?
  • Automation testing tools
  • Benefits and drawbacks of automation testing

An Overview

Automation testing is another type of software testing, one that uses specialised tools to run test scripts without human intervention. It is the most effective way to increase the efficiency, productivity, and test coverage of software testing.

With the help of an automation testing tool, we can quickly access the test data, control the test execution, and compare the actual output with the expected output.

In automation testing, the test automation engineer writes test scripts or uses automation testing tools to exercise the application. In manual testing, by contrast, the test engineer writes the test cases and then exercises the product by following them by hand.

Test automation lets the test engineer automate repetitive operations and similar duties; executing the same repetitive task over and over again in manual testing is a tiresome job.
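To make the contrast concrete, here is a minimal sketch of an automated test script using Python's built-in unittest module. The function under test, `apply_discount`, is a hypothetical stand-in for application code; the tool re-runs every check identically each time, which is exactly the repetitive work that tires a manual tester.

```python
# A minimal sketch of an automated test script (Python unittest).
# apply_discount is an illustrative stand-in for the code under test.
import unittest

def apply_discount(price, percent):
    """Application code under test (illustrative)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class DiscountTests(unittest.TestCase):
    def test_typical_discount(self):
        # The tool compares the actual output with the expected output for us.
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()  # re-runs every check, however often, without fatigue
```

Once written, the script can be executed on every build with a single command, whereas a manual tester would have to repeat each step by hand.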

Methodologies

  • GUI Testing
  • Code-Driven
  • Test Automation Framework

Automation Testing Process

The automation testing process is a method of organising and executing testing activities so that maximum test coverage is achieved with minimal resources. The test effort is structured as a multi-step process whose tasks are detailed and interconnected.

Step 1: Decision to Automation Testing

The decision to automate is the initial phase of the Automation Test Life-cycle Methodology (ATLM). At this point the testing team's main focus is to manage test expectations and determine the potential benefits of applying automated testing correctly.

Organizations must deal with a number of challenges when introducing an automated testing suite, some of which are outlined below:

  • Automation testing requires testing-tool expertise, so the first issue is engaging a testing tool specialist.
  • The second challenge is determining the best tool for testing a specific function.
  • Design and development standards play an important role in automating testing.
  • Several automated testing tools must be evaluated in order to select the most appropriate one.
  • Money and time become issues, since both are consumed heavily at the start of the testing effort.

Step 2: Test Tool Selection

Test tool selection is the second phase of the Automation Test Life-cycle Methodology (ATLM). This phase guides the tester in evaluating and choosing a testing tool.

Even when a testing tool meets practically all testing criteria, the tester must still study the system engineering environment and other organisational requirements before compiling a list of tool evaluation parameters. The test engineers then evaluate the candidate tools against the sample criteria.

Step 3: Scope Introduction

Scope introduction is the third phase of the Automation Test Life-cycle Methodology (ATLM). The scope of automation is the area of the application to be tested. The following factors are considered when determining scope:

  • Functionality that is common across software applications.
  • Automation testing establishes a reusable set of business components.
  • Automation testing determines the extent to which business components can be reused.
  • Business-specific features must be included and must be technically feasible to automate.
  • Automation testing allows test cases to be replicated for cross-browser testing.
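The replication of one test case across several configurations can be sketched with Python's unittest `subTest` mechanism. The "browsers" here are just illustrative labels and no real browser is driven; `render_title` is an assumed stand-in for behaviour that should be identical in every configuration.

```python
# Sketch: one reusable test case replicated across several configurations,
# in the spirit of cross-browser replication. No real browser is driven;
# render_title is an illustrative stand-in for the behaviour under test.
import unittest

def render_title(page_title, browser):
    # Behaviour that should be identical regardless of configuration.
    return page_title.strip()

class CrossConfigTest(unittest.TestCase):
    BROWSERS = ["chrome", "firefox", "edge"]  # assumed target list

    def test_title_is_consistent(self):
        # The same test case body is replicated once per configuration.
        for browser in self.BROWSERS:
            with self.subTest(browser=browser):
                self.assertEqual(render_title("  Home  ", browser), "Home")

if __name__ == "__main__":
    unittest.main()
```

In a real cross-browser suite the loop variable would select a browser driver rather than a label, but the reuse pattern is the same: one test case, many configurations.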

Step 4: Test Planning and Development

Test planning and development, the fourth and most significant phase of the Automation Test Life-cycle Methodology (ATLM), is where all of the testing strategies are defined. This phase covers the planning of long-lead test activities, the creation of standards and guidelines, the configuration of the required combination of hardware, software, and network to create a test environment, the defect tracking procedure, and the guidelines for controlling the test configuration and environment. The tester estimates the project's expected effort and cost. The deliverables of this phase are the test strategy and effort estimation documents.

Step 5: Test Case Execution

Test case execution is the fifth phase of the Automation Test Life-cycle Methodology (ATLM). It takes place after the successful completion of test planning, once the testing team has defined the test design and development. Test cases can now be run as part of product testing. During this phase, the testing team uses automated tools to develop and execute test cases. The prepared test cases are reviewed by peer members of the testing team or by quality assurance leads.

During the execution of test procedures, the testing team is expected to stick to the execution schedule. The execution phase carries out the techniques specified in the test plan, such as integration, acceptance, and unit testing.

Step 6: Review and Assessment

Review and assessment is the sixth and final phase of the automated testing life cycle, though its activities are carried out throughout the life cycle to ensure continuous quality improvement. The improvement process comprises the examination of metrics and the review and assessment of activities.

During the evaluation, the examiner focuses on whether each measure meets the acceptance criteria; if it does, the software is ready for production use. The evaluation is thorough, since every functionality of the application is covered by test cases.

The testing team conducts its own survey to determine the process's potential value; if the potential benefit is insufficient, the testing approach can be changed. The team also circulates a sample survey form to gather feedback from end users about the software's features and management.


Types of Manual Testing:

1. White Box Testing

Software testing is divided into two main categories: black box testing and white box testing. White box testing, also known as glass box, structural, clear box, open box, or transparent box testing, examines a program's internal code and infrastructure, focusing on comparing predefined inputs against expected and intended outputs. It centres on testing the internal structure and is based on the inner workings of the program, so designing test cases for it requires programming expertise. The main purpose of white box testing is to concentrate on the flow of inputs and outputs through the software while also strengthening its security.

The term "white box" is used because of the tester's internal view of the system. The names "clear box," "white box," and "transparent box" all refer to the ability to see through the software's outer shell into its inner workings.

White box testing is done by developers, who test each line of the program's code. The developers perform white box testing before handing the application or software over to the testing team, which then performs black box testing, verifies the application against the requirements, identifies flaws, and reports them back to the developers.

The developer fixes the reported bugs and performs another round of white box testing before returning the code to the testing team. Fixing a bug in this context means that the defect has been removed from the application and the affected functionality now works correctly.

The test engineers do not get involved in fixing the defects, for the following reasons:

  • Fixing a bug may cause other features to stop working. As a result, test engineers should focus on identifying defects, and developers should continue to fix them.
  • If test engineers spend most of their time fixing defects, they may be unable to find the application's remaining bugs.

Generic steps of white box testing


  • Create all test scenarios and test cases and assign a priority to each.
  • This stage entails looking at code during runtime to see what resources are being used, what portions of the code aren't being used, how long certain procedures and operations take, and so on.
  • Internal subroutines are tested during this step. Internal subroutines, such as non-public methods and interfaces, can handle any form of data correctly or incorrectly.
  • This step focuses on checking the efficiency and quality of control statements such as loops and conditional statements for various data inputs.
  • Finally, white box testing involves security testing to identify any security flaws.
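The steps above can be sketched with a small example. Because the tester can read the code, a test input is chosen for every internal branch, including the error path; `shipping_cost` and its rate figures are illustrative assumptions, not a real pricing rule.

```python
# Sketch of white box testing: the tests are written with knowledge of the
# internal branches, so every path through the conditionals is exercised.
# shipping_cost and its rates are illustrative assumptions.
def shipping_cost(weight_kg):
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 1:        # branch A: light parcel
        return 5.0
    elif weight_kg <= 10:     # branch B: standard parcel
        return 5.0 + (weight_kg - 1) * 1.5
    else:                     # branch C: heavy parcel
        return 18.5 + (weight_kg - 10) * 1.0

# One test input per internal branch, chosen by reading the code:
assert shipping_cost(0.5) == 5.0     # branch A
assert shipping_cost(5) == 11.0      # branch B: 5.0 + 4 * 1.5
assert shipping_cost(12) == 20.5     # branch C: 18.5 + 2 * 1.0
try:
    shipping_cost(0)                 # error branch
except ValueError:
    pass
```

A purely black box tester, working only from the price list, might never think to probe the boundary at 10 kg; the white box tester sees it directly in the `elif` condition.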

Reasons for white box testing

  • It detects security flaws within the organisation.
  • To inspect the input method within the code.
  • Examine the conditional loops' functionality.
  • To test each function, object, and statement separately.


2. Black Box Testing

Black box testing is a type of software testing that examines the software's functionality without looking at its internal structure or code. The customer's statement of requirements is the most common basis for black box testing.

In this method, the tester selects a function, supplies an input value, and examines whether the function produces the expected output. If it does, the function passes testing; otherwise, it fails. The test team reports the results to the development team and then moves on to the next function. If serious problems remain after all functions have been tested, the software is returned to the development team for correction.

Generic steps of black box testing

  • Because black box testing is based on the requirements specification, the specification is examined first.
  • In the second phase, the tester creates positive and negative test scenarios by picking valid and invalid input values to check whether the software processes them correctly.
  • In the third phase, the tester designs test cases using techniques such as decision tables, all-pairs testing, equivalence partitioning, error guessing, and cause-effect graphing.
  • The fourth step is the execution of all test cases.
  • In the fifth step, the tester compares the expected output with the actual output.
  • If a defect is found in the software, it is fixed and tested again in the last phase.

Test cases

Test cases are created with the requirements in mind. They are typically built from working descriptions of the software, including requirements, design parameters, and other specifications. To verify the expected output, the test designer chooses both positive test scenarios with valid input values and negative test scenarios with invalid input values. Test cases are created primarily for functional testing, although they can also be used for non-functional testing. The testing team, not the software development team, is responsible for creating them.
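A rough sketch of black box test design: the test values below are derived purely from an assumed requirement, "age must be an integer from 18 to 60 inclusive", without reading the implementation. The validator `is_eligible_age` is a hypothetical stand-in for the system under test.

```python
# Sketch of black box test design via equivalence partitioning and
# boundary values. Inputs come from the specification alone; the
# validator is an illustrative stand-in for the system under test.
def is_eligible_age(age):
    return isinstance(age, int) and 18 <= age <= 60

# Partitions and boundaries taken from the (assumed) specification:
valid_inputs   = [18, 35, 60]        # valid partition plus both boundaries
invalid_inputs = [17, 61, -5, "25"]  # below, above, nonsense, wrong type

assert all(is_eligible_age(a) for a in valid_inputs)
assert not any(is_eligible_age(a) for a in invalid_inputs)
```

Note that the tester never looks inside `is_eligible_age`; only the inputs and the observed outputs matter, which is the defining property of black box testing.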



3. Grey Box Testing

Grey box testing is a software testing method in which a program is tested with only partial knowledge of its internal workings. It is a hybrid of the other two approaches: test cases are designed with access to the internal code, as in white box testing, while the testing itself is performed at the functionality level, as in black box testing.



Grey box testing is frequently used to identify context-specific issues in web systems. For example, if a tester detects a fault during testing, the code is changed to fix the problem and then retested in real time. Grey box testing focuses on all layers of a complex software system to increase test coverage; it allows both the presentation layer and the core code structure to be tested. It is mostly used for integration and penetration testing.

Generic steps to perform grey box testing are:

  1. First, identify and choose inputs from the black box and white box testing inputs.
  2. Determine the expected outputs for the chosen inputs.
  3. List all the important paths to take during the testing session.
  4. Identify the sub-functions that are part of the primary functions, so that deeper-level testing can be performed.
  5. Identify the inputs for those sub-functions.
  6. Identify the expected outputs for the sub-functions.
  7. Execute the test cases for the sub-functions.
  8. Verify that the results are correct.

The test cases designed for grey box testing cover security-related, browser-related, GUI-related, operating-system-related, and database-related testing.
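The hybrid nature of grey box testing can be sketched in a few lines. The test drives the system through its public interface (the black box part) but also verifies a known internal detail, here a cache, using partial knowledge of the implementation (the white box part). The `PriceService` class, its cache attribute, and the fixed price are all illustrative assumptions.

```python
# Sketch of grey box testing: public-interface checks combined with a
# check on a known internal detail. PriceService is an illustrative
# stand-in; its _cache attribute is the internal detail the tester knows.
class PriceService:
    def __init__(self):
        self._cache = {}      # internal detail known to the grey box tester
        self.lookups = 0      # counts simulated expensive backend calls

    def get_price(self, sku):
        if sku not in self._cache:
            self.lookups += 1            # simulates hitting the backend
            self._cache[sku] = 9.99      # illustrative fixed price
        return self._cache[sku]

svc = PriceService()

# Black box part: correct output through the public interface.
assert svc.get_price("A100") == 9.99
assert svc.get_price("A100") == 9.99

# Grey box part: partial knowledge of the internals lets the tester
# confirm the second call was served from the cache, not the backend.
assert svc.lookups == 1
assert "A100" in svc._cache
```

A pure black box test could only observe the two identical return values; the grey box tester's partial view of the internals is what makes the caching behaviour verifiable.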