
Compatibility Testing - Definition, Types, Tools Used



What is Compatibility Testing

  • Compatibility testing checks whether your software is capable of running on different hardware, operating systems, applications, network environments or mobile devices.
  • Compatibility testing is a type of non-functional testing.
  • The initial phase of compatibility testing is to define the set of environments or platforms the application is expected to work on.
  • The tester should have enough knowledge of the platforms / software / hardware to understand the expected application behavior under different configurations.
  • The environment needs to be set up for testing with different platforms, devices and networks to check whether your application runs well under different configurations.
  • Report the bugs, fix the defects, and re-test to confirm the fixes.


Types of Compatibility testing :

  • Hardware
  • Operating Systems
  • Software
  • Network
  • Browser
  • Devices
  • Mobile
  • Versions of the software
Let’s look at each compatibility testing type briefly.

Hardware : Checks that the software is compatible with different hardware configurations.

Operating Systems: Checks that your software is compatible with different operating systems like Windows, Unix, Mac OS, etc.

Software: Checks that your developed software is compatible with other software. For example, the MS Word application should be compatible with other software like MS Outlook, MS Excel, VBA, etc.

Network: Evaluates the performance of the system in a network with varying parameters such as bandwidth, operating speed and capacity. It also checks the application in different networks with all the parameters mentioned earlier.

Browser: Checks the compatibility of your website with different browsers like Firefox, Google Chrome, Internet Explorer, etc.

Devices : Checks the compatibility of your software with different devices like USB port devices, printers and scanners, other media devices and Bluetooth.

Mobile: Checks whether your software is compatible with mobile platforms like Android, iOS, etc.

Versions of the software: Verifies that your software application is compatible with different versions of the software. For instance, checking that Microsoft Word is compatible with Windows 7, Windows 7 SP1, Windows 7 SP2 and Windows 7 SP3.

There are two types of version checking:

  • Backward compatibility testing
  • Forward compatibility testing

Backward compatibility Testing : verifies the behavior of the developed hardware/software with older versions of the hardware/software.

Forward compatibility Testing : verifies the behavior of the developed hardware/software with newer versions of the hardware/software.
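As an illustration, backward compatibility can be sketched with a toy example (a hypothetical key=value settings format, not from the article) in which version 2 of a reader must still accept files written by version 1:

```python
# Hypothetical v2 settings reader; the "theme" field is new in v2.
def read_settings(text):
    """Parse key=value lines; default any field that older files omit."""
    fields = dict(line.split("=", 1) for line in text.splitlines() if line)
    fields.setdefault("theme", "light")  # default keeps v1 files readable
    return fields

v1_file = "user=alice\nlang=en"             # written by the older version
v2_file = "user=alice\nlang=en\ntheme=dark"

old_settings = read_settings(v1_file)  # backward compatibility: still parses
new_settings = read_settings(v2_file)
```

A backward compatibility test would assert that the v1 file is read without errors and gets a sensible default; a forward compatibility test would run the older reader against the v2 file and check that the unknown field is at least tolerated.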

Tools for compatibility testing

  • Adobe Browser Lab – Browser Compatibility Testing - This tool helps check your application in different browsers.
  • Secure Platform – Hardware Compatibility Tool - This tool includes the necessary drivers for a specific hardware platform and provides information for checking a CD burning process with CD burning tools.
  • Virtual Desktops – Operating System Compatibility - This is used to run the application on multiple operating systems as virtual machines. Any number of systems can be connected and their results compared.

Adhoc Testing - Definition, Types, Advantages, Disadvantages



Adhoc Testing

Definition :

  • Adhoc testing is an informal testing type with the aim of breaking the system.
  • This testing is usually an unplanned activity.
  • It does not follow any test design techniques to create test cases. In fact, it does not create test cases at all!
  • It is primarily performed when the testers' knowledge of the system under test is very high.
  • Testers randomly test the application without any test cases or any business requirement document.
  • Adhoc testing can be achieved with the testing technique called error guessing.
  • Error guessing can be done by people having enough experience with the system to “guess” the most likely sources of errors.
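Error guessing can be illustrated with a small sketch. The function `average` below is a hypothetical system under test (not from the article), and the guessed inputs are the kinds of values experienced testers suspect first:

```python
# Hypothetical function under test.
def average(values):
    return sum(values) / len(values)

# Inputs an experienced tester would "guess" at: empty data, zeros, extremes.
guessed_inputs = [
    [],            # empty input -- a classic crash candidate
    [0, 0, 0],     # all zeros
    [10**18, 1],   # very large numbers
]

defects = []
for data in guessed_inputs:
    try:
        average(data)
    except Exception as exc:
        defects.append((data, type(exc).__name__))
```

Here the empty list exposes a `ZeroDivisionError` that purely random testing might take much longer to stumble on.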




Types of adhoc testing

  1. Buddy Testing
  2. Pair testing
  3. Monkey Testing

Buddy Testing

Two buddies mutually work on identifying defects in the same module. Usually one buddy is from the development team and the other is from the testing team. Buddy testing helps the testers develop better test cases, and the development team can also make design changes early. This testing usually happens after unit testing is complete.

Pair testing

Two testers are assigned modules, share ideas and work on the same machine to find defects. One person executes the tests and the other takes notes on the findings. The roles during testing are thus tester and scribe.

Buddy testing combines unit and system testing, with developers and testers working together, whereas pair testing is done only by testers with different knowledge levels (experienced and non-experienced) so they can share their ideas and views.

Monkey Testing

Randomly test the product or application without test cases, with the goal of breaking the system.
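A minimal monkey-testing sketch (the function under test is hypothetical, not from the article): feed random strings to the code and record anything that crashes it.

```python
import random
import string

def first_letter_upper(text):
    # Hypothetical function under test -- it wrongly assumes non-empty input.
    return text[0].upper() + text[1:]

def monkey_test(func, runs=500, seed=7):
    """Throw random strings at func and record inputs that crash it."""
    rng = random.Random(seed)  # fixed seed so failures can be reproduced
    failures = []
    for _ in range(runs):
        junk = "".join(rng.choice(string.printable)
                       for _ in range(rng.randint(0, 6)))
        try:
            func(junk)
        except Exception as exc:
            failures.append((junk, type(exc).__name__))
    return failures

failures = monkey_test(first_letter_upper)
```

Sooner or later the random generator produces an empty string, and the `IndexError` it triggers is exactly the kind of defect monkey testing exists to surface; seeding the generator mitigates the usual reproducibility complaint.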

Advantages of Adhoc Testing :

  • Adhoc testing saves a lot of time as it doesn’t require elaborate test planning, documentation or test case design.
  • It checks the completeness of testing and can find more defects than planned testing.

Disadvantages of Adhoc Testing : 

  • This testing requires no documentation, planning or process to be followed. Since it aims at finding defects through a random approach, without any documentation, defects will not be mapped to test cases. Hence it is sometimes very difficult to reproduce the defects, as there are no test steps or requirements mapped to them.

Define Equivalence Partitioning with Examples



Define Equivalence Partitioning with Examples

What is Equivalence Partitioning ?

The technique is to divide (i.e. to partition) a set of test conditions into groups or sets that can be considered the same (i.e. the system should handle them equivalently), hence ‘equivalence partitioning’. Equivalence partitions are also known as equivalence classes – the two terms mean exactly the same thing.

Example 1 for Equivalence partitioning :

Test cases for an input box accepting numbers between 1 and 1000, using equivalence partitioning:
1) One input data class with all valid inputs: pick a single value from the range 1 to 1000 as a valid test case. If you select other values between 1 and 1000, the result is going to be the same, so one test case for valid input data is sufficient.

2) An input data class with all values below the lower limit, i.e. any value below 1, as an invalid input data test case.

3) Input data with any value greater than 1000, to represent the third, invalid input class.
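The three partitions above translate directly into code. The validator below is a hypothetical stand-in for the input box:

```python
def accepts_number(value):
    """Hypothetical input-box rule: accept integers 1..1000 inclusive."""
    return 1 <= value <= 1000

# One representative value per equivalence class is enough:
partitions = {
    "below range (invalid)": (0, False),
    "within range (valid)":  (500, True),
    "above range (invalid)": (1001, False),
}

for name, (sample, expected) in partitions.items():
    assert accepts_number(sample) == expected, name
```

Any other representative from the same class (say 2, 733 or 5000) would exercise the same behavior, which is the whole point of partitioning.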

Example 2 for Equivalence partitioning :

For example, in a savings bank account:
3% interest is given if the balance is from $0 up to (but not including) $100,
5% interest is given if the balance is from $100 up to (but not including) $1000,
and 7% interest is given if the balance is $1000 and above.

We would initially identify three valid equivalence partitions and one invalid partition (a negative balance), as shown below.

[Image: equivalence partitions for the savings-account interest rates]
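The banding rule can also be sketched as code, assuming (as in the prose above) that each band includes its lower boundary; `interest_rate` is a hypothetical stand-in for the bank's logic:

```python
def interest_rate(balance):
    """Hypothetical rule: 3% below $100, 5% below $1000, 7% from $1000 up."""
    if balance < 0:
        raise ValueError("negative balance -- the invalid partition")
    if balance < 100:
        return 3
    if balance < 1000:
        return 5
    return 7

# One representative value per valid partition:
assert interest_rate(50) == 3
assert interest_rate(500) == 5
assert interest_rate(5000) == 7
```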


Example 3 for Equivalence partitioning :

A store in the city offers different discounts depending on the purchases made by an individual. In order to test the software that calculates the discounts, we can identify the ranges of purchase values that earn different discounts. For example, a purchase in the range of $1 up to $50 gets no discount, a purchase over $50 and up to $200 gets a 5% discount, purchases of $201 up to $500 get a 10% discount, and purchases of $501 and above get a 15% discount.

Now we can identify 4 valid equivalence partitions and 1 invalid partition as shown below:

Invalid Partition      Valid (No Discount)   Valid (5%)   Valid (10%)   Valid (15%)
below $1 (e.g. $0.01)  $1-$50                $51-$200     $201-$500     $501 and above
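The same partitions in code form; `discount_percent` is a hypothetical stand-in for the store's software:

```python
def discount_percent(purchase):
    """Hypothetical discount bands from the example above."""
    if purchase < 1:
        raise ValueError("below $1 -- the invalid partition")
    if purchase <= 50:
        return 0
    if purchase <= 200:
        return 5
    if purchase <= 500:
        return 10
    return 15

# One representative value per valid partition:
samples = {25: 0, 100: 5, 300: 10, 700: 15}
for purchase, expected in samples.items():
    assert discount_percent(purchase) == expected
```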

Happy Sharing! :)

Define Boundary Value Analysis with Examples



Define Boundary Value Analysis with Examples :

What is Boundary Value Analysis ?

A test data selection technique in which values are chosen to lie along data extremes. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values. The hope is that, if a system works correctly for these special values, then it will work correctly for all values in between.

Example 1 for Boundary Value Analysis : 

Password field accepts minimum 6 characters and maximum 12 characters. [Range is 6-12]

Write test cases considering values from the valid region, from each invalid region, and the values that define the exact boundaries.

We need to execute 5 test cases for Example 1:
1. Consider a password of length less than 6
2. Consider a password of length exactly 6
3. Consider a password of length between 7 and 11
4. Consider a password of length exactly 12
5. Consider a password of length more than 12

Note : the 1st and 5th test cases are considered negative testing.
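The five test cases map one-to-one onto boundary values in code; `password_length_ok` is a hypothetical stand-in for the password field's validation:

```python
def password_length_ok(password):
    """Hypothetical rule: password must be 6 to 12 characters long."""
    return 6 <= len(password) <= 12

# The five boundary-value cases from the list above:
cases = [
    ("a" * 5,  False),  # 1. below minimum (negative test)
    ("a" * 6,  True),   # 2. exactly the minimum
    ("a" * 9,  True),   # 3. a typical in-range length
    ("a" * 12, True),   # 4. exactly the maximum
    ("a" * 13, False),  # 5. above maximum (negative test)
]

for pwd, expected in cases:
    assert password_length_ok(pwd) == expected
```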



Example 2 for Boundary Value Analysis :

Test cases for an input box accepting numbers between 1 and 1000, using boundary value analysis:
1) Test cases with test data exactly on the boundaries of the input domain, i.e. values 1 and 1000 in our case.
2) Test data with values just below the extreme edges of the input domain, i.e. values 0 and 999.
3) Test data with values just above the extreme edges of the input domain, i.e. values 2 and 1001.
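These six boundary values can be checked mechanically against the same hypothetical 1-1000 input rule used earlier:

```python
def in_range(n):
    """Hypothetical input-box rule: accept integers 1..1000 inclusive."""
    return 1 <= n <= 1000

# The boundaries themselves plus their immediate neighbours:
boundary_cases = {0: False, 1: True, 2: True, 999: True, 1000: True, 1001: False}

for value, expected in boundary_cases.items():
    assert in_range(value) == expected
```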


Example 3 for Boundary Value Analysis :

Consider a Name text box which allows 1-30 characters. Writing test cases for every possible character count would be very tedious, so we choose boundary value analysis.

In this case at most 5 test cases are needed:

Test case 1: minimum - 1 characters: validate entering nothing in the text box (0 characters)
Test case 2: minimum characters: validate with only one character
Test case 3: maximum - 1 characters: validate with 29 characters
Test case 4: maximum + 1 characters: validate with 31 characters
Test case 5: any one middle number: validate with 15 characters

Happy Sharing!

What Is System Testing



What Is System Testing

Testing the behavior of the whole software/system as defined in the software requirements specification (SRS) is known as system testing. Its main focus is to verify that the customer requirements are fulfilled.

System testing is done after integration testing is complete. 

In system testing, there are two types of testing:

[Image: system testing divided into functional and non-functional testing]

Functionality testing checks whether the application functions as per the requirements.

Non-functional testing is of several types:
  • Load
  • Stress
  • Performance
  • Reliability
  • Security
  • Usability
  • Configuration
  • Compatibility (forward & backward)
  • Scalability, etc.


Different Types Of White Box Testing



Different Types Of White Box Testing

Path Testing :

Exercising each and every individual path through which the flow of the code can take place.

Loop Testing :

A piece of code executes continuously until its condition becomes false; loop testing checks whether the loop behaves properly.

Ex: for (loop = 1; loop <= 10; loop++)
      {
        /* loop body executed on each iteration */
      }
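In practice, loop testing exercises a loop at zero, one, typical, and maximum iteration counts. The function below is a hypothetical example, written in Python for brevity:

```python
def sum_first_n(n):
    """Hypothetical loop under test: add the integers 1..n."""
    total = 0
    for i in range(1, n + 1):  # executes n times
        total += i
    return total

# Loop boundaries: zero iterations, one, a typical count, the maximum (10).
loop_cases = {0: 0, 1: 1, 5: 15, 10: 55}
for n, expected in loop_cases.items():
    assert sum_first_n(n) == expected
```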

Bug Life Cycle Or Defect Life Cycle In Software Testing



Bug Life Cycle Or Defect Life Cycle In Software Testing

Defect life cycle is the cycle a defect goes through during its lifetime. It starts when the defect is found and ends when the defect is closed, after ensuring it is not reproduced. The defect life cycle is related to the bugs found during testing.
[Image: bug life cycle diagram]

Automation Testing Interview Questions



Automation Testing Interview Questions

  1. What are principles of good testing scripts for automation?
  2. What are some of the common misconceptions during implementation of an automated testing tools for the first time?
  3. What are the limitations of automating software testing?
  4. What are the main attributes of test automation?
  5. Can test automation improve test effectiveness?

Integration Testing And Types Of Integration Testing



Define Integration Testing And Types Of Integration Testing

Integration Testing:

Combining the modules and testing the flow of data between them. Integration Testing is divided into 2 types.

  • Incremental Integration Testing:

Adding the modules incrementally and checking the data flow between them. Modules are added in a sequential fashion.

This can be done in two ways:
    • Top-Down Approach
    • Bottom-Up approach.

Define Agile Model and its Advantages



Define Agile Model and its Advantages

Agile Model or Agile Methodology :

Agile development methodology attempts to provide many opportunities to assess the direction of a project throughout the development life cycle. Agile methods break tasks into small increments with minimal planning and do not directly involve long-term planning. Iterations are short time frames that typically last from one to four weeks. Each iteration involves a cross-functional team working in all functions: planning, requirements analysis, design, coding, unit testing, and acceptance testing. At the end of the iteration a working product is demonstrated to stakeholders. This minimizes overall risk and allows the project to adapt to changes quickly. An iteration might not add enough functionality to warrant a market release, but the goal is to have an available release at the end of each iteration. Multiple iterations might be required to release a product or new features.

Software Testing Life Cycle (STLC)



Software Testing Life Cycle (STLC)

Software Testing Life Cycle (STLC) defines the steps/stages/phases in testing of software. 

The different stages in Software Testing Life Cycle:


  1. Requirement Analysis
  2. Test Planning
  3. Test Case Design / Development
  4. Environment setup
  5. Test Execution / Reporting / Defect Tracking
  6. Test Cycle Closure / Retrospective study


Difference Between Severity And Priority



Difference Between Severity And Priority

Severity: It is with respect to the impact on the functionality.

Priority: It is with respect to the impact on the business.
Example for High Severity:

The quarterly statement event is not triggering on the website, and we are just at the beginning of a new quarter after a new release. In this case the severity is high, but the priority could be low because we have time until the quarter end to fix the bug.
Example for High Priority:

The client logo is not appearing on the website, but the site is otherwise working fine. In this case the severity is low but the priority is high, because from the company's reputation standpoint it is most important to resolve. After all, reputation wins more clients and projects and hence increases revenue.

What If There Isn't Enough Time For Thorough Testing?



What If There Isn't Enough Time For Thorough Testing?

Use risk analysis to determine where testing should be focused. This requires judgment skills, common sense and experience. The checklist should include answers to the following questions:

  • Which functionality is most important to the project's intended purpose? 
  • Which functionality is most visible to the user? 
  • Which functionality has the largest safety impact? 
  • Which functionality has the largest financial impact on users? 

Difference Between Verification And Validation



Difference Between Verification And Validation

Verification: Are we building the product right? Verification checks whether the software conforms to the specifications and is done by the development team at various development phases. During the development phase, the SRS document, design document and the code are reviewed to ensure that the product is being developed using a process-oriented approach. It is an in-house activity of the development organization, and a quality assurance activity that prevents defects in the product.

Validation: Are we building the right product? Validation checks whether the software meets the customer's expectations. This is done by testing the software for its functionality and other requirements as mentioned in the requirements specification documents. Validation is carried out with customer involvement. It is a quality control activity that detects defects during testing of the product.

Difference Between Test Scenario And Test Case



Difference Between Test Scenario And Test Case

  • A test scenario is ‘what to be tested’; a test case is ‘how to be tested’.
  • A test scenario is essentially a test procedure; a test case consists of a set of input values, execution preconditions, expected results and execution postconditions, developed to cover a certain test condition.
  • Scenarios are derived from use cases; test cases are derived (or written) from test scenarios.
  • A test scenario represents a series of actions that are associated together; a test case represents a single (low-level) action by the user.
  • A scenario is a thread of operations; test cases are sets of inputs and outputs given to the system.

For example:
  • Checking the functionality of Login button is Test scenario
  • Test Cases for this Test Scenario are:
    • Click the button without entering user name and password.
    • Click the button only entering User name.
    • Click the button while entering wrong user name and wrong password and etc...
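The scenario/case split above can be written down directly, for instance with Python's `unittest`; `try_login` is a hypothetical stand-in for the application's real login call:

```python
import io
import unittest

class LoginButtonScenario(unittest.TestCase):
    """One test scenario (the Login button) split into individual test cases."""

    @staticmethod
    def try_login(username, password):
        # Hypothetical system under test: exactly one valid credential pair.
        return username == "alice" and password == "s3cret"

    def test_empty_username_and_password(self):
        self.assertFalse(self.try_login("", ""))

    def test_username_only(self):
        self.assertFalse(self.try_login("alice", ""))

    def test_wrong_username_and_password(self):
        self.assertFalse(self.try_login("bob", "wrong"))

suite = unittest.defaultTestLoader.loadTestsFromTestCase(LoginButtonScenario)
result = unittest.TextTestRunner(stream=io.StringIO(), verbosity=0).run(suite)
```

Each test method is one test case; together they cover the single "login button works" scenario.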

Web Testing Interview Questions Answers



Web Testing Interview Questions Answers


What is Web application?

  • It is a software application that is accessed over a network such as the Internet or an intranet through a web browser. A web application provides services (free and paid) apart from information. Ex: an online banking system provides bank information, branch & ATM information, loan information, etc., and it provides services like balance enquiry, fund transfer and bill payments.

What is Web browser?

  • Web browser is a software application used to locate, retrieve and also display content on the World Wide Web, including Web pages, images, videos and other files. Examples: Google Chrome, Mozilla Firefox, Internet Explorer, Opera, Safari.

What is website?

  • Basically, a website is an information provider; it provides information globally using internet protocols.

Top 100 Software Testing Interview Questions



Top 100 Software Testing Interview Questions

  1. What are the common problems with software automation?
  2. What are the key challenges of software testing?
  3. What is the role of QA in a project development?
  4. Can you explain V model in manual testing?
  5. Can you explain the structure of the bug life cycle?
  6. What is “bug leakage?” and what is “bug release?”
  7. Can you explain water fall model in manual testing?
  8. Can you explain me the levels in V model manual?

Compare Desktop testing, Client-Server testing and Web testing



We have three testing types
  • Desktop application testing,
  • Client server application testing and
  • Web application testing.
Each one differs in the environment in which it is tested, and you lose more and more control over the environment in which the application runs as you move from desktop to web applications.

DESKTOP APPLICATION:

It runs on personal computers and workstations, so when you test a desktop application you are focusing on a specific environment. You test the complete application broadly in categories like GUI, functionality, load and back-end, i.e. the DB.

CLIENT / SERVER TESTING:

In a client-server application you have two different components to test. The application is loaded on the server machine, while the application exe is installed on every client machine. You test broadly in categories like GUI on both sides, functionality, load, client-server interaction and back-end.

Difference between Web testing and Client-Server testing



Difference between Web testing and Client-Server testing :

CLIENT / SERVER TESTING

  • This type of testing is usually done for 2-tier applications (usually developed for a LAN).
  • Here we have a frontend and a backend. The application launched on the frontend has forms and reports for monitoring and manipulating data.
  • Such applications are developed in VB, VC++, Core Java, C, C++, D2K, PowerBuilder, etc.
  • The backend for these applications would be MS Access, SQL Server, Oracle, Sybase, MySQL, Quadbase, etc.
  • The tests performed on these applications are:
    - user interface testing
    - manual support testing
    - functionality testing
    - compatibility testing & configuration testing
    - intersystems testing

WEB TESTING

  • This is done for 3-tier applications (developed for the Internet / intranet / extranet).
  • Here we have a browser, a web server and a DB server. The applications accessible in the browser are developed in HTML, DHTML, XML, JavaScript, etc. (we can monitor through these applications).
  • Applications for the web server are developed in Adv Java, ASP, JSP, VBScript, JavaScript, Perl, ColdFusion, PHP, etc. (all manipulations are done on the web server with the help of these programs).
  • The DB server would have Oracle, SQL Server, Sybase, MySQL, etc. (all data is stored in the database on the DB server).
  • The tests performed on these applications are:
    - user interface testing
    - functionality testing
    - security testing
    - browser compatibility testing
    - load / stress testing
    - interoperability testing / intersystems testing
    - storage and data volume testing

Software testing Interview Questions – Based on Testing Levels



Software testing Interview Questions – Based on Testing Levels

  •  What is Acceptance Testing?
  • What is Alpha Testing?
  • What is Beta Testing?
  • What is Big-bang or System Approach?
  • What is Bottom-up Integration?
  • What is Certification Testing?

Software testing Interview Questions – Based on Testing Types



Software testing Interview Questions – Based on Testing Types

  • What are the prerequisites for Test Execution?
  • What are the roles of Test Lead in Test Execution stage?
  • What are the roles of Testers in Test Execution stage?
  • What is Accessibility Testing?
  • What is Ad-hoc Testing?
  • What is Agile testing?

WinRunner Automation Testing Interview Questions



WinRunner Automation Testing Interview Questions

  1. How do you handle an Exception in WinRunner?
  2. What’s your comfort level in using WinRunner?
  3. Running tests from the command line?
  4. Can you test DB using WinRunner?
  5. When do you go for Context Sensitive and Analog recordings? What’s the difference between them?
  6. When do you use Break Points?
  7. When do you use Verify/Debug/Update Modes?
  8. How do you invoke an application using TSL?

Advantages and Disadvantages of V-model



Advantages of V-model (SDLC)

These are the advantages V-Model offers in front of other systems development models:

  • The users of the V-Model participate in the development and maintenance of The V-Model. A change control board publicly maintains the V-Model. The change control board meets anywhere from every day to weekly and processes all change requests received during system development and test.
  • The V-Model provides concrete assistance on how to implement an activity and its work steps, defining explicitly the events needed to complete a work step: each activity schema contains instructions, recommendations and detailed explanations of the activity.

All (100) Types of Software Testing



All (100) Types of Software Testing

Below is an exhaustive list of types of software testing, with a brief description of each: load testing, stress testing, unit testing, system testing, acceptance testing, certification testing, performance testing, user acceptance testing, penetration testing, automated testing, beta testing, compatibility testing, security testing, benchmark testing, functional testing, negative testing, destructive testing, integration testing, regression testing, alpha testing, end-to-end testing, path testing, smoke testing, black box testing, stability testing, usability testing and many more. About 100 software testing types with descriptions are listed below.

Why there are so many types of software testing ?

The quality of software is assessed in terms of 6 quality factors (functionality, reliability, efficiency, usability, maintainability and portability). Each type of software testing listed below is designed to validate the software for one or more of these quality factors. More types of software testing have evolved to keep pace with the rapid increase in the complexity of software designs, frameworks & programming languages, the increased number of users with the popularity of the internet, and the advent of new platforms and technologies. With the increase in the number of software testing types to be performed, the need for software testing tools has increased as well.

1. Ad-hoc testing :

This is an informal type of software testing performed by software testers, business analysts, developers or any stakeholder without referring to test cases or documentation. The person performing ad-hoc testing usually has a good understanding of the software requirements and tries to break the software and find defects using the experience and knowledge they have about the domain, requirements and functionality of the software. Ad-hoc testing is intended to find defects that were not found by existing test cases.

2. Acceptance testing:

This is a formal type of software testing performed by the end customer to check whether the software conforms to their business needs and to the requirements provided earlier. Acceptance tests are usually documented; however, end customers may not document test cases for smaller versions or releases.

3. Accessibility testing:

This is a formal type of software testing that helps determine whether the software can be used by people with disabilities, such as blindness, deafness, color blindness or learning disabilities. There are also companies and consultants that provide website accessibility audits.

4. Aging testing:

This is a type of performance testing carried out by running the software for a long duration, like weeks or months, and checking whether the software's performance degrades or shows any signs of degradation after running for a long period of time. Aging testing is also known as soak testing or longevity testing.

5. Alpha Testing:

This is a formal type of testing performed by end customers at the development site. Alpha testing is conducted before taking the software to beta testing.

Software Test Documents - Test Plan, Test Scenario, Test Case, Traceability Matrix



Explain about Software Test Documents (artifacts)

Testing documentation involves the documentation of artifacts which should be developed before or during the testing of Software.

Documentation for Software testing helps in estimating the testing effort required, test coverage, requirement tracking/tracing etc. This section includes the description of some commonly used documented artifacts related to Software testing such as:

  • Test Plan
  • Test Scenario
  • Test Case
  • Traceability Matrix

Manual Testing Interview Questions Answers



Manual Testing Interview Questions Answers


  1. Can you explain the V model in manual testing?
  2. What is Re-testing testing?
  3. What is Recovery Testing?
  4. What is Regression testing?
  5. What is Sanity Testing?
  6. What is Scalability Testing?

Interview Questions for Software Testing Professionals



Interview Questions for Software Testing Professionals

A) Tester Role and Responsibilities:
B) Software Engineering, Quality Standards:
C) Manual Testing Concepts:
D) Test Automation Concepts (QTP):
E) General Software testing interview questions:
F) Environmental, Development Technologies

A) Tester Role and Responsibilities:

  • Differentiate Priority and Severity in your Defect Report?
  • Generally, when will you plan for test automation for a project in your company?
  • How many Defects did you detect in your project?

What is Acceptance Testing ?



What is Acceptance Testing :

This is arguably the most important type of testing, as it is conducted by the quality assurance team, who will gauge whether the application meets the intended specifications and satisfies the client's requirements. The QA team will have a set of pre-written scenarios and test cases that will be used to test the application.

More ideas will be shared about the application, and more tests can be performed on it to gauge its accuracy and the reasons why the project was initiated. Acceptance tests are intended not only to point out simple spelling mistakes, cosmetic errors or interface gaps, but also to point out any bugs in the application that would result in system crashes or major errors in the application.

By performing acceptance tests on an application the testing team will deduce how the application will perform in production. There are also legal and contractual requirements for acceptance of the system.

Non-Functional Testing - Performance, Usability, Security, Portability



Non-Functional Testing

  • Performance Testing

    • Load Testing
    • Stress Testing

  • Usability Testing

    • UI VS Usability Testing

  • Security Testing

  • Portability Testing

Non-Functional Testing definition :

This section is based upon testing the application from its non-functional attributes. Non-functional testing involves testing the software against requirements which are non-functional in nature but important as well, such as performance, security, user interface, etc.

Functional Testing - Unit, Integration, System, Acceptance



Functional Testing

  • Unit Testing
  • Integration testing

    • Bottom-up integration
    • Top-down integration

  • System Testing
  • Regression Testing

    • Alpha Testing
    • Beta Testing

Functional Testing definition :

This is a type of black box testing that is based on the specifications of the software to be tested. The application is tested by providing input, and the results are then examined; they need to conform to the functionality the software was intended for. Functional testing of the software is conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements.

What is Automation testing and List of Automation Testing Tools



What is Automation testing ?

Automation testing, also known as test automation, is when the tester writes scripts and uses separate software to test the software under test. This process involves the automation of a manual process. Automation testing is used to re-run, quickly and repeatedly, test scenarios that were performed manually.

Apart from regression testing, automation testing is also used to test the application from a load, performance and stress point of view. It increases test coverage, improves accuracy, and saves time and money in comparison to manual testing.

What to automate ?

It is not possible to automate everything in the software; however, the areas in which users make transactions, such as login or registration forms, and any area that a large number of users can access simultaneously, should be automated.

Furthermore, all GUI items, connections with databases, field validations, etc. can be efficiently tested by automating the manual process.

When to automate ?

Test automation should be used after considering the following about the software:

  • Large and critical projects.
  • Projects that require testing the same areas frequently.
  • Requirements that do not change frequently.
  • Assessing the application for load and performance with many virtual users.
  • Software that is stable with respect to manual testing.
  • Availability of time.

How to automate ?

Automation is done by using a supportive scripting language such as VBScript together with an automation tool. Many tools are available for writing automation scripts. Before listing the tools, let's identify the process that can be used to automate the testing:
  • Identifying areas within a software for automation.
  • Selection of appropriate tool for Test automation.
  • Writing Test scripts.
  • Development of Test suites.
  • Execution of scripts.
  • Create result reports.
  • Identify any potential bug or performance issue.
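The steps above can be sketched with Python's built-in `unittest` module. The login function, its credentials and the test names are all invented for illustration; the point is the shape of a scripted, re-runnable check:

```python
import unittest

# Hypothetical area chosen for automation: a simple login check.
# The function and its credentials are invented for this sketch.
def login(username, password):
    return username == "admin" and password == "secret"

class LoginTests(unittest.TestCase):
    """Scripted checks that can be re-run quickly and repeatedly."""

    def test_valid_credentials(self):
        self.assertTrue(login("admin", "secret"))

    def test_invalid_password(self):
        self.assertFalse(login("admin", "wrong"))

# Running the suite corresponds to the "execution of scripts" step;
# the runner's summary corresponds to "create result reports".
suite = unittest.defaultTestLoader.loadTestsFromTestCase(LoginTests)
unittest.TextTestRunner(verbosity=0).run(suite)
```

Once written, such a suite can be re-executed after every change, which is exactly what makes automation pay off for regression testing.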

List of Automation Testing Tools :

Some widely used automation tools include Selenium, UFT (formerly QTP), TestComplete, Appium, JMeter and LoadRunner.

Benefits of Automated Testing

  • Reliable: Tests perform precisely the same operations each time they are run, thereby eliminating human errors.
  • Repeatable: You can test how the software reacts under repeated execution of the same operations.
  • Programmable: You can program sophisticated tests that bring out hidden information from the application.
  • Comprehensive: You can build a suite of tests that covers every feature in your application.
  • Reusable: You can reuse tests on different versions of an application, even if the user interface changes.
  • Better Quality Software: You can run more tests in less time with fewer resources.
  • Fast: Automated tools run tests significantly faster than human users.
  • Cost Reduction: The cost is reduced as the number of resources for regression test is reduced.
These benefits are realized only by choosing the right tools for the job and targeting the right areas of the organization to deploy them.

Disadvantages of Automation Testing

Though the automation testing has many advantages, it has its own disadvantages too. Some of the disadvantages are:
  • Proficiency is required to write the automation test scripts.
  • Debugging the test script is a major issue. Any error present in the test script can sometimes lead to serious consequences.
  • Test maintenance is costly in the case of record-and-playback methods. Even if only a minor change occurs in the GUI, the test script has to be re-recorded or replaced with a new one.
  • Maintenance of test data files is difficult if the test script covers many screens.
Some of these disadvantages can erode the benefit gained from automated scripts. Still, despite its pros and cons, automation testing is widely adopted all over the world.

Read more about Manual Testing and Manual Testing vs Automated testing

Define Verification and Validation in Software Testing



Various Definitions for Verification and Validation :

  • Verification checks that the product is built according to the specified requirements and design. This is done through static techniques such as reviews and inspections.
  • Validation checks that the product design satisfies or fits the intended use (high-level checking), i.e., the software meets the user requirements. This is done through dynamic testing and other forms of review.
Verification and validation are not the same thing, although they are often confused. Boehm succinctly expressed the difference between them:
  • Validation: Are we building the right product?
  • Verification: Are we building the product right?
According to the Capability Maturity Model (CMMI-SW v1.1),
  • Verification: The process of evaluating software to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. [IEEE-STD-610].
  • Validation: The process of evaluating software during or at the end of the development process to determine whether it satisfies specified requirements. [IEEE-STD-610]

In other words,

  • Validation ensures that the product actually meets the user's needs, and that the specifications were correct in the first place, while verification is ensuring that the product has been built according to the requirements and design specifications.
  • Validation ensures that "you built the right thing". Verification ensures that "you built it right".
  • Validation confirms that the product, as provided, will fulfill its intended use.

From testing perspective:

  • Fault – wrong or missing function in the code.
  • Failure – the manifestation of a fault during execution.
  • Malfunction – the system fails to deliver its specified functionality during operation.

Within the modeling and simulation community, the definitions of validation, verification and accreditation are similar:

  • Validation is the process of determining the degree to which a model, simulation, or federation of models and simulations, and their associated data are accurate representations of the real world from the perspective of the intended use(s). Accreditation is the formal certification that a model or simulation is acceptable to be used for a specific purpose.
  • Verification is the process of determining that a computer model, simulation, or federation of models and simulations implementations and their associated data accurately represents the developer's conceptual description and specifications.

Difference between Verification and Validation in Software Testing



What is the difference between Validation & Verification in Software Testing ?

No. | VERIFICATION | VALIDATION
1 | Verification is a static practice of verifying documents, design, code and program. | Validation is a dynamic mechanism of validating and testing the actual product.
2 | It does not involve executing the code. | It always involves executing the code.
3 | It is human-based checking of documents and files. | It is computer-based execution of the program.
4 | Verification uses methods like inspections, reviews, walkthroughs and desk-checking. | Validation uses methods like black box (functional) testing, gray box testing and white box (structural) testing.
5 | Verification checks whether the software conforms to specifications. | Validation checks whether the software meets customer expectations and requirements.
6 | It can catch errors that validation cannot catch. It is a low-level exercise. | It can catch errors that verification cannot catch. It is a high-level exercise.
7 | Targets are requirements specification, application and software architecture, high-level and complete design, and database design. | Target is the actual product: a unit, a module, a set of integrated modules, and the final product.
8 | Verification is done by the development team to ensure that the software meets the specifications in the SRS document. | Validation is carried out with the involvement of the client and the testing team.
9 | It generally comes first, before validation. | It generally follows verification.
10 | Question: Are we building the product right? | Question: Are we building the right product?
11 | Evaluation items: plans, requirement specs, design specs, code, test cases. | Evaluation item: the actual product/software.
12 | Activities: reviews, walkthroughs, inspections. | Activities: testing.


verification-and-validation

Manual Software Testing Interview Questions and Answers



Manual Software Testing Interview Questions and Answers

As a software tester, a person should have certain imperative qualities: being observant, creative, innovative, speculative, patient, and so on. It is important to note that when you opt for manual testing, it is an accepted fact that the job is going to be tedious and laborious. Whether you are a fresher or experienced, there are certain questions whose answers you should know. Let's see the interview tips below.




1) What is difference between bug, error and defect?

Bug and defect essentially mean the same thing: a flaw in a component or system that can cause it to fail to perform its required function. If a bug or defect is encountered during the execution phase of software development, it can cause the component or the system to fail. An error, on the other hand, is a human mistake that gives rise to an incorrect result. You may want to read about how to log a bug (defect), the contents of a bug, the bug life cycle, and the statuses used during a bug life cycle, which will help you understand the terms bug and defect better.


2) Explain white box testing.

One of the testing types used in software testing is white box testing. Read in detail on white box testing.


3) Tell me about V model in manual testing.

The V model is a framework that describes the software development life cycle activities, from requirements specification through to the software maintenance phase. Testing is integrated into each phase of the model. The phases start with user requirements and are followed by system requirements, global design, detailed design and implementation, ending with system testing of the entire system. Each phase has its respective testing activity integrated into it, carried out in parallel with the development activities. The four test levels used by this model are component testing, integration testing, system testing and acceptance testing.


4) What are stubs and drivers in manual testing?

Both stubs and drivers are part of incremental testing, which uses two approaches: bottom-up and top-down. Drivers are used in bottom-up testing. They are modules that invoke the components to be tested, and they look similar to the future real modules. A stub is a skeletal or special-purpose implementation of a component, used to develop or test another component that calls or otherwise depends on it. It replaces the called component.
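A toy Python illustration (all names are invented): the driver calls the component under test from above, while the stub stands in for a component it depends on below.

```python
# Stub: a minimal stand-in for a component the unit under test calls.
def get_exchange_rate_stub(currency):
    # Returns a canned value instead of querying a real rate service.
    return 1.1

# Component under test: depends on a rate provider passed in.
def convert(amount, rate_provider):
    return amount * rate_provider("EUR")

# Driver: test code that calls the component and checks the result.
def driver():
    result = convert(100, get_exchange_rate_stub)
    assert abs(result - 110.0) < 1e-9
    return result

driver()
print("driver exercised the component using the stub")
```

In bottom-up integration the real rate service would not exist yet, so the stub lets `convert` be tested in isolation; the driver plays the role of the not-yet-written caller.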


5) Explain black box testing.

Find the answer to the question in the article on black box testing.


Software Testing Experienced Interview Questions and Answers - Part 2



Software Testing Experienced Interview
Questions and Answers - Part 2



51) What is maintenance testing?

Triggered by modifications, migration or retirement of existing software


52) What is called the process starting with the terminal modules?

Bottom-up integration

53) What is beta testing?

Testing performed by potential customers at their own locations.


54) What is an equivalence partition (also known as an equivalence class)?

An input or output range of values such that only one value in the range needs to become a test case.
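For example, if a field accepts values from 1 to 100, three partitions arise (below range, in range, above range), and one representative value per partition suffices. A sketch with a hypothetical validator:

```python
def accepts(value):
    """Hypothetical validator: accepts integers from 1 to 100 inclusive."""
    return 1 <= value <= 100

# One representative test value per equivalence partition.
partitions = {
    "below range": (-5, False),   # invalid partition: value < 1
    "in range": (50, True),       # valid partition: 1..100
    "above range": (250, False),  # invalid partition: value > 100
}

for name, (value, expected) in partitions.items():
    assert accepts(value) == expected, name
print("one value per partition was enough")
```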


55) What is a failure?

Failure is a departure from specified behavior.


Software Testing Experienced Interview Questions and Answers - Part 1



Software Testing Experienced Interview
Questions and Answers - Part 1



1) What is a V-Model?

A software development model that illustrates how testing activities integrate with software development phases


2) What is functional system testing?

Testing the end to end functionality of the system as a whole.


3) What is failure?

Deviation from expected result to actual result


4) What is exploratory testing?

Exploratory testing is a hands-on approach in which testers are involved in minimum planning and maximum test execution. The planning involves the creation of a test charter, a short declaration of the scope of a short (1 to 2 hour) time-boxed test effort, the objectives and possible approaches to be used. The test design and test execution activities are performed in parallel typically without formally documenting the test conditions, test cases or test scripts. This does not mean that other, more formal testing techniques will not be used. For example, the tester may decide to use boundary value analysis but will think through and test the most important boundary values without necessarily writing them down. Some notes will be written during the exploratory-testing session, so that a report can be produced afterwards.


5) What is component testing?

Component testing, also known as unit, module and program testing, searches for defects in, and verifies the functioning of software (e.g. modules, programs, objects, classes, etc.) that are separately testable. Component testing may be done in isolation from the rest of the system depending on the context of the development life cycle and the system. Most often stubs and drivers are used to replace the missing software and simulate the interface between the software components in a simple manner. A stub is called from the software component to be tested; a driver calls a component to be tested.


Job Interview Tips - Do's and Don'ts



Job Interview Tips - Do's and Don'ts

Are you repeatedly failing in interviews? Just spend 2 minutes reading these tips.


do-and-donot-of-interview-tips

Guys, listen. Whenever we go for an interview, we get excited thinking about life after we get the job and forget to do some basic things. We rush around, get tired and sweaty, and lose all our energy, finally walking into the interview panel at 0% energy and giving the interviewer a bad impression at first sight. All of this causes nervousness, and things collapse completely, resulting in FAILURE.

So let's be aware of what to do to perform our BEST in the interview.


  • Reach the interview venue at least 30 minutes in advance. Sit in the waiting room, wash your face and hands, and check your hair, makeup and dress in the restroom.

be-punctual

  • Carry water with you and have a glass of water before appearing in interview.
  • Do not chew gum during the interview, but do check whether you have bad breath; if so, use gum or a mouth freshener beforehand and dispose of it before you go in.
donot-spit-or-chew-bubble-gum

Top 100 Frequently asked Software Testing Interview Questions - Part 2



Top 100 Frequently asked Software Testing Interview Questions and Answers - Part 2



51. When should testing be stopped?

It depends on the risks for the system being tested.  



52. Which of the following is the main purpose of the integration strategy for integration testing in the small?

To specify which modules to combine when, and how many at once.


53. What is the purpose of a test completion criterion?

To determine when to stop testing  


54. Given the following code, which statement is true about the minimum number of test cases required for full statement and branch coverage?

Read p
Read q
IF p + q > 100
    THEN Print "Large"
ENDIF
IF p > 50
    THEN Print "p Large"
ENDIF

1 test for statement coverage, 2 for branch coverage
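A direct Python translation of the pseudocode makes the answer easy to check: one test where both conditions are true reaches every statement, and adding one where both are false covers the remaining branches.

```python
def classify(p, q):
    # Direct translation of the pseudocode fragment.
    out = []
    if p + q > 100:
        out.append("Large")
    if p > 50:
        out.append("p Large")
    return out

# Test 1: both decisions true -> executes every statement.
assert classify(60, 50) == ["Large", "p Large"]
# Test 2: both decisions false -> covers the remaining (false) branches.
assert classify(10, 10) == []
```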


55. What is the difference between re-testing and regression testing?

Re-testing ensures the original fault has been removed; regression testing looks for unexpected side-effects.  


Top 100 Frequently asked Software Testing Interview Questions - Part 1



Top 100 Frequently asked Software Testing Interview Questions and Answers - Part 1



1. What is the MAIN benefit of designing tests early in the life cycle?

It helps prevent defects from being introduced into the code.


2. What is risk-based testing?

Risk-based testing is the term used for an approach to creating a test strategy that is based on prioritizing tests by risk. The basis of the approach is a detailed risk analysis and prioritizing of risks by risk level. Tests to address each risk are then specified, starting with the highest risk first.


3. A wholesaler sells printer cartridges. The minimum order quantity is 5. There is a 20% discount for orders of 100 or more printer cartridges. You have been asked to prepare test cases using various values for the number of printer cartridges ordered. Which of the following groups contain three test inputs that would be generated using Boundary Value Analysis?

4, 5, 99
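A hedged sketch of the scenario (the pricing function is invented; only the rules come from the question): boundary value analysis picks values at the edges, so 4 (just below the minimum), 5 (the minimum) and 99 (just below the discount threshold) are all boundary-derived inputs.

```python
def order_price(quantity, unit_price=10.0):
    """Hypothetical pricing rule: minimum order 5; 20% discount at 100+."""
    if quantity < 5:
        raise ValueError("minimum order quantity is 5")
    discount = 0.8 if quantity >= 100 else 1.0
    return quantity * unit_price * discount

# Boundary-derived inputs from the question.
try:
    order_price(4)               # just below the minimum: rejected
    rejected = False
except ValueError:
    rejected = True
assert rejected
assert order_price(5) == 50.0    # at the minimum: accepted, no discount
assert order_price(99) == 990.0  # just below the discount threshold
assert order_price(100) == 800.0 # at the threshold: 20% discount applies
```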


4. What is the KEY difference between preventative and reactive approaches to testing?

Preventative tests are designed early; reactive tests are designed after the software has been produced.


5. What is the purpose of exit criteria?

To define when a test level is complete.



6. What determines the level of risk?

  The likelihood of an adverse event and the impact of the event


7. When is decision table testing used?

  Decision table testing is used for testing systems for which the specification takes the form of rules or cause-effect combinations. In a decision table the inputs are listed in a column, with the outputs in the same column but below the inputs. The remainder of the table explores combinations of inputs to define the outputs produced.
Learn More About Decision Table Testing Technique in the Video Tutorial here
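A decision table can be expressed directly as data, with one test per rule. The shipping rules below are invented purely to show the shape:

```python
# Hypothetical decision table: (member?, order >= 50?) -> free shipping?
decision_table = {
    (True, True): True,     # rule 1: member, large order
    (True, False): True,    # rule 2: member, small order
    (False, True): True,    # rule 3: non-member, large order
    (False, False): False,  # rule 4: non-member, small order
}

def free_shipping(is_member, total):
    # Inputs (causes) select a column; the stored value is the output (effect).
    return decision_table[(is_member, total >= 50)]

# One test per rule exercises every cause-effect combination.
assert free_shipping(True, 60) is True
assert free_shipping(True, 10) is True
assert free_shipping(False, 60) is True
assert free_shipping(False, 10) is False
print("all four rules checked")
```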


8. What is the MAIN objective when reviewing a software deliverable?

To identify defects in any software work product.


9. Which of the following defines the expected results of a test? Test case specification or test design specification.

Test case specification.


10. Which is a benefit of test independence?

It avoids author bias in defining effective tests.


11. As part of which test process do you determine the exit criteria?

Test planning.


12. What is beta testing?

Testing performed by potential customers at their own locations.


13. Given the following fragment of code, how many tests are required for 100% decision coverage?

if width > length
    then biggest_dimension = width
    if height > width
        then biggest_dimension = height
    end_if
else biggest_dimension = length
    if height > length
        then biggest_dimension = height
    end_if
end_if
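Each run reaches only one of the two nested decisions, so each nested decision needs both a true and a false outcome separately: four tests give 100% decision coverage. A Python rendering of the fragment shows this:

```python
def biggest(width, length, height):
    # Direct translation of the nested-if fragment.
    if width > length:
        biggest_dimension = width
        if height > width:
            biggest_dimension = height
    else:
        biggest_dimension = length
        if height > length:
            biggest_dimension = height
    return biggest_dimension

# Four tests: each of the three decisions taken both true and false.
assert biggest(5, 3, 10) == 10  # outer True, inner (height > width) True
assert biggest(5, 3, 1) == 5    # outer True, inner False
assert biggest(2, 3, 9) == 9    # outer False, inner (height > length) True
assert biggest(2, 3, 1) == 3    # outer False, inner False
```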


14. You have designed test cases to provide 100% statement and 100% decision coverage for the following fragment of code:

if width > length
    then biggest_dimension = width
else biggest_dimension = length
end_if

The following has been added to the bottom of the code fragment above:

print "Biggest dimension is " & biggest_dimension
print "Width: " & width
print "Length: " & length

How many more test cases are required?

None, existing test cases can be used.


15. Rapid Application Development ?

Rapid Application Development (RAD) is formally a parallel development of functions and subsequent integration. Components/functions are developed in parallel as if they were mini projects, the developments are time-boxed, delivered, and then assembled into a working prototype. This can very quickly give the customer something to see and use and to provide feedback regarding the delivery and their requirements. Rapid change and development of the product is possible using this methodology. However the product specification will need to be developed for the product at some point, and the project will need to be placed under more formal controls prior to going into production.


16. What is the difference between Testing Techniques and Testing Tools?

Testing technique: a process for ensuring that some aspect of the application system or unit functions properly. There may be few techniques but many tools.
Testing tool: a vehicle for performing a test process. The tool is a resource to the tester, but by itself it is insufficient to conduct testing.
Learn More About Testing Tools  here


17. We use the output of the requirement analysis, the requirement specification as the input for writing …

User Acceptance Test Cases


18. Repeated Testing of an already tested program, after modification, to discover any defects introduced or uncovered as a result of the changes in the software being tested or in another related or unrelated software component:

Regression Testing


19. What is component testing ?

Component testing, also known as unit, module and program testing, searches for defects in, and verifies the functioning of software (e.g. modules, programs, objects, classes, etc.) that are separately testable. Component testing may be done in isolation from the rest of the system depending on the context of the development life cycle and the system. Most often stubs and drivers are used to replace the missing software and simulate the interface between the software components in a simple manner. A stub is called from the software component to be tested; a driver calls a component to be tested.


20. What is functional system testing ?

Testing the end to end functionality of the system as a whole.

21. What are the benefits of independent testing?

Independent testers see other and different defects and are unbiased.


22. In a REACTIVE approach to testing, when would you expect the bulk of the test design work to begin?

After the software or system has been produced.


23. What are the different Methodologies in Agile Development Model?

There are currently seven different Agile methodologies that I am aware of:
  • Extreme Programming (XP)
  • Scrum
  • Lean Software Development
  • Feature-Driven Development
  • Agile Unified Process
  • Crystal
  • Dynamic Systems Development Model (DSDM)


24. Which activity in the fundamental test process includes evaluation of the testability of the requirements and system?

A Test analysis and design.


25. What is typically the MOST important reason to use risk to drive testing efforts?

  Because testing everything is not feasible.


26. Which is the MOST important advantage of independence in testing?

An independent tester may be more effective at finding defects missed by the person who wrote the software.


27. Which of the following are valid objectives for incident reports?

i. Provide developers and other parties with feedback about the problem to enable identification, isolation and correction as necessary.
ii. Provide ideas for test process improvement.
iii. Provide a vehicle for assessing tester competence.
iv. Provide testers with a means of tracking the quality of the system under test.
Valid objectives are i, ii and iv; providing a vehicle for assessing tester competence (iii) is not a valid objective of incident reports.


28. Consider the following techniques. Which are static and which are dynamic techniques?

i. Equivalence Partitioning.
ii. Use Case Testing.
iii. Data Flow Analysis.
iv. Exploratory Testing.
v. Decision Testing.
vi. Inspections.

Data Flow Analysis and Inspections are static; Equivalence Partitioning, Use Case Testing, Exploratory Testing and Decision Testing are dynamic.


29. Why are static testing and dynamic testing described as complementary?

Because they share the aim of identifying defects but differ in the types of defect they find.


30. What are the phases of a formal review ?

In contrast to informal reviews, formal reviews follow a formal process. A typical formal review process consists of six main steps:
  • Planning
  • Kick-off
  • Preparation
  • Review meeting
  • Rework
  • Follow-up


31. What is the role of moderator in review process?

The moderator (or review leader) leads the review process. He or she determines, in co-operation with the author, the type of review, the approach and the composition of the review team. The moderator performs the entry check and the follow-up on the rework, in order to control the quality of the input and output of the review process. The moderator also schedules the meeting, disseminates documents before the meeting, coaches other team members, paces the meeting, leads possible discussions and stores the data that is collected.
Learn More About Review process in Video Tutorial here


32. What is an equivalence partition (also known as an equivalence class)?

An input or output range of values such that only one value in the range becomes a test case.


33. When should configuration management procedures be implemented?

During test planning.


34. A type of functional testing that investigates the functions relating to the detection of threats, such as viruses, from malicious outsiders.

Security Testing


35. Testing in which we subject the target of the test to varying workloads to measure and evaluate its performance behaviors and its ability to continue to function properly under these different workloads.

Load Testing


36. Testing activity which is performed to expose defects in the interfaces and in the interaction between integrated components is:

Integration Level Testing


37. What are the Structure-based (white-box) testing techniques ?

Structure-based testing techniques (which are also dynamic rather than static) use the internal structure of the software to derive test cases. They are commonly called 'white-box' or 'glass-box' techniques (implying you can see into the system) since they require knowledge of how the software is implemented, that is, how it works. For example, a structural technique may be concerned with exercising loops in the software. Different test cases may be derived to exercise the loop once, twice, and many times. This may be done regardless of the functionality of the software.
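A minimal sketch of loop-driven test derivation (the summing function is invented): the cases below come from the code's structure, not from its business meaning.

```python
def total(values):
    # The loop is the structural element the tests target.
    s = 0
    for v in values:
        s += v
    return s

# Structure-based cases: execute the loop zero, one and many times,
# regardless of what the function "means" to the business.
assert total([]) == 0         # loop body never entered
assert total([7]) == 7        # loop executed once
assert total([1, 2, 3]) == 6  # loop executed many times
```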


38. When should be performed Regression testing ?

After the software has changed or when the environment has changed


39. When should testing be stopped?

It depends on the risks for the system being tested


40. What is the purpose of a test completion criterion?

To determine when to stop testing


41. What can static analysis NOT find?

For example, memory leaks.


42. What is the difference between re-testing and regression testing?

Re-testing ensures the original fault has been removed; regression testing looks for unexpected side-effects.


43. What are the Experience-based testing techniques ?

In experience-based techniques, people's knowledge, skills and background are a prime contributor to the test conditions and test cases. The experience of both technical and business people is important, as they bring different perspectives to the test analysis and design process. Due to previous experience with similar systems, they may have insights into what could go wrong, which is very useful for testing.


44. What type of review requires formal entry and exit criteria, including metrics?

Inspection


45. Could reviews or inspections be considered part of testing?

Yes, because both help detect faults and improve quality


46. An input field takes the year of birth between 1900 and 2004 What are the boundary values for testing this field ?

1899,1900,2004,2005
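A sketch of the corresponding check (the validator itself is hypothetical): the four boundary values sit just outside and just inside each edge of the accepted range.

```python
def valid_birth_year(year):
    """Hypothetical field rule: accepts years 1900 to 2004 inclusive."""
    return 1900 <= year <= 2004

# The four boundary values: just outside and just inside each edge.
assert valid_birth_year(1899) is False
assert valid_birth_year(1900) is True
assert valid_birth_year(2004) is True
assert valid_birth_year(2005) is False
```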


47. Which of the following tools would be involved in the automation of regression test?

a. Data tester
b. Boundary tester
c. Capture/Playback
d. Output comparator

d. Output comparator


48. To test a function, what must a programmer write to call the function under test and pass it test data?

  Driver


49. What is the one Key reason why developers have difficulty testing their own work?

Lack of Objectivity


50. “How much testing is enough?”

The answer depends on the risk for your industry, contract and special requirements.