Friday 31 May 2013

Software testing Interview Questions – Based on Testing Types


  • What are the prerequisites for Test Execution?
  • What are the roles of Test Lead in Test Execution stage?
  • What are the roles of Testers in Test Execution stage?
  • What is Accessibility Testing?
  • What is Ad-hoc Testing?
  • What is Agile testing?

Tuesday 28 May 2013

WinRunner Automation Testing Interview Questions


  1. How do you handle an Exception in WinRunner?
  2. What’s your comfort level in using WinRunner?
  3. Running tests from the command line?
  4. Can you test DB using WinRunner?
  5. When do you go for Context Sensitive and Analog recordings? What’s the difference between them?
  6. When do you use Break Points?
  7. When do you use Verify/Debug/Update Modes?
  8. How do you invoke an application using TSL?

Friday 24 May 2013

Advantages and Disadvantages of V-model

Advantages of V-model (SDLC)

These are the advantages the V-Model offers compared with other systems development models:

  • The users of the V-Model participate in the development and maintenance of The V-Model. A change control board publicly maintains the V-Model. The change control board meets anywhere from every day to weekly and processes all change requests received during system development and test.
  • The V-Model provides concrete assistance on how to implement an activity and its work steps, defining explicitly the events needed to complete a work step: each activity schema contains instructions, recommendations and detailed explanations of the activity.

All (100) Types of Software Testing


Below is an exhaustive list of types of software testing with a brief description of each, covering load testing, stress testing, unit testing, system testing, acceptance testing, certification testing, performance testing, user acceptance testing, penetration testing, automated testing, beta testing, compatibility testing, security testing, benchmark testing, functional testing, negative testing, destructive testing, integration testing, regression testing, alpha testing, end-to-end testing, path testing, smoke testing, black box testing, stability testing, usability testing and many more. About 100 software testing types with descriptions are listed below.

Why are there so many types of software testing?

The quality of software is assessed in terms of six quality factors (Functionality, Reliability, Efficiency, Usability, Maintainability and Portability). Each type of software testing listed below is designed to validate the software against one or more of these quality factors. More types of software testing have evolved to keep pace with the rapid increase in the complexity of software designs, frameworks and programming languages, the growing number of users brought by the popularity of the internet, and the advent of new platforms and technologies. With the increase in the number of software testing types to be performed, the need for software testing tools has increased as well.

1. Ad-hoc testing :

This is an informal type of software testing that is performed by software testers, business analysts, developers or any stakeholder without referring to test cases or documentation. The person performing ad-hoc testing usually has a good understanding of the software requirements and tries to break the software and find defects using the experience and knowledge they have of the domain, requirements and functionality of the software. Ad-hoc testing is intended to find defects that were not found by the existing test cases.

2. Acceptance testing:

This is a formal type of software testing that is performed by the end customer to check whether the software conforms to their business needs and to the requirements provided earlier. Acceptance tests are usually documented; however, end customers may not document test cases for smaller versions or releases.

3. Accessibility testing:

This is a formal type of software testing that helps determine whether the software can be used by people with disabilities, such as blindness, deafness, color blindness or learning disabilities. There are also companies and consultants that provide website accessibility audits.

4. Aging testing:

This is a type of performance testing that is carried out by running the software for a longer duration, such as weeks or months, and checking whether its performance degrades or shows any signs of degradation after running for that period. Aging testing is also known as soak testing or longevity testing.

5. Alpha Testing:

This is a formal type of testing that is performed by end customers at the development site. Alpha testing is conducted before taking the software to beta testing.

Thursday 23 May 2013

Software Test Documents - Test Plan, Test Scenario, Test Case, Traceability Matrix

Explain about Software Test Documents (artifacts)

Testing documentation involves the documentation of artifacts which should be developed before or during the testing of Software.

Documentation for Software testing helps in estimating the testing effort required, test coverage, requirement tracking/tracing etc. This section includes the description of some commonly used documented artifacts related to Software testing such as:

  • Test Plan
  • Test Scenario
  • Test Case
  • Traceability Matrix

Wednesday 22 May 2013

Manual Testing Interview Questions Answers



  1. Can you explain the V model in manual testing?
  2. What is Re-testing testing?
  3. What is Recovery Testing?
  4. What is Regression testing?
  5. What is Sanity Testing?
  6. What is Scalability Testing?

Interview Questions for Software Testing Professionals


A) Tester Role and Responsibilities:
B) Software Engineering, Quality Standards:
C) Manual Testing Concepts:
D) Test Automation Concepts (QTP):
E) General Software testing interview questions:
F) Environmental, Development Technologies

A) Tester Role and Responsibilities:

  • Differentiate Priority and Severity in your Defect Report?
  • Generally, when will you plan for test automation for a project in your company?
  • How many Defects did you detect in your project?

Tuesday 21 May 2013

What is Acceptance Testing ?

What is Acceptance Testing :

This is arguably the most important type of testing, as it is conducted by the Quality Assurance team, who gauge whether the application meets the intended specifications and satisfies the client's requirements. The QA team will have a set of pre-written scenarios and test cases that will be used to test the application.

More ideas will be shared about the application, and more tests can be performed on it to gauge its accuracy and the reasons why the project was initiated. Acceptance tests are not only intended to point out simple spelling mistakes, cosmetic errors or interface gaps, but also any bugs in the application that will result in system crashes or major errors in the application.

By performing acceptance tests on an application the testing team will deduce how the application will perform in production. There are also legal and contractual requirements for acceptance of the system.

Non-Functional Testing - Performance, Usability, Security, Portability

Non-Functional Testing

  • Performance Testing

    • Load Testing
    • Stress Testing

  • Usability Testing

    • UI VS Usability Testing

  • Security Testing

  • Portability Testing

Non-Functional Testing definition :

This section is based on testing the application for its non-functional attributes. Non-functional testing involves testing the software against requirements which are non-functional in nature but equally important, such as performance, security and user interface.

Functional Testing - Unit, Integration, System, Acceptance

Functional Testing

  • Unit Testing
  • Integration testing

    • Bottom-up integration
    • Top-down integration

  • System Testing
  • Regression Testing

    • Alpha Testing
    • Beta Testing

Functional Testing definition :

This is a type of black box testing that is based on the specifications of the software that is to be tested. The application is tested by providing input, and the results are then examined to check that they conform to the functionality the software was intended for. Functional testing of the software is conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements.

Sunday 19 May 2013

What is Automation testing and List of Automation Testing Tools

What is Automation testing ?

Automation testing, which is also known as test automation, is when the tester writes scripts and uses other software to test the software under test. This process involves the automation of a manual process. Automation testing is used to re-run, quickly and repeatedly, the test scenarios that were performed manually.

Apart from regression testing, automation testing is also used to test the application from a load, performance and stress point of view. It increases test coverage, improves accuracy, and saves time and money in comparison to manual testing.

What to automate ?

It is not possible to automate everything in the software; however, areas where users make transactions, such as login or registration forms, and any area that a large number of users can access simultaneously, should be automated.

Furthermore, all GUI items, connections with databases, field validations and so on can be efficiently tested by automating the manual process. A minimal example of such a script is sketched below.
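For illustration, here is a minimal sketch in Python of automating a login-form check with Selenium WebDriver. The URL and the element IDs (username, password, submit, welcome) are hypothetical placeholders, not taken from any particular application:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()  # assumes a local Chrome/ChromeDriver setup
    try:
        driver.get("https://example.com/login")  # hypothetical login page
        driver.find_element(By.ID, "username").send_keys("test_user")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "submit").click()
        # Simple functional check: a welcome banner should appear after login.
        banner = driver.find_element(By.ID, "welcome")  # hypothetical element
        assert "Welcome" in banner.text, "login did not reach the welcome page"
    finally:
        driver.quit()  # always release the browser

Once written, such a script can be re-run unattended after every build, which is exactly the repeatability that makes these areas good automation candidates.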

When to automate ?

Test automation should be used after considering the following aspects of the software:
  • Large and critical projects.
  • Projects that require testing the same areas frequently.
  • Requirements not changing frequently.
  • Assessing the application for load and performance with many virtual users.
  • Stable Software with respect to manual testing.
  • Availability of time.

How to automate ?

Automation is done by using a supporting scripting language, such as VBScript, and an automated software testing tool. There are a lot of tools available which can be used to write automation scripts. Before mentioning the tools, let's identify the process which can be used to automate the testing:
  • Identifying areas within a software for automation.
  • Selection of appropriate tool for Test automation.
  • Writing Test scripts.
  • Development of test suites.
  • Execution of scripts.
  • Create result reports.
  • Identify any potential bug or performance issue.

List of Automation Testing Tools :

Benefits of Automated Testing

  • Reliable: Tests perform precisely the same operations each time they are run, thereby eliminating human errors.
  • Repeatable: You can test how the software reacts under repeated execution of the same operations.
  • Programmable: You can program sophisticated tests that bring out hidden information from the application.
  • Comprehensive: You can build a suite of tests that covers every feature in your application.
  • Reusable: You can reuse tests on different versions of an application, even if the user interface changes.
  • Better Quality Software: You can run more tests in less time with fewer resources.
  • Fast: Automated tools run tests significantly faster than human users.
  • Cost Reduction: The cost is reduced as the number of resources for regression test is reduced.
  • These benefits can only be realized by choosing the right tools for the job and targeting the right areas of the organization in which to deploy them.

Disadvantages of Automation Testing

Though automation testing has many advantages, it has its own disadvantages too. Some of the disadvantages are:
  • Proficiency is required to write the automation test scripts.
  • Debugging the test script is a major issue. If any error is present in the test script, it can sometimes lead to serious consequences.
  • Test maintenance is costly in the case of record-and-playback methods. Even if only a minor change occurs in the GUI, the test script has to be re-recorded or replaced by a new test script.
  • Maintenance of test data files is difficult if the test script tests many screens.
Some of the above disadvantages often reduce the benefit gained from the automated scripts. Though automation testing has its pros and cons, it is adopted widely all over the world.


Define Verification and Validation in Software Testing

Various Definitions for Verification and Validation :

  • Verification checks that the product is built according to the requirements and design specifications (low-level checking), i.e., the work products meet their specified requirements. This is typically done through static techniques such as reviews and inspections.
  • Validation checks that the product design satisfies or fits the intended use (high-level checking), i.e., the software meets the user requirements. This is done through dynamic testing and other forms of review.
Verification and validation are not the same thing, although they are often confused. Boehm succinctly expressed the difference between them:
  • Validation: Are we building the right product?
  • Verification: Are we building the product right?
According to the Capability Maturity Model (CMMI-SW v1.1),
  • Verification: The process of evaluating software to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. [IEEE-STD-610].
  • Validation: The process of evaluating software during or at the end of the development process to determine whether it satisfies specified requirements. [IEEE-STD-610]

In other words,

  • Validation ensures that the product actually meets the user's needs, and that the specifications were correct in the first place, while verification is ensuring that the product has been built according to the requirements and design specifications.
  • Validation ensures that "you built the right thing". Verification ensures that "you built it right".
  • Validation confirms that the product, as provided, will fulfill its intended use.

From testing perspective:

  • Fault – wrong or missing function in the code.
  • Failure – the manifestation of a fault during execution.
  • Malfunction – the system does not meet its specified functionality.

Within the modeling and simulation community, the definitions of validation, verification and accreditation are similar:

  • Validation is the process of determining the degree to which a model, simulation, or federation of models and simulations, and their associated data are accurate representations of the real world from the perspective of the intended use(s). Accreditation is the formal certification that a model or simulation is acceptable to be used for a specific purpose.
  • Verification is the process of determining that a computer model, simulation, or federation of models and simulations implementations and their associated data accurately represents the developer's conceptual description and specifications.

Difference between Verification and Validation in Software Testing

What is the difference between Validation & Verification in Software Testing ?

  1. Verification is a static practice of verifying documents, design, code and program. Validation is a dynamic mechanism of validating and testing the actual product.
  2. Verification does not involve executing the code. Validation always involves executing the code.
  3. Verification is human-based checking of documents and files. Validation is computer-based execution of the program.
  4. Verification uses methods like inspections, reviews, walkthroughs and desk-checking. Validation uses methods like black box (functional) testing, gray box testing and white box (structural) testing.
  5. Verification checks whether the software conforms to the specifications. Validation checks whether the software meets the customer's expectations and requirements.
  6. Verification can catch errors that validation cannot catch; it is a low-level exercise. Validation can catch errors that verification cannot catch; it is a high-level exercise.
  7. The targets of verification are the requirements specification, application and software architecture, high-level and complete design, and database design. The target of validation is the actual product: a unit, a module, a set of integrated modules, and the final product.
  8. Verification is done by the development team to check that the software is as per the specifications in the SRS document. Validation is carried out with the involvement of the client and the testing team.
  9. Verification generally comes first and is done before validation. Validation generally follows verification.
  10. Question answered by verification: Are we building the product right? Question answered by validation: Are we building the right product?
  11. Evaluation items for verification: plans, requirement specs, design specs, code, test cases. Evaluation items for validation: the actual product/software.
  12. Activities in verification: reviews, walkthroughs, inspections. Activities in validation: testing.



Friday 17 May 2013

Manual Software Testing Interview Questions and Answers


As a software tester, a person should have certain imperative qualities: being observant, creative, innovative, speculative, patient, and so on. It is important to note that when you opt for manual testing, it is an accepted fact that the job is going to be tedious and laborious. Whether you are a fresher or experienced, there are certain questions to which you should know the answers. Let's see the interview questions and answers below.




1) What is difference between bug, error and defect?

Bug and defect essentially mean the same thing: a flaw in a component or system which can cause the component or system to fail to perform its required function. If a bug or defect is encountered during the execution phase of software development, it can cause the component or the system to fail. An error, on the other hand, is a human mistake, which gives rise to an incorrect result. You may want to read about how to log a bug (defect), the contents of a bug report, the bug life cycle, and the statuses used during a bug life cycle, which will help you understand the terms bug and defect better.


2) Explain white box testing.

One of the testing types used in software testing is white box testing. Read in detail on white box testing.


3) Tell me about V model in manual testing.

V model is a framework which describes the software development life cycle activities, right from requirements specification up to the software maintenance phase. Testing is integrated into each phase of the model. The phases start with user requirements and are followed by system requirements, global design, detailed design and implementation, and end with system testing of the entire system. Each phase of the model has a corresponding testing activity integrated into it, carried out in parallel with the development activities. The four test levels used by this model are component testing, integration testing, system testing and acceptance testing.


4) What are stubs and drivers in manual testing?

Both stubs and drivers are part of incremental testing, which uses two approaches: bottom-up and top-down. Drivers are used in bottom-up testing: they are small modules that call the component to be tested and pass it test data, standing in for the future real calling modules. A stub, by contrast, is a skeletal or special-purpose implementation of a component, used to develop or test another component that calls or otherwise depends on it; it replaces the called component and is used in top-down testing. A minimal sketch of both is shown below.
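As a minimal sketch (all names below, such as calculate_invoice_total and tax_service_stub, are hypothetical), the following Python fragment shows a stub replacing a missing collaborator and a driver feeding test data to the component under test:

    # Hypothetical component under test: it depends on a tax service that
    # is not built yet, so a stub replaces it (top-down style).
    def tax_service_stub(amount):
        """Stub: returns a fixed, predictable value instead of the real service."""
        return 10.0

    def calculate_invoice_total(net_amount, tax_service):
        """Component under test: adds the tax obtained from a collaborator."""
        return net_amount + tax_service(net_amount)

    # Driver: throwaway code that calls the component under test and
    # passes it test data (bottom-up style).
    def driver():
        result = calculate_invoice_total(100.0, tax_service_stub)
        assert result == 110.0, "calculate_invoice_total failed with stubbed tax"
        print("calculate_invoice_total passed with the stubbed tax service")

    if __name__ == "__main__":
        driver()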


5) Explain black box testing.

Find the answer to the question in the article on black box testing.


Software Testing Experienced Interview Questions and Answers - Part 2




51) What is maintenance testing?

Triggered by modifications, migration or retirement of existing software


52) What is called the process starting with the terminal modules?

Bottom-up integration

53) What is beta testing?

Testing performed by potential customers at their own locations.


54) What is an equivalence partition (also known as an equivalence class)?

An input or output range of values such that only one value in the range becomes a test case.
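For example, assuming a hypothetical rule that ages 18 to 65 pay an adult fare, each of the three partitions below contributes one representative test value; the is_adult_fare function is only a stand-in for the component under test:

    # Hypothetical rule: ages 18-65 pay the adult fare, every other age does not.
    def is_adult_fare(age):
        return 18 <= age <= 65

    # One representative value per equivalence partition is enough;
    # boundary value analysis would add 17, 18, 65 and 66 on top of this.
    partitions = [
        (10, False),  # invalid partition: below 18
        (40, True),   # valid partition: 18-65
        (70, False),  # invalid partition: above 65
    ]

    for age, expected in partitions:
        assert is_adult_fare(age) == expected, f"unexpected result for age {age}"
    print("one test per equivalence partition passed")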


55) What is a failure?

Failure is a departure from specified behavior.


Software Testing Experienced Interview Questions and Answers - Part 1




1) What is a V-Model?

A software development model that illustrates how testing activities integrate with software development phases


2) What is functional system testing?

Testing the end to end functionality of the system as a whole.


3) What is failure?

Deviation of the actual result from the expected result.


4) What is exploratory testing?

Exploratory testing is a hands-on approach in which testers are involved in minimum planning and maximum test execution. The planning involves the creation of a test charter, a short declaration of the scope of a short (1 to 2 hour) time-boxed test effort, the objectives and possible approaches to be used. The test design and test execution activities are performed in parallel typically without formally documenting the test conditions, test cases or test scripts. This does not mean that other, more formal testing techniques will not be used. For example, the tester may decide to use boundary value analysis but will think through and test the most important boundary values without necessarily writing them down. Some notes will be written during the exploratory-testing session, so that a report can be produced afterwards.


5) What is component testing?

Component testing, also known as unit, module and program testing, searches for defects in, and verifies the functioning of software (e.g. modules, programs, objects, classes, etc.) that are separately testable. Component testing may be done in isolation from the rest of the system depending on the context of the development life cycle and the system. Most often stubs and drivers are used to replace the missing software and simulate the interface between the software components in a simple manner. A stub is called from the software component to be tested; a driver calls a component to be tested.


Thursday 16 May 2013

Job Interview Tips - Do's and Don'ts


Are you repeatedly failing in interviews? Just spend two minutes reading these tips.



Listen: whenever we go for an interview, we get excited thinking about life after we get the job and forget to do some basic things. We rush around, get tired and sweaty, and lose all our energy, and then we finally walk into the interview panel at 0% energy, giving the interviewer a bad impression at first sight. All of this leads to nerves, everything collapses, and the result is failure.

So let's be aware of what to do to perform at our best in the interview.

Job Interview Tips - Do's and Don'ts

  • Reach the interview venue at least 30 minutes in advance. Sit in the waiting room, wash your face and hands, and check your hair, makeup and dress in the washroom.


  • Carry water with you and have a glass of water before appearing for the interview.
  • Do not chew gum during the interview, but check beforehand whether you have bad breath; if so, chew a gum or use a mouth freshener and dispose of it before you go in.

Wednesday 15 May 2013

Top 100 Frequently asked Software Testing Interview Questions - Part 2

Top 100 Frequently asked Software Testing Interview Questions and Answers - Part 2



51. When should testing be stopped?

It depends on the risks for the system being tested.  



52. Which of the following is the main purpose of the integration strategy for integration testing in the small?

To specify which modules to combine when, and how many at once.


53. What is the purpose of a test completion criterion?

To determine when to stop testing  


54. Given the following code, which statement is true about the minimum number of test cases required for full statement and branch coverage?

     Read p
     Read q
     IF p + q > 100
          THEN Print "Large"
     ENDIF
     IF p > 50
          THEN Print "p Large"
     ENDIF

1 test for statement coverage, 2 for branch coverage
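To make that concrete, here is a small Python translation of the fragment above (the Print statements become print calls); the first call alone achieves statement coverage, and the second adds the false outcomes needed for branch coverage:

    def classify(p, q):
        """Python translation of the pseudocode fragment above."""
        if p + q > 100:
            print("Large")
        if p > 50:
            print("p Large")

    # Statement coverage: one test that makes both conditions true
    # executes every statement.
    classify(p=60, q=50)   # p + q = 110 > 100 and p = 60 > 50

    # Branch coverage also needs each condition evaluated false,
    # which one more test covering both false branches provides.
    classify(p=10, q=20)   # p + q = 30 <= 100 and p = 10 <= 50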


55. What is the difference between re-testing and regression testing?

Re-testing ensures the original fault has been removed; regression testing looks for unexpected side-effects.  


Top 100 Frequently asked Software Testing Interview Questions - Part 1

Top 100 Frequently asked Software Testing Interview Questions and Answers - Part 1



1. What is the MAIN benefit of designing tests early in the life cycle?

It helps prevent defects from being introduced into the code.


2. What is risk-based testing?

Risk-based testing is the term used for an approach to creating a test strategy that is based on prioritizing tests by risk. The basis of the approach is a detailed risk analysis and prioritizing of risks by risk level. Tests to address each risk are then specified, starting with the highest risk first.


3. A wholesaler sells printer cartridges. The minimum order quantity is 5. There is a 20% discount for orders of 100 or more printer cartridges. You have been asked to prepare test cases using various values for the number of printer cartridges ordered. Which of the following groups contain three test inputs that would be generated using Boundary Value Analysis?

4, 5, 99
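As a sketch of where those values come from, assume a hypothetical order_price function implementing the two rules (minimum order 5, 20% discount from 100). Boundary value analysis picks the values on either side of each boundary, and 4, 5 and 99 are three of them:

    # Hypothetical implementation of the wholesaler's rules:
    # minimum order quantity is 5, 20% discount for 100 or more cartridges.
    def order_price(quantity, unit_price=10.0):
        if quantity < 5:
            raise ValueError("minimum order quantity is 5")
        discount = 0.20 if quantity >= 100 else 0.0
        return quantity * unit_price * (1 - discount)

    # Boundary value analysis takes the values on either side of each boundary:
    # 4 and 5 around the minimum order, 99 and 100 around the discount threshold.
    for quantity in (4, 5, 99, 100):
        try:
            print(quantity, order_price(quantity))
        except ValueError as err:
            print(quantity, "rejected:", err)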


4. What is the KEY difference between preventative and reactive approaches to testing?

Preventative tests are designed early; reactive tests are designed after the software has been produced.


5. What is the purpose of exit criteria?

To define when a test level is complete.



6. What determines the level of risk?

  The likelihood of an adverse event and the impact of the event


7. When is decision table testing used?

  Decision table testing is used for testing systems for which the specification takes the form of rules or cause-effect combinations. In a decision table the inputs are listed in a column, with the outputs in the same column but below the inputs. The remainder of the table explores combinations of inputs to define the outputs produced.
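As a tiny illustration, the sketch below turns a hypothetical two-condition decision table (the grant_loan rule is invented for the example) into one test case per rule:

    # Hypothetical decision table with two conditions and one action:
    #   rule 1: employed and good credit   -> approve
    #   rule 2: employed, poor credit      -> reject
    #   rule 3: unemployed, good credit    -> reject
    #   rule 4: unemployed, poor credit    -> reject
    def grant_loan(employed, good_credit):
        return employed and good_credit

    decision_table = [
        (True,  True,  True),   # rule 1
        (True,  False, False),  # rule 2
        (False, True,  False),  # rule 3
        (False, False, False),  # rule 4
    ]

    # Each rule (column) of the decision table becomes one test case.
    for employed, good_credit, expected in decision_table:
        assert grant_loan(employed, good_credit) == expected
    print("all decision table rules verified")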


8. What is the MAIN objective when reviewing a software deliverable?

To identify defects in any software work product.


9. Which of the following defines the expected results of a test? Test case specification or test design specification.

Test case specification.


10. Which is a benefit of test independence?

It avoids author bias in defining effective tests.


11. As part of which test process do you determine the exit criteria?

Test planning.


12. What is beta testing?

Testing performed by potential customers at their own locations.


13. Given the following fragment of code, how many tests are required for 100% decision coverage?

    if width > length
        then biggest_dimension = width
        if height > width
            then biggest_dimension = height
        end_if
    else biggest_dimension = length
        if height > length
            then biggest_dimension = height
        end_if
    end_if

4 (each of the three decisions, width > length, height > width and height > length, must be evaluated both true and false, and each test case can exercise only one of the two inner decisions).


14. You have designed test cases to provide 100% statement and 100% decision coverage for the following fragment of code:

    if width > length
        then biggest_dimension = width
    else biggest_dimension = length
    end_if

The following has been added to the bottom of the code fragment above:

    print "Biggest dimension is " & biggest_dimension
    print "Width: " & width
    print "Length: " & length

How many more test cases are required?

None, existing test cases can be used.


15. What is Rapid Application Development (RAD)?

Rapid Application Development (RAD) is formally a parallel development of functions and subsequent integration. Components/functions are developed in parallel as if they were mini projects, the developments are time-boxed, delivered, and then assembled into a working prototype. This can very quickly give the customer something to see and use and to provide feedback regarding the delivery and their requirements. Rapid change and development of the product is possible using this methodology. However the product specification will need to be developed for the product at some point, and the project will need to be placed under more formal controls prior to going into production.


16. What is the difference between Testing Techniques and Testing Tools?

Testing technique: a process for ensuring that some aspect of the application system or unit functions properly. There may be few techniques but many tools.
Testing tool: a vehicle for performing a test process. The tool is a resource to the tester, but is by itself insufficient to conduct testing.


17. We use the output of the requirements analysis, the requirements specification, as the input for writing …

User Acceptance Test Cases


18. Repeated Testing of an already tested program, after modification, to discover any defects introduced or uncovered as a result of the changes in the software being tested or in another related or unrelated software component:

Regression Testing


19. What is component testing ?

Component testing, also known as unit, module and program testing, searches for defects in, and verifies the functioning of software (e.g. modules, programs, objects, classes, etc.) that are separately testable. Component testing may be done in isolation from the rest of the system depending on the context of the development life cycle and the system. Most often stubs and drivers are used to replace the missing software and simulate the interface between the software components in a simple manner. A stub is called from the software component to be tested; a driver calls a component to be tested.


20. What is functional system testing ?

Testing the end to end functionality of the system as a whole.

21. What are the benefits of independent testing?

Independent testers see other and different defects and are unbiased.


22. In a REACTIVE approach to testing when would you expect the bulk of the test design work to be begun?

After the software or system has been produced.


23. What are the different Methodologies in Agile Development Model?

There are currently seven different Agile methodologies that I am aware of:
  • Extreme Programming (XP)
  • Scrum
  • Lean Software Development
  • Feature-Driven Development
  • Agile Unified Process
  • Crystal
  • Dynamic Systems Development Model (DSDM)


24. Which activity in the fundamental test process includes evaluation of the testability of the requirements and system?

Test analysis and design.


25. What is typically the MOST important reason to use risk to drive testing efforts?

  Because testing everything is not feasible.


26. Which is the MOST important advantage of independence in testing?

An independent tester may be more effective at finding defects missed by the person who wrote the software.


27. Which of the following are valid objectives for incident reports?

i. Provide developers and other parties with feedback about the problem to enable identification, isolation and correction as necessary.
ii. Provide ideas for test process improvement.
iii. Provide a vehicle for assessing tester competence.
iv. Provide testers with a means of tracking the quality of the system under test.
The valid objectives are i, ii and iv; providing a vehicle for assessing tester competence (iii) is not a valid objective of an incident report.


28. Consider the following techniques. Which are static and which are dynamic techniques?

i. Equivalence Partitioning.
ii. Use Case Testing.
iii. Data Flow Analysis.
iv. Exploratory Testing.
v. Decision Testing.
vi. Inspections.
Data Flow Analysis and Inspections are static, Equivalence Partitioning, Use Case Testing, Exploratory Testing and Decision Testing are dynamic.


29. Why are static testing and dynamic testing described as complementary?

Because they share the aim of identifying defects but differ in the types of defect they find.


30. What are the phases of a formal review ?

In contrast to informal reviews, formal reviews follow a formal process. A typical formal review process consists of six main steps:
  • Planning
  • Kick-off
  • Preparation
  • Review meeting
  • Rework
  • Follow-up


31. What is the role of moderator in review process?

The moderator (or review leader) leads the review process. He or she determines, in co-operation with the author, the type of review, approach and the composition of the review team. The moderator performs the entry check and the follow-up on the rework, in order to control the quality of the input and output of the review process. The moderator also schedules the meeting, disseminates documents before the meeting, coaches other team members, paces the meeting, leads possible discussions and stores the data that is collected.


32. What is an equivalence partition (also known as an equivalence class)?

An input or output range of values such that only one value in the range becomes a test case.


33. When should configuration management procedures be implemented?

During test planning.


34. A type of functional testing which investigates the functions relating to the detection of threats, such as viruses, from malicious outsiders.

Security Testing


35. Testing wherein we subject the target of the test to varying workloads to measure and evaluate its performance behaviour and its ability to continue to function properly under these different workloads.

Load Testing


36. Testing activity which is performed to expose defects in the interfaces and in the interaction between integrated components is:

Integration Level Testing


37. What are the Structure-based (white-box) testing techniques ?

Structure-based testing techniques (which are also dynamic rather than static) use the internal structure of the software to derive test cases. They are commonly called 'white-box' or 'glass-box' techniques (implying you can see into the system) since they require knowledge of how the software is implemented, that is, how it works. For example, a structural technique may be concerned with exercising loops in the software. Different test cases may be derived to exercise the loop once, twice, and many times. This may be done regardless of the functionality of the software.
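For instance, a structural technique applied to the hypothetical sum_positive function below derives test cases purely from the loop and branch in the code, exercising the loop zero, one and many times regardless of what the function is for:

    # Hypothetical component with one branch inside one loop.
    def sum_positive(values):
        total = 0
        for v in values:      # the loop a structural technique wants to exercise
            if v > 0:
                total += v
        return total

    # White-box test cases derived from the code structure, not the business rules:
    assert sum_positive([]) == 0             # loop executed zero times
    assert sum_positive([5]) == 5            # loop executed exactly once
    assert sum_positive([1, -2, 3, 4]) == 8  # many iterations, both branches taken
    print("loop exercised zero, one and many times")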


38. When should regression testing be performed?

After the software has changed or when the environment has changed


39. When should testing be stopped?

It depends on the risks for the system being tested


40. What is the purpose of a test completion criterion?

To determine when to stop testing


41. What can static analysis NOT find?

For example, memory leaks, since they only manifest when the code is actually executed.


42. What is the difference between re-testing and regression testing?

Re-testing ensures the original fault has been removed; regression testing looks for unexpected side-effects.


43. What are the Experience-based testing techniques ?

In experience-based techniques, people's knowledge, skills and background are a prime contributor to the test conditions and test cases. The experience of both technical and business people is important, as they bring different perspectives to the test analysis and design process. Due to previous experience with similar systems, they may have insights into what could go wrong, which is very useful for testing.


44. What type of review requires formal entry and exit criteria, including metrics?

Inspection


45. Could reviews or inspections be considered part of testing?

Yes, because both help detect faults and improve quality


46. An input field takes the year of birth between 1900 and 2004. What are the boundary values for testing this field?

1899, 1900, 2004, 2005


47. Which of the following tools would be involved in the automation of regression test?

a. Data tester
b. Boundary tester
c. Capture/Playback
d. Output comparator

d. Output comparator


48. To test a function, what does a programmer have to write that calls the function to be tested and passes it test data?

  Driver


49. What is the one Key reason why developers have difficulty testing their own work?

Lack of Objectivity


50. “How much testing is enough?”

The answer depends on the risk for your industry, contract and special requirements.