Q What is Use Case Testing?
- This technique helps identify test cases that cover the entire system, on a transaction basis from start to finish.
- A use case is a description of a particular use of the system by an actor (user).
- It is used widely in developing tests at the system or acceptance level.
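As an illustration, here is a minimal pytest-style sketch of use case testing for a hypothetical login use case: one test case per flow, each driving a transaction from start to finish. The `authenticate` function is a toy stand-in for the system under test, not a real API.

```python
# Minimal sketch of use case testing for a hypothetical login use case.
# The use case has a main (success) flow and an exception flow; one
# end-to-end test case is derived per flow.

def authenticate(username: str, password: str) -> str:
    """Toy system under test: returns a status for the login transaction."""
    if username == "alice" and password == "s3cret":
        return "dashboard"               # main flow ends at the dashboard
    return "error: bad credentials"      # exception flow

def test_main_flow():
    # Use case main flow: the actor supplies valid credentials, start to finish.
    assert authenticate("alice", "s3cret") == "dashboard"

def test_exception_flow():
    # Use case exception flow: invalid credentials produce an error message.
    assert authenticate("alice", "wrong") == "error: bad credentials"
```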

Q What is the difference between a use case and a test case?
A use case is a simple flow between the end user and the system. It contains preconditions, postconditions, normal flows, and exceptions, and it is written by a Team Lead/Test Lead/Tester. A test case is a document that contains the steps to be executed; it is planned in advance.
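For contrast, here is an illustrative test case record for the login example sketched above; the field names follow common test-case templates rather than any specific tool.

```python
# An illustrative test case document as a structured record.
# All field names and values here are examples, not a standard.
test_case = {
    "id": "TC-001",
    "title": "Valid login reaches dashboard",
    "precondition": "User 'alice' exists and is active",
    "steps": [
        "Open the login page",
        "Enter username 'alice' and password 's3cret'",
        "Click 'Sign in'",
    ],
    "expected_result": "Dashboard page is displayed",
    "postcondition": "An active session exists for 'alice'",
}
```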
Q What is Gun Shot Testing?
Gun shot residue (GSR) testing is a forensic technique for detecting certain materials on the hands and clothing of a subject, in the hope of determining whether that individual discharged a firearm. The concept is an old one and dates back many years; advances in testing technology have made the examination far more specific than it was 30-40 years ago.
In software, the related term is shotgun debugging: debugging a program, hardware, or system problem by trying several possible solutions at the same time, in the hope that one of them will work. This approach may work in some circumstances, but it risks introducing new and even more serious problems.
Q What is Adhoc Testing?
Adhoc testing is an informal testing type that aims to break the system. It is usually an unplanned activity and does not follow any test design technique to create test cases; in fact, it does not create test cases at all! This testing is primarily performed when the testers' knowledge of the system under test is very high. Testers randomly test the application without any test cases or any business requirement document.
Ad hoc testing does not follow any structured way of testing; it is done randomly on any part of the application. Its main aim is to find defects by random checking. Adhoc testing can be carried out with the technique called Error Guessing, which can be done by people with enough experience of the system to "guess" the most likely sources of errors.
This testing requires no documentation, planning, or process to be followed. Since it aims to find defects through a random approach, without any documentation, defects will not be mapped to test cases. Hence it is sometimes very difficult to reproduce the defects, as there are no test steps or requirements mapped to them.
Testers should have good knowledge of the business and a clear understanding of the requirements; detailed knowledge of the end-to-end business process helps find defects easily. Experienced testers find more defects because they are better at error guessing.
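A small sketch of error guessing in practice: `parse_age` is a toy function invented for illustration, and the suspect inputs are the kind an experienced tester would guess at rather than derive from a formal design technique.

```python
# A sketch of error guessing: probe a toy parse_age() function with
# inputs that commonly break parsers. Nothing here is from a real library.

def parse_age(value: str) -> int:
    """Toy function under test: parse a non-negative age from text."""
    age = int(value.strip())
    if age < 0:
        raise ValueError("age cannot be negative")
    return age

# Trouble spots guessed from experience, not derived from requirements.
suspect_inputs = ["", "  ", "-1", "abc", "9" * 100, "12.5", None]

for raw in suspect_inputs:
    try:
        print(repr(raw), "->", parse_age(raw))   # "9"*100 exposes a missing upper bound
    except Exception as exc:                      # show every failure mode
        print(repr(raw), "-> raised", type(exc).__name__, exc)
```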
Q What is Monkey Testing?
Monkey Testing is defined as testing that deals with random inputs. Why is it called Monkey Testing? Why the 'monkey'? Here is the answer (with a random-input sketch after the list):
- In Monkey Testing the tester (and sometimes the developer) is considered the 'monkey'.
- If a monkey used a computer, it would randomly perform any task on the system, with no understanding of it.
- Likewise, the tester applies random inputs to the system under test to find bugs/errors without predefining any test case.
- In some cases, Monkey Testing is dedicated to Unit Testing or GUI Testing too.
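Here is a minimal monkey-testing loop, assuming a toy `format_username` function as the system under test; it hammers the function with random printable strings and reports the first crash.

```python
import random
import string

# Minimal monkey-testing loop: no predefined test cases, just random input.

def format_username(name: str) -> str:
    """Toy system under test with a hidden bug for empty input."""
    return name[0].upper() + name[1:].lower()   # name[0] crashes on ""

random.seed(42)  # reproducible randomness helps when re-running a crash
for _ in range(1000):
    length = random.randint(0, 8)
    garbage = "".join(random.choice(string.printable) for _ in range(length))
    try:
        format_username(garbage)
    except Exception as exc:
        print(f"crash on {garbage!r}: {type(exc).__name__}: {exc}")
        break
```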
Q What is a test design technique?
A test design technique is NOT just a process for selecting test cases; it is a process for writing test cases and determining their expected outputs.
Q What is a Test Environment?
A testing environment is a setup of software and hardware on which the testing team executes test cases. In other words, it supports test execution with the hardware, software, and network configured.
The test bed or test environment is configured as per the needs of the Application Under Test. On some occasions, the test bed may be the combination of the test environment and the test data it operates on.
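As a sketch of what a test bed definition can capture (all names and values below are illustrative, not from any real project):

```python
# A test bed captured as configuration: hardware, software, network,
# and (sometimes) the test data. Everything here is an example value.
TEST_ENVIRONMENT = {
    "app_server": {"os": "Ubuntu 22.04", "runtime": "Python 3.11"},
    "database":   {"engine": "PostgreSQL 15", "host": "qa-db.example.test"},
    "network":    {"proxy": None, "bandwidth_limit_mbps": 100},
    "test_data":  "fixtures/customers_small.sql",  # test bed = env + data
}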
Q Bug life cycle
The bug (defect) life cycle is the cycle a defect goes through during its lifetime. It starts when the defect is found and ends when the defect is closed, after ensuring it cannot be reproduced. The defect life cycle relates to the bugs found during testing.
The bug has different states in its life cycle. It passes through the following states (a state-machine sketch follows the list):
- New: When a defect is logged and posted for the first time, its state is 'New'.
- Assigned: After the tester posts the bug, the tester's lead approves that the bug is genuine and assigns it to the corresponding developer and developer team. Its state is 'Assigned'.
- Open: The developer has started analyzing and working on the defect fix.
- Fixed: When the developer makes the necessary code changes and verifies them, the bug status is set to 'Fixed' and the bug is passed to the testing team.
- Pending retest: After fixing the defect, the developer hands the changed code to the tester for retesting. The testing is pending on the tester's end, so the status is 'Pending retest'.
- Retest: The tester retests the changed code the developer handed over, to check whether the defect is actually fixed.
- Verified: The tester tests the bug again after the developer fixes it. If the bug is no longer present in the software, the tester approves the fix and changes the status to 'Verified'.
- Reopen: If the bug still exists even after the developer's fix, the tester changes the status to 'Reopened', and the bug goes through the life cycle once again.
- Closed: Once the bug is fixed and retested, and the tester confirms the bug no longer exists in the software, the status is changed to 'Closed'. This state means the bug is fixed, tested, and approved.
- Duplicate: If the bug is reported twice, or two bugs describe the same issue, one bug's status is changed to 'Duplicate'.
- Rejected: If the developer concludes that the bug is not genuine, the bug's state is changed to 'Rejected'.
- Deferred: A bug moved to the 'Deferred' state is expected to be fixed in a later release. Common reasons: the bug's priority is low, there is not enough time before the release, or the bug has no major effect on the software.
- Not a bug: The state is 'Not a bug' when there is no change to the functionality of the application. For example, if the customer asks for a change in the look and feel of the application, such as the colour of some text, it is not a bug but a cosmetic change.
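The life cycle above can be expressed as a small state machine. The transition table below is one plausible encoding of the states just listed, not a standard mandated by any bug-tracking tool.

```python
# The bug life cycle as a state machine; the dict encodes legal moves.
ALLOWED = {
    "New":            {"Assigned", "Rejected", "Duplicate", "Deferred", "Not a bug"},
    "Assigned":       {"Open"},
    "Open":           {"Fixed", "Rejected", "Deferred", "Not a bug"},
    "Fixed":          {"Pending retest"},
    "Pending retest": {"Retest"},
    "Retest":         {"Verified", "Reopen"},
    "Verified":       {"Closed"},
    "Reopen":         {"Assigned"},   # the bug goes through the cycle again
    "Closed":         set(),
}

def move(state: str, new_state: str) -> str:
    if new_state not in ALLOWED[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# Happy path: New -> ... -> Closed
s = "New"
for nxt in ["Assigned", "Open", "Fixed", "Pending retest", "Retest", "Verified", "Closed"]:
    s = move(s, nxt)
print("final state:", s)  # final state: Closed
```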
Q What is the difference between Quality Assurance, Quality Control and testing?
Quality Assurance is the process of planning and defining how the quality (test) processes will be monitored and implemented within a team and organization. This method basically defines and sets the quality standards of the projects.
Quality Control is the process of finding defects and providing suggestions to improve the quality of the software. The methods used by Quality Control are usually established by Quality Assurance.
It is the primary responsibility of the testing team to implement quality control.
Testing is the process of finding defects/bugs. It validates whether the software built by the development team meets the requirements set by the user and the standards set by the organization.
Here the main focus is on finding bugs, and the testing team acts as the quality gatekeeper.
Q Quality Assurance vs Quality Control
| Quality Assurance | Quality Control |
| --- | --- |
| A process which concentrates on providing assurance that quality requirements will be achieved. | A process which concentrates on fulfilling the quality requirements. |
| QA aims to prevent defects. | QC aims to identify and fix defects. |
| QA is a technique for managing quality. | QC is a method for verifying quality. |
| QA does not involve executing the program. | QC always involves executing the program. |
| All team members are responsible for QA. | The testing team is responsible for QC. |
| QA example: Verification. | QC example: Validation. |
| QA means planning how a process will be carried out. | QC means acting to execute the planned process. |
| The statistical technique used in QA is Statistical Process Control (SPC). | The statistical technique used in QC is Statistical Quality Control (SQC). |
| QA makes sure you are doing the right things. | QC makes sure the results of what you've done are what you expected. |
| QA defines the standards and methodologies to be followed in order to meet customer requirements. | QC ensures that those standards are followed while working on the product. |
| QA is the process used to create the deliverables. | QC is the process used to verify the deliverables. |
| QA is responsible for the full software development life cycle. | QC is responsible for the software testing life cycle. |
Key Points
- In QA, processes are planned to avoid defects.
- QC deals with discovering defects and correcting them while making the product.
- QA detects weaknesses in the process.
- QC detects defects in the product.
- QA is process oriented.
- QC is product oriented.
- QA is a failure prevention system.
- QC is a failure detection system.
Q Defect Density Fundamentals
DEFINITION
Defect Density is the number of confirmed defects detected in a software/component during a defined period of development/operation, divided by the size of that software/component.
ELABORATION
The 'defects' are:
- confirmed and agreed upon (not just reported); dropped defects are not counted.
The 'period' may be:
- a duration (say, the first month, the quarter, or the year),
- each phase of the software life cycle, or
- the whole of the software life cycle.
The 'size' is measured in:
- Function Points (FP)
- Source Lines of Code (SLOC)
DEFECT DENSITY FORMULA
Defect Density = Number of Confirmed Defects / Size of the Software/Component
(size is typically measured in KLOC or Function Points; a worked example follows the USES list below)
USES
- For comparing the relative number of defects in various software components, so that high-risk components can be identified and resources focused on them.
- For comparing software/products, so that the quality of each can be quantified and resources focused on those with low quality.
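As a worked example of the formula, here is a minimal sketch; the numbers are illustrative.

```python
# Worked example of the defect density formula:
# 30 confirmed defects in a 15 KLOC component -> 2 defects per KLOC.

def defect_density(confirmed_defects: int, size_kloc: float) -> float:
    """Defect density = confirmed defects / size (here, per KLOC)."""
    return confirmed_defects / size_kloc

print(defect_density(30, 15.0))  # 2.0 defects per KLOC
```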
Q What are the challenges in Software Testing?
Testing is not always smooth sailing. Sometimes testers themselves add complications to a testing process through an unskilled way of working. The list below collects the main testing challenges created by testing staff, development staff, testing processes, and wrong management decisions.
So here we go with the top challenges:
1) Testing the complete application:
Is it possible? I think it is impossible. There are millions of test combinations, and it is not possible to test every combination in either manual or automation testing. If you tried all of them, you would never ship the product ;-)
2) Misunderstanding of company processes:
Sometimes testers just don't pay proper attention to what the company-defined processes are and what purposes they serve. There is also a myth among testers that they should follow company processes even when those processes are not applicable to their current testing scenario. This results in incomplete and inappropriate application testing.
3) Relationship with developers:
A big challenge. It requires a very skilled tester to handle this relationship positively while still completing the work the tester's way. There are hundreds of excuses developers or testers can make when they do not agree on some point. This also requires good communication, troubleshooting, and analytical skills from the tester.
4) Regression testing:
As a project keeps expanding, the regression testing work simply becomes unmanageable: pressure to handle current functionality changes, checks on previously working functionality, and bug tracking.
5) Lack of skilled testers:
I would call this a 'wrong management decision' when selecting or training testers for the project task at hand. Unskilled testers may add more chaos than they remove, resulting in incomplete, insufficient, and ad hoc testing throughout the testing life cycle.
6) Testing always under time constraint:
"Hey tester, we want to ship this product by this weekend, are you ready?" When this order comes from the boss, the tester focuses on task completion rather than on test coverage and quality of work. There is a huge list of tasks to complete within the specified time, including writing, executing, automating, and reviewing the test cases.
7) Which tests to execute first?
If you are facing the challenge stated in point 6, how will you decide which test cases to execute and with what priority? Which tests are more important than others? Answering this under pressure requires good experience.
8) Understanding the requirements:
Sometimes testers are responsible for communicating with customers to understand the requirements. What if the tester fails to understand them? Will he be able to test the application properly? Definitely not! Testers need good listening and comprehension skills.
9) Automation testing:
This has many sub-challenges: should the testing work be automated at all? To what level should automation be done? Are sufficient and skilled resources available for automation? Does the schedule permit automating the test cases? The decision between automation and manual testing needs to weigh the pros and cons of each approach.
10) The decision to stop the testing:
When to stop testing? A very difficult decision. It requires sound judgment of the testing processes and the importance of each process, plus the ability to make decisions 'on the fly'.
11) One test team under multiple projects:
It is challenging to keep track of each task, and communication becomes difficult. Many times this results in the failure of one or both projects.
12) Reuse of Test scripts:
Application development methods are changing rapidly, making it difficult to manage test tools and test scripts. Test script migration or reuse is essential, but a difficult task.
13) Testers focusing on finding easy bugs:
If the organization rewards testers based on the number of bugs found (a very bad way to judge testers' performance), some testers will concentrate only on finding easy bugs that don't require deep understanding and testing. Hard or subtle bugs remain unnoticed under such an approach.
14) To cope with attrition:
Rising salaries and benefits tempt many employees to leave the company at very short career intervals, and management struggles to cope with the attrition rate. The challenges: new testers require project training from the beginning, complex projects are difficult to understand, and shipping dates slip!
Q What is Iterative and Incremental?
- Iterative - you don't finish a feature in one go. You are in a code >> get feedback >> code >> ... cycle. You keep iterating till done.
- Incremental - you build as much as you need right now. You don't over-engineer or add flexibility unless the need is proven. When the need arises, you build on top of whatever already exists. (Note: this differs from iterative in that you're adding new things, vs. refining something.)
- Agile - you are agile if you value the same things as listed in the Agile Manifesto. It also means there is no standard template, checklist, or procedure to "do agile". It doesn't over-specify; it just states that you can use whatever practices you need to "be agile". Scrum, XP, and Kanban are some of the more prescriptive 'agile' methodologies; they share the same set of values: continuous and early feedback, frequent releases/demos, evolving design, and so on. Hence they can be iterative and incremental.
Q Test Deliverables
Test Deliverables are the test artifacts that are given to the stakeholders of a software project during the SDLC (Software Development Life Cycle). A software project which follows the SDLC passes through different phases before being delivered to the customer, and there are deliverables in every phase. Some deliverables are provided before the testing phase commences, some during the testing phase, and the rest after the testing phase is completed.

The following is the list of test deliverables:
1. Test Strategy
2. Test Plan
3. Effort Estimation Report
4. Test Scenarios
5. Test Cases/Scripts
6. Test Data
7. Requirement Traceability Matrix (RTM)
8. Defect Report/Bug Report
9. Test Execution Report
10. Graphs and Metrics
11. Test Summary Report
12. Test Incident Report
13. Test Closure Report
14. Release Note
15. Installation/Configuration Guide
16. User Guide
17. Test Status Report
18. Weekly Status Report (project manager to client)
Q What is Gray Box Testing?
Gray Box Testing is a software testing method which is a combination of the Black Box Testing method and the White Box Testing method. In Black Box Testing, the internal structure of the item being tested is unknown to the tester; in White Box Testing, the internal structure is known. In Gray Box Testing, the internal structure is partially known: the tester has access to internal data structures and algorithms for designing the test cases, but tests at the user, or black-box, level.
Gray Box Testing is named so because, in the eyes of the tester, the software program is like a gray/semi-transparent box, inside which one can partially see.
Example
An example of Gray Box Testing would be studying the code of two units/modules (the White Box Testing method) to design test cases, then conducting the actual tests through the exposed interfaces (the Black Box Testing method).
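Here is a sketch of that idea in code: reading the module reveals an internal cache (white-box knowledge), so the test calls the public interface twice with the same input to exercise the cached path, while executing only at the black-box level. `lookup_discount` is a toy module invented for illustration.

```python
import functools

# Gray-box thinking: the tester knows (from the code) that results are
# cached, so a black-box test is designed to cover the cached path.

@functools.lru_cache(maxsize=128)
def lookup_discount(code: str) -> int:
    """Toy module under test: maps a coupon code to a percentage."""
    return {"WELCOME10": 10, "VIP20": 20}.get(code, 0)

def test_cached_path_returns_same_result():
    first = lookup_discount("WELCOME10")   # populates the cache
    second = lookup_discount("WELCOME10")  # served from the cache
    assert first == second == 10           # both paths must agree
```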
The following are the major SOFTWARE TESTING METHODS used while conducting the various Software Testing Types at the various Software Testing Levels:

| Method | Summary |
| --- | --- |
| Black Box Testing | A software testing method in which the internal structure/design/implementation of the item being tested is not known to the tester. These tests can be functional or non-functional, though usually functional. Test design techniques include Equivalence Partitioning, Boundary Value Analysis, and Cause-Effect Graphing. |
| White Box Testing | A software testing method in which the internal structure/design/implementation of the item being tested is known to the tester. Test design techniques include Control Flow Testing, Data Flow Testing, Branch Testing, and Path Testing. |
| Gray Box Testing | A software testing method which is a combination of the Black Box Testing and White Box Testing methods. |
| Agile Testing | A method of software testing that follows the principles of agile software development. |
| Ad Hoc Testing | A method of software testing without any planning or documentation. |
Q What is Re-testing?
Re-testing is a type of testing performed to check that the test cases which failed in the final execution pass successfully after the defects are fixed.
Q What is Regression Testing?
Regression Testing is a type of software testing executed to check that a code change has not adversely affected the existing features and functions of an application.
Re-testing vs Regression Testing is a common FAQ among QA aspirants.
Difference between Retesting and Regression Testing

| Regression Testing | Re-testing |
| --- | --- |
| Performed to confirm that a recent code change has not adversely affected functionality that was already working. | Performed to confirm that the test cases which failed in the previous execution pass after the defects are fixed. |
| Not carried out for specific defect fixes; it checks for unexpected side effects of a change. | Carried out on the basis of specific defect fixes. |
| Defect verification is not part of regression testing. | Defect verification is part of re-testing. |
| Test cases are drawn from previously passed functional test cases. | Test cases are the previously failed test cases. |
| Regression test cases are good candidates for automation. | Re-testing usually re-executes the failed cases with the same data and environment as the original failure. |
| Can be run in parallel with re-testing if resources allow. | Has higher priority, so it is generally executed before regression testing. |
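In practice the split often shows up as separate suites. Here is a hedged pytest sketch; the marker names (`retest`, `regression`) are illustrative conventions, not built-in pytest markers, and would need registering in pytest.ini to avoid warnings.

```python
import pytest

# A re-test re-runs exactly the failed case against the fix; the
# regression suite re-runs unchanged functionality around it.

@pytest.mark.retest          # run with: pytest -m retest
def test_defect_1234_login_rejects_expired_password():
    # The test case that originally failed; it must now pass after the fix.
    ...

@pytest.mark.regression      # run with: pytest -m regression
def test_login_still_accepts_valid_password():
    # Previously passing behaviour, re-checked because nearby code changed.
    ...
```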
