Sanity testing vs Smoke testing

Sanity testing and Smoke testing. What is the difference?

For the last ten years I have answered this question by justifying the answer I read in a book. It is also one of the questions I ask in interviews. It is the kind of question that does not have a single answer.

It is a question that should not be answered except perhaps by questioning the question: Why does it matter to you? Who is asking you? Have you looked it up on Google?

But if you still ask for it, here is the answer.

Both smoke testing and sanity testing refer to a first-pass, shallow form of testing intended to establish whether a product or system can perform the most basic functions. Both have one objective of accepting or rejecting the build available for testing. Some people call such testing “smoke testing”; others call it “sanity testing”.
“Smoke testing” derives from the hardware world; if you create an electronic circuit, power it up, and smoke comes out somewhere, the smoke test has failed.
“Sanity testing” comes from a similar idea: if the product behaves in some crazy fashion, the sanity test has failed.

Go through the blog below by Michael Bolton for a more detailed analysis.

http://www.developsense.com/blog/2011/11/smoke-testing-vs-sanity-testing/

Bug – Severity and Priority

Severity and priority go hand in hand, and there are often misconceptions about them.

What should the severity and priority of a bug be? Sometimes a developer does not agree with the severity and priority assigned to a bug.

Is it possible to change the severity and priority of a bug during the bug’s life cycle?

Priority – yes. Severity – no.

Why?

“Severity” is the noun associated with the adjective “severe”. Severity, with respect to a bug, is basically how big the bug is and how much trouble it is going to cause. In other words, severity is about the risk a bug poses if the application containing the bug is released to the customer.

We assess the risk of a bug by asking questions like the following (a rough scoring sketch appears after the list):

  • How much harm could this bug cause to something the customer cares about?
  • How likely is the bug to manifest, and how likely is that harm to occur if it does?
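One illustrative way to turn these questions into a rough number is a simple impact-times-likelihood heuristic. This is only a minimal sketch, and the 1–4 rating scales are assumed for the example:

    # Illustrative only: a rough "impact x likelihood" heuristic for bug risk.
    # The 1-4 rating scales are assumptions for this example, not a standard.
    def risk_score(impact: int, likelihood: int) -> int:
        """impact and likelihood are each rated 1 (low) to 4 (high)."""
        return impact * likelihood

    # A crash on a rarely used admin screen: high impact, low likelihood.
    print(risk_score(impact=4, likelihood=1))  # 4
    # A cosmetic glitch on a page every user sees: low impact, high likelihood.
    print(risk_score(impact=1, likelihood=4))  # 4

Note that two very different bugs can carry a similar risk, which is exactly why the two questions above are asked separately.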

When we are testing and we think we see a problem, we do not see everything about that problem. We see what some people call a failure, a symptom. The symptom we observe may be a manifestation of a coding error, of a design issue, or of a misunderstood or mis-specified requirement. We see the symptom, but we do not see the cause or the underlying fault. Whatever we are observing may be a terrible problem for some user or some customer somewhere, or the customer might not notice or care.

It is easy to believe that a serious problem will always be immediately and dramatically obvious. It is also easy to believe that a problem that looks like big trouble is big trouble, even when a fast one-byte fix will make the problem go away forever. We also become easily confused about the relationships among the prominence of the symptom, the impact on the customer, the difficulty of fixing the problem, and the urgency of the fix relative to the urgency of releasing the product.

Most organizations have standard criteria for classifying bug severity (a small code sketch follows the list):

  • Severity 1 or S1 or Showstopper – Catastrophic bug. Causes a system crash, data corruption, or irreparable harm, or blocks further testing of the module/application.
  • Severity 2 or S2 or High Severity – Critical bug in an important function, with no reasonable workaround. Examples: data security and integrity problems, inaccessibility, batch job aborts, and errors causing system reloads that prevent a customer from conducting business or using the application.
  • Severity 3 or S3 or Medium Severity – Major bug, but with a viable workaround.
  • Severity 4 or S4 or Low Severity – Minor bug with trivial impact.
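As a minimal sketch, such a scheme can be mirrored directly in code; the enum below is purely illustrative and assumes the S1–S4 levels described above:

    from enum import IntEnum

    class Severity(IntEnum):
        """Illustrative mirror of the S1-S4 scheme described above."""
        S1_SHOWSTOPPER = 1  # crash, data corruption, blocks further testing
        S2_HIGH = 2         # critical function broken, no reasonable workaround
        S3_MEDIUM = 3       # major bug, but a viable workaround exists
        S4_LOW = 4          # minor bug with trivial impact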

As for priority: priority is the order in which someone wants things to be done. Some define “priority” as a tester’s assessment of how important it is to fix the problem, a kind of ranking of what should be fixed first. Based on my experience, I do not see this as being a tester’s business at all.

Deciding what should be done on a programming or business level is the job of the person with authority and responsibility over the work, in collaboration with the people who are actually doing the work. When I am a tester, there is one exception: if I see a problem that is preventing me from doing further testing, I will request that the fix for that problem be fast-tracked (and I’ll outline the risks of not being able to test that area of the product). Deciding what gets fixed first is for those who do the managing and the fixing.

Priorities are commonly classified as high, medium, and low. Sometimes they are also denoted P1, P2, P3, and P4.

Theoretically, bug severity does not change: the potential for business or technical impact stays pretty much the same throughout the development project. It is the same as with a severe accident: the severity of the accident itself never changes after the fact.

Priorities for fixing bugs do change depending on where we are in the project. Initially the tester’s priorities dominate, but as we inch closer to the release date, the customer’s priorities become more important. Consider an example: tomorrow you have an important meeting with a client. In the morning, just as you are about to leave, you get a call from a friend who has met with an accident and needs your immediate help. You now re-prioritize your tasks: the meeting becomes low priority with respect to helping your friend, and you reschedule it for another time. The same goes for bugs.
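This asymmetry can be made concrete in a small sketch. The class and names below are hypothetical, assuming the S1–S4 and P1–P4 scales above: severity is read-only once the bug is filed, while priority can be revised.

    class Bug:
        """Illustrative: severity is fixed at filing time; priority may change."""

        def __init__(self, title: str, severity: int, priority: str = "P3"):
            self.title = title
            self._severity = severity  # S1..S4, assessed once when the bug is filed
            self.priority = priority   # P1..P4, revised as the project moves on

        @property
        def severity(self) -> int:
            # Read-only: the potential harm does not change during the project.
            return self._severity

        def reprioritize(self, new_priority: str) -> None:
            # Priority legitimately changes, e.g. as the release date approaches.
            self.priority = new_priority

    bug = Bug("Crash when saving a report", severity=2, priority="P2")
    bug.reprioritize("P1")  # fine: fix order shifts as release pressure grows
    # bug.severity = 1      # AttributeError: severity stays as assessed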

This answers the initial questions.

Exploratory testing

Exploratory testing is a powerful yet widely misunderstood approach. Almost all testers do exploratory testing at some time or another, in one way or another.

Once in an interview I asked the candidate, “All your test cases have passed; does this mean the application is bug free?” He was intelligent enough to answer, “No, there might still be bugs left in the application.” This is where exploratory testing comes in handy: it helps you find the hidden bugs missed by your test cases.

Exploratory testing (ET) is a simple concept: simultaneous learning, test design, and test execution. But the fact that it can be described in a sentence can make it seem like something not worth describing. Most books on software testing do not describe it, and some discourage its use.

Exploratory testing is often loosely equated with ad hoc testing. Consider chess: the procedure of playing chess remains constant; it is only the choices that change, and the skill of the player who chooses the next move. Similarly, in exploratory testing, it is the tester’s next move that matters, and the skill of the tester who chooses it.

As per James Bach, “Exploratory testing is simultaneous learning, test design, and test execution”.

In short, Exploratory testing is any testing to the extent that the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests.

There is a misconception that in exploratory testing the only official result is a set of bug reports. Ideally, the exploratory tester should also prepare written notes about the testing done and the areas covered, which can be reviewed by leads/managers. It may also result in updated test data and test materials. This helps in tracking the exploratory testing.
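As a minimal sketch of what such notes might capture, loosely modelled on session-based test management (the fields below are assumptions, not a prescribed format):

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SessionNotes:
        """Illustrative record of one exploratory testing session."""
        charter: str                      # the mission of the session
        areas_covered: List[str] = field(default_factory=list)
        bugs_found: List[str] = field(default_factory=list)
        notes: str = ""                   # observations, questions, follow-ups

    session = SessionNotes(
        charter="Explore report export for data-loss risks",
        areas_covered=["CSV export", "PDF export"],
        bugs_found=["Export silently drops rows with Unicode names"],
        notes="PDF export untested with large data sets; follow up next session.",
    )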

The following are scenarios where exploratory testing can be used:

  • when you need to provide rapid feedback on a new product or feature;
  • when you need to learn the product quickly by exploring;
  • when you have already tested using scripts and want to diversify the testing;
  • when you want to find the single most important bug in the shortest time;
  • when you want to check the work of another tester by doing a brief independent investigation;
  • when you want to investigate and isolate a particular defect;
  • when you want to investigate the status of a particular risk, in order to evaluate the need for scripted tests in that area.

Fundamentals of Testing – Testing Principles

Principle 1: Testing shows the presence of defects
Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software, but even if no defects are found, it is not proof that the application is defect free.

Principle 2: Exhaustive testing is impossible
Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Instead of exhaustive testing, we use risks and priorities to focus testing efforts.
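A quick back-of-the-envelope calculation shows why; the numbers below are hypothetical, just to illustrate the combinatorial explosion:

    # Illustrative: even a tiny form cannot be tested exhaustively.
    values_per_field = 2 ** 32  # possible values of one 32-bit integer field
    fields = 3                  # a hypothetical form with three such fields
    combinations = values_per_field ** fields
    print(combinations)         # 79228162514264337593543950336, about 7.9e28

At a million test executions per second, covering every combination would still take on the order of 10^15 years, which is why risk and priorities must guide what actually gets tested.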

Principle 3: Early testing
Testing activities should start as early as possible in the software or system development life cycle and should be focused on defined objectives. Static testing, such as reviews of requirements and designs, should be part of the process.

Principle 4: Defect clustering
A small number of modules contain most of the defects discovered during pre-release testing or show the most operational failures.

Principle 5: Pesticide paradox
If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new bugs. To overcome this ‘pesticide paradox’, the test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to potentially find more defects.

Principle 6: Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site.

Principle 7: Absence of errors fallacy
Finding and fixing defects does not help if the system built is unusable and does not fulfill the users’ needs and expectations.