Demonstrations, Experiments, and Software Testing
Did you know there’s a difference between an experiment and a demonstration? And have you ever considered why this difference is critical to a team of coders, testers, and managers?
In the software development life cycle, both experiments and demonstrations go by the same name: “tests”.
While demonstrations show us something we already know, experiments help us learn things we need or want to know.
The difference is critical because the purpose of testing must not be simply to show that the product can work.
We test to learn about the product so that we can understand it well and address problems before it’s too late.
According to Michael Bolton, Lead Consultant, DevelopSense: “The more similar a test is to a previous instance of it, the less likely it is to find a bug. That’s why it’s essential to include plenty of variation in your testing.”
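Bolton’s point about variation can be sketched in code. The example below is a minimal, hypothetical illustration (the `slugify` function and the properties checked are assumptions, not from the article): a demonstration-style check confirms a single case we already know, while an experiment-style check varies the input to learn something new about the product.

```python
import random

def slugify(title):
    """Toy function under test: lowercase the title, hyphenate spaces."""
    return title.lower().replace(" ", "-")

# Demonstration: one fixed check that confirms something we already know.
assert slugify("Hello World") == "hello-world"

# Experiment: vary the input widely to learn something we don't know yet.
random.seed(0)
alphabet = "abc XYZ _-!"
for _ in range(200):
    title = "".join(random.choice(alphabet)
                    for _ in range(random.randint(0, 12)))
    slug = slugify(title)
    # Properties we expect to hold for *any* input, not just one example.
    assert slug == slug.lower()
    assert " " not in slug
```

The fixed assertion will pass forever without teaching us anything new; the randomized loop is far more likely to surprise us, which is the point of an experiment.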
In his article titled Alternatives to Manual Testing, Michael Bolton explains how experiential and exploratory testing are not the same.
“Of course, there’s overlap between those two kinds of encounters. A key difference is that the tester, upon encountering a problem, will investigate and report it. A user is much less likely to do so. (Notice this phenomenon while trying to enter a link from LinkedIn’s Articles editor; the “apply” button isn’t visible and hides off the right-hand side of the popup. I found this while interacting with LinkedIn experientially. I’d like to hope that I would have found that problem when testing intentionally, in an exploratory way, too.)”
While the scope of testing is vast, a context-driven approach to automation in testing certainly brings in more value.
A Context-Driven Approach to Automation in Testing
Test automation can certainly do much more than simply simulating a user pressing buttons.
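As a hedged illustration of tool-assisted testing that goes beyond pressing buttons, the sketch below uses a short script to summarize a server log so a human tester can spot anomalies worth investigating. The log format, field names, and sample lines are assumptions made for this example, not details from the article:

```python
import collections
import re

# Hypothetical sample of application log lines (format is an assumption).
LOG_LINES = [
    "2024-05-01 10:00:01 INFO  login ok user=alice",
    "2024-05-01 10:00:02 ERROR timeout user=bob",
    "2024-05-01 10:00:05 ERROR timeout user=bob",
    "2024-05-01 10:00:09 WARN  slow query 2100ms",
]

def summarize(lines):
    """Count log levels so a tester can spot unusual error clusters."""
    counts = collections.Counter()
    for line in lines:
        match = re.search(r"\b(INFO|WARN|ERROR)\b", line)
        if match:
            counts[match.group(1)] += 1
    return dict(counts)

print(summarize(LOG_LINES))  # {'INFO': 1, 'ERROR': 2, 'WARN': 1}
```

The tool here does not replace the tester’s judgment; it condenses raw data so the tester can decide where to look next, which is one of the many non-button-pressing ways tools can serve testing.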
Context-driven testers choose their testing objectives, techniques, and deliverables by looking first at the details of the specific situation, including the desires of the stakeholders who commissioned the testing.
In a paper authored by James Bach, creator of the Rapid Software Testing methodology, and Michael Bolton, they write: “There are many wonderful ways tools can be used to help software testing. Yet, all across industry, tools are poorly applied, which adds terrible waste, confusion, and pain to what is already a hard problem. Why is this so? What can be done? We think the basic problem is a shallow, narrow, and ritualistic approach to tool use. This is encouraged by the pandemic, rarely examined, and absolutely false belief that testing is a mechanical, repetitive process. Good testing, like programming, is instead a challenging intellectual process. Tool use in testing must therefore be mediated by people who understand the complexities of tools and of tests. This is as true for testing as for development, or indeed as it is for any skilled occupation from carpentry to medicine.”
The essence of context-driven testing is the project-appropriate application of skill and judgment. Context-driven testing places this approach to testing within a humanistic social and ethical framework.
Ultimately, context-driven testing is about doing the best we can with what we have. Rather than trying to apply “best practices,” we accept that very different practices will work best under different circumstances.
Seven Basic Principles of Context-Driven Testing
As laid down by Cem Kaner, J.D., Ph.D., Michael Bolton, and James Bach, the seven basic principles of context-driven testing include –
- The best software testing is a challenging intellectual process.
- Individuals, working together, are the most significant part of any project’s context.
- Projects unfold over time in ways that are often not predictable.
- The value of any practice depends on its context.
- The product is a solution. If the problem isn’t solved, the product doesn’t work.
- There are no best practices, only good practices in context.
- Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.
Illustrations of the principles in action include –
- Testing is carried out on behalf of stakeholders in the service of developing, qualifying, debugging, investigating, or selling a product. Entirely different testing approaches could be suitable for these different objectives.
- Testing groups exist to provide testing-related services. They do not run the development project; rather, they serve it.
- Metrics that are not valid are dangerous.
- The critical value of any test case lies in its ability to provide information (i.e., to reduce uncertainty).
- All oracles are fallible. Even though the product may seem to pass your test, it might well have failed in ways that you (or the automated test program) were not monitoring.
- In essence, automated testing is not automatic manual testing; it is nonsensical to speak of automated tests as if they were automated manual tests.
- It is completely appropriate for different test groups to have different missions. A core practice in the service of one mission might be irrelevant or counter-productive in the service of another.
- Different types of defects will be revealed by different types of tests. Tests should become more challenging, or should focus on different risks, as the program becomes more stable.
- Test artifacts are worthwhile to the degree that they satisfy their stakeholders’ relevant requirements.
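The point that all oracles are fallible can be made concrete in code. In this hypothetical sketch (the `average` function and its bug are invented for illustration), the test’s oracle checks only the return value, so the test “passes” even though the function damages its input in a way the oracle never monitors:

```python
def average(values):
    """Compute the mean -- but with a side effect the oracle never checks."""
    total = 0.0
    count = 0
    while values:            # bug: consumes (empties) the caller's list
        total += values.pop()
        count += 1
    return total / count

data = [2.0, 4.0, 6.0]
assert average(data) == 4.0  # the oracle passes: the mean is correct...
assert data == []            # ...but the input list was silently destroyed
```

An oracle that also checked `data` afterwards would have caught the problem; since it didn’t, a green test result here tells us less than it appears to.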
From Michael Bolton’s point of view, testing must be a social (and socially challenging), cognitive, risk-focused, critical (in several senses), analytical, investigative, skilled, technical, exploratory, experiential, experimental, scientific, revelatory, honorable craft; not “manual” or “automated”. He urges that misleading distinction to take a long vacation on a deserted island.
Cigniti conducted an interesting webinar in which Michael Bolton, Lead Consultant, DevelopSense, discussed the difference between demonstrations and experiments in software testing.
Michael Bolton is a consulting software tester and testing teacher who helps people to unravel testing problems that they didn’t realize they could solve. He is the co-author (with James Bach) of Rapid Software Testing (RST), a strategy and mindset for testing software expertly and credibly in uncertain conditions and under extreme time pressure. He has taught RST to testers in 35 countries. Michael has been testing, developing, managing, and writing about software since 1988.
In this presentation, Michael Bolton explained the difference, and the way scientists (and, yes, philosophers of science) came to distinguish between demonstration and experiment.
Cigniti Technologies Limited, a global leader in providing AI-driven, IP-led, strategic digital assurance, software quality engineering, testing and consulting services, is headquartered in Hyderabad, India, with offices in USA, U.K., UAE, Australia, Czech Republic and Singapore. Leading global enterprises including Fortune 500 & Global 2000, trust us to accelerate their digital transformation, continuously expand their digital horizons and assure their digital next. We bring the power of AI into Agile and DevOps and offer digital services encompassing intelligent automation, big data analytics, cloud migration assurance, 5G Assurance, Customer experience assurance and much more. Our IP, next-gen quality engineering platform, BlueSwan helps assure digital next by predicting and preventing unanticipated application failures, thereby assisting our clients in accelerating their adoption of digital.