
Monday 16 April 2012

In the face of failure - part 2

Sometimes we fail to achieve what we have set our minds to achieve. That is not always a bad thing, as we might learn something about the process of failing. The failure itself may require us to change our perspective or approach to the task at hand. In the first part I covered what happens when a test fails. In this second part I look into people's failures in their efforts to do something. All along I try to offer insights and learning opportunities on the subjects.

When a person fails (or doesn’t fail)…

Tests can fail. They reveal information when they do so. Even a passed test may reveal new information. The same basically applies to human failures. Why did my effort fail? What are the critical points that led to the failure? Is there a possibility that the failure was inevitable and there was nothing that could be done to prevent it? If not, how can I spot the critical points in the future?

Are we the cause of failure?

There are hundreds of self-flagellation books that motivate you not to fail. If we are to attain a critical mind and keep it, we need to realize that we are fallible. We make mistakes, ok? There's nothing you can do to rid yourself of all failures. By keeping in mind that you are allowed to fail, you may prioritize with more precision, plan important things more carefully, and learn from the events that lead to failure.

The worst enemy of a person (who fails or succeeds) is ignorance. Ignorance itself may be the cause of a failure, but ignoring the root cause of a failure prevents all learning. Ignorance leading to failure is the worst cause, but a person can still learn from ignoring as much as from anything else. A person may have a reason to ignore something due to scheduling, required effort/skill/resources, insufficient knowledge, or a memory lapse. This is a kind of benign ignorance, as the ignorance is used as a tool towards the goal. If a person chooses to neglect, belittle or condescend to other people, directly or indirectly, it is malign ignorance. A person failing to deliver on time because of procrastination or lack of interest in the subject falls into the same malign category.

People can learn from ignorance, more easily from benign ignorance than from malign. You can always choose to take note of the things you previously ignored and decrease the risk of failing. Even an attitude of ignoring can be changed, though it may require a huge effort to do so. People are encouraged to rid themselves of ignorance of important matters. They may prioritize some things lower than others and thus not ignore them completely. Asking "am I ignoring important things?" could result in fewer failures. If a failure occurs, what was ignored and why?

We may also fail because of choosing the wrong tools, approach or methods. For example, making a presentation about some scientific thingy and structuring it poorly may result in failure. Choosing the wrong viewpoints (or too few) can result in a biased view and thus in failure. We may even choose the right tool and fail because we lack the skills to use it. If I don't have sufficient skills to do this, who does? Can I utilize that person's skills in this task? Do I have to adjust my views to reach a better result or to suit the given task?

We may make an efficient contribution towards succeeding in the task, but it may still fail. We may have overlooked some important information that could drastically change the outcome of our task, or we could have made the wrong interpretation. The material or baseline on which we built our effort may itself be faulty. Do we have enough information to successfully complete the task? How do we determine that? Is the opinion I hold conclusive enough to lead to success?

"It's their fault!" ;)

Not all things leading to failure depend on the person doing the task. The task may require effort from multiple people or organizations, and all of them cannot be controlled. To succeed in a multi-person, multi-venue task we need to establish which person holds what information, is responsible for which deliverable, and is responsible for which sub-task. If the keys to our success lie in another person's hands, we need to make sure those keys open the doors for us.

It could be that the venue where you were to hold the presentation is accidentally overbooked and your effort fails. They may have suffered a force majeure, or something completely unrelated to the task at hand may still prevent you from succeeding in it. There are cases where one can find learning opportunities even in force majeure incidents. Can I prevent the task from failing due to unexpected events? Does a failure due to those events mean a retry will also fail? Can I take precautionary actions to prevent those things from affecting my task?

If the environment and nature (a stampeding elephant horde is considered a force of nature. "Jumanji!") don't cause a failure, you still have to take into account the audience of your task. This may include the people taking part in a presentation, the stakeholders of a software project, etc. The person doing the task has little power to affect the audience. The stakeholders may have beliefs, reasons, attitudes, etc. that may hinder the successful execution of a task. The audience may choose to ignore your topic and not show up (or have more important things to do at the time). What can I do to ensure that the task fulfills its goals regarding the audience/stakeholders? Do I need to adjust my approach so that the stakeholders don't have to struggle to relate to me or my topic? Do I need to do research on the receiving end of my task?

Asking important questions about the task and its surroundings can drastically decrease the possibility of failure. And even if you fail, you will have a solid base for learning from your observations and from the information received from the analysis of the failure. Accepting that you are fallible increases your ability to think critically. Anchoring ourselves to the opinion that we can't fail makes us vulnerable to bias and blind to important details and information.

Don’t think of failing as a bad thing...

As we know, we all fail at something. There's nothing wrong with that. Succeeding might offer fewer learning opportunities than failing at the same task. Still, we can apply the same techniques we use to analyze failures to analyzing success. Did I prioritize my time and tasks efficiently? Was there a pit I fell into that decreased my chances of succeeding? How can I remove that deficiency in my future doings?

Our attitude towards failure is one of the biggest obstacles to success, as we think failing makes us losers or bad people (at least some people think that). Every attempt is a learning opportunity, pass or fail. An inquisitive mind is the best tool for learning from failure as well as from success.

If you found this blog post inspirational, please comment that it was inspirational. If not, then comment that it wasn't. Whatever the response, tell me why. If we (failing or succeeding in our doings) do not get feedback on our effort, we cannot tell whether it was a success or not. By saying so, I don't encourage people to praise without justification or mock unnecessarily. The task in commenting is to succeed in commenting. Would you rather fail or succeed at that?

In the face of failure - part 1

Sometimes we fail to achieve what we have set our minds to achieve. That is not always a bad thing, as we might learn something about the process of failing. The failure itself may require us to change our perspective or approach to the task at hand. In this first part I cover what happens when a test fails. In the second part I'll look into people's failures in their efforts to do something. All along I try to offer insights and learning opportunities on the subjects.

When a test fails (or doesn’t fail)…

As testers we do tests (DUH!). A test may be a mission, a scenario, a flow, a check, basically any proportion of work effort done in order to achieve some testing goal. It may also be a check conducted by a machine of some kind (a test automation script, you name it). The point is that there is a test, and we may or may not have some assertions (assumptions, expectations) regarding the outcome of the test. There may be tests whose outcome we have no idea about ("What happens if I press this blank button in the GUI?"), but they may result in a future assertion regarding the same test object.

Tests may fail or they may pass. That is the binary nature of a test. They may, however, trigger a whole other result, for example "indefinable", "false negative", "false positive", "what the bloody hell is that?". What happens if a test results in a "test passed"? What does it mean? Can it still fail? Can it mean something more?
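
As a rough illustration of outcomes beyond the binary pass/fail, here is a minimal Python sketch of my own (not from any test framework) that labels a result once we know both what the test reported and what later analysis showed about the test object:

    from enum import Enum

    class Outcome(Enum):
        TRUE_PASS = "test passed, object behaves correctly"
        TRUE_FAIL = "test failed, object really is broken"
        FALSE_POSITIVE = "test passed, but the object is broken"
        FALSE_NEGATIVE = "test failed, but the object is fine"
        INDEFINABLE = "we cannot tell what happened"

    def classify(test_reported_pass, object_is_correct):
        """Label a test result after analysis; None means 'we don't know yet'."""
        if object_is_correct is None:
            return Outcome.INDEFINABLE
        if test_reported_pass and object_is_correct:
            return Outcome.TRUE_PASS
        if test_reported_pass and not object_is_correct:
            return Outcome.FALSE_POSITIVE
        if not test_reported_pass and object_is_correct:
            return Outcome.FALSE_NEGATIVE
        return Outcome.TRUE_FAIL

    print(classify(True, False))   # Outcome.FALSE_POSITIVE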

Test fails because the test object doesn’t pass the assertion

This is what we want to happen when a test fails: the test fails because there is something broken in the test object. By wanting this to be the case we might close our eyes to something important. The case simply states that the test object does not pass the assertions, for reasons unknown.

We think that the problem lies in the test object, but are we sure? We need to eliminate the "think" and move towards the "know". We must analyse the test itself to determine whether the result is in fact correct. The result may be a false negative, in which case we need to determine whether the test itself is faulty. We may be testing the wrong thing or asserting the wrong things.

After we have determined that the test object in fact is not built to match the test, we must ask: "have we built our test incorrectly?" This may yield information about the behavior of the test object just as well as about the behavior of the TEST! The behavior may be incorrect in both situations, but the newly found behavior may be the thing the customer/user/stakeholder (that is important enough) wanted or truly needed. The current behavior may also be a better solution than the intended or planned one. In any case, this information must be revealed with more testing and analysing.

The test result may also uncover a risk that was not taken into account or was ignored before. This raises important questions about the test object and the processes of development and testing. If there is a hole in our risk mitigation strategy, could there be a need to revise the process or processes? Could there be more areas that we have not yet covered?

Lastly, we may need to re-test some parts of the test object, as the unwanted behavior may need to be fixed (or the test, if we choose so). In any case the test requires revising, possibly some fixing and definitely more analysing. Do we need more tests to cover the area where the behavior was found? Do we need to revise MORE tests? Was the test extensive enough to be feasible after the fix?

False negatives and positives

In a false positive/negative case we have already done some analysing to reach the conclusion that the test gave faulty information. Both cases may reveal something important about the test object, but more than that they tell us that there is something wrong with the testing. Do we have enough information about the test object to be making statements based on the test results? Do we need to do more research on the test object to make our tests better? Have we missed something important in the process? These results always require analysis of why the tests gave false results in the first place.

A false positive is basically a situation where we believe the test object is behaving the way we expect it to. The result is corrupted by defects in the test itself, and thus the test gives a "pass" result. It could be that the test is concentrating on the wrong area or function, thus giving false information. Therein could lie a risk that we have not covered a critical part of the product with sufficient testing. The assertion in the test itself could be missing or not strict enough ("Check if there is a response of any kind." -> SOAP error message.) To get this situation corrected, usually both the test and the test object need to be analysed and possibly fixed.
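
To make the "not strict enough" assertion concrete, here is a small Python sketch. The response text and element names are invented for illustration; the point is simply that a lazy check passes even when the back end returns a SOAP fault, while a stricter check catches it:

    # Hypothetical SOAP response from the back end: it is a Fault (an error),
    # but it is still "a response of some kind".
    response = """<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <soap:Fault><faultstring>Internal error</faultstring></soap:Fault>
      </soap:Body>
    </soap:Envelope>"""

    # Weak assertion: "Check if there is a response of any kind." -> passes, a false positive.
    assert response is not None and len(response) > 0

    # Stricter assertions: these (correctly) fail on the fault above, because they
    # reject a Fault and require the element we actually expect to see.
    assert "Fault" not in response, "back end returned a SOAP fault"
    assert "<OrderConfirmation>" in response, "expected confirmation element missing"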

A false negative is a situation where the test falsely gives a negative result. Almost all the same considerations apply here as in the false positive case. We may need to back up the result with additional tests to rule out the possibility that the test object is behaving incorrectly. We may be lacking in skill, we may have ignored something, or there is simply a defect in the assertions.

Tests always reveal something, and it may be important

By doing testing we uncover information. The goal of testing is to give the decision-makers enough knowledge to make the right decisions. Even a poor test can reveal important information. If critical information is revealed at this stage (early or late), have we done something wrong in the previous stages? Is the poorly constructed test a waste? Do we need to construct better tests, or are we aiming for fast results at this stage?

It could also be that because of some flaw in some process we have stumbled upon the wrong area to test. It could be that there are communication problems, ignorance or some other reason why we are testing an area we are not supposed to test. It could be that the feature is still under development, the environment is not ready, etc. Does the testing we did have any value, or was it waste? Do we value learning? Did we learn anything about the test object?

Even if we have the most beautifully constructed tests and the test object is in good shape, we may encounter problems if we do not know how to interpret the results of the tests. Results sometimes contain false results (positive/negative) that must be uncovered and examined so that we have correct information about the test object at all times. The one interpreting the results (the tester during testing, a test automation specialist, etc.) should have enough competence and critical thinking to question the results. We may ignore all "pass" results and just focus on the "fail" ones, leading to false information.

We may think we know enough about a defect. We did find it, didn't we? By analyzing the failure itself and its root cause we can uncover more information about the test object and its surroundings. By questioning the test results just as we question the test object, we may reveal information about the testing methods, processes, tools, etc. and be able to improve our testing.

Read also part 2.

Wednesday 11 April 2012

Visualizing the testing effort

There has been some downtime with updating the blog, but that's about to change. The previous three months have been quite intense and I have learned quite a bit about software testing in a testing team. As some people already know, I work as a legacy system tester. For the past two months I have been learning the product and domain in a client team (Yay! Go team Raid!) with a bunch of expert testers. The team has three testers (four when I was there), two test automation specialists and a Scrum master / manual tester. I was there to reinforce the manual testing and to learn as much as I could along the way.

While I was there, my goal was to hone to perfection my skills in exploratory testing, especially in scenario and flow testing. The features I was working with were part of a complex basic functionality that had been growing since the first product was released. This addition to that functionality pool was a much-needed feature and affected basically every part of the functionality.

As the testing was being done, we had to forgo automation because we needed fast and tangible results: severe bugs found as fast as possible and large coverage of the ground functionality. This, however, resulted in a pile of bugs and lots of rework. At this point we had no stable ground, as bugs were being fixed one by one and there were builds coming two or three times a day. Result: instability.

The decision to do all testing manually resulted in a HUGE amount of re-testing. We could have decided to write a basic set of tests to be run automatically and thus decrease the amount of checking done manually. In hindsight this would have been a great thing, but it would have decreased the coverage achieved by manual testing.

Finding the golden road between manual testing and automated testing is probably the hardest possible thing to achieve. Managers tend to veer towards automated testing, as the tests are re-runnable and documented. Developers have the same tendency, as a script more easily presents the coverage of the testing done. Is it possible to combine the efficiency of manual testing with the coverage visualization of automated testing?

Visualization


When we do automated testing, its coverage is measured in different ways to satisfy the needs of visualization. We have the percentage of code covered by low level testing, and we have some number of tests run through different scenarios via the GUI. These high level tests are usually run through APIs or some engines that replicate the use of the GUI. Both measures have different ways of interpreting the results and some pitfalls when visualizing them.

Code coverage tells you how much of the code is covered (DUH!), but more importantly it tells you how much is NOT covered. We can make assumptions about the coverage, but as the coverage doesn't cover the user using the product but the code using the code, it can be deceptive. It is however a way to build confidence in the product and to lessen the risk of regression (and we all hate regression, right?).

[Image from tickbytick.co.uk]

It is possible to take this to the next level by measuring branch and module coverage. This eases the visualization, as you can point out the components within the flow that have not been tested. By putting the "untested" parts into context we can see whether the coverage in a component is critical or trivial. Instead of relying on numbers (what does 49.21% branch coverage tell you?), we can visualize the coverage and make decisions about the sufficiency of coverage more easily and more accurately.
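
As a minimal sketch of what "putting the untested parts into context" could look like, here is some illustrative Python. The component names, percentages and criticality labels are made up; the point is only to turn bare branch-coverage numbers into a per-component view that invites questions:

    # Invented per-component branch coverage figures (percent).
    branch_coverage = {
        "payment":   38.0,   # critical component
        "login":     91.5,   # critical component
        "reporting": 49.21,  # supporting function
        "help_menu": 12.0,   # supporting function
    }
    critical = {"payment", "login"}

    # Print a crude text "radiator", lowest coverage first.
    for component, pct in sorted(branch_coverage.items(), key=lambda kv: kv[1]):
        bar = "#" * int(pct / 5)
        risk = "CRITICAL" if component in critical else "trivial"
        print(f"{component:10s} {pct:6.2f}%  {bar:<20s} ({risk})")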

After the low level coverage we move to high level testing, system testing, acceptance testing, you name it (literally). When automating testing at a user experience level, we need to choose whether to make tests easy to create or to make them realistic. Easy tests mean using APIs and direct calls to the back end to replicate a situation in a GUI. This is a great way to give false confidence in the GUI. As the test engine cannot see the client, the GUI may be unusable but still pass all tests. The buttons, bars, boxes, whatever may be unreachable, unusable or plain ugly, and the machine doesn't see that as it "simulates" the use. This kind of testing is comparable to code coverage: we know what has been tested, but there is no knowing whether the critical parts have been tested (or that they work in an actual use case).
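
Here is a hedged Python sketch of the "easy" kind of test. The back-end call is a made-up stand-in (in a real setup it would be an HTTP or SOAP request); the point is only that a direct call can pass while the GUI button that should trigger it is broken or invisible:

    # Hypothetical stand-in for a direct back-end call; in the real product the
    # same call would be triggered by clicking a button in the GUI.
    def backend_get_order(order_id):
        return {"status": 200, "body": {"order": order_id, "state": "confirmed"}}

    resp = backend_get_order(123)

    # The back end answers correctly, so the test passes...
    assert resp["status"] == 200
    assert resp["body"]["state"] == "confirmed"
    # ...but nothing here proves the GUI button that should make this call is
    # reachable, usable or even visible to a real user.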

When doing high level testing and taking it closer to the user, we lose some of the speed that we rely on when automating testing. By artificially clicking the elements that the test machine (program or script) sees, we can be fairly sure that the user sees them too. This approach comes with a load of updating scripts and screenshots, but when done modularly it can solve lots of problems in client side test automation. When visualizing the latter GUI automation case, we can draw flows of which forms and dialogs have been browsed and used. This tells the story of the different scenarios that have been tested.
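
For contrast, a sketch of the "closer to the user" style using Selenium WebDriver. The URL and element IDs are invented, and the exact API depends on the tool you use; the relevant property is that the test only passes if the elements are actually present and clickable:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # Hypothetical application URL and element IDs.
    driver = webdriver.Firefox()
    try:
        driver.get("http://test-env.example.com/orders/new")

        # A broken or hidden button makes the test fail instead of silently "passing".
        driver.find_element(By.ID, "customer-name").send_keys("Test Customer")
        driver.find_element(By.ID, "save-order").click()

        confirmation = driver.find_element(By.ID, "confirmation-banner").text
        assert "Order saved" in confirmation
    finally:
        driver.quit()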

So we have possibilities to create a greater visualization of the coverage we have. By displaying these results on a radiator we can spread the information and, more importantly, raise questions about the results. "Why is that area uncovered by tests?" "Why is that part covered so vastly when it's only a supporting function?" From that point on we can make the test automation work for us instead of the other way round.

So how can we visualize manual testing by using what we already have in automated testing? Visualizing code coverage is quite futile, as it requires sophisticated debugging/sniffing tools that would have to be run at the same time. And it may not give any value to the testing and/or the visualization of manual testing. There may be some benefit in special cases like prototype testing, when the code is not yet built into a DLL. This may have security problems and would require a strong business case to be considered.

A more reasonable approach is to visualize the scenario coverage of manual testing. If we visualize the testing we do manually, it should be as easy as possible. If a larger team does the testing and should record the coverage, the way of recording should be standardized, or at least agreed upon. Visualization through mind maps is a great and easy way to do this, either with individual testers or with testing teams. As I wrote before on mind maps, taking the visualization a step further by adding the whole team's effort onto a single mind map can visualize the total coverage. Using radiators as a way to spread the information, the coverage can be real-time and dynamic.
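
One way the team's coverage could be pulled together is sketched below: each tester's mind map is represented as a plain dict of feature -> scenarios touched, and the maps are merged into a single team view. The structure and the feature/scenario names are my own invention, not the export format of any particular mind-map tool:

    from collections import defaultdict

    # Hypothetical per-tester coverage maps: feature -> set of scenarios touched.
    tester_a = {"login": {"happy path", "wrong password"}, "orders": {"create"}}
    tester_b = {"login": {"locked account"}, "orders": {"create", "cancel"}}

    def merge_maps(*maps):
        """Merge per-tester coverage maps into one team-level map."""
        team = defaultdict(set)
        for coverage in maps:
            for feature, scenarios in coverage.items():
                team[feature] |= scenarios
        return dict(team)

    team_map = merge_maps(tester_a, tester_b)
    for feature, scenarios in sorted(team_map.items()):
        print(f"{feature}: {', '.join(sorted(scenarios))}")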

If the aspect of using manual labor to create coverage charts is daunting, there is a possibility to record the testing done with a tool however and make it visualize the coverage. By recording the testing done manually, say with a screen recorder (Adobe Captivate, Bluberry FlashBack recorder) we record what we have tested and run it though a picture recognition program. It then lists the steps in a scenario (with possible side steps) and draws a graph of testing coverage. There is not yet known tools to do that but a simple Python application might do the job (I may have to return this eventually, but this should suffice at the moment).
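
I don't know of a ready-made tool either, but the core of such a Python application could be as simple as the sketch below. It assumes the picture-recognition step has already turned the recording into an ordered list of screen names (the names here are invented), so all that remains is to count which screens and transitions were visited:

    from collections import Counter

    # Hypothetical output of the picture-recognition step: the screens visited
    # during the recorded manual testing session, in order.
    recorded_steps = ["login", "dashboard", "new_order", "order_details",
                      "dashboard", "reports", "dashboard", "new_order"]

    screen_visits = Counter(recorded_steps)
    transitions = Counter(zip(recorded_steps, recorded_steps[1:]))

    print("Screens covered:")
    for screen, count in screen_visits.most_common():
        print(f"  {screen:15s} visited {count}x")

    print("Transitions covered (a crude flow graph):")
    for (src, dst), count in transitions.most_common():
        print(f"  {src} -> {dst}  ({count}x)")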

As most of the information about testing lies in the brain of the tester, that information needs to be pulled out and visualized. Using radiators and simplifications we can provide the information managers need to make decisions about the product. If we can make them veer away from number-driven towards information-driven decision making, the product's quality will (or might) increase.

Anyway, I have gotten some good ideas from the client team to improve my work as a tester, and hopefully I can start pushing these ideas to others so that visualization comes naturally with all aspects of testing. By being able to visualize quality in both manual and automated testing, both become important to decision making and less ambiguous.