Wednesday, July 29, 2015

Games are practice for life - what I learned about Test Strategy from PvZ

This spring, I was invited to DEWT on Test Strategy. For my experience report, I took a look at how I test at work, searching for "the ideas that guide test design".

The analysis was troubling, as I felt there was no strategy to speak of. There are general ideas of what is acceptable for our product and what we aim for in our ways of working as a team. But with agile and continuous delivery, features flow through at a steady pace, each requiring a different kind of thinking, yet starting from seemingly similar ideas of what information is useful first, what keeps me alert while testing, and which features are connected.

Zeger van Hese listened to my experience report and created a nice summary of it.

I felt the testing I was doing had transformed into a selection of tactics, and that the most relevant part of selecting tactics was choosing enough of them over choosing the right order. The idea was left to simmer.

One day, my son was playing Plants vs. Zombies, and a friend of mine was teaching him how to play. They practiced the same scene over and over again, trying out different strategies. They would limit the plants they could use to three and try out different orders of placing them. They would discuss how the strategy of placing the sunflower plants (which generate sun, used as currency) early on gives you compound interest and makes the game easier to play later. Sunflowers aren't so cool, though, so my son had a tendency to go for the cool stuff first, at the sight of the first zombie.
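The compound-interest effect is easy to see with a toy calculation. Here's a minimal Python sketch of the idea, with made-up numbers rather than the actual game's values:

```python
# Toy sketch of the "sunflowers first" strategy. All numbers are made up
# for illustration; they are not Plants vs. Zombies' actual values.
def total_sun(turns, buy_turns, income_per_sunflower=25, cost=50):
    """Sun accumulated when buying one sunflower on each of the given turns."""
    sun, sunflowers = 100, 0
    for turn in range(turns):
        sun += sunflowers * income_per_sunflower  # every sunflower keeps paying
        if turn in buy_turns and sun >= cost:
            sun -= cost
            sunflowers += 1
    return sun

print(total_sun(20, {0, 1, 2}))    # invest early: income compounds for ~20 turns
print(total_sun(20, {15, 16, 17})) # invest late: barely any payback
```

The early investor ends up with several times the currency of the late one, which is exactly the compounding my son was being taught to exploit.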

Listening in on the discussion, I realized there are different types of moves in testing. Some bring you compound interest, just like the sunflowers in the game when used early on. The tactics that build your relationship with developers pay back during the tedious finalizing of a feature. There are tactics that enable us to release in special cases, before it's all done-done, because of a customer schedule promise. We need to complete the fight with all the different tactics, but the order in which we take things on requires strategic thinking.

There are more differences between the specific features I'm testing than I had given credit for. Each scene to play is different, even in the same product.

You may learn surprising things from games. As a wise friend said: games are practice for life. We should pay attention to our lessons.

Tuesday, July 28, 2015

Target-rich environments in teaching testing

Some years ago, I looked at all the testing courses I was running, and I felt embarrassed. I had to come to the conclusion that on a course, talking about testing is completely different from doing testing. All my courses by that time had exercises, but none of them was exercise-centric. They were centered around folklore, things I had learned while working on testing that I felt would be useful for others.

I love hearing people's stories about testing. I love telling my stories about testing. And according to the smile sheet grades, I did well with my stories. But the skills of testing don't really get built without actual experiences of trying to test, succeeding and failing, getting feedback, and seeing hands-on examples of how other people test.

I started experimenting with what I called the "Exploratory Testing Work Course" - a course where we'd take an application and ideas of exploratory testing, and test in order to learn more about how to test.

For a long time, I've used FreeMind as the target application. It's great in many ways. It's open source (although it is very hard to get it to compile from the sources...). It has relevant download numbers and presumably a user base. And it's full of bugs. It's what I would call a target-rich environment for testing.

A target-rich environment can be great in teaching testing. When you find problems all over the application, it gives you the feeling that you're on the right track: you can be good at this. I've had very insecure testers on in-house, acceptance-testing-oriented courses who just needed the encouragement to shine in their tester role. And for this, a target-rich environment is great. Everyone finds problems. Everyone finds problems others won't find. And exploring makes finding the bugs easy and fast.

But this is a slippery slope. After logging a hundred bugs on a feature, I ask if we could advise releasing it based on what we know about it. And the answer is almost unanimous: no. But we're actually talking about software that has been released, software that is in use and software that does not have a forum full of angry users complaining about the bugs we're experiencing.

So recently, I've occasionally been changing the target application to a company-specific one. The courses where we explore software I have never seen before, learning about it to find problems, have been great at turning the focus to learning testing, as opposed to learning a new subject matter with an unrelated app I just nominated. It's been interesting to see how little people think about their own software, how few questions they ask about its reason to exist, and how hard it is for them to see connections. And I've been trying to find a way to replicate some of those lessons in my exploratory testing courses by choosing another test target.

The most recent one I'm using is Dark Function Editor. It creates 2D sprite sheets and animated gifs. Around it, I've found it possible to teach seeing things while using the application. But for finding bugs, it sucks. It has bugs, and I've found many. But many people will have a hard time seeing the bugs there, as you need to work harder to expose them.

Testing should not be that easy. But sometimes it is. So think about it: do you work in a target-rich environment? When testing is easy, you need to work harder to stretch to different ideas and observations. It's like walking on a field full of holes: it takes a lot of energy to walk steadily around them. Which one do you actually prefer as a tester? Would you feel disappointed if you tested for weeks without finding a single problem?

I believe agile is taking us slowly towards less target-rich environments. In those environments, more skill is required in the exploration you do. 

Saturday, July 25, 2015

Reading code, reading natural language

I enjoy reading and writing. I write this blog. I write articles. I write lighter private texts. And I read what others write as much as I can.

Growing up, I hated writing and loved reading. I remember leaving a note in every written piece in school asking the teacher not to read it to the class, as sharing my thoughts was an unpleasant idea. I had teachers who would tell me they'd lower my grade for that. But I also learned to write well enough, yet always just below the threshold of being showcased to other students. Since then, things have changed.

Many of us who have ended up in software development must have some sort of interest in and ability for reading and writing. There rarely was a teacher explaining and guiding us through every concept. We needed to learn to get things from books, papers, and Google, and to recognise what is worth our time and what is not.

I have a special love for helping developers create clean code. I've come to it from caring about how the individuals I work with feel about their work, and from observing that developer happiness, as well as random regressions in the software, seems connected to the quality of the code as developers perceive it. If developers are afraid or hesitant to touch something, then when they do, it breaks more. If developers dislike a piece of code, the overall end-user experience in that area won't improve. And with a focus on clean code, I'm successfully working with a team that, without relevant test automation, still safely releases to production with continuous delivery.

While there are many technical topics that I listen to but feel disconnected from, readability and maintainability of code are things I love learning more about.

Less than a year ago, I learned to compare the amount of code we have to a pile of Harry Potter books. It made the scope of our work so much more tangible - this is what we read through, piece by piece, over and over again. I could easily see in my head how different the style of reading the actual books is from reading the code, and I got interested in better use of tools to help with the different reading style that code requires.
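To give a feel for the comparison, here's a back-of-envelope sketch; every number in it is an illustrative assumption of mine, not a figure from our codebase:

```python
# Back-of-envelope: how many Harry Potter series' worth of reading is a codebase?
# All numbers below are illustrative assumptions.
LINES_OF_CODE = 300_000       # hypothetical codebase size
WORDS_PER_LINE = 6            # rough average of words/tokens per code line
HP_SERIES_WORDS = 1_084_000   # commonly cited word count for all seven books

code_words = LINES_OF_CODE * WORDS_PER_LINE
print(f"Roughly {code_words / HP_SERIES_WORDS:.1f} Harry Potter series of reading")
```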

Yesterday, for the purposes of organising a testing conference around developer and tester perspectives on testing, I got to listen in on a discussion between two brilliant developers on a topic that really hit my interests: code readability.

The discussion on readability centered on the fact that it's very hard to say what makes a good block of code. There are no absolutes there. But there's a parallel to writing natural language. What makes a good paragraph? Through examples over the years, seeing good and bad, we've learned an idea of it. And the very same idea guides us when looking at code.


Over the years, we've all learned some ideas of what is good in written natural language like English. Both actively and passively, we recognise what makes something easy and pleasant to read. We recognise patterns of good and bad. Could it be that there are more parallels between our skills in recognising goodness in natural language and in code?
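To make the parallel concrete, here's a toy illustration of my own (not an example from that discussion): the same logic written first as a hard-to-scan block, then as something closer to a good paragraph.

```python
# The same logic twice. First, a version that technically works
# but reads like a run-on sentence:
def f(l, t):
    r = []
    for x in l:
        if x[1] > t and x[0] != "": r.append(x[0].upper())
    return r

# Then a version shaped like a good paragraph: named, scoped, scannable.
def names_scoring_above(items, threshold):
    """Return upper-cased names of (name, score) pairs scoring above threshold."""
    return [
        name.upper()
        for name, score in items
        if score > threshold and name != ""
    ]
```

Both pass the same tests; only one of them is pleasant to read a year later.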

Thanks for the inspiration to Kevlin Henney.


Thursday, July 23, 2015

The Great Developer Experience

I was following a discussion about tester-developer collaboration when a realisation hit me. Most of us testers have interesting relationships with developers.

Back when I started testing, the relationships were mostly adversarial. From those times, we've created a lot of folklore we still keep repeating on testing courses, sometimes giving a significant amount of time to experiences unknown to the newcomers. I've had people look at me weirdly when I talk about pre-agile testing experiences they've never had.

Adversarial relationships still exist. There are still experiences where developers tell the tester that they are "more valuable than you" - quoting one of my developer colleagues from 2.5 years ago. There are managers who will pay testers less than developers, as Marlena Compton just mentioned in her blog post. And there are podcast authors who still like to quote Joel Spolsky's ancient view of testers as cheaper labour. Testers share advice on how to speak to developers about bugs without hurting the developers' feelings. And some developers are more easily hurt than others.

I find that most of my relationships with developers nowadays are no longer adversarial. It could be that I work with particularly nice people, who value my contribution. It could be that I've learned to contribute more valuable information over the years.

The thing I realised from the discussion I followed is that many testers with positive relationships with developers still have just that one special developer they think back to as someone they wish they could have worked with more. Many testers don't have even one. There's the tester-developer relationship where you co-exist for a common goal. And then there's the tester-developer relationship where you collaborate, building on one another in a way that makes it hard to say who contributed what, but you both know that what you create together is better.

I was thinking back to the one special developer that I wish I could work with. He would take an idea of what I could test and take it further, improving the code and adding test automation, all before I needed to execute a test. We were not only friends at work; we were friends who finished each other's sentences. He was interested in what I saw in testing. I was interested in what he was working on in development. It did not feel forced. It felt genuine.

There have been a lot of great testers I love working with, to the same extent as this one special developer. But it so often feels that developers are a different breed, with different interests. My developers look at me all funny when I find a really cool bug and do little jumps up and down over the ingeniousness of something no one could have created intentionally. And I look at them all funny when they're in love with yet another API that they find so cool. We understand enough to co-exist and like one another, but the energy is lower. Interests are elsewhere; there's limited common ground.

I find that testers as a group often pay attention to trying to understand the developer world and mindset. And I find that the one special developer on my mind was one who paid a lot of attention to the tester world.

With a great experience in mind, it's time to go from one special developer to more. I'm hoping that an autumn of mob programming with my team will create more of this experience for me: mutual understanding.

Tuesday, July 14, 2015

No progress over faking progress

One of these days, I was bogged down with work - there was just too much testing to do, and as usual, I asked people from the team to pitch in. The developers do their best in testing, but we share the feeling that another pair of eyes might sometimes - most of the time - do good. So as we were discussing the need for more eyes and brains on testing activities, the team concluded, in some fuzzy way, that the user interface specialist could help out.

We agreed on who would take on what, and I worked on whatever I ended up with. The other worked on his area.

Three days later, I was ready for a break from the themes I was contributing to, and I checked back to see how testing on the other area was progressing. I learned "it was done". I asked a bit more: how much time had been used? I learned that about two days had gone into it.

With a nagging feeling, I opened the application and went to the feature I had not been testing. I just wanted to see it, so that I would know what it was like for later testing. I formulated a basic use scenario in my mind and tried it out. And there were problems.

I felt puzzled; there must have been something someone was missing. I looked into the feature a little more and learned more ways in which it just did not work. I walked around the block to calm down before I went to ask the temporary tester to work out an explanation with me.

The explanation turned out to be that testing is boring. It's boring enough that when faced with a task of testing, you report it done without doing it. You used time on "testing", so "testing" is now done. No results implied. And with luck and a less buggy area to test, no one would even know.

This left me thinking of the various times something similar has happened to me. Someone promises to contribute to a testing task, but actually puts the hours into the activity of coffee drinking.

We had this a lot at a contractor company. The company culture drove people towards "invoiceable hours", and testing was the place where people could "work for a few hours in between other tasks". Really, not.

We had this at a product company I used to work at, when people wanted to use time on their "pet projects" - to the extent that I remember an expensive postponement of a release due to one person blatantly lying about their test progress to get time for implementing automation that was not a commonly agreed priority at the time.

Why I started thinking of this is that just before my vacation, I was procrastinating like crazy. I could use up hours and not make progress. Getting a grip on myself was a key activity for me. I would still report no progress and feel like a failure. Thinking about this helps me remember that not progressing isn't the real failure, when faking progress is the alternative.


Thursday, July 2, 2015

Our working test automation failed me

I'm very happy with the progress my team is making on test automation. We work on three fronts: unit tests (with traditional asserts as well as approval tests), database checks that alert us to inconsistencies in the data in use, and Selenium WebDriver tests on a single browser. Slowly but steadily, we're moving from continuous delivery without automation to one where automation plays some role. A growing role.
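For readers unfamiliar with the distinction, here's a minimal sketch of the two unit-test styles; the function, the data, and the use of the Python ApprovalTests library are my own illustration, not our actual tests:

```python
# A minimal sketch contrasting a traditional assert with an approval test.
# Hypothetical function and data; assumes the `approvaltests` Python package.
from approvaltests import verify

def format_receipt(items):
    return "\n".join(f"{name}: {price:.2f}" for name, price in items)

def test_receipt_with_assert():
    # Traditional assert: the expected value is written into the test.
    assert format_receipt([("tea", 2.5)]) == "tea: 2.50"

def test_receipt_with_approval():
    # Approval test: the output is compared against a previously approved
    # file, and a diff of the whole output shows up on failure.
    verify(format_receipt([("tea", 2.5), ("scone", 3.0)]))
```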

A week ago, our Selenium WebDriver test automation failed us in a particularly annoying way.

We were about to release. I explored the changes for most of the day, and all looked fine. I went home, attended to the usual daily stuff with the kids, and finalised my tests in the evening. As testing was done, I merged the test version to our staging version, ready to be pushed into production the next morning. Somewhere between these two points, a UI developer pushed in a change that broke a relevant feature. I did not notice, as I wasn't looking carefully at commits in the evening. He did not notice, as he only tests the things he changes when he thinks his change can break something. This time he was confident it wouldn't.

The next morning around 7 am, I saw a comment in Jira mentioning that one of the Selenium WebDriver tests had failed the previous night. At 7.30 am, the version was pushed into production. At 8.30 am, I read the Jira comment, learning we had released with a bug that the Selenium WebDriver tests found. The person doing the release never got the message.

The bug was not a big deal, but it pointed out to me things I've accepted even though I should not have:

  1. Our Selenium WebDriver tests are brittle and controlled by an individual developer. No one else can really tell a false positive from a real problem, but he can. So we were dependent on him mentioning problems to the right people. This time he did not. 
  2. Our Selenium WebDriver tests take two hours to run, and we accept the delayed feedback. When the developer broke the build, he couldn't run these tests to learn about it sooner. And when he got the feedback the morning after, he was disconnected from the fact that the thing he had broken late the previous night had already reached production. 
  3. We're making releases without running existing test automation for reasons that are in our power to change. We just haven't, yet. 
It's great that we have one developer who is proficient with Selenium WebDriver to the extent that he does his own features' tests first, adding tests of the changed behavior as he gets ready to implement the feature. It's great he has built page objects to make automating easier.
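For readers who haven't met the pattern, here's a minimal page-object sketch; the page, URL, and locators are hypothetical, not from our product:

```python
# Minimal page-object sketch with Selenium WebDriver: the page object wraps
# locators and interactions so tests read as intent, not as element lookups.
# Everything here is hypothetical, for illustration only.
from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginPage:
    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get("https://example.com/login")  # hypothetical URL
        return self

    def log_in(self, username, password):
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "login-button").click()
        return self

# A test then talks in the page's language:
# LoginPage(webdriver.Firefox()).open().log_in("mary", "secret")
```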

It's not great that we accept the tests' brittleness, keeping them a one-person tool. It's not great that we accept the delay. And it's not great that we have a broken chain of communication because of the accepted brittleness.

The most frustrating kind of bug to reach production is one where you had all the means to find it in time. And yet it went through. But these things work as wake-up calls about what your priorities should be: remove the brittleness, and make the tools we have useful for everyone, for the purpose they exist to serve.