Thursday, April 19, 2018

Where Are Your Bug Reports?

Yesterday, I put together a group of nine testers to do some mob testing. We had a great time and came out with a shared understanding: people's weak ideas were amplified, and we stood together in knowing how we felt about the functionality, the quality, and the value for users.

This morning, I had a project management ping: "There are no bug reports in Jira, did you test yesterday?".

Later in the day, another manager reminded me over email: "Please, report any identified bugs in Jira or give a list to N.N. It's very hard to react/improve on a 'loads of bugs' statement without concrete examples.". They went on to inform me that when the other testers ran a testing dojo, they did it on a Jira ticket and reported everything as subtasks, and hinted I might be worried about duplicates.

I can't help but smile at the ideas this brings out. I'm not worried about duplicates. I'm worried about noise in the numbers. I have three important messages, and writing 300 bug reports to make that point would be a lot of work and useless noise. This is not a service I provide.

Instead, I offered to work temporarily as a system tester for the system in question, with two conditions:
  1. I will not write a single bug report in Jira, but will get the issues fixed in collaboration with developers across the teams.
  2. Every single project member pair-tests with me for an hour, focusing on their changes in the system context.
The jury is still out on my conditions. I could help, but I can't help within a system that creates so much waste. I need a system that improves the impact of the feedback I have to give through deep exploratory testing, focused on value.

I'd rather be anything but a mindless drone logging issues in Jira. How about you?
 

Tuesday, April 17, 2018

Interviews Are a Two-Way Street

We were recruiting, and had a team interview with a candidate. I was otherwise occupied, and felt unsure about hiring someone I had had no contact with, especially since the things I wanted to know about them went unanswered by my team. We look for people strong in C++, but also in Python and scripting, since a lot of a DevOps-type team's work ends up being outside anyone's pure home ground. So I called them, and spent my ten minutes finding out what they want out of professional life and what makes them happy, and making sure they were aware how delighted my often emotion-hiding team would be if they chose to hang out with us and do some great dev work. They signed. And I'm just as excited as everyone was after having had the chance to meet the candidate.

So often we enter an interview with the idea of seeing if the person is a fit for us. But as soon as we've established that, we should remember that most of the time, the candidates have options. Everyone wants to feel needed and welcome. Letting the feeling show isn't a bad thing.

There's a saying that individuals recruit people like themselves, and teams recruit people that fill the gaps - diverse candidates. For that to be true, you need to have first learned to appreciate the team's work beyond your own immediate contribution.

All this recruiting stuff made me think back to one recruiting experience I had. I went through many rounds of checking. I had a manager's interview. Then a full day of psychological tests. Then a team interview. And finally, even the company CEO wanted to interview me. I was required to go through yet another step - I spent a day teaching testing to my potential future colleagues, in a mob. Every single step was about whether I was appropriate. Whether I would pass their criteria. They failed mine. They did not make me feel welcome. And the testing we did together showed how much use I would have been (nice bugs in their application, and lots of discipline in exploring), but also what my work would be: teaching and coaching, helping people catch up.

Your candidate chooses you just as much as you choose the candidate. Never forget.

Saturday, April 14, 2018

Second chances

"This does not work", they said. "We used to find these things before making a release", they continued. I see the frustration and understand. I feel the same. We lost an exploratory tester who spent 13 years with the application, and are reaping the results as they've been gone for a month. Our ways of working are crumbling in ways none of us anticipated.

We lost the tester because for years they got to hear they were doing a bad job. How they were not needed. How they would only become valuable if they learned automation. And they were not interested. Not interested when their managers said it. Not interested when most conferences were full of it. Not interested when articles around the globe spouted that the work they were doing was meaningless.

They found the job meaningful. The team members found the results meaningful. And it was not as if the manual exploratory testing they did had stayed the same over the years. As others in the team contributed more automation, their testing became deeper, more insightful, targeted at things where unexpected change was the only constant.

I reviewed their work long before they decided on leaving. I promoted the excellence of their results, and made their quiet way of delivering information visible. And when they decided it was time to let go of the continuous belittling, I was just as frustrated as anyone in the teams that the lack of appreciation would lead to this.

Just as they were about to go, we found them a new place. And I have a new tester for my own team. The very same tester who wasn't supported elsewhere in the organization is now my closest colleague. I got a second chance at helping non-testers and non-programmers see their value, for them to feel respected like I do.

And for that I feel grateful. I already knew my manager is a great match for me in my forward-thriving beliefs about building awesome software in collaboration with others, valuing everyone's contributions and expecting daily growth - in diverging directions. My good place - my own team - is again even better with a dedicated manual exploratory tester with decades of deep testing experience.

Wednesday, April 11, 2018

Task assigning does not teach self-organization

I was frustrated as I was ticking away mental checkboxes on the testing that needed to be done. It was one of the last tasks of a major effort so many of us had contributed to over the last six months. The testing I was doing wasn't mine to do; I had agreed with our intern that this would be work they'd do. Yet I found myself doing it, after three days of pinging, reminding, and explaining what needed doing. The work wasn't just something I had expected from them: as I was doing it, I learned they had also skipped my previous instructions of the utmost importance.

As I completed the task, I shared the status to our coordination channel. Next up was a discussion I wasn't sure how to have, about missing the mark of my expectations big time.

My feelings are a thing I can hardly keep from showing, and I approached the discussion letting my frustration be visible, using my words to explain that I wanted to understand, and that failing wasn't something I'd punish for, but something we just had to talk about.

We learned three things together.

For not doing an important thing I had reminded them of multiple times, there was a clear lack of understanding of why I considered it so relevant. Having discussed the big picture of risks, I'm sure that particular thing gets done the next time. The intern expressing frustration at a boring and repetitive task also led us to identify the root cause, and they volunteered to drive through an organizational fix - while not dismissing my instructions on remedies while the fix was not in place. I was delighted at the show of initiative.

For not doing the testing I ended up doing, I learned they were overwhelmed with the number of requests from all around, and the problem was a result of misprioritization. They were spending their time on a pesky test automation script, while the real priority would have been to complete the testing I had just done.

We also reviewed the testing I had done, realizing they would not have known what to do. The task was one with many layers and dependencies on what happened while testing, requiring an end-to-end understanding of the business process and a perspective on the lifecycle. All this was obvious to me, but they had worked on simpler tasks before. We had now identified a type of task that stretched too far.

We ended by celebrating how awesome this story of our mutual learning is, and agreed to handle the intake of complex work differently next time around.

As I mentioned the experience to a colleague, I was told their preferred way of dealing with this is Jira tasks with clear instructions. That's what they had been doing to me, learning that I never obeyed. Others did. The discussion made a difference in belief systems visible: I was building each of my colleagues toward their best future self as a contributor. My colleague was focusing on how to get the work done within existing limitations.

Their style gave results where everyone did a little less than what was asked. My style gave results where we occasionally failed and reflected, but where I could always expect a bit more the next time around.

Assigning tasks wasn't growing people. Quite the contrary: it created an environment where people consistently underdeliver, never reaching their potential.

It takes better past experiences or exceptional courage to step into self-organization when you feel the organization around you just wants you to focus on assigned tasks. I'm lucky to have past experiences that allow me to never obey blindly.



Promoting the Air

Last week, I tweeted a remark:

The irony was that after 1.5 years at my current company, I did a teaching-women-Java thing, and people were excited about it to the extent that they wrote about it (interviewing me) for the company blog. Meanwhile, I do 30 talks a year on testing/agile, most of them international, and none of that has crossed the news bar.


I talked further with many colleagues in testing, and came to the conclusion that we are in an interesting situation: our profession is highly valued within teams, a lot of managers are "helping us grow" by pushing more automation, even by force, and anyone outside the teams has an increasingly skewed perception of what we do, and why.

We are like the air we breathe. Invisible. As soon as it is lost, we notice. And we lost some of our testing very recently, so now we are noticing.

This all leads me to think of being the change I want to see. So instead of dwelling on my frustration at the irony, I took the observation and promoted the other things I do. The amount of empathy and understanding in response to sharing my frustration has been overwhelming. And the constructive actions - wanting to hear more, wanting to share more equally, wanting to learn about this stuff - have been delightful.

We who see what testing is, and appreciate it, need to talk about it more. We need to help others see and remember the invisible. Every single tester plays their part in the promoting. Some of us get our voices heard just a little further, but our common voice is stronger than any individual's.

Thursday, March 29, 2018

A Developer's Idea of Exploration

"We did a neat thing exploring today", a developer exclaims. I look, interested, wondering what is the source of excitement this time. It's not the first time they've been excited about doing things clearly very close to my heart. But a lot of times we find our ideas of exploring take very different, yet fascinating turns.

"We did this combinations test", they explain. "We took bunch of values when we did not feel like thinking too much, and passed them all in, and created combinations", they continue. "We learned about behaviors we did not think of", they finish. And we agree that is wonderful. Learning while testing, appreciating the new information, absolutely something we share.

Little remarks like this have been coming my way a lot recently, and while I can share the excitement of learning something we did not know, I also find that a developer's way of getting "as close to exploratory testing as I usually do" isn't quite where my exploration is.

There was a session about property-based testing, generating test cases to run through the same partial oracles. Just doing it at a wider scale does reveal things of an unexpected nature, especially when you have a way of identifying some relevant aspect of correctness with a property.
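To make the idea concrete, here is a minimal sketch of a property-based test in Python with the hypothesis library - the function under test and the property are hypothetical, not from that session:

```python
# A minimal sketch of property-based testing with the hypothesis library.
# normalize_prices is a hypothetical function under test.
from hypothesis import given, strategies as st

def normalize_prices(prices):
    # Clamp negative prices to zero.
    return [max(0, p) for p in prices]

@given(st.lists(st.integers()))
def test_prices_never_negative(prices):
    # A partial oracle: we can't state the exact expected output for
    # every generated input, but we know one property that must hold.
    assert all(p >= 0 for p in normalize_prices(prices))
```

The generator runs far more inputs through the oracle than hand-picked cases would, which is exactly the "doing it wider" effect.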

There was an exercise of creating combinations for a 3-variable method, finding out that the application does not work as specified on its boundaries. Just having more cases easily available and visually verifiable revealed information of an unexpected nature.
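As a rough illustration of that kind of combination run - the method and the boundary values here are hypothetical, not from the exercise - generating all the cases programmatically might look like this:

```python
# A minimal sketch of running all combinations of boundary values
# through a 3-variable method; classify_triangle is hypothetical.
from itertools import product

def classify_triangle(a, b, c):
    if a + b <= c or a + c <= b or b + c <= a:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    return "other"

# Values picked around suspected boundaries rather than at random.
boundary_values = [0, 1, 2, 3, 1000]

for a, b, c in product(boundary_values, repeat=3):
    # Print every case so the results are easily available and
    # visually verifiable.
    print(a, b, c, "->", classify_triangle(a, b, c))
```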

All three examples I've had recently are ways of programmatically doing more. While they uncover relevant information, there's still more to exploratory testing.

This makes me think back to the exploration of someone else's web app that we did in a mob yesterday evening. Just some of the things I remember us learning:
  • For a system aiming to enhance datasets to an acceptable state, it makes no sense that we would first need to fill in some info when a condition preventing the dataset from ever being acceptable already exists without any additional info (a problem in the order of tasks in the process).
  • For uploading files, we'd like to be able to use a file explorer over just drag-and-drop. It matters how most others do things.
  • When a user interface view includes many things to show in tables, having relevant tooltips for some but forgetting placeholders for other, less obvious ones creates confusion.
  • When you must choose one of two but not both, automatically emptying the other field isn't exactly the best way to present that.
  • When rejecting an input, logging it might be useful.
  • When failing so that there's a big visible error in the log, it would be very nice if that error was also made visible to the user.
  • When you have a recurring element for guiding users, filling it in three different ways makes little sense.
  • When you can get to a functionality with one click, as there is just one option, hiding it in a menu requiring an extra click won't be helpful.
None of these would have been found by the "let me just play with my unit tests" approach to exploring. Then again, none of what we did would have found things that approach could find.

It's not this or that, but this and that. And it's lovely when developers show ideas of applying the same approach with the tools at their hands. I hope to get to experience a lot more of it going forward.

Tuesday, March 27, 2018

The Test Automation Trap

There's a pattern that we keep seeing in "agile" projects again and again.

We work together as a team to implement a feature. We automate tests for that feature as part of its definition of done. As an end result, we have some more tests than before, on all layers of tests. We get the test run blue, and we make a release.

We work together to implement a feature. The previously added tests light up our test run in all the colors of a Christmas tree, and in addition to adding the new tests for new functionality, we clean up the previous tests.

The longer we continue, the worse the Christmas tree lights get. The more time we spend fixing the past tests, the less time we have for the new tests. And we take shortcuts in fixing the past tests, just removing the ones we deemed so necessary before.

And no one talks about it. It is a ritual that we must go through. Like a rite of passage.

Over time, no one cares about how well the automation tests things. All we care about is that it passes, so we get through the gate.

I've seen so many people trapped in the cycle of being too busy to think about *why the tests exist* and *what value they are really giving us*. These people have no time for manual testing, because - very honestly - automation eats up all their time. And they might not even see that the approach is not really working out for them.

The test automation trap creates testing zombies. Ones that make the moves, but have stopped learning about what they're doing.

The best way I know out of the trap is to start caring about testing again. Put testing, not the scripts, at the center. It's time to talk about risks and strategies again. It's time to build up a test automation asset that supports whatever strategies you're going for. Stop moving through the motions, and think. Learn. Look at where your time goes. Experiment your way out of the trap of magical moves that feel like a better idea than they are.