Saturday, April 6, 2013

From concerns on quality to actions you can take

Had yet another thought-provoking day at work. I've been working an extra week on a release that is due right about now, finding more issues to be fixed before the actual release. Still, with an area checklist to structure my notes and to know what I've tested, I know there are plenty of things that, in my professional view, require attention and do not work - based on five weeks of going through the list one by one, slower than expected given the amount of problems.

I had raised this concern about lack of testing / lack of quality before, but to make sure we had the right information available, we had a quick meeting with the head of product development, two representatives from product management and the head developer.

I described my concerns much as before, and got the same reply as before: it's never perfect, there will always be problems the customers will find, and a reminder that the current production version is bad (in known ways), needs to be replaced asap, and that we really can't wait.

I explained, with examples, the testing results that I and one of the product management representatives had found just this week to show it still isn't ok, only to hear that those problems are already fixed and it should now work great. Same thing as five weeks ago.

I explained, again with examples, the top three testing tasks I haven't done that should be done, and that I will not be available to do those for this project. We agreed the product management representatives would bring in another person from their ranks to help with testing, to get a "third opinion" in addition to mine and theirs.

My manager asked the developer how he felt about the quality, and for the first time in this difficult discussion, I felt he showed some support. He said he had continuously received stuff to fix that he had been unaware of, but was progressing nicely with it. As usual, though, his view was that in a week he will have dealt with what he has now, and then it's ready.

I explained my main concern about the premature release: with one developer working on this area, and our team's inability to move more people onto the area without causing more harm than help, the flow of fixes needed in production must stay manageable so that we don't end up with fix delivery times of weeks instead of the hours our customers are used to. That seemed to raise some new thinking, but even with that, the same conclusion remained.

We agreed that a little bit of testing would still happen next week, the known issues would be fixed and then there's the release. A tiny win in the discussion to get a third opinion, but otherwise I keep hearing the same message:
  • the hundred or so problems you have found and we have fixed as "must fix before release" must be all there is to find, even though every day of testing shows otherwise
  • there can't be that many problems remaining that customers would complain about, as they rarely complain about anything, and we're not in shape on all the other things either

Half an hour later, I talked about deployment processes that some organizations use, where only some customers get the new stuff first and others later. I mentioned we had hoped to be able to do that with the feature we have at hand, to control the flow of feedback back to the developer. And all of a sudden, the bigger win of the discussion was found - a way of delivering to one customer first, following that through, and then moving on to the others.
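As a minimal sketch of that "one customer first" idea, the gating can be as simple as a rollout allow-list checked per customer. This assumes deployments can be routed per customer; the customer ids and helper name below are made up for illustration.

```python
# Hypothetical sketch: gate the new feature area behind a per-customer allow-list.
PILOT_CUSTOMERS = {"customer-001"}  # the single customer who gets the new version first


def use_new_version(customer_id: str) -> bool:
    """Route the pilot customer to the new version, everyone else stays on the old one."""
    return customer_id in PILOT_CUSTOMERS


# Once the pilot customer's feedback has been followed through, widen the set:
# PILOT_CUSTOMERS.update({"customer-002", "customer-003"})
```

The point is less the mechanism than the effect: feedback from production arrives from one customer at a time, at a pace one developer can keep up with.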

And for the first time in five weeks, I felt I did not need to be the only one worried. Concern without actions we can take that will actually help doesn't move things forward. But for coming up with the right actions, more viewpoints tend to be better.

Thursday, April 4, 2013

One thing at a time or mixing it all together

With the stuff I'm currently testing, I've been heavily pressed on schedule. I'm pretty sure I had half the time I would actually need, and that's without aiming for perfection.

With the schedule pressure on me, I realized I was doing too many things at once, not completing any of my ideas of what to test. To a degree, that wasn't a problem, as I was still finding relevant problems (ones others think need to be fixed before we release) with that strategy too, but I wasn't very happy with myself trying to explain why none of the things was actually done.

Yesterday I decided I'd dedicate today to just one idea of how to test. I set up the old version and the new version with exactly the same production data, with the migration magic done in between, and ran the two versions side by side. I decided that for today I would focus only on problems where data is shown differently, with an easy oracle - the version that was not remade in the last six months for maintainability - and on problems where functionality is present in the old version but missing from the new.
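A minimal sketch of that side-by-side comparison, assuming both versions expose the same read-only HTTP endpoints over the migrated production data; the hostnames and endpoint paths below are made up for illustration.

```python
# Hypothetical sketch: use the old version as the oracle and diff responses
# from the two deployments running on the same migrated production data.
import json
import urllib.request

OLD_BASE = "http://old-version.internal"   # made-up address of the old deployment
NEW_BASE = "http://new-version.internal"   # made-up address of the new deployment

ENDPOINTS = [
    "/api/customers/42",
    "/api/orders?customer=42",
]


def fetch(base: str, path: str):
    """Fetch one endpoint and parse the JSON body."""
    with urllib.request.urlopen(base + path) as response:
        return json.load(response)


for path in ENDPOINTS:
    old_data = fetch(OLD_BASE, path)
    new_data = fetch(NEW_BASE, path)
    if old_data != new_data:
        # A difference is a candidate problem to investigate, not automatically a bug:
        # the old version is the oracle, but it may itself be wrong in known ways.
        print(f"DIFFERS: {path}")
        print("  old:", json.dumps(old_data, sort_keys=True)[:200])
        print("  new:", json.dumps(new_data, sort_keys=True)[:200])
```

The same comparison works manually, of course - two browser windows and the same data - which is closer to what a day of exploratory side-by-side testing actually looks like.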

I had a very productive day, and I feel more satisfied with myself for setting out to do one thing and knowing where I am with it - halfway through, as the 20 newly identified problems did slow me down from my optimistic schedule - and for having a nice list of variables I could still attack with the same comparison approach.

How did I test before? By mixing all the aspects: I would use different users, data and browsers for pretty much any functionality I'd work with. That works too.

It's easy to say now that I should have made this change earlier. Nothing was exactly preventing me from doing it - just my concept of the best use of time, which has now been fine-tuned by seeing which bugs mattered enough to get fixed and finding a way of testing that targets those.