We just got a release of several pages where specific field validation needed to take place on these pages. The dev team decided to release just the pages in the first sprint, without the field validation, then develop and release the validations in the next sprint.
It sounded like a fine idea - I'm all for incremental releases to keep everyone busy and the project moving forward, and that's a big part of what agile is about, right? But this turned out to be a lousy idea, and the biggest reasons were:
- We had to ignore errors in the fields constantly, which goes against the purpose of QA.
- Each error required evaluation: was it an expected error that should be ignored (and not needlessly reported), or a real error that needed reporting (and not mistakenly ignored)?
- There were many more errors than usual, halting work more frequently and, depending on severity, sometimes forcing testers to restart a test from the beginning.
- Once the validations were in place nearly a complete regression of the pages had to be performed, thus doubling the work performed (or more, given the above issues).
Ignoring bugs
The first issue is foundational: if QA personnel ignore bugs, that is an invitation to poor quality. Granted, you might say (and be correct!) that any bugs ignored in this sprint due to functionality planned for the next should be found during that next sprint's testing or during a regression test. But the fact is you're leaving it up to a future possibility rather than an in-this-moment action. Obvious defects should be reported at the time of discovery, not left to the tester to decide whether they're 'real' bugs or not.
Bug evaluation
Which brings up the second issue: every bug now needs evaluation on whether to report it. If reported unnecessarily, you've wasted your own time and the developer's, and potentially increased team tension. If incorrectly ignored, you've introduced a bug into your system that you hope will be caught in the next sprint or in regression.
You've also introduced some assumptions and expectations into the picture: first, that all team members are clear on what's being built this sprint and the next (laugh, but you know these gaps happen), and second, that the QA team is technical enough to know what is a defect due to validation (and therefore ignore) and what is a 'real' bug and must be reported. Of course having all QA engineers at this level may be ideal, but the reality is not all testers know why certain errors are thrown.
A case in point: one of our fields (the user's SSN) was required to have a value, but in the first sprint this requirement was not enforced. A 'Null Reference Exception' was thrown when the field was left blank and the form submitted. From the developer's perspective, the error would be caught by the next sprint's validation work and should be ignored; from a QA perspective, the error is major and should be reported. Only a test engineer with a development background would likely understand that the error occurred because the Person object containing the SSN was blank and the code wasn't set up to handle this, since it was relying on future client-side validation.
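To make the scenario concrete, here's a minimal Python sketch of the situation. The names (`Person`, the submit functions) are illustrative, not the project's actual code, and Python raises an `AttributeError` where .NET would throw a `NullReferenceException` - the failure mode is the same: the sprint-one handler assumes the SSN is always populated, so a blank submission blows up deep in the code, while the sprint-two validation turns the same input into a clean, expected error.

```python
class Person:
    """Illustrative stand-in for the form's backing object."""
    def __init__(self, ssn=None):
        self.ssn = ssn


def submit_form_sprint_one(form):
    # Sprint 1: no validation. The code assumes SSN is always present,
    # so a blank field leaves ssn as None and the dereference below
    # raises AttributeError (the analogue of a Null Reference Exception).
    person = Person(form.get("ssn"))
    return person.ssn.replace("-", "")


def submit_form_sprint_two(form):
    # Sprint 2: validation in place. A blank SSN is rejected up front
    # with a clear, reportable message instead of an unhandled exception.
    ssn = form.get("ssn")
    if not ssn:
        return "error: SSN is required"
    return Person(ssn).ssn.replace("-", "")
```

A tester without access to (or knowledge of) the sprint-one internals has no way to tell that the exception is "expected"; only the sprint-two behavior looks like a deliberate validation result.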
Incremental page releases = doubling the effort
The third and fourth items are self-explanatory. We had more bugs than typical because of the incomplete functionality, and once the form validations were in place, just about the entire page needed retesting to make sure both the UI and the functionality now worked as expected. Since we had already spent significant time wading through the half-built pages, testing again doubled the effort.
My recommendation: if you're in the agile process, stay in the process. Don't revert to a pseudo-waterfall method, releasing incrementally to QA and hoping everything gets caught in the end. You'll save time and resources, not to mention the sanity of your testers.