The goal of this document is to describe the current state, future needs, and innovations required for the QA Tech4MF function.
Our Vision:
We consistently and rapidly release software that delights our customers with its ease of use, performance, and quality.
As documented in the Mifos technology plan, we have 3 main goals for the next two years:
1. Make feature development 10 times faster.
2. Transform Mifos into a best-of-breed business intelligence system for microfinance.
3. Make Mifos scalable to 10 million clients hosted in cloud datacenters.
We need to reduce the time from feature request to customer delivery, while ensuring these faster deliveries still meet the customer's requirements. For MFIs, our software must be stable, accurate, and consistently easy to use.
To decrease clock time, we will:
a) Build more automated acceptance/regression tests. This automation will reduce the manual execution time currently required at the end of each release cycle. Once all functional tests are automated, we can immediately cut two weeks from the end of our quarterly release schedule and release hot fixes to our cloud deployments with greater confidence. Automation will also free up time for exploratory testing of newly developed features in each release.
b) Increase collaboration with the development team. Work in tightly coupled teams of PM, developer, and QA. The QA engineer will be involved in early design decisions, testing functional requirements, pairing with developers on unit and integration tests, building automated acceptance tests, and exploratory testing of newly developed features. In the next year, each QA engineer should be teamed with no more than three developers.
c) Build a test framework to exercise the functional aspects of Mifos through business-level APIs. Creating and maintaining functional tests is faster at the API level. The first step will be writing API tests for a single module of Mifos (e.g. Savings) and validating that we can “push down” automated tests that are currently executed through the UI with Selenium. The result will be a “pyramid” of tests, with the majority of tests at the unit level and progressively fewer at the integration, API, and UI levels.
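As a rough sketch of what a “pushed down” business-level API test could look like, the following uses a hypothetical in-memory SavingsService stand-in (the real Mifos service interface differs); the point is that a deposit scenario currently driven through Selenium can be asserted directly against the API, with no browser involved:

```python
import unittest
from decimal import Decimal


class SavingsService:
    """Hypothetical in-memory stand-in for a business-level savings API;
    not the real Mifos service interface."""

    def __init__(self):
        self._accounts = {}

    def open_account(self, client_id):
        self._accounts[client_id] = Decimal("0")

    def deposit(self, client_id, amount):
        amount = Decimal(str(amount))
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._accounts[client_id] += amount

    def balance(self, client_id):
        return self._accounts[client_id]


class SavingsApiTest(unittest.TestCase):
    """The same scenario a Selenium UI test would cover, run at the API level."""

    def setUp(self):
        self.service = SavingsService()
        self.service.open_account("client-1")

    def test_deposit_updates_balance(self):
        self.service.deposit("client-1", "250.00")
        self.assertEqual(self.service.balance("client-1"), Decimal("250.00"))

    def test_rejects_non_positive_deposit(self):
        with self.assertRaises(ValueError):
            self.service.deposit("client-1", 0)
```

Because the test bypasses page rendering and element lookup, it runs in milliseconds and breaks only when the business behavior changes, not when the UI layout does.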
d) Make Mifos more testable. This includes replacing custom Mifos modules with established FLOSS modules that are tested by external teams, such as Spring Security and the Quartz scheduler.
e) Measure and establish criteria for test code coverage at all testing levels – unit, integration, API/service, and UI acceptance. Set coverage criteria on a per-module basis. This gives us a clear, measurable view of which parts of the application have tests and which areas require additional test automation, helping us ensure we spend our testing effort wisely.
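A per-module coverage gate could be as simple as the sketch below; the module names and threshold percentages are illustrative placeholders, not actual Mifos modules or agreed criteria:

```python
# Hypothetical per-module coverage criteria (percent); illustrative values only.
COVERAGE_CRITERIA = {"savings": 80, "loans": 75, "reports": 60}


def coverage_gaps(measured, criteria):
    """Return {module: (measured, required)} for every module whose measured
    coverage percentage falls below its criterion. A module with no measured
    coverage counts as 0 percent."""
    return {
        module: (measured.get(module, 0), minimum)
        for module, minimum in criteria.items()
        if measured.get(module, 0) < minimum
    }
```

A CI job could feed this function numbers parsed from a coverage report and fail the build when the returned mapping is non-empty.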
f) Change our release management process so we deliver only stable, tested features to our release branch. This will make us more agile and avoid situations where we hold up an entire release for one feature that isn't ready.
g) Reduce the complexity of writing tests. In addition to writing tests below the UI layer, we will introduce easier test authoring in easyb or another language common to the rest of Mifos. We will also find simpler methods for test data storage, generation, and comparison.
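One common way to simplify test data generation is a builder that supplies sensible defaults so each test specifies only the fields it cares about. The sketch below is a minimal example of that pattern; the field names are hypothetical, not the actual Mifos client schema:

```python
import copy

# Hypothetical default client record; field names are illustrative only.
DEFAULT_CLIENT = {"name": "Test Client", "office": "Head Office", "status": "active"}


def make_client(**overrides):
    """Build a test client record from defaults, overriding only the fields
    a given test actually cares about. Deep-copies the defaults so tests
    cannot accidentally mutate shared state."""
    client = copy.deepcopy(DEFAULT_CLIENT)
    client.update(overrides)
    return client
```

A test for closed-client behavior then reads `make_client(status="closed")` instead of repeating an entire record, which keeps tests short and makes the relevant field obvious.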
h) Identify root causes for customer bugs. Build regression tests for these bugs and work with the development team to eliminate the underlying causes. Example causes include overly complex modules, unclear requirements, product usability issues, and overly complex configuration options.
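A regression test pins the fixed behavior so the bug cannot silently return. The sketch below imagines a customer-reported rounding bug in interest calculation (the bug, function, and figures are hypothetical, not a real Mifos defect):

```python
from decimal import Decimal, ROUND_HALF_UP


def monthly_interest(balance, annual_rate_percent):
    """One month of simple interest, rounded to the nearest cent.
    Hypothetical fix: an earlier float-based version lost cents."""
    interest = Decimal(balance) * Decimal(annual_rate_percent) / Decimal("1200")
    return interest.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)


def test_interest_rounding_regression():
    # Pins the (hypothetical) customer-reported case: 1000.00 at 12.5% annual
    # is 10.41666... per month, which must round to 10.42, not truncate.
    assert monthly_interest("1000.00", "12.5") == Decimal("10.42")
```

Keeping each such test named after the customer issue it reproduces also gives the development team a concrete artifact when investigating the root cause.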
i) User stories will be signed off by QA and the product owner (business analyst) as part of the iteration, making features ship-ready sooner.
Stage 1 – Release quarterly for 2 releases
Stage 2 – Release monthly for 3 releases
Stage 3 – Release twice monthly for 12 releases
Variables in determining how much our delivery time can compress:
We must validate that proper report results are delivered by our new Business Intelligence system. This testing includes end-user Pentaho reports, ETL jobs, and integration of data with other business systems.
To test this we will:
Stage 1 – Test Reports Manually, ship monthly
Stage 2 – Test Reports with automated functional tests, ship monthly
Stage 3 – Test Reports with layered test automation, ship sub-monthly
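An automated functional test for report correctness often reduces to reconciling what the ETL/report layer produced against the source transactions. The sketch below shows one such check; the row schema and grouping key are illustrative, not the actual Pentaho report structure:

```python
# Hypothetical reconciliation between source transactions and the rows an
# ETL job / report produced; the "office"/"amount" schema is illustrative.
def reconcile_totals(source_rows, report_rows, key="office", amount="amount"):
    """Group both datasets by `key`, sum `amount`, and return the set of
    keys whose totals disagree (including keys present on only one side)."""
    def totals(rows):
        out = {}
        for row in rows:
            out[row[key]] = out.get(row[key], 0) + row[amount]
        return out

    src, rpt = totals(source_rows), totals(report_rows)
    return {k for k in src.keys() | rpt.keys() if src.get(k) != rpt.get(k)}
```

In Stage 2 this could run after each ETL job, and in Stage 3 it becomes one layer in a pyramid of report tests alongside cheaper unit tests of individual transformations.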
Mifos will be deployed on a frequent cycle, using a traditional three-stage deployment model – testing, staging, and production. Tests will be conducted at each stage before promoting a release to the customer environment. Testing emphasis includes performance, scalability, reliability, and security.
Stage 1 – Test Deployments using documented manual checklists
Stage 2 – Manually monitored automated tests for deployment provisioning
Stage 3 – Automated provisioning tests with automated rollback/promotion
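The Stage 3 promote-or-rollback logic can be sketched as a simple gate: run every check for the current stage, promote one stage on success, signal a rollback on failure. The stage names follow the three-stage model above; the function and check interface are hypothetical, not an existing Mifos deployment tool:

```python
# Stages from the three-stage deployment model described above.
STAGES = ["testing", "staging", "production"]


def promote(release, current_stage, checks):
    """Run all checks (callables returning True on success) against the
    release in its current stage. Return ("promoted", next_stage) on
    success, ("released", "production") when already at the final stage,
    or ("rollback", current_stage) when any check fails."""
    if not all(check(release) for check in checks):
        return ("rollback", current_stage)
    next_index = STAGES.index(current_stage) + 1
    if next_index >= len(STAGES):
        return ("released", "production")
    return ("promoted", STAGES[next_index])
```

In practice the checks would wrap the performance, scalability, reliability, and security tests listed above, so a release reaches production only after passing all of them in staging.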