MIFOS QA Technology Plan

The goal of this document is to describe the current state, future needs, and innovations required for the QA Tech4MF function.

Our Vision:

We consistently and rapidly release software that delights our customers with its ease of use, performance, and quality.

As documented in the Mifos technology plan, we have 3 main goals for the next two years:

1. Make feature development 10 times faster.

2. Transform Mifos into a best-of-breed business intelligence system for microfinance.

3. Make Mifos scalable to 10 million clients hosted in cloud datacenters.

QA plan for making development 10 times faster

We need to reduce the time from feature request to customer delivery, while ensuring these faster deliveries still meet the customer's requirements.  For MFIs, our software must be stable, accurate, and consistently easy to use.


To decrease clock time, we will:

a) Build more automated acceptance/regression tests.  This automation will reduce the manual execution time currently required at the end of each release cycle.  Once all functional tests are automated, we can immediately drop two weeks from the end of our quarterly release schedule and release hot fixes to our cloud deployments with greater confidence.  Automation will also increase the time available for exploratory testing of the newly created features in each release.

b) Increase collaboration with the development team.  Work in tightly coupled teams of PM, developer, and QA.  The QA engineer will be involved in early design decisions, testing functional requirements, pairing with developers on unit and integration tests, building automated acceptance tests, and exploratory testing of newly developed features.  In the next year, each QA engineer should be teamed with no more than three developers.

c) Build a test framework to exercise functional aspects of Mifos through business-level APIs.  Creating and maintaining functional tests is faster at the API level.  The first step will be to write API tests for a single module of Mifos (e.g., Savings) and validate that we can "push down" automated tests that are currently executed through the UI with Selenium.  The result will be a "pyramid" of tests, with the majority of tests at the unit level and progressively fewer at the integration, API, and UI levels.

d) Make Mifos more testable.  This includes replacing custom Mifos modules with established FLOSS modules that are tested by external teams, such as Spring Security and the Quartz scheduler.

e) Measure and establish criteria for test code coverage at all testing levels: unit, integration, API/service, and UI acceptance.  Set coverage criteria on a per-module basis.  We will then have a clear, measurable view of which parts of the application have tests and which areas require additional test automation, helping us spend our testing effort wisely.

f) Change our release management process so we deliver only stable, tested features to our release branch.  This will allow us to be more agile and avoid situations where we hold up an entire release for one feature that isn't ready.

g) Reduce the complexity of writing tests.  In addition to writing tests below the UI layer, we will introduce easier test authoring in easyb or another language common to the rest of Mifos.  We will also find easier methods for test data storage, generation, and comparison.

h) Identify root causes for customer bugs.  Build regression tests for these bugs and work with the development team to eliminate the underlying cause.  Example causes include overly complex modules, unclear requirements, product usability problems, and overly complex configuration options.

i) User stories will be signed off by QA and the product owner (business analyst) as part of the iteration, making features ship-ready sooner.
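The business-level API tests described in (c) might look like the following sketch.  It is in Python purely for illustration (the real suite would target actual Mifos service endpoints from Java or easyb), and `SavingsService` is a hypothetical in-memory stand-in, not the Mifos API; the point is that the same behavior a Selenium UI test checks can be asserted below the UI layer:

```python
# Hypothetical sketch of a business-level API test for the Savings module.
# SavingsService is an illustrative stand-in, NOT the real Mifos API.

class SavingsService:
    """In-memory stand-in for a savings-account service."""

    def __init__(self):
        self._accounts = {}
        self._next_id = 1

    def open_account(self, client_name, opening_deposit=0):
        account_id = self._next_id
        self._next_id += 1
        self._accounts[account_id] = {"client": client_name,
                                      "balance": opening_deposit}
        return account_id

    def deposit(self, account_id, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._accounts[account_id]["balance"] += amount

    def balance(self, account_id):
        return self._accounts[account_id]["balance"]


def test_deposit_updates_balance():
    # The same check a UI test would make, exercised below the UI layer.
    service = SavingsService()
    account_id = service.open_account("Asha Group", opening_deposit=100)
    service.deposit(account_id, 50)
    assert service.balance(account_id) == 150


test_deposit_updates_balance()
print("API-level savings test passed")
```

Because such a test needs no browser, it runs in milliseconds and is far cheaper to maintain than its UI equivalent, which is why the pyramid puts most tests at this level or below.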

Results by Stage

Stage 1 – Release quarterly for 2 releases

  1. Ship the Leila E release in December 2010 and one quarterly release in the first half of 2011.  Each release requires 15 days of manual regression test execution.
  2. Capture current automated test code coverage of unit, integration, and acceptance level tests.
  3. Add 200 new automated acceptance tests of a possible 250.
  4. Spike on writing automated functional tests at the API level for one module and validate the approach.
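Capturing coverage (step 2) becomes actionable once it is compared against the per-module criteria from item (e) above.  A minimal sketch of such a gate follows; the module names and thresholds are invented for illustration, and real figures would come from the build's coverage report:

```python
# Sketch of a per-module coverage gate. Module names and thresholds are
# illustrative; real numbers would come from the build's coverage report.

COVERAGE_TARGETS = {  # hypothetical minimum line coverage (%) per module
    "savings": 80,
    "loans": 75,
    "reports": 60,
}

def coverage_gaps(measured):
    """Return {module: (measured, target)} for modules below target."""
    return {module: (measured.get(module, 0), target)
            for module, target in COVERAGE_TARGETS.items()
            if measured.get(module, 0) < target}

# Example: loans and reports need more automation before the gate passes.
gaps = coverage_gaps({"savings": 85, "loans": 70, "reports": 55})
print(sorted(gaps))  # ['loans', 'reports']
```

Publishing the gap report with each build gives the "clear and measurable view" of where test automation effort should go next.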

Stage 2 – Release Monthly for 3 releases

  1. Complete the automated regression test suite: add tests for the backlog of existing features and new acceptance tests for new features.
  2. Build API-level and lower tests for all new feature work, in concert with the development team's unit and integration tests.

Stage 3 – Release Twice Monthly for 12 releases

  1. Reduce the dedicated QA testing time at the end of the Mifos release cycle from 5 days to 2 days by validating requirements closer to the completion of each feature.
  2. Expand the coverage matrix of our test automation to additional browsers and operating systems.
  3. Convert 75% of UI-based acceptance tests to API/service-level tests.
  4. QA team tests features closely with developers (including pair programming/testing), identifying issues earlier in the development process and reducing the overhead of fixing bugs introduced weeks earlier.
  5. Strong regional QA presence in India and Africa to support our regional customers' issues.
  6. We ship with no known open bugs.
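The expanded browser/operating-system coverage matrix (step 2) can be generated rather than maintained by hand, so the acceptance suite is scheduled against every supported combination.  A small sketch, with illustrative browser and OS lists rather than an actual support commitment:

```python
# Sketch: generate the (browser, OS) combinations the acceptance suite
# should run against. The lists below are illustrative placeholders.
from itertools import product

BROWSERS = ["firefox", "ie8", "chrome"]    # hypothetical supported browsers
OPERATING_SYSTEMS = ["windows", "linux"]   # hypothetical supported platforms

def coverage_matrix(browsers, operating_systems):
    """All (browser, os) pairs the automated suite should cover."""
    return list(product(browsers, operating_systems))

matrix = coverage_matrix(BROWSERS, OPERATING_SYSTEMS)
print(len(matrix))  # 6 combinations: 3 browsers x 2 operating systems
```

Driving the test scheduler from one generated list keeps the matrix honest as browsers are added or retired.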

Variables in determining how much our delivery time can compress:

  • QA staff level
  • Developer staff level
  • Product road map focus.  Some features are simple to implement and hard to test.

Transform Mifos into a best-of-breed business intelligence system for microfinance.

We must validate that our new Business Intelligence system delivers correct report results.  This testing includes end-user Pentaho reports, ETL jobs, and integration of data with other business systems.


To test this we will:

  1. Define a testing process for all Mifos business reports that is part of our overall design and release process for these deliverables.  This process will include the ability to test and deliver updates to individual reports or sets of standard reports.  Reports will be deployed to one or more cloud customers using testing, staging, and production systems.
  2. Create testing frameworks for automated testing of the data warehouse ETL jobs and Pentaho reports.  These frameworks will be independent of the Mifos test automation so a report developer or tester can run them quickly and easily.  Report testing framework efforts will be broken into segments:
    1. ETL job execution and validation.
    2. Report generation and data accuracy testing.
    3. UI validation testing.
    4. Performance/scalability testing.
  3. Develop standalone procedures and scripts for creating test data sets to test reporting functionality and load.
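ETL validation (segment 1 above) can often be reduced to reconciliation queries: after a job runs, record counts and key totals in the warehouse must match the source schema.  The sketch below uses an in-memory SQLite database to stand in for both schemas; the table and column names are hypothetical, not the real Mifos or warehouse schema:

```python
# Sketch of ETL output validation: after the ETL job runs, compare record
# counts and a balance total between the source schema and the warehouse.
# Table and column names are hypothetical placeholders.
import sqlite3

def validate_etl(conn):
    """Return a list of discrepancies (empty list = ETL output consistent)."""
    problems = []
    cur = conn.cursor()
    for check, source_sql, warehouse_sql in [
        ("row count",
         "SELECT COUNT(*) FROM src_savings_account",
         "SELECT COUNT(*) FROM dw_savings_fact"),
        ("balance total",
         "SELECT SUM(balance) FROM src_savings_account",
         "SELECT SUM(balance) FROM dw_savings_fact"),
    ]:
        src = cur.execute(source_sql).fetchone()[0]
        dw = cur.execute(warehouse_sql).fetchone()[0]
        if src != dw:
            problems.append(f"{check}: source={src} warehouse={dw}")
    return problems

# Demo with an in-memory database standing in for both schemas.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src_savings_account (id INTEGER, balance REAL)")
conn.execute("CREATE TABLE dw_savings_fact (id INTEGER, balance REAL)")
conn.executemany("INSERT INTO src_savings_account VALUES (?, ?)",
                 [(1, 100.0), (2, 250.0)])
conn.executemany("INSERT INTO dw_savings_fact VALUES (?, ?)",
                 [(1, 100.0), (2, 250.0)])
print(validate_etl(conn))  # [] -> counts and totals match
```

Because such checks are plain queries, they stay independent of the Mifos test automation and can be run by a report developer immediately after an ETL change.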

Results by Stage


Stage 1 – Test Reports Manually, ship monthly

  1. Manually test new standard reports, requiring 1-4 days of testing depending on the complexity of the report and test data.
  2. Spike on automated testing for one standard report.


Stage 2 – Test Reports with automated functional tests, ship monthly

  1. Automated functional tests for each standard report.
  2. Customer test data.
  3. Automated integration testing, possibly using mock interfaces.
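The mock-interface approach in step 3 lets report logic be integration-tested without a live warehouse.  A minimal sketch follows; `portfolio_report` and the data-source interface are invented names for illustration, not Mifos or Pentaho APIs:

```python
# Sketch: integration-style test of report logic against a mocked data
# source. portfolio_report and fetch_loan_balances are illustrative names.
from unittest.mock import Mock

def portfolio_report(data_source):
    """Toy report: count and total of loan balances from a data source."""
    rows = data_source.fetch_loan_balances()
    return {"loan_count": len(rows), "total_balance": sum(rows)}

# The mock stands in for the warehouse interface the real report queries.
mock_source = Mock()
mock_source.fetch_loan_balances.return_value = [120.0, 80.0, 300.0]

report = portfolio_report(mock_source)
print(report)  # {'loan_count': 3, 'total_balance': 500.0}
mock_source.fetch_loan_balances.assert_called_once()
```

Mocking the interface keeps the tests fast and deterministic; the ETL reconciliation checks cover the real data path separately.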


Stage 3 – Test Reports with layered test automation, ship sub-monthly

  1. Continuous, automated functional tests
  2. Automated integration, UI validation tests
  3. Stress, performance testing and test data
  4. Continuous automated unit and integration testing, using mock interfaces.


Make Mifos scalable to 10 million clients hosted in cloud datacenters.

Mifos will be deployed on a frequent cycle, using a traditional three-stage deployment model: testing, staging, and production.  Tests will be conducted at each stage to promote a release toward the customer environment.  Testing emphasis includes performance, scalability, reliability, and security.


  1. Define a repeatable, consistent testing process for all Mifos cloud deployments as part of our overall design and release process for staged deployments.  This process will include acceptance criteria for promoting a release from Test to Stage to Production.  We will also define fallback procedures for when a system fails one or more acceptance criteria.
  2. Enhance and use continuous integration and continuous performance tests.  All tests must pass acceptance criteria to promote from Test to Stage.  Acceptance criteria for each stage will be published and refined as the process improves.
  3. Build a test plan and perform security vulnerability testing.
  4. Build a test plan and perform tests to validate future deployment models, working toward multi-tenancy.
  5. Use a system administration automation tool to promote deployments in an automated process.  To have confidence in these scripts, we will need tests written against them.
  6. Improve logging and error-capture capabilities in Mifos to collect detailed diagnostics and quickly solve customer issues.
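The promotion gate described in items 1 and 2 can be sketched as a small decision function: a build moves from Test to Stage only if every published acceptance criterion passes, and otherwise falls back.  The metric names and thresholds below are illustrative, not published Mifos criteria:

```python
# Sketch of an automated Test -> Stage promotion gate. Metric names and
# thresholds are illustrative placeholders, not published criteria.

def promotion_decision(results, max_p95_latency_ms=2000):
    """Return ('promote', []) or ('rollback', [reasons]) for a build."""
    failures = []
    if results["tests_failed"] > 0:
        failures.append("functional tests failed")
    if results["p95_latency_ms"] > max_p95_latency_ms:
        failures.append("performance criterion not met")
    return ("promote", []) if not failures else ("rollback", failures)

decision, reasons = promotion_decision(
    {"tests_failed": 0, "p95_latency_ms": 1500})
print(decision)  # promote
```

In Stage 2 of this plan the decision is reviewed by engineering before acting on it; in Stage 3 the same function's output drives promotion or rollback automatically.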

Results by Stage

Stage 1 – Test Deployments using documented manual checklists

  1. Manually test new deployments using checklists.
  2. Continuous Performance Testing to 2M clients.

Stage 2 – Manually monitored automated tests for deployment provisioning

  1. Automated tests for Test, Stage, and Production systems, including Mifos application servers, database servers, and reporting servers.  Automated test results are reviewed by engineering, and promotion/rollback is controlled manually based on test results.
  2. Security vulnerability testing.
  3. Continuous Performance Testing to 3M clients. 

Stage 3 – Automated provision testing with automated rollback/promotion

  1. Able to run automated functional and performance tests against testing, staging, and production instances to validate environments.  Promotion to the next stage is automated when tests pass, with automatic rollback on failure.
  2. Failover and load balance testing.
  3. Single sign on testing.
  4. Automated tests of fleet monitoring tools and scripts.
  5. Multi-tenant test environment built for load, scalability testing.