Test Report - Leila E

Introduction

This report summarizes the results of testing completed by the Mifos team during the version 2.0 (Leila E) project.

This release adds several major features to Mifos as documented on the Leila E Release page.

Test Planning

Individual feature test planning and regression testing documents for this release were stored on MifosForge. The test cases are maintained in the “Mifos Test Cases” project. The test plan and testing schedule are also maintained on MifosForge.

Automated acceptance tests are executed with each continuous build and are maintained as part of the project source. Acceptance tests are stored under acceptanceTests/src/test in the E release branch repository, and test results are captured with each build.
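
As an illustration of what these acceptance tests look like, the following is a minimal sketch of a browser-driven check in the same spirit, using Selenium WebDriver with JUnit. The URL, form field names, and credentials are placeholders, not values from the actual suite.

    import static org.junit.Assert.assertTrue;

    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class LoginAcceptanceTest {

        private WebDriver driver;

        @Before
        public void openBrowser() {
            driver = new FirefoxDriver();
            // Placeholder URL for a locally deployed Mifos instance.
            driver.get("http://localhost:8080/mifos");
        }

        @Test
        public void userCanLogIn() {
            // Field names and credentials are placeholders, not the real form ids.
            driver.findElement(By.name("username")).sendKeys("mifos");
            driver.findElement(By.name("password")).sendKeys("testmifos");
            driver.findElement(By.name("password")).submit();
            assertTrue("expected the home page after login",
                    driver.getPageSource().contains("Home"));
        }

        @After
        public void closeBrowser() {
            driver.quit();
        }
    }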

Scalability performance testing was performed by the SunGard team in Bangalore, India. The test plan, JMeter performance scripts, and data generation scripts are all stored under documents/performance/SunGard. Testing is run twice weekly and the results are kept on the Hudson server that schedules these performance tests.
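
The data generation scripts are what make large-scale runs repeatable: they seed the database with a configurable number of clients before each test. The sketch below shows the general idea using plain JDBC batching against MySQL; the connection details and the table and column names are illustrative only, not the actual Mifos schema.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class ClientDataGenerator {

        public static void main(String[] args) throws Exception {
            // Placeholder connection details for the performance lab database.
            Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/mifos", "mifos", "mifos");
            conn.setAutoCommit(false); // commit in chunks rather than per row

            // Illustrative table/columns; the real scripts target the Mifos schema.
            PreparedStatement insert = conn.prepareStatement(
                    "INSERT INTO client (display_name, office_id) VALUES (?, ?)");

            int target = Integer.parseInt(args.length > 0 ? args[0] : "1000000");
            for (int i = 1; i <= target; i++) {
                insert.setString(1, "Client " + i);
                insert.setInt(2, 1);
                insert.addBatch();
                if (i % 10000 == 0) { // flush periodically to bound memory use
                    insert.executeBatch();
                    conn.commit();
                }
            }
            insert.executeBatch();
            conn.commit();
            conn.close();
        }
    }

Batching and chunked commits are what make inserting on the order of a million rows practical before a performance run.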

Test Environment

For version 2.0 functional testing, tests were executed against the system's recommended operating system, Tomcat version, database, and browser.

Functional Test Results

Functional testing for 2.0 focused on the new features added for this release:

  • Business Intelligence (Reporting / Data Warehouse)
  • PPI
  • Question Groups
  • Tally Accounting Integration
  • MPESA Loan Repayments plus Savings Deposits v2
  • Waive interest on Loan Prepayment
  • Quartz Batch Jobs (see the sketch after this list)
  • Savings Interest Calculation Refactoring
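
Of these features, the Quartz Batch Jobs item is the most code-visible change: the nightly batch tasks are scheduled through the Quartz library. The sketch below shows a minimal Quartz 2.x job and cron trigger; the job class, group names, and schedule are illustrative, not Mifos's actual batch job code.

    import org.quartz.Job;
    import org.quartz.JobDetail;
    import org.quartz.JobExecutionContext;
    import org.quartz.JobExecutionException;
    import org.quartz.Scheduler;
    import org.quartz.Trigger;
    import org.quartz.impl.StdSchedulerFactory;

    import static org.quartz.CronScheduleBuilder.cronSchedule;
    import static org.quartz.JobBuilder.newJob;
    import static org.quartz.TriggerBuilder.newTrigger;

    // Illustrative batch job, not the actual Mifos job class.
    public class LoanArrearsTask implements Job {
        @Override
        public void execute(JobExecutionContext context) throws JobExecutionException {
            // Batch work goes here, e.g. flagging overdue loan accounts.
        }
    }

    class BatchJobScheduler {
        public static void main(String[] args) throws Exception {
            Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();

            JobDetail job = newJob(LoanArrearsTask.class)
                    .withIdentity("loanArrearsTask", "batchJobs")
                    .build();

            // Run nightly at midnight; the cron expression is a placeholder.
            Trigger trigger = newTrigger()
                    .withIdentity("loanArrearsTrigger", "batchJobs")
                    .withSchedule(cronSchedule("0 0 0 * * ?"))
                    .build();

            scheduler.scheduleJob(job, trigger);
            scheduler.start();
        }
    }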

Automated Acceptance Test Results

Passed    Failed    Total
174       0         174

The automated tests are split by component, as reported on Hudson: https://ci.mifos.org/hudson/job/head-e-release/lastStableBuild/org.mifos$mifos-acceptanceTests/testReport/

Functional Test Execution Results

Test results from the first test cycle: http://mifosforge.jira.com/secure/IssueNavigator.jspa?mode=hide&requestId=10254

Of these cases, 114 are automated and the remainder are currently manual test cases.

For failed tests, a subsequent test cycle was completed once the related bugs were fixed. Test results from the second test cycle: http://mifosforge.jira.com/secure/IssueNavigator.jspa?mode=hide&requestId=10266

Scalability Testing

Work on the scalability lab this release focused on building continuous automation for the testing lab. The defined performance tests were executed on the Amazon EC2 test lab, and the performance lab results are kept on the Hudson server.


Issue Statistics

Issues were logged in the issue tracker (mifosforge.jira.com). The Leila E dashboard provides a summary of the issues found or targeted for version 2.0.

Test Risks


Testing-related risks for this release:

  • Dual testing work ongoing for the E release and the upcoming Mifos BI release.
  • Large set of features developed by teams in multiple locations. This risk was mitigated by having QA staff on site with the teams in Poland and India.
  • Large number of manual test cases to execute for regression testing.
  • Data migration testing for question groups caused late changes.


Release Criteria


Each release criterion below is followed by a note commenting on our success in meeting it.


  • All new features pass acceptance criteria based on the feature's functional requirements or other agreed release criteria.
  • All the features targeted for this release are feature complete and tested to the satisfaction of the entire team. 

Primary features passed all functional tests.  Some blocking issues were reported during initial testing, but those issues were resolved and retested. 

  • There are no remaining High (P1 or P2) priority open issues targeted for the release. Some high priority issues identified during testing may be deferred based on acceptable workarounds.

There are no defects still targeted for the v2.0 milestone.

  • Planned acceptance and regression tests have been executed, with all tests passing or having an acceptable workaround.

Tests were prioritized based on project risk and areas of change.  For the tests executed, all tests passed or have an acceptable workaround. 

  • Final release candidate build has been tested and has passed all acceptance and installation tests.

Release candidate build #379 from the E release branch was tested and finalized on December 20, 2010.


Summary

On the positive side, the main goal of this release was scalability, and in that area we had good success. A big hurdle was moving our performance testing to EC2. While setting up the lab on EC2 was time consuming, the benefit was immediate: we were able to run tests for different versions and scenarios in parallel, and we were no longer at the mercy of the availability of the hardware in the SunGard lab. The tests we ran focused on the critical use cases for Grameen Koota, and in those areas we were able to show negligible slowing of the application with 1 million clients compared to the baseline of 350,000 clients.

For the first time in several Mifos releases, we did NOT ship on schedule. The delays were not related to the scalability work, but instead to these factors:

  1. The volume of changes to verify became large with the larger development team, resulting in delayed feedback (rejection) of some changes.
  2. Some regressions did not appear on initial verification, but only surfaced when testing changed features together with related features.
  3. A production priority 1 bug was reported by a customer towards the end of the release schedule.
  4. Distraction from the 1.5 release early in the Shamin D design and implementation stages.
  5. Lack of additional automated acceptance tests (or lower-level tests) to catch refactoring regressions.
  6. Using Jira for test case tracking gives better transparency but is slower for entering manual test results.