...
- get the updated Word document for the country from the SPM team
- open the document in OpenOffice Writer, choose "Save As...", select the "Text Encoded" format with the "Western Europe (ASCII/US)" character set, and save the file as $PPI_BASE/data/<CountryName><CountryName>.txt, replacing the existing file
- update the line for this country in nicknames.csv, increasing the pointsVersion by 1
- run $PPI_BASE/parseall.sh to regenerate the Mifos PPI files
- include the updated $PPI_BASE/generated/scoringEtl/<country><year>PPIScore.sql in load_dw_ppi_survey.sql (which loads all scoring sql)
- cat generated/scoringEtl/*PPIScore.sql > ../JohnWoodlockWorkInProgress/MifosDataWarehouseETL/load_dw_ppi_survey.sql
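The nicknames.csv edit above is easy to get wrong by hand; a minimal sketch of a bump helper, assuming the country name is the first comma-separated field and the version being bumped is the third (the real column positions in nicknames.csv may differ -- check its header before using this):

```shell
#!/bin/sh
# bump_version: add 1 to the version number in column 3 of the CSV row
# whose first column matches the given country. Column positions are an
# assumption; adjust the awk field numbers to match nicknames.csv.
bump_version() {
  country="$1"
  file="$2"
  awk -F, -v OFS=, -v c="$country" '$1 == c { $3 += 1 } { print }' \
    "$file" > "$file.tmp" && mv "$file.tmp" "$file"
}
```

e.g. `bump_version <CountryName> nicknames.csv` before running parseall.sh.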
When changes are made to PPI category likelihood (poverty line) values.
- Get the updated "Lookup Tables v15.xls" document
- Open "Lookup Tables v15.xls" in OpenOffice Calc and navigate to the "<country_name> Mifos" tab for the country whose data has changed.
- Do a "Save As..." in CSV format to the $PPI_BASE/data/percents/<CountryName><CountryName>.csv file
- run $PPI_BASE/parseall_percents.sh to regenerate the poverty line files (generated/povertyLines/*PovertyLines.sql)
- put all the poverty line sql into a single file that is loaded by the data warehouse
- mv ../JohnWoodlockWorkInProgress/MifosDataWarehouseETL/load_ppi_poverty_lines.sql /tmp
- cat generated/povertyLines/*.sql > ../JohnWoodlockWorkInProgress/MifosDataWarehouseETL/load_ppi_poverty_lines.sql
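The last three steps can be wrapped in one small helper; a sketch assuming the relative paths shown above, run from $PPI_BASE:

```shell
#!/bin/sh
# rebuild_poverty_lines: concatenate the generated per-country poverty
# line SQL into the single file the data warehouse loads, keeping the
# previous combined file in /tmp so a bad rebuild can be rolled back.
# Paths are copied from the steps above and are relative to $PPI_BASE.
rebuild_poverty_lines() {
  etl_dir=../JohnWoodlockWorkInProgress/MifosDataWarehouseETL
  out="$etl_dir/load_ppi_poverty_lines.sql"
  if [ -f "$out" ]; then
    mv "$out" /tmp/
  fi
  cat generated/povertyLines/*.sql > "$out"
}
```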
...
- get the updated Word document for the country from the SPM team
- open the document in OpenOffice Writer, choose "Save As...", select the "Text Encoded" format with the "Western Europe (ASCII/US)" character set, and save the file as $PPI_BASE/data/<CountryName><CountryName>.txt, replacing the existing file
- update the line for this country in nicknames.csv, increasing the questionsVersion by 1
- run $PPI_BASE/parseall.sh to regenerate the Mifos PPI files
- include the updated $PPI_BASE/generated/scoringEtl/<country><year>PPIScore.sql in load_dw_ppi_survey.sql (which loads all scoring sql)
- cat generated/scoringEtl/*PPIScore.sql > ../JohnWoodlockWorkInProgress/MifosDataWarehouseETL/load_dw_ppi_survey.sql
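Because the parser expects the ASCII export described above, a quick sanity check before running parseall.sh can catch a wrong encoding choice; this helper is my addition, not part of the official steps:

```shell
#!/bin/sh
# check_ascii: fail if a file contains bytes outside printable 7-bit
# ASCII (plus whitespace), which usually means the document was not
# exported with the "Western Europe (ASCII/US)" character set.
check_ascii() {
  if LC_ALL=C grep -q '[^[:print:][:space:]]' "$1"; then
    echo "non-ASCII or control bytes found in $1" >&2
    return 1
  fi
  echo "$1 looks like plain ASCII"
}
```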
When a new major version of PPI is released.
...
- in bi/ppiparser run parseall.sh (this will regenerate the generated/testData/*.properties files)
- initialize a clean Mifos database with base etl test data
- echo "drop database mifos_ppi_test" | mysql -u root
- echo "create database mifos_ppi_test" | mysql -u root
- mysql -u root mifos_ppi_test < mifos_testetl_db.sql
- run PPITestDataGenerator from inside Eclipse with an arg pointing to the test data dir (e.g. -a /home/van/reportingWorkspace/bi/ppiparser/generated/testData)
- save the resulting Mifos database (with completed ppi surveys)
- mysqldump -u root mifos_ppi_test > load_testppi_db.sql
- run the etl (bi/ppi_build.sh) to populate the DW with PPI survey scores
- ppi_build.sh mifos_ppi_test mifos_ppi_test_dw ~/pentaho/data-integration/ '-u root'
- now PPITest.groovy can be run
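The drop/create/load steps above can be combined into one helper; a sketch in which the mysql client invocation is a parameter (defaulting to `mysql -u root` as in the document) and `if exists` is my addition so a fresh server does not error on the drop:

```shell
#!/bin/sh
# reset_test_db: drop, recreate, and reload the PPI test database.
# Combines the three mysql steps above; "$1" lets callers substitute a
# different client invocation (defaults to "mysql -u root").
reset_test_db() {
  mysql_cmd="${1:-mysql -u root}"
  db=mifos_ppi_test
  echo "drop database if exists $db" | $mysql_cmd
  echo "create database $db" | $mysql_cmd
  $mysql_cmd "$db" < mifos_testetl_db.sql
}
```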