
...

One-time setup to start working with the cloud. There are two major tools that need to be set up: the Amazon EC2/RDS tools and the Chef command-line client. This guide assumes Ubuntu desktop, lucid release (10.04). Later versions of Ubuntu might work as well.

EC2/RDS command line tools

...

No Format
$ mkdir -p ~/git/mifos-cloud
$ git clone git://mifos.git.sourceforge.net/gitroot/mifos/cloud ~/git/mifos-cloud
$ cd ~/git/mifos-cloud
# update version before doing anything: vi chef/cookbooks/<cookbook>/metadata.rb
# make your changes, then upload the cookbook:
$ knife cookbook upload -o ~/git/mifos-cloud/chef/cookbooks <cookbook you changed>
# finally, git add/commit/push
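The version bump in metadata.rb (the first step above) is a one-line change; a sketch, assuming the cookbook was previously at version 0.1.0 (the version numbers here are placeholders):

```ruby
# chef/cookbooks/<cookbook>/metadata.rb (other fields omitted)
version "0.1.1"   # bumped from "0.1.0" so knife registers a new cookbook version
```

Without the bump, knife will refuse to (or silently not) replace the already-uploaded version, so always change it before uploading.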

...

No Format
knife exec state.rb

How to change Pentaho to run reports and ETL against an RDS replica

If you wish to use the RDS instance for Mifos and Pentaho, ignore this section.

1. Set up RDS replica.

2. Edit the role:

No Format

knife role edit mifos_MFI

3. Edit override_attributes.pentaho.mifos_database_replica_host (adding this setting if it is not already present). "null" means fall back to override_attributes.mifos.database_host, and is the same as omitting override_attributes.pentaho.mifos_database_replica_host (see cookbooks/pentaho/recipes/default.rb in the cloud repo for details).
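As a sketch, the relevant portion of the role after step 3 might look like the following; the replica endpoint shown is a placeholder, not a real host:

```json
{
  "override_attributes": {
    "pentaho": {
      "mifos_database_replica_host": "mfi-replica.example.us-east-1.rds.amazonaws.com"
    }
  }
}
```

Setting the value to null (or omitting the key entirely) makes Pentaho fall back to override_attributes.mifos.database_host.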

NOTE: nothing maintains Pentaho's database (ex: "MFISHORTNAME_prod_hib"), so "SourceDB" must be changed manually here.

NOTE: data sources in BIRT reports must be maintained manually, separately.

NOTE: data sources in Jasper Servers must also be maintained manually, separately.

Starting a new mifos/pentaho instance

...

Most persistent data is stored in RDS. This implies the data is highly available, as it is replicated synchronously across two availability zones. However, it is certainly not impossible to lose an entire region, e.g. due to natural disaster. In addition to relying on multi-AZ functionality, we also save an encrypted daily full mysqldump to cloudboss (in the us-east-1b AZ) here: https://cloudboss.mifos.org/cloud   In the event of disaster, you would need to download and decrypt the snapshot, create a new RDS instance, and follow the instructions that apply when migrating an MFI from the old infrastructure.
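The restore path above can be sketched roughly as follows. The snapshot file name, download URL path, and RDS endpoint variables are all assumptions for illustration; check the actual layout on cloudboss before relying on this:

```shell
# Hypothetical snapshot name; the real naming scheme on cloudboss may differ
SNAPSHOT="mifos-full-$(date +%F).sql.gz.gpg"

# 1. Download the encrypted dump from cloudboss (credentials required):
#      curl -u "$ADMIN_USER" -O "https://cloudboss.mifos.org/cloud/$SNAPSHOT"
# 2. Decrypt and decompress (assumes the GPG key is on this machine):
#      gpg --decrypt "$SNAPSHOT" | gunzip > restore.sql
# 3. Create a new RDS instance, then load the dump into it:
#      mysql -h "$NEW_RDS_ENDPOINT" -u "$ADMIN_USER" -p < restore.sql
echo "restore plan prepared for $SNAPSHOT"
```

After the data is restored, continue with the old-infrastructure migration instructions (elastic IPs, chef config, etc.) as referenced above.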

...

In the event of an AZ being unavailable or a hardware failure:

  • Identify which situation you are in by checking whether other nodes in the same AZ are available.
  • If hardware failure, simply launch a new instance with the appropriate AMI, add it to chef config, remap elastic ip, mount volumes, etc.
  • If an entire AZ is down:
  • create new volumes in alternate AZ, and retrieve from backuppc the uploads, custom reports etc.
    • only use "tar download" restore method, and only of /etc/mifos/uploads dir
    • download tar to local machine, then copy to remote host and untar as user "tomcat6"
  • relaunch each frontend into an alternate AZ, add to chef config, remap elastic ips, mount new volumes etc.
Warning

If you manually stop Mifos, for example during a restore of /etc/mifos/uploads, Chef will automatically restart it. To temporarily disable this behavior, run sudo service chef-client stop, then sudo service chef-client start when you're finished.

...

  • updated from ci periodically (right now * */3 * * *)
  • committed/tagged/pushed to the "cloud" git repo at sf.net if there is a change (monitor the commit logs to be notified of changes)
  • tag has build number and job name

Image maintenance

When upgrading machines, be sure to schedule outages.

Upgrades for new Mifos versions

...

  • modify the AMI generation script for that Mifos release to use the new point release version. (We would modify imaging/mifos_2_1_bi_1_2.sh to get the 2.1.x Mifos version along with BI 1.2).
  • update the "mifosversion" variable in the script to be 2.1.9 (commit and push)
  • re-run the hudson job "cloud-mifos_2_1-bi_1_2-image" to create a new AMI with the updated Mifos war (the name of the new AMI will be in the console log output of the hudson job).
  • follow the groovy script usage below using the new AMI generated in the previous step.
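The variable update in the second step above is a one-line edit to the imaging script; a sketch (the surrounding script contents are omitted here):

```shell
# imaging/mifos_2_1_bi_1_2.sh (excerpt)
mifosversion=2.1.9
```

Commit and push this change before re-running the image job so the job picks up the new version.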

...

  • Statefiles with lists of latest packages are created periodically (see above).
  • An administrator must keep track of security releases in upstream Ubuntu packages.
  • A groovy script is available to move a customer from one image to the next (an upgrade or just a security update). This script should only be used if the MFI deployment for the environment already exists (volumes created, elastic IPs associated, etc.). It can be invoked like so:
Code Block
groovy upgrade.groovy <mfi e.g. rise> <environment testing|prod> <ami id>

...

Warning

TODO: how to move a customer to a newer image, what adhoc/manual tests to perform after bouncing their servers, and how/when to notify customers of the change(s).

When using the upgrade script, use the "Long Name" from the table below for the MFI argument.

Long Name          | Short Name
-------------------|-----------
secdep             | sec
rise               | ris
light-microfinance | lmf
digamber           | dig
keef               | kee

Image production

Image production ci jobs are manually kicked off since there is a cost associated with storing Amazon EC2 images. Fire off the cloud-mifos-image job on the ci server to create a new image.

...

No Format
ldapdelete -x -W -D cn=admin,dc=mifos,dc=org -h ldap.mifos.org -ZZ 'uid=johndoe,ou=people,dc=mifos,dc=org'

Also, you should fill in their data bag with an invalid key e.g.:

...

Code Block
#!/bin/sh
# usage: ./reset.sh johndoe <secret>

cat << EOF
dn: uid=$1,ou=people,dc=mifos,dc=org
changetype: modify
replace: userPassword
userPassword: $2
EOF

then you can invoke it like so:

Code Block
./reset.sh johndoe <THE NEW PASSWORD> | sudo ldapmodify -x -W -D cn=admin,dc=mifos,dc=org -h ldap.mifos.org -ZZ

...

  • Update the role (usually mifos_<MFI>)
  • run chef-client on each host to see changes immediately
  • update uploaded reports in /etc/mifos/uploads/reports with something like: sudo find /etc/mifos/uploads/reports -type f -exec sed -i -e 's/secdep-db.mifos.org/secdep.cz2a1vveusgo.us-east-1.rds.amazonaws.com/g' {} \;
  • update pentaho datasources in <mfi_shortname>_<environment>_hib e.g. sec_prod_hib with a query similar to:

...