Setup local machine
One-time setup to start working with the cloud. There are two major tools that need to be set up: the Amazon EC2/RDS command line tools and the Chef command line client. This guide assumes Ubuntu desktop, Lucid release (10.04). Later versions of Ubuntu may work, too.
EC2/RDS command line tools
- http://aws.amazon.com/developertools/2928?_encoding=UTF8&jiveRedirect=1
- http://aws.amazon.com/developertools/351
- http://docs.amazonwebservices.com/AmazonRDS/latest/CommandLineReference/
- shell environment variables; place these in your .bashrc so you don't have to set them up repeatedly (keys and credentials are in the vault):

```shell
export EC2_HOME=$HOME/ec2
export EC2_PRIVATE_KEY=`ls $EC2_HOME/pk-*.pem`
export EC2_CERT=`ls $EC2_HOME/cert-*.pem`
export EC2_REGION=us-east-1
export AMAZON_ACCESS_KEY_ID=<redacted>
export AMAZON_SECRET_ACCESS_KEY=<redacted>
```
- unzip tools and place bin directory in your path
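After unzipping, the bin directories can be put on the PATH from .bashrc. A sketch, assuming the EC2 tools were unpacked under $EC2_HOME and the RDS CLI under $HOME/ec2/RDSCli (both paths are assumptions; adjust to wherever you actually unpacked the downloads):

```shell
# Append to ~/.bashrc alongside the exports above (paths are assumptions):
export PATH=$PATH:$EC2_HOME/bin          # ec2-* commands
export AWS_RDS_HOME=$HOME/ec2/RDSCli     # RDS CLI unpack directory (adjust)
export PATH=$PATH:$AWS_RDS_HOME/bin      # rds-* commands
```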
chef
Chef provides configuration management of machines, starts/stops services when configuration changes, etc.
Set up Chef locally, create an account, and connect with your Chef server. We've been using "Opscode console" (they host our Chef server).
- http://wiki.opscode.com/display/chef/Package+Installation+on+Debian+and+Ubuntu (just install the "chef" package; chef-server is not needed since we use Opscode's)
- http://help.opscode.com/kb/start/2-setting-up-your-user-environment (scroll down to section Create your Chef repository)
- organization key is in vault directory as mifos-validator.pem.cpt (same password as the password vault)
- when setting up your chef env, if you lost your client key or want to generate a new one, go here (replace with your username): http://community.opscode.com/users/YOUR_USERNAME/ (be sure to log in again if you don't see the "get private key" link)
- create ~/.chef/knife.rb. Here's a template:

```ruby
# Replace USERNAME, ORGANIZATION with yours
current_dir = File.dirname(__FILE__)
log_level              :info
log_location           STDOUT
node_name              "USERNAME"
client_key             "#{current_dir}/USERNAME.pem"
validation_client_name "ORGANIZATION-validator"
validation_key         "#{current_dir}/ORGANIZATION-validator.pem"
chef_server_url        "https://api.opscode.com/organizations/ORGANIZATION"
cache_type             'BasicFile'
cache_options( :path => "#{ENV['HOME']}/.chef/checksums" )
# Customize as necessary. Mifos cookbooks are in the cloud git
# repository, under chef/cookbooks. Multiple paths are allowed.
cookbook_path ["#{ENV['HOME']}/git/mifos-cloud/chef/cookbooks"]
```
- Copy the keys and knife configuration you downloaded earlier into ~/.chef:

```shell
$ mkdir -p ~/.chef
$ cp USERNAME.pem ~/.chef
$ cp ORGANIZATION-validator.pem ~/.chef
$ cp knife.rb ~/.chef
```
- verify your connectivity:

```shell
knife node list
```

- you should see a list of nodes that are currently managed by chef
Knife hints
Cookbooks
List cookbooks that the chef server knows about:

```shell
knife cookbook list
```
Updating a cookbook
Cookbooks are stored in git in the chef directory of the cloud repository. If you want to update a cookbook, UPDATE/COMMIT/PUSH IN GIT FIRST AND BUMP THE VERSION NUMBER before sending it to the chef server. Here are step-by-step instructions:

```shell
$ mkdir -p ~/git/mifos-cloud
$ git clone git://mifos.git.sourceforge.net/gitroot/mifos/cloud ~/git/mifos-cloud
$ cd ~/git/mifos-cloud
# update the version before doing anything else:
$ vi chef/cookbooks/<cookbook>/metadata.rb
# make your changes, then upload:
$ knife cookbook upload -o ~/git/mifos-cloud/chef/cookbooks <cookbook you changed>
$ git add/commit/push
```
How to get the AMI of every node
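This section is a stub. One possible approach (a sketch, unverified against our setup, and it requires the EC2 credentials configured above): `ec2-describe-instances` prints an INSTANCE line per instance, where the second field is the instance id and the third is the AMI id.

```shell
# List instance id and AMI id for every instance (sketch; assumes the
# classic EC2 API tools output format, where INSTANCE lines carry the
# instance id in field 2 and the AMI id in field 3).
ec2-describe-instances | awk '$1 == "INSTANCE" { print $2, $3 }'
```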
Starting a new mifos/pentaho instance
- Stop mifos instance
- Dump database
- Copy uploads/config in MIFOS_CONF
- Create security group in EC2, setup permissions
- SSH "gateways" setup/info
- allows us to limit points of entry for our hosted machines
- note the hosts below in the ec2-authorize commands; the gateways are currently birch.mifos.org (the whole Seattle GTC, actually) and cloudboss.mifos.org.
- add to your .ssh/config (substituting MFINAME for something meaningful):

```
Host *MFINAME.mifos.org
  ProxyCommand ssh birch.mifos.org exec nc %h %p
```
- one-time setup for EC2 physical firewall
- manually change EC2_ACCOUNT_NUMBER with the 12-or-so digit number fetched from the AWS console
- SSH via gateways only
- 18980-18981 is for monitoring JMX over RMI via OpenNMS
```shell
#!/bin/bash
set -ex
SEC_GROUPS="digamber light-microfinance rise keef"
EC2_ACCOUNT_NUMBER=000000000000
for SEC_GROUP in ${SEC_GROUPS}
do
  ec2-authorize -P tcp -p 22-22 -s 75.149.167.24/32 ${SEC_GROUP}
  ec2-authorize -P tcp -p 22-22 -s 10.252.50.116/32 ${SEC_GROUP}
  ec2-authorize -P tcp -p 22-22 -s 184.72.240.48/32 ${SEC_GROUP}
  ec2-authorize -P tcp -p 443-443 -s 0.0.0.0/0 ${SEC_GROUP}
  ec2-authorize -P tcp -p 80-80 -s 0.0.0.0/0 ${SEC_GROUP}
  ec2-authorize -P tcp -p 18980-18981 -s 10.252.50.116/32 ${SEC_GROUP}
  ec2-authorize -P tcp -p 18980-18981 -s 184.72.240.48/32 ${SEC_GROUP}
  ec2-authorize -P udp -p 161-161 -s 10.252.50.116/32 ${SEC_GROUP}
  ec2-authorize -P udp -p 161-161 -s 184.72.240.48/32 ${SEC_GROUP}
  ec2-authorize -P tcp -p 161-161 -s 10.252.50.116/32 ${SEC_GROUP}
  ec2-authorize -P tcp -p 161-161 -s 184.72.240.48/32 ${SEC_GROUP}
  ec2-authorize -P icmp -t -1:-1 -s 10.252.50.116/32 ${SEC_GROUP}
  ec2-authorize -P icmp -t -1:-1 -s 184.72.240.48/32 ${SEC_GROUP}
  ec2-authorize -o ${SEC_GROUP} -u ${EC2_ACCOUNT_NUMBER} ldap
done
```
- Create RDS security group
- Authorize EC2 security group for MFI
- Authorize default EC2 security group temporarily to make importing existing database more straightforward, remove after importing
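The steps above can also be sketched with the RDS command line tools (flag names per the RDS CLI reference; the group name "rise" and EC2_ACCOUNT_NUMBER are placeholders, unverified against our exact tool version):

```shell
# Create the RDS security group for the MFI (name is a placeholder):
rds-create-db-security-group rise --db-security-group-description "rise MFI"
# Authorize the MFI's EC2 security group:
rds-authorize-db-security-group-ingress rise \
  --ec2-security-group-name rise \
  --ec2-security-group-owner-id EC2_ACCOUNT_NUMBER
# Temporarily authorize the default EC2 group for the database import:
rds-authorize-db-security-group-ingress rise \
  --ec2-security-group-name default \
  --ec2-security-group-owner-id EC2_ACCOUNT_NUMBER
# ...and revoke it again once the import is done:
rds-revoke-db-security-group-ingress rise \
  --ec2-security-group-name default \
  --ec2-security-group-owner-id EC2_ACCOUNT_NUMBER
```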
- Create RDS instances as m1.small initially
- v5.1.50
- enable auto minor version upgrade
- allocate 10GB
- use the MFI long name for the MySQL instance (e.g., "rise", "secdep")
- initial user/pass can be anything simple, this will be changed later
- leave Database Name blank
- Db Parameter Group: "mifoscloud"
- backup retention period: 8 days (best for PITR/binlogs)
- backup window: 1600-1700 UTC (good for India/Philippines/Africa)
- maintenance windows Saturday 1700-1800 UTC
- example:
```
Engine: mysql
Engine Version: 5.1.50
Auto Minor Ver. Upgrade: Yes
DB Instance Class: db.m1.small
Multi-AZ Deployment: Yes
Allocated Storage: 10
DB Instance Identifier: rise
Master User Name: mifos
Master User Password: mifos
Database Name:
Database Port: 3306
Availability Zone: Using a Multi-AZ Deployment disables this preference.
DB Parameter Group: mifoscloud
DB Security Group(s): rise
Backup Retention Period: 8
Backup Window: 16:00-17:00
Maintenance Window: Saturday 17:00-Saturday 18:00
```
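For scripting, the same instance can likely be created with the RDS command line tools; a sketch using the settings above (flag names per the RDS CLI reference, unverified against our exact tool version):

```shell
rds-create-db-instance rise \
  --engine mysql \
  --engine-version 5.1.50 \
  --auto-minor-version-upgrade true \
  --db-instance-class db.m1.small \
  --multi-az true \
  --allocated-storage 10 \
  --master-username mifos \
  --master-user-password mifos \
  --db-parameter-group-name mifoscloud \
  --db-security-groups rise \
  --backup-retention-period 8 \
  --preferred-backup-window 16:00-17:00 \
  --preferred-maintenance-window Sat:17:00-Sat:18:00
```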
- Create chef roles, base + test + prod + optional MFI specific recipe
- look at an existing role:

```shell
knife role show mifos_digamber
```

- create a new role:

```shell
knife role create mifos_rise
knife role create mifos_rise_test
```

(look at mifos_digamber_prod, mifos_digamber_test for examples)
- Create 2 EBS volumes, one each for test and prod (storing uploads)
for testing:

```shell
ec2-create-volume --snapshot snap-5abd2f36 -s 1 -z us-east-1d
ec2-create-tags -t Name=testing-digamber.mifos.org <vol-id>
```

for prod:

```shell
ec2-create-volume --snapshot snap-5abd2f36 -s 1 -z us-east-1a
ec2-create-tags -t Name=digamber.mifos.org <vol-id>
```
- Update DNS if required
- Create EC2 instances
- get ami-id from hudson job: https://ci.mifos.org/hudson/view/cloud/job/cloud-mifos-image
- latest ami-id for 2.0.2: ami-8a8d7fe3 (see at end: https://ci.mifos.org/hudson/view/cloud/job/cloud-mifos-image/44/console)

```shell
ec2-run-instances ami-8a8d7fe3 --instance-type m1.small -z us-east-1d \
  -d '{ "run_list": ["role[ldapclient]", "role[base]" ] }' \
  --disable-api-termination -g rise
ec2-create-tags -t Name=testing-rise.mifos.org -t Service=Mifos INSTANCE_ID
```
- on boot, the node will add itself to the chef server (see rc.local, imaging/create_image.py, cloud source code)
- make sure you can log in via SSH. If not, fetch the console output (this is something you may have to do from time to time):

```shell
ec2-get-console-output INSTANCE_ID
```
- attach the EC2 volume:

```shell
ec2-attach-volume -i INSTANCE_ID -d /dev/sdc1 vol-ee7e2386
```
- edit the node's run list:

```shell
knife node edit INSTANCE_ID.mifos.org
```

add "role[mifos_rise_test]" to the run list section, or do it through http://manage.opscode.com
- log into the box and run

```shell
sudo chef-client
```

to see the change immediately, or wait 30 minutes or so
- set up the Mifos and Pentaho databases by feeding the init SQL to the mysql client:

```shell
mysql -u mifos -pmifos < /etc/pentaho/system/mifos_pentaho_init.sql
```
- change mifos password via AWS Web UI (Modify RDS instance, put a password generated with, for example, apg, in the "Master User Password" field)
- add backup jobs to BackupPC
- when adding new backup host, use the NEWHOST=COPYHOST syntax mentioned on the "edit hosts" page
- edit ~backuppc/.ssh/config, disabling strict host key checking
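A possible ~backuppc/.ssh/config entry (the host pattern is an example; scope it as narrowly as practical):

```
# Disable strict host key checking so BackupPC's non-interactive ssh
# doesn't stall on the unknown host key of a freshly provisioned box.
Host *.mifos.org
    StrictHostKeyChecking no
```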
- add monitoring of box to OpenNMS
Monitoring systems
- Monitoring server: https://cloudboss.mifos.org/opennms/ (also see System Monitoring)
- overview of what's running where and current status; also see "knife status --run-list"
- outage notifications are sent to the mifos-adm google group
Disaster recovery
Database
TODO
Front-end
Application server (Tomcat/Jetty). What clients hit.
In the event of an AZ being unavailable or a hardware failure:
- Identify which situation,
Statefiles
Statefiles are lists of specific versions of packages to be included in images. They are kept in the statefiles/ dir in the "cloud" git repo.
- updated from ci periodically (right now * */3 * * *)
- committed/tagged/pushed to the "cloud" git repo at sf.net when there is a change (monitor the commit logs to be notified of changes)
- tag has build number and job name
Image maintenance
Upgrades for security/features
- Statefiles with lists of latest packages are created periodically (see above).
- An administrator must keep track of security releases in upstream Ubuntu packages.
TODO: how to move a customer to a newer image, what adhoc/manual tests to perform after bouncing their servers, how/when to notify customers of the change(s)
Image production
Image production ci jobs are manually kicked off since there is a cost associated with storing Amazon EC2 images. Fire off the cloud-mifos-image job on the ci server to create a new image.
LDAP
TODO
- How to add or remove a sysadmin from the LDAP server
- (or just add to LDAP and point there?)
MySQL/RDS maintenance
Growing a database
TODO
Changing master user password
Do this from the AWS console. Make sure you check the box next to "Apply Immediately", or you may have to wait some amount of time (maybe a few minutes) before your changes are applied.
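The same change can likely be made from the RDS command line tools; a sketch (instance name is a placeholder, flag names per the RDS CLI reference, and the apg invocation is just one way to generate a password):

```shell
# Generate a random password with apg and apply it immediately.
rds-modify-db-instance rise \
  --master-user-password "$(apg -a 1 -m 16 -n 1)" \
  --apply-immediately
```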
TODO
- deploy/migration procedure (from old EC2 setup to new cloud images)
- document
- practice
- migrate RiSE
- manually copy over BIRT reports
- delete test nodes from chef server: i-80b55cef.mifos.org, i-91746cfd.mifos.org
- new disaster recovery procedure
- document
- practice
- document new system
- architecture
- persistent data
- configuration
- separate setup/initial instructions from specific maintenance procedures
- don't need step: "clone chef repo"
- commit cookbooks into the cloud repo
- DONE 2011-03-14 by Adam
- script to wrap ec2-run-instances
- keep trying to attach volume until succeeds immediately after running "ec2-run-instances"
- nevermind... changed procedure so mifos_X role isn't included
- be able to generate both 2.0 and 2.1 images etc.
- aka branches: parallel development so release maintenance and new development can occur (currently we have just one Mifos 2.0 image, and cloud/master can't handle branched development; the statefile probably needs to be maintained outside of the "cloud" repo)
- [groovy?] script to reimage a box (for instance, for a security upgrade)
- do a couple of test restores, document them
- we can't currently pin recipe/role versions in a role run_list
- chef client crashes!
Trash
- Clone chef repo

```shell
$ cd ~
$ git clone http://github.com/opscode/chef-repo.git
$ mkdir -p ~/chef-repo/.chef
```