For one project we built a nice Amazon OpsWorks stack: an Elastic Load Balancer (ELB), two web servers and a DB server. An elegant solution designed for easy code deployments and OS updates. Some time later I discovered how easy it is to break it.
Amazon Elastic Load Balancer monitors the attached instances by periodically requesting content from their IP addresses. As long as it receives a response, all is fine and dandy; the problems start when it does not.
Whenever an instance keeps returning errors for an extended time, the ELB marks it as unhealthy. If every attached instance is unhealthy, the load balancer responds with 503 Service Unavailable: Back-end server is at capacity.
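The marking logic can be sketched roughly like this. This is a minimal illustration, not the actual ELB implementation; the threshold of three failed checks and all names are assumptions:

```python
# Sketch of a load balancer marking back-ends unhealthy after
# repeated failed health checks (threshold is a made-up value).

UNHEALTHY_THRESHOLD = 3  # consecutive failures before marking unhealthy

class Instance:
    def __init__(self, name):
        self.name = name
        self.failures = 0
        self.healthy = True

    def record_check(self, ok):
        # One successful response resets the failure counter.
        if ok:
            self.failures = 0
            self.healthy = True
        else:
            self.failures += 1
            if self.failures >= UNHEALTHY_THRESHOLD:
                self.healthy = False

def route(instances):
    # With no healthy back-ends left, the balancer can only answer 503.
    healthy = [i for i in instances if i.healthy]
    if not healthy:
        return "503 Service Unavailable: Back-end server is at capacity"
    return f"200 OK (served by {healthy[0].name})"

web1, web2 = Instance("web1"), Instance("web2")
for _ in range(3):          # both servers keep failing the health check
    web1.record_check(False)
    web2.record_check(False)
print(route([web1, web2]))  # → 503 Service Unavailable: Back-end server is at capacity
```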
If you open an instance's IP directly, Apache serves the first app in alphabetical order. So if you have incredible_website and remarkable_website, incredible_website will display.
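That is standard Apache behavior: a request that matches no ServerName falls through to the first VirtualHost loaded, and vhost files are typically read in alphabetical order. A hedged sketch, with made-up hostnames and paths:

```apache
# incredible_website.conf -- loaded first alphabetically, so a request
# to the bare instance IP (no matching Host header) lands here.
<VirtualHost *:80>
    ServerName incredible_website.example.com
    DocumentRoot /srv/www/incredible_website/public
</VirtualHost>

# remarkable_website.conf -- served only when the Host header matches.
<VirtualHost *:80>
    ServerName remarkable_website.example.com
    DocumentRoot /srv/www/remarkable_website/public
</VirtualHost>
```

The ELB health check hits the bare IP, so whatever app sorts first is the one being judged.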
Let’s imagine you want to add amazing_website, a simple WordPress blog. You deploy the code and start creating the dedicated database.
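For a WordPress blog, "creating the dedicated database" might look like the following; all names and the password are placeholders, not values from the original setup:

```sql
-- Hypothetical database and user for amazing_website.
-- Until this exists, WordPress answers every request with an error page.
CREATE DATABASE amazing_website;
CREATE USER 'amazing_user'@'%' IDENTIFIED BY 'change-me';
GRANT ALL PRIVILEGES ON amazing_website.* TO 'amazing_user'@'%';
FLUSH PRIVILEGES;
```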
And then incredible_website and remarkable_website stop working and throw a 503 error. If there are any other websites on the servers, they fail as well.
The reason is simple: amazing_website is now alphabetically the first app in OpsWorks. The load balancer pings the instances and gets WordPress's Error establishing database connection page, because the DB is not ready yet. After a few more failed checks it considers each instance broken.
If you deployed the code to all servers in the layer, the ELB will mark every one of them as unhealthy and start returning 503 Service Unavailable: Back-end server is at capacity.
You could always configure the database first, or avoid deploying the code to all instances in the layer at once.
As there is no config switch for the default vhost, the best way is to create an application called 0_default_vhost. It only needs to display your company logo and say hi; its main purpose is to provide some content for the ELB, and the leading 0 keeps it first alphabetically no matter what else you deploy.
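The app itself can be trivial; a single static page returning 200 is enough. A minimal sketch (file name and contents are just an example):

```html
<!-- public/index.html of the hypothetical 0_default_vhost app.
     Its only job is to answer the ELB health check with a 200. -->
<!DOCTYPE html>
<html>
  <head><title>Hi</title></head>
  <body>
    <img src="logo.png" alt="Company logo">
    <p>Hi! This server is up.</p>
  </body>
</html>
```

Because this app never depends on a database, the health check keeps passing while you deploy and wire up the real applications behind it.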