Application Environments


An application environment is managed and provisioned using OpsWorks, a configuration management and automation platform from AWS that uses Chef.

We've written the workarea CLI to make it easy for developers to maintain, develop, and deploy their Workarea applications to these environments.

How environments are provisioned with Chef

Each environment has an OpsWorks Stack and at least 1 OpsWorks Layer (app).

Stacks and Layers have JSON attributes, and a set of Chef Cookbooks/Recipes that are run at different steps of an instance's lifecycle: setup, configure, deploy, undeploy, shutdown.
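In the OpsWorks API, custom cookbooks are attached to these lifecycle events through a CustomRecipes mapping on the Layer. A hypothetical example is below; the cookbook name myclient and its recipe names are invented for illustration, not the actual recipes used:

```json
{
  "CustomRecipes": {
    "Setup": ["myclient::setup"],
    "Configure": ["myclient::configure"],
    "Deploy": ["myclient::deploy"],
    "Undeploy": ["myclient::undeploy"],
    "Shutdown": ["myclient::shutdown"]
  }
}
```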

Chef Cookbooks and Recipes are versioned with an environment, and use Stack/Layer attributes to determine which actions and functions need to be performed.

The Environment Config File section below has more information on how attributes are modified and overridden by the environment config file.

All cookbooks and recipes are written with the expectation that a client or developer may have a business case that requires modifying or extending the cookbooks used in an environment.

Resources inside an Environment

Environment Config File

An environment config file is used to specify Stack/Layer attributes on a per client/environment basis.

The WebLinc Hosting team writes and maintains a base attribute set that is shared across all Workarea projects.

When updating an environment, the pertinent parts of the config file are deep merged on top of the base attribute set for that environment's version.

This means that on an as-needed basis, the Hosting team or a Developer of a project can override or modify the attributes used in an environment.

While Stack/Layer attributes in OpsWorks are formatted in JSON, we store and write the config in YAML and then convert to JSON prior to saving to OpsWorks.
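The merge-and-convert step can be sketched in Ruby. This is a simplified illustration, not the actual provisioning code, and the attribute names in both documents are hypothetical:

```ruby
require 'yaml'
require 'json'

# Recursively deep-merge override attributes on top of base attributes:
# nested hashes merge key by key; any other value is replaced outright.
def deep_merge(base, override)
  base.merge(override) do |_key, old_val, new_val|
    if old_val.is_a?(Hash) && new_val.is_a?(Hash)
      deep_merge(old_val, new_val)
    else
      new_val
    end
  end
end

# Hypothetical base attribute set shared across projects
base = YAML.safe_load(<<~YAML)
  app:
    ruby_version: '2.4'
    unicorn:
      worker_processes: 4
YAML

# Hypothetical per-environment config file overriding one attribute
override = YAML.safe_load(<<~YAML)
  app:
    unicorn:
      worker_processes: 8
YAML

merged = deep_merge(base, override)

# OpsWorks stores attributes as JSON, so convert before saving
puts JSON.pretty_generate(merged)
```

Note that only the keys present in the environment config file override the base set; everything else falls through from the base unchanged.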

For more information on feature flags and usage, check out the Environment Configuration page.

S3 Buckets

There are three types of S3 buckets you'll find for each environment.

Bucket names are deterministic. Given the following:

Client ID:         myclient
AWS Account ID:    01234567891011
Environment Name:  staging

You will have three S3 buckets:

Config Bucket:      myclient-staging-config-012345
Resources Bucket:   myclient-staging-resources-012345
Integration Bucket: myclient-staging-integration-012345
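Based on the example above, the scheme appears to join the client ID, environment name, bucket type, and the first six digits of the AWS account ID. A sketch of that derivation, not the provisioning code itself:

```ruby
# Derive the three per-environment bucket names following the pattern:
#   <client>-<environment>-<type>-<first six digits of account id>
def bucket_names(client_id, environment, account_id)
  suffix = account_id[0, 6]
  %w[config resources integration].map do |type|
    [type, "#{client_id}-#{environment}-#{type}-#{suffix}"]
  end.to_h
end

names = bucket_names('myclient', 'staging', '01234567891011')
names['config'] # => "myclient-staging-config-012345"
```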

Environment Config Bucket

Your Config bucket is used to store files related to the configuration of your environment.

Typically this contains a secrets.yml file with the secrets your app will use in this specific environment, as well as a /cookbooks/ directory containing the cookbook/recipe tarballs your environment has used.

All application servers have read/write access to this bucket.

Resources Bucket (Public)

Your Resources bucket is a publicly available bucket that is used by the application to store assets, product images, and other media.

Do not store any sensitive or private files in this bucket. See the Integration bucket below.

All application servers have read/write access to this bucket.

Integration Bucket (Private)

Your Integration bucket is a private bucket, readable and writable only from your application servers.

While developing your app, you may reach a point where a 3rd party integration needs a place to put or get files, whether for fulfillment, reviews, inventory, product imports, etc.

Upon request, the WebLinc Hosting team can create IAM users with programmatic access, giving 3rd parties the ability to use this bucket for any needs you may come across.
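A hypothetical IAM policy scoping such a user to the integration bucket might look like the following (the bucket name comes from the earlier example; the exact actions granted would depend on the integration's needs):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::myclient-staging-integration-012345",
        "arn:aws:s3:::myclient-staging-integration-012345/*"
      ]
    }
  ]
}
```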

Content Delivery Network (CDN)

All environments are provisioned with their own CDN. Application instances in your environment will have an environment variable available to them: WORKAREA_ASSET_HOST=https://cdn.myclient.cdn.

We also add config.action_controller.asset_host = ENV['WORKAREA_ASSET_HOST'] to the pertinent environment.rb file in your Rails project upon provisioning an environment.
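In context, that line sits inside the configure block of the relevant file under config/environments/. A sketch of the excerpt; only the asset_host line itself is confirmed above:

```ruby
# config/environments/production.rb (excerpt)
Rails.application.configure do
  # Serve compiled assets from the environment's CDN
  config.action_controller.asset_host = ENV['WORKAREA_ASSET_HOST']
end
```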

Load Balancing App Servers

All application servers in an environment sit behind an Elastic Load Balancer (ELB).

ELBs in Production

Production environments have a Public ELB that sits in front of a Web Application Firewall (WAF). The WAF then forwards requests on to an Internal ELB that sits in front of a minimum of 2 application servers in different subnets.

ELBs in Staging/QA

These environments have a Public ELB that sits in front of 1 or 2 application servers, depending on the traffic the environment receives.

ELB Health Checks

Every ELB has a health check that hits http://yourapp/health_check on each instance every 30 seconds.

If the health check endpoint does not return an HTTP 200, the instance is considered to be OutOfService. This means the instance will not receive any traffic until it returns a 200 again.
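The routing decision can be sketched as a simple classification. This is a simplification: a real ELB also applies consecutive healthy/unhealthy thresholds before flipping an instance's state.

```ruby
# Classify an instance the way the ELB does from its health check
# response: only HTTP 200 counts as InService.
def health_state(status_code)
  status_code == 200 ? 'InService' : 'OutOfService'
end

health_state(200) # => "InService"
health_state(503) # => "OutOfService" (e.g. after a broken deploy)
```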

In some cases, you may deploy a code change only to realize a minute later that your application is returning an HTTP 503 Service Unavailable.

This is typically due to an error in code that causes the health check endpoint to return an HTTP 500. The Rails and NGINX logs are the best place to look when debugging an unhealthy instance.

The health check endpoint's proxy target is configurable; the Environment Configuration guide has details on how to do so.

Auto Scaling App Servers

Auto Scaling Groups (ASGs) create or destroy instances depending on the load average across their instances. Typically, Production is the only environment where you'll use an ASG.

Production environment ASGs have a minimum of 2 application servers. It's also worth noting that the WAF instances mentioned above also have their own ASG.
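A simplified sketch of a load-based scaling decision, respecting the two-instance minimum. The thresholds here are hypothetical examples, not the values actually configured per environment:

```ruby
# Decide a scaling action from the average load across an ASG's
# instances. The 0.75/0.25 thresholds are invented for illustration.
def scaling_action(avg_load, min_instances:, current:)
  if avg_load > 0.75
    :scale_up
  elsif avg_load < 0.25 && current > min_instances
    :scale_down
  else
    :hold
  end
end

scaling_action(0.9, min_instances: 2, current: 2) # => :scale_up
scaling_action(0.1, min_instances: 2, current: 2) # => :hold (never below the minimum)
```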