Twelve-Factor App Methodology on the Public Cloud

Reading Time: 7 minutes

Twelve-Factor App Methodology

The popular twelve-factor app is a methodology for building software-as-a-service apps. Let us first quickly describe what the twelve-factor methodology is, and then look at how it can be implemented on the public cloud providers AWS and Google Cloud.

Reference: The Twelve-Factor App (12factor.net)

I. Codebase: One codebase tracked in revision control, many deploys
II. Dependencies: Explicitly declare and isolate dependencies
III. Config: Store config in the environment
IV. Backing Services: Treat backing services as attached resources
V. Build, release, run: Strictly separate build and run stages
VI. Processes: Execute the app as one or more stateless processes
VII. Port Binding: Export services via port binding
VIII. Concurrency: Scale out via the process model
IX. Disposability: Maximize robustness with fast startup and graceful shutdown
X. Dev/prod parity: Keep development, staging, and production as similar as possible
XI. Logs: Treat logs as event streams
XII. Admin Processes: Run admin/management tasks as one-off processes

5 Key Features of the methodology

  • Use declarative formats for setup automation, to minimize time and cost for new developers joining the project;
  • Have a clean contract with the underlying operating system, offering maximum portability between execution environments;
  • Are suitable for deployment on modern cloud platforms, obviating the need for servers and systems administration;
  • Minimize divergence between development and production, enabling continuous deployment for maximum agility;
  • And can scale up without significant changes to tooling, architecture, or development practices.

These five key features are associated with CI/CD (continuous integration and continuous deployment), microservices, and containerization.

Reference: Twelve-factor app development on Google Cloud  |  Solutions

Reference: Applying the Twelve-Factor App Methodology to Serverless … (amazon.com)

Implementation considerations

I. Codebase
Track your code in a version control system, such as Git or Mercurial. All revisions are tracked in the code repository (often shortened to "code repo" or just "repo"). The repo provides a place from which to do continuous integration (CI) and continuous deployment (CD). The repo can be set up privately or hosted in house. Public cloud providers also offer version control services that you can use to privately store and manage assets (such as documents, source code, and binary files) in the cloud.

  • GCP uses Cloud Source Repositories to collaborate and manage your code in a fully-featured, scalable, private Git repository.
  • AWS uses the Git-based service CodeCommit to eliminate the need for you to manage your own source control system or worry about scaling its infrastructure. We will talk more about CI/CD in V. Build, release, run.

II. Dependencies
Explicitly declare dependencies in a CI/CD process and isolate the application with its dependencies by packaging them into a container. Many programming languages offer a way to explicitly declare dependencies:

  • Node.js: npm
  • Python: pip
  • Java: Maven
  • C#: NuGet
  • Ruby: Bundler
  • Go: go get

To store and distribute the resulting container images, you can use a managed container registry:

  • GCP Container Registry or
  • AWS Elastic Container Registry (ECR)

You can integrate the container registry with existing CI/CD to simplify your development to production workflow.
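As a small illustration of explicit declaration, here is a minimal Python sketch. It assumes the app's dependencies are pinned in a requirements.txt that is installed into the container image at build time; the package names below are hypothetical examples, not part of this article's stack. The app can then fail fast at startup if a declared dependency is missing, instead of failing later at import time.

```python
import importlib.util
import sys

# Hypothetical packages; in a real project this list mirrors requirements.txt,
# which is the single, explicit declaration of the app's dependencies.
DECLARED = ["flask", "redis"]

missing = [name for name in DECLARED if importlib.util.find_spec(name) is None]
if missing:
    # Fail fast: the container image was built without a declared dependency.
    sys.exit(f"Missing declared dependencies: {', '.join(missing)}")
```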

III. Config
Do not store config items as constants in the code. Config varies substantially across deploys (based on releases, environments, and so on), but code does not. So the best practice is to store config externally for each environment and keep a strict separation of config from code.

  • AWS: leverage Lambda environment variables to store secrets securely and adjust your function’s behavior without updating code.
  • Kubernetes: create Kubernetes ConfigMaps to bind environment variables, port numbers, configuration files, command-line arguments, and other configuration artifacts to your pods’ containers and system components at runtime.
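A minimal Python sketch of this pattern, assuming the deployment (Lambda environment variables, a Kubernetes ConfigMap, and so on) injects values into the process environment; the variable names are illustrative.

```python
import os

# Read config from the environment at startup; nothing is hard-coded per deploy.
DATABASE_URL = os.environ["DATABASE_URL"]          # required: fail fast if absent
LOG_LEVEL = os.environ.get("LOG_LEVEL", "INFO")    # optional, with a safe default
FEATURE_X_ENABLED = os.environ.get("FEATURE_X_ENABLED", "false").lower() == "true"

print(f"starting with log level {LOG_LEVEL}, feature X enabled: {FEATURE_X_ENABLED}")
```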

IV. Backing Services
A backing service is any service the app consumes over the network as part of its normal operation, such as databases, messaging/queueing systems, file systems, and caching systems. These services should be accessed as attached resources, with their locations and credentials externalized in the config as previously covered. The public cloud providers offer cloud-native, fully managed options for these backing services. For example:

  • AWS S3, EFS, Amazon Simple Queue Service (SQS)
  • GCP Cloud Storage, Pub/Sub
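For example, the app can treat a cache as an attached resource that is reachable only through a URL taken from the config, so swapping a local Redis container for a managed service is a config change, not a code change. The sketch below assumes the redis-py client package and an illustrative REDIS_URL variable.

```python
import os
import redis  # third-party client, declared as a dependency (Factor II)

# The backing service is located purely via config (Factor III); the code does
# not care whether the URL points at a local container or a managed cache.
cache = redis.Redis.from_url(os.environ.get("REDIS_URL", "redis://localhost:6379/0"))

cache.set("greeting", "hello")
print(cache.get("greeting"))
```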

V. Build, release, run

The build, release, and run stages need to be strictly separated, so you should have a CI/CD process for development and deployment. Deployment tools typically offer release management capabilities. Every release should have a unique release ID that results from combining an environment's config with a build, and the release management tools should be able to roll back releases and track the production deployment history.

  • GCP CI/CD Pipeline
  • AWS CI/CD Pipeline
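One way to think about the unique release ID is as a combination of the immutable build artifact and the environment's config, which is what makes rollbacks unambiguous. The Python sketch below only illustrates that idea; it is not a feature of any particular CI/CD service, and the values are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

def release_id(build_sha: str, config: dict) -> str:
    """Combine a build (immutable artifact) with a config snapshot into a release."""
    config_hash = hashlib.sha256(
        json.dumps(config, sort_keys=True).encode()
    ).hexdigest()[:8]
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    return f"{stamp}-{build_sha[:7]}-{config_hash}"

# The same build promoted to staging and production yields two distinct releases.
print(release_id("9f8c2a1d0e5", {"DATABASE_URL": "postgres://staging/db"}))
print(release_id("9f8c2a1d0e5", {"DATABASE_URL": "postgres://prod/db"}))
```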

VI. Processes
Twelve-factor processes are stateless and share nothing with each other. Any data that needs to persist must be stored in a stateful backing service, typically a database. If the application relies on “sticky” sessions on-prem, you need to change how that session data is handled in the cloud, for example by moving it into a managed in-memory store such as:

  • AWS ElastiCache (you can also leverage AWS Step Functions to coordinate the components of distributed applications and microservices using visual workflows, so processes execute in order and as expected) or
  • GCP Memorystore 
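As a sketch of removing sticky sessions, session state moves out of process memory and into a shared backing store, so any instance can serve any request. This assumes the redis-py client and an illustrative TTL; it is not tied to a specific web framework.

```python
import json
import os
import redis

# A shared store instead of an in-process dict keeps the web processes stateless.
store = redis.Redis.from_url(os.environ.get("REDIS_URL", "redis://localhost:6379/0"))
SESSION_TTL_SECONDS = 1800  # illustrative session lifetime

def save_session(session_id: str, data: dict) -> None:
    store.setex(f"session:{session_id}", SESSION_TTL_SECONDS, json.dumps(data))

def load_session(session_id: str) -> dict:
    raw = store.get(f"session:{session_id}")
    return json.loads(raw) if raw else {}
```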

VII. Port Binding
The twelve-factor app is completely self-contained and does not rely on runtime injection of a webserver into the execution environment to create a web-facing service. So you should not hard-code port numbers in your code.

  • AWS Elastic Kubernetes Service (EKS) or
  • GCP Kubernetes Engine (GKE)
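A minimal Python sketch of port binding using only the standard library: the app brings its own HTTP server and reads the port from the environment (PORT is an assumed variable name, following the convention many platforms use) instead of hard-coding it.

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello from a self-contained, port-bound service\n")

# The execution environment decides the port; the code only binds to it.
port = int(os.environ.get("PORT", "8080"))
HTTPServer(("0.0.0.0", port), Handler).serve_forever()
```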

VIII. Concurrency
You should adopt the microservices architectural approach to software development. Microservices architectures allow a large application to be decomposed into small independent services that communicate over well-defined APIs. It makes the applications easier to scale and faster to develop, enabling innovation and accelerating time-to-market for new features. The following cloud-native services offer auto-scaling:

  • AWS Lambda, AWS Auto Scaling or AWS EKS
  • GCP Cloud Functions, GCP autoscaling groups, or GCP GKE
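The same process model applies inside a node before the platform scales you horizontally: add more identical, stateless worker processes rather than growing one large process. A small sketch using only the Python standard library; handle_request is a hypothetical unit of work, and WEB_CONCURRENCY is a conventional (assumed) variable name for the worker count.

```python
import os
from multiprocessing import Pool

def handle_request(payload: int) -> int:
    # Stateless work: everything needed arrives in the payload.
    return payload * payload

if __name__ == "__main__":
    # Scale by adding identical processes; the same idea extends across machines
    # when an autoscaler adds more instances running this process type.
    workers = int(os.environ.get("WEB_CONCURRENCY", os.cpu_count() or 1))
    with Pool(processes=workers) as pool:
        print(pool.map(handle_request, range(10)))
```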

IX. Disposability
The twelve-factor app’s processes are disposable, meaning they can be started or stopped at a moment’s notice. If one of the instances of the application is causing errors, or is slow in responding to requests, or is not responding at all, it should be possible to gracefully shut the instance down.

In addition, the other applications in the system should not be affected by this change in the environment. You should be able to bring in new instances as they are needed and take down instances when required. This property is known as disposability, and it is a measure of the system’s robustness. Processes shut down gracefully when they receive a SIGTERM signal from the process manager.

  • StopTask in AWS ECS, terminating with grace in Kubernetes
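A Python sketch of graceful shutdown: the orchestrator (ECS StopTask, a Kubernetes pod deletion) sends SIGTERM, the process stops accepting new work, drains what is in flight, and exits before any follow-up SIGKILL. The in-flight work here is simulated with a sleep loop.

```python
import signal
import sys
import time

shutting_down = False

def handle_sigterm(signum, frame):
    # Stop taking new work; a real service would also close listeners,
    # finish in-flight requests, and release backing-service connections.
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGTERM, handle_sigterm)

while not shutting_down:
    time.sleep(1)  # placeholder for pulling and processing work

print("drained in-flight work, exiting cleanly")
sys.exit(0)
```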

X. Dev/prod parity
The twelve-factor app is designed for continuous deployment (CD) by keeping the gap between development and production small. You should follow I. Codebase and V. Build, release, run to manage CI/CD. If you have parity among these stages, most of the problems that could arise with the application will appear in the earlier stages, and few surprises will be in store for you in production. To provision and model your cloud resources consistently from development to production environments, use:

  • AWS CloudFormation or
  • GCP Deployment Manager 

XI. Logs

By treating each log message entered into a centralized logging system as an event, you get the sequence of actions performed on a request from the moment it enters the system right up to when it is completed or abandoned. A twelve-factor app never concerns itself with routing or storage of its output stream; it should not attempt to write to or manage log files, but instead write its event stream, unbuffered, to stdout. Public cloud providers support this factor by offering operations services that collect these streams and help you track the performance of an application.

  • AWS CloudWatch or
  • GCP Cloud Logging.
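In practice this means the app writes one event per line to stdout and lets the platform capture, route, and store the stream. A minimal Python sketch using structured JSON lines; the field names are illustrative.

```python
import json
import logging
import sys
import time

# Log to stdout only; no log files, no in-app routing or rotation.
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter("%(message)s"))
logging.basicConfig(level=logging.INFO, handlers=[handler])

def log_event(event: str, **fields) -> None:
    logging.info(json.dumps({"ts": time.time(), "event": event, **fields}))

log_event("request.received", path="/orders", method="GET")
log_event("request.completed", path="/orders", status=200, duration_ms=42)
```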

XII. Admin Processes
One-off admin processes should follow the same codebase, dependency isolation, and config as any other process in the same release. There are a number of one-off processes that you may need to run – batch programs, database migrations, scripts. Treat one-off processes the same way as long-running processes: keep their code in version control, follow standard deployment processes, and use the same environments. Services that can run or schedule these tasks include:

  • AWS CloudWatch Events, Kubernetes CronJobs (e.g., in AWS EKS), more involved jobs in AWS Batch, or AWS ECS scheduled tasks
  • Google Cloud Scheduler or Kubernetes CronJobs in GKE
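A one-off admin process can be a plain script in the same repo that reads the same config as the long-running processes and then exits; the scheduler only decides when it runs. In the Python sketch below, run_migrations is a hypothetical placeholder and DATABASE_URL is the same assumed config variable used earlier.

```python
# migrate.py -- run as a one-off process from the same codebase and release,
# e.g. `python migrate.py`, scheduled or triggered by the platform.
import os
import sys

DATABASE_URL = os.environ["DATABASE_URL"]  # same config source as the web process

def run_migrations(database_url: str) -> int:
    # Hypothetical placeholder: apply pending schema migrations and
    # return how many were applied.
    print(f"applying migrations against {database_url}")
    return 0

if __name__ == "__main__":
    applied = run_migrations(DATABASE_URL)
    print(f"applied {applied} migrations")
    sys.exit(0)
```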