Scaling WordPress on AWS: An Overview

Here’s a story you’ve probably heard before:

A team is using WordPress to power their web app, which works great for a while, but then they start to have performance problems. They audit their installed plugins and get rid of the ones they’re not using. They add “caching” plugins to help speed up WordPress and start to follow all the best practices, but the website is still slow. Eventually, they realize that the limitations are not with the software itself, but rather with the architecture of the infrastructure behind it. 

At this point, teams will likely look for a better hosting provider that promises better speed and enterprise-level support. This is a good idea, but in some cases it’s not feasible: perhaps the data must stay on dedicated hardware, the WordPress core has been modified, or the site is tightly integrated with another custom application. What follows is a short, high-level guide to achieving a highly available and scalable WordPress deployment on AWS.

The easiest component to separate out is the database. WordPress is already designed so that the database and the web server are independent. On AWS, you can either create a cluster on EC2 and set up replication yourself or use the RDS managed database service. RDS can be configured with Multi-AZ replication and automated point-in-time backups to minimize your recovery time and recovery point objectives (RTO/RPO) in the event of an outage. Additionally, enabling Multi-AZ allows the database to be scaled up with almost no disruption, since RDS fails over to the standby while the primary is resized.
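As a rough sketch of that setup (the instance identifier, class, storage size, and credentials below are all placeholder values), a Multi-AZ MySQL instance can be provisioned with the AWS CLI and its endpoint used as `DB_HOST` in wp-config.php:

```shell
# Create a Multi-AZ MySQL instance with automated backups retained for 7 days.
aws rds create-db-instance \
  --db-instance-identifier wordpress-db \
  --engine mysql \
  --db-instance-class db.m5.large \
  --allocated-storage 100 \
  --multi-az \
  --backup-retention-period 7 \
  --master-username wpadmin \
  --master-user-password 'CHANGE_ME'

# Once the instance is available, fetch the endpoint to use as DB_HOST
# in wp-config.php.
aws rds describe-db-instances \
  --db-instance-identifier wordpress-db \
  --query 'DBInstances[0].Endpoint.Address' \
  --output text
```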

A CDN is another good way to speed up serving static resources and reduce the load on your web servers. There are plugins that will automatically sync your uploads folder to S3, serve it through CloudFront, and rewrite the URLs served to users.
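If a plugin isn’t an option, the same sync can be done with the AWS CLI; the bucket name and distribution ID below are placeholders:

```shell
# Push the uploads directory to S3 (bucket name is a placeholder).
aws s3 sync wp-content/uploads/ s3://example-wp-media/uploads/ --delete

# After a bulk change, invalidate cached copies on CloudFront
# (distribution ID is a placeholder).
aws cloudfront create-invalidation \
  --distribution-id E1EXAMPLE \
  --paths '/uploads/*'
```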

These two steps alone will be enough for most large sites. However, for a system that is either extremely resource-heavy or very high-traffic, the next step is to scale out the web servers. This can be done in a few ways, but the first step is to separate static from dynamic files. The uploads and temp directories can be moved to a shared file system (such as EFS), and user sessions can be moved to the database, to a Redis or Memcached cluster (using ElastiCache), or to the shared file system.
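A minimal sketch of both moves, assuming the amazon-efs-utils mount helper and the phpredis extension are installed; the file-system ID, web root, and cache endpoint are placeholders:

```shell
# Mount the shared file system over the uploads directory and persist
# the mount across reboots.
sudo mount -t efs fs-12345678:/ /var/www/html/wp-content/uploads
echo 'fs-12345678:/ /var/www/html/wp-content/uploads efs _netdev,tls 0 0' \
  | sudo tee -a /etc/fstab

# Point PHP sessions at an ElastiCache Redis endpoint instead of local disk.
sudo tee /etc/php.d/99-sessions.ini <<'EOF'
session.save_handler = redis
session.save_path = "tcp://my-cache.example.use1.cache.amazonaws.com:6379"
EOF
```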

AMIs containing the WordPress core as well as any other static files (plugins, themes) can be built (either manually or with a tool like Packer) and deployed in an Auto Scaling group with the shared file system mounted where needed, and an ALB to balance the traffic. If terminating SSL at the load balancer, remember to set the WordPress configuration accordingly (see https://codex.wordpress.org/Administration_Over_SSL#Using_a_Reverse_Proxy).
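One hedged sketch of the rollout step, assuming a Packer HCL template and a launch-template-backed Auto Scaling group (the template file, variable, AMI ID, and resource names are all placeholders):

```shell
# Build a golden AMI containing the core, plugins, and themes.
packer build -var 'wp_version=6.5' wordpress-ami.pkr.hcl

# Roll the new image out: register a new launch template version with the
# AMI ID that Packer produced, then refresh the Auto Scaling group.
aws ec2 create-launch-template-version \
  --launch-template-name wordpress-lt \
  --source-version '$Latest' \
  --launch-template-data '{"ImageId":"ami-0123456789abcdef0"}'
aws autoscaling start-instance-refresh \
  --auto-scaling-group-name wordpress-asg
```

Per the Codex page linked above, the SSL fix amounts to setting `$_SERVER['HTTPS']` from the `X-Forwarded-Proto` header in wp-config.php, above the “stop editing” line; baking that into the image keeps every instance consistent.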

If you wish to use Docker, ECS with autoscaling at both the container and node level is generally a good option. This requires an ALB, which handles the dynamic port mapping automatically.
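Dynamic port mapping comes from registering the task with a host port of 0, so ECS picks a free host port and keeps the ALB target group in sync. A minimal sketch (family name and image tag are illustrative):

```shell
# hostPort 0 with bridge networking tells ECS to assign a random host port,
# which the ALB target group tracks automatically.
aws ecs register-task-definition \
  --family wordpress \
  --container-definitions '[{
    "name": "wordpress",
    "image": "wordpress:latest",
    "memory": 512,
    "portMappings": [{"containerPort": 80, "hostPort": 0}]
  }]'
```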

Scaling out the web servers does not come without challenges. Unless you put everything on the shared file system (not recommended), you lose the ability to update automatically from the admin dashboard. Instead, you have to perform the update manually and then recreate your AMIs or containers (the manual process is detailed at https://codex.wordpress.org/Upgrading_WordPress_Extended).
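One way to script that flow, assuming WP-CLI is installed on a dedicated build instance (not a live node); the instance ID and image name are placeholders:

```shell
# On the build instance: update core, plugins, and the database schema.
wp core update
wp plugin update --all
wp core update-db

# Bake the updated instance into a fresh AMI for the Auto Scaling group.
aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "wordpress-$(date +%Y%m%d)"
```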

Hopefully, this overview can get you to the point of being able to support a production site on WordPress. If you’d like to talk about your AWS environment and setup, drop us a line or click here to chat with us.
