Building Performant, Scalable, Fully Automated and Cost Efficient Web Infrastructure for 2021 (Part 3)

Scott Ashmore
7 min read · Jan 19, 2021

Setting up Docker & Bitbucket Pipelines

Introduction

Welcome to part 3 of the series! In this instalment we’ll be setting up a Cloud SQL proxy for local database development, configuring our app to run in a Docker environment and setting up Bitbucket pipelines.

Let’s get started!

Cloud SQL Proxy

Now that all our infrastructure is provisioned, we can make a small efficiency improvement around how we handle local database development. Google allows us to test Cloud SQL databases locally using their Cloud SQL Proxy. It’s really easy to set up, so let’s dive in.

Create a new file at the root of your project named google-credentials.json and paste in the local database JSON that was output by Terraform when you provisioned your infrastructure (you can always go back to your Terraform workspace and inspect the last successful plan for outputs). Make sure you add this file to your .gitignore: it contains sensitive credentials and should never be checked into git.

Now create a .env file also at the root of the project and add a variable DB_PASSWORD that is equal to the local user password output by our Terraform plan from step 2.
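So your .env will contain a single line along these lines (with the value taken straight from the Terraform output):

```
DB_PASSWORD=<local-user-password-from-terraform-output>
```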

We also need to update our Strapi development database config to work with our Google Cloud SQL proxy. In your editor, navigate to cms/config/environments/development/database.json and update it to match these settings:
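Here’s a rough sketch of what that config can look like, assuming the bookshelf connector with the MySQL client. The host points at the cloud-sql-proxy service we’ll add to docker-compose in a moment, and the database name and username are placeholders you should swap for your own Terraform outputs:

```json
{
  "defaultConnection": "default",
  "connections": {
    "default": {
      "connector": "bookshelf",
      "settings": {
        "client": "mysql",
        "host": "cloud-sql-proxy",
        "port": 3306,
        "database": "strapi",
        "username": "strapi",
        "password": "${process.env.DB_PASSWORD}"
      },
      "options": {}
    }
  }
}
```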

Finally, open up your docker-compose.yml file and update it like so:
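Something along these lines should work; the proxy image version is an assumption, and any services already in your compose file stay as they are:

```yaml
version: "3"

services:
  # Runs the Cloud SQL Proxy locally and exposes it to the other services on port 3306
  cloud-sql-proxy:
    image: gcr.io/cloudsql-docker/gce-proxy:1.19.1
    command: >
      /cloud_sql_proxy
      -instances=<YOUR_CLOUD_SQL_CONNECTION_NAME>=tcp:0.0.0.0:3306
      -credential_file=/config/google-credentials.json
    volumes:
      - ./google-credentials.json:/config/google-credentials.json:ro
    ports:
      - "3306:3306"
```

The cms service should also load the .env file (so DB_PASSWORD is available to Strapi) and declare depends_on: cloud-sql-proxy so the proxy starts first.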

Make sure to replace the -instances value with your own Cloud SQL connection name. Now if you run the app locally, you should connect to Cloud SQL through the proxy we just set up. This will make syncing databases a lot easier when deploying to production.

Docker

In order for our app to run on Google Cloud Run, we need to containerise it. Open your editor, create a new Dockerfile in the cms directory of our repo and copy the following code into it:
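Here’s a minimal sketch of such a Dockerfile; the Node version and working directory are assumptions:

```dockerfile
# Lean Alpine-based Node.js image
FROM node:14-alpine

# Set the working directory and run in production mode
WORKDIR /usr/src/app
ENV NODE_ENV=production

# Install dependencies first so this layer is cached between builds
COPY package.json package-lock.json ./
RUN npm ci --production

# Copy the rest of the Strapi source into the image
COPY . .

# Build the admin panel and start the server
RUN npm run build
EXPOSE 1337
CMD ["npm", "start"]
```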

First we pull in the image we want. I’m using the Alpine distribution of Node.js since it’s much leaner than the base image and includes everything we need to run our application. Next we set the working directory and NODE_ENV, copy in our package.json and package-lock.json, and install our dependencies. Then we copy the rest of the source into the image, build the application and start it. You should also create a .dockerignore file and make sure to ignore the node_modules directory.

Now we need to update our Strapi database config to work with our Google Cloud SQL instance. In your editor, navigate to cms/config/environments/production/database.json and update it to match these settings:
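Again a rough sketch; the environment variable names are assumptions, so use whatever you inject into the Cloud Run service for the instance connection name, database name and credentials:

```json
{
  "defaultConnection": "default",
  "connections": {
    "default": {
      "connector": "bookshelf",
      "settings": {
        "client": "mysql",
        "socketPath": "/cloudsql/${process.env.INSTANCE_CONNECTION_NAME}",
        "database": "${process.env.DB_NAME}",
        "username": "${process.env.DB_USERNAME}",
        "password": "${process.env.DB_PASSWORD}"
      },
      "options": {}
    }
  }
}
```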

All we need to connect to Cloud SQL is the socket path, database name, username and password so we can remove host and port from our config. Let’s move on to our Next.js app.

Move to the app directory and create another Dockerfile. The config is almost identical to the cms one except for a couple of additions:
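A sketch of the app Dockerfile, with the same assumptions as before:

```dockerfile
FROM node:14-alpine

WORKDIR /usr/src/app
ENV NODE_ENV=production

# CMS_GRAPHQL_URL is passed in at build time and re-exposed as an
# environment variable so Next.js can reach Strapi during the build
ARG CMS_GRAPHQL_URL
ENV CMS_GRAPHQL_URL=$CMS_GRAPHQL_URL

COPY package.json package-lock.json ./
RUN npm ci

COPY . .

# Generate the production build and start the Next.js server on port 3000
RUN npm run build
EXPOSE 3000
CMD ["npm", "start"]
```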

Since Next.js generates a static version of our app, it will need access to the cms at build time. To help with this we pass an argument to our Dockerfile. Docker doesn’t actually expose these arguments to our app, so we need to create an environment variable from the argument. Now Next.js will have access to Strapi via the CMS_GRAPHQL_URL variable, which we added to next.config.js in step 1 of this series. As with the cms, you should also create a .dockerignore file and make sure to ignore the node_modules and .next directories.

Pipelines

Now that both our services are ready to run in a docker environment, we can start setting up our pipelines. At the root of the project, create a file named bitbucket-pipelines.yml. Here’s how it should look:
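Here’s a sketch of that file; the branch name and deployment environment are assumptions, so adapt them to your repo:

```yaml
image: google/cloud-sdk:latest

pipelines:
  branches:
    master:
      - step:
          name: Build & Deploy
          deployment: production
          services:
            - docker
          caches:
            - docker
          script:
            - ./scripts/pipelines-prepare.sh
            - ./scripts/pipelines-deploy-cms.sh
            - ./scripts/pipelines-deploy-app.sh
            - ./scripts/pipelines-sync-db.sh

definitions:
  services:
    docker:
      memory: 2048
```

Make sure the scripts are committed with the executable bit set, or prefix each entry with bash.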

Let’s go through the step field:
- First we give our step a name.
- Next we set the deployment target.
- Since we’re deploying to GCP, we need access to gcloud CLI commands. We can do this by pulling in the Cloud SDK image using the image field.
- We also need to enable docker as a service so we can access the Docker daemon from our pipelines.
- Next we add docker to the caches field. Docker can be quite heavy so we want to cache all the layers generated by our build. This will speed up subsequent deployments for us.
- Finally we point our script field to four scripts which we will be creating in a moment.

In the definitions section we set the memory for the Docker service to 2048 MB. Docker tasks can be intensive, so we need to give our pipelines a little more memory to work with.

There are a few things we need to do in our pipeline. Instead of writing it all in the script field, it’s better to break the tasks down into separate files. Create a directory named scripts. In that directory, create a file named pipelines-prepare.sh and populate it with the following code:
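A sketch of that script; GCLOUD_SERVICE_ACCOUNT_KEY comes from your repository variables, while GCLOUD_PROJECT_ID is an assumed variable holding your project id:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Recreate the service account key file from the repository variable
echo "$GCLOUD_SERVICE_ACCOUNT_KEY" > google-credentials.json

# Authenticate gcloud as the service account and set the default project
gcloud auth activate-service-account --key-file=google-credentials.json
gcloud config set project "$GCLOUD_PROJECT_ID"

# Allow docker to push/pull images to/from Google Container Registry
gcloud auth configure-docker --quiet
```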

First we create a JSON file from the content of our GCLOUD_SERVICE_ACCOUNT_KEY environment variable. Then we activate the service account using that file. Next we set our project. This is purely a convenience so we don’t have to set it every time we run a gcloud command. Then we configure Docker so we can pull/push images from/to GCR (Google Container Registry). With the environment prepared, the pipeline moves on to the deploy scripts for each service. Let’s create those scripts now.

In the same directory, create a file named pipelines-deploy-cms.sh and add the following code:
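Here’s a sketch of what this can look like; the GCLOUD_PROJECT_ID variable, the Cloud Run region and the service name are assumptions:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Full path of the cms image on Google Container Registry
image_path="gcr.io/$GCLOUD_PROJECT_ID/cms"

# Look for an existing image with the cms tag (fall back to an empty array if the repo doesn't exist yet)
existing_tags=$(gcloud container images list-tags "$image_path" --filter="tags:cms" --format=json 2>/dev/null || echo "[]")

if [[ "$existing_tags" == "[]" ]]; then
  # First deployment: build a fresh image
  docker build -t "$image_path:cms" ./cms
else
  # Reuse the previous image as a layer cache to speed up the build
  docker pull "$image_path:cms"
  docker build --cache-from "$image_path:cms" -t "$image_path:cms" ./cms
fi

# Push the image to GCR and roll it out on Cloud Run
docker push "$image_path:cms"

# In practice you'll also want --add-cloudsql-instances and --set-env-vars here
# so the service can reach the production database
gcloud run deploy cms \
  --image "$image_path:cms" \
  --platform managed \
  --region europe-west1 \
  --allow-unauthenticated
```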

Let’s go over the script step by step:
- Create a variable that contains the path of our image on GCR.
- Query GCR for any existing images with the cms tag.
- Check whether existing_tags is an empty array.
- If it is, build a fresh image with the tag cms.
- If it isn’t, pull the older image from GCR and build a new image using the older one as a cache.
- Push the image to GCR.
- Deploy the image to Cloud Run.

Now let’s create pipelines-deploy-app.sh. This one is almost identical bar a couple of things:
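A sketch with the same assumptions as the cms script, plus a CMS_GRAPHQL_URL repository variable:

```bash
#!/usr/bin/env bash
set -euo pipefail

image_path="gcr.io/$GCLOUD_PROJECT_ID/app"

existing_tags=$(gcloud container images list-tags "$image_path" --filter="tags:app" --format=json 2>/dev/null || echo "[]")

if [[ "$existing_tags" == "[]" ]]; then
  # Pass the cms URL into the build so Next.js can query Strapi while generating pages
  docker build --build-arg CMS_GRAPHQL_URL="$CMS_GRAPHQL_URL" -t "$image_path:app" ./app
else
  docker pull "$image_path:app"
  docker build --cache-from "$image_path:app" --build-arg CMS_GRAPHQL_URL="$CMS_GRAPHQL_URL" -t "$image_path:app" ./app
fi

docker push "$image_path:app"

# Next.js listens on port 3000, so point Cloud Run's traffic at that port
gcloud run deploy app \
  --image "$image_path:app" \
  --platform managed \
  --region europe-west1 \
  --port 3000 \
  --set-env-vars CMS_GRAPHQL_URL="$CMS_GRAPHQL_URL" \
  --allow-unauthenticated
```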

The first difference is the --build-arg flag passed to docker build. Remember how we created the ARG and ENV variables for CMS_GRAPHQL_URL in our Dockerfile? This is how we pass that argument to the Docker build step. The other two minor changes are the env vars passed to the deploy command and setting the port to 3000.

Finally, create a file named pipelines-sync-db.sh in the same directory and populate it with the following code:
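A sketch of the sync script; the host, database and credential variables are all assumptions, so wire them up however your pipeline reaches the two databases (via the Cloud SQL Proxy or authorised IPs):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Install the mysql client inside the pipeline image
apt-get update && apt-get install -y default-mysql-client

# Count how many tables the production database currently has
count_query="SELECT COUNT(*) FROM information_schema.tables WHERE table_schema = '$DB_NAME';"
table_count=$(mysql -h "$PROD_DB_HOST" -u "$DB_USERNAME" -p"$DB_PASSWORD" -N -s -e "$count_query")

if [[ "$table_count" == "0" ]]; then
  # Production is empty: copy the schema and data across as-is
  mysqldump -h "$DEV_DB_HOST" -u "$DB_USERNAME" -p"$DB_PASSWORD" "$DB_NAME" > dump.sql
else
  # Production already has tables: sync data only, replacing existing rows
  mysqldump -h "$DEV_DB_HOST" -u "$DB_USERNAME" -p"$DB_PASSWORD" --no-create-info --replace "$DB_NAME" > dump.sql
fi

# Merge the development dump into the production database
mysql -h "$PROD_DB_HOST" -u "$DB_USERNAME" -p"$DB_PASSWORD" "$DB_NAME" < dump.sql
```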

This file will be responsible for syncing our development database with our production database during deployments. Let’s have a look at the important steps:
- Install the mysql client.
- Create a query that checks how many tables are in a given database.
- Create a variable from the result of that query.
- Check if there are no tables in the given database.
- If there are none, dump the database with no options.
- Otherwise, dump the database with the --no-create-info and --replace options.
- Merge our local database dump into our production database.

We’re all set now! Go ahead and commit your changes, then open up the Pipelines page in Bitbucket. You should see your pipeline running:

Click into it to see the logs and when finished, hopefully you’ll have a successful build:

As you can see above, it took me 18 builds to get it right, so don’t be surprised if you run into some issues. It wouldn’t be fun if everything worked perfectly on the first run 😅 Now open each Cloud Run service in your browser. You should see your services running instead of the default Cloud Run service.

Conclusion

Congratulations! 👏 We’ve successfully set up Strapi and Next.js, provisioned all our Google Cloud infrastructure using Terraform and automated our deployments using Bitbucket Pipelines.

All the steps we’ve taken form a really strong foundation for shipping solid production applications, but there are still a lot of improvements and optimisations we can make! Here are a couple of improvements you should think about to solidify this approach to web infrastructure even further:

  • We’re only set up for a single deployment environment. In reality you will be deploying to multiple environments (qa, staging, production, etc.). How can we automate that, from provisioning infrastructure through to our pipelines?
  • Currently we build the cms and app on every deploy. This is fine for a small app like ours, but when things begin to increase in size, we’ll want to run some checks to see if we actually need to build the cms/app. Pipelines cost money and if you have many projects running, these costs can add up, so only perform the tasks you really need to. Also, nobody likes 20 minute builds if they can be avoided 😅

I hope you’ve enjoyed this series and hopefully learned a thing or two in the process. Please leave a comment if you have any questions or maybe some thoughts on improvements that I could make. Thanks for reading!

You can find the full source code below:
https://bitbucket.org/scottashmore/next-strapi-cloud-run

