Building Performant, Scalable, Fully Automated, and Cost-Efficient Web Infrastructure for 2021 (Part 2)

Setting up Google Cloud & Terraform

Scott Ashmore
11 min read · Jan 15, 2021

Introduction

Welcome to part 2 of the series. In this installment, we’ll be provisioning our Google Cloud infrastructure using Terraform.

As in part 1, I won’t be going into too much depth explaining the technology used. There are plenty of articles and tutorials out there that cover everything used in this series in great depth, and I will provide relevant links where appropriate.

Let’s get started.

Google Cloud Platform

Firstly, we need to set up a new project on GCP that will handle creating all our resources when requested by Terraform. You will also need a billing account, as some of the resources we will be provisioning come with costs attached.

Go ahead and create a new project from within the GCP console. I named mine Terraform Admin. Once created, take note of the project ID as it will be different from the name. Now we need to create a service account with some specific roles that will give Terraform permission to create all the necessary resources.

From the left menu, go to IAM & Admin > Service Accounts. From the service accounts page click + CREATE SERVICE ACCOUNT at the top of the page.

Enter a name (I named mine terraform) and click Done.

Now we need to create a key for the service account. From the service accounts page, click the service account you just created:

It will be the name you just used followed by the project ID. After you click that, you should see an option to create a new key at the bottom of the page:

Go ahead and create a new key using the JSON key type:

Once created, the key will automatically be downloaded by your browser. Save it somewhere safe. I use a service called 1Password, but there are plenty of options available for storing secrets.

Next, we need to set some permissions for our service account. From the IAM home page, click ADD at the top of the page. Enter the email address of the service account you just created, give it the Storage Object Admin role, and click Save.

Finally, we need to create a bucket that will store our Terraform state. Give it a unique but recognizable name, like project-id-terraform-admin. You can see the details of how I created mine below:

That’s the initial GCP setup complete, now we can move on to the next step.

Bitbucket

Firstly, we need to enable pipelines. Navigate to Repository Settings > Pipelines Settings from the side menu and enable pipelines as below:

We also need to create an app password that will be used by Terraform for adding all our environment variables to our pipelines. From the Bitbucket web UI, in the side menu, go to your Personal Settings > App Passwords and create a new password. Here’s how I set mine, but you might not need to enable this many permissions for this particular use case:

Once that’s created, it will show you the generated password in a little modal. Save it somewhere safe, like the Google service account key we downloaded earlier (not in the repo or anywhere publicly accessible); we’ll need it later.

Terraform Cloud

Now we are ready to start configuring Terraform so it can provision all the infrastructure we need. If you don’t already have an account, you can create one here. It’s totally free for teams of fewer than 5 engineers, which is pretty great!

When your account has been created, you’ll need to set up Bitbucket as a VCS provider. Terraform has detailed instructions on just that here. Once you’ve enabled Bitbucket go ahead and create a new workspace from your account home page. In the first step choose the Version control workflow option:

In the second step, the Bitbucket Cloud option should appear:

When it does, click it to move to step 3:

If you have a lot of repositories, the one we created might not appear and you’ll have to type the ID into the box, as you can see above. In step 4 we’ll configure some basic settings. You can copy what I have done below.

Click create workspace and after a minute or so you should see the final success confirmation below:

Now we need to add a couple of variables for our plans to execute successfully. Click the Configure Variables button and create a new variable under the Terraform Variables section named bitbucket_password; set the value to the Bitbucket app password we created earlier and check the sensitive checkbox. Then, under Environment Variables, create a variable named GOOGLE_CREDENTIALS and set it to the value of the service account key we created earlier. You’ll need to minify the JSON, as Terraform Cloud environment variables can’t contain newlines. When you’re finished you should have something like this:

Terraform Configuration

Now it’s time to move to our editor and start creating our Terraform configuration files. In your editor, create a new directory named infrastructure at the root of the repo. Create a file named providers.tf in that directory and populate it with the following data:
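
A minimal version looks something like this, assuming Terraform 0.13+ syntax (the version constraints here are placeholders, so pin whatever versions suit you):

terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 3.50"
    }
    bitbucket = {
      source  = "terraform-providers/bitbucket"
      version = "~> 1.2"
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.0"
    }
  }
}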

In order to create resources, Terraform needs to pull in providers that describe how to manage resources on a given platform. We’ll be creating resources on Google Cloud Platform and Bitbucket, so those are the providers referenced in the file above. We’ll also be using the random provider, which lets us generate random values in our config files. You’ll see this in action further down.

Any secrets (passwords, private keys, etc.) we use will be visible in our state. Terraform does not encrypt state itself, so it’s safer to store it somewhere secure that offers encryption. Terraform allows us to do just this using a concept called backends. With that in mind, let’s create a file named backend.tf:
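
Something like the following, using the GCS backend (swap in your own bucket name; the prefix is just a sensible default):

# Store state in the GCS bucket created earlier. Replace the bucket
# name with the one you chose.
terraform {
  backend "gcs" {
    bucket = "project-id-terraform-admin"
    prefix = "terraform/state"
  }
}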

All this file does is tell Terraform we want to store our state in the Google Cloud Storage bucket we created earlier. Remember to update the bucket name to whatever you used previously. Now we need to create some variables that will make our lives a little easier when provisioning resources and also help us pull in some sensitive data. Let’s create a file called variables.tf:
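
A sketch along these lines, where the defaults and the Bitbucket IP entries are illustrative placeholders rather than canonical values:

variable "gcp_project_name" {
  type    = string
  default = "my-project" # use your own project name
}

variable "gcp_region" {
  type    = string
  default = "europe-west1"
}

# Services to enable: Cloud Run, Container Registry, Cloud SQL, Cloud Logging.
variable "google_project_services" {
  type = list(string)
  default = [
    "run.googleapis.com",
    "containerregistry.googleapis.com",
    "sqladmin.googleapis.com",
    "logging.googleapis.com",
  ]
}

# Bitbucket Pipelines egress IPs to authorize on the Cloud SQL instance.
# Illustrative entries only; check Bitbucket's docs for the current list.
variable "pipelines_ip_addresses" {
  type = map(string)
  default = {
    "pipelines-1" = "34.199.54.113/32"
    "pipelines-2" = "34.232.25.90/32"
  }
}

variable "bitbucket_username" {
  type = string
}

# Supplied by the sensitive workspace variable we created in Terraform Cloud.
variable "bitbucket_password" {
  type = string
}

variable "bitbucket_repository_slug" {
  type = string
}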

Let’s go through this variable by variable:

  1. gcp_project_name: The name of the project we will be creating on GCP.
  2. gcp_region: The region in which all our GCP resources will be created.
  3. google_project_services: We need to enable a few services in GCP before we can create resources within those services. In order of appearance:
    - Cloud Run (hosting and serving our applications)
    - Container Registry (storing our Docker images)
    - Cloud SQL (managing our SQL databases)
    - Cloud Logging API (logs 🙂)
  4. pipelines_ip_addresses: This is simply a map of IP addresses used by Bitbucket in their pipelines. We’ll add these to our Cloud SQL instance so we can access our databases during deployments.
  5. bitbucket_username: The username associated with your Bitbucket account.
  6. bitbucket_password: The app password we created earlier in this article.
  7. bitbucket_repository_slug: The slug of the Bitbucket repo we created in part 1 of this series.

Now we can get started configuring our infrastructure for GCP. Create a file named gcp.tf and populate it with the following data:
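
The file is long, so here’s an abridged sketch of the project, services, and Cloud SQL pieces; the billing account, org ID, and database version are placeholders, and the remaining resources are described in the list below:

provider "google" {
  # No project set here; the project itself is created below.
  region = var.gcp_region
}

# Generate a unique, recognizable ID for the new project.
resource "random_id" "id" {
  byte_length = 4
  prefix      = "${var.gcp_project_name}-"
}

resource "google_project" "project" {
  name            = var.gcp_project_name
  project_id      = random_id.id.hex
  billing_account = "XXXXXX-XXXXXX-XXXXXX" # replace with your billing account ID
  org_id          = "1234567890"           # optional; omit if you have no organization
}

# Enable every API listed in var.google_project_services.
resource "google_project_service" "services" {
  count   = length(var.google_project_services)
  project = google_project.project.project_id
  service = var.google_project_services[count.index]
}

resource "google_sql_database_instance" "cms_database_instance" {
  name             = "cms"
  project          = google_project.project.project_id
  region           = var.gcp_region
  database_version = "MYSQL_5_7"

  settings {
    tier = "db-f1-micro" # fine for development; size up for production

    ip_configuration {
      ipv4_enabled = true

      # Authorize Bitbucket Pipelines to reach the instance during deploys.
      dynamic "authorized_networks" {
        for_each = var.pipelines_ip_addresses
        content {
          name  = authorized_networks.key
          value = authorized_networks.value
        }
      }
    }
  }

  depends_on = [google_project_service.services]
}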

Let’s go over what we’ve created above in order of appearance:

  1. provider “google”:
    Provider configuration for Google. You can see the details for this here. We are omitting the project field since we are creating a new project via Terraform.
  2. random_id: id:
    Helps us generate a unique ID for our new GCP project. We also prefix the ID with our GCP project name to make it more recognizable.
  3. google_project: project:
    Configuration for generating our new GCP project. You will need a valid billing account ID here since we are spinning up some services that come with costs attached. I have also set the organization ID but this is not strictly necessary so feel free to omit that if your account is not associated with any organization.
  4. google_project_service: services:
    Responsible for enabling all the services we need within GCP in order for our resources to be created successfully. If you remember, in variables.tf we created a list of services. Here we are using Terraform’s count helper to loop over that list and create each resource within a single block instead of writing a resource block per service.
  5. google_sql_database_instance: cms_database_instance:
    Provisions a Cloud SQL instance. Note that we are using the db-f1-micro machine type. You may want to opt for something a little beefier in production, depending on the demands of your application. Since we are using Next.js’ static generation, the database is mostly only accessed at build time and when CMS updates are made, so db-f1-micro is perfectly capable of handling small to medium-sized apps.
  6. random_password: cms_sql_root_password:
    Generates a random password for our SQL database root user.
  7. google_sql_user: root:
    Root user for our MySQL database.
  8. random_password: cms_sql_local_password:
    Generates a random password for our SQL database local user.
  9. google_sql_user: local:
    Local development user for our MySQL database.
  10. google_sql_database: cms_database:
    Creates a database within our Cloud SQL instance.
  11. google_cloud_run_service:
    - cms: Creates a Cloud Run service for our CMS and links our Cloud SQL instance so we can access our database from the service.
    - app: Same as above, just without linking Cloud SQL. You’ll also notice we are setting resource limits here. I’m using defaults for the purpose of development, but you may want to play with these settings in production to see what fits your application’s needs best. You can find more detailed info on this topic here and here. (There’s an abridged sketch of the Cloud Run and IAM resources after this list.)
  12. google_iam_policy: noauth:
    Creates an IAM policy that will allow our Cloud Run services to be publicly accessible.
  13. google_cloud_run_service_iam_policy:
    - noauth_cms: Applies the policy above to our cms service.
    - noauth_app: Applies the policy above to our app service.
  14. google_service_account: bitbucket_pipelines:
    Creates a new SA (service account) to be used by Bitbucket pipelines.
  15. google_project_iam_member: bitbucket_pipelines_cloudrun_admin:
    Gives the pipelines SA Cloud Run Admin permissions.
  16. google_project_iam_member: bitbucket_pipelines_storage_admin:
    Gives the pipelines SA Storage Admin permissions.
  17. google_project_iam_member: bitbucket_pipelines_service_account_user:
    Gives the pipelines SA Service Account User permissions so Bitbucket can interact with our GCP resources.
  18. google_service_account_key: bitbucket_pipelines:
    Generates a new SA key to be used in conjunction with pipelines.
  19. google_service_account: local_database:
    Creates a new service account for local development.
  20. google_project_iam_member: local_database_viewer:
    Gives the local database SA Project Viewer permissions.
  21. google_project_iam_member: local_database_sql_client:
    Gives the local database SA Cloud SQL Client permissions.
  22. google_project_iam_member: local_database_sql_viewer:
    Gives the local database SA Cloud SQL Viewer permissions.
  23. google_service_account_key: local_database:
    Generates a new SA key to be used in conjunction with local development.
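
To make the list above more concrete, here’s an abridged sketch of the Cloud Run, IAM, and pipelines service account resources; the image and account IDs are placeholders, and the cms service and local-development resources follow the same patterns:

# Service for the Next.js app; the cms service is identical apart from
# the Cloud SQL link described above.
resource "google_cloud_run_service" "app" {
  name     = "app"
  project  = google_project.project.project_id
  location = var.gcp_region

  template {
    spec {
      containers {
        image = "gcr.io/cloudrun/hello" # placeholder until our pipeline pushes a real image

        resources {
          limits = {
            cpu    = "1000m"
            memory = "256Mi"
          }
        }
      }
    }
  }

  depends_on = [google_project_service.services]
}

# Policy allowing unauthenticated (public) access.
data "google_iam_policy" "noauth" {
  binding {
    role    = "roles/run.invoker"
    members = ["allUsers"]
  }
}

resource "google_cloud_run_service_iam_policy" "noauth_app" {
  location    = google_cloud_run_service.app.location
  project     = google_cloud_run_service.app.project
  service     = google_cloud_run_service.app.name
  policy_data = data.google_iam_policy.noauth.policy_data
}

# Service account and key used by Bitbucket Pipelines to deploy.
resource "google_service_account" "bitbucket_pipelines" {
  project    = google_project.project.project_id
  account_id = "bitbucket-pipelines"
}

resource "google_project_iam_member" "bitbucket_pipelines_cloudrun_admin" {
  project = google_project.project.project_id
  role    = "roles/run.admin"
  member  = "serviceAccount:${google_service_account.bitbucket_pipelines.email}"
}

resource "google_service_account_key" "bitbucket_pipelines" {
  service_account_id = google_service_account.bitbucket_pipelines.name
}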

That’s all the infrastructure we need for GCP. Let’s create our configuration for Bitbucket. All we need to create on Bitbucket is some repository variables for our pipelines. Go ahead and create a file named bitbucket.tf and add the following data:
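
A sketch of the idea; the exact variable keys are placeholders, but the pattern is one bitbucket_repository_variable per value our pipelines will need:

provider "bitbucket" {
  username = var.bitbucket_username
  password = var.bitbucket_password
}

# One repository variable per value the pipelines need; two shown here.
resource "bitbucket_repository_variable" "gcp_project_id" {
  repository = "${var.bitbucket_username}/${var.bitbucket_repository_slug}"
  key        = "GCP_PROJECT_ID"
  value      = google_project.project.project_id
  secured    = false
}

resource "bitbucket_repository_variable" "gcp_service_account_key" {
  repository = "${var.bitbucket_username}/${var.bitbucket_repository_slug}"
  key        = "GCP_SERVICE_ACCOUNT_KEY"
  value      = google_service_account_key.bitbucket_pipelines.private_key
  secured    = true
}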

I won’t go into details for each resource here, as all we’re creating is a bunch of repository variables and I feel the resources themselves are descriptive enough. If anything is unclear, just leave a comment and I will respond.

Finally, we should output some values that we’ll need when setting up the rest of our infrastructure later on. Create a file named outputs.tf and add the following data:
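
A sketch using the resource names from gcp.tf; the output names themselves are placeholders:

output "gcp_project_id" {
  value = google_project.project.project_id
}

# The root password we'll use to log into Cloud SQL later in this article.
output "cms_sql_root_password" {
  value     = random_password.cms_sql_root_password.result
  sensitive = true
}

# Key for the local development service account created in gcp.tf.
output "local_database_service_account_key" {
  value     = google_service_account_key.local_database.private_key
  sensitive = true
}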

That’s all our configuration complete! Commit all your latest updates to git and let’s move to the next step.

If you navigate to your workspace in the Terraform Cloud UI, you should see a plan already running:

Click on the plan to open it and you should see a screen similar to mine once Terraform has finished planning your infrastructure:

Go ahead and click Confirm & Apply. Actually provisioning your infrastructure will take around 10 minutes so grab a coffee and put your feet up for a little bit. Once the run is complete you should see some outputs in the Apply tab if it was successful:

Congratulations. All your infrastructure has been created. Navigate to your GCP console and open the Cloud Run page. You should see your app and cms services like below:

Click into either service and open the public URL generated for you. You should see a screen similar to mine in your browser:

If you navigate to the Cloud SQL page within GCP, you should see your database instance that we also spun up:

We need to update the local database user account we created to limit its privileges so it can only work with the development database. Click the instance and on the next page select Connect using Cloud Shell from the Connect to this instance panel. It should fire up a shell environment for you:

Log in as the root user using the password that Terraform output earlier in our setup. Once logged in, enter the following query:

REVOKE ALL PRIVILEGES, GRANT OPTION FROM local;

This will revoke all privileges from our local database user. Next, we need to tell the server to reload the privileges in the MySQL system schema:

FLUSH PRIVILEGES;

Then we grant our local user limited permissions on the development database only. This prevents any unwanted disasters from happening on the production database:

GRANT SELECT, UPDATE, INSERT, DELETE ON development.* TO local;

Nice 😎 I remember how good that felt the first time I successfully ran Terraform. The amount of time and effort saved is phenomenal. If you’re spinning up a lot of projects like me, this is a serious productivity booster.

Conclusion

That’s a wrap for part 2 of the series.

We’ve managed to set up a GCP project whose sole purpose is provisioning infrastructure via Terraform. This project can be reused for any projects/resources you create in the future. It has no costs attached, so keep it around.

We’ve also set up a Terraform Cloud account, linked our Bitbucket account, created a new workspace, and set up all our .tf configuration files. I used Terraform’s cloud workflow to manage our workspace instead of the CLI-driven workflow. The CLI may be quicker and more intuitive to work with, but the cloud workflow is much easier to manage within teams.

I hope you’ve enjoyed this instalment. It may seem like a lot of work now, but once you’ve done it once, you’ll never look back. Stick around for part 3, where we’ll configure Docker and Bitbucket Pipelines and deploy the first version of our application. See you there, and thanks for reading, folks.

Click here for part 3
