A detailed guide on deploying a web application on Google Cloud Run

We love Google Cloud Run. We deployed the “Checklist” app on Google Cloud Run. Here is a list of the steps we followed to deploy this Rails app on Google Cloud Run.

Most of these steps apply to web applications written in other languages too, and should help you deploy your own app.

Assumptions:

  • We assume that your application is ready to be deployed.
  • We also assume that you have signed up for your Google Cloud account and have set up the gcloud SDK (a quick check follows below).
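
If you want to double-check the SDK before proceeding, a quick optional sanity check (not part of the original steps) looks like this:

$ gcloud --version     # confirms the SDK is installed
$ gcloud auth list     # shows which account is currently active
$ gcloud config list   # shows the currently configured project and defaults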

The GCP services used:

You can use other services too, but for this application these are the ones we used: Cloud Run, Cloud SQL, Cloud KMS, Cloud Build, Container Registry, and IAM.

Let’s do the deployment now.

1. Get gcloud set up
$ PROJECT_ID=<<project_id>> 
$ gcloud auth login <<account_name>> 
$ gcloud config set project $PROJECT_ID
$ gcloud config set run/region us-central1
2. Enable APIs
$ gcloud services enable run.googleapis.com      # Cloud Run API
$ gcloud services enable sqladmin.googleapis.com # Cloud SQL API
$ gcloud services enable cloudkms.googleapis.com # Cloud KMS API
3. Rails Master Key (Rails specific; skip if you are using another language)
EDITOR="atom --wait" bin/rails credentials:edit

This step should create two files config/credentials.yml.enc and config/master.key.
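
Rails gitignores config/master.key by default, but since we will be committing encrypted copies of our secrets later, it is worth verifying that the plaintext key stays out of version control. An optional check:

$ grep -n "master.key" .gitignore                      # should list an entry such as /config/master.key
$ git check-ignore config/master.key && echo ignored   # prints "ignored" if git will skip the file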

4. Set up the SQL database (skip if you already have a database OR if you want to set it up using console.cloud.google.com)
# Create a small Cloud SQL instance
$ gcloud sql instances create cloudanix-checklist-production --tier=db-f1-micro --region=us-central1 --assign-ip
# Protect database root account
$ gcloud sql users set-password root --host % --instance cloudanix-checklist-production --password your_root_db_password
# Create a new database account for Rails
$ gcloud sql users create prod_db_user --instance cloudanix-checklist-production --host % --password your_prod_db_password

After the above is done, check that your instance is set up properly:

$ gcloud sql instances list
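
Optionally, you can also verify that the new database user can connect. One way (not required for the deployment itself) is gcloud's built-in connect helper, which temporarily allowlists your IP and needs the matching database client (psql or mysql) installed locally:

$ gcloud sql connect cloudanix-checklist-production --user=prod_db_user
# Enter your_prod_db_password when prompted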
5. Update your connection details in your configuration file

Even though the syntax below is Rails specific, your app will have some configuration file of its own that you need to update. For Rails, we do this in the database.yml file.

Please note the following:

As you can see, we are not storing the database password you set above in the configuration file. It is read from an environment variable, which we will control via KMS as we proceed.

production:
  <<: *default
  database: cloudanix_checklist_production
  username: cloudanix_checklist_dbuser
  password: <%= ENV['DATABASE_PASSWORD'] %>
  socket: "/cloudsql/project_id:us-central1:cloudanix-web-pg"
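
If you want to test this configuration from your own machine before deploying, one option (a sketch, not part of our flow) is the Cloud SQL Proxy, which exposes the same /cloudsql unix socket locally. The instance connection name below is a placeholder and must match the one used in the socket line of database.yml:

$ sudo mkdir -p /cloudsql && sudo chown $(whoami) /cloudsql
$ ./cloud_sql_proxy -dir=/cloudsql -instances=<<project_id>>:us-central1:<<instance_name>>

# In another terminal, with the proxy running:
$ DATABASE_PASSWORD=your_prod_db_password RAILS_ENV=production bin/rails db:version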
6. Service account to run the application

This account will be used to run the Cloud Run service. It will be granted access to the other resources the app needs (e.g. if you are storing documents, the relevant bucket can be given access via this service account).

$ gcloud iam service-accounts create cloudanix-checklist-srvacc --display-name "Service Account for Cloudanix Checklist"

You will see output like the following:

Created service account [cloudanix-checklist-srvacc].

7. Give this service account access to the required resources
# Store the account name in a variable for later use
$ SRV_ACCOUNT=cloudanix-checklist-srvacc@$PROJECT_ID.iam.gserviceaccount.com

# Grant client role on CloudSql
$ gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member serviceAccount:$SRV_ACCOUNT --role roles/cloudsql.client

As you can see, we are granting this service account the appropriate access to the resources it needs. If you are using additional resources, you can extend the permissions accordingly (an example follows the output below).

The output should look something like this

Updated IAM policy for project [cloudanix-app].
bindings:
<<snip>>
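
For example, if the app stored uploaded documents in a Cloud Storage bucket, the same service account could be granted access like this (the bucket name is a placeholder; the Checklist app itself did not need this):

$ gsutil iam ch serviceAccount:$SRV_ACCOUNT:roles/storage.objectAdmin gs://your-documents-bucket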

8. Create a key file for this service account
$ gcloud iam service-accounts keys create ./config/cloudanix_checklist_srvacc.key --iam-account cloudanix-checklist-srvacc@$PROJECT_ID.iam.gserviceaccount.com

Your output should look like the following:

created key [6bc27e420377c3124e3172cf2b76b89d4axxxxx] of type [json] as [./config/cloudanix_checklist_srvacc.key] for [cloudanix-checklist-srvacc@cloudanix-app.iam.gserviceaccount.com]
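
The .key file just created is a plaintext credential, so make sure it never lands in version control; only the encrypted .enc copy we create in the next step should be committed. An optional guard:

$ echo "/config/cloudanix_checklist_srvacc.key" >> .gitignore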

Let’s now store our secrets (the JSON key file above, the Rails master key, and the database password) securely. Time for KMS.

9. Store the keys and the master key in KMS
# Create key ring
$ gcloud kms keyrings create cloudanix_checklist_ring --location=us-central1

# Encrypt the credentials of the service account
$ gcloud kms keys create cloudanix_checklist_srvacc_key --location us-central1 \
  --keyring cloudanix_checklist_ring --purpose encryption

$ gcloud kms encrypt --location us-central1 --keyring cloudanix_checklist_ring --key cloudanix_checklist_srvacc_key --plaintext-file ./config/cloudanix_checklist_srvacc.key \
  --ciphertext-file ./config/cloudanix_checklist_srvacc.key.enc

# We should also encrypt and store the Rails master key file. For other languages, this step may be optional, unless you also have a master key you want to encrypt with KMS.
$ gcloud kms keys create cloudanix_checklist_web_key --location us-central1 --keyring cloudanix_checklist_ring --purpose encryption

$ gcloud kms encrypt --location us-central1 --keyring cloudanix_checklist_ring --key cloudanix_checklist_web_key --plaintext-file ./config/master.key --ciphertext-file ./config/master.key.enc
10. Database password setup
$ gcloud kms keys create db_password_key --location=us-central1 --keyring cloudanix_checklist_ring --purpose encryption

# Replace your database password inside the quotes. Copy the output of this command and keep it in a text file; we will need it later.
$ echo -n "<<your database password>>" | gcloud kms encrypt --location us-central1 --keyring cloudanix_checklist_ring --key db_password_key --plaintext-file - --ciphertext-file -| base64

The second command above outputs a base64-encoded string. Copy it; you will need it for your cloudbuild.yaml file.

11. Using Google Cloud Build

We will use Google Cloud Build to get our master branch deployed to Cloud Run.

# Get the service account for Cloud Build, which you can find in IAM
CB_SRV_ACCOUNT=xxx...xxx@cloudbuild.gserviceaccount.com

# Grant Cloud Build the right to decrypt Rails master key
$ gcloud kms keys add-iam-policy-binding cloudanix_checklist_web_key --location=us-central1 --keyring=cloudanix_checklist_ring --member=serviceAccount:$CB_SRV_ACCOUNT --role=roles/cloudkms.cryptoKeyDecrypter

# Grant Cloud Build the right to decrypt the Rails production database password
$ gcloud kms keys add-iam-policy-binding db_password_key --location=us-central1 --keyring=cloudanix_checklist_ring --member=serviceAccount:$CB_SRV_ACCOUNT --role=roles/cloudkms.cryptoKeyDecrypter

# Grant Cloud Build the right to decrypt the cloud service account credentials
$ gcloud kms keys add-iam-policy-binding cloudanix_checklist_srvacc_key --location=us-central1 --keyring=cloudanix_checklist_ring --member=serviceAccount:$CB_SRV_ACCOUNT --role=roles/cloudkms.cryptoKeyDecrypter
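
Depending on your project's IAM setup, the Cloud Build service account may also need permission to deploy to Cloud Run and to act as the service's runtime identity. Treat this as a hedged hint: if the deploy step later fails with a permission error, bindings like the following usually resolve it.

# Allow Cloud Build to deploy Cloud Run services
$ gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member serviceAccount:$CB_SRV_ACCOUNT --role roles/run.admin

# Allow Cloud Build to act as the runtime service account of the Cloud Run service
# (the Compute Engine default service account here, since the deploy step does not pass --service-account)
$ gcloud iam service-accounts add-iam-policy-binding <<runtime_service_account_email>> \
  --member serviceAccount:$CB_SRV_ACCOUNT --role roles/iam.serviceAccountUser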
12. Create the cloudbuild.yaml file in your code (root folder)
steps:

# Decrypt Rails Master key file
- name: gcr.io/cloud-builders/gcloud
  args: ["kms", "decrypt", "--ciphertext-file=./config/master.key.enc",
         "--plaintext-file=./config/master.key",
         "--location=us-central1","--keyring=cloudanix_checklist_ring",
        "--key=cloudanix_checklist_web_key"]

# Decrypt Cloudanix Checklist Service account credentials
- name: gcr.io/cloud-builders/gcloud
  args: ["kms", "decrypt", "--ciphertext-file=./config/cloudanix_checklist_srvacc.key.enc",
         "--plaintext-file=./config/cloudanix_checklist_srvacc.key",
         "--location=us-central1","--keyring=cloudanix_checklist_ring",
         "--key=cloudanix_checklist_srvacc_key"]

# Build image with tag 'latest' and pass decrypted Rails DB password as argument
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '--tag', 'gcr.io/$PROJECT_ID/cloudanix_checklist:latest',
         '--build-arg', 'DB_PWD', '.']
  secretEnv: ['DB_PWD']

# Push new image to Google Container Registry
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/cloudanix_checklist:latest']

# Deploy the new image to the Cloud Run service
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['beta', 'run', 'deploy', 'cloudanix-checklist', '--image', 'gcr.io/cloudanix-app/cloudanix_checklist', '--region', 'us-central1','--set-cloudsql-instances','cloudanix-app:us-central1:cloudanix-web-pg','--platform','managed', '--allow-unauthenticated']

secrets:
- kmsKeyName: projects/cloudanix-app/locations/us-central1/keyRings/cloudanix_checklist_ring/cryptoKeys/db_password_key
  secretEnv:
    DB_PWD: "<<your encrypted password from step 10>>"

timeout: 1800s
13. Create the Dockerfile (parts of this file will vary based on your language and runtime requirements)
# Leverage the official Ruby image from Docker Hub
# https://hub.docker.com/_/ruby
FROM ruby:2.6

# Install recent versions of nodejs (10.x) and yarn pkg manager
# Needed to properly pre-compile Rails assets
RUN (curl -sL https://deb.nodesource.com/setup_10.x | bash -) && apt-get update && apt-get install -y nodejs

RUN (curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -) && \
    echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list && \
    apt-get update && apt-get install -y yarn

# Install PostgreSQL client (needed for the connection to the Google Cloud SQL instance)
RUN apt-get install -y postgresql-client

# Install production dependencies (Gems installation in
# local vendor directory)
WORKDIR /usr/src/app
COPY Gemfile Gemfile.lock ./
ENV BUNDLE_FROZEN=true
RUN bundle install

# Copy application code to the container image.
# Note: files listed in .gitignore are not copied
# (e.g.secret files)
COPY . .

# Pre-compile Rails assets (master key needed)
RUN RAILS_ENV=production bundle exec rake assets:precompile

# Set Google App Credentials environment variable with Service Account
ENV GOOGLE_APPLICATION_CREDENTIALS=/usr/src/app/config/cloudanix_checklist_srvacc.key

# Setup Rails DB password passed on docker command line (see Cloud Build file)
ARG DB_PWD
ENV DATABASE_PASSWORD=${DB_PWD}

# For now we don't have a Nginx/Apache frontend so tell
# the Puma HTTP server to serve static content
# (e.g. CSS and Javascript files)
ENV RAILS_SERVE_STATIC_FILES=true

# Redirect Rails log to STDOUT for Cloud Run to capture
ENV RAILS_LOG_TO_STDOUT=true

# Designate the initial script to run on container startup
RUN chmod +x /usr/src/app/entrypoint.sh
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]
14. Create the entrypoint.sh file
#!/usr/bin/env bash

cd /usr/src/app

# Create the Rails production DB on first run
RAILS_ENV=production bundle exec rake db:create

# Make sure we are using the most up to date
# database schema
RAILS_ENV=production bundle exec rake db:migrate

# Do some protective cleanup
> log/production.log
rm -f tmp/pids/server.pid

# Run the web service on container startup
# $PORT is provided as an environment variable by Cloud Run
bundle exec rails server -e production -b 0.0.0.0 -p $PORT
15. Submit the build and deploy the application (or you can set up a trigger to start the build when a commit happens on a branch)
$ gcloud builds submit --config cloudbuild.yaml

If you go back to your cloudbuild.yaml file, you will notice the comment “# Deploy the new image to the Cloud Run service”. Below that comment are the instructions that deploy the newly built image to the Cloud Run service.
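
Instead of submitting builds manually, you can also create the trigger from the CLI. A minimal sketch, assuming your GitHub repository is already connected to Cloud Build (owner and repository names are placeholders):

$ gcloud beta builds triggers create github \
  --repo-owner=<<github_owner>> --repo-name=<<repo_name>> \
  --branch-pattern="^master$" --build-config=cloudbuild.yaml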

16. Done

Your application is now running on Google Cloud Run! Congratulations 🙂
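
To confirm the service is live and to grab its URL, you can ask Cloud Run directly:

$ gcloud run services list --platform managed --region us-central1
$ gcloud run services describe cloudanix-checklist --platform managed --region us-central1 \
  --format 'value(status.url)'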

Common Errors

1. Step #2: Your Ruby version is 2.6.5, but your Gemfile specified 2.6.3

Solution: Ensure that the Ruby version in your Dockerfile's base image and the version specified in your Gemfile do not conflict.
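
A quick way to spot the mismatch before building (adjust the patterns to your files):

$ grep -n "^ruby" Gemfile          # e.g. ruby '2.6.3'
$ grep -n "^FROM ruby" Dockerfile  # the base image should match, e.g. FROM ruby:2.6.3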

A Complete Developer’s Guide to Single Sign On (SSO) For Enterprise Applications

This in-depth guide is written for developers and operators to demystify the role of SSO, OAuth, SAML, IdPs, and various providers like Okta, OneLogin, Auth0, Ping Identity, etc.

While building SSO for Cloudanix, the team came across quite a few design decisions, options, and prioritization crossroads, which resulted in this blog post. We felt there was no complete and authoritative guide for developer and operations teams, so we put this together for you.

We plan to explain, demystify, and help you understand SSO, OAuth, SAML, IdPs, and various providers like Okta, OneLogin, Auth0, Ping Identity, etc.

Let’s get started.

Authentication (and authorization)

Authentication is the process of verifying who you are, while authorization is the process of verifying what you have access to. These two terms are pretty common in both consumer and enterprise apps.

Authentication comes in many flavors, and the most common of them is password-based authentication. Other examples are a one-time PIN, biometrics, or adding a second factor (like SMS) on top of password-based authentication. The objective of authentication is to make sure the right person can enter the application concerned.

In some documentation, you may also come across these terms: a Principal (an entity) wants to authenticate to get access to a System (another entity).

An analogy: let's say your neighbor is going on vacation. She comes over and gives you her house key so you can water the plants. You are authenticating into her house with the key; only the person with the "right" key gets in. But she has locked one of the rooms she doesn't want you to enter, because you are only authorized to go to, say, the backyard, where the plants are. So you are authorized to access designated areas only. That's the difference between authentication ("I am X") and authorization ("Can I do X?").

SSO

As the web grew in popularity, it became evident that individuals were signing up for several web applications, each with its own set of credentials (login ID and password). It is cumbersome for users to remember so many different IDs and passwords. SSO came to help.

SSO stands for Single Sign On.

SSO is a mechanism for authentication where a single pair of credentials can be used across several applications. Imagine signing up for every application with the same user ID and password, and whenever you changed the password in one application, every other application received the new password too, so you could keep logging in to all of them. SSO gives you that effect, but without actually sharing or synchronizing passwords between applications.

So, how does SSO work?

Since a single identity of a user should grant access across multiple service providers, the authentication process needs to be managed by a single identity provider, a directory service, or some other central solution.

This single source of authentication could be implemented as:

  • an LDAP server
  • a database
  • Active Directory
  • Federation based

In simple terms, a user can go to any of N service providers, and all of them, via LDAP, a trust relationship, or some other SSO protocol, consult this “single source of truth” system of record (a database or directory) which recognizes the user by the credentials provided.
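
To make the LDAP option concrete, here is a hedged sketch of what "checking credentials against a single source of truth" can look like with the standard ldapsearch tool (the host, base DN, and user are hypothetical):

# A simple bind: if the bind succeeds, the directory has verified the user's credentials
$ ldapsearch -x -H ldap://ldap.example.com \
  -D "uid=jane,ou=people,dc=example,dc=com" -w 'her_password' \
  -b "ou=people,dc=example,dc=com" "(uid=jane)"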

The terms LDAP and the various SSO protocols are nothing but agreed-upon industry standards that different systems understand and recognize. We will see more such acronyms below when we talk about federation.

Federation

In the journey of user authentication, there came a moment when the same user spanned multiple organizations and security domains (e.g. multiple SaaS applications used by the same user of one organization).

Federation is part of SSO.

Now, when we talk about various systems across security domains and multiple organizations talking to each other, we should expect a variety (and an evolution) of protocols and languages to accomplish this. Hence the acronyms below.

  • OAuth2
  • SAML (1.1/2.0)
  • OpenID Connect
  • WS-Federation

Two popular open protocols (and corresponding third-party providers) are listed below; a minimal token-exchange sketch follows the list.

  • A SAML 2.0 compliant Identity Provider (IdP) that is configured to communicate with your app (the Service Provider, or SP).
    • Examples of IdPs are ADFS, Auth0, Okta, and Ping Identity.
  • OpenID Connect (built on OAuth 2.0) identity management.
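
To give a flavour of what these protocols look like on the wire, here is a hedged sketch of the OAuth 2.0 / OpenID Connect authorization code exchange: after the IdP redirects the user back to your app with a one-time code, your app swaps it for tokens. The endpoint and credentials below are hypothetical:

$ curl -X POST https://idp.example.com/oauth2/token \
  -d grant_type=authorization_code \
  -d code=<<code_from_redirect>> \
  -d redirect_uri=https://www.yourapp.com/auth/callback \
  -d client_id=<<client_id>> \
  -d client_secret=<<client_secret>>
# The JSON response typically contains an access_token, and for OpenID Connect an id_token as well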

Benefits of using Federated SSO

  • A single pair of corporate credentials across all your service providers: one user, one password across domains, service providers, and all the various services you access.
  • Maintenance of identities becomes simpler, with no additional data migration.
  • A corporate admin finds it easy to manage ex-employees' access across several service providers.

WordPress installation in a subdirectory of an existing app (Ruby On Rails)

We didn’t want our blog to live at blog.cloudanix.com, but rather where it is right now: https://www.cloudanix.com/blog

The web application cloudanix.com is primarily a Ruby on Rails application hosted on Google Cloud. We wanted a WordPress site served as a subdirectory of this existing web application, which runs on a different tech stack (Rails) and may be hosted on a different provider.

There are a few good tutorials and write-ups around (links at the end), but they only provided a good start; following them didn’t get us to a complete setup. The post below is a log of how we managed to do it. This post neither endorses any hosting provider nor suggests this is the only way to do it. In fact, we welcome any suggestions telling us if we did anything wrong or missed any steps, so that we can fix it.

Here we go:

A. Ruby On Rails app:

Step 1: Gemfile
gem 'rack-reverse-proxy', :require => 'rack/reverse_proxy'

Then run bundle update.

Step 2: config.ru
require_relative 'config/environment'

use Rack::ReverseProxy do
  reverse_proxy(/^\/blog(\/.*)$/, 'https://blog.cloudanix.com$1', opts = { preserve_host: true })
end

run Rails.application

Note: there is no trailing ‘/’ after the blog host in the target URL (‘https://blog.cloudanix.com$1’); it is immediately followed by $1, which already contains the leading slash captured from the request path.

Step 3: routes.rb
Rails.application.routes.draw do

  get '/blog', to: redirect('https://www.cloudanix.com/blog/', status: 301)
end

B. WordPress

Step 1: Installation

We tried installing the WordPress blog on GetFlywheel and Pantheon but couldn’t get this working. Then we moved on to DigitalOcean.

During your WordPress installation, you have to make sure that your subdomain is set up before you start configuring the WordPress droplet.

blog.cloudanix.com is the subdomain we configured.

Step 2: Changing the WordPress Address and Site Address URLs

Go to Settings -> General (from the left-hand menu) and change both the WordPress Address (URL) and Site Address (URL) values.
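
If you have shell access to the droplet, the same two settings can also be changed with WP-CLI instead of the admin UI (the values below are placeholders for whatever URLs you chose; run this from the WordPress install directory or pass --path):

$ wp option update home '<<your_site_address_url>>'
$ wp option update siteurl '<<your_wordpress_address_url>>'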

Step 3: Using Classic Editor instead of Gutenberg

The version of WordPress current when we installed shipped with the Gutenberg editor out of the box. We could open the sample Hello World post, but we could neither edit it nor create a new post.

After a lot of searching and experimentation, we resolved the issue by installing and using the Classic Editor.

Step 4: Changing permalinks

We assume that, like us, you would want your blog posts to have a URL like https://www.cloudanix.com/blog/this-is-so-awesome rather than https://www.cloudanix.com/blog?p=123. To achieve this, you merely have to change the Permalink setting. We did that too.

When we changed it, we saw that https://www.cloudanix.com/blog would load, but the individual posts were not loading. Fixing the .htaccess file resolved the issue.

C. .htaccess file changes

Step 1: Overriding the default

This could be the only controversial step, but it is also the step that fixed the issue for us. When we changed the Permalink, the .htaccess file got updated (make sure this actually happens; otherwise you have other issues going on, such as file permissions). At that point, our .htaccess file looked like this:

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /blog
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /blog/index.php [L]
</IfModule>

# END WordPress

These default entries in .htaccess, created by the Permalink change, did not allow individual posts to load, as mentioned above.

We had to change the .htaccess file to the following, and then things started working just fine. So, this is how our .htaccess file looks now.

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>

# END WordPress

Notice that we removed “blog” from the RewriteBase and the RewriteRule.

So far, posts, pages, assets, new plugin installations, and all the other major use cases are working just fine. Please share any ideas that could make this better, or your experience following this post to install your own WordPress.

Links we came across: