Repeatable Deployment Workflow with Cloud Build and Cloud Run
When you first start building web applications, “deploying” usually means taking the code on your laptop and finding a way to get it onto a server so the world can see it. In the beginning, this might involve manual steps like dragging files into an FTP client or logging into a server via a terminal (SSH) to install updates.
However, as your project grows, these manual steps become dangerous. You might forget a file, or your server might have a different version of Node.js or Python than your laptop. This leads to the infamous “it works on my machine” syndrome—where the app runs perfectly for you, but crashes the moment it hits the live server.
Making Deployments “Automatic”
The goal of a modern DevOps (Development + Operations) workflow is to take the guesswork out of this process. We want our deployments to be:
- Automated: No more manual copying of files.
- Repeatable: If you deploy ten times, the process should be identical every single time.
- Isolated: The environment where the app runs should be “packaged” so it doesn’t matter what is installed on the underlying server.
Enter Cloud Build and Cloud Run
If you are managing a modern web app—whether it’s a single large project (a monolith) or a repository where the frontend and backend live side by side (a monorepo)—Google Cloud provides two powerful “assistants”:
- Cloud Build: Think of this as a “Robot Factory.” It takes your raw code, follows a set of instructions, and builds a finished “container” (a package that includes your code and everything it needs to run).
- Cloud Run: Think of this as the “Automatic Host.” It takes that package from the factory and puts it on the internet. It handles the servers, the scaling, and the web traffic for you so you don’t have to manage a single virtual machine.
By combining these two, you create a streamlined path from your keyboard to your users. Let’s look at how to set up this “one-command” workflow.
The Core Idea: Stop Building Locally
The traditional way of deploying often involves building a Docker image on your local laptop, pushing it to a registry, and then manually updating the server. This introduces variables: your local Docker version, your OS, and even your internet upload speed can affect the outcome.
Instead, you can let Cloud Build handle the heavy lifting. The concept is simple: you provide the code, and Google’s infrastructure handles the rest.
The benefits of this approach are:
- Consistency: The build environment is identical every time.
- Zero Infrastructure: No need to manage Jenkins or build-servers.
- Security: Credentials stay within the Google Cloud ecosystem rather than living on your local machine.
The “Magic” Command
The entire deployment is triggered with a single command from your terminal:
```shell
gcloud builds submit --config cloudbuild.yaml
```
Unlike many CI/CD tools, this command does not require a Git push. It packages your current local working directory as the source and sends it to Cloud Build. This is incredibly useful for rapid prototyping or for teams that want to verify a deployment before committing to the main branch.
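One caveat worth knowing: built-in substitutions such as $SHORT_SHA are only auto-populated for trigger-based builds, so a manual submit should pass its own value. The sketch below derives a tag locally and prints the full command as a dry run; the tag logic and fallback value are illustrative, not part of the official workflow.

```shell
# Built-in substitutions like $SHORT_SHA are empty for manual submits,
# so derive a tag locally (illustrative: short git hash, or "manual"
# when no repository is available) and pass it explicitly.
TAG=$(git rev-parse --short=7 HEAD 2>/dev/null || echo "manual")
CMD="gcloud builds submit --config cloudbuild.yaml --substitutions=SHORT_SHA=${TAG}"
echo "${CMD}"   # printed as a dry run; execute it when you are ready
```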
What Happens Behind the Scenes?
Once you hit enter, Cloud Build initiates a structured lifecycle:
- Source Archiving: Your local source code is zipped and uploaded to a temporary storage bucket.
- Containerization: Cloud Build spins up a managed environment and builds your Docker image based on your Dockerfile.
- Registry Storage: The finished image is pushed to Artifact Registry (or the older Container Registry).
- Deployment: Cloud Run is notified to create a new revision using that specific image.
- Traffic Shifting: Once the new version passes health checks, Cloud Run automatically routes traffic to the new revision.
Implementation: A Sample cloudbuild.yaml
To make this work, you need a configuration file in your root directory. Here is a typical example of what that looks like:
```yaml
steps:
  # 1. Build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-web-app:$SHORT_SHA', '.']

  # 2. Push the image to Artifact Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/my-web-app:$SHORT_SHA']

  # 3. Deploy to Cloud Run
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args:
      - 'run'
      - 'deploy'
      - 'my-web-app'
      - '--image'
      - 'gcr.io/$PROJECT_ID/my-web-app:$SHORT_SHA'
      - '--region'
      - 'us-central1'
      - '--allow-unauthenticated'

images:
  - 'gcr.io/$PROJECT_ID/my-web-app:$SHORT_SHA'
```
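The deploy step can also carry configuration, since values in a local .env file never leave your machine. A sketch, printed as a dry run; the variable names and values here are illustrative:

```shell
# Sketch: env vars live on the Cloud Run service itself, not in a
# local .env file. LOG_LEVEL and FEATURE_FLAG are placeholder names.
CMD="gcloud run services update my-web-app --region us-central1 --set-env-vars LOG_LEVEL=info,FEATURE_FLAG=beta"
echo "${CMD}"   # printed as a dry run; execute it when you are ready
```

Setting variables on the service (rather than baking them into the image) means you can change configuration without rebuilding the container.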
Common Pitfalls to Avoid
Even with a streamlined workflow, you might run into these common “gotchas”:
| Issue | Solution |
| --- | --- |
| Port Binding | Cloud Run injects a PORT environment variable (8080 by default). Read that variable in your code instead of hard-coding a port. |
| Missing Env Vars | Environment variables in your local .env file aren’t sent to the cloud. Define them in the Cloud Run service settings. |
| IAM Permissions | Ensure the Cloud Build service account has the “Cloud Run Admin” and “Service Account User” roles. |
| OAuth Redirects | If you use social login, remember to add your new Cloud Run URL to the allowed redirect URIs in your OAuth provider’s console. |
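The port-binding pitfall can be handled in a tiny launch script. A minimal sketch, assuming a hypothetical server binary called my-server:

```shell
# Cloud Run injects PORT (8080 by default); fall back for local runs.
APP_PORT="${PORT:-8080}"
echo "binding to port ${APP_PORT}"
# exec my-server --port "${APP_PORT}"   # my-server is a placeholder binary
```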
Scaling to Full Automation
Once this manual command is stable, you can easily transition to a fully automated CI/CD pipeline. By adding Cloud Build Triggers, you can set the system to:
- Auto-deploy whenever you push to the `main` branch.
- Run automated tests before the deployment begins.
- Create separate pipelines for `staging` and `production` environments.
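Creating such a trigger is itself a single command. A sketch, printed as a dry run; the trigger name, repository owner, and repository name are placeholders for your own:

```shell
# Sketch: a trigger that rebuilds and deploys on every push to main.
# example-org/my-web-app is a placeholder GitHub repository.
CMD="gcloud builds triggers create github --name=deploy-main --repo-owner=example-org --repo-name=my-web-app --branch-pattern=^main\$ --build-config=cloudbuild.yaml"
echo "${CMD}"   # printed as a dry run; execute it when you are ready
```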
Even without these extra steps, the `gcloud builds submit` workflow is a massive upgrade over manual deployments, providing a professional, repeatable, and scalable foundation for any web project.