Tips for a Smooth Migration from AWS to GCP


In the early days of cloud there was a clear market leader; however, now that other service providers have caught up, it’s becoming more common for organisations to adopt a hybrid cloud strategy or to migrate from one ecosystem to another due to cost or a global mandate. Regardless of the motivation, moving from one cloud provider to another needs to be well thought through to ensure minimal disruption to the business.

We recently led the migration of an enterprise custom application from Amazon Web Services (AWS) to Google Cloud Platform (GCP) for one of our global customers. There were many steps we took in planning and execution to ensure a smooth transition. In this article I will outline my top tips and considerations for a successful migration from AWS to GCP.

Architectural Considerations

A good place to start in any cloud-to-cloud migration is to map the tools in each platform so that you can identify any architectural changes you’ll need to make. In this case Google provides good documentation mapping its services to their AWS equivalents, so this was a relatively straightforward first step.

Our client had a dedicated team that was responsible for managing the GCP Platform. They provided a portal and critical services to help onboard projects, as well as centralised management of networking and firewalls. In addition to this, they also provided consistent guidance and expertise, and helped us reach out to Google Engineers and Architects when required.

The four main architectural changes were:

  1. Migrating from a regional AWS VPC to a global GCP shared network

  2. Migrating our backend containers from AWS Fargate to Google Kubernetes Engine (GKE)

  3. Migrating our event/notification services from AWS EventBridge and SQS to Google Pub/Sub

  4. Migrating our static websites from CloudFront + S3 to a global HTTPS load balancer with Cloud Storage backends


Networking

Migrating our networking was greatly simplified because our client had already established a global network shared across all their GCP Projects. When a new GCP Project was provisioned, subnets in this shared network were allocated automatically. The GCP Platform team was also responsible for provisioning the required GKE subnets in each region.


Containers

Our Java backend services were deployed as Docker containers running on AWS ECS Fargate. Migrating them to GKE was straightforward. We had injected secrets into the Fargate task definition using AWS Secrets Manager; when deploying to GKE, we achieved the same result by referencing Kubernetes Secrets in the Deployment spec.
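As an illustration, the GKE side of this looks roughly like the following Deployment excerpt. All names (service, image, Secret, key) are hypothetical; this is a minimal sketch of referencing a Kubernetes Secret as an environment variable, playing the role Secrets Manager played in the Fargate task definition.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-service                # hypothetical service name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend-service
  template:
    metadata:
      labels:
        app: backend-service
    spec:
      containers:
        - name: backend
          image: gcr.io/my-project/backend:3.1.0   # hypothetical image
          env:
            - name: DB_PASSWORD        # injected at runtime, as Secrets Manager did on Fargate
              valueFrom:
                secretKeyRef:
                  name: backend-secrets    # hypothetical Secret name
                  key: db-password
```

The Secret itself is created separately (for example with `kubectl create secret generic backend-secrets`), keeping credentials out of the container image and the spec file.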

Events / Notifications

On AWS, our solution relied heavily on EventBridge as the core message distribution mechanism, with many AWS Lambda and SQS targets for messages put onto the event bus.

After some investigation, we decided that migrating to Pub/Sub would be the most straightforward path. As part of the migration planning, I produced a detailed design showing all the message producers and consumers, so that each consumer could be classified as a “push” or “pull” subscriber and we could identify where message ordering was required.

One of the key differences between AWS EventBridge and GCP Pub/Sub is that Pub/Sub subscription filters can only match on message attributes, not the message payload. This meant that the message producers needed to know in advance which attributes to expose to enable the required filtering.
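The practical consequence is that any field a subscriber wants to filter on must be promoted out of the payload and into the attributes at publish time. The sketch below illustrates this constraint locally (the `event_type` attribute and message shapes are hypothetical, not our actual schema): the filter function can only consult attributes, never the encoded payload.

```python
import json


def publish(payload: dict) -> dict:
    """Build a Pub/Sub-style message: the routing field is copied from the
    payload into the attributes, because subscription filters cannot see
    the payload at all."""
    return {
        "data": json.dumps(payload).encode("utf-8"),          # opaque to filters
        "attributes": {"event_type": payload["event_type"]},  # visible to filters
    }


def matches_filter(message: dict, wanted_event_type: str) -> bool:
    """Mimics a subscription filter such as:
         attributes.event_type = "order.created"
    Only the attributes are consulted; the data is never decoded."""
    return message["attributes"].get("event_type") == wanted_event_type


msg = publish({"event_type": "order.created", "order_id": 42})
print(matches_filter(msg, "order.created"))    # True
print(matches_filter(msg, "order.cancelled"))  # False
```

If the producer had left `event_type` inside the JSON payload only, no subscription filter could route on it, which is why this has to be designed up front.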

Static websites

We were hosting a React website behind a Content Delivery Network (CDN) using AWS CloudFront and S3. In AWS you can set up a “CloudFront Origin Access Identity” (OAI) so that only CloudFront can access the content of your S3 bucket.

At the time of our migration, the equivalent option in GCP was to deploy a global HTTPS load balancer with a Cloud Storage bucket backend. Unfortunately, there was no equivalent to the OAI, so the storage buckets needed to be publicly readable.
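As a sketch of what that setup looked like with the gcloud CLI (all resource and bucket names here are hypothetical, and the SSL certificate is assumed to exist already): the bucket is made publicly readable, then attached as a backend bucket behind a global HTTPS load balancer.

```shell
# Objects must be publicly readable: no OAI equivalent existed at the time
gsutil iam ch allUsers:objectViewer gs://my-static-site

# Expose the bucket behind a global HTTPS load balancer, with Cloud CDN enabled
gcloud compute backend-buckets create web-backend \
    --gcs-bucket-name=my-static-site --enable-cdn
gcloud compute url-maps create web-map \
    --default-backend-bucket=web-backend
gcloud compute target-https-proxies create web-proxy \
    --url-map=web-map --ssl-certificates=web-cert
gcloud compute forwarding-rules create web-rule \
    --global --target-https-proxy=web-proxy --ports=443
```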

There was also no direct equivalent to CloudFront Functions or Lambda@Edge. These two limitations need to be taken into account during migration planning and execution.

Plan, Plan & Plan

Planning is key to any smooth cloud migration; without it you are likely to end up with an unplanned failure. My top tips for planning are:

Do as many Practice Migrations as time permits

Migration testing and rehearsals were among the most important factors of our AWS to GCP migration, as they allowed the team to understand what tools were needed, the potential risks, and where failures might occur. I used the initial migration plan to migrate the four non-production environments, updating and refining the process each time. I worked alongside the testing/quality assurance team to ensure each environment was working before the actual production migration took place. We then did a final “dummy” migration of our production environment so that we knew all the system components worked and the application deployment pipelines were configured correctly. For the final migration it was just a case of shutting down our components in AWS and then migrating the data from AWS to GCP.

Plan for Disaster Recovery in Advance

One challenge we ran into was that by the time we came to plan Disaster Recovery (DR), we had already made all our technology choices and completed part of the migration work. Due to data sovereignty requirements, our GCP regions were limited to Sydney (primary) and Melbourne (DR). When we went to deploy resources into the DR region, we discovered that a few key services (Firebase and Cloud Functions v1) were not yet available in Melbourne. As a result, the DR strategy was re-evaluated: we decided to simply replicate data to the DR region and look to implement a complete DR solution at a later time.

Create Clear Checkpoints during each stage of the migration 

Having clear checkpoints in place helps ensure all steps of the migration are running to schedule and that it will be completed within the proposed time frame. It also allows expectations to be checked continuously and keeps momentum within the team.

Document Migration Processes

Creating clear and detailed documentation of processes is a critical part of a successful migration. Throughout the practice migrations, I made sure there was detailed documentation of the step-by-step process. By the time we got to production, I had three documented and scripted migrations that had been tested all the way through. This helps ensure all staff are on the same page, and allows new staff to be brought up to speed with the migration at any stage. Having each step documented also helps with the final handover and ensures your team can review key learnings at the end of the migration process.

Communication/Change Management 

Change management and communication are key considerations when planning a migration. In this particular case, we sent out communications two weeks prior, the day before, and on the day itself, to ensure all users had ample time to prepare for the system outage. The AWS application was shut down at 9am and we had users back on the system in GCP by 2pm. When planning outage communications, I find it best practice to include buffer time in case of the unexpected. With effective communication, users have the opportunity to understand and buy in to changes within an organisation’s IT infrastructure. It’s also important to keep up communication with executives and project leaders in the business so they can follow the project at a high level.

Be Firm on your Code / Deployment Freeze 

As mentioned above, thanks to the extensive testing and documentation, the AWS to GCP migration was a relatively straightforward process. In the immediate lead-up to the production migration we enforced a code freeze on our development and release branches, as well as a freeze on our weekly release cycle. We needed to ensure that we were testing the same versions throughout the migration path: if we had version 3.1.0 in AWS, then we needed to test the migration process with 3.1.0 in GCP; and conversely, once we had verified the migration using 3.1.0, we wanted to perform the final migration with 3.1.0.
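A rule like this is easy to enforce with a small pre-flight guard in the migration scripts. The sketch below is illustrative only (the version values and function name are hypothetical, not our actual tooling): it fails fast if the version running in AWS diverges from the version about to be deployed in GCP.

```python
def check_migration_versions(source_version: str, target_version: str) -> None:
    """Abort the migration if the version deployed in AWS does not match
    the version about to be deployed in GCP, so that every rehearsal and
    the final cut-over exercise exactly the same code."""
    if source_version != target_version:
        raise RuntimeError(
            f"Version mismatch: AWS is running {source_version}, "
            f"GCP deployment is {target_version}; enforce the code freeze."
        )


check_migration_versions("3.1.0", "3.1.0")  # matching versions: no error raised
```

Running this at the start of each migration step turns a silent freeze violation into an immediate, visible failure.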

Concluding Words

While each of the top cloud providers has its own take on common services, and the APIs to interact with them differ, the services still essentially do the same thing as far as we are concerned. Even though we were making heavy use of AWS-specific services (EventBridge, SQS, Fargate, Lambda, S3, DynamoDB), we were not “locked in” to AWS. With the right team, a sound migration approach, and buy-in from the executive level, a migration from one cloud provider to another is entirely achievable.

Author Details

Jeremy Ford
Jeremy has over 16 years’ experience in software engineering. He specialises in Java and Web technologies and has extensive experience across all phases of the SDLC. Jeremy has led the successful delivery of multiple solutions for our clients utilising agile principles and processes. Jeremy is known for his exceptional technical knowledge, as well as his outstanding ability to apply this to achieve optimal solutions for clients; he is a certified AWS solutions architect and is highly experienced utilising the diverse AWS ecosystem. Jeremy is also a member of Intelligent Pathways’ internal consulting group, which identifies and recommends suitable technologies, coding practices and standards across the company.
