In our previous article, we explored the benefits of adopting a multi-cloud architecture. In this article, we’ll deliver, as promised, some introductory architectures that can serve as a starting point for adopting multi-cloud infrastructure.
If you’re unsure of what multi-cloud is and what benefits it can offer, you should read the first part of this series, where we cover the immediate benefits of utilising multiple cloud providers.
These initial architectures will not unlock the full potential of a multi-cloud strategy on their own, but they will immediately bring you the benefits discussed previously.
Additionally, depending on the architecture patterns you adopt and which services you include, you’ll also gain further benefits for the elements in scope.
Implementing these solutions early, when you have the capacity to do so rather than when you’re forced to, is a proactive approach that positions your organisation for long-term success.
Let’s begin!
The 3-2-1 backup strategy originated in on-premises environments. It ensures that you hold three copies of your data, stored across two different types of media, with one copy held off-site.
As the cloud has been around for a number of years, many environments and services have no on-premises data at all - everything has been hosted in the cloud since day one. Even with multiple backups in the cloud, if your data is not segregated outside of that one cloud provider, you’re not running a true 3-2-1 strategy: even if the third copy of the data sits in a different geographical location, it still carries the same provider risk.
When services have all their data in a single cloud provider, implementing multi-cloud backups provides the missing component to achieving a 3-2-1 backup strategy.
If you’re operating within a single cloud, it’s relatively straightforward to host the first two copies of your data. Most cloud providers, including AWS, Google Cloud, and Azure, offer automated features like scheduled backups, snapshots, and built-in redundancy across data centres. However, all copies of data stored within the same cloud account or cloud provider are still vulnerable to account-level or provider-level incidents, including accidental deletions, full account compromises, or even malicious insider threats. A single cloud account should be considered one location, as access to the account generally gives you access to all regions and services. For example, our Cloud Vulnerability Scans can access everything from a single scanning identity.
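The first two copies rarely need custom code. As a minimal sketch, assuming AWS with a placeholder bucket name and volume ID, the built-in features can be enabled with boto3:

```python
# Minimal sketch: in-provider copies on AWS via built-in features.
# The bucket name and volume ID are placeholders.
import boto3

s3 = boto3.client("s3")
ec2 = boto3.client("ec2")

# Retain prior object versions so overwrites and deletions are recoverable.
s3.put_bucket_versioning(
    Bucket="example-primary-data",
    VersioningConfiguration={"Status": "Enabled"},
)

# Take a point-in-time snapshot of a data volume; in practice you would
# schedule this (e.g. via Data Lifecycle Manager) rather than run it ad hoc.
ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="Daily backup snapshot",
)
```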
To achieve true isolation for the third backup copy, you need a separate location: either another cloud provider or an on-premises setup. If you’re not on-premises, then you need another cloud provider! By placing your third copy in a completely separate cloud, you achieve the isolation required to protect your data against cloud-specific failures or attacks. This ensures that even if your primary cloud environment is compromised or completely destroyed, your data is still safe and accessible in the second cloud.
Implementation depends on the format and location of your data. For example, if you’re working with blob or object storage - such as AWS S3, Google Cloud Storage or Azure Blob Storage - you can easily replicate this data to another cloud provider using batch jobs, cron jobs, or serverless functions such as AWS Lambda, Google Cloud Functions, or Azure Functions.
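As a rough sketch of this pattern, here is what an AWS Lambda mirroring newly written S3 objects into Google Cloud Storage could look like. The destination bucket and the Secrets Manager entry holding the GCP key are hypothetical names, and error handling and large-object streaming are omitted:

```python
# Sketch of an AWS Lambda, triggered by S3 put-object notifications, that
# mirrors each new object into a Google Cloud Storage bucket.
import json
from urllib.parse import unquote_plus

import boto3
from google.cloud import storage  # pip install google-cloud-storage
from google.oauth2 import service_account

s3 = boto3.client("s3")
DEST_BUCKET = "example-third-copy"  # hypothetical GCS bucket


def handler(event, context):
    # Load a GCP service-account key kept in AWS Secrets Manager under a
    # hypothetical secret name.
    secret = boto3.client("secretsmanager").get_secret_value(
        SecretId="gcp-backup-writer"
    )["SecretString"]
    creds = service_account.Credentials.from_service_account_info(json.loads(secret))
    gcs = storage.Client(credentials=creds, project=creds.project_id)

    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = unquote_plus(record["s3"]["object"]["key"])  # keys arrive URL-encoded
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        gcs.bucket(DEST_BUCKET).blob(key).upload_from_string(body)
```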
The technical aspect of copying a backup is simple; the key aspect to get right is ensuring there are no permission leaks across cloud providers. If the credentials for accessing the second cloud are hosted on the original cloud provider and still have full access to the copied data, then your solution has not properly isolated that third copy of your data.
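One way to limit such a leak, sketched below under the assumption that the third copy lands in Google Cloud Storage, is to grant the identity used from the original cloud a create-only role, so leaked credentials can write backups but never read them back. The bucket and service account names are placeholders:

```python
# Sketch: scope the cross-cloud identity to write-only access on the
# landing bucket, run once by an administrator of the second cloud.
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("example-third-copy")  # placeholder bucket name

policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append(
    {
        # objectCreator can create objects but cannot read or list them.
        "role": "roles/storage.objectCreator",
        "members": {"serviceAccount:backup-writer@example-project.iam.gserviceaccount.com"},
    }
)
bucket.set_iam_policy(policy)
```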
We have two approaches to implementing this separation, depending on the cloud provider, the cost and amount of storage, and the Identity and Access Management policies required by the organisation.
The first approach is the easiest and simplest; however, it does not necessarily protect your data from being read and copied when your primary cloud is compromised. It protects against total data loss rather than enforcing full data privacy. That satisfies the requirements for 3-2-1 backups but may not meet organisational policies.
The second approach is the same as the first but includes additional steps to prevent long-term data access from credentials reachable from your original cloud provider. This is achieved by migrating inbound data from the bucket it is uploaded to in the new cloud to a location those credentials cannot access. This adds complexity but also data protection: if your original cloud is compromised, there are no credentials available to access long-term data storage on your alternative cloud provider. In the case of a breach of your original cloud provider, your third data copy, which may include archived data, remains inaccessible.
This can be achieved in a number of ways, but our most common approach is to deploy batch computation services on the additional cloud provider with access to the newly uploaded third copy of the data, and to migrate this data to a different bucket that the uploading identity cannot access.
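A minimal sketch of that migration step, assuming Google Cloud Storage, placeholder bucket names, and a batch job running under an identity that exists only in the second cloud:

```python
# Sketch: move objects from the landing bucket (writable from the primary
# cloud) into an archive bucket the uploading identity cannot see.
from google.cloud import storage

client = storage.Client()  # runs as a second-cloud-only identity
landing = client.bucket("example-third-copy")
archive = client.bucket("example-third-copy-archive")

for blob in client.list_blobs(landing):
    # Server-side copy into the isolated archive, then remove the original
    # so nothing lingers where leaked credentials could reach it.
    landing.copy_blob(blob, archive, blob.name)
    blob.delete()
```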
It’s important to note that we’ve assumed blob-style storage; backups of other data types, such as relational data, can be more complex. Depending on your requirements, you may want to alter the above approaches to accommodate different data types.
We’re happy to advise as you need!
DNS is an older technology, primarily used as a query-answer service for domain information. It is a foundational component of the internet, serving as the starting point for nearly all communications. However, DNS is a frequent source of problems, and its nature and handling can lead to persistent issues, often compounded by DNS caching. Experiencing DNS issues alongside a cloud provider outage can be highly disruptive, resulting in extended and intermittent outages.
To mitigate this risk, it’s possible to host your DNS services outside of your primary cloud provider. This ensures that DNS updates and redirection to alternative endpoints can still occur even during a cloud outage, helping to maintain service continuity. This strategy can be utilised with other similar points of contact, such as load balancers or content delivery networks (CDNs), where the initial endpoint for users is hosted separately from your application infrastructure. By doing so, you maintain the ability to reroute traffic away from a failing infrastructure, reducing your dependency on that provider and minimising the impact of its issues.
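As a rough sketch of this pattern, assuming the zone is hosted with Cloudflare (any external DNS provider with an API works the same way) and using placeholder identifiers throughout, a health-check-and-failover job might look like this:

```python
# Sketch: if the primary endpoint fails its health check, repoint the
# A record at a standby endpoint in the second cloud. The zone ID,
# record ID, token, hostname and IPs are all placeholders.
import requests

CF_API = "https://api.cloudflare.com/client/v4"
ZONE_ID = "your-zone-id"
RECORD_ID = "your-record-id"
TOKEN = "your-api-token"
FAILOVER_IP = "198.51.100.20"  # standby endpoint in the second cloud


def healthy(url: str) -> bool:
    """Return True if the primary endpoint answers with HTTP 200."""
    try:
        return requests.get(url, timeout=5).status_code == 200
    except requests.RequestException:
        return False


def point_dns_at(ip: str) -> None:
    """Update the A record at the external DNS provider."""
    resp = requests.put(
        f"{CF_API}/zones/{ZONE_ID}/dns_records/{RECORD_ID}",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"type": "A", "name": "app.example.com", "content": ip, "ttl": 60},
        timeout=10,
    )
    resp.raise_for_status()


if __name__ == "__main__":
    if not healthy("https://app.example.com/health"):
        point_dns_at(FAILOVER_IP)
```

Because the DNS host sits outside the primary cloud, this job can keep running and repoint traffic even while that provider is down.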
There have been instances where authentication failures and other issues have prevented the creation or provisioning of cloud resources, as well as the updating of DNS or CDN settings to avoid service disruptions. Therefore, keeping your service endpoints separate from your computation provider can offer significant resilience, providing a critical advantage when problems arise with your primary provider.
When building a server or container image using suitable tooling, it’s nearly frictionless to simultaneously create the same image on a separate cloud provider as part of the same build process. This provides part of the foundation for switching to an alternative provider, as your build is fully provisioned and ready for configuration. While this strategy doesn’t necessarily cover data replication, specific configuration, routing or access management, it delivers immediate benefits.
By being able to build and provision container or server images to a different cloud provider, you will also have created some of the necessary operational processes required to migrate to a different cloud provider. It is far better to complete these steps now before being forced to adopt another provider under duress.
The process of provisioning Linux or Windows servers is generally similar across different cloud providers, and building and deploying container images is even easier due to their isolated nature. By standardising your image provisioning commands and processes, you can create pre-provisioned images that are consistent across multiple cloud platforms. Automating this process allows you to generate server images that are immediately deployable to multiple providers as part of your standard build process.
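As a minimal sketch of such a standardised build step, assuming a hypothetical Packer template that defines equivalent amazon-ebs and googlecompute sources sharing one provisioner:

```python
# Sketch: build the same machine image on two providers from one template.
# The template file and source names are placeholders for your own config.
import subprocess

TEMPLATE = "multi-cloud.pkr.hcl"

for source in ("amazon-ebs.app", "googlecompute.app"):
    # -only restricts the build to a named source; running both keeps the
    # AWS AMI and GCP image in lock-step from a single pipeline step.
    subprocess.run(["packer", "build", f"-only={source}", TEMPLATE], check=True)
```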
This approach provides significant flexibility, not only for disaster recovery but also for other use cases, such as creating isolated demo environments. For instance, maintaining a separate cloud provider alongside your usual development and test environments allows you to offer a completely isolated setup for demos or for scenarios that require a more secure or controlled environment.
This concept can be extended further. For clients who do not approve of your original cloud provider, or in cases where specific data or regional restrictions apply, the ability to duplicate your server provisioning across multiple clouds opens up opportunities that would not otherwise be possible. This can widen your customer base and give your sales and marketing teams the flexibility to agree to client requirements without loading significant changes onto product teams to deliver within tight time frames.
All three of these are introductory patterns that we have used in a number of environments to introduce multi-cloud to clients. They’re easy to adopt, mostly non-invasive, and a powerful way to introduce flexibility into your business.
For example, once you’ve adopted multi-cloud tooling to allow for cross-cloud data backups, you may opt to utilise dedicated storage providers and halve your blob storage bills whilst retaining the same resiliency and uptime.
If you want support implementing the above, or guidance on the next steps, feel free to get in contact. We’ll even chuck in a scan of your current accounts so you know of any vulnerabilities. Get in contact below!