Presented at Activate Customer Service 2021. Learn about the security benefits of moving to the cloud with Google Cloud Platform.
Tiffany Lewis, Google Cloud Security and Compliance Specialist
Tiffany Lewis: Hey folks. Thank you so much for having me. My name is Tiffany Lewis. I am a security and compliance specialist at Google Cloud. Prior to working at Google, I worked at a consultancy where my job was to advise customers on the pros and cons of moving to cloud versus staying on-prem, as well as reviewing cloud-native companies and suggesting ways to improve their overall security and compliance posture. That said, today’s discussion is going to be around scaling security with Google Cloud Platform, otherwise known as GCP.
The objective of today’s discussion is really to enable and empower those joining the session when it comes to things within GCP: understanding the security fundamentals, how they apply to your security infrastructure, and specifically the idea of the shared responsibility model and how that applies to all customers. That said, usually when I have these conversations, I like to start with what I call the legal or the disclaimer slide. There are a lot of, I feel, misconceptions when it comes to cloud, and Google Cloud Platform specifically. As a result we have very clear contractual commitments that we make to our customers for GCP within the DPST, otherwise known as the Data Processing and Security Terms. These are publicly available terms that apply to all of our customers using GCP.
GCP is our enterprise public cloud offering that we have available for customers to use. This enterprise offering is distinctly different from some of our consumer offerings, let’s say on the ads side, Maps, or our free Gmail. With our enterprise services, we have very explicit contractual agreements, and these are some of the commitments we make explicitly for those. First of which is: you own your data, not Google. Google will never sell customer data to any type of third party. We’ll never scrape it to use for advertising or resell it to an advertising agency. Within the DPST, we use the terms data processor and data controller.
Ultimately, you are the data controller for all of your data. You decide what information goes up to the cloud. You decide the security controls and parameters that are instituted around that data. Google Cloud will only do what you tell us to do around processing that data. For instance, if you want to spin up a VM and you send the command to spin that up, we will spin that up on your behalf. We also have some technical commitments that we make to customers within the DPST, such as: all of your data will be encrypted both at rest and in transit. We’ll talk a little bit more about the technical implementation details a few slides later.
We also guard against insider access to your data. Insider access could be a Googler trying to maliciously access a customer’s data, or inadvertently accessing that data. We discuss those more within the DPST as well as our Google Cloud security white papers, but we have both technical controls against those as well as a data incident notification policy that is a contractual commitment within the DPST. The technical controls that we have to provide protection against insider access have been validated by third-party auditors. Also, we never give any type of government entity backdoor access to your environment. If a government entity is trying to gain access to data that’s being hosted on Google Cloud Platform, we will in the majority of cases contact our customers to let them know that this is something that is being requested, versus just handing over the data.
We also have a public-facing white paper that I think goes into a decent amount of detail describing the process that happens when a government entity asks or requests access to customer data: our government access white paper. You can also do a Google search for "Google Cloud transparency reports," which will show you the number of government requests we get, further information about the divisions, things along those lines, if that’s of interest, right? And the final thing that I will mention here is that our security practices and our data centers are audited against international standards. When I say international standards I’m referring to ISO 27001, 27017, and 27018; SOC 1, 2, and 3; and PCI DSS AOC. These are commitments that we make explicit within our DPST: to maintain at least annual audits for those various certifications, to ensure that our data centers are operating in line with best practice and that you don’t just trust us when we say it, but we also have this verified, right? We also have this validated by third-party, industry-respected groups. That said, with the fun legal disclaimer, I think we are done.
So we get to jump into what I really love: the technical stuff. If I’m being really honest and frank, the thing that drew me to Google was the security narrative, right? In my prior job I worked with a variety of different public clouds and a variety of different on-prem service providers, all these different things. And the encryption narrative is really what drew my attention to GCP. A lot of times, what I see in the industry is people saying that they do encryption at rest or encryption in transit by default. And when they talk about encryption at rest they’re talking about something similar to BitLocker, where you simply have encryption at the hard drive level with a single key, which helps if you’re concerned about a hard drive walking out of your data center. But outside of that it doesn’t really provide additional layers of protection or encryption. What GCP does by default, when we talk about encryption at rest, is implement encryption at rest across all of our GCP services, because we implement it at the storage layer via our distributed file system.
So the way that this happens is displayed in this diagram, and we’ll do a walkthrough example. Essentially, any time that data is uploaded to the cloud, whether it’s a picture, a video, what have you, and you’re storing it in something like, let’s say, GCS or Google Cloud Storage, that data is uploaded to GCP. Once it is uploaded, that file is then divided into chunks. The number of chunks it’s subdivided into depends on the overall format and file size. So it could be tens of chunks, hundreds of chunks, thousands of chunks. It just depends, right? So it’s chunked up. Each little chunk is then wrapped with a data encryption key, otherwise known as a DEK. Now, it’s important to note here that even though they start from the same origin file, each of these DEKs is unique from the others. The DEKs are then wrapped in KEKs, or key encryption keys, so we leverage envelope encryption here, and those encrypted chunks are then distributed across Google Cloud’s global storage infrastructure. All of this is happening behind the scenes. It doesn’t introduce any additional latency into the process itself.
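To make the pipeline just described concrete (chunk the file, seal each chunk with its own unique DEK, wrap each DEK with a KEK), here is a toy Python sketch of the envelope-encryption structure. To be clear, this is an illustration of the shape of the scheme only: the XOR keystream below stands in for a real cipher (Google’s storage layer uses real AES-based encryption, not this), and every function and variable name here is hypothetical.

```python
import secrets
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy symmetric "cipher": XOR against a SHA-256-derived keystream.
    # Illustration only -- NOT real encryption.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def envelope_encrypt(chunks, kek: bytes):
    # Each chunk gets its own unique DEK; the DEK itself is then
    # wrapped (encrypted) with the KEK -- "envelope encryption".
    stored = []
    for chunk in chunks:
        dek = secrets.token_bytes(32)                # unique per chunk
        encrypted_chunk = keystream_xor(dek, chunk)  # chunk sealed with DEK
        wrapped_dek = keystream_xor(kek, dek)        # DEK sealed with KEK
        stored.append((encrypted_chunk, wrapped_dek))
    return stored

def envelope_decrypt(stored, kek: bytes) -> bytes:
    plaintext = b""
    for encrypted_chunk, wrapped_dek in stored:
        dek = keystream_xor(kek, wrapped_dek)        # unwrap the DEK first
        plaintext += keystream_xor(dek, encrypted_chunk)
    return plaintext

# Split an "uploaded file" into fixed-size chunks, as in the walkthrough.
data = b"basset hound montage " * 100
chunks = [data[i:i + 256] for i in range(0, len(data), 256)]
kek = secrets.token_bytes(32)
stored = envelope_encrypt(chunks, kek)
assert envelope_decrypt(stored, kek) == data
```

The point of the design is visible in the data layout: what gets stored is encrypted chunks plus wrapped DEKs, so rotating or revoking the KEK never requires re-encrypting the bulk data itself.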
If you’re wondering, this is also what we have running behind the scenes for other things like, say, Google Sheets: any time there is a keystroke and you see it say "Saved" at the top, this process is happening on the backend, because encryption was built as something core and central to Google itself, versus an ad hoc thing added just to check a box, right? That’s why I think you have truly no significant impact on overall performance when you talk about this default encryption narrative. We also provide encryption in transit by default. There are lots of ways to customize these things, but essentially we use the TLS protocol or QUIC protocol to do the encryption. As far as which version it uses, it’ll essentially do a negotiation between Google Cloud and the client’s browser to use the highest version supported on the client side.
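On the client side, that version negotiation is handled by the TLS stack itself, but you can still pin a floor on what you will accept. A minimal sketch using Python’s standard-library ssl module; this is generic client configuration, nothing GCP-specific:

```python
import ssl

# Standard client context: certificate verification and hostname
# checking are on by default.
ctx = ssl.create_default_context()

# The handshake still negotiates the highest version both peers
# support, but this refuses anything older than TLS 1.2.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```

Wrapping a socket with `ctx.wrap_socket(sock, server_hostname=...)` would then fail the handshake against any server that can’t speak TLS 1.2 or newer.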
So let’s walk through this example. Let’s say that I am uploading a montage of Basset Hound videos. Love Basset Hounds, by the way, so if you hear other references, there they are. We have our Basset Hound montage that has been uploaded to the cloud. Once it’s uploaded, the Basset Hound montage is broken up, depending on the size of the video, into, let’s say, hundreds of chunks, and each chunk is essentially wrapped in its own DEK. The DEK is then wrapped in a KEK from the service being provided on GCP, and that’s distributed across Google’s global infrastructure. Now, one of the common questions that we get, aside from latency, is: does Google also encrypt the backups? The answer is yes. The other thing I want to say before we move on to the next slide is that this is what we do as table stakes, right? This is done for essentially all of our services on GCP.
With that said, our crypto module for the encryption-at-rest narrative is FIPS 140-2 validated. So it’s good for a lot of different workloads. But for those folks who have really high regulatory, internal security, or compliance requirements, we do have customizable, if you will, or choose-your-own-adventure key management options that’ll support things like HSM. If you need more control over the keys themselves, such as creation, deletion, and rotation, or need to be able to control ACLs on those specific keys, we support that as well via our Cloud KMS. Networking. So this is another fun one. One of the unique things is that Google has one of the largest backbone networks in the world.
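For reference, the customer-managed path she alludes to looks roughly like this with the gcloud CLI. The key ring and key names here are made up, and this is a sketch of the common flags rather than a full walkthrough:

```shell
# Create a key ring, then a key with automatic 90-day rotation
# (all resource names below are hypothetical).
gcloud kms keyrings create demo-ring --location us-east1

gcloud kms keys create demo-key \
  --location us-east1 \
  --keyring demo-ring \
  --purpose encryption \
  --rotation-period 90d \
  --next-rotation-time 2026-01-01T00:00:00Z
```

ACL-style control over who can use the key then happens through IAM bindings on the key resource itself, separately from access to the data it protects.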
We have over 130 points of presence spanning 35-plus countries, and we’re continuing to add zones and regions, which are just logical areas of segmentation with data centers and all the other services needed to run GCP, to meet customers’ preference and policy requirements. Now folks may tell us, that’s great, love me some networking, but why are you talking about networking during a session about security? That’s because I think our networking narrative really complements and strengthens the overall security narrative when it comes to understanding the benefits of utilizing GCP. So sure, our network delivers low latency because it’s so big it spans the globe, but it also improves our security posture, because once a customer’s traffic is in our network it’s no longer transiting the public internet, making it less likely to be attacked, intercepted, or manipulated.
The other benefit of using GCP is that we’ve essentially been cloud native, if you will, for 15-plus years, which means that we’ve had a lot of learnings. Part of the reason why we’ve invested so much in a lot of the proprietary networks, hardware, and software that Google Cloud has is simply because things like nation-state threats and nation-state attacks are things that we’ve had to think about in the past, right? The idea of future-proofing not just for tomorrow but for years ahead is something that we’ve been thinking about for a while, and I think we’ve built a lot of these things into our overall ecosystem. Let’s talk a little bit about vendor-in-the-middle. Google was thinking about vendor-in-the-middle before it really became a central point of discussion within the industry; they were working towards this over a decade ago. So when you use cloud, and I’m sure everyone here is familiar, but just to reiterate: if we were to simplify it and boil it down, at the end of the day you’re essentially renting and using servers, networking, and peripheral devices from a cloud service provider, right?
The narrative gets more complex depending on the service that you use, but at its core, that’s what it is. Google, if you were to go by the number of servers that we manufacture and assemble, would be the third largest server manufacturer in the world. Which is mind-boggling. So it’s extraordinarily important to make sure that the servers, the network, and the peripheral devices that we have within our data centers haven’t been altered maliciously in any way, and don’t have any type of third-party hardware or software that could potentially introduce a net-new risk factor that we don’t want to see. Traditionally, when we look at vendor-in-the-middle, the thing that we’re concerned about is firmware: some type of hardware or software that is ultimately trying to alter the firmware that we have on our existing devices. Titan is pretty neat because it exists to check the firmware for any type of irregularities that may exist. It provides a hardware-based root of trust to establish a strong identity for a machine, and it also has tamper-evident audit logs tied to it as well, so we can do additional troubleshooting and review those, right? If you were to simplify Titan, really think about it as a TPM that we have on all of our servers within our data centers, all of our peripherals, all of our networking devices. It’s something we have embedded in them by default so that we are able to detect if a vendor-in-the-middle attack were ever to happen, and to also mitigate the risk.
Other things mentioned on the slide that we have within our data centers: we’ve got purpose-built chips, purpose-built servers, purpose-built storage, the network that we talked about, as well as purpose-built data centers. Live migration. The idea of being able to upgrade at scale with no disruptions is really one of the benefits of cloud, especially when we’re talking about IaaS. One of the differentiators of GCP is essentially our ability to update our infrastructure without disrupting the customer experience, using a technology called live migration. For instance, let’s say that you have updates that are adding functionality, or you need to apply some type of software patch. Live migration for our Google Compute Engine, or GCE, VMs allows us to do that on the backend without negatively impacting the overall customer experience. A good example of a case where I think live migration was really nifty is if we look back at the Spectre and Meltdown vulnerabilities a few years ago. They were discovered by researchers from the Project Zero team, which is essentially like Google’s James Bond team. Real exciting, real smart, real bright folks. Google essentially worked with the Project Zero team and other industry leaders to address these vulnerabilities; there were a lot of bright minds put together to fix them. And what customers then saw, a lot of times in their cloud environments, was this red blaring sign that said they had to restart their fleet of VMs in order to apply the patch, right, which led to increased downtime: it meant that the fleet had to go down, and clear disruption to their overall infrastructure.
For GCP customers, they barely had to pay attention, because we were able to apply these updates on the backend via live migration. The reason that we were able to do this is because Google Cloud, if you look at the infrastructure underlying GCP, is built on containers. Essentially that gives us the ability to migrate a running VM from one host to another, moving the complete instance state from the source to the destination in a way that’s completely transparent to the guest OS and anyone communicating with it, leading to that seamless experience. If you want to know more about it, just Google "live migration in GCP" and you’ll find some pretty cool stuff.
The final thing that I wanted to touch on here is really shared responsibility and how that impacts the overall security and compliance narrative. When we’re talking about cloud, whether that’s GCP, AWS, Azure, just cloud, right, we all have this common idea of a shared responsibility model or a customer responsibility model; SRM and CRM are terms that you may hear or see very frequently. It’s the idea that the responsibility to secure, maintain, and update the platform isn’t just on the cloud provider, and it’s not just on the customer; rather, it’s a joint responsibility between the two. Now, how much responsibility goes to the cloud service provider versus how much goes to the customer really depends on the service being used from that cloud provider, right? So when we talk about the shared responsibility model, the table stakes are that the cloud service provider’s responsibility is to maintain the integrity and security of the infrastructure underlying the cloud platform that the customer is using. This is true for IaaS services, PaaS services, and SaaS services, and, if we were to add another, CaaS services, which is containers as a service.
Now, if you’ll notice here, the further right that we go on this continuum, this sort of evolution of different as-a-service platforms: infrastructure as a service, platform as a service, software as a service. The further right that you go, the more responsibility the cloud service provider agrees to maintain or takes control of, and the less responsibility the customer has, right? There are pros and cons, and there are reasons why different folks choose different types of services. But one of the things I’ve noticed is that customers who feel confident in cloud security, who don’t necessarily have a lot of sensitive data, or who feel confident in their security controls, a lot of times feel more comfortable going further right, to the PaaS services, the SaaS services, or containers as a service, whereas IaaS services are certainly a great option as well.
In general, folks who are moving from on-prem to the cloud may feel more comfortable making that transition to IaaS because it has the most like-for-like mapping, especially if you are already virtualized on-prem. So you maintain responsibility for, let’s say, managing the guest OS and assets on your VMs, network security, access, authentication, identity, deployment; all of that would be customer responsibility. In the IaaS model, GCP retains responsibility for maintaining the underlying infrastructure running GCP, but not too much outside of that. When you move to platform as a service, our customers really assume responsibility for application-layer security and their deployments’ access policies, and the cloud service provider assumes responsibility for things like access and authentication, some of the guest OS, and data and content management.
Within the software as a service model, your responsibility is really around the access policies and the content that you have running within your SaaS services. And containers as a service, which we don’t have listed here but I think could be considered its own category, if you will: there your responsibility is similar to PaaS to some extent, right? You’re responsible for implementing access and control policies for your containerized environments. The final thing is compliance offerings. GCP makes a very concerted effort, it’s definitely a guiding principle, if you will, to continuously meet more and more of the security regulations, certifications, and policy requirements that customers have.
These are just a few of the ones that we currently meet. We meet many more than that, and we are always expanding. Like I mentioned earlier, we make contractual commitments to maintain, to the best of our abilities, our ISO 27001, 27017, and 27018 certifications. We also make commitments around SOC 2 and SOC 3, as well as PCI DSS. We also have best practices catered for things like GDPR and GxP. Now, with that said, I did want to thank everyone for y’all’s time. I hope you all have a great day, that this was a useful session, and that you learned a little bit more about security on the cloud. All right. Have a good one. Bye.