
CCSP Exam Cram (Full Training Course - All 6 Domains)

By Inside Cloud and Security

Summary

Key takeaways

- **CCSP Exam: 150 Questions, 4 Hours**: The CCSP exam updated in 2022 now includes 150 questions up from 125, with 50 unscored pre-test questions, and lasts four hours instead of three. [02:09], [02:40]
- **CISSP Substitutes CCSP Experience**: Earning the CISSP credential substitutes for the entire CCSP experience requirement of five years IT, three years security, and one year in a CCSP domain. [03:12], [03:35]
- **Spaced Repetition Beats Forgetting Curve**: Spaced repetition counters the forgetting curve by repeating study sessions at increasing intervals from minutes to days, weeks, and months for long-term retention, unlike short-term cramming in boot camps. [04:19], [05:44]
- **Elasticity vs Scalability Difference**: Elasticity automatically grows/shrinks resources based on demand like auto-scaling, while scalability handles steady growth via SKU selection or instance count for day-to-day operations. [18:39], [19:11]
- **Shared Responsibility Shifts by Model**: In IaaS customer handles OS/apps/data while CSP manages infrastructure; PaaS adds OS/middleware to CSP; SaaS CSP manages nearly all with customer configuring access/data recovery. [38:49], [40:25]

Topics Covered

  • Full Video

Full Transcript

welcome to the ccsp exam cram 2023 Edition this is the complete course designed to help you get further faster in your exam prep

with coverage of all six domains of the ccsp exam along with exam prep strategy guidance leveraging proven learning techniques used successfully by many thousands before you

as someone in a cyber security leadership role who works with these Technologies every day I'm certain you're going to find the ccsp exam challenging but equally confident you'll find the skills you take away very

relevant in your future cyber security roles more importantly last year I helped hundreds of thousands just like you achieve cyber security certifications

like the Security Plus the cissp and now bringing that formula to the ccsp exam to help you prepare for exam day without the need for expensive boot camps

and because this is the complete course we'll be covering all six domains from the ccsp exam and covering every line item from the official ccsp exam

syllabus and you'll notice the domains have approximately equal weighting from domain 1 to domain six so we'll be giving these equal time throughout this

course as always I recommend the ccsp official exam study guide and practice test bundle to help prepare which includes a

thousand practice questions two practice exams and flash cards to help you review you can find a link to the latest and least expensive copy on amazon.com in the

video description because it's frequently requested I've included a PDF copy of these presentation materials in the video description so you can download and review at your leisure as you prepare

for exam day and I've also included a clickable table of contents in the video description so you can jump forward and back throughout this video and indeed

throughout the series as you prepare for the exam so let's talk a moment about the exam itself so the last update to the ccsp was in

2022 on August 1st the new version was released it now includes 150 questions up from 125 that includes 50 unscored

pre-test questions which in the words of ISC squared are included to help protect the security and integrity of the exam they're really protecting against

question dumps sometimes called brain dumps which are against the NDA of the exam and truth be told very unnecessary anyway this is a multiple choice exam

it's four hours in length it used to be three hours when ISC squared added the extra questions they added an hour to the allotted time now in terms of experience candidates must have a

minimum of five years cumulative paid work experience in Information Technology as well as three years in information security and one year or

more in one of the six domains of the ccsp common body of knowledge however

there's a nice surprise if you earn ISC squared's cissp credential that can be substituted for the entire ccsp experience requirement cissp comes with

its own five-year experience requirement albeit less specific than the ccsp but you can just wipe that off the map by taking the cissp exam and I have a link

to my free cissp exam cram course that's a very popular cert with employers and a great exam to focus on when you're three to five years into your cyber career

the passing score for the ccsp is 700 of 1000 possible points and there is no award for longest study

time so I recommend you make the most of your time and you knock this exam out in the least amount of time you need to master this material so I'd like to talk for just a moment about my recommended

exam preparation strategy I have a number of techniques here that have a lot of science behind them to help you get further faster in your exam prep I want to start by talking about the power

of repetition in particular I want to talk about a technique called spaced repetition so every time you study a piece of material over time you're going

to forget part of what you've learned we call that the forgetting curve but what you'll find through repetition is that with each repeating session you're going to

remember a bit more for a bit longer so in other words the forgetting curve becomes a bit longer and a bit shallower with repetition so how much time does it take to

remember anything for the long term anyway so to memorize material quickly you'll need to go through that process of spaced repetition in shorter Cycles

repeating right after learning and then a few minutes to a few hours apart this is great if you're trying to remember content for the short term you're doing

it all within potentially even a couple of days and this can be effective if there are just rough spots that you can't quite commit to memory

just before exam day now to remember concepts for a long time we need to space those repetitions those repeating

sessions out from a few minutes to a few hours to a few days and then weeks and even months potentially so you can see the process is parallel but the

space between the repetitions makes a difference in how long we commit that information to memory and that actually explains why those five day three thousand dollar boot

camps are actually so effective in getting you through exam day they cram a lot of material in there you repeat it quite a few times and you remember it long enough to get through that exam
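The expanding-interval idea behind spaced repetition can be sketched as a tiny schedule generator; the gap values here are illustrative, not a prescribed study plan:

```python
from datetime import date, timedelta

def review_schedule(first_session, gaps_days=(1, 3, 7, 14, 30, 60)):
    """Review dates at expanding gaps after the first study session.
    The gap values are illustrative, not a prescribed schedule."""
    dates = [first_session]
    for gap in gaps_days:
        # each review happens a longer stretch after the previous one,
        # which is what flattens and lengthens the forgetting curve
        dates.append(dates[-1] + timedelta(days=gap))
    return dates

plan = review_schedule(date(2023, 8, 1))
```

Shrinking every gap to hours instead of days gives you the short-cycle, boot-camp-style version of the same process: fast to build, fast to fade.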

many of us were labeled as a specific type of learner as children perhaps a visual learner or an auditory learner or even a tactile learner since that time

research has actually shown that everyone benefits from a variety of sources so I recommend you mix and match the techniques you like best as you

prepare for the exam that might include targeted reading from the official study guide practice exams live quiz or flash card review perhaps with a partner

PowerPoint review you can review the PDF that comes with this course and video content like this course tends to be my anchor I like to use video learning and then mix in other techniques to fill in

my rough spots but mix match and Repeat based on your preferences now there's a question I get a lot and that is what do I mean by targeted reading well I'll tell you I

mean use the official study guide for topics you are struggling with but not as a book you will read cover to cover use it when and where you need it to

make the most of your time very unlikely cover to cover in the official study guide is going to be your best approach and it's also been shown that

understanding concepts before you attempt to memorize greatly improves retention because you understand what it is you're trying to memorize it's not simply memorizing words

all right so let's get down to business in domain One Cloud Concepts architecture and design again I will cover every topic mentioned in the exam syllabus and I'm also going to

incorporate examples of Concepts using different Cloud providers when possible to supplement your knowledge in areas where you may not have exposure to a cloud service provider yet

the ccsp exam is what we'd call Vendor agnostic it doesn't focus on a specific vendor like AWS or Microsoft Azure but you'll find that the examples I provide

give you context to help you more effectively remember the concepts so I'd like to talk for a moment about what the official study guide calls exam

Essentials critical exam topics that according to the guide are very important to remember for the exam not the only topics but definitely amongst the most important for domain one they

call out the different roles in cloud computing like service providers service partners and service brokers identifying the key characteristics of cloud computing like on-demand

self-service multi-tenancy elasticity and scalability explaining the three cloud service categories IaaS PaaS and SaaS and the

differences between them describing the five Cloud deployment models public private hybrid community and multi-cloud identifying important related

Technologies and you see a lot of cutting edge modern tech here machine learning AI devsecops Quantum we're going to touch on all of them and

finally shared considerations in the cloud interoperability portability privacy resiliency we'll touch on them all and I'm going to give you a tour of

the shared responsibility model for cloud which will provide a foundation that makes onboarding all the concepts related to this exam much easier

so let's get into 1.1 which is understanding cloud computing Concepts we have cloud computing definitions so literally the definition of cloud computing according to nist cloud

computing roles and responsibilities you're going to hear some roles here you've probably heard of like cloud service provider and others maybe you haven't we'll touch on key cloud computing

characteristics what are the promises of the cloud Concepts like on-demand self-service broad network access multi-tenancy elasticity and scalability and just as importantly what's the

difference between those two and we'll finish out 1.1 with building block Technologies diving into virtualization storage networking databases and

orchestration again with examples from your various cloud service providers wherever I can give them to you first up is nist special publication

800-145 which is the nist definition of cloud computing this is a model for enabling ubiquitous convenient on-demand network access to a

shared pool of configurable computing resources that might include networks servers storage apps and services it depends on the cloud category and the

specific service we're working with that can be rapidly provisioned and released with minimal management effort or service provider interaction

and that's the promise of the cloud that resource consumption for the customer is easier while the cloud service provider handles the care and feeding of the underlying Cloud infrastructure

and now on to cloud computing roles we'll begin with cloud service provider that's the company that provides the cloud-based platform the cloud

infrastructure and applications to other organizations to customers as a service examples here Amazon's AWS Microsoft Azure Google Cloud platform those are

the big three if you see the CSP acronym in a question they're talking about the cloud service provider not to be confused with cloud services partner

our cloud services partner is a company that helps organizations to obtain and deploy cloud services they may offer Consulting Services they may offer software that runs in the cloud or both

on the services side Avanade Tata Accenture all good examples of big cloud services partners next we have the customer that's the

business or individual consuming the cloud services from the CSP they're often using public Cloud to complement or augment existing on-premises compute resulting in a

hybrid Cloud configuration currently the most common Cloud implementation and then we have the cloud service auditor this is a third party that can conduct an independent assessment of cloud

services Information Systems Operations performance security around the cloud implementation truly the audit scope may vary the key here is an independent

assessment that means the auditor is external to the customer or CSP organizations

then we have the cloud broker this is an entity that manages the use performance and delivery of cloud services so more directly they often negotiate

relationships between cloud service providers and cloud service consumers so between the CSP and the customer they serve as an intermediary an advisor a

negotiator between customer and CSP so let's go a level deeper on the functions of a cloud broker so they may serve in the area of service intermediation so enhancing a given

service by improving specific capabilities and providing value-added services to the cloud consumer to the customer they may help with service aggregation

this is where we combine and integrate multiple Services into one or more new services and then service Arbitrage which means the broker has the flexibility to choose

services from multiple agencies potentially resulting in a multi-cloud architecture now let's take a look at a few other

cloud computing roles these are less likely to appear on the exam but just in case worth knowing before you walk into the exam room there's the cloud administrator responsible for implementation monitoring and

maintenance of the cloud not unlike a systems administrator on-prem Cloud application architect the person who's adapting porting and deploying the applications let's talk about that word

porting for a moment porting an application means moving that application its associated services and databases from an on-premises environment to the cloud that may

include some refactoring to prepare that application for operation in the cloud then we have the cloud architect who designs and develops Solutions so not

unlike an architect on premises the cloud operator responsible for daily operational tasks the cloud data architect who manages data storage and data flow within as

well as to and from the cloud so you may notice a pattern here that some of these roles while they have the word cloud in them definitely look parallel to roles that we've seen in it on premises

we have the cloud service manager responsible for business agreement and pricing for the cloud customer so maybe working as an employee within the customer negotiating contracts with the

CSP or Partners we have cloud storage administrator managing storage volume and repository assignment configuration and potentially security the cloud service business manager

overseeing business and billing Administration so this is the person doing the paperwork with regards to paying that bill that operational expenditure then cloud service

operations manager who prepares systems operations and support for the cloud and administers services and the further we went down this list the more niche I'd say these roles are you're not going to see all of these

roles in any one organization and most of them certainly only in the larger organizations so finishing out cloud computing roles one more less likely to appear on the

exam but important to mention the managed security service provider or mssp which is a company that maintains the security environment for companies

running in the cloud they may manage firewalls idps your sim Solutions and other Security Services and infrastructure and they may even provide

an outsourced security operations center that's staffed to monitor security operations and provide incident response now we're going to take a look at Key

cloud computing characteristics those characteristics common in Cloud platforms and services there's on-demand self-service where customers can scale their compute and storage needs with

little or no intervention or prior communication from the provider from the CSP they can use what they want when they want and their technologists can access Cloud resources almost

immediately when they need to do their jobs providing agility in Service delivery they're going to be more responsive and Broad network access services are

consistently accessible over the network regardless of the user's physical location and it's no accident for this reason that your big csps all have a

global presence across not only every major continent but across the busiest countries and regions within those countries so close points of presence

we have multi-tenancy which means many different customers share use of the same Computing resources physical servers that support our workloads might be the same physical servers supporting

other customers workloads in the underlying Cloud infrastructure the compute the storage the networking it's all shared and shared generally by multiple customers

you should also be familiar with the concept of over subscription Cloud providers are going to over subscribe their total capacity which means they'll sell more capacity than they actually

have so why would they do that well because in the big picture customers won't collectively be using all of that capacity simultaneously
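To make the oversubscription bet concrete, here's a toy calculation with made-up numbers (not any real provider's figures):

```python
# Toy oversubscription math with invented numbers, not real CSP data
physical_cores = 1000      # capacity the provider actually owns
sold_vcpus = 4000          # capacity sold across all customers combined
avg_utilization = 0.20     # fraction of purchased capacity customers use on average

oversubscription_ratio = sold_vcpus / physical_cores
expected_demand = sold_vcpus * avg_utilization  # cores actually needed at once

print(f"ratio {oversubscription_ratio}:1, expected demand {expected_demand} cores")
```

At a 4:1 ratio with 20% average utilization the expected simultaneous demand is 800 cores, comfortably under the 1000 that exist, which is why selling more capacity than you have can still be safe in the big picture.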

and this is true in IaaS PaaS and SaaS scenarios all three now the level of cloud service provider responsibility will vary in those IaaS PaaS and SaaS scenarios we'll talk about that in the

shared responsibility model later in this video we have rapid elasticity and scalability this allows the customer to grow or

shrink the IT footprint as necessary to meet their compute needs their storage needs without excess capacity these two are related but they're unique so let's

talk about the difference between elasticity and scalability elasticity is the ability of a system to automatically grow and Shrink based on

app demand capabilities can basically be rapidly provisioned and de-provisioned think auto-scale scaling out and scaling in

adding additional instances quickly Auto deploying these instances they are ephemeral instances available only for the time they are needed scalability on

the other hand is the ability of a system to handle growth of users or work the ability to grow as demand increases scalability is generally controlled

by a SKU or service tier selection or the number of instances you're deploying scalability is more about deploying the necessary capacity for steady state

operations day-to-day elasticity is adding that burst capability when we have sudden increases in traffic then there is resource pooling this

enables the cloud provider the CSP to apportion resources as needed across multiple customers so resources are not underutilized and they're also not over

utilized or overtaxed this enables the provider to make capital investments that greatly exceed what any single customer could provide on their own using their own budget in their own data center

and it allows the provider the CSP to meet demands from multiple customers while remaining financially viable while remaining profitable one of the downsides here is this can

result in some degree of location dependence it's beyond the customer's control however most of your major csps and I'm talking about AWS Azure and

Google Cloud platform off the top of my head do generally provide flexible options enabling customers to choose location even in their SAS offerings and

the ability for a customer to choose location can be very important in data residency compliance so if our data needs to reside in a particular country as would be true in Germany and

potentially in the EU with gdpr that flexibility to choose where data resides in particular is one that is top of mind and rounding out cloud computing characteristics let's talk about

measured service which means that almost everything you do in the cloud is metered it's measured and tracked for management and billing and your Cloud providers will measure metrics of

resource consumption they'll be looking at the number of minutes a virtual server is running they'll look at the amount of disk space you consume they'll look at the number of function calls you

make in a serverless scenario and potentially the amount of network egress and Ingress generally with cloud service providers Network Ingress so getting data into the

cloud is free but when you try to take it out you're often going to pay for that it's a tough metric to predict so you do want to be careful in looking at how a service is billed

and measured service is also known as metered service so whether you see measured service or metered service on the exam two phrases for the same concept
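As a toy illustration of measured/metered service, the sketch below totals a bill from a few commonly metered dimensions; the `estimate_bill` helper and its rates are invented for illustration and are not any real cloud provider's pricing:

```python
# Toy metered-service bill; rates are made up, not real CSP pricing
def estimate_bill(vm_minutes, storage_gb_months, egress_gb,
                  vm_rate=0.002, storage_rate=0.02, egress_rate=0.08):
    """Sum a few commonly metered dimensions.
    Ingress is typically free with the big CSPs, so it is not metered here."""
    return round(vm_minutes * vm_rate
                 + storage_gb_months * storage_rate
                 + egress_gb * egress_rate, 2)

# e.g. one VM running for 30 days, 100 GB of disk, 50 GB of network egress
bill = estimate_bill(vm_minutes=30 * 24 * 60, storage_gb_months=100, egress_gb=50)
```

Notice how the egress term is the one driven by user behavior rather than your own provisioning choices, which is exactly why it is the hard metric to predict.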

finishing up section 1.1 we're going to talk about the five building block Technologies of the cloud which are

compute network storage databases and orchestration so let's begin with compute and in the area of compute

infrastructure as a service or IaaS is the basis for compute capacity in the cloud the CSP provides the server the storage and the networking hardware and virtualization of all of these

components the customer installs middleware and applications on Virtual machines and the customer only pays for what they use the charges stop when the instance is stopped or deleted we're

going to dig into the boundaries of responsibility a bit later in this video in the shared responsibility model again something not mentioned in the official study guide but you're going to

appreciate the context it brings to everything we're learning in this series let's talk about the basics of network so Cloud networking in the cloud is all

virtualized to allow customers to design and customize the network to their needs this enables customers to segment networks and restrict access however they'd like implementing preferably a

zero trust Network architecture and physical Network components are virtualized into a software defined

Network or sdn examples in major csps include v-net in Azure and VPC in AWS and Google Cloud platform

so let's talk about an sdn this is a network architecture approach that enables the network to be intelligently and centrally controlled or programmed using software and sdn is defined by

three separate planes or layers if you will there's the management plane the business applications that manage the underlying control plane and are exposed with Northbound interfaces I'll

visualize a Northbound and southbound interface for you in a moment we have the control plane which is where control of network functionality and programmability is made directly to the

devices through the southbound interface OpenFlow is the original protocol at the control plane and is still common today and then we have the data plane and the

network switches and routers located at this plane are associated with the underlying Network infrastructure now data forwarding happens here so this is also sometimes referred to as the

forwarding plane so just to visualize where you hear we have the control play in the sdn controller and the management plane which is exposed to the control plane through the

Northbound interface and the data plane where our switches and routers are underlying Network infrastructure reside so again management plane exposed through the Northbound interface data

plane exposed to that controller through the southbound interface and to dig in just a bit further here the Northbound interface ensures only trusted authorized applications access

critical network resources and OpenFlow as a protocol at the control plane interfaces with devices through the southbound interface
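A minimal sketch of the three planes just described; the class names and rule format are hypothetical, and the `install_flow` call stands in for a real southbound protocol like OpenFlow:

```python
# Illustrative SDN model: hypothetical names, not a real controller API
class Switch:
    """Data (forwarding) plane device: holds a flow table, forwards traffic."""
    def __init__(self, name):
        self.name = name
        self.flow_table = []

    def install_flow(self, rule):
        # Southbound interface: the controller programs the device directly
        self.flow_table.append(rule)

class Controller:
    """Control plane: centrally and intelligently programs every device."""
    def __init__(self, switches):
        self.switches = switches

    def allow(self, src, dst):
        # Northbound interface: called by management-plane applications;
        # only trusted authorized applications should reach this API
        rule = {"match": (src, dst), "action": "forward"}
        for sw in self.switches:
            sw.install_flow(rule)

# a management-plane application asks the controller to permit a flow
edge = Switch("edge-1")
core = Switch("core-1")
ctrl = Controller([edge, core])
ctrl.allow("10.0.0.0/24", "10.0.1.0/24")
```

The point of the layering is visible in the code: the application never touches a switch, it only talks north to the controller, which talks south to the devices.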

let's move on to storage and storage varies by model so whether we're talking about infrastructure as a service platform as a service or software as a service your storage

is virtualized as you'd expect and the ccsp exam considers three types of storage long-term ephemeral and raw so ephemeral is relevant for IaaS instances and it exists only as long as

the instance the VM is up that would be a temp disk generally and raw storage maps to a LUN a logical unit number on a storage area

network attached to a VM so that concept pops up more in a hybrid scenario where we're dealing with a SAN directly on-prem you're not going to see much

around SANs in a cloud portal itself it's all really virtualized now long-term storage offered by some csps is tailored to the needs of data

archiving that may include features like search immutability and data lifecycle management and long-term storage typically uses either volume or object

storage infrastructure so let's talk about each of those so an example of a volume or block storage would be Amazon EBS and Azure disk storage

an example of object storage would include Amazon S3 and Azure blob storage so let's talk about storage in the context of platform as a service here

the focus is on databases usually multi-tenant relational SQL database as a service it might be Microsoft SQL it could be MySQL it could be postgresql

and then there's Big Data as a service these are non-relational or nosql data repositories like document graph column

or key value stores examples here include mongodb Cassandra and hbase you should also be familiar with the concept of storage consistency which

describes the time it takes for all data copies to be the same we have strict consistency at one end of the scale that ensures all copies of the data have been

duplicated amongst all relevant copies before finalizing the transaction and then to increase availability there's at the other end of the scale eventual

consistency where data consistency is relaxed and it reduces the number of replicas that must be accessed during read and write operations before the

transaction is finalized data changes in this case are eventually transferred to all data copies via asynchronous propagation over the network and depending on the NoSQL flavor

you're working with some cloud services will offer you degrees of consistency so you'll have options somewhere between strict and eventual to set the level of consistency that works in your

application's model so let's talk about storage in the software as a service or SaaS context so we have content or file storage so this is file-based content stored within the

application Microsoft Office is the perfect example a Content delivery network is where content is stored in object storage and then replicated to multiple

geographically distributed nodes to improve internet consumption speed what that does is places content near the points of presence

where customers will access a service there's information storage and management so data entered into the system via the web interface and stored within the SAS application this often

utilizes databases which are in turn installed on object or volume storage and when we think about software as a service that's really just a service we use and don't think too much about the underlying infrastructure so we'll talk

about the boundaries of responsibility there in the shared responsibility model later in this video so let's move on to databases so multiple options are

available you have multiple flavors of relational and non-relational that we touched on a moment ago but there are managed database services so PaaS options that shift infrastructure maintenance to

the cloud service provider there are also IaaS-hosted databases that are an option where PaaS is not possible or practical so you have examples on the PaaS side of Azure DB for Microsoft SQL

they also have a MySQL flavor and a Postgres flavor on the Amazon side you see Amazon RDS and DynamoDB

generally PaaS is preferable but we see the IaaS database options pop up where customers have isolation or compliance requirements that make PaaS just not

practical at all so let's talk about orchestration the fifth building block so Cloud orchestration creates automated workflows for managing Cloud environments and they're building on the

foundation of infrastructure as code reducing manual administration tasks and orchestration may be a script a function a runbook or developed in an external

workflow engine so a few examples here you have Azure Automation AWS Systems Manager or you could even look at third parties like Zapier that integrate with hundreds of services and multiple Cloud

platforms we're going to close out section 1.1 with a look at some virtualization Concepts you may see on the exam so virtual assets can include virtual

machines which we talked about previously virtual machines can factor in a virtual desktop infrastructure or vdi solution a managed desktop if you will

software-defined networks and virtual storage area networks so these are all virtualization Concepts and hypervisors are the primary components that manage

virtual assets but also provide attackers with an additional Target so both hypervisors and the VMS that run on them need to be patched and secured so

we have our compute our Network and our storage let's talk about security issues with cloud-based assets so storing data in the cloud increases

our risks so steps may be necessary to protect that data depending on the value of that data we need to focus on our valuable asset when releasing

cloud-based services you should know who is responsible for the maintenance and the security and depending on the category we're working with IaaS PaaS or

SaaS the level of responsibility of customer versus cloud service provider will vary we're going to touch on these and break them down in the shared responsibility model shortly
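As a rough study aid for the shared responsibility model just mentioned, here's an illustrative mapping; exact boundaries vary by provider and service, so treat these lists as approximate rather than official:

```python
# Illustrative shared-responsibility mapping, not any CSP's official matrix.
# Underneath every model the CSP owns virtualization, servers, storage, network.
RESPONSIBILITY = {
    "IaaS": {"customer": ["data", "applications", "middleware", "OS"]},
    "PaaS": {"customer": ["data", "applications"]},
    "SaaS": {"customer": ["data", "access configuration"]},
}

def who_manages(model, layer):
    """Anything not on the customer's list falls to the cloud service provider."""
    return "customer" if layer in RESPONSIBILITY[model]["customer"] else "csp"
```

Reading it top to bottom shows the pattern: moving from IaaS to PaaS to SaaS shifts layer after layer to the CSP, until in SaaS the customer is mostly left configuring access to their own data.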

the cloud service provider provides the least amount of maintenance and security in the IaaS model let's talk about hypervisors we have the type 1 or bare metal hypervisor that's installed

directly onto the server hardware flavors there include VMware ESXi KVM and Microsoft Hyper-V and then we have

the type 2 or hosted hypervisor which is installed on top of a host OS like Windows or Linux varieties here include VMware

Workstation and Oracle virtualbox to name a couple so let's look at the characteristics of the type 1 hypervisor to start so it certainly has a reduced

attack surface when we compare it to the type 2 hypervisor that has a host operating system and this makes the type 1 hypervisor more secure if implemented properly

we see type 1 hypervisors typically implemented for QA load testing and production scenarios and the type 1 hypervisor is typically more expensive

than a type 2 hypervisor now Switching gears to the type 2 hypervisor the characteristics here vary just a bit we have the increased attack

surface due to the host operating system and this makes it slightly less secure versus type 1 even if implemented properly it's commonly used for individual

development and lab scenarios and it's typically less expensive than a type 1 hypervisor that brings us to section 1.2 describe Cloud reference architecture so

here we'll touch on cloud computing activities cloud service capabilities cloud service categories this is where

we touch on IaaS PaaS and SaaS talk about Cloud deployment models including public private hybrid community and multi-cloud

we'll talk about the shared considerations of cloud but just ahead of cloud deployment models I'm going to slip in a quick talk on the shared responsibility model which is really

going to ease your onboarding of most of these Concepts then we'll touch on the impact of related Technologies everything from data science and machine

learning and AI to Containers Quantum and devsecops let's start with cloud computing activities and we'll Begin by

looking at activities according to ISO 17789 Cloud reference architecture and which parties map to which activity so

according to ISO 17789 the following are the responsibilities of the customer so certainly using cloud services performing any service trials to ensure

the appropriateness of a specific cloud service monitoring Administration billing and usage reports you know certainly the cloud platform the CSP

will provide billing and usage tooling often but billing and usage reporting falls to the customer and also operations handling problem

reports performing business administration administering Cloud tenants selecting and purchasing services and requesting audit reports

these are all again customer responsibilities now according to ISO 17789 these are the responsibilities of the cloud service provider the CSP

preparing systems providing the services managing assets and inventory providing audit data whether contractual or required by law managing customer

relationships handling customer requests performing peering with other cloud service providers ensuring compliance at least where compliance is mandated by

law or the CSP promises compliance with regulations such as gdpr or fedramp or HIPAA and providing of course network connectivity

and again that's the CSP responsibility now let's look at the third party here the partner so partner responsibilities according to

ISO 17789 include design creation and maintenance of service so that would be typical in an architecture scenario where the partner is providing

Consulting Services testing potentially performing audits as an independent Assessor setting up legal agreements assessing customers assessing the

marketplace or really determining where the partner can effectively provide services that are value-add in the cloud

scenario so just as a quick refresher on partner versus provider so we have the CSP that delivers the cloud platform and infrastructure that customers subscribe

to and use and then we have the partner that provides the guidance and implementation services and potentially software so Microsoft Amazon and Google are

providers and partners are the service and software companies we talked about some of these like Accenture Tata avanade

and again anywhere you see the CSP acronym that is absolutely referring to the cloud service provider next on the agenda are cloud service

capabilities the capabilities advantages and efficiencies of the public Cloud so first we have application capability types where we see an overall reduction

in cost reduced application and software licensing reduced support cost and a reduction in the need to worry about our backend systems and those capabilities

our cloud service provider gives us the ability as a customer to focus on business use cases while they handle the care and feeding of the underlying platform and infrastructure that

previously would have been in our data center then platform capability types language and framework support support for multiple environments and allowing customer choice and reducing vendor

lock-in as well as improving a customer's ability to Auto scale giving us that ability to scale in and out as

demand necessitates and then infrastructure capability types again scale converge Network shared capacity remember the cloud service provider over

subscribe so they give us the appearance of infinite capacity self-service on-demand capacity High reliability service resilience through

distribution across regions and where this is a capital expense on premises it's an operational expense in the cloud so as organizations move to

the cloud we see a shift in budget from capex to Opex and remember the customer only pays for what they use so let's talk about

Cloud models and services in particular I want to talk about the shared responsibility model as it applies to IaaS PaaS and SaaS and this will help to

differentiate the responsibility of the CSP versus the customer in your mind as we go through different scenarios so

when we're on premises responsibility is easy it belongs 100% to the customer the customer is responsible for resiliency availability

redundancy all the way down to the wire 100% yours as we move into the cloud in the IaaS model infrastructure as a service we see the cloud service provider takes

on care and feeding of our virtualization infrastructure the servers the storage the networking and the hypervisor we're really just consuming virtual machines running on

top of the hypervisor when we move into PaaS we see the cloud service provider taking on more responsibility now managing the OS middleware and runtime

and as a customer we're really just responsible for our applications and our data and when we move into SaaS thinking services like Office 365 you see the

cloud service provider the CSP takes on even more responsibility and now we have a shared responsibility there for data and application configuration but

largely we're just using the service so you see as we move from IaaS into PaaS and SaaS the CSP is progressively onboarding more and more responsibility allowing us

to focus as a customer on simply using the service so just to name the advantages here you know the CSP provides the building blocks network storage and compute in

the IaaS scenario the CSP manages the staff the hardware the data center and that gives us some key benefits here usage is metered so we're paying for

what we use it eases our scale our scale up our scale out or scale down it reduces our energy and cooling costs in the data center because we're shifting our data center to the csps data center

right examples here include Azure virtual machines Amazon EC2 and Google Cloud Platform's Compute Engine
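the metered pay-for-what-you-use billing just mentioned can be sketched in a toy calculation — the hourly rate and hours below are hypothetical, not any CSP's real pricing:

```python
# Toy illustration of metered, pay-for-what-you-use IaaS billing.
# The rate and hours are hypothetical, not quoted CSP prices.

def metered_cost(hours_run: float, rate_per_hour: float) -> float:
    """Customer pays only for the hours the VM actually ran."""
    return round(hours_run * rate_per_hour, 2)

# A VM deallocated outside business hours accrues far fewer billable
# hours than one left running 24x7 over a 30-day month.
always_on = metered_cost(24 * 30, 0.10)       # 720 hours
business_hours = metered_cost(10 * 22, 0.10)  # ~220 hours

print(always_on, business_hours)  # 72.0 22.0
```

the point of the sketch is the opex shift: stopping the meter directly stops the spend, which has no on-premises capex equivalent.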

so moving into PaaS here we saw the customer is responsible for deployment and management of apps and the CSP manages provisioning configuration Hardware and the operating

system they're taking on more responsibility for us key benefits in PaaS is that the core infrastructure is updated by the provider it's really entirely off our plate

Global collaboration for app development and running multiple languages seamlessly so in the PaaS constructs for hosting web apps or functions we're

going to have a good idea of the language support that comes from those csps but they're generally going to support a variety of languages examples of PaaS services on the

Microsoft platform would be Azure SQL their API Management Azure Functions Azure App Service so all PaaS services where we're really just focused on the application

and our data then finally in the SaaS bucket again we're just configuring features as a customer the CSP is responsible for management operation and

service availability and SaaS brings even more advantages here the customer still has some responsibility in terms of access management and data recovery for

example in Office 365 we typically have backups of that data so if we have a ransomware attack we can recover more quickly certainly in many of these SaaS scenarios we'd be able to recover with

the built-in capabilities of the SaaS service but the customer can step in and add some recoverability of their own key advantages here key benefits would

include limited Administration responsibility and really limited skills required the bar for entry is very low when it comes to SaaS we're just a consumer as a customer and the service is always

up to date we're never worrying about the care and feeding the patch management it's all handled for us and again we have Global access here so available across continents countries

and regions examples of SaaS Services Office 365 ServiceNow Salesforce all well-known SaaS applications
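the shared-responsibility split across IaaS PaaS and SaaS described above can be captured in a small sketch — the layer names and groupings here are a simplified illustration, not any CSP's official responsibility matrix:

```python
# Simplified sketch of the shared responsibility model.
# Layer names and groupings are illustrative only.

CSP_MANAGED = {
    "on-prem": set(),  # customer owns everything, down to the wire
    "iaas": {"datacenter", "network", "storage", "servers", "hypervisor"},
    "paas": {"datacenter", "network", "storage", "servers", "hypervisor",
             "os", "middleware", "runtime"},
    "saas": {"datacenter", "network", "storage", "servers", "hypervisor",
             "os", "middleware", "runtime", "application"},
}

ALL_LAYERS = CSP_MANAGED["saas"] | {"data", "access"}

def customer_managed(model: str) -> set:
    """Whatever the CSP does not manage falls to the customer."""
    return ALL_LAYERS - CSP_MANAGED[model]

# Even in SaaS the customer keeps responsibility for data and access.
print(sorted(customer_managed("saas")))
```

notice how the customer's set shrinks as you move from IaaS to PaaS to SaaS, which is exactly the progressive onboarding of responsibility by the CSP described above.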

I want to talk about just one bit of nuance in the PaaS in the platform as a service bucket and that's really when we see that term serverless how is serverless different

from platform as a service in terms of customer responsibility so we have PaaS on one hand serverless on the other and they do have some commonalities and in both cases devs

have to write code and there is no server management so we're really focused on our application and deploying that app now there are some differences though between paths

and serverless so for example in PaaS we have more control over the deployment environment if I think about hosting web applications in PaaS we can typically

pick service tiers that give us some control over scale and isolation and some of our features on the serverless side we have less control over the deployment environment now on the PaaS side the application has to be

configured to Auto scale we typically have to configure the scale in and scale out through whatever mechanisms the PaaS service offers serverless generally scales automatically
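the kind of scale-out rule you configure on a PaaS service can be sketched as a small decision function — the threshold, per-instance capacity, and bounds below are hypothetical, not a real platform's defaults:

```python
import math

# Toy autoscale decision of the kind a PaaS scale-out rule encodes.
# Capacity and instance bounds are hypothetical.

def desired_instances(requests_per_sec: float,
                      capacity_per_instance: float = 50.0,
                      min_instances: int = 1,
                      max_instances: int = 10) -> int:
    """Scale out to cover demand, clamped to the configured bounds."""
    needed = math.ceil(requests_per_sec / capacity_per_instance)
    return max(min_instances, min(max_instances, needed))

print(desired_instances(10))    # 1  (scaled in to the floor)
print(desired_instances(260))   # 6  (scales out with demand)
print(desired_instances(5000))  # 10 (clamped at the ceiling)
```

in serverless the platform makes an equivalent decision for you per invocation, which is why no such rule has to be configured.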

on the PaaS side the application typically takes a while to spin up if you have a web app that hasn't been accessed in a few hours the threads are going to die it's going to take a few seconds for that app to spin up on the

serverless side application code only executes when it's invoked when it's basically called so it means it's going to start faster it also means it's only going to be

billing for execution typically so it's going to be very inexpensive certainly PaaS is going to reduce our operating expense versus the IaaS model and serverless May in certain use cases

reduce our costs even further when serverless is the right tool for the job so just talking through serverless architecture a bit further it's a cloud

computing execution model where the cloud provider dynamically manages the allocation and the provisioning of the servers it's hosted on a pay-as-you-go model based on use

and the resources are stateless servers are ephemeral and often event-triggered function as a service is a great example of serverless architecture
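a function-as-a-service unit of deployment can be as small as a single handler — the (event, context) signature below follows AWS Lambda's Python handler convention, but the "name" field in the event is made up for illustration:

```python
# Minimal function-as-a-service sketch. The (event, context) signature
# follows AWS Lambda's Python convention; the "name" field is made up.

def lambda_handler(event, context):
    """Runs only when invoked; no server for the customer to manage."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"hello, {name}"}

# Locally you can exercise the same function the platform would invoke:
print(lambda_handler({"name": "ccsp"}, None))
```

because the code only executes when invoked, billing accrues per execution rather than per provisioned server.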

Azure functions for example or AWS Lambda would be the equivalent on the AWS platform and then services integration so provisioning

of multiple Business Services is combined with different IT services to provide a single Business Solution next up we're going to dive into Cloud deployment models but let's start with

just a quick recap of the benefits of cloud computing it's cost effective it's Global we have presence around the world secure scalable

elastic always current so the cloud allows us to focus as a customer on our business use cases and hand over a lot of the care and feeding of the infrastructure and the platform

to the cloud service provider let's talk about the public cloud so in the public Cloud Model everything runs on your Cloud provider's Hardware

so the advantages here we have the perception of infinite capacity easy scalability agility a pay-as-you-go model so we're not investing large

amounts of capital in the data center we're paying for what we use no maintenance low barrier to entry in terms of skills the private Cloud on the other hand

where we're hosting a hundred percent in our own data center offers advantages of its own so certainly in Legacy scenarios

a dedicated environment to our infrastructure can make a lot of sense it allows us to support Legacy applications where maybe we're not ready to bring it up to a supported version

for the public Cloud maybe we need control over the environment maybe we have specific compliance requirements so the private Cloud enables greater control of upgrade cycles and Legacy

apps and support for some compliance scenarios these are key use cases where private Cloud factors prominently today and a hybrid Cloud combines public and private clouds allowing you to run your

apps in the right location and most would say that this is the most common model today that organizations have by and large moved at least some of their

workloads into the public Cloud but they still have in large environments an on-premises data center or at least some presence of on-premises infrastructure so hybrid Cloud where we're connecting

the private Cloud to the public cloud is very common today and it gives us flexibility in the Legacy and compliance and scalability scenarios if we have a legacy scenario where we need control

where we have compliance concerns we can leave that in our private data center in our private cloud and for apps where we don't have those hurdles we can move those into the public cloud in the near term

but it enables the organization to control the pace of public Cloud adoption and then there's the community Cloud this is similar to private clouds in that they're not open to the General

Public but they are shared by several related organizations in a community so you can see this with an industry group for example that wanted to take advantage of

the benefits of public cloud and perhaps take on the learnings and the risks together and then the multi-cloud scenario this combines resources from two or more

public cloud service providers this allows organizations to take advantage of service and price differences but it can add some complexity and we do see

multi-cloud in scenarios where customers are moving to a new Cloud for that price Advantage but it's easier said than done to take all of your services out of that

old Cloud so we wind up in a multi-cloud scenario so at the end of the day confidentiality integrity and availability are core objectives of

security so let's just touch on what we call the CIA Triad briefly confidentiality integrity and availability so confidentiality

access controls to ensure that only authorized subjects can access objects on exams if you see subjects subjects are generally talking about people

security principals accessing objects that's our data our assets Integrity ensures that the data or system configurations are not modified

without authorization and availability authorized requests for objects must be granted to subjects within a reasonable amount of time if we don't have

availability all of our security is for naught now we're going to get into Cloud shared consideration so considerations in a multi-tenant environment like the cloud

the first is interoperability so the ability of one cloud service to interact with other cloud services that could be within a single CSP it could be between

csps it could be another third party most of your csps also have a cloud marketplace with certified apps and services that provide paths for interoperability across platforms as

well another consideration reversibility so this speaks to the process for cloud service customers to retrieve their data and their application artifacts and for

the CSP to delete all cloud service customer data and contractually specified cloud service derived data after an agreed period

now customer access to data also appears in some regulations gdpr is one that comes immediately to mind let's talk about the five facets of

cloud interoperability the first is policy the ability of two or more systems to interoperate while complying with governmental laws

regulations and organizational mandates the next is behavioral where the result of the use of the exchanged information matches the expected outcome

the third is transport the commonality of the communication between Cloud consumer and provider and other providers and this really speaks to known standard

secure methods of Transport https for example or various message queuing standards the fourth is syntactic two or more systems should understand the other

system's structure of exchanged information through encoding syntaxes such as Json and XML would be two good examples and the fifth is semantic data

the ability of systems exchanging information to understand the meaning of the data model within the context so virtual machines containers storage

and networking Concepts so continuing down the path of considerations portability the ability to move applications and Associated data

as in stored within storage or data based repositories between cloud service providers between Legacy and Cloud environments or between public and

private Cloud environments so think hybrid Cloud for example and a couple of sub-considerations here cloud data portability the ability to easily move data from one cloud service

to another without the need to re-enter that data we see this commonly with blob storage or with databases where we need to move a database to migrate it from

one provider to another and then Cloud application portability the ability to migrate an application from one CSP to another or between a

customer's environment and a cloud service portability prevents vendor lock-in so let's talk about the three facets of

cloud data portability and they are syntactic so transferring data from a source system to a Target system using formats that can be decoded on the

target system with features like XML or open virtualization format the second is semantic so transferring data from a source system to a Target so

that the data model is understood within the context of the subject area by the Target and the third is policy transferring data from a source system to a Target so

that governmental laws regulations and organizational mandates are followed the fact of the matter is if you want interoperability and you want portability you need to pick cloud

service providers that offer services that are highly standardized that are using open and standard communication formats like XML like Json like https
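the reason standard formats matter for portability can be shown with a round trip through JSON — the record fields here are made up for illustration:

```python
import json

# Standard encodings like JSON are what make data portable between
# systems: any target that speaks the format can decode it losslessly.
# The record fields are illustrative.

record = {"vm": "web-01", "region": "westus", "disks": [32, 128]}

wire = json.dumps(record)    # encode on the source system
decoded = json.loads(wire)   # decode on the target system

assert decoded == record     # round trip preserves the data model
print(wire)
```

this is syntactic interoperability in miniature: both sides agree on the encoding, so the structure of the exchanged information survives the move between providers.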

so continuing with shared considerations availability so systems and resource availability defines the success or failure of a cloud service that's no surprise you

know check service level slas and how multi-service slas are calculated your major csps like Microsoft like AWS will provide instructions in their

documentation for how an SLA is calculated when you've Incorporated multiple Services into an integrated solution and then resiliency so the ability of a

cloud service Data Center and its Associated components including your server storage and so on to continue operating in the event of A disruption

and you want to look for a cloud provider with global presence Regional redundancy and then Zone redundancy within that region when it comes to availability we'll talk

about slas Olas and plas in depth in domain six so that's service level agreements operating level agreement and privacy level agreements
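the multi-service SLA math mentioned above can be sketched: when services are composed in series the availabilities multiply, so the composite figure is always lower than any single part — the 99.9%/99.95% inputs below are hypothetical, not quoted CSP SLAs:

```python
# Composite availability of services composed in series multiplies the
# individual SLAs. The input figures are hypothetical, not real SLAs.

def composite_sla(*slas: float) -> float:
    combined = 1.0
    for s in slas:
        combined *= s
    return combined

web, db, queue = 0.999, 0.9995, 0.999
print(round(composite_sla(web, db, queue) * 100, 3))  # ~99.75%
```

this is why CSP documentation walks through the calculation for integrated solutions: three "three-nines-ish" services chained together no longer deliver three nines end to end.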

now let's shift gears and just talk through an example so the following example here explains these Concepts in Microsoft Azure while AWS and Google Cloud platform support the same Concepts and

in fact the terminology is generally very similar or in some cases even the same remember the ccsp exam is cloud service provider agnostic so I wanted to give

you an example I had in hand to give you context so we'll start at the global level and work our way down so starting with an Azure geography so this is a discrete Market typically containing two

or more regions that preserves data residency and compliance boundaries so with Azure they have geographies and you can see here there's a geography for

North America for Europe for Australia China has its own geography for political reasons and legal reasons you see Africa you see South America

similar Concepts exist in AWS and Google Cloud platform and then we have regions so this is a set of data centers deployed within a latency defined

perimeter and connected through a dedicated Regional low latency Network so regions we'd call these so if I just look at the map here you can see Azure

includes regions all around the world I see Japan East UAE North West Europe Canada Central Central U.S West U.S so

regions all around the globe then we have region pairs this is where it really starts to get interesting this is a relationship between two Azure

regions in the Microsoft case within the same geographic region for Disaster Recovery purposes so imagine redundancy in the event of regional data center

failure so region pairs for example you'll find that there is a pair a primary and a backup chosen by the CSP

Microsoft in this case generally speaking there's 300 plus miles between those two data centers and your various services like the storage platform like

the database platforms have configurations that facilitate automatic failover of those services to the backup region so if a region Goes Down You're

Not Dead in the Water necessarily now availability zones are unique physical locations within a region with independent power Network and cooling

it's comprised of one or more data centers and it's tolerant to Data Center failures via redundancy and isolation so it's really focused on data center

failures within a region so my load balancer for example would be Zone redundant across the data centers within the region so for example West U.S for

Azure is not a single data center sitting in the West U.S it's several data centers multiple data centers within that region fairly close together

that give us these availability zones but the focus here is data center failures within a region but know that these Concepts exist

equally in AWS and Google Cloud platform I just wanted to give you an example in case you don't have exposure so let's talk about security as a shared

consideration so protection of customer data Access Control Data encryption very important protection of cloud applications against attacks for example

attacks of scale like distributed denial of service protection of cloud infrastructure the underlying servers storage and network running the environment the

shared responsibility model explains who is responsible for security in each model and scenario we talked about that earlier in the video If you skipped over shared responsibility model go give it a watch

continuing down the road of shared considerations let's talk about privacy so data privacy and cloud computing allows collecting storing transferring

and sharing data over the cloud Network without putting the privacy of personal data at risk there are a couple of prominent sources of privacy concerns number one

oftentimes the customer does not have knowledge about exactly how their personal information is stored and processed in the cloud your major csps do a pretty good job in terms of

transparency those contracts and agreements are long-winded so not always easy to find in some cases and then the reality that we have data

breaches in recent years that have brought data privacy to the Forefront as a crucial factor in cloud computing Now privacy versus confidentiality I

wanted to touch on the difference here so privacy focuses on the right of an individual to have some control over how their personal information their personally identifiable information

their protected health information is collected used and potentially disclosed confidentiality is the duty to ensure private

information is kept secret to the extent that is possible that's a legal obligation in regulatory scenarios like gdpr and a due care obligation in U.S

law so to State all this more simply privacy focuses on the rights of the individual person or the customer confidentiality focuses on the data

keeping that data confidential private encrypted next we have performance which is the ability of a service to remain responsive to requests to that service

with an acceptable level of response latency or processing time remember one of the benefits of public cloud is it delivers the perception of unlimited scale typically for less than the cost a

customer would incur to develop the same level of service in their own Data Center and there's governance enforcement of security policies and regulatory requirements often through policy

controls and regular audits csps often have policy Automation in which restrictions can be defined and automatically enforced throughout the service life cycle in fact in Microsoft

Azure that feature is called Azure policy that allows us to do exactly that next up auditability the ability to provide clear documentation of the

actions in a data event like a data breach or unauthorized access and there are a couple of related activities that we need to ensure in place in order for

auditability to be true the first is accountability the ability to determine who caused the event this is known sometimes as identity attribution this

requires non-repudiation so we must have unique accounts for end users for example to ensure that we can pin that event back to a specific individual and then

there's traceability the ability to track down all events related to an investigated event bottom line auditability is only

possible with proper logging providing accountability and traceability so these two things must be true for auditability to also be true
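the logging that makes accountability (who) and traceability (what and when, correlated across events) possible can be sketched as a structured audit record — the field names below are illustrative, not a standard schema:

```python
import datetime
import json

# Sketch of an audit log record carrying the fields that enable
# accountability (unique actor) and traceability (correlated events).
# Field names are illustrative, not a standard schema.

def audit_event(actor: str, action: str, target: str,
                correlation_id: str) -> str:
    record = {
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor": actor,          # unique account -> non-repudiation
        "action": action,
        "target": target,
        "correlation_id": correlation_id,  # ties related events together
    }
    return json.dumps(record)

entry = json.loads(audit_event("alice@example.com", "delete",
                               "blob/report.xlsx", "incident-42"))
print(entry["actor"], entry["action"])
```

without a unique actor field there is no identity attribution, and without a correlation id there is no way to track down all events related to an investigated event.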

so service level agreements these stipulate performance expectations like maximum down times and often include penalties Financial penalties if the

vendor doesn't meet expectations service level Agreements are typically used with vendors and operating level agreements and

privacy level agreements Olas and plas may also show up on the exam we're going to touch on those in domain six let's talk about Outsourcing for a

moment obtaining goods or services like cloud services from an external supplier this introduces considerations like reversibility interoperability and

vendor lock-in all of which we've talked about previously it's worth noting that vendor lock-in can be a technical or a contractual

constraint that prevents A customer from moving from a provider now we're going to move into impact of related Technologies called out explicitly in the syllabus we see data science machine

learning artificial intelligence blockchain devsecops Internet of Things containers Quantum Computing confidential computing and

Edge Computing I'm going to throw a couple of extras related or mentioned in the official study guide just in case and that's deep learning fog Computing

and post Quantum cryptography related topics I think you should be familiar with but let's start with data science so data science is the study of data to

extract meaningful insights for business now it combines principles and practices from multiple Fields mathematics AI Computer Engineering to analyze large

amounts of data and it helps data scientists to ask and answer questions about past current and future events purely through evaluation of data

now cyber security data science is the practice of applying data science to prevent detect and remediate cyber security threats data is collected from

cyber security sources and then analyzed to provide timely data-driven patterns at scale at the end of the day the goal is to deliver more effective security

insights at high scale in an automated fashion really so let's talk about artificial intelligence machine learning and deep learning knowing the difference

can definitely help on really any security exam so artificial intelligence focuses on accomplishing smart tasks combining machine learning and deep

learning to emulate human intelligence and machine learning is a subset of AI using computer algorithms that improve

automatically through experience and the use of data machine learning algorithms learn by being fed data to process and

learn from and deep learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the human

brain called artificial neural networks and certainly in the realm of cyber security data science Ai and machine learning go hand in hand so knowing the difference amongst these will be helpful

somewhere in your career and potentially on this exam blockchain was originally a technology that powered Bitcoin it does have broader uses though it's a distributed

public Ledger that can be used to store Financial medical or other transactional data anyone is free to join and participate it's a public Ledger it does

not use intermediaries such as Banks and financial institutions the data is chained together with a block of data holding both the hash for that block and the hash of the preceding block

to create a new block on the Chain the computer that wishes to add the block solves a cryptographic puzzle and sends the solution to other computers

participating in that blockchain this is known as proof of work and we have iot Internet of things which is a class of devices connected to the

internet in order to provide automation remote control or even AI processing in a home or business setting due to the relevance of this topic it's safe to Guess that questions involving

iot devices are a bit more likely to appear in the 2022 exam update now iot devices suffer from some common attack vectors one of these is default settings every device that you put on

your network to manage has a default username and password often the defaults are open and available for anyone to use and this is

true of Wi-Fi and iot devices both those defaults may be well known and botnets and offensive security tools will find and exploit devices with weak default

settings still in place just change these defaults to shut down the attack Vector that's pretty simple in a business setting we find that these

default settings lingering generally point to a process issue when we onboard devices to a business Network we'd expect the default settings are going to

be updated to a more secure standard we see iot and wearables you might be wearing an iot device like a fitness tracker or a smart watch they come in the form of facility automation which

would be more relevant in a business scenario in a large facility iot devices are able to manage the heating and AC lights as well as motion Fire and Water

detection and they enable facility managers to be able to configure Automation and monitoring of device function but for critical functions like this it's important that we shut down the low

hanging fruit that attackers are going to be looking for and you'll even see iot in vehicles which have very specialized sensors embedded assisting with vehicle function

often containerization will come up more than once in this course we'll introduce the concept here it's a lightweight granular and portable way to package applications

for multiple platforms it reduces the overhead of server virtualization by enabling containerized apps to run on a shared OS kernel in other words

containers do not have their own OS although the containers and the apps within won't know that they share many of the same concerns as server virtualization isolation at the

host process Network and storage levels but it can be used in some cases to isolate existing applications developed to run in a VM with a dedicated

operating system just allowing us to get greater density out of our compute because we can run more apps than we could using virtual machines so there's a cost advantage

you'll hear about containerization and topics related to Docker and kubernetes so Quantum computing this is a rapidly

emerging technology that harnesses the laws of quantum mechanics to solve problems too complex for classical computers it replaces the binary one and

zero bits of digital Computing with multi-dimensional Quantum bits known as qubits no widespread use cases as of 2023 for Quantum Computing so there's very little

impact outside the world of scientific research and testing and that being said I don't think there's a lot you could see on the exam a quantum computer though could render all modern

cryptography completely ineffective and require the design of new stronger Quantum encryption algorithms now this is

where post Quantum cryptography can help let's talk a bit more and you'll understand why I bring up post-quantum cryptography in just a moment so Quantum cryptography is the practice

of harnessing the principles of quantum mechanics to improve security and detect whether a third party is eavesdropping on Communications it leverages the fundamental laws of physics such as the

observer effect which states that it is impossible to identify the location of a particle without changing that particle now Quantum key distribution is the most

common example of quantum cryptography by transferring data using photons of light instead of bits a confidential key transferred between two parties cannot

be copied or intercepted secretly now post Quantum cryptography is something else altogether post Quantum cryptography refers to cryptographic

algorithms usually public key algorithms that are thought to be secure against an attack by a quantum computer now post-quantum cryptography focuses on

preparing for the era of quantum Computing by updating existing mathematical based algorithms and standards so let's go a bit further with post

Quantum it's really the development of new kinds of cryptographic approaches that can be implemented using today's conventional computers but that will be

impervious that is resistant to attacks from tomorrow's quantum computers post Quantum algorithms are sometimes called Quantum resistant cryptographic

algorithms I actually go much deeper into post-quantum cryptography in my cissp exam cram you're not going to need it for this exam so we're not going to

go there let's talk about Edge Computing so some compute operations require processing activities to occur locally far from the cloud out in the field

this is common in various Internet of Things scenarios like Agricultural Science and Space military even Health Care where the processing and data storage is

closer to the sensors rather than in the cloud data center itself with large Network connected device counts in varied locations data encryption spoofing protection and

authentication are all going to be important closely related to Edge Computing is fog Computing now while not called out in the syllabus this is mentioned in the

official study guide so this complements cloud computing by processing data from iot devices fog Computing often places Gateway devices in the field to collect

and correlate data centrally but at the edge generally it brings cloud computing near to the sensor to process data closer to

the device in a centralized fashion so this is important to speed processing time and reduce dependence on cloud or internet connectivity for Mission

critical situations Healthcare is a great example of this next up we have confidential computing so the problem solved by confidential

Computing is that sensitive data must be decrypted in memory before an app can process it leaving the data vulnerable so confidential Computing solves for

this by isolating sensitive data in a protected CPU Enclave during processing this Enclave is called a trusted execution environment and it's secured

with embedded encryption keys and embedded attestation mechanisms ensure that the keys are accessible only

to authorized application code next we have Dev secops this is a portmanteau a combination of development

security and operations Dev secops it integrates security as a shared responsibility throughout the entire IT life cycle devsecops really preaches

that security is everyone's responsibility and it builds a security Foundation into devops initiatives it appears in the CI CD process throughout

the process really moving security back to the left to the beginning of the process and it often includes automating some of the security gates in the devops process
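One common form of those automated security gates is a pipeline step that fails the build when a scan reports serious findings. Here's a minimal Python sketch; the finding format and severity threshold are illustrative assumptions, not any real scanner's output:

```python
# Hypothetical CI/CD security gate: block the pipeline when a dependency or
# code scan reports findings at or above a severity threshold.
# The finding format and threshold are illustrative, not any real scanner's.
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(findings, fail_at="high"):
    """Return the findings severe enough to block the pipeline."""
    threshold = SEVERITY_RANK[fail_at]
    return [f for f in findings if SEVERITY_RANK[f["severity"]] >= threshold]

scan_results = [  # stand-in for real SCA/SAST output
    {"id": "CVE-2023-0001", "severity": "medium"},
    {"id": "CVE-2023-0002", "severity": "critical"},
]
blocking = gate(scan_results)
for finding in blocking:
    print(f"BLOCKING: {finding['id']} ({finding['severity']})")
```

In a real pipeline this script would consume the scanner's JSON output and exit nonzero when `blocking` is non-empty, failing that CI stage.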

and in the last couple of years we've really seen devsecops elevated to become kind of the de facto approach infrastructure as code

this is the management of cloud infrastructure your networks VMS load balancer storage your topology described in code and just as the same source code

generates the same binary code the infrastructure as code model results in the same environment every time it's applied so what I mean by that is if you had an infrastructure as code template

that defined an environment with four virtual machines if you applied that template and only three virtual machines were present it would redeploy the fourth if you deployed that

infrastructure as code again and all four VMs were present it would not deploy any VMs because the environment matched the template we call that behavior idempotent
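That desired-state behavior can be sketched in a few lines of Python; the template shape and VM names here are invented for illustration:

```python
# Sketch of idempotent desired-state deployment: applying a template creates
# only what is missing and does nothing when the environment already matches.
# Template shape and VM names are invented for illustration.
def apply_template(template, environment):
    """Converge the environment to the template; return the actions taken."""
    actions = []
    for vm in template["vms"]:
        if vm not in environment:        # create only what is missing
            environment.add(vm)
            actions.append(f"create {vm}")
    return actions

template = {"vms": ["vm1", "vm2", "vm3", "vm4"]}
env = {"vm1", "vm2", "vm3"}              # the fourth VM has drifted away

print(apply_template(template, env))     # first apply: recreates only vm4
print(apply_template(template, env))     # second apply: nothing to do
```

Real tools (ARM/Bicep templates, CloudFormation, Terraform) follow this same compare-then-converge pattern against much richer resource models.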

but infrastructure as code is a key devops practice and it's used in conjunction with continuous integration and continuous delivery or CI CD

creating the cicd pipeline but infrastructure as code cicd and devops are standard elements of deployment change and release in the cloud

and devsecops as I mentioned is quickly growing in popularity as well and these four all go hand in hand in the same conversations that does it for 1.2 moving on to

section 1.3 which is understand security Concepts relevant to cloud computing we'll touch on cryptography and Key Management

identity and Access Control Data and media sanitization network security virtualization security common threats

and security hygiene and you see some specific examples called out under the headings here we'll absolutely touch on all of those and more so let's start with cryptography and Key

Management so trusted platform module this is a chip that resides on the motherboard of a device like a laptop for example it's multi-purpose like storage and

management for Keys used for full disk encryption Solutions like BitLocker like dmcrypt on the Linux platform and it provides the operating system

with access to keys but it prevents data access if the drive is removed and then there's a Hardware security module commonly referred to Simply as an

HSM this is a physical Computing device that safeguards and manages digital Keys performs encryptions and decryption functions for digital signatures

strong authentications and really a variety of cryptographic functions in many ways it's like a TPM but these are often external devices so let's talk about the key management

strategy for the encryption key life cycle so encryption Keys should be generated within a trusted secure cryptographic module

fips 140-2 validated modules provide tamper resistance and key Integrity so that's a clear bar of high security and then encryption Keys should be

distributed securely to prevent theft or compromise during Transit best practice here encrypt keys with a separate encryption key while

Distributing to other parties and then storage encryption Keys must be protected at rest and should never be stored in plain text this includes keys in volatile and

persistent memory so we shouldn't have keys sitting in memory unencrypted and then in use so think of clients whether those are users or their devices

they will use keys for resource access as access controls allow acceptable use policy sets the guard rails for data usage

so when a user is onboarded they should sign an acceptable use policy that establishes what is and is not okay with regards to data

revocation we need a process for revoking access at separation in the event of a policy breach or a device or key compromise for example in the world

of pki you would revoke the certificate on the issuing certificate Authority and the final phase of the life cycle is

key destruction this is the removal of an encryption key from its operational location and key deletion goes further and removes any info that could be used to

reconstruct that key for example in mobile device management systems they will remove certificates from a device during device wipe or retirement when I say mobile device

management I'm talking about Solutions like Microsoft InTune or AirWatch but regardless of how those keys are issued and how those devices are managed

there needs to be a process an operational process for key destruction and or deletion there are some other key management Concepts we should touch on

the level of protection so encryption Keys must be secured at the same level of control or higher as the data they protect and the sensitivity of the data is what dictates this level of

protection it's defined in the organization's data security policies and when we're accessing that data the system we are accessing the data from must be secured at the same level as the

data as well key recovery circumstances where you need to recover a key for a particular user without that user's cooperation like in termination or key loss you

always want to have a way to recover keys so we can get to data in those odd circumstances key escrow copies of keys held by a trusted third

party in a secure environment which can Aid in many other areas of Key Management including recovery so let's talk about Key Management in the cloud Key Management Systems in

particular so all of your csps Azure AWS Google Cloud platform they all offer a cloud service of some sort for centralized Secure Storage and access

control for application secrets a vault solution in Azure it's called Key Vault in AWS it's KMS in Google Cloud platform it's Cloud KMS

a secret in this context is anything that you want to control access to like an API key could be a password certificates tokens cryptographic keys

and the service will typically offer programmatic access they may have GUI access of course through a portal but also programmatic access often via an

API to support devops and your CI CD automation access control is available at the Vault instance level and to the secret stored

within the Vault so generally control plane access and then data plane access is how I'd phrase that and secrets and keys can generally be

protected either by software or via a FIPS 140-2 level 2 validated HSM and whether you let the CSP manage keys

in the back end or you require Hardware management yourself or need to bring your own keys that level of support is going to vary by CSP and you want to have those conversations directly so

let's shift gears and talk identity and access control so in the cloud services should include strong authentication mechanisms for validating the user's identity and their credentials not

unlike on-premises that means we need to standardize streamline and develop an efficient account creation process as

well as a timely deprovisioning process so when a user departs from the company at the point of Separation we need to de-provision access quickly and

efficiently centralized directory Services active directory you know tends to be the most common on-premises we have Kerberos and ntlm authentication there Kerberos being

preferred and from a privileged user management perspective we want to manage our privileged access accounts and to enforce least privilege and need to know

at all times separation of Duties also a good idea as an effective risk mitigation technique we'll dive into these a bit deeper in just a moment

and then from an authentication and access management perspective we want to focus on the manner in which users can access required resources that means we

need to design a secure authentication and authorization experience and that's where multi-factor authentication comes into play that's where we authenticate a user with

something they know like a pin or a password something they have like a trusted device a mobile phone with an authenticator app for example

something you are a biometric it's really common that we use face ID or a thumbprint on our mobile devices now and this is going to be a prevention

mechanism for multiple types of attacks phishing spear phishing keyloggers credential stuffing Brute Force reverse Brute Force attacks man in the middle
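The "something you have" factor from an authenticator app is typically a time-based one-time password (TOTP, RFC 6238). A minimal standard-library sketch of how both sides derive the same short-lived code from a shared secret:

```python
# Minimal TOTP (RFC 6238) sketch using only the standard library, to show how
# the "something you have" factor works: client and server each derive a
# short-lived code from a shared secret and the current 30-second time window.
import base64, hashlib, hmac, struct, time

def totp(secret_b32, at=None, step=30, digits=6):
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # RFC 4226 dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret: ASCII "12345678901234567890" in base32
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(SECRET, at=59, digits=8))   # RFC 6238 test vector: 94287082
print(totp(SECRET))                    # the code an authenticator would show now
```

The secret is shared at enrollment (usually via QR code), so the service can verify the code the user types without any network round trip to the device.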

MFA can solve a lot so let's talk about how we limit access and damage so need to know and the principle of least privilege I mentioned are two Standard Security principles

that are implemented in Secure networks and they limit access to data and systems so that users and other subjects have access to only what they require

that's going to help prevent security incidents and if we have a security incident it's going to help limit the scope of damage so when we apply these

principles we're going to minimize the potential impact when we ignore need to know and least privilege we're going to expose ourselves to potentially far greater damage in the event of a

security incident so let's talk about fraud and collusion so collusion is an agreement amongst multiple persons to perform some unauthorized or illegal actions

separation of Duties helps here a basic principle that ensures no single person can control all the elements of a critical function so for example the

person who configures and approves privileged access shouldn't be the person who is then using that privileged access to carry out a task if I can

grant myself God rights and do that work and then reverse those rights if there's not a separate person in that process that's a problem that's a risk to the organization

and even if it's not malicious someone might Grant themselves too much permission too much access job rotation is another one if employees

are rotated into different jobs or tasks are assigned to different employees that helps prevent collusion for sure but implementing these policies helps prevent fraud because it limits the

actions individuals can do without colluding with others so getting specific to Cloud let's talk about account types so we have service accounts in the cloud I see those

commonly called service principals or managed identities we'll see on some platforms like Azure for example now when software is installed to run on a computer or a server it may need

privileged access to run when we have Cloud apps they may need an identity as well a service principal it's a lower level administrative

account and the service account is really what fits the bill in many of these cases a service account is really a type of administrator account used to run an application when we get into the

cloud that service principal can be configured with the specific permissions the application needs how that service principal is referenced will vary by

platform what they call it in Azure will be a bit different than AWS and Google Cloud platform but the concept certainly carries over to Cloud when we have people who perform the same

duties like members of customer service they could use a shared account that happens but when user level monitoring auditing or non-repudiation are required

you absolutely must eliminate the use of shared accounts non-repudiation is how we guarantee that an action was performed by a specific person and we can prove it beyond a shadow of a doubt

so they cannot deny it with a shared account non-repudiation is broken most Cloud identity providers have options to eliminate the need for shared accounts

let's talk about privileged access management for a moment this is a solution that helps protect privileged accounts within a tenant preventing attacks it also provides visibility into

who is using privileged accounts and what tasks they're being used for so native to Some Cloud identity providers today you may see a

just-in-time elevation feature that allows users to activate a privileged role to perform their tasks and then that privilege is removed when they

indicate they are done with the task or after a period of time a finite period of time which is great because it's a more granular interpretation and application

of least privilege when we have that just in time elevation feature that automatically removes those rights it's definitely a process enhancement and that's definitely true in Azure active

directory which is the identity provider that Microsoft gives us in the cloud it's the identity provider with Office 365 you'll see similar capabilities with identity providers on other Cloud

platforms as well data and media sanitization may come up on the exam and the relative security of each of the methods so we'll cover a few here the first is erasing performing a

delete operation against a file files or the media itself clearing also known as overwriting this is preparing the media for reuse and ensuring data can't be recovered

using traditional recovery tools now overwriting may involve one or multiple passes and it may use random data or zeros depending on the chosen methodology

purging is a more intense form of clearing that prepares the media for reuse in less secure environments now media is reusable with any of these

methods but data on the media itself may be recoverable with forensic tools so in the case of overwriting which is called out explicitly in the syllabus you should definitely know it

data may be recoverable depending on whether the overwriting method uses one or multiple passes with random data or zeros
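A minimal sketch of multi-pass overwriting in Python; the pass count and patterns are illustrative, and note that on SSDs wear leveling can leave remnants an overwrite never touches:

```python
# Sketch of clearing/overwriting: write over the file's bytes before deleting
# it, alternating random data and zeros across passes. Pass count is
# illustrative; on SSDs wear leveling may leave untouched remnants, which is
# one reason crypto shredding is often preferred.
import os, secrets

def overwrite_and_delete(path, passes=3):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for i in range(passes):
            f.seek(0)
            pattern = secrets.token_bytes(size) if i % 2 == 0 else b"\x00" * size
            f.write(pattern)
            f.flush()
            os.fsync(f.fileno())    # force each pass down to the device
    os.remove(path)

with open("secret.dat", "wb") as f:
    f.write(b"confidential data")
overwrite_and_delete("secret.dat")
print(os.path.exists("secret.dat"))   # False: the file is gone after the passes
```
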

the fact is it depends now for more secure Data Destruction there's crypto shredding called out in the syllabus cryptographic Erasure is another way to say this one data is

encrypted with a strong encryption engine the keys used to encrypt the data are then encrypted using a different encryption engine then keys from the

second round of encryption are destroyed so the pro here is data cannot be recovered from any of the remnants on the media
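The two-stage scheme just described can be illustrated with a toy sketch. The XOR keystream below is a stand-in for a real encryption engine like AES, not production cryptography:

```python
# Toy illustration of crypto shredding: encrypt the data with a DEK, encrypt
# the DEK with a KEK, then destroy the KEK so nothing on the media can ever be
# decrypted. The keystream cipher is a stand-in for a real engine, NOT crypto
# you should use.
import hashlib, secrets

def keystream_xor(key, data):
    """Toy XOR stream cipher keyed by SHA-256(key || counter)."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ s for b, s in zip(data, stream))

data = b"sensitive customer record"
dek = secrets.token_bytes(32)               # data encryption key (first engine)
ciphertext = keystream_xor(dek, data)       # data at rest is now ciphertext

kek = secrets.token_bytes(32)               # key encryption key (second engine)
wrapped_dek = keystream_xor(kek, dek)       # the DEK itself is stored encrypted
dek = None                                  # plaintext DEK is not kept around

kek = None   # crypto shredding: destroy the second-round key
# without the KEK the DEK cannot be unwrapped, so remnants of the
# ciphertext on the media are unrecoverable
print(ciphertext != data)
```
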

the downside is this definitely involves High CPU and performance overhead if the exam poses a question on secure Data Destruction

crypto shredding is almost certainly the answer then we have Data Destruction on media like hard drive or DVD or CD-ROM these are a bit less likely to come up on the

exam but will could be distractors on questions so we'll cover them just to be thorough degaussing which creates a strong magnetic field that erases data

on some media and destroys Electronics we have shredding you can shred a metal hard drive into powder you can pulverize the drive with a hammer or drill through

all the platters rendering it inoperable media is not reusable with any of these methods data is also not recoverable by any means with any of these methods but

again overwriting and crypto shredding are called out in the official exam syllabus and therefore the most likely to appear on the exam next up is network security

so we'll start with network security groups which provide an additional layer of security for cloud resources they act as a virtual firewall for virtual networks and the resource instances

within those networks like your VMS your databases and your subnets they carry a list of security rules that include IP addresses and Port ranges that will

allow or deny Network traffic to Resource instances but they act as a virtual firewall for a collection of cloud resources within that network with

the same security posture and how a network security group works exactly varies slightly from CSP to CSP in Azure for example you can apply a network

security group to a subnet or to a network adapter but it's going to give you that IP port range filtering functionality so it's essentially like a layer 4 firewall
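That layer-4 behavior can be sketched as priority-ordered rule evaluation, roughly how Azure NSGs pick a rule (lowest priority number wins). The rules and simplified semantics here are illustrative; real NSGs add direction, protocol, service tags, and default rules:

```python
# Sketch of NSG-style layer-4 filtering: rules carry a priority, a source
# prefix, a destination port range, and an allow/deny action. Semantics are
# simplified for illustration.
import ipaddress

rules = [  # lower priority number wins, as in Azure NSGs
    {"priority": 100, "source": "10.0.1.0/24", "ports": range(1433, 1434), "action": "allow"},
    {"priority": 4096, "source": "0.0.0.0/0", "ports": range(0, 65536), "action": "deny"},
]

def evaluate(src_ip, dst_port):
    ip = ipaddress.ip_address(src_ip)
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if ip in ipaddress.ip_network(rule["source"]) and dst_port in rule["ports"]:
            return rule["action"]
    return "deny"   # implicit default when nothing matches

print(evaluate("10.0.1.25", 1433))    # allow: the app subnet reaching SQL
print(evaluate("192.168.9.9", 1433))  # deny: everything else is blocked
```

This is also how the database-subnet example below works: one high-priority allow for the listening service port, and a catch-all deny underneath it.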

Network segmentation involves restricting services that are permitted to access or be accessible from other zones using rules to control inbound and outbound traffic rules are enforced

typically by IP address ranges of each subnet and within a virtual Network segmentation can be used to achieve isolation oftentimes we'll see network

security groups used to implement Port filtering so for example we could put database servers in their own subnet and restrict inbound traffic just to

listening database service ports API inspection and integration so rest apis are the modern approach to writing web services and this enables

multi-language support and rest can handle multiple types of calls return different data formats and APIs published by an organization should

include encryption authentication rate limiting throttling and quotas and in fact in many of your csps you'll find a PaaS service that allows you to

publish and manage your apis and to implement these security controls traffic inspection we're really talking about packet capture here and packet

capture in the cloud generally requires specialized tools or Services designed for this purpose in that particular CSP environment because traffic is often

sent directly to resources and promiscuous mode on a VM network adapter is not possible or may not be effective so what you'll find is your csps have a service

to do this for example in Azure it's called Network Watcher in AWS it's VPC traffic mirroring but your csps will offer tools to facilitate packet capture

within your tenant geofencing so geofencing uses global positioning system GPS or RFID to establish Geographic boundaries and once

a device is taken past the defined boundaries the security team would be alerted typically so for example we could use geofencing to restrict

access to systems and services based on where the access attempt is being generated from we could prevent devices from being removed from the company's

premises in high security situations and we can use this to identify unusual traffic patterns and prevent misuse next we're going to talk about zero

trust Security in the network context but first I want to cover just a few zero trust basics so zero trust addresses the limitations

of the Legacy network perimeter-based security model with a firewall and trusted and untrusted networks it treats user identity as the control plane it really addresses the reality that we

have users who are much more mobile than they used to be working from anywhere and it assumes compromise or breach verifying every request essentially no

entity is trusted by default so let's just touch on the core three principles of zero trust security the first is verify explicitly we always authenticate and authorize

based on all available data points we're going to look at the user's identity the location they're coming from are they coming from a healthy compliant device what service or workload are they trying

to access are they trying to access sensitive data is there anything anomalous about the request and we always use least privilege access

we limit user access we use just in time and just enough access risk based adaptive policy so we're going to you know definitely deny

authentication requests where we see for example impossible travel or a very unexpected location or other anomalous behavior that raises that user's risk

profile and data protection like DLP data loss prevention policies and we assume breach we're going to segment access to minimize the scope of

impact and we'll verify end-to-end encryption we'll use analytics to get visibility and drive threat detection and improve defenses so the zero trust Security in the

network context means micro segmentation of our network network security groups firewalls inbound and outbound traffic filtering inbound and outbound traffic

inspection idps looking for potentially malicious traffic and anomalous traffic patterns and

centralized security policy management and enforcement we talked about the policy engines that our csps have for example Azure policy you'll also find in

some cases csps will offer centralized policy for their firewall services for their software-based firewalls
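Verify explicitly with risk-based adaptive policy boils down to scoring each request from the available signals. A toy sketch, with invented signals and thresholds:

```python
# Illustrative sketch of zero trust "verify explicitly" with risk-based
# adaptive policy: score each request from its signals, then allow, step up
# to MFA, or block. Signals and thresholds are assumptions for illustration.
def evaluate_request(signals):
    risk = 0
    if not signals.get("device_compliant"):
        risk += 2                       # unhealthy/unmanaged device
    if signals.get("impossible_travel"):
        risk += 3                       # strong anomaly signal
    if signals.get("unfamiliar_location"):
        risk += 1
    if signals.get("sensitive_data"):
        risk += 1                       # target raises the bar
    if risk >= 3:
        return "block"
    if risk >= 1:
        return "require_mfa"            # step-up authentication
    return "allow"

print(evaluate_request({"device_compliant": True}))
print(evaluate_request({"device_compliant": True, "unfamiliar_location": True}))
print(evaluate_request({"device_compliant": False, "impossible_travel": True}))
```

Conditional access engines in cloud identity providers apply this same pattern with far richer signals, but the shape is the same: no request is trusted by default, every request is scored.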

containerization we touched on earlier examples being Docker and kubernetes remember the key difference between containerization and server virtualization is containers do not have

their own OS they share a single OS kernel and we can use containerization in some cases to isolate existing applications that were developed to run on a VM with a dedicated operating

system the container does fool the application into believing it has its own kernel even though it doesn't so if we look at the type 1 bare metal

hypervisor we talked about this earlier VMware esxi Microsoft hyper-v with virtualization we have virtual

machines with each VM having its own OS kernel and memory resulting in more overhead and when we look at a container host which is generally itself a virtual machine

you'll see that the container has its binaries its libraries its applications and those containers are sharing a single operating system so

they're sitting on a single host OS that's the real difference each VM has its own OS kernel and memory so the overhead is greater containers are isolated but they share a single OS

kernel as well as binaries and libraries where they can and to unpack that just a bit further the core components in a container platform like Docker or kubernetes will

include orchestration a scheduling controller Network and storage a container host which is a virtual machine container images think of container

images as the equivalent of a VM template a container image is to a container as a VM template is to a virtual machine and a container registry which is where

we store our container images and we need to secure access to that container registry the isolation with containers is logical for isolating processes compute storage

Network secrets in the management plane there's a lot of isolation happening on that container host you don't need to be an expert in containerization for the exam but you'll

want to understand the concepts that we're talking about here now when we look at containers in the cloud context really what is the de facto standard

today is managed kubernetes what you'll find is your container hosts are cloud-based virtual machines this is where the containers run again but your

csps offer hosted kubernetes services and these handle the critical tasks for you like Health monitoring and managing the cluster you basically pay for

the agent nodes within your clusters you don't pay for the management clusters you're really paying for the container host and the compute that you're running there your major csps also offer some

sort of monitoring solution that will identify at least some potential security concerns in your kubernetes environment

and like I said this shares many of the concerns of server virtualization but you need to enforce isolation of network data and storage access at the container level but really you're going to find

that managed kubernetes is really the de facto standard that everybody is using in the world today and your big three csps all have a managed kubernetes service in the

Microsoft world it's Azure kubernetes service it's eks on the Amazon platform and gke on Google Cloud platform

so we talked about serverless technology back in section 1.2 just a couple of concerns around security related to serverless so where possible using an

API Gateway as a security buffer to protect that serverless endpoint to avoid distributed denial of service

attacks attempts to exhaust capacity and we want to configure secure authentication whether that's oauth saml openid connect we might use multi-factor

authentication if it's an endpoint being accessed by a person directly we'll want to separate our development and production environments Implement

least privilege so some pretty standard security practices here but applied in the serverless context now ephemeral computing is the practice of creating a

virtual Computing environment as a need arises so basically the environment's destroyed once needs are met and resources are no longer needed the

primary use case here would be an auto scaling scenarios where we need elasticity rapid scale up as demands

increase and well security more or less should take care of itself in some respects in that when you're scaling out any security guard rails you have

through security policy for the service itself should apply to the compute instances scaling out and security should take care of itself in that when the environment's no longer needed the

resources are destroyed you do want to make sure that when you have a service that auto scales out that it automatically scales back in that in fact those instances that were created

are in fact destroyed and they're not sitting there dormant now we're going to talk about some common threats so the first is data breach where sensitive data is stolen

this could include personally identifiable information or protected health information often due to poor application or database security design

or configuration where data is exposed without proper authorization this is preventable by following secure development practices and adhering to recommendations in the secure data life

cycle which we're going to touch on just a bit later in this video and then there's data loss when sensitive data is unknowingly exposed to the public this is often through a

system or service misconfiguration or over sharing we'd see this commonly in the early days of cloud with Amazon S3 storage buckets that were not secured by

default exposing data unexpectedly a data breach is typically the result of a Cyber attack while data loss sometimes called a data leak is

unintentional next we have common threats to our apis our web services like soap or restful Services these are exposed interfaces

that allow programmatic interactions between services and they're definitely an Avenue for security breach if they're not implemented properly and secured so rest

uses the https protocol for web Communications to offer API endpoints rest is the standard you're going to see most commonly today but that

API endpoint and https makes it a target for distributed denial of service attacks security mechanisms include API Gateway authentication IP filtering

throttling quotas data validation most of your major csps offer a PaaS service that you can use to host and to secure your web services
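Rate limiting and throttling at an API gateway are often implemented as a token bucket: each request spends a token, tokens refill at a fixed rate, and bursts beyond capacity are rejected. A minimal sketch (capacity and refill rate are illustrative):

```python
# Sketch of token-bucket rate limiting as applied at an API gateway.
# Capacity and refill rate are illustrative assumptions.
import time

class TokenBucket:
    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill = refill_per_sec
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # refill proportionally to elapsed time, never past capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False   # caller would respond HTTP 429 Too Many Requests

bucket = TokenBucket(capacity=3, refill_per_sec=1)
results = [bucket.allow() for _ in range(5)]   # burst of 5 immediate calls
print(results)   # the initial burst drains the bucket; later calls are throttled
```

The bucket absorbs short bursts up to capacity while enforcing the average rate, which is why gateways favor it over a hard fixed-window counter.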

and you'll also want to make sure that you store distribute and transmit your access keys for your apis in a secure fashion so malicious insiders these are

disgruntled employees that can wreak havoc on a system internal acts of disruption may include theft or sabotage so sabotage being intentional destruction

then there's traffic hijacking when attacks are designed to steal or wedge themselves into the middle of a conversation in order to gain control abusive cloud services sometimes

customers misuse their cloud services for illegal or immoral activities insufficient due diligence which is the process or effort to collect and analyze

information before making a decision or conducting a transaction that's due diligence and failure to perform due diligence can result in a due care violation knowing

the difference between due diligence and due care is actually important for your career and will be helpful in understanding what insufficient due diligence means so let's just talk about

the difference between the two briefly so due diligence is the process of collecting and analyzing information before making a decision or conducting a

transaction due care is doing what a reasonable person would do in a given situation it's sometimes called the prudent person rule

so together these will reduce Senior Management culpability and downstream liability when a loss occurs because the

organization has a responsibility of due care of implementing reasonable security to protect user data so let's just break these down

a bit further so if we have a decision due diligence typically happens before the decision there's research planning evaluation

largely before the decision and due care is the implementation the operation and upkeep the reasonable measures of security that the doing after the

decision really so due diligence increases our understanding of the situation and therefore reduces our risk and due care

that prudent person rule implementing reasonable measures to reduce our liability and our exposure

but the evaluation that we perform in due diligence really helps us to determine what we need to implement as part of our due care obligation

to look at it another way due diligence is about thinking before you act and do care really dictates that actions speak louder than words so you

get the idea here another way due diligence do detect and due care do correct that's just a little mnemonic you can use to remember the difference between the two so hopefully

that clarifies due diligence versus due care and just to give you a couple of examples here on the due diligence side knowledge and research of laws and

regulations industry standards best practices and examples of due care would be delivery or execution including reporting security incidents security

awareness training disabling access at separation in a timely way so another common threat in Cloud shared technology vulnerabilities so the

underlying infrastructure of the public Cloud was not originally designed for the types of multi-tenancy we see in the public cloud modern virtualization software does

bridge most of the gaps. now, what threats remain in a shared public cloud infrastructure? well, cloud infrastructure can still be

vulnerable to insider threats for sure, unintentional misconfigurations are also a concern, and to a lesser degree disruptive attacks at scale, denial of

service or distributed denial of service most of your csps have some layer of protection by default and additional Services you can Implement to reduce

your DDOS exposure noisy neighbors can be a problem so when you're in a multi-tenant scenario if you have another tenant sharing a server capacity

for example and they are noisy meaning that they're taking up a lot of capacity they're a heavy user that can potentially impact you in the

wrong circumstances. it's less of a concern today. now for regulatory compliance and high-criticality scenarios, CSPs often offer some higher

isolation and flexible scale out options in fact in recent years we've seen for the highest security scenarios you can even find some dedicated physical host

scenarios available in a public cloud. so let's talk about baselining, and this really comes in the context of configuration and change management, so I'd like to tackle the bigger topic.

configuration management ensures that systems are configured similarly configurations are known and they are documented

so baselining ensures that systems are deployed with a common baseline or starting point, and imaging is a common baselining method. now change management helps reduce

outages or weakened security from unauthorized changes to that baseline configuration, and versioning uses a labeling or numbering system to track changes in the

updated versions of our baseline, whether that's an image, an application, or another system, and it requires changes to be requested, approved, tested, and documented.
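the configuration-management and versioning idea described above can be sketched in a few lines of python — a hypothetical illustration of detecting drift from an approved baseline, not any particular tool; the setting names and values are made up:

```python
# Hypothetical sketch of configuration-management drift detection:
# compare a deployed system's settings against an approved baseline.
# Setting names and values are illustrative, not from any real product.

def detect_drift(baseline: dict, deployed: dict) -> dict:
    """Return settings that are missing from or differ against the baseline."""
    drift = {}
    for setting, expected in baseline.items():
        actual = deployed.get(setting)
        if actual != expected:
            drift[setting] = {"expected": expected, "actual": actual}
    return drift

baseline = {"firewall_enabled": True, "ssh_root_login": False, "log_retention_days": 90}
deployed = {"firewall_enabled": True, "ssh_root_login": True, "log_retention_days": 90}

print(detect_drift(baseline, deployed))
# ssh_root_login drifted from the baseline and should trigger change review
```

in a real environment the unauthorized change flagged here would feed into the change-management process: requested, approved, tested, and documented.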

finishing up security hygiene here at the end of section 1.3 we have patch management this is the process of identifying acquiring installing and verifying patches for products and

systems. it's a function included in change management. patches correct security and functionality problems in software and firmware. both applicability

and install are often automated with management tools an applicability assessment is performed to determine whether a particular patch or update actually applies to a system

before an attempt is made to install that patch you'll sometimes see patch management referred to as update management
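the applicability assessment just described can be sketched like this — a hedged, minimal illustration, with made-up product and version values rather than any real patch-management tool's logic:

```python
# Hypothetical sketch of a patch applicability assessment: before installing,
# check whether the patch targets this system's product and whether the
# installed version is older than the version the patch supplies.
# Product names and version numbers are illustrative assumptions.

def version_tuple(v: str) -> tuple:
    """Turn '5.6.9' into (5, 6, 9) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

def patch_applies(patch: dict, system: dict) -> bool:
    if patch["product"] != system["product"]:
        return False  # patch is for a different product entirely
    return version_tuple(system["version"]) < version_tuple(patch["fixes_version"])

patch = {"product": "example-db", "fixes_version": "5.7.2"}
print(patch_applies(patch, {"product": "example-db", "version": "5.6.9"}))  # True
print(patch_applies(patch, {"product": "example-db", "version": "5.7.2"}))  # False
```

management tools automate exactly this kind of check at scale before the install step runs.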

that brings us to section 1.4 understand design principles of secure cloud computing so here we'll touch on the cloud secure data lifecycle

cloud-based business continuity and Disaster Recovery planning business impact analysis functional security requirements security considerations and

responsibilities for different Cloud categories and finally devops security so let's start with the cloud secure data lifecycle

so the life cycle begins with data creation data can be created by users for example a user creates a file data can also be

created by systems, a system logs access, for example. to ensure data is handled properly, it's important to ensure data is classified

as soon as possible after creation ideally data is encrypted at rest data should be protected by adequate

security controls based on its classification controlling its use and when data is shared or in transit it

should be secured anytime it's in transit over the network preferably encrypted archival is sometimes needed to comply

with laws or regulations requiring the retention of data. and the secure data lifecycle ends with destruction: when data is no longer needed it should be destroyed in such a

way that it is not readable nor recoverable. crypto shredding happens in this phase. so I mentioned multiple potential states

of data in the secure data life cycle let's just touch on these briefly we have data in transit which is data on the wire in Flight commonly protected with transport layer security

certificate-based security or tunneled through a VPN we have data at rest in storage on disk in a database protected through encryption quite typically

we have data in use, data in memory, in RAM, CPU, or cache. it should be flushed from memory when the transaction is complete or the system is powered down, and

sensitive data should always be encrypted in memory so how can we encrypt different types of data at rest
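before we get to the platform options, here is a toy, stdlib-only sketch of the two ideas just raised — encrypting data at rest, and crypto shredding (destroying the key so the ciphertext becomes unrecoverable). this is NOT production crypto; real platforms use AES via tools like BitLocker or dm-crypt. the SHA-256 keystream here exists purely to show the concept:

```python
# Toy illustration (NOT production cryptography) of encryption at rest and
# crypto shredding: once the key is destroyed, the ciphertext is useless.
# Real systems use AES; this sketch builds a keystream from SHA-256 only
# so the example stays stdlib-only and self-contained.
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR data against a SHA-256-derived keystream (toy cipher)."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

key = secrets.token_bytes(32)
ciphertext = keystream_xor(key, b"cardholder record")

# with the key, the data at rest is readable
assert keystream_xor(key, ciphertext) == b"cardholder record"

key = None  # crypto shredding: destroy the key; the ciphertext is now unreadable
```

the point to carry into the exam: destroying the key is what destroys the data, which is why crypto shredding only works if the data was encrypted in the first place.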

well there's storage service encryption so your CSP storage providers usually protect data at rest automatically they usually encrypt by default before

persisting it to managed disks, object, file, or queue storage. we have full disk encryption, which helps you encrypt Windows or Linux IaaS VMs

using BitLocker on the Windows platform and the dm-crypt feature of Linux to encrypt your OS and data disks. and then there's transparent data

encryption. this helps protect Microsoft SQL databases and data warehouses against the threat of malicious activity, with real-time encryption and decryption of

the database, backups, and transaction log files at rest, without requiring app changes. it's transparent and has essentially no performance

impact. some database platforms also provide row-level encryption, column-level encryption, or data masking. on the topic of data security, you may see questions regarding data roles on the

exam so two roles you should definitely know for the exam data owner who holds the legal rights and complete control over a single piece of data usually a member of Senior

Management they can delegate some day-to-day duties they cannot though delegate their total responsibility for that data

then there's the data custodian who's responsible for safe custody transport and storage of data implementation of business rules technical controls

confidentiality integrity and availability audit Trails usually someone in the I.T department they don't decide what controls are needed but they do Implement those controls for the data

owner. tip here: if the question mentions day-to-day responsibility, that's the custodian. so there are a couple of GDPR data roles

that are worth knowing just in case for the exam that's the data processor a natural or legal Person Public Authority agency or other body that processes

personal data solely on behalf of the data controller the data controller is the person or entity that controls processing of the data

these are in the official study guides, and they may well appear on the exam. and other data roles: there's the data subject, and I've mentioned this briefly earlier in the

course. this refers to any individual person who can be identified directly or indirectly via an identifier, the subject as I called them.

identifiers may include name, an ID number, location data, or one or more factors specific to the person's physical,

physiological, genetic, mental, economic, cultural, or social identity, any way we can identify that person. and then the data steward, who ensures the data's context and

meaning are understood and business rules governing the data's usage are in place they use that knowledge to ensure the data they are responsible for is

used as intended. let's talk about business continuity and disaster recovery. a couple of definitions related to business continuity and disaster recovery worth

knowing so we have the business continuity plan which is the overall organizational plan for how to continue business and then the disaster recovery plan which is the plan for recovering from a

disaster impacting I.T and returning the I.T infrastructure to operation

so what's the difference between business continuity planning and disaster recovery planning? well, business continuity planning focuses on the whole

business where Disaster Recovery planning focuses more on the technical aspects of recovery business continuity planning will cover Communications and process more broadly basically business

continuity planning is an umbrella policy and disaster recovery planning is part of it. disaster recovery is built into cloud architecture, and there are some concepts we covered earlier that

come into play here. region pairs address site-level failure, so we could lose an entire region like West US, for example, and region pairs are 300-plus miles

apart, selected by the CSP to ensure that a disaster doesn't impact both the primary and the backup. availability zones address data center

failures within a cloud region. remember, within a region like East US

or West US you have multiple data centers in fairly close proximity. a CSP region like East US would include multiple data centers, not just a single

regional data center. and availability sets address rack-level failures within the data center itself, so this consists of two or more fault

domains for power or network, etc. you may see questions around business impact analysis on the exam, and a

business impact analysis contains two important items: a cost-benefit analysis and a calculation of the return on investment. now a cost-benefit analysis lists the

benefits of the decision alongside their corresponding cost and it can stop there a cost-benefit analysis can be strictly quantitative just adding the financial

benefits and subtracting the associated cost to determine whether a decision will be profitable a thorough cost benefit analysis will

consider intangible benefits, those that you cannot calculate directly. functional security requirements: so what's the difference between functional

and non-functional security? well, functional security requirements define a system or its component and specify what it must do. it's captured in use cases, it's defined

at a component level. for example: application forms must protect against injection attacks, and we do this by writing input validation

into our application forms. it specifies a specific function. non-functional security requirements, on the other hand, specify the system's quality characteristics or attributes
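the functional requirement above — input validation on an application form to resist injection — can be sketched like this. the field rules are illustrative assumptions; an allow-list of expected characters is generally safer than trying to block-list attack strings:

```python
# Hedged sketch of the functional requirement above: input validation on a
# form field. The allow-list pattern (3-32 word chars, dots, dashes) is an
# illustrative assumption, not a universal rule.
import re

USERNAME_RE = re.compile(r"^[A-Za-z0-9_.-]{3,32}$")  # allow-list, not block-list

def valid_username(value: str) -> bool:
    """Accept only values matching the expected character set and length."""
    return bool(USERNAME_RE.fullmatch(value))

print(valid_username("alice_01"))            # True: matches the allow-list
print(valid_username("alice'; DROP TABLE"))  # False: rejected before any query runs
```

note that validation like this complements, but doesn't replace, parameterized queries on the database side.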

they apply to the whole system, the system level. for example, security certifications are non-functional, if we say a system must

be FedRAMP certified, or ISO 27001, or HIPAA certified. security considerations for different

cloud categories: we'll touch on IaaS, PaaS, and SaaS, the three core categories of cloud computing. so in the IaaS space, where we're thinking about server virtualization, we

have VM attacks, our virtual network, hypervisor attacks, VM-based rootkits, virtual switch attacks,

co-location, denial of service. on the PaaS side we have system and resource isolation, user-level permissions, access management,

protection against malware, backdoors, and trojans. and in SaaS we have data segregation, data access and policies, and web

application security. notice here, as we move from IaaS to PaaS to SaaS, the list of considerations gets shorter, and if you remember back to the

shared responsibility model, the customer has the most responsibility in the IaaS category, and the cloud service provider takes on more and more responsibility as

we move to the right. so your considerations are fewer as we move from left to right on this page. the attack surface, shared responsibility, and data sensitivity all influence

attack and defense strategies but if you go back to the shared responsibility model it makes sense why we see fewer concerns as we move from left to

right on this chart. so, virtualization-focused attacks. there is VM escape, where an attacker gains access to a VM, then attacks either the

host machine that holds all the VMs, the hypervisor, or even some of the other VMs. for protection from VM escape, we ensure patches on hypervisor and VMs are

always up to date, guest privileges are low, and server-level redundancy and intrusion prevention and detection are also in place and in effect

there's VM sprawl, when unmanaged VMs have been deployed onto your network, and because IT doesn't know they're there, they may not be patched and protected, and

thus they're more vulnerable to attack. so to avoid VM sprawl: enforcement of security policies for adding VMs to the network, as well as periodic scanning to

identify new virtualization hosts on our network, and these would apply to both VMs and the VM container hosts that we use in containerization, like Docker and

Kubernetes. application attacks: these are attacks attackers use to exploit poorly written software. we have rootkits, which are escalation of

privilege these are freely available on the internet and they exploit known vulnerabilities in various operating systems enabling attackers to elevate privilege on a system

and we can stop most rootkit-based threats by simply keeping security patches up to date. anti-malware software is good, EDR/XDR

Solutions installed on our host to watch for malicious activity backdoor attacks these are undocumented command sequences that allow individuals with knowledge of the back door to

bypass normal access restrictions these are often used in development and debugging operations a back door for the developer countermeasures here would be firewalls

anti-malware, network monitoring, and code review to catch these backdoors before they make it into production. we want to make sure that backdoors don't exist, generally speaking.
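one of the code-review countermeasures just mentioned can be partially automated. here's a hedged sketch of scanning source text for patterns that often indicate backdoors, like hardcoded credentials or magic debug accounts. the patterns are illustrative and far from exhaustive; a real review still needs human eyes:

```python
# Hedged sketch of automated code review for backdoor indicators.
# The two patterns (hardcoded secret, magic "debug" account) are
# illustrative assumptions, not a complete ruleset.
import re

SUSPICIOUS = [
    re.compile(r"password\s*=\s*['\"]\w+['\"]", re.IGNORECASE),      # hardcoded secret
    re.compile(r"if\s+user\s*==\s*['\"]debug['\"]", re.IGNORECASE),  # magic account
]

def flag_lines(source: str) -> list:
    """Return (line number, line) pairs that match a suspicious pattern."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(p.search(line) for p in SUSPICIOUS):
            hits.append((lineno, line.strip()))
    return hits

code = 'user = input()\nif user == "debug":\n    grant_admin()\npassword = "hunter2"\n'
print(flag_lines(code))  # flags the magic account and the hardcoded secret
```

tools in the same spirit (static analysis, secret scanners) typically run as approval gates in the CI/CD pipeline.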

Network attacks there's denial of service which is a resource consumption attack intended to prevent legitimate activity on a victimized system and then there's distributed denial of service

DDOS which is a denial of service attack utilizing multiple compromised computer systems as sources of attack

countermeasures are firewalls, routers, intrusion detection, a SIEM solution, disabling broadcast packets entering and leaving our network, disabling echo

replies, and patching. DoS attacks are really a class of attacks, there are many different types of denial of service and distributed denial of service attack, so we consider

these a class of attacks. so, types of distributed denial of service: you have network-based attacks, which target flaws in network protocols, they often use botnets,

techniques like UDP/ICMP flooding, SYN flooding. application attacks exploit weaknesses in the application layer, layer 7, by

opening connections and initiating process and transaction requests that consume finite resources like disk space and available memory. we have

operational technology or OT DDoS attacks, these target weaknesses of software and hardware devices that control systems in factories, power

plants, and other industries, like IoT devices. they often target weaknesses using the network and application techniques that we described above in the other two. there are a few effective

countermeasures to DDoS attacks: intrusion detection and prevention, rate limiting, limiting the number of requests that can come to a system in a given period of time.
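the rate-limiting countermeasure just described is often implemented as a token bucket. here's a minimal sketch — the capacity and refill numbers are illustrative assumptions, not anyone's production values:

```python
# Minimal token-bucket sketch of rate limiting: each client may burst up to
# `capacity` requests, with tokens refilled over time. Numbers are illustrative.
import time

class TokenBucket:
    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Spend one token if available; refill based on elapsed time."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit: drop or delay the request

bucket = TokenBucket(capacity=3, refill_per_sec=1.0)
print([bucket.allow() for _ in range(5)])  # the burst beyond capacity is rejected
```

CSPs and web application firewalls apply the same idea per source IP or per API key to blunt flood-style attacks.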

firewall egress and ingress filters. and your cloud service providers will generally have DDoS protections built into their platform, so there's really an

invisible layer of protection that's there by default, and they even in some cases have a premium SKU of DDoS protection you can buy that has

additional levels of configurability for your cloud services and your environment. and wrapping up section 1.4 we have devops security. so devops relies heavily

on deployment automation to deliver continuous integration and continuous delivery that CI CD automation we talked about earlier and security control

should be implemented to mitigate risks so you have two categories of security controls you have technical controls like automated software scanning

vulnerability scanning web application firewalls software dependency management access and activity logging and application Performance Management and

then we have administrative controls like developer application security training documented policies and procedures code review approval gates in

our CI CD process and that brings us to section 1.5 evaluate cloud service providers

in fact section 1.5 is the last in domain one so we'll finish up with a look at verification against criteria and we'll touch on ISO

IEC 27017 as well as the payment card industry data security standard, or PCI DSS, and we'll talk about system product certifications. we'll have a look at

common criteria as well as the federal information processing standard, or FIPS 140-2. so let's start with ISO/IEC

27017 so this provides the guidelines for information security controls applicable to the provision and use of cloud services it provides cloud-based

guidance on several ISO IEC 27002 controls along with seven Cloud controls that address who is responsible for what between the cloud service provider and

the cloud customer the removal and return of assets when a contract is terminated protection and separation of the

customer's virtual environment virtual machine configuration administrative operations and procedures associated with the cloud environment

customer monitoring of activity within the cloud, and virtual and cloud network environment alignment. PCI DSS stands for payment card industry

data security standard it's a widely accepted set of policies and procedures intended to optimize the security of credit debit and cash card transactions

and it's not a government enforced regulation at all it was actually created jointly in 2004 by four major credit card companies Visa Mastercard Discover and American Express and it's

enforced contractually between these card companies and their vendors their merchants so it's based on six major objectives

a secure network must be maintained in which transactions will be conducted card holder information must be protected wherever it's stored

systems should be protected against the activities of malicious hackers cardholder data should be protected physically as well as electronically

and networks must be constantly monitored and regularly tested and finally a formal information security policy must be defined

maintained, and followed. next up we have ISO/IEC 15408, common criteria, which enables an objective evaluation to

validate that a particular product or system satisfies a defined set of security requirements it ensures customers that security products they purchase have been thoroughly tested by

independent third-party testers and meet customer requirements. the certification of the product only certifies product capabilities, if it's misconfigured or mismanaged,

the software is no more secure than anything else the customer might use. it's designed to provide assurances for security claims by vendors, and in fact common criteria is used

almost exclusively by government agencies. let's talk about FIPS 140-2, that's federal information

processing standard. it was established to aid in the protection of digitally stored unclassified yet sensitive

information. it was developed by the National Institute of Standards and Technology, NIST, for use in computer systems by non-military American

government agencies and government contractors. so there are three FIPS security levels you want to be familiar with. there's level one, which is the lowest

level of security, there's level two that specifies the security requirements for cryptographic modules that protect sensitive information, and level three

that requires physical protections to ensure a high degree of confidence that any attempts to tamper are evident and detectable. and as we close out domain one, I want to

point you to some useful documentation from csps and Industry groups with guidance on cloud design and security so in the architecture category

we have the AWS well architected framework the Azure well architected framework and the Google Cloud architecture framework from industry groups we have Enterprise

architecture reference guide from the cloud security Alliance the cloud computing reference architecture from nist and these really obviously focus on architecture more than security but

there's going to be some security content in there now security focused we have the Microsoft cyber security reference architecture the AWS security

reference architecture and the Google Cloud security foundations guide now from the industry we have the Enterprise Cloud security architecture from Sans

the security technical reference architecture from CISA, and the cloud computing security reference architecture from NIST. skimming the SANS, CISA, and NIST docs

may be helpful for the exam purely optional if you're curious and that's a wrap on domain one

so let's get into domain two cloud data security and I will cover every topic mentioned in the official exam syllabus I will also provide examples and Concepts when possible and in this

particular installment I'll actually provide a little show and tell in a couple of cases in a live Cloud environment just to give you that additional bit of context to ensure that

these Concepts really sink in this is a multiple choice exam but a little real world exposure always helps let's take a quick look at the exam Essentials those areas the official

study guide says will Factor prominently on exam day we have risk and controls in each phase of the cloud data lifecycle which risks appear in each phase and

which controls should be used to address them. various cloud data storage architectures: we touched on these in the building blocks in part one and will expand on that

coverage here in domain 2. how and why encryption is implemented in the cloud, the role of cryptography, encryption, key and certificate management, HSMs.

practices of obscuring data: masking, anonymization, tokenization, and a few others. elements of data logging, storage, and analysis and their importance in the

data lifecycle, as well as the importance of egress monitoring. so this will include data loss prevention, identification through tagging, pattern matching, and labeling. what you really

see here is the role of all of these in the cloud secure data life cycle so we need to be familiar with these Technologies but also where and why we

apply them throughout the secure data life cycle data flows and their use in a cloud environment and we'll take a look at an actual data flow diagram I put together for you so you can see the

concept in action the purpose and method of data categorization and classifications how to assign data categories and classifications as well as data mapping and labeling again all

in the context of the secure data life cycle roles rights and responsibilities of data ownership so roles like data subject owner controller processor and custodians will touch on the most

important roles for this exam data Discovery methods the difference between structured unstructured and semi-structured we see that semi-structured category pop in with the

2022 update of this exam. objectives and tools for information rights management, and finally policies for data retention,

deletion And archiving so retention and Disposal formats which we touched on in domain one and how these affect

regulations and our policy life cycle. so in section 2.1, which is describe cloud data concepts, we'll get right to it, starting with a look at the cloud data

lifecycle phases where we need to apply these Technologies to our data throughout its lifecycle we'll touch on data dispersion as well as data flows

and we'll have a look at the data flow diagram I mentioned a moment ago now the cloud secure data lifecycle this is the model put forward by the cloud security Alliance and it starts in the create

phase data can be created by users a user creates a file for example data can also be created by a system a system logs user access for example

now in the store phase to ensure data is handled properly it's important to ensure data is classified as soon as possible only through classification can

we apply appropriate security controls, and ideally data is encrypted at rest, and in the cloud that is doubly important. in the use phase, data should be

protected by adequate security controls based on its classification and sharing refers to anytime data is in use or in transit over a network

next we have the archive phase archival is sometimes needed to comply with laws or regulations that require retention of data and finally the destroy phase when

data is no longer needed it should be destroyed in such a way that it is not readable nor is it recoverable crypto shredding happens in this phase we talked about crypto shredding as a

more secure method of data destruction, and if you see a question on the exam around data destruction, odds are crypto shredding is going to be the answer. you'll want to know the cloud

secure data lifecycle well for the exam, and we'll touch on various phases of the secure data lifecycle throughout this domain. data dispersion: this is a core principle

of business continuity that says it's important that data is always stored in more than one location in other words data is dispersed across multiple locations and this is actually easier in

the cloud because the CSP owns the underlying complexity that delivers site-level resiliency, and cloud storage for IaaS includes several different levels of

storage redundancy including local where we see replicas within a single data center Zone where we see replicas to multiple data centers within a region

and Global where we see region level resiliency where replicas are backed up to a backup region we talked about those local Zone and Global Concepts back in

domain one in section 1.2. so, data flows. let's take a look at a data flow diagram, which is useful to gain visibility to ensure that adequate

security controls are implemented throughout our Cloud infrastructure so for example in our CSP environment this may be Azure AWS Google Cloud platform

we may Implement a database subnet which would typically be private an app subnet also often private so by private I mean not directly accessible from the

internet then we have a perimeter subnet this is internet facing so we may have an API Gateway we may have a web app firewall for user requests coming to our website but you see here we have Micro

segmentation so we've carved up our Network so we can secure these individual workloads appropriately to restrict Ingress and egress only allow

traffic from expected endpoints on expected ports and then we'll map the flow of data through our infrastructure based on the types of requests so for

example a system API request and we see the data flow will map our user request a typical HTTP request coming through a web app firewall hitting a front-end website which may also hit that backend

database but you see we've called out authentication and authorization here and we have our data flows and our security controls we can call out at

each layer as appropriate so we talked about network security groups as an example back in domain one so just an example but that is basically the data flow diagram concept you're going to map

out your infrastructure your application your data flows and your security controls and you'll want to be familiar with the benefits of the data flow diagram for the exam so our benefits include

decreased development time and faster deployment of new system features with reduced security risk, and visibility into data movement, which is going to be critical for regulatory compliance, where

data security is often mandated in law. so we need to understand how our data is flowing so we can add appropriate security controls at every layer.
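the data flow diagram described above can even be checked programmatically. here's a hedged sketch — the subnet names mirror the example walkthrough (perimeter, app, db), but the structure and control names are assumptions for illustration, not any CSP's model:

```python
# Hedged sketch of using a data flow diagram programmatically: represent
# flows between subnets and verify that every flow arriving at a private
# subnet passes through a named security control. Names are illustrative.

PRIVATE = {"app-subnet", "db-subnet"}  # not directly internet-accessible

# (source, destination, control applied on the flow, or None if missing)
flows = [
    ("internet", "perimeter-subnet", "web-app-firewall"),
    ("perimeter-subnet", "app-subnet", "network-security-group"),
    ("app-subnet", "db-subnet", None),  # missing control: should be flagged
]

def unprotected_flows(flow_list):
    """Return flows into a private subnet that have no control applied."""
    return [(src, dst) for src, dst, control in flow_list
            if dst in PRIVATE and control is None]

print(unprotected_flows(flows))  # the app-to-db flow needs a control
```

treating the diagram as data like this is one way the risk-assessment and compliance activities mentioned next can be repeated automatically as the environment changes.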

encrypting at rest, encrypting in transit, encrypting in use. so some compliance frameworks actually require data flow diagrams to capture specific information, like geographic

location of data flows or ownership of systems where data is flowing. bottom line: creating that data flow diagram can be both a risk assessment

activity and a crucial compliance activity. so moving on to 2.2, which is design and implement cloud data storage architectures.

so here we'll touch on storage types as well as threats to storage types now for the exam you want to be familiar with the types of storage and the security

concerns associated with storage for all your cloud computing categories so we're talking about infrastructure as a service platform as a service and software as a service now if you watch

domain one you may recall we covered the storage types in the building block technology section of domain one if you've not watched The Domain one video definitely worth going back and having a

look, but I'll go ahead and touch on these again briefly here. so starting with the IaaS category, we have raw storage, that would be physical media, and you know in the private cloud world that's

what we'd use to map a VM to a LUN on a storage area network. volume storage attached to an IaaS instance, so there we're talking about a lettered

volume in the form of a virtual disk. and then object storage, that's your S3 storage bucket or Azure Storage, where you commonly see blob storage.

in the PaaS category we have structured data, that's your relational databases like SQL, MySQL, Postgres, and then unstructured or big data, sometimes called NoSQL.

and in the SaaS category we have information and storage management, that's your data entered via the web interface of a SaaS solution. content or file storage, that's your file-based

content like you'd see with Box, Dropbox, OneDrive. ephemeral storage, that's any temporary data storage, for cache, buffer,

session data, swap volume. and then finally content delivery network, that's geo-distributed content for a better user experience, or UX for

short so we see cdns used to place content closer to users around the world to deliver that content more quickly be that web content file content video

content, etc. these are just a few key examples, not an exhaustive list, but I wanted to make sure you're prepared for the exam. let's shift gears and look at threats to

storage types so there are Universal threats to data at rest regardless of location be that on-premises or in the cloud naturally we're going to be more focused on the cloud in the ccsp exam

but let's start with a look at universal threats from the perspective of the CIA triad. so unauthorized access threatens confidentiality,

improper modification of data threatens integrity and loss of connectivity threatens availability of that data other threats

include jurisdictional issues denial of service data corruption or destruction theft or media loss malware and ransomware

and improper disposal now all of these can happen in the cloud it's largely a question of who's responsible for prevention and Recovery so let's run through these one by one so

we'll start with unauthorized access so a user accessing data storage without proper authorization presents an obvious security concern right the customer must

Implement proper access control the CSP must provide adequate logical separation if we go back to the shared responsibility model we'll remember that

a customer has responsibility for access control at least to some degree, even in the SaaS model. unauthorized provisioning: this is primarily a cost and operational concern,

so ease of use can lead to unofficial use unapproved deployment and unexpected cost that unapproved deployment and unexpected cost comes from what we call

shadow IT, a very common issue where folks lay down a credit card, they start using cloud services that aren't formally approved by the IT department

then loss of connectivity pretty straightforward loss of connectivity for any reason whether that's network connectivity access controls authentication services that takes away

availability of our data jurisdictional issues so data transfer between countries can run afoul of legal requirements regulatory requirements and

privacy legislation bars data transfer to countries without adequate privacy protection and the customer definitely Bears some responsibility by customer I

mean the consumer of cloud services certainly in the gdpr context I'm going to think about businesses in Germany a customer using a cloud which would be a

business is going to Bear responsibility for making sure they meet the Privacy requirements the laws and legislation of

the country of origin and then denial of service I mean in the event a network connection is severed between the user and the CSP your csps are going to be better prepared to defend against

distributed denial of service attacks and other disruptive attacks at scale they have the infrastructure to support they have the services in place many of

your large csps have built-in DDOS defenses that they don't even charge you for protections that you may not even be aware of as a customer of those csps

then there's data corruption or destruction so that's human error and data entry malicious insiders hardware and software failures natural disasters

that render data and storage media unusable so some of these issues intentional others maybe not but all concerns we must protect against

that means implementing least privilege access role-based access control or rbac we call that and off-site data backups then we have theft or media loss so in the cloud

csps retain responsibility for preventing the loss of physical media through appropriate physical security controls even in the infrastructure as a service model where we have physical

servers with hypervisors hosting our VMS where the customer has the most responsibility of any Cloud Model the CSP still owns the care and feeding of

the physical server and the associated physical media malware and ransomware so ransomware not only encrypts data stored on local drives such as in an IaaS model but it

also seeks common cloud storage locations like SaaS apps so think Box Dropbox or OneDrive ransomware that infects an endpoint a client endpoint

like a Windows workstation will try to crawl into those SaaS folders and encrypt that data responsibility there is going to vary by Cloud category though

improper disposal so ensuring that Hardware that has reached the end of its life is properly disposed of in such a way that data cannot be recovered now in that case the CSP is definitely

responsible for Hardware disposal I want to shift gears and talk ransomware for just a moment because this is such a significant threat to virtually every organization so we have

some common countermeasures to ransomware so backing up your computer storing those backups separately so if the files on a computer are encrypted we can restore those from an alternate

location file Auto versioning is very handy so being able to revert to a previous version of a file now some of your cloud-hosted email and file storage Services ease that process Microsoft

OneDrive is one that comes to mind that offers access to 500 previous versions of a file although you can imagine that's handy it's also going to be very labor intensive if you're restoring

files one at a time in that fashion so that's where a backup stored off site can come in very handy and perhaps backing up all the files in your SaaS service
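the auto-versioning recovery described above can be sketched as a tiny versioned store; the class name and structure here are invented for illustration and are not any real service's API, though the 500-version cap mirrors the OneDrive example

```python
# toy illustration of file auto-versioning as a ransomware recovery technique;
# everything here is invented for illustration, not a real backup service
class VersionedStore:
    def __init__(self, max_versions=500):  # OneDrive-style version cap
        self.max_versions = max_versions
        self._versions = {}  # path -> list of historical contents

    def save(self, path, content):
        history = self._versions.setdefault(path, [])
        history.append(content)
        # drop the oldest versions once we exceed the cap
        if len(history) > self.max_versions:
            del history[0 : len(history) - self.max_versions]

    def current(self, path):
        return self._versions[path][-1]

    def restore_previous(self, path):
        # revert to the version before the latest, e.g. after ransomware
        # overwrote the file with ciphertext
        history = self._versions[path]
        if len(history) < 2:
            raise ValueError("no earlier version to restore")
        history.pop()  # discard the bad latest version
        return history[-1]

store = VersionedStore()
store.save("report.docx", "quarterly numbers")
store.save("report.docx", "X9!ciphertext!X9")  # ransomware encrypts the file
restored = store.restore_previous("report.docx")
print(restored)  # quarterly numbers
```

as the transcript notes, restoring file by file this way is labor intensive, which is why a separate off-site backup of the whole store remains important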

so preventative techniques updating and patching computers just shutting the door on known vulnerabilities using caution with web links and email

attachments verifying email senders and email is the most common way in the door for ransomware attacks through web links and email attachments that

are carrying that malicious payload we really just need to be careful what we open now there are preventative software programs that give us protections at the

email layer that will detonate email attachments that will check web links to make sure they're not malicious before we are directed to that link there are

AI driven cloud services that offer help with these so you'll find endpoint focused services you know XDR whether that's Microsoft 365 Defender or any of your

third parties you have email based Protections in services like proofpoint or Microsoft Defender for office 365 that give us those email protections

with the detonation chamber to make sure that our web links and our attachments are safe but user awareness training that's the most important defense of all if we can

teach our users to exercise caution when looking at messages from external senders and being careful about the links that they click and the attachments they download that's going

to be the best way to prevent ransomware exposure so let's talk Regulatory Compliance so certain cloud service offerings may not meet all the organization's compliance

requirements which leads to two security concerns and the customer Bears responsibility in making sure that the services they are subscribed to meet the organization's compliance requirements

but our two main concerns here are the consequences of non-compliance these can include fines or even suspension of business operations the second is data

protection now requirements may include the use of specific encryption standards handling retention the geographic location where the data

is stored now all the regulatory standards you need to be prepared for this exam are going to be covered in this exam cram series I'll be touching on them throughout the six domains but

if I don't mention it it wasn't called out in the exam syllabus and that brings us to 2.3 design and apply data security Technologies and

strategies so in this section we'll touch on encryption and Key Management hashing data obfuscation techniques including masking and anonymization

tokenization data loss prevention a big Topic in and of itself as well as management of Keys Secrets and certificates and certainly at least a couple of these topics or

areas I find folks tend to be a bit less familiar so we're going to touch on some fundamentals I'll provide clear explanations examples where I can and we'll even jump over into a cloud

service provider portal and look at a couple of services so you can see these Concepts in a real world context now this is a CSP agnostic exam it doesn't

focus on any one cloud service provider but it always helps to see examples and I'll provide those where I can give them to you let's start with symmetric versus

asymmetric encryption symmetric encryption relies on the use of a single shared secret key now it lacks support for scalability easy key

distribution and non-repudiation and by scalability I mean scaling from one to many users because key distribution is a real challenge with symmetric encryption on its own

asymmetric encryption relies on public private key pairs for communication between parties it supports scalability easy key distribution and non-repudiation

symmetric encryption tends to be faster therefore it's used for bulk data encryption asymmetric encryption is scalable it's easy to scale to many

parties so we often use asymmetric encryption to distribute keys for symmetric algorithms the two can work together in that respect encryption is a difficult topic for many so we're going

to dig just a bit deeper here so symmetric encryption to touch on this again the sender encrypts the data using that single shared key the recipient then

decrypts that data using that same shared key in asymmetric we see the sender encrypting data using

the public key of the recipient and the recipient decrypts the message using their own private key so let me put that into an example to make it a

bit more clear so we have Franco who sends a message to Maria requesting her public key Maria sends her public key to Franco he

then uses Maria's public key to encrypt the message and sends it to her Maria uses her own private key to decrypt the message because that private key is

unshared and that is the only key that can decrypt the message that was encrypted with Maria's public key we know that only Maria will be able to

decrypt that message assuming she has kept her private key private so recapping there on asymmetric Keys public keys are shared among

communicating parties so anyone can encrypt a message using another user's public key the private keys are kept secret

so to encrypt a message you use the recipient's public key to decrypt a message the recipient uses their own private key to decrypt the message that

was encrypted with their public key for digital signatures to sign a message you use your own private key to validate a signature you use the sender's public

key each party has both a private key and a public key in asymmetric encryption

so if we're sending messages back and forth with asymmetric encryption we each need a private key to decrypt the message sent by the opposing party

encrypted with our own public key the scenario we saw in that example a moment ago often comes in the form of certificates which contain a public private key pair and the trust model

explains how different certification authorities which issue certificates trust each other and how their clients will trust certificates from other

certification authorities the four main types of trust models that are used in public key infrastructure in certificate services that is are bridge

hierarchical hybrid and mesh we'll dive into pki a bit further in just a couple of minutes in case you're unfamiliar so let's put a pin in that I want to talk about key management for a moment and

key escrow addresses a common problem that is the possibility that a cryptographic key may be lost the concerns usually come with symmetric keys that single shared key scenario

where if we lose that key our data is encrypted and can't be decrypted or with the private key in asymmetric cryptography the key that is not shared

now if that occurs there's no way to get the key back then the user can't decrypt the message in either case so organizations establish key escrows

to enable recovery of lost keys so having a copy of those keys with a third party with an external entity now let's

talk Key Management strategy for our encryption key life cycle so encryption Keys should be generated within a trusted secure cryptographic module

and they should use strong random Keys using cryptographically sound inputs like random numbers fips 140-2 validated modules provide tamper resistance and

key Integrity we'll dive into fips 140-2 in a couple of other spots in the series now encryption keys should be distributed securely to prevent theft or compromise

during Transit we talked about that challenge of key distribution with symmetric algorithms best practice is to encrypt keys with a separate encryption key while

Distributing to other parties that's where we can take that symmetric key that shared key and encrypt it for

transport using an asymmetric algorithm and you want to plan for securely transferring symmetric keys and distributing keys to the key escrow

agent as well so in case a key is lost we can recover and decrypt that data in terms of storage encryption Keys must be protected at rest and should never be

stored in plain text and this includes keys in volatile and persistent memory so we shouldn't have a scenario

where our encryption Keys remain unencrypted in memory of a computer or other device that concept of Secure Storage extends

to a key stored in a key Vault or on a physical device really anywhere that key may be stored and we also want to consider handling in the process of storing copies for retrieval if a key

is ever lost that concept of key escrow we discussed a moment ago usage focuses on using keys securely primarily for access controls and accountability authentication and

authorization revocation refers to a process for revoking access at separation like employee separation or in the event of a

policy breach or a device or key compromise perhaps a private key has been compromised and stolen through an infected workstation then we need to

revoke that compromised key now if that compromised key were the private key of a certificate and a pki scenario we would revoke the certificate on the

issuing certification Authority we also need a process for archiving Keys no longer needed for routine use in case they are needed for existing data

now key destruction is the removal of an encryption key from its operational location that's the last phase of the key life cycle

and key deletion goes further and removes any information that could be used to reconstruct that key for example mobile device management

systems remove certificates from a device during a device wipe or retirement I'm talking about MDM systems like Microsoft Intune or AirWatch
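the life cycle phases just described, from generation through revocation, archival, and destruction, can be sketched as a small state machine; the class and state names are invented for illustration, and the key material comes from a CSPRNG as the generation guidance above recommends

```python
import secrets

# minimal sketch of the key life cycle; states are illustrative labels
class ManagedKey:
    def __init__(self, size_bytes=32):
        # generation: cryptographically sound randomness from the OS CSPRNG
        self._material = secrets.token_bytes(size_bytes)
        self.state = "active"

    def use(self):
        if self.state != "active":
            raise PermissionError(f"key is {self.state}, not usable")
        return self._material

    def revoke(self):
        # e.g. employee separation or a suspected compromise
        self.state = "revoked"

    def archive(self):
        # kept only to decrypt existing data, not for new encryption
        self.state = "archived"

    def destroy(self):
        # destruction removes the key from its operational location;
        # zero the material so this object can no longer reveal it
        self._material = b"\x00" * len(self._material)
        self.state = "destroyed"

key = ManagedKey()
material = key.use()   # succeeds while active
key.revoke()           # any further use() now raises PermissionError
```

note that key deletion in the transcript's sense goes further than this sketch, removing every copy that could be used to reconstruct the key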

so for the exam you will want to understand where encryption can be deployed to protect the organization's data and systems and not only knowing the layers and the services where you can apply encryption but also your

options for key storage and management and some Basics about whether keys are CSP managed or they're self-managed and why that would make a difference

so for key storage many of your csps offer fips compliant virtualized hsms that's the Hardware security module that

we can leverage to securely generate store and protect our secrets our cryptographic keys in this case now self-managed keys are typically not the

default and they may have a cost it's worth understanding what would drive the need to self-manage your keys because the CSP is generally speaking always going to default to offering to manage

those keys for you now organizations that use multiple Cloud providers may need to retain physical control over key management and use a

byok or bring your own key strategy so do know that generally to let the CSP manage the keys is a good idea unless you have requirements that mandate your

organization manages the keys for example Regulatory Compliance sometimes necessitates byok or self-managed strategies to recap multi-cloud and Regulatory

Compliance are two scenarios that can drive the need for byok or self-managed key functionality there are a few other Cloud encryption

scenarios related to storage cloud services and applications you should be familiar with on the exam we'll touch on a few of those here starting with Storage level encryption that's

encryption of data as it's written to storage utilizing keys that are controlled by the CSP at least by default unless you opt otherwise and

generally speaking you'll find that your csps now AWS and Azure for example will encrypt data in their storage accounts by default just out of an abundance of caution

volume level encryption provides encryption of data written to volumes connected to specific VM instances utilizing Keys controlled by the customer the underlying technology in

this case is usually BitLocker for Windows and dm-crypt for Linux and then we have object level encryption which is encryption of objects as they

are written to storage in which case the CSP likely controls the keys and could potentially access the data that would typically be an Insider scenario a

malicious Insider but possible so to give you a bit of context I want to do a little show and tell of Storage level encryption in the cloud and again

the ccsp exam is cloud service provider agnostic I'm going to show you an example in Microsoft Azure AWS has similar functionality you're not going

to see any Hands-On questions on the exam but this will be important context to help you understand what's typically available and how it works so here on the Microsoft Azure portal

I'll click on storage account and then I'm going to click on one of my existing storage accounts and we'll start under encryption and here I can see with the message

by default this storage account is encrypted it encrypts the data as it's written into the data center and then automatically decrypted when we access

it and for storage services in your major cloud service providers Azure AWS Google Cloud platform I would say that's generally speaking the default you'll

notice that infrastructure encryption is possible so this is an additional layer of encryption a second layer of encryption they sometimes call Double encryption

and you'll also see here that it shows me that Microsoft is managing the keys but we do see that there is a customer managed Keys option should that be necessary for my scenario

now I'm going to scroll down a bit and look at some other security related settings so I see here that we can require secure transfer that means we

are disallowing HTTP without that TLS encryption we are disallowing unsecure versions of server message block

commonly used in file Services I can allow or disallow blob Public Access so I can configure even Anonymous access if

my scenario requires it and I can also allow or disallow account key access when we disable this any requests to the account that are authorized with a

shared key would be denied disabling that setting once enabled could allow me to reverse a previous decision that I regret I'm going to go

up under data management here and click on data protection and just see what other features are available to me here so I see I can enable operational backup

with Azure backup point in time restore soft delete think of soft delete as a recycle bin for your deleted items from your storage blobs or containers

we can enable permanent delete for soft deleted items some version tracking and a blob change feed so really some audit trail of a fashion and if I scroll up

here under networking I'll click on networking under security and networking you'll see here that I can enable or disable public network access I can enable from selected

virtual networks and IP addresses and you see I can add existing virtual networks or new Networks and then I also have a resource firewall here a storage

resource firewall I can add my own IP address you see it automatically identifies my IP but I can add IP addresses or CIDR ranges and when I scroll down here under exceptions you'll

notice that they allow Azure services on The Trusted Services list to access this storage account that's going to be very handy if we're using multiple Azure services and we need one of those

services to access data in this storage account and that concept of essentially a resource firewall that we see here is not specific just to storage you'll see

resource firewalls come up in multiple cloud service context enabling us to restrict the flow to cloud services from the internet so we

see here in the storage example that's definitely a handy feature so I hope that little show and tell was helpful so back to our Cloud encryption

scenarios next up is file level encryption which is implemented in client apps word processing apps like Microsoft Word or collaboration apps like SharePoint

how encryption is implemented will vary by app and CSP platform of course then there's application Level encryption which is implemented in an application typically using object

storage data entered by the user is typically encrypted before storage and then we have database level encryption this is transparent data

encryption it encrypts database files logs and backups we have column level and row level encryption as well as data masking now the functionality is going

to vary by your relational database management system so Microsoft SQL functionality will differ slightly from MySQL and postgresql typically they all

have some flavor of every one of these options you'll need to be familiar with data obfuscation techniques for the exam and I thought we'd cover these through

an example a use case reducing our gdpr exposure so if we wanted to reduce or eliminate our gdpr requirements we could try data anonymization which is the

process of removing all identifying data so it's impossible to identify the original subject or person if we do this effectively gdpr is no longer relevant

for the anonymized data on the other hand we can no longer recognize the original subject or person so this is only good if we don't need the data anonymization is sometimes called

de-identification in fact de-identification was the term used in the 2019 version of the exam we don't see that in the syllabus any longer then we have pseudonymization which is the

de-identification procedure using pseudonyms or aliases to represent the original subject or person this can result in less stringent

requirements than would otherwise apply under gdpr you want to use this if you need the data and you want to reduce your exposure I find it helps to explain

hashing by talking about how hashing is different from encryption so encryption is a two-way function what is encrypted can be decrypted with the proper key

hashing is a one-way function that scrambles plain text to produce a unique message digest conversion of a string of characters into a shorter fixed length

value there's no way to reverse a hash if it's properly designed a few common uses would include verification of digital signatures

generation of pseudo-random numbers and integrity services so we see hashes used with a file hash for example I can hash

a file send it to another person they can then produce the hash for that same file if I'm using the md5 hash for example if the recipient produces the

md5 hash and it matches the hash of the original file we know that the Integrity is intact we received the same file that was sent
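that hash comparison is easy to sketch with Python's standard hashlib; the file contents here are invented for illustration, and sha256 is used since the transcript notes md5's collision problem

```python
import hashlib

# integrity check as described: hash the bytes on both ends, compare digests
def digest(data: bytes, algorithm: str = "sha256") -> str:
    return hashlib.new(algorithm, data).hexdigest()

original = b"contract v1 contents"   # what the sender hashed and sent
received = b"contract v1 contents"   # what the recipient got
tampered = b"contract v2 contents"   # an altered copy

print(digest(original) == digest(received))  # True: integrity intact
print(digest(original) == digest(tampered))  # False: file was altered

# fixed-length output regardless of input size, one of the five
# requirements of a good hash function listed below
print(len(digest(b"a")) == len(digest(b"a" * 10_000)))  # True
```

the one-way property means neither party can recover the contents from the digest alone; the hash only proves the copies match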

but that file hash comparison is Far and Away the most common that comes to mind so a good hash function has five requirements they must allow input of any length

they must provide fixed length output and they make it relatively easy to compute the hash function for any input they provide one-way functionality what

is hashed cannot be reversed and it must be Collision free and that Collision problem is one of the reasons we don't see the md5 hashing

function used anymore it's because it has a collision problem in certain scenarios we'll see md5 hashing used in file hashing in comparison but that's really about it these days

so data masking so we only have partial data left once we've masked data in a field for example a credit card would be shown as a series

of asterisks with only partial data we'll see a similar technique used for Social Security numbers commonly implemented within the database

tier but it's also possible in code of front-end applications we have another good opportunity here for a quick Show and Tell for database level encryption in the cloud we'll take

a quick look at Azure SQL database again the exam is CSP agnostic and the features we look at here will vary by relational database management system

but this will give you a good idea of some of the features available for a PaaS database solution in a major CSP so here in the Microsoft Azure portal I

click on SQL servers and I see the server instances for the Azure SQL database service so if I click on one of these servers I can scroll down here and under data management I can see some

information about backups so there's a built-in backup feature in this PaaS service and you'll notice it mentions databases are backed up automatically and the backups are listed below now if

I jump over to retention policies I can see a bit more information I can configure my retention policies and it mentions my point in time restores are available from 1 to 35 days based on

my preference and my long-term retention policies enable me to keep full backups for up to 10 years definitely very relevant when we're talking about data

retention later in this video now I'm going to go down just a bit further and under security I'll click on networking and here I'll see some familiar other features you'll notice I can enable or

disable public network access so it's either disable or selected networks and if I scroll down I can configure virtual networks I can gate which virtual

networks in my subscription can get to this particular service instance if I scroll down a bit further you see firewall rules so again that resource firewall functionality so I can add an

IP address or even a CIDR range and when I scroll a bit further I see the

server and I'm now going to click on SQL databases and we'll look at some database level functions under one of my database instances here and again I'm focused on security so I'm

going to scroll down and under security I'll look at auditing I see an option here to enable an audit trail so default audit settings include a set of action

groups so it's going to audit queries and stored procedures executed against this database as well as successful and failed login

going a bit further down the list I see Dynamic data masking and what I notice here is that it's even recommending a

field that it suggests I mask the email address field that's going to be personally identifiable information of a sort for a user and you'll notice here it tells me what

the mask function looks like by default so I have some native capabilities that are more or less provided for me with no additional effort very little

configuration effort and just a couple of clicks down here in the menu I see transparent data encryption which is on here so that's going to encrypt my

database file my backup files and my log files and just an FYI for later in this video you'll also notice here a data Discovery and classification feature so when we're talking about data

classification mapping and labeling a bit later we see there's some built-in functionality here for our structured data another data protection and obfuscation technique called out in the syllabus is

tokenization where meaningful data is replaced with a token that's generated randomly and the original data is held in a vault it's stateless stronger than encryption

the keys are not local and I'm going to compare that to a technology we called out earlier which is pseudonymization the de-identification procedure in which

personally identifiable Fields within a data record are replaced by one or more artificial identifiers or pseudonyms so reversal requires access to another

data source so in this case tokenization goes further than pseudonymization replacing your pseudonym with an unrecognizable token

data loss prevention so DLP is a system designed to identify inventory and control the use of data that an organization deems sensitive it

spans several categories of controls including detective preventative and corrective policies can typically be applied to

email SharePoint cloud storage removable devices and even databases and it's a way to protect sensitive information and prevent its inadvertent

disclosure it can identify Monitor and automatically protect sensitive information in documents and that automation is a very common and

important characteristic it monitors for and alerts on potential breaches and policy violations like oversharing but the protections travel with the document

file or other data preventing local override of DLP protections preventing local override is a key point there let's shift gears and talk Keys Secrets

and certificate management so keys are most often used for encryption operations and they can be used to uniquely identify a user or system keys should be stored in a tool that

implements encryption and requires a strong passphrase or MFA to access in the cloud that is typically a key vault which we talked about in domain one Now secrets are often a secondary

authentication mechanism used to verify that a communication has not been hijacked or intercepted certificates are used to verify the identity of a communication party and

they're also used for asymmetric encryption by providing a trusted public key remember the trusted public key is

used by the sending party to encrypt the data which is then decrypted by the recipient using their private key which hasn't been shared and we also talked

about how certificates in asymmetric encryption are often used to encrypt a shared session key or other symmetric key for secure transmission so it helps

us overcome that weakness of symmetric encryption which is key distribution secure key distribution next up we have Key Management Services

so all your csps offer a cloud service for centralized Secure Storage of application Secrets called a vault

and in Azure that's Key Vault in AWS it's called Key Management Service and in Google Cloud platform it's Cloud KMS now a secret in this context is anything

that you want to control access to like API Keys passwords certificates tokens or cryptographic keys and the service

will typically offer programmatic access via API to support devops and continuous integration and continuous deployment or delivery which is CI CD

access control is generally offered at the Vault instance level as well as to the secrets stored within so I think of that instance level

security as management plane security and then to the secrets that's data plane security but your secrets and keys can generally

be protected either by software or by a fips 140-2 level 2 validated HSM this is a good opportunity for another quick Show and Tell of a service you may

not have seen in the real world and that's key Vault for Secrets management so we'll take a quick look at some of the features in Azure key vault I'm here in the Azure portal I'm going

to search for key Vault and I'll look at one of my existing key Vault instances here in Azure I'll look under Access Control this is where we configure

access to the key Vault itself that management plane security I'm going to click on ADD role assignment and here I see a number of existing security roles

with key vault in the name which give me options for easily assigning least privilege access to delegates who need access to this key Vault so I can give

them just enough permissions now under access policies I can configure the permissions at the data plane to the secret types and the operations themselves we're going to look at my

permissions at the data plane here and I see I have permissions for keys for secrets for certificates and I'll scroll down here I see some other privileged operations some key

rotation permissions so quite a lot of granularity and permissions there now I'm going to jump down to the object menu here and here I'll see my secret

types I have Keys Secrets certificates and if we look at keys I'll just click generate or import and I can generate or import or restore a key from backup

we see I can choose my key type here RSA or elliptic curve my key size I can set activation and expiration dates I can enable or disable this key I have rotation policy options

exportability now more interesting perhaps are certificates and I think you'll find generally speaking some Advanced functionality across all your csps we'll

look at the Azure capabilities here so I can generate a new cert I can import an existing one I see here I can create a self-signed certificate perfectly okay for development purposes but I can also

issue a certificate from an integrated CA or a non-integrated CA so an integrated CA in this case is a trusted certificate Authority that has been

integrated into Azure key Vault to provide enhanced lifecycle capabilities so you'll notice here I can automatically renew the certificate at a given percentage lifetime or a number of

days before expiry so I have some Advanced functionality here and we'll go away from here I want to look at the properties just to show you one

more area where we see some Advanced functionality built into the key Vault feature so there's a soft delete feature think recycle bin for key Vault so the soft delete's been enabled for this

Vault and we can also set a retention period for deleted vaults and secrets so you see for deleted vaults we have 90 days and we have Purge protection so if we enable Purge protection it

enforces a mandatory retention period for deleted Vault and Vault objects so I have quite a lot of functionality in the key vault
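the management plane versus data plane split and the soft delete behavior seen in that tour can be sketched as a toy in-memory vault; the class name, methods, and behavior here are all invented for illustration and are not any CSP's actual API

```python
# toy in-memory vault: access to secrets (data plane) is checked separately
# from the vault object itself, and deletes are soft and recoverable
class ToyVault:
    def __init__(self):
        self._secrets = {}
        self._deleted = {}        # soft-deleted items, like a recycle bin
        self._data_plane = set()  # principals allowed to touch secrets

    def grant_data_plane(self, principal):
        self._data_plane.add(principal)

    def set_secret(self, principal, name, value):
        self._check(principal)
        self._secrets[name] = value

    def get_secret(self, principal, name):
        self._check(principal)
        return self._secrets[name]

    def soft_delete(self, principal, name):
        self._check(principal)
        self._deleted[name] = self._secrets.pop(name)

    def recover(self, principal, name):
        self._check(principal)
        self._secrets[name] = self._deleted.pop(name)

    def _check(self, principal):
        # least privilege: only explicitly granted principals get through
        if principal not in self._data_plane:
            raise PermissionError(f"{principal} lacks data-plane access")

vault = ToyVault()
vault.grant_data_plane("app1")
vault.set_secret("app1", "db-password", "s3cr3t")
vault.soft_delete("app1", "db-password")
vault.recover("app1", "db-password")   # soft delete makes this possible
print(vault.get_secret("app1", "db-password"))  # s3cr3t
```

purge protection in the real service goes one step further than this sketch, enforcing a mandatory retention period before anything soft-deleted can be permanently removed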

so that's just a quick tour if you've never seen a key Vault before now you have next I want to look at digital signatures with you digital signatures are similar in concept to handwritten

signatures on printed documents that identify individuals but they provide more Security benefits a digital signature is a hash of a message encrypted with the sender's private key
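
To make that mechanic concrete, here is a minimal sketch in Python of sign-with-the-private-key, verify-with-the-public-key. It uses textbook RSA with hardcoded Mersenne primes purely for illustration — real systems use vetted crypto libraries with padding schemes, never raw small-key RSA — and because the toy modulus is smaller than a SHA-256 digest, the hash is reduced mod n:

```python
import hashlib

# Toy RSA keypair -- illustration only, never use raw "textbook" RSA
p = 2**89 - 1    # Mersenne prime
q = 2**107 - 1   # Mersenne prime
n = p * q        # public modulus
e = 65537        # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

def sign(message: bytes) -> int:
    # hash the message, then encrypt the hash with the PRIVATE key
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    # decrypt the signature with the PUBLIC key and compare hashes
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

msg = b"wire $500 to vendor"
sig = sign(msg)
print(verify(msg, sig))                      # True  -> authentic and intact
print(verify(b"wire $5000 to vendor", sig))  # False -> integrity check fails
```

A verified signature gives you all three benefits at once: only the holder of the private key could have produced it (authentication and non-repudiation), and any change to the message changes the hash so verification fails (integrity).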

in a signed email scenario it provides three key benefits authentication it positively identifies the sender of the email

ownership of a digital signature's secret key is bound to a specific user so it gives us non-repudiation the sender cannot later deny sending the

message this is sometimes required with online transactions and integrity it provides assurances that the message has not been modified

or corrupted recipients know that the message was not altered in transit these Basics should be more than enough

for the ccsp exam if you see a digital signatures question so Key Management refers to the management of cryptographic keys in a cryptosystem and operational

considerations include dealing with generating Keys exchanging these Keys storing these Keys securely their use destruction of keys at the end of their

life cycle crypto shredding we talked about earlier as well as replacement of keys when they are lost or compromised design considerations

would include cryptographic protocol design your key server configuration the number of key servers their roles user

procedures and other relevant protocols now the certificate Authority sometimes called a certification Authority depending on the vendor you're working with

certificate authorities create digital certificates and own the policies related to those certificates and a pki hierarchy can include a single

certificate Authority that serves as the root certificate Authority and the issuing authority the one that's issuing the certificates this is not recommended

the reason being in a single layer pki hierarchy if the server is breached no certificate including the root certificate can be trusted your entire

pki hierarchy at that point is suspect and untrustworthy let's talk certificate types for a moment so we have a user certificate that's used to represent a

user's digital identity and more often than not that certificate is mapped back to a user account a root certificate is a trust anchor in

a pki environment it represents the root certificate Authority from which Trust of the entire chain is derived in a conversation where entities be those users or servers are authenticating one

another with certificates each side must trust the root of the opposing side if it's not trusted the certificate's not going

to be accepted but that root is the root certificate Authority a domain validated certificate is an

x.509 certificate that proves ownership of a domain name and extended validation certificates provide a higher level of trust in identifying The Entity that's using the

certificate these are commonly used in the financial services sector and our pki hierarchy includes an issuing CA

a subordinate CA or intermediate or policy CA it's sometimes called and the root CA or certificate Authority so let's

look at the roles of these three in a bit more detail so the root certificate Authority is usually maintained in an offline State it issues

certificates to new subordinate certificate authorities and the root CA other than when it needs to issue those certificates or perform an operation where it is absolutely required it's

powered off the subordinate CA is also called a policy CA or an intermediate CA it depends on the vendor you're working

with but that subordinate CA issues certificates to new issuing Cas and the issuing CA is where certificates for

clients servers devices websites Etc are all issued all of your day-to-day certificate issuance comes from here so that path that chain from the issuing

CA up to the root is your chain of trust and we could consolidate these functions into fewer servers you can have a two level hierarchy or even a single level

hierarchy but you're going to have a less resilient public key infrastructure in that single level hierarchy if the server is compromised your root of

trust is compromised as well so your entire pki is blown at that point you have to go to ground in a three-tier hierarchy if the issuing CA is

compromised we can recover without having to re-establish the entire org so let's just unpack a few important details about that subordinate CA that

intermediate CA and talk about how it helps us in the event of compromise so the subordinate CA regularly issues certificates so typically we don't have them staying offline as often as you would a root

they do have the ability to revoke certificates making it easy for us to recover if a breach does happen so if we have a breach at an issuing CA the

subordinate can revoke that issuing CA certificate we can issue a new one and deploy a new issuing CA now any certificates coming from that issuing server that was compromised those will

have to be reissued as well but we can recover so when that issuing CA is breached we just revoke the certificate and issue a

new one and then reissue those affected client certificates but a single compromised CA does not result in compromise of the root when we have

that multi-tier hierarchy if we have two or three layers we're good so let's talk about the certificate revocation list which contains information about any certificates that

have been revoked by a subordinate CA due to a compromise of the certificate or of the pki hierarchy Cas are required to publish crls but

it's up to certificate consumers the client if they check these lists and how they respond if a certificate has been revoked so it's up to client Behavior so let's talk about certificate

revocation so revoking or invalidating a certificate before it has actually expired a certificate in this case is effectively canceled and the certificate

serial number is added to that certificate revocation list or crl but parties checking the certificate to

verify Identity or authenticity must check the issuing authority on validity two potential options for tracking revocation are to ask for the crl or if

available the ocsp endpoint or service the online certificate status protocol endpoint

the endpoint to query for crl or ocsp is actually listed on the certificate itself and if the other client or service again doesn't check the crl or the ocsp for

validity they may accept an invalid certificate as valid the benefit of online certificate status protocol is that it offers a faster way to check a

certificate status compared to downloading a crl which contains a list of all the certificates that have been revoked in that organization and that

list can grow quite long so it's not great performance with ocsp the consumer of a certificate can submit a request to the ocsp endpoint listed on the

certificate to obtain the status of a specific certificate now a certificate signing request records identifying information for a

person or a device that owns a private key as well as information on the corresponding public key it's the message that's sent to the CA in order

to get a digital certificate created and the common name is the fully qualified domain name of the entity like a web server which you'll see on a

certificate you'll see a CN it might help you if we take just a minute and look at these properties on an actual certificate so if you just click the Windows key on a Windows machine and

type certificates you'll get the certificate snap in and I'll go to trusted Publishers and certificates and I'm going to look at this VPN certificate I use for testing every now

and again and I click on that cert and it brings up the properties here if I go to details I can see the serial number of the cert the signature algorithm so the cryptographic algorithm used

validity to and from dates if I scroll down just a bit here under the subject I'll see that common name the CN I mentioned right there on the top line

and I scroll a bit further I can see the usage of this certificate so it's a code signing cert I see the crl distribution

point so the crl distribution point or CDP gives us the path to the certificate revocation list and you see it's expressed in the form of a URL which

ends in a file with a DOT crl extension scroll down a bit further if I look at Authority information access this is where the ocsp endpoint is published the

online certificate status protocol endpoint and there you'll see that's also expressed as a URL so it's really just up to your client to make the call
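
The chain-of-trust and revocation behavior we've been discussing can be sketched as a toy model in Python. Everything here is invented for illustration — a "certificate" is just a dict, each CA holds its own revocation set as a stand-in CRL, and real X.509 adds cryptographic signatures and published CRL/OCSP endpoints:

```python
from itertools import count

# Toy three-tier PKI: root -> subordinate -> issuing CA -> leaf certificate
serials = count(1)

class CA:
    def __init__(self, name, issuer=None):
        self.name, self.revoked = name, set()
        # a root CA is self-signed: its certificate's issuer is itself
        self.cert = {"serial": next(serials), "subject": name,
                     "issuer": issuer if issuer else self}

    def issue(self, subject):
        return {"serial": next(serials), "subject": subject, "issuer": self}

    def revoke(self, cert):
        self.revoked.add(cert["serial"])  # publish to this CA's "CRL"

def verify_chain(cert, trusted_root):
    # walk issuer links toward the root, consulting each CA's CRL on the way
    while True:
        issuer = cert["issuer"]
        if cert["serial"] in issuer.revoked:
            return False              # revoked somewhere in the chain
        if issuer is trusted_root:
            return True               # reached our trust anchor
        if issuer.cert["issuer"] is issuer:
            return False              # self-signed root, but not one we trust
        cert = issuer.cert

root = CA("Root CA")
sub = CA("Subordinate CA", issuer=root)
issuing = CA("Issuing CA", issuer=sub)
leaf = issuing.issue("www.example.com")

print(verify_chain(leaf, root))  # True: unbroken chain of trust
sub.revoke(issuing.cert)         # issuing CA breached -> subordinate revokes it
print(verify_chain(leaf, root))  # False: everything below the revoked CA fails
```

Notice how the three-tier structure pays off: revoking the breached issuing CA's certificate at the subordinate invalidates only that branch, while the root and a freshly deployed issuing CA remain trustworthy.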

and moving on that brings us to 2.4 which is Implement data Discovery and in this section we'll be focused on structured data

unstructured data and new in the 2022 exam they mentioned semi-structured data directly here in the syllabus and we'll

also talk about the effects of data location on the data Discovery process let's start with a look at structured data this is data that's contained in

rows and columns like an Excel spreadsheet or a relational database so Microsoft Excel Microsoft SQL MySQL postgresql mariadb all relational

databases that would fall into this structured category it often includes a description of its format known as a data model or a schema which is an abstract view of the data's format in a

system we'll often hear relational databases referred to as schematized data and the data is structured as elements rows or tuples and given context through

that schema how do we discover data sensitive data in a structured database well metadata that describes the data is a critical

part of Discovery because it will give us some hint as to what the data represents semantics or the meaning of the data is described in the schema or the data model and explains

relationships expressed in data next we have unstructured data this is data that cannot be contained in a row column database and it does not have an

Associated data model there is no schema images video files social media posts generally fall into this category Discovery occurs through content analysis which attempts to parse all

data in a storage location and identify sensitive information so how does that content analysis that Discovery take place well there are a few methods pattern matching is one

which Compares data to known formats like credit card numbers DLP tools often have predefined sensitive data types that will look for credit card Social

Security numbers banking information and the like and many of these DLP tools will have some image-based recognition as well some OCR and other capabilities
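
A minimal sketch of how that pattern matching works: regular expressions find candidate matches, and a checksum such as the Luhn algorithm filters out random digit strings that merely look like card numbers. The regexes here are simplified illustrations, not any vendor's actual predefined sensitive information types:

```python
import re

def luhn_ok(number: str) -> bool:
    # Luhn checksum weeds out 16-digit strings that only look like card numbers
    digits = [int(d) for d in number[::-1]]
    total = sum(digits[0::2])
    for d in digits[1::2]:
        total += d * 2 - 9 if d * 2 > 9 else d * 2
    return total % 10 == 0

PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def find_sensitive(text: str):
    hits = []
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            value = re.sub(r"[ -]", "", match.group())
            if label == "credit_card" and not luhn_ok(value):
                continue  # matched the shape but failed the checksum
            hits.append((label, match.group()))
    return hits

doc = "SSN 219-09-9999 and card 4111 1111 1111 1111, plus junk 1234 5678 9012 3456"
print(find_sensitive(doc))
# -> [('ssn', '219-09-9999'), ('credit_card', '4111 1111 1111 1111')]
```

The junk digit run is correctly skipped because it fails the Luhn check, which is exactly why real DLP tools layer validation on top of raw pattern matching to cut false positives.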

there's lexical analysis which attempts to find data meaning and context to discover sensitive information that may not conform to a specific pattern and then there's hashing which attempts

to identify known data by calculating a hash of files and comparing it to a known set of sensitive file hashes this is only useful for data that does not change frequently
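
The hashing method can be sketched in a few lines — the fingerprint set and file names below are hypothetical stand-ins for documents an organization has already classified as sensitive:

```python
import hashlib
import tempfile
from pathlib import Path

# Hypothetical fingerprints of known sensitive files -- in practice these
# would come from documents already classified by the organization
KNOWN_SENSITIVE = {
    hashlib.sha256(b"2023 board meeting minutes - CONFIDENTIAL").hexdigest(),
}

def sha256_file(path: Path) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):  # stream large files
            h.update(chunk)
    return h.hexdigest()

def discover(directory: Path):
    # flag any file whose hash matches a known sensitive fingerprint; a
    # single changed byte changes the hash, which is why this technique
    # only works for data that does not change frequently
    return [p for p in directory.rglob("*")
            if p.is_file() and sha256_file(p) in KNOWN_SENSITIVE]

# demo: one sensitive and one harmless file in a temp directory
tmp = Path(tempfile.mkdtemp())
(tmp / "minutes.txt").write_bytes(b"2023 board meeting minutes - CONFIDENTIAL")
(tmp / "notes.txt").write_bytes(b"lunch order: two sandwiches")
print(discover(tmp))  # only minutes.txt is flagged
```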

and then there's semi-structured data which is a combination of structured and unstructured data quite often it's unstructured content that contains metadata that facilitates organizing the

data it's fluid but it's organizable by Properties or metadata good examples here would include Json XML HTML email

messages nosql databases this is really a mix of data types that will require a combination of Discovery methods and tooling capable of Discovery

in these co-mingled data types so really it's going to take elements of structured and unstructured Discovery to fully discover data in a semi-structured

scenario effectively but let's talk about that a bit further in the context of data location and discoverability the location of data will impact both its

discoverability and the choice of tools we use to perform discovery so tools must be able to access data to perform the scanning and Analysis needed

in the discovery process of course this may require different tools for cloud and on-premises discovery not all Cloud Solutions may offer a local agent for on-premises something we

need to consider in tool selection network-based DLP tools may not analyze all traffic between on-premises endpoints and the cloud so another consideration

an optimal DLP approach will discover data in on-premises and in Cloud repositories as well as data in transit

well why would that be important well where the data is going could matter right if we have a sensitive email going out and that's going from one employee to another who are both authorized to

see the data that may be no problem but if that email is going to an external party we may want to encrypt that automatically on the way out the door so the recipient can't read it the same

could be true of over sharing of sensitive data through a collaboration platform like SharePoint if we can't evaluate our data in transit there's potentially a big gap there that leads

to data leakage and our tools must be able to scan unstructured data within structured data sources like relational databases a good example of this would be a

problem description inside a help desk ticket stored in a SQL database we have a problem description that's just unstructured free text but it's stored

within a structured data repository so when we have both structured and unstructured data in the same repository it's often going to increase our tool

cost and complexity it may also present classification challenges we might not be able to classify that specific row of data if our tool won't support it and

that can lead to some less attractive consequences if we have to put a single classification label on a large data source the most sensitive classification

found would apply so in the case of that help desk ticket stored inside a SQL database if there's sensitive data within that ticket we might have to put a single very sensitive classification

on a large repository that has a bunch of less sensitive data there so tooling will matter in that case so data Discovery ensures that data is

appropriately classified for protection Discovery is really the first step so we can then classify so metadata-based Discovery relies on a list of traits and characteristics about specific data

elements or sets it's often automatically created at the same time as the data and then there's label-based discovery which is based on examining labels

created by the data owners during the create phase of our secure data life cycle or we can do that in bulk with a scanning tool often which may use

built-in sensitive data types that I described earlier you might see this used with structured data with a relational database it's certainly going to be much more common

with file data that brings us to 2.5 which is Implement data classification once we've discovered our data we're now going to classify

we'll talk about data classification policies data mapping and data labeling so let's just do a simple comparison of data classification in the government

context versus a public entity or a commercial company so at the lowest level of classification we have unclassified data what we'd call public data in the non-government

space so no damage if exposed one level higher we have confidential or sensitive data where the organization is going to sustain some damage now the type of damage that the organization

would sustain in the government context at a certain level of classification we're talking about National secrets that could endanger human life in the public context we're talking about

corporate reputation competitive disadvantage or monetary loss the next level of classification we have secret or what we call Private data on

the public side where the organization could sustain serious damage and then at the highest level of classification we have top secret and confidential or

proprietary and we see some common terms here right we see serious damage at the secret slash private level and then for top secret or confidential we see exceptionally grave

and non-government scenarios are often called commercial or public so let's talk through some common sensitive data types you'll want to know for the exam the first is personally

identifiable information or pii that's any information that can identify an individual their name social security number birth date or birthplace biometric records

we have protected health information or Phi which is health related information that can be related to a specific person Phi is actually regulated by HIPAA and

HITRUST and then cardholder data that's information related to credit and debit cards and transactions that's defined and

regulated by PCI DSS which isn't law but a standard put together by the big four credit card companies now let's talk data policies we have a

data classification policy which would be labeling or tagging of data based on type like pii or Phi that we just described you could have the data retention policies that ensure legal and

compliance issues are addressed retaining data for a period of time as specified in law then we have Regulatory Compliance which for legal and

compliance reasons may require us to keep certain data for different periods of time and will drive classification it's going to be a driver of classification and retention just a

couple of simple examples some financial data needs to be retained for seven years by law and some medical data may need to be retained for 20 to 30 years and will need to classify that data

appropriately going a bit further data classification is a process for categorization of data and defining the appropriate controls based on that category so categories

could include the data type based on its format or its structure jurisdiction or any other legal constraints ownership the context of the data can be important

contractual or business constraints think PCI DSS which is contractually enforced trust levels and the source of origin value sensitivity criticality we want to

protect intellectual property Trade Secrets and retention and preservation again retention and preservation may be driven

by Regulatory Compliance requirements or even legal proceedings but data should be classified as soon as possible after creation we'll go back and look at that secure data life cycle

here in a couple of minutes and I'll remind you of where that happens but we classify as soon as we can after creation so we can get protection in place so mapping and labeling so mapping

informs the organization of the locations where data is present within applications and storage it brings understanding that enables implementation of security controls and

classification policies mapping usually precedes classification and labeling though labeling goes hand in hand with the mapping process and labeling requirements that apply consistent

markings to sensitive data should accompany classification it's often applied through classification policies in DLP tools providing a target for data

protection it's often applied in bulk using classification tools but we discover our data we map our data we label our data we classify our data and

then we protect our data and I've talked a fair bit about the DLP process here in DLP tools and it does occur to me that maybe you've never seen one maybe that's not been your role so I think this is a

great opportunity for some quick show and tell I'm going to show you data Discovery mapping labeling and classification in Microsoft's

cloud-based DLP solution for a quick example of data loss prevention and information protection functionality here is the Microsoft purview portal at

compliance.microsoft.com and I'll start under data classification I mentioned that some of your DLP Solutions have some pre-defined sensitive information

types so Microsoft for example has over 300 here and if I type words like driver for example I see predefined information types that will

help me to identify driver's license numbers there should be a credit card number identification a sensitive info type if I type social I'll see Social Security

numbers for a variety of countries so many predefined information types now you can create your own custom types using

regular expressions and other matching capabilities but a lot out of the box and there's the concept of trainable classifier so I can train the system to recognize documents

for what they are bank statements my company's invoices and the like and you'll see that there are several here that are published off the shelf and we can train these to be smarter based on

our company's documents they're just worth noting we have that off the shelf and I'll go over to the content Explorer tab here under data classification and the system will show

me sensitive information types for which the system has already identified documents and data for example for all full names it shows

me 350 matches and it shows me the repositories where it's found that information so whether it's teams SharePoint OneDrive exchange so whether it's email collaboration or chat US

driver's license number similar capabilities and I can drill down to see what it's found just to make sure that it's matching as I expect

and then real quickly I want to show you the label label policy and Auto labeling capability so I'll go down the menu here to information protection and here I'll find

the labels and I see some default labels have been created for general public confidential highly confidential here's a confidential

Finance and when I go to label policies this is how I publish a label to my users so they can apply these labels from their office

apps or in SharePoint sites or in email and once published they can apply those labels to protect their document now go over to the auto labeling policy tab

here and I'll click on create auto labeling policy just to show that we can pick the info easily that we want this label to be applied to so for example my

companies in the United States I'll pick United States of America and now I can see some of the categories and the templates that are created for me in

advance so glba HIPAA us Patriot Act when I go to financial I say PCI DSS medical and health there's HIPAA so we

can create these Auto labeling policies using off-the-shelf functionality that's been created for us and then I can choose the locations where I'd like to apply that

label or I can even configure some custom functionality there and every platform is going to be different but in the Enterprise space What you're going to find is that many of your DLP and

information protection Solutions have this sort of enhanced functionality right off the shelf okay I hope that little show and tell

was helpful now I'd like to revisit with you the secure data life cycle looking at the life cycle through the lens of data classification so as we've said in

the past data can be created by users data can be created by systems but after that data is created we need to classify that data as soon as possible it's only

through classification that we can then determine appropriate protections for that data ideally the data will be encrypted at rest regardless of its sensitivity
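
As a small illustration of classification driving protection, here is a hedged sketch mapping labels from the commercial scheme discussed earlier to a minimum set of controls — the label and control names are hypothetical, not from any specific framework:

```python
# Hypothetical mapping from classification label to minimum required controls
CONTROLS = {
    "public":       {"encrypt_at_rest"},
    "private":      {"encrypt_at_rest", "access_control"},
    "confidential": {"encrypt_at_rest", "access_control",
                     "block_external_sharing", "audit_logging"},
}

def required_controls(label: str) -> set:
    # fail safe: default to the strictest tier if a label is unknown
    return CONTROLS.get(label, CONTROLS["confidential"])

print(required_controls("public"))                                    # encryption at rest for everything
print("block_external_sharing" in required_controls("confidential"))  # True
```

Note the design choice of defaulting unknown labels to the strictest tier: when classification is uncertain, over-protecting is the safer failure mode.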

and data should be protected by adequate security controls based on its classification so classification will drive the need for protection and

as data is used in modified classifications may change so our process of scanning and labeling and classifying data is an iterative process

we're scanning these repositories repeatedly you know certainly a user may reclassify a document when they take a harmless low sensitivity Word document

and incorporate protected health information into that document but we also need an automated bulk classification process that's going to

catch those changes and reclassify that data on the Fly and then as data is shared of course when we're transmitting data over a

network sharing a document through a collaboration platform like SharePoint we need to make sure that we protect that data from data leakage if we have

sensitive company information in the document it may be fine for that data to be shared in that document within our employees but we want to block that

sharing to external entities whether that's going through just a simple share link or through the email Channel we need to have data loss

prevention capabilities to help us to secure our data and prevent that leakage in use so data protection policies May

block external sharing many times and archival again is sometimes driven by laws or regulations that require us to retain data for a specific period of

time and classification can drive data retention policies for example the organization's annual Financial reports would be classified as financial data

and that sensitive data classification will drive retention of that data based on the type of information in that document and when data is no longer

needed it should be destroyed in a way that it is not readable nor is it recoverable but classification really drives the rest of the life cycle if you look at it

from this perspective including destruction and that brings us to section 2.6 design and Implement information Rights Management only two topics in this

section irm objectives and appropriate tools so let's start with a definition of information Rights Management so an irm

program is designed to enforce data rights provisioning access and implementing Access Control models it's often implemented to control access

to data that is designed to be shared but not freely distributed it can be used to block specific actions like print copy paste download and

sharing and it can provide file expiration so that documents can no longer be viewed after a specified time many popular SAS file sharing platforms

Implement these Concepts as sharing options which allow the document owner to specify which users can view edit download and share and for how long
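
Those sharing options can be sketched as a toy IRM policy check — every field name here is hypothetical, but the sketch shows per-user action grants, file expiration, dynamic policy control after sharing, and the continuous audit trail:

```python
from datetime import datetime, timedelta, timezone

# Toy IRM policy attached to a shared document (hypothetical fields)
policy = {
    "users": {"alice": {"view", "edit"}, "bob": {"view"}},
    "expires": datetime.now(timezone.utc) + timedelta(days=7),
}

audit_log = []  # every access attempt is recorded for accountability

def request_action(user: str, action: str) -> bool:
    now = datetime.now(timezone.utc)
    allowed = (now < policy["expires"]
               and action in policy["users"].get(user, set()))
    audit_log.append((now, user, action, allowed))  # continuous audit trail
    return allowed

print(request_action("bob", "view"))    # True
print(request_action("bob", "print"))   # False: action was never granted
# dynamic policy control: the owner revokes bob even after sharing
policy["users"].pop("bob")
print(request_action("bob", "view"))    # False: access revoked
print(len(audit_log))                   # 3 attempts logged
```

In a real IRM solution this decision runs against a central cloud service so that protection follows the document wherever it travels, rather than against a dict the local user could tamper with.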

this always includes a cloud service it may also include a local agent the objectives of information Rights Management you know one is persistence our ability

to control access to enforce our restrictions must follow the data meaning protection must follow that document or data wherever it travels

Dynamic policy control so an irm solution must provide a way to update the restrictions even after the document has been shared

irm tools can enforce time limited access to data as a form of Access Control the ability to expire or revoke access require the user to check in from

time to time to see if they still have access to contact that cloud service as a prerequisite for continuing access continuous audit Trail an irm solution

must ensure that protected documents generate an audit Trail when users interact with protected documents it's required for accountability and

non-repudiation interoperability irm Solutions must offer support for users across different system types we need support for Windows

and Mac OS desktops laptops mobile phones tablets and different apps will be important appropriate tools so irm tools comprise

a variety of components necessary to provide policy enforcement and other supporting attributes of that enforcement capability so a centralized

service is one for identity proofing certificate issuance a store of revoked certificates and for authorized identity and access information

this enables enforcement from anywhere secret storage so irm Solutions require local storage for encryption Keys tokens or digital certificates used to validate

users and access authorizations local storage requires protection primarily for data Integrity to prevent tampering with the material local to the device

used to enforce information Rights Management but irm must prevent local modification of access controls and credentials otherwise a user might modify the permissions granted to extend their

access beyond what the data owner originally specified whether that's extending the period of time or the level of access

bottom line local changes must never supersede controls implemented by the cloud service now since certificate revocation was

called out in the syllabus in section 2.6 I want to revisit the key management strategy for the encryption key life cycle just to touch on certificate

revocation which we talked about a bit earlier so the process for revoking access at separation policy breach device or key compromise happens in the

revocation stage and how exactly that revocation goes down will depend on the irm solution you're working with but if we think about it in the pki context with certificate revocation you would revoke

the certificate on the issuing certificate Authority presumably you would remove access control settings for a particular user revoke that access and

the irm solution would handle some of this in the background for you but at the end of the day that revocation of the certificate would be recorded on the certificate revocation list and activity

logged as part of the audit Trail so let's talk about intellectual property protections for just a moment and I want to talk about intellectual property not because you will be tested

on this directly but because I see these mentioned in part at least in the official study guide into a lesser degree in the common body of knowledge so I think the basics of intellectual

property Protections in the U.S may be something you're just expected to already have knowledge of so we'll touch on these just briefly so there's trademarks which cover words

slogans and logos used to identify a company and its products or services so that would be a trademark to cover the Apple logo for example or Nike slogan

just do it a trademark lasts 10 years and it can be renewed patents protect the intellectual property rights of inventors

a patent provides the inventor exclusive use of their invention for a period of time generally 20 years and filing requires public disclosure

which is undesirable in some cases that's where Trade Secrets come into play intellectual property of an inventor that is absolutely critical of their business and must not be disclosed

is a great candidate for a trade secret it's valid as long as secrecy is maintained and not discovered by others once it's no longer a secret protection

is lost then there's copyright which is automatically granted to the creator of a work upon creation but can be registered to prevent others from

reusing it and copyright protection lasts 70 years beyond the Creator's death then the work moves into the public domain where it is freely reusable

and that brings us to 2.7 plan and Implement data retention deletion And archiving policy so here we'll touch on data retention policies

data deletion procedures and mechanisms data archiving procedures and mechanisms and the concept of legal hold so I want to revisit the secure data

life cycle with you in the context of retention and Data Destruction so retention is driven by security policies and regulatory requirements retention happens here between archival and

destruction audits or a lawsuit may require production of some data and that may trigger retention
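
A retention schedule with a legal hold override can be sketched like this — the retention periods and data types are hypothetical examples in the spirit of the regulatory requirements discussed here:

```python
from datetime import date

# Hypothetical retention schedule in years, driven by classification/regulation
RETENTION_YEARS = {"tax_return": 7, "medical": 30, "general": 3}
legal_holds = set()  # data ids frozen by litigation or audit

def may_destroy(data_id: str, data_type: str, created: date, today: date) -> bool:
    if data_id in legal_holds:
        return False  # a legal hold overrides the normal schedule
    age_years = (today - created).days / 365.25
    return age_years >= RETENTION_YEARS.get(data_type, RETENTION_YEARS["general"])

today = date(2023, 6, 1)
print(may_destroy("fy2015", "tax_return", date(2015, 4, 15), today))  # True: past 7 years
print(may_destroy("fy2020", "tax_return", date(2020, 4, 15), today))  # False: still retained
legal_holds.add("fy2015")  # a lawsuit triggers a hold
print(may_destroy("fy2015", "tax_return", date(2015, 4, 15), today))  # False: held
```

This captures the two rules in play: data past its retention requirement should be destroyed because keeping it increases risk, but a legal hold freezes destruction regardless of the schedule.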

now sarbanes-oxley for example requires tax returns are kept for seven years and payroll and bank statements are kept forever sarbanes-oxley is

a regulatory requirement of every publicly traded company in the U.S now

when data is no longer needed it should be destroyed in a way that it is not readable and keeping data longer than needed increases risk and organizations know

this they know they cannot produce data that they do not have for a legal case so when data hits the end of its retention requirement

it should be destroyed and that's where crypto shredding secure destruction comes into play we touched on crypto shredding briefly in domain one we'll touch on it again

here so this is cryptographic Erasure data is encrypted with a strong encryption engine the keys used to encrypt the data are then encrypted using a different

encryption engine then keys from the second round of encryption are destroyed on the pro side data cannot be recovered from any Remnant the downside here is

high CPU and performance overhead there's a lot of processing involved now if the exam poses questions on secure Data Destruction this is almost certainly the answer
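the three steps of crypto shredding can be sketched in code so you can see the key hierarchy at work here's a minimal toy illustration in Python it substitutes a throwaway XOR keystream for a real encryption engine like AES so the key-wrapping and key-destruction flow is the point not the cipher do not use this for real data protection

```python
import hashlib
import secrets

def keystream_xor(data: bytes, key: bytes) -> bytes:
    # Toy stream "cipher": XOR with a SHA-256-derived keystream.
    # Illustration only -- real crypto shredding uses a strong
    # encryption engine such as AES.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

# Step 1: encrypt the data with a data-encryption key (DEK).
dek = secrets.token_bytes(32)
ciphertext = keystream_xor(b"customer record", dek)

# Step 2: wrap (encrypt) the DEK using a different engine and a
# separate key-encryption key (KEK).
kek = secrets.token_bytes(32)
wrapped_dek = keystream_xor(dek, kek)

# Step 3: crypto-shred -- destroy the KEK. The wrapped DEK can no
# longer be unwrapped, so the ciphertext is unrecoverable from any
# remnant, at the cost of the CPU overhead of two encryption rounds.
kek = None
```

notice the data itself is never touched at shred time only the second-round key is destroyed which is why this works even on storage you cannot physically wipe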

but know the steps of crypto shredding for the exam so data archiving refers to placing data in long-term storage for a variety of

purposes the optimal approach in the cloud differs in several respects from the on-premises equivalent so key elements of data archiving in the

cloud data encryption data monitoring e-discovery and retrieval we need all of these capabilities backup and Dr options and the data format and media type do

matter we need to think about our ability to search and retrieve data from an archive so let's talk about each of these six at greater depth so we'll start with

encryption our encryption policy should consider which media is used and our data search and restoration needs as well as our regulatory obligations we

need the right balance of security and retrievability searchability what threats should be mitigated by the encryption how will the encryption Keys

be managed long-term archiving with encryption can present Key Management challenges access controls and encryption are important to protect data Integrity by preventing unauthorized

access then we have data monitoring so data stored in the cloud tends to be replicated as part of storage resiliency or business continuity and Disaster

Recovery to maintain data governance it is required that all data access and movements be tracked and logged monitoring to ensure all security controls are being applied properly

throughout the data life cycle also important accountability traceability and auditability should be maintained at all times e-discovery and retrieval archive data

may be subject to retrieval according to certain parameters like dates subjects and authors this could be due to audit requirements

or even a legal proceeding but retrieval can definitely become important the archiving platform should provide the ability to perform e-discovery on the data to determine which data should be

retrieved so we only retrieve the data that is necessary the minimum necessary data that is subject to more frequent search should be kept in a service that

enables e-discovery with a manageable level of effort we need to ensure that staff can manage the e-discovery support burden if we take logs and store them in

a raw format and blob storage in the cloud that's not going to give us great searchability that's going to be high effort for our staff so we need to

balance the need for security and cost controls with our operational overhead then we have backup and Dr operations

so all requirements for data backup and restore should be specified and clearly documented and business continuity and Disaster Recovery plans are updated and aligned

with whatever procedures are implemented we need to know our options here and when it comes to backup and Dr of our archive data both resiliency to

disaster ensuring archive data availability and knowledge and control of data replication are important so data format and media type this is an

important consideration because it may be kept for an extended period And The Format needs to be secure accessible and affordable

media type should support the other data archiving requirements but physical media concerns fall to the CSP at the end of the day we want to make sure we are storing our data in a secure easily

accessible but also affordable fashion you're often paying by the gig in the cloud so we want to be careful now AWS S3 and Azure storage both offer cool

tier infrequent access storage for low-cost archiving and they generally speaking have an immutability flag you can flip to ensure integrity

you might get your data storage down to less than a penny a gig in those cool tier options but your searchability your accessibility is certainly going to

be less versus keeping that accessible and indexed through an e-discovery feature often cloud storage is billed by the gig
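as a rough illustration of that cost-versus-access trade-off here's a tiny Python estimator the per-gig prices are purely hypothetical real CSP pricing varies by region tier and redundancy option

```python
# Hypothetical per-GB monthly prices -- illustrative only, not real
# CSP rates. Cool/archive tiers trade cheap storage for retrieval fees.
PRICE_PER_GB = {"hot": 0.020, "cool": 0.010, "archive": 0.002}
RETRIEVAL_PER_GB = {"hot": 0.000, "cool": 0.010, "archive": 0.050}

def monthly_cost(tier: str, stored_gb: float, retrieved_gb: float) -> float:
    """Storage plus retrieval cost for one month, in dollars."""
    return stored_gb * PRICE_PER_GB[tier] + retrieved_gb * RETRIEVAL_PER_GB[tier]

# 10 TB archived, 50 GB pulled back for an audit: the cool tier wins
# here because retrieval volume is low relative to what's stored.
print(monthly_cost("hot", 10_000, 50))
print(monthly_cost("cool", 10_000, 50))
```

run the numbers with a heavy e-discovery retrieval pattern and the cheaper tier stops looking cheap which is exactly the balancing act described above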

so beware cost however just balance with your access needs that's the bottom line that's your take away for the exam and legal hold is called out explicitly in the exam syllabus so let's touch on some

details here legal hold protects any documents that can be used as evidence in legal proceedings from being altered or destroyed data protection Suites in the cloud

often have a feature to ensure immutability which ensures that the data marked immutable cannot be modified in fact when we look at cloud storage like

Azure storage or AWS S3 they offer an immutable storage feature so I can Mark a container as immutable when

we think about data protection software legal hold generally implements permanent retention until a human authorizes release when the possibility

of production and legal proceedings has passed a legal hold is sometimes called a litigation hold and that brings us to another good

opportunity for a quick Show and Tell and we'll revisit that same Cloud Suite we explored for data classification for labeling and mapping and we'll have a

look this time at data retention again just for some real world context as you consume these concepts for exam day so we're back again to the Microsoft

purview compliance portal at compliance.microsoft.com and to see the

retention label functionality I'll scroll down here to data lifecycle management and under Microsoft 365 I see retention policies

and here I will find some retention policies that have already been created so for example let's just click on personal financial pii and I'll click on

edit so we can look at how this policy is configured and I see here it has a name a description and they have options here

for adaptive or static Scopes so I can configure a policy to be adaptive where it automatically adapts based on labels and other functionality to include new

locations in the retention policy so we'll look at the current settings which are static and under here I can see that it's applied to multiple locations to different types of data so we see email

we see collaboration with SharePoint OneDrive which would be our file data more mailbox data Skype for business so applying across a number of locations

but if I also look over here to the included column I can see to whom and what this applies so for example I see it applies to all

mailboxes all SharePoint sites all user accounts all Microsoft 365 groups for the group mailboxes and sites and let's have a look at our retention

setting so I see here it's applying for seven years and it's applying that retention when the items are created you see I have the option here to change that to apply that seven year retention

to when the items were last modified and then you'll notice I have some options here in terms of what to do at the end of the retention period I can delete the item automatically I can do nothing I

can retain the item forever or only delete when they reach a certain age so I have a number of functional options here in terms of what we call record

disposition what we do at the end of the retention life cycle and that's really all I have to show you on this one quite simple really
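the retention logic we just walked through in the portal can be sketched in a few lines of Python retain for seven years from creation then apply a disposition action the names and the simplified year arithmetic here are illustrative not a real Purview API

```python
from datetime import date, timedelta

# Simplified: seven years approximated as 7 * 365 days.
RETENTION = timedelta(days=7 * 365)

def disposition(created: date, today: date, action: str = "delete") -> str:
    """Decide what happens to an item under a 7-year retention policy."""
    if today - created < RETENTION:
        return "retain"  # still inside the retention period
    # end of retention reached -- apply the configured disposition
    return {"delete": "delete", "nothing": "do nothing",
            "forever": "retain forever"}[action]

print(disposition(date(2020, 1, 1), date(2023, 1, 1)))  # → retain
print(disposition(date(2015, 1, 1), date(2023, 1, 1)))  # → delete
```

the point is simply that retention is a date comparison plus a configured end-of-life action exactly the options you saw in the policy edit screen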

and again this is a CSP agnostic exam this is purely just to give you a real world example to give you a bit of additional context for the concepts we were talking about here

and with our Show and Tell out of the way we're now ready to move on to section 2.8 design and Implement auditability traceability and

accountability of data events our last section of domain two we'll touch on definition of event sources and requirements of event attributes logging storage and Analysis of data

events as well as chain of custody and non-repudiation so let's start with accountability so accountability is maintained for

individual subjects using auditing logs record user activity and users can be held accountable for their logged actions it directly promotes good user

behavior and compliance with the organization's security policies when folks know someone is watching they tend to behave simple as that

security Audits and reviews so these help ensure that management programs are effective and being followed they're commonly associated with account management practices to prevent

violations with least privilege or need to know principles they can also be performed to oversee many programs and processes so security Audits and reviews are useful in maintaining programs like

patch management vulnerability management change management configuration management periodic Audits and reviews to ensure our processes are being followed are helpful in a wide

variety of areas so let's talk about event sources and attributes in the context of auditability traceability and accountability of our data events so

OWASP provides a comprehensive set of definitions and guidelines for identifying labeling and collecting data events it ensures events are useful and

pertinent to applications and security whether in a cloud or a traditional Data Center so the definition of event sources which events are important and available for

capture will vary based on cloud service model that we're employing whether that's IaaS PaaS or SaaS so let's take a look at that

and we'll start with IaaS with our IaaS event sources within an IaaS environment the cloud customer has the most access and visibility into system and

infrastructure logs of any cloud service model that's because the cloud customer has nearly full control over their compute environment including system and network

capabilities virtually all logs and data events should be exposed and available for capture this is because the customer has more responsibility than in any other Cloud

Model if you go back and look at the Shared responsibility model in domain one that becomes crystal clear moving on to PaaS event sources a PaaS environment does not offer or expose the

same level of customer access to infrastructure and system logs as IaaS however the same level of logs and events is available at the application Level

again due to the shared responsibility model responsibility for system and infrastructure in PaaS belongs to the cloud service provider so we have less access to those logs

and that brings us to software as a service now because in a SAS environment the CSP is responsible for the entire infrastructure and application the amount of log data available to the

cloud customer is understandably less customer responsibility is limited to access control shared responsibility for data recovery and feature configuration

so in this case service responsibility equates to log visibility let's take a look at the who what where

and when of logging from OWASP where we find some excellent guidance ultimately logs should be able to answer the question who did what and when

sufficient user ID attribution should be accessible or it may be impossible to determine who performed a specific action at a specific time this is called

identity attribution in my mind this goes a step further this is what's necessary for non-repudiation when I think of it I think of who did what when and from

where I like to know a bit about the device and the location as well but at the end of the day who did what and when is the minimum we should be focused on

so in terms of who OWASP advises we need Source address and user identity if known and the what would include type of event severity of event

security relevant event Flags if the log contains non-security events as well as a description and the where application identifier

application address the service geolocation the window form or page including the URL and HTTP method code location the script or the module name

and for when we see log date and time event date and time and the interaction identifier

so while the question we need to answer is who did what and when you can see from the who what where and when why I would also ask who did what when and from where
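a log entry built along those lines is easy to picture in code here's a small Python sketch that assembles a structured event loosely following the OWASP who what where when attributes the field names and sample values are illustrative not a prescribed schema

```python
import json
from datetime import datetime, timezone

def security_log_event(user_id, source_addr, event_type, severity,
                       description, geolocation=None):
    """Build a JSON log entry answering who did what, when, and from
    where -- loosely modeled on the OWASP logging cheat sheet."""
    return json.dumps({
        # who
        "user_id": user_id,
        "source_address": source_addr,
        # what
        "event_type": event_type,
        "severity": severity,
        "description": description,
        # where
        "geolocation": geolocation,
        # when
        "event_time": datetime.now(timezone.utc).isoformat(),
    })

entry = security_log_event("alice@example.com", "203.0.113.7",
                           "authn_login_fail", "WARN",
                           "failed login, bad password")
print(entry)
```

with a unique user identity and a timestamp on every record you have the minimum raw material for both accountability and non-repudiation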

and you can find this OWASP guidance in the OWASP logging cheat sheets that they maintain for developers on building application logging mechanisms especially related to security logging

you'll find these cheat sheets on GitHub at the URL you see here I've also included that URL in the video description you simply go to that URL

browse to the cheat sheets folder and click on the logging cheat sheet.md

that's a markdown file so logs are worthless if you do nothing with the log data they're made valuable only through the process of review that is they are valuable only if the

organization makes use of them to identify activity that is unauthorized or compromising now a security information and event management SIEM tool can help solve some of

these problems by offering some key features log centralization and aggregation data integrity and normalization giving us a standardized

event format even when it's pulling data from many disparate sources automated or continuous monitoring

alerting and investigative monitoring now we will cover the SIEM in depth in domain five but while we're here let's touch on these six key Concepts that are

called out so the SIEM features necessary to optimize event detection and visibility and to scale our security operations so log centralization and aggregation so

rather than leaving log data scattered around the environment on various hosts the SIEM platform can gather logs from a variety of sources including operating

systems applications Network appliances user devices providing a single location to support investigations and when we have all those disparate sources of data

it gives us greater context as to how activities including Bad actors are moving about our environment it gives us a better idea of the scope of an incident

there's data Integrity so the SIEM should be on a separate host with its own Access Control preventing any single user from tampering very easy to do in the cloud

normalization SIEMs can normalize incoming data to ensure that data from a variety of sources is presented in a consistent format automated or continuous monitoring so

sometimes referred to as correlation SIEMs use algorithms to evaluate data and identify potential attacks or compromises and alerting SIEMs can automatically

generate alerts like emails or tickets when action is required based on analysis of incoming log data investigative monitoring so when manual

investigation is required the SIEM should provide support capabilities such as querying log files and generating reports so broad SIEM visibility across the

environment means better context in log searches and security investigations when we can see into Data apps identities endpoints and infrastructure all in one place we're going to have a

better idea of the big picture as it relates to a potential security incident or malicious activity and you should be familiar with chain of custody which tracks the movement of

evidence through its collection safeguarding and Analysis life cycle this is essential in a legal proceeding so what are the functions and importance

of chain of custody well it provides evidence Integrity if I were to coin a phrase through convincing proof evidence was not tampered with in a way that damages its reliability

so it documents key elements of evidence movement and handling including each person who handled the evidence the date and time of movement or transfer

and the purpose for the evidence movement or transfer so what if evidence is left unattended or handled by unauthorized parties well

then criminal defendants can claim the data was altered in a way that incriminates them and thus the evidence is no longer reliable
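one common way software makes a custody log tamper-evident is hash chaining each entry includes a hash of the previous entry so altering any record breaks every later hash here's a hedged Python sketch of the idea the record fields mirror the elements above handler action timestamp but the structure is illustrative not any particular forensic tool

```python
import hashlib
import json

def add_entry(chain, handler, action, timestamp):
    """Append a custody record that hashes the previous record."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"handler": handler, "action": action,
              "timestamp": timestamp, "prev_hash": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)

def verify(chain):
    """Recompute every hash; any altered record breaks the chain."""
    prev = "0" * 64
    for record in chain:
        body = {k: v for k, v in record.items() if k != "hash"}
        if record["prev_hash"] != prev or record["hash"] != hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        prev = record["hash"]
    return True

chain = []
add_entry(chain, "officer-1", "collected drive", "2023-05-01T09:00Z")
add_entry(chain, "analyst-2", "imaged drive", "2023-05-01T11:30Z")
print(verify(chain))          # True -- chain intact
chain[0]["handler"] = "eve"   # tamper with the first record
print(verify(chain))          # False -- tampering detected
```

the same hash-chaining trick underpins tamper-evident audit logs generally which is why it shows up again when we talk about log integrity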

chain of custody is a foundational principle of evidence handling in legal proceedings and you should also be familiar with non-repudiation

non-repudiation is the guarantee that no one can deny a transaction there are a few methods to provide non-repudiation so systems enforce non-repudiation

through the inclusion of sufficient evidence in log files including unique user identification and time stamps digital signatures prove that a digital

message or document was not modified intentionally or unintentionally from the time it was signed based on asymmetric cryptography a

public-private key pair it's the digital equivalent of a handwritten signature or stamp seal we talked about digital signatures a bit earlier I want to just reinforce this in your memory it's

definitely part of what you'll want to be familiar with on exam day multiple accounts make non-repudiation more difficult if we have a user logging in

with different identities it's more difficult to track their movements and activities throughout our environment shared accounts make non-repudiation

virtually impossible because we can then no longer tie individual actions to a specific individual beyond their ability

to deny they performed those actions and that brings us to the end of domain two so let's get into domain three Cloud platform and infrastructure security as

always I will cover every topic mentioned in the official exam syllabus I'll also provide examples of Concepts wherever I can to give you some additional context and as in domain 2

I'll also do a bit of Show and Tell in a real Cloud environment again the ccsp is CSP agnostic it doesn't focus on any one Cloud platform but I do find a bit of

Show and Tell in a real environment gives you some context for those areas where maybe you don't have any experience in your work life yet so let's have a look at a few exam

Essentials applicable to domain three those areas the official study guide promises will Factor significantly on exam day we have risks associated with each type

of cloud computing essentially more services generally equals more risk and more control over your environment means more risks you are responsible for mitigating it goes back to that shared

responsibility model we first talked about in domain one and we'll touch on here again in this session in multiple respects explain Key business continuity

terms like RTO RPO and RSL if you are not familiar with these acronyms you will be by the time we're done with this session these are key Concepts that help set the bar for your business continuity

plan and Disaster Recovery plan requirements responsibility sharing between customer and provider so essentially who is responsible customer or CSP in each area

of cloud infrastructure we'll talk about design and description of a secure data center we'll look at the build versus buy decision physical

and environment design considerations and the pros and cons in each area business continuity and Disaster Recovery in the cloud that's similar to on-premises but there's certainly more

complexity in the agreements between the cloud customer and the cloud service provider I will add that these exam Essentials are my rough mapping from the official

study guide because the fact of the matter is the exam Essentials and the book chapters themselves in the official study guide do not map one to one to exam domains you'll notice there are more than six chapters in the book

because some domains are covered in part across each of multiple chapters so let's jump into 3.1 comprehend Cloud infrastructure and platform components

we'll touch on several areas of infrastructure and platform here including physical environment Network and Communications compute virtualization storage and the

management plane now in the shared responsibility model customer and CSP share security responsibilities so in each area we will review

responsibilities and security controls and who owns them so you can imagine in a cloud scenario we'll talk a bit less about the physical environment because

that physical data center is entirely the domain of the cloud service provider we will talk about how you can do your due diligence on ensuring that your

cloud service provider is designing and managing that data center effectively so let's start with a talk about the physical environment so there are infrastructure components that are common to all cloud service delivery

models most of those components are physically located with the CSP but many are also accessible via the network so the CSP is taking on customer data

center facilities infrastructure and management responsibilities they are responsible for the physical by and large in the shared responsibility model though we know some elements of operation are shared by the CSP and the

customer just a reminder for the exam you want to know who owns Which roles who is responsible for What from that shared responsibility model so if we think about it from a physical

perspective the CSP owns all aspects of physical security in their data centers they own it down to the wire the facilities the equipment the environment

and the Personnel that care for that physical infrastructure but the csps utilize common controls to address these risks so for physical

security standard measures like locks security Personnel lights fences and visitor check-in procedures just as we do in our own data center logical access

controls like identity and access management single sign-on multi-factor authentication and logging so they have an audit Trail and controls for data confidentiality and integrity just as

any Cloud customer would but with much broader controls so let's look at what I mean by broader controls in the form of an example so for example ensuring that communication lines are

not physically compromised by locating telecommunications equipment inside a controlled area of the csps building or campus so physical security that would be broader control it protects data

integrity and service and resource availability for that matter so let's move on to network and communication we'll start with IaaS where we know the customer is responsible for

configuring VMS the virtual Network and guest OS security but the CSP is responsible for the physical host physical storage and the physical Network

moving into platform as a service the CSP is responsible for the physical components the internal Network and the tools it's cheaper for the customer but the customer has less control if you

remember that diagram in the SAS model the customer remains responsible for configuring access to the cloud service for their users as well as shared responsibility for data

recovery the CSP owns physical infrastructure as well as Network and communication security so let's break it down another way

so if we just look at those three models we'll look at IaaS first where we know that the customer is responsible for configuring the VMS the virtual Network and the guest OS security as if the

systems were on premises the CSP provides the tooling to secure the VM but the customer must configure those tools and the CSP is responsible for configuring the security of the network

the storage and the software for the physical host the CSP owns all physical security here moving into PaaS where we know that the

CSP is responsible for everything from the IaaS model all the physical components they are also responsible for internal Network and tooling the customer is responsible for

configuring the application and data access security any additional customer control is generally provided through service skus or service tiers

and what I mean by that for example in a PaaS web application context you'll find some service tiers may give a customer their own physical host

or access to Greater compute capacity but they have to spend to get that greater control in the form of a different service tier within that PaaS service

so moving on to software as a service where the customer remains responsible for configuring user access to the service they are configuring access

control for their users the customer also has shared responsibility for data recovery now what do I mean by that well the CSP May provide tools for recovery

but the customer may need to perform recovery themselves in some cases perfect example in Office 365 users have access to hundreds of previous versions

of a document available for self-service recovery right there from within Microsoft Word or PowerPoint but the user must perform that recovery themselves
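the responsibility split we just walked through across the three models can be summarized as a simple lookup here's a rough Python sketch treat the mapping as illustrative the exact boundary varies by CSP and contract so it's a study aid not an authoritative matrix

```python
# Rough sketch of the shared responsibility model discussed above.
# The boundary varies by CSP and contract -- illustrative only.
CSP, CUSTOMER = "csp", "customer"

RESPONSIBILITY = {
    "physical":        {"iaas": CSP,      "paas": CSP,      "saas": CSP},
    "virtual_network": {"iaas": CUSTOMER, "paas": CSP,      "saas": CSP},
    "guest_os":        {"iaas": CUSTOMER, "paas": CSP,      "saas": CSP},
    "application":     {"iaas": CUSTOMER, "paas": CUSTOMER, "saas": CSP},
    "user_access":     {"iaas": CUSTOMER, "paas": CUSTOMER, "saas": CUSTOMER},
    "data":            {"iaas": CUSTOMER, "paas": CUSTOMER, "saas": CUSTOMER},
}

def who_owns(layer: str, model: str) -> str:
    return RESPONSIBILITY[layer][model]

print(who_owns("guest_os", "iaas"))   # customer
print(who_owns("guest_os", "saas"))   # csp
```

notice the two constants the CSP always owns the physical and the customer always owns user access and data that's the pattern to carry into the exam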

next we have compute the infrastructure components that deliver compute resources like our VMS disk processor memory and network resources for customers so how does the CSP manage

compute capacity well reservation is one way a minimum resource that's guaranteed to a customer you'll see that in the form of a VM SKU for example

limits cap the maximum utilization of compute Resources by a customer that's handled through a VM SKU we can set a minimum and a maximum limits are allowed to change dynamically based on current

conditions and consumption remembering that a CSP is going to over subscribe their infrastructure by Design and shares a weighting given to a particular VM used to calculate

percentage-based access to pooled resources where there's contention and you'll even see VM skus that allow us to select a lesser SKU at a lesser price

for non-production workloads where we know we're going to be de-prioritized in times of contention but we pay less for that resource over the course of the month as we're paying for that subscription

in case of a shortage though host scoring will determine who gets capacity generally speaking but what we see in those VM skus is that we can choose inexpensive skus that get

de-prioritized and have low resource limits or expensive VM skus that give us very high resource guarantees so in each delivery and service model

the CSP remains responsible for the maintenance and the security of the physical components of compute they are dealing with that physical host and that physical storage and that physical

Network the customer remains largely responsible for their data and their users but between the physical component there can be quite an array of software and other components

so who is responsible for each of these remaining parts varies by service and delivery model and sometimes by the CSP the details should be spelled out in the contract and you want to be familiar

before you enter into a production workload scenario the CSP also deals with the challenge of multi-tenancy and we could argue that customers deal with multi-tenancy in their own private

clouds but those multi-tenant customers are all internal customers generally speaking where the CSP is dealing with external customers with signed contracts so it's certainly a stickier situation

but let's shift gears and talk about virtualization responsibilities and risks so the security of the hypervisor is always the responsibility of the CSP the virtual Network and the virtual

machine may be the responsibility of either the CSP or the customer it depends on the cloud service model and there are risks associated with

virtualization you should be familiar with a flawed hypervisor for example can facilitate inter-vm attacks Network traffic between VMS is not necessarily visible So Bad actors posing as

customers could certainly carry out attacks of their own if we don't have the right network controls in place resource availability for VMS can be impacted now we talked about how the CSP

can prioritize resource allocation but we still have that lingering worry about noisy neighbors those neighbors that are sharing our physical infrastructure and

always consuming maximum capacity and VMS and their disk images are simply files they can be portable and movable so if the CSP doesn't have the right

controls in place we could fall prey to a different sort of malicious Insider attack if they don't have their own separation of Duties and access controls in place to limit access to those files

so let's talk through security recommendations for the hypervisor installing updates to the hypervisor as they're released by the vendor of course restricting administrative access to the

management interfaces of the hypervisor capabilities to monitor the security of activity occurring between guest operating systems the VMS essentially and then security

recommendations for the guest OS so again installing all updates to the guest OS promptly backing up Virtual Drive used by the guest OS on a regular basis

those hypervisor recommendations are all the responsibility of the CSP the security recommendations for the guest os are customer responsibility though the CSP May provide tools to facilitate

ease of patching and backups so the csp's hypervisor security includes preventing physical access to the servers

limiting both local and remote access to the hypervisor and the virtual Network between the hypervisor and the VM is also a potential attack surface responsibility

for security at this layer is often shared between the CSP and the customer these components include the virtual Network virtual switches virtual firewalls virtual IP addresses the

responsibility is going to vary by model whether it's IaaS PaaS or SaaS and when I say hypervisor in this case just to make sure we're Crystal Clear we talked in domain one about the hypervisor types we

have the type one which is the bare metal hypervisor that's VMware esxi Microsoft hyper-v KVM dedicated host no operating system in the middle whereas a

type 2 hypervisor is hosted on a guest operating system that would be VMware Workstation Oracle virtual box so type 1 is that production scenario hypervisor

type 2 is much more common in development and test scenarios so we're always talking about a type 1 hypervisor in this case and again the CSP is always

responsible for security of that physical host and the hypervisor running there now there is a virtualization focused attack called out in both the official study guide and the common body

of knowledge I wanted to mention and that's VM escape this is where an attacker gains access to a VM and then attacks either the host machine that

holds all the VMS the hypervisor or any of the other VMS here a malicious user breaks the isolation between VMS running on a hypervisor by gaining access outside

their VM now VM Escape is generally preventable one protection would be ensuring patches on the hypervisor and VMS are always up to date we do know that the CSP is responsible for patching

that hypervisor who's responsible for the VM depends on the model we know that the customer is responsible in the IaaS model for patching and backing up their VM

the CSP can also ensure guest privileges are low they have server level redundancy in place as well as host-based intrusion prevention and detection

so let's shift gears and talk about storage so cloud storage has a number of potential security issues various types of cloud storage are discussed in domain one we're going to touch on some of the

highlights here in terms of risk so data spends most of its life at rest so understanding who is responsible for securing cloud storage is very important now CSP responsibilities include

physical protection to Data Centers and the storage infrastructure they contain security patches and maintenance of the underlying data storage Technologies and other data services they provide

on the customer side properly configuring and using the storage tools we know that sometimes the CSP is responsible for giving us tools potentially but the customer must

configure and use those tools and then logical security and privacy of data they store in the csps environment so I want to unpack customer responsibilities a bit further I

mentioned csps often provide a set of controls and configuration options customers can use to secure the use of their storage platforms but they may need to make some specific

configurations beyond the default so the customer is going to be responsible for assessing the adequacy of these controls and properly configuring and using the available

controls access over public internet VPN or internal networks for example as I actually showed you in domain 2 in the world of cloud storage when we're

looking at a storage account your csps often give you the ability to block internet access altogether to force TLS security for data in transit and to

limit access from internal Networks but you have to use those controls as a customer ensuring adequate protection for data at rest and motion is based on the capabilities offered by the CSP
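the controls just described blocking internet access forcing TLS and limiting access to internal networks can be sketched with the Azure CLI as one concrete example this is an illustration only the account resource group and network names are placeholders and available flags can vary by CLI version

```shell
# Require TLS for data in transit (reject plain HTTP)
az storage account update \
  --name mystorageacct --resource-group my-rg \
  --https-only true --min-tls-version TLS1_2

# Deny network traffic by default (block internet access)
az storage account update \
  --name mystorageacct --resource-group my-rg \
  --default-action Deny

# Then allow only a specific internal virtual network
az storage account network-rule add \
  --account-name mystorageacct --resource-group my-rg \
  --vnet-name my-vnet --subnet my-subnet
```

AWS and Google Cloud expose equivalent settings through their own tooling the point is the same the CSP provides the controls but the customer must configure them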

feature configuration Key Management would even be a customer concern if the customer is managing their own keys and configuring secure access whether

that's private or public at the end of the day when you're looking at a cloud service provider's storage account they've issued to you the data is generally going to be encrypted at the account level at rest

but you have a number of additional configuration options to restrict access but the bottom line here is in the cloud the customer loses some control over storage they lose control of the

physical medium where the data is stored but they retain responsibility for data security and privacy so how can customers deal with their challenges and responsibilities without

control of the physical storage medium because after all the inability to securely wipe physical storage and the possibility of another tenant being

allocated the same previously allocated physical storage space is a definite concern our logical storage account sits on a physical storage medium somewhere

and the customer retains responsibility for secure deletion in spite of that lack of control over the physical medium and that's where compensating controls

come into play for example only storing data in an encrypted format as we saw in domain 2 in some of our show and tell the cloud storage account was encrypted

by default we had the option to add another layer of encryption called double encryption and a customer can choose to retain control of the keys needed to decrypt the data so not

allowing the cloud service provider to hold those keys together these permit crypto shredding when data is no longer needed rendering

any recoverable fragment useless so let's talk about the management plane so what is the management plane exactly well it provides the tools the web

interface and the apis necessary to configure Monitor and control your Cloud environment it provides virtual management options equivalent to the physical Administration options a legacy

data center would provide so we can power VMs on and off provision new VM resources migrate VMs just as a few examples you interact with the

management plane through tools including the CSP's cloud portal PowerShell or other command line or even client SDKs now this is separate from and it works

with the control plane and the data plane so let's talk about these two for just a moment the control plane is what you're calling when you create top level

cloud resources such as with ARM or Bicep in Azure CloudFormation in AWS or even Terraform infrastructure as code is what I'm talking about here and the data plane performs operations on

resources created through that control plane essentially management plane control equals environment control so let's talk about securing the management plane so

the key interfaces we're worried about include the cloud portal the main web interface for the CSP platform the Azure portal AWS Management console the Google Cloud console from a scheduling

perspective our ability to stop or start resources at a scheduled time we have tools available like the Instance Scheduler or Lambda in AWS Azure Automation or Azure Functions on the

Microsoft platform and then orchestration automating processes to manage resources services workloads and infrastructure as code deployments CloudFormation

in AWS Azure DevOps on the Microsoft platform Cloud Build in Google Cloud Platform and then we have our maintenance functions updating upgrading security

patching Etc we can secure all of the above in the same fashion across these platforms we secure management plane interfaces with multi-factor authentication role-based access control

and role management next up is 3.2 design a secure data center here we'll talk through logical Design Elements like tenant partitioning and access control

physical Design Elements like location selection and the build or buy decision environmental design heating ventilation and air conditioning and multi-vendor

pathway and then what the syllabus calls design resilience so building resiliency into design and since the CSP is responsible for design of the physical

data center we'll talk about how customers can do their due diligence to ensure that the csp's physical data center design decisions are adequate

so we'll start with logical design where I expect more Focus will be given on the exam and The Logical design of a data center is an abstraction in the now

Legacy co-location scenario customers were separated at the server Rack or cage level so it's a physical isolation in a logical data center designed in the

cloud customers utilize software and services provided by the CSP and The Logical design of the cloud infrastructure should create tenant

partitioning or isolation limit and secure remote access monitor the cloud infrastructure and allow for the patching and updating of

systems the ccsp exam focuses largely on tenant partitioning and access control which are called out in the syllabus so we'll take a look at both of those

so in the cloud logical isolation and CSP multi-tenancy makes cloud computing more affordable but it creates some security and privacy concerns in the process if isolation between tenants is

breached customer data is at risk multi-tenancy is a concept that was developed decades ago when business centers physically housed multiple tenants co-location data centers

supported multiple customers but their isolation was in many respects physical and the risk in these scenarios is largely physical it's a server Rack or cage

isolation in the public Cloud tenant partitioning is largely logical customers are sharing capacity across the CSP data center including the physical components

CSP and tenants share responsibility for implementing and enforcing controls that address the unique multi-tenant risks of the public cloud in this scenario access control is a

primary if not the primary concern a single point of access certainly makes Access Control simpler it facilitates monitoring through an audit Trail but

any single point can become a failure Point as well in the hybrid Cloud which is very common in large organizations a single login for on-premises and Cloud can simplify identity and access

management a very common identity model one method of Access Control is to Federate a customer's existing identity and access management system with their CSP tenant another method is to

facilitate identity and access management between cloud and on-premises using identity as a service a couple of examples of identity as a service would

be Azure Active Directory used in Office 365 or Google's Cloud Identity used with Google Workspace there are multiple local and remote access controls available including

remote desktop protocol the native access protocol for Windows operating systems as well as secure shell which is the native remote access protocol for Linux and Unix operating systems and

very common for Remote Management of network devices as well and RDP and SSH both support encryption and MFA in their modern versions now secure terminal or console based

access is a system for secure local access in the Legacy co-location scenario we would commonly see a keyboard video mouse or KVM system with

access controls to limit console access in a scenario where multiple customers have physical servers in a single shared rack you could actually rent rack space

without committing to a full rack and that would be coupled with oversight from the Colo data center staff to ensure that one customer didn't touch another customer's physical server in

that rack jump boxes a bastion host at the boundary of lower and higher security zones your CSPs offer this as a service in some cases we have Azure Bastion and

AWS Transit Gateway as a couple of very popular examples virtual clients software tools that allow remote connection to a VM for use as if it is your local machine virtual desktop

infrastructure or vdi for contractors is very common in this scenario so let's take a look at physical design starting with the build versus buy decision building your own data center

from scratch and buying an existing facility each have their advantages and disadvantages so let's compare build versus buy build requires significant investment to build a robust data center

that has the resiliency we need buying that capability is generally a lower cost of Entry especially in a shared scenario the build option offers the most control

over data center design so buy has less flexibility and service design because it's limited to what the provider offers the build option requires knowledge and

skill to match the quality of the buy option in the buy scenario we know someone with a high level of skill generally speaking is designing that data center shared Data Centers do come

though with additional security challenges the fact of the matter is csps offer many advantages of the build option at a Buy price tag customers can leverage the

csps experience to get that build level quality and near build level flexibility but at a buy cost of Entry so in physical design location selection is one of the first decisions so

availability of affordable stable resilient electricity is important natural disaster exposure needs to be considered are we exposed to flood hurricane tornadoes availability of

high-speed redundant internet connectivity as well as other utilities and say propane natural gas and diesel to run your generators

physical site security so securing against vehicular approaches bollards gates visibility location relative to existing customer

data centers so business continuity Disaster Recovery considerations and geographic location relative to customers and when you move to the public Cloud

most of these are CSP decisions a customer just chooses which CSP Regions they're going to reside in and you need to know the challenges of

physical security belong to the CSP a strong fence line of sufficient height and construction lighting and facility perimeter and entrances video monitoring and alerting electronic monitoring for

tampering visitor access procedures so guest access for example with controlled entry points interior access controls badges

key codes secure doors fire detection and prevention protection of sensitive asset Systems wiring closets Etc due to its Cloud Focus the ccsp exam

spends little time on physical security but focuses more on the aspects of logical security and Design it is a fact that there is no security

without physical security but in the cloud this is a CSP responsibility I will a bit later in this session though show you how you can verify that your

CSP has taken the appropriate steps to build excellent physical security into their data center design now you may see questions on the exam

around the data center tier standard which lays out a four-tier standard for data center availability and uptime and redundancy so availability and uptime

are often used interchangeably there is actually a difference uptime simply measures the amount of time a system is running availability encompasses availability of the infrastructure the

applications and the services that are hosted it's generally expressed as a number of nine such as five nines ninety nine point nine nine nine percent availability it should be measured by

the cloud customer to ensure the CSP is meeting their SLA obligations these tiers come from a company called the Uptime Institute this is an organization that publishes

specifications for physical and environmental redundancy expressed in these four tiers that organizations can Implement to achieve High availability so let's take a look at each of these

tiers starting with tier one which is basic site infrastructure this involves no redundancy and the most amount of downtime in the event of unplanned maintenance or an interruption it must

have a UPS an uninterruptible power supply that can handle brief power outages as well as sags and spikes in power it must have dedicated cooling equipment

that can run 24 7 and a generator to handle extended power outages the expected availability of tier one is

99.671 percent moving into tier two we have redundant site infrastructure this provides partial redundancy meaning an unplanned Interruption will not necessarily cause an outage it adds

redundant components for important Cooling and Power Systems facilities must also have the ability to store additional fuel to support the generator

and it's expected to provide 99.741 percent availability tier 3 concurrently maintainable site infrastructure adds even more redundant

components it has a major advantage in that it never needs to be shut down for maintenance enough redundant components that any component can be taken offline for maintenance and the data center

continues to run it's expected to provide 99.982 availability and then finally we have tier 4 fault tolerant site infrastructure which can withstand

either planned or unplanned activity without affecting availability this is achieved by eliminating all single points of failure and it requires fully redundant

infrastructure including dual commercial power feeds dual backup generators and is expected to provide 99.995 availability
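those tier availability percentages translate directly into allowed downtime per year and a few lines of Python can make the differences concrete this is my own illustration not something from the course materials

```python
# Convert an availability percentage into maximum downtime per year.
# The percentages are the Uptime Institute tier expectations quoted above.
HOURS_PER_YEAR = 365 * 24  # 8760, ignoring leap years for simplicity

def downtime_hours_per_year(availability_pct: float) -> float:
    """Return the yearly downtime implied by an availability percentage."""
    return (1 - availability_pct / 100) * HOURS_PER_YEAR

tiers = {
    "Tier 1": 99.671,
    "Tier 2": 99.741,
    "Tier 3": 99.982,
    "Tier 4": 99.995,
}

for name, pct in tiers.items():
    print(f"{name}: {pct}% -> {downtime_hours_per_year(pct):.1f} hours/year")
```

run it and you'll see why the jump from tier 1 to tier 4 matters roughly 28.8 hours of downtime a year at tier 1 versus well under an hour at tier 4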

heating ventilation and air conditioning or HVAC is also a concern because an HVAC failure can reduce availability of computing resources just like a power failure

customer reviews of a CSP should include review of the adequacy and redundancy of their HVAC systems now I mentioned that the physical

aspects of security and the physical aspects of data center design belong to the CSP but also that I'd show you a way that as a customer on behalf of your

customers you can validate you can do some due diligence to ensure that CSP has made good decisions in their data center design and one of those documents

is the SOC 2 Type 2 report now because of the confidential information in a SOC 2 Type 2 report some CSPs will require a non-disclosure

agreement prior to sharing or at least require that you are a customer and a routine review of the most current SOC 2 report is a critical part of a

customer's due diligence in evaluating CSPs so let's unpack that SOC 2 Type 2 report what is that exactly it is part of the Statement on Standards

for Attestation Engagements which is a set of auditing standards issued by the American Institute of Certified Public Accountants

and SSAE 18 is an audit standard that enhances the quality and the usefulness of system and organization control or SOC reports so they're designed for larger organizations like Cloud

providers because the cost of a Type 2 report can run thirty thousand dollars or more they're not inexpensive now the SOC 2 Type 1 report assesses the design of a security process at a

specific point in time so it's looking at your processes at a point in time a snapshot SOC 2 Type 2 on the other hand assesses how effective those controls

are over time by observing operations for six months and it is that Type 2 report that we're interested in so what I'd like to do now just to give you some context is show

you how to retrieve a SOC 2 Type 2 report from a CSP and we'll start with Microsoft I'm here at servicetrust.microsoft.com their Service Trust

Portal and you'll notice here under certifications regulations and standards they show us some of the certifications

with which Microsoft Azure and other cloud services Microsoft offers comply I'll click on all documents which takes me to

the list of documents that I can retrieve related to certifications and if I go down the list way down here under SOC I will find a

number of SOC Type 2 reports so you see there is a SOC 1 here's a SOC 2 Type 1 a SOC 2 Type 2 and if I just

click on one of these what you'll find I mentioned these are available but often considered sensitive if I click this to download you notice here I'm prompted to authenticate you

must be a customer and incidentally if you sign in and go a couple of steps further you'll be prompted to agree to an NDA now I've pulled up one of these reports just so you can see what you get

it's a PDF that goes line by line through the sock requirements with those details so out of respect for that NDA I'll stop there and I'm going to just

mention that AWS offers a similar path to get that SOC 2 Type 2 report you'll see here they post on their blog when those reports are available and it mentions we

can go to the AWS customer portal AWS Artifact in the AWS Management Console and in fact that will prompt us for authentication and

we'll get to those reports so fairly similar and another area we need to be concerned with is

multi-vendor pathway connectivity as another element of environmental design so connectivity to data center locations from more than one internet service

provider is what we call multi-vendor pathway connectivity using multiple vendors is a proactive way for CSPs to mitigate the risk of losing network connectivity and a best practice for CSPs

or data centers is dual entry dual provider for high availability that means two providers entering the building from separate locations and likewise customers should consider

multiple paths for communicating with their Cloud vendor so if a customer has site-to-site connectivity with a VPN building some redundancy into that connectivity in the end this protects availability

whether we're talking about the CSP and their two providers two paths or that customer to CSP connectivity and finishing out 3.2 design resilience so

resilient designs are engineered to respond positively to changes or disturbances like natural disasters or even man-made disturbances for that matter

a few examples of resilient design High availability firewalls whether that's active passive or active active multi-vendor pathway connectivity that we just spoke about a web server Farm

behind redundant load balancers a database cluster like a Windows or a Linux cluster further service level resiliency requires identifying single points of failure throughout a service

chain so if we're thinking about an n-tier application resilient design means we're looking at the application layer any middleware at

the data tier on the back end and thinking about resiliency in the systems and Facilities that surround that application's service chain

and that brings us to section 3.3 analyze risks associated with Cloud infrastructure and platforms here we'll talk about risk assessment identifying and analyzing risks

Cloud vulnerabilities threats and attacks and we'll finish up section 3.3 with a look at risk mitigation strategies risk management on the whole is so

important because it's the practice of mitigating and managing the risks to our sensitive data and to our business critical systems careful selection of

CSPs is as important as is the development of service level agreements and our contractual agreements so when we look at the cloud the SLAs are pretty well established we do have a

responsibility as a customer to make sure that we Monitor and hold our CSP to account but slas can also Factor when we think about vendors in our supply chain

for example organizations can balance cost savings with risk by building a system on top of IaaS or PaaS rather than utilizing a SaaS solution

bearing in mind that if we go the IaaS route as a customer IaaS means more control more responsibilities and ultimately more risks that are our

responsibility to mitigate and manage customers need to be proactive in addressing their responsibilities under the shared responsibility model and making sure that their CSP does the same

and that last point is important because even when a CSP cloud service of one form or another doesn't meet its mandated contractual SLA it doesn't mean

every CSP is going to proactively give you a partial Credit in response to that SLA breach I've seen csps that have a major outage and they come back and provide a partial credit to customers

due to the SLA failure I've seen others that definitely do not identifying risks is the first step in the risk management process and to

identify risks we first need to identify the organization's valuable assets once we have identified our assets then we can identify potential causes of

disruption to those assets there are actually some risk Frameworks that can provide us with processes and procedures and give us a more systematic and

consistent approach one of those is ISO IEC 31000 risk management guidelines another comes from NIST SP 800-37 which is a guide for applying the risk

management framework to federal information systems and while nist guidance is applicable to government Information Systems you're definitely going to find guidance in there that's equally applicable in commercial

businesses now I want to talk about another aspect of risk assessment called out in the official study guide and that is quantitative risk assessment which assigns a dollar value to evaluate the

effectiveness of countermeasures quantitative risk assessment is objective it ensures our controls are cost effective in other words that our countermeasures are not more expensive

than the impacts themselves and risks specific to Cloud environments should be identified when we're making a decision to use a cloud service we should assess that risk before we take that step into

that cloud service and Analysis is our next step analyzing risks continues the conversation we started by asking what could go wrong and it seeks to answer two primary

questions what will the impact be if that situation occurs if the potential impact is realized and that's what we call the single loss expectancy in

quantitative risk assessment that's expressed as a dollar value and How likely is that impact to happen that's what we call our annualized rate of

occurrence so how frequently is it going to occur that would be expressed as a decimal so for example an impact that happens twice a year has an annualized rate of occurrence of two

an impact that happens once every two years has an annualized rate of occurrence of 0.5 and an impact that

happens once every five years is 0.2 so by those numbers you can guess that a risk that happens once a year would have an annualized rate of occurrence of 1.0

and with these two figures the single loss expectancy and the annualized rate of occurrence we can calculate our annualized loss expectancy annualized

loss expectancy is the possible yearly cost of all instances of a specific realized threat against a specific asset so I'd like to look at this with you in

the form of a simple example and we'll at that point calculate our annualized loss expectancy the formula is single

loss expectancy times annualized rate of occurrence equals annualized loss expectancy so let's just step through an example we have a scenario a tornado may strike one of our

Branch offices once every five years causing a 30 percent loss to a one million dollar building so we'll Begin by calculating the cost

of a single occurrence so what will be the impact if that goes wrong well the single loss expectancy we express as a dollar value

how significant will the loss be that's our exposure Factor we express that as a percentage the formula for

that single loss expectancy is the asset value times the exposure Factor so doing the math if we have a million dollar building

we have an exposure factor of 30 percent that means we expect a three hundred thousand dollar loss in a single incident so that's our percentage loss

that exposure factor so one million times thirty percent or point three when expressed as a decimal is a three hundred thousand dollar single loss

expectancy every time a tornado hits that building now let's calculate our annualized cost our annualized loss expectancy we said our single loss expectancy is three

hundred thousand dollars our annualized rate of occurrence once every five years is expressed as a decimal as 0.2

so let's calculate our annualized loss expectancy we have the three hundred thousand dollar single loss expectancy we take that times our annualized rate of occurrence 0.2

equals an annualized loss expectancy of sixty thousand dollars that's that three hundred thousand single loss expectancy spread across the five years for every

single occurrence and that is a simple example I won't try to tell you that every scenario is that simple but you now have the PDF

that you can download with this video so you can watch this video over and over again and look at those formulas and commit these to memory I'm not certain you're going to see a lot of quantitative risk

assessment on the exam but since it's called out in the official study guide I want to make sure that you are prepared for exam day
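the SLE ARO and ALE formulas we just worked through can be captured in a few lines of Python as a study aid the function names are my own for illustration and use the tornado example from the transcript

```python
def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """SLE = asset value x exposure factor (fraction of the asset lost)."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle: float, aro: float) -> float:
    """ALE = SLE x annualized rate of occurrence (events per year)."""
    return sle * aro

# Worked example: a tornado strikes a branch office once every five
# years, causing a 30% loss to a $1,000,000 building.
sle = single_loss_expectancy(1_000_000, 0.30)   # $300,000 per event
aro = 1 / 5                                     # 0.2 events per year
ale = annualized_loss_expectancy(sle, aro)      # $60,000 per year
print(f"SLE=${sle:,.0f}  ARO={aro}  ALE=${ale:,.0f}")
```

plugging in the numbers reproduces the figures above a three hundred thousand dollar single loss expectancy and a sixty thousand dollar annualized loss expectancy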

so analyzing our CSP risk so when we're analyzing a CSP or a Cloud solution in the associated risk it's going to involve many departments and focus areas

our business units will likely get involved vendor management or supply chain potentially our privacy Specialists when we're dealing with

risks that involve data breach or data leaks and our information security department the folks responsible for securing our

Cloud infrastructure and CSP operation should also be considered but most major csps are audited for ISO IEC 27001

27017 and 27018 now what are those exactly you ask well these are standards to guide

csps in their preparation or for customers evaluating potential csps so ISO IEC 27001 is a framework for

policies and procedures that include legal physical and Technical controls involved in an organization's risk management processes

but the focus is on policies and procedures then we have ISO IEC 27017 which is a standard developed for cloud service providers and users to make a

safer cloud-based environment and reduce the risk of security problems then ISO IEC 27018 which is the first International standard about the privacy

in Cloud Computing Services now we actually covered ISO IEC 27017 in depth in domain one in

section 1.5 we will cover ISO IEC 27018 a bit later in this series in domain six in section 6.2

repetition is good for memorization I'm going to call these out in various facets throughout the series so you'll be ready on game day

and csps like Microsoft and Amazon do provide resources that demonstrate their compliance with standards like ISO IEC

27001 as well as the 27017 and 27018 standards so we're going to revisit in the Microsoft example here the Service Trust Portal at

servicetrust.microsoft.com and I will search for 27017 and what I'll find here are documents demonstrating compliance for various

Microsoft cloud services with ISO 27001 27018 and 27017 all in a single document in the example of that cloud service and

you'll find similar resources in the AWS Management console again a cloud agnostic exam but I just want you to understand what your recourse is as a

customer or a consultant to customers when you want to verify that your CSP or prospective CSP meet your Quality Bar when it comes to compliance with

well-known security standards continuing with risk analysis let's look at a couple of CSP risks and risks for the Cloud solution are mainly associated with data privacy and information

security there's authentication risk so does the CSP provide a solution or is this a customer responsibility we talked about

Federation versus identity as a service a bit earlier in this session so if it's customer managed we have more control if it's CSP managed we're transferring some

of that risk over to our cloud service provider then data security how a vendor encrypts data at rest the strength of the cryptography and the access controls

that prevent unauthorized access by cloud service personnel and other tenants so some controls may be on by default but the customer may have to

enable others we saw this in domain two when we looked at cloud storage where we saw encryption at rest enabled by default we saw that forcing encryption

in transit so TLS encryption was a feature we needed to turn on as was double encryption which would facilitate crypto shredding down the road
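the crypto shredding idea mentioned above encrypt everything and destroy the key when the data should die can be illustrated with a deliberately simplified Python sketch this toy XOR scheme is NOT real cryptography a real deployment would use a vetted library and a CSP key vault or HSM

```python
import hashlib
import secrets

# Toy illustration of crypto shredding: keep data encrypted, and destroy
# the key to render any leftover ciphertext on physical media useless.

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream from the key (toy counter mode)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """XOR data with the keystream; the same call encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = secrets.token_bytes(32)                   # customer-held key
ciphertext = xor_cipher(key, b"customer record")
assert xor_cipher(key, ciphertext) == b"customer record"

key = None  # "crypto shredding": destroy the key; only ciphertext remains
```

once the key is gone every copy of the ciphertext on the CSP's physical media is unrecoverable which is exactly the compensating control for not being able to wipe the physical storage yourself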

supply chain risk management so evaluating vendor security policies and processes now most csps don't allow direct auditing of their operations due in part to the sheer number of customers

they support instead they provide standardized reports and Assurance material regarding their security practices such as a sock 2 report ISO

27001 certification and specialized reports for regulated data like HIPAA FedRAMP and ISO IEC 27017 and

27018 and you saw exactly how we retrieved those standardized reports in one example demonstrated earlier in this session so let's shift gears and talk about

common Cloud risks now one risk that's been discussed is the organization losing ownership and full control over system Hardware assets careful selection

of csps in the development of slas and other contractual Agreements are critical to limiting risk organizations can balance cost savings with Risk by

building a system on top of IaaS or PaaS rather than utilizing a SaaS solution remember the service model affects the level of control but

regardless of which deployment or service model is used some risks are common to all cloud computing environments so Geographic dispersion of CSP data

centers if the cloud service is properly architected the disruption at one data center should not cause a complete outage but customers must verify the resilience and continuity controls in

place at the CSP downtime resilience for Network disruptions can be built in multiple ways such as multi-vendor connectivity zones and

regions we discussed these earlier in this session as well as in Cloud shared considerations in domain one compliance data in some

jurisdictions cannot be transferred to other countries so data dispersion may be inappropriate now your major CSPs have compliance focused service offerings so

you'll have some mitigations enabling you to Control Data residency then there's General technology risk so Cloud systems are not immune to Standard

Security issues like cyber attacks and CSP defenses should be documented and tested and customers should be aware of their configuration responsibilities remembering that some security features

are enabled by default and others must be configured by the customer and its customer responsibility to know which to be aware of which

let's shift to risk types so we have external risks different threat actors ranging from competitors and script kiddies to criminal syndicates and state actors

capabilities will depend on their tools their experience and certainly their funding other external environmental threats like fire and floods and man-made

threats such as accidental deletion of data by users internal threats a malicious insider a threat actor who may be a dissatisfied employee like someone overlooked for a

promotion another internal threat is human error which is when data is accidentally deleted CSPs also face these risks and customers

have to verify their CSP has addressed them or provided tools to help customers address them but customers should know who is responsible for configuration that's going to be a recurring theme

when it comes to security feature configuration so let's shift gears and talk about Cloud vulnerabilities threats and attacks the primary vulnerability in the

cloud is that it is an internet based model organizations could be at risk if the csp's public facing infrastructure comes under attack any attack on your

CSP or Cloud vendor may be unrelated to you as an organization threat actors may be targeting the CSP or another tenant of the CSP risks can come from other

tenants as well customers may be collateral damage of an attack on the CSP now I want to talk about Cloud specific

risks the cloud security Alliance details the top Cloud specific security threats in their list titled the CSA egregious 11.

and they cover the top 11 threats from year to year so a recent list included data breaches misconfiguration and inadequate Change

Control lack of cloud security architecture and strategy insufficient identity credential access and Key Management account hijacking

Insider threat insecure interfaces and apis weak control plane metastructure and applistructure failures we'll talk about those two

terms if you're not familiar limited Cloud usage visibility and abuse and nefarious use of cloud services so let's break these 11 down a bit further

first we have data breaches which are loss of sensitive data due to a security breach now an unintentional loss or over sharing is a data leak a data breach is

loss due to a security breach you'll want to know the difference for exam day misconfiguration and inadequate Change Control software can offer the most secure configuration options but if it's

not properly set up then the resulting system will have security issues the same is true of any cloud service we can remediate this risk through change and configuration management a deliberate

written plan that goes through a review process to reduce errors lack of cloud security architecture and strategy as organizations migrate to the

cloud some Overlook security or they fail to consider their obligations in the shared responsibility model insufficient identity credential access

and Key Management it's important to remember that the public Cloud offers benefits over Legacy on-premise environments but it can also bring additional complexities

identity and access management encryption and secret and Key Management are different than on-prem and essential

in the cloud but we need to spend time in architecting those solutions to make sure we're following best practices for the cloud so we modernize our approach

to these areas as we modernize our approach to compute and Service delivery account hijacking credential theft abuse and or elevation to carry out an attack

phishing is actually the most common approach to account hijacking Insider threat disgruntled employees employee mistakes and unintentional over sharing

job rotation privileged access management auditing and security training are all potential mitigations

insecure interfaces and apis customers failing to secure access to systems gated by apis web consoles and the like controls like multi-factor

authentication role-based access control and key based API access are all controls that can help mitigate these threats next we have weak control plane

issues weaknesses in the elements of a cloud system that enable Cloud environment configuration and management this would be our web console our command line interfaces and our apis the

good news is most csps offer reference architectures to ensure customers secure and isolate their dev test and prod environments as well as their production data

so now let's take a quick look at Insider threat protections offered by csps and again I'm just going to show you one example here of Insider threat

protections available with a CSP just for context so I'll switch to a browser and I'm going to browse to compliance.microsoft.com which is home to

Microsoft Purview which includes an array of compliance solutions and here I see The Insider risk management solution

and when I go to the policies tab here I can create a policy to Define what types of behavior I'd like to monitor for and you'll see there are templates here that allow me to monitor for malicious

behaviors like Data Theft but also unintentional leakage data leaked by my higher priority users or my habitually risky users I see security policy

violations even misuse of health records so a number of templates that get me off to a good start if I'm not quite sure what sorts of behaviors I want to

monitor for now I'll quickly create a policy here just so we can look at the types of behaviors these policies will monitor for a bit more specifically

and when we get into the details here I see I can look at the indicator so for office indicators I can look at sharing

behaviors I can look at deleting of SharePoint files as I scroll down here I see adding users from outside the

organization I see removing sensitivity labels and when I look at the CASB solution I see unusual Mass deletion another great example of tooling

provided by the CSP that requires customer configuration continuing with the CSA egregious 11. we

have metastructure and applistructure failures these are vulnerabilities in the operational capabilities that csps make available like apis for accessing

various cloud services now if the CSP has inadequately secured these interfaces any resulting Solutions built on top of those services will inherit

these weaknesses now let's break these down just a bit further The metastructure is the protocols and mechanisms that provide the interface between the cloud layers enabling

management and configuration the applistructure is the applications deployed in the cloud and the underlying application services used to build them

that would include PaaS features like message queues functions and message services so who's responsible and how do we mitigate well mitigating risks in this

area is the responsibility of the CSP so customers should verify the CSP has implemented their own secure software development lifecycle to ensure service continuity

and remembering that your csps generally don't allow direct audits that's where we go back to reading assurance materials in which the csps tell us about their

compliance with various audit standards and compliance standards and rounding out the list limited Cloud usage visibility which refers to when

organizations experience a significant reduction in visibility over their information technology stack as a whole now this is because in some models the

CSP owns the stack so visibility is limited by Design and by responsibility and finally abuse and nefarious use of cloud services and while low-cost and

high scale of compute in the cloud is an advantage to Enterprises it's also an opportunity for attackers to execute disruptive attacks at scale this makes executing DDOS and phishing attacks

easier so csps have to implement mitigating security controls to address these risks remember csps are dealing with

multi-tenancy at higher scale and with a more varied customer base than we are in a private cloud in a corporate environment there are several approaches to risk mitigation in cloud environments and the first of those is selecting a

qualified CSP the next is designing and architecting with security in mind security should be considered at every step and that starts with the design process

the next risk mitigation tool is encryption and data should be encrypted at rest and in transit so that means storage and database encryption at rest

TLS and VPN for data in transit and finally ongoing monitoring and management to maintain security posture major csps generally provide tools to

manage and monitor configuration security and to monitor changes to cloud services and to track their usage so let's take a quick look at an example

of this in a live Cloud environment ongoing monitoring and management to maintain security posture in fact we call this capability Cloud

security posture management and Cloud workload protection so I'm going to look on the Microsoft platform and Microsoft Azure at Defender for cloud which gives

us that security posture management AWS and Google Cloud platform absolutely have equivalent tools so here I can see my security posture I can see

recommendations coming from the CSP and it even goes a bit further than that so when I drill down into these recommendations for example

encrypting data at rest I see here it tells me I have a VM and a database now it tells me the status is completed so if I had a regression if somebody were

to reverse a secure configuration that would appear here as well and a recommendation would be provided and you can see that it's even been gamified to a certain degree there's a

score here in addition to that recommendation so I'll go to security alerts any alerts that require my attention any configuration

recommendations come up here and going down the list under Cloud security I see that security posture I see Regulatory Compliance so this is going to show me

some default configurations now this tool has dozens of compliance templates I can apply but you see here sock and ISO 27001 right out of the box here's

that cloud workload protection so any of my specific workloads are going to be surfaced here so I can thumb through my VMS and then my PaaS services right here

in one place but just a quick look so know that your cloud service providers have that capability baked in for you

that brings us to section 3.4 design and plan security controls here we'll cover physical and Environmental Protection this would include on-premises for private and hybrid Cloud scenarios

system storage and communication protection identification authentication and authorization in Cloud environments and audit mechanisms functions like log

collection correlation which would be a SIEM function and packet capture we're going to touch on a few concepts related to physical and Environmental Protection and in some cases revisit

Concepts we've touched on previously but the primary consideration is site location as that will have an impact on both physical and Environmental Protections your cloud data centers share many requirements with traditional

co-location providers or individual corporate data centers including the need to restrict physical access at multiple points ensuring a clean and stable power supply

adequate utilities like water and sewer adequate Workforce remember for the exam that these considerations are a customer responsibility in on-premises or private

cloud data centers and a CSP responsibility in the public Cloud I do expect overall to see less exam focus on physical consideration since it's a CSP

area of responsibility for public Cloud we saw how to track down those CSP assertion documents that articulate the

csp's compliance with various Regulatory and audit standards and Frameworks so site selection and facility design

the key elements in site selection and facility design include visibility composition of the surrounding area accessibility effects of natural

disasters we don't want to build a data center in a site that's not easily accessible by automobile for example or that would have undue exposure to natural disasters you know for example I

might not build a data center on the coast now these are all problems for the CSP and the public Cloud again customers need to focus on selecting CSP data center locations to meet their disaster

recovery and data residency requirements remember csps auto-select region pairs for redundancy something to just bear in mind so if we revisit the region pairs

concept we talked about in a previous installment in the series for example we have East U.S as a primary data center

region the CSP will pair a secondary region to serve as the backup and that's generally 300 plus miles away chosen by

the CSP so in my example Microsoft uses West us as the region pair for East U.S

moving on to system storage and communication protection we'll touch on a few Concepts you've seen at least once before we want to make sure that we encrypt and protect data at rest in

transit and in use and protect systems and services from disruptive attacks at scale like denial of service and distributed denial of service certainly made easier in the cloud

boundary protections for Ingress and egress firewalls intrusion detection and prevention and Key Management so protecting secrets of all kinds passwords Keys certificates

Etc that's really the technology half of the equation and security practices automation of configuration think infrastructure as code responsibilities

for protecting Cloud systems and services should be well defined monitoring and maintenance in place this is a little more people and process focused and remembering that customer

and CSP roles in all of these areas are going to vary based on the shared responsibility model so your responsibilities as a customer vary from IaaS to PaaS to SaaS and we need to

make sure you know the difference on exam day and properly securing Information Systems can be a difficult task due to the sheer number of elements that make up a system it can actually

help to break these systems down into components and then apply security controls to make the overall task a bit more manageable to kind of piece it out

now one source of controls is nist special publication 800-53 security and privacy controls for information systems and organizations which contains a

family of controls specific to systems and Communications in fact that control family includes more than 50 controls many of which are relevant to system

storage and communication now to get a bit more specific we'll break this down into policy and procedures separation of system and user

functionality security function isolation denial of service protection boundary protection and cryptographic key establishment and management

so starting with policy and procedures we establish requirements for system protection and Define the purpose scope roles and responsibilities needed to achieve it

separation of system and user functionality essentially no single person can control all of the elements of a critical function or system and separating user and admin functions

can also prevent users from altering processes or misconfiguring systems sometimes unintentionally security function isolation separating security specific functions from other

roles is just another flavor of separation of Duties really configuring data security controls like encryption and logging configuration would be perfect examples of that

security function isolation denial of service protection so denial of service is a disruptive attack at scale it's definitely more difficult for smaller organizations to combat

effectively but most of your csps offer denial of service or DDOS mitigation as a service and there are also dedicated third-party providers like Akamai and

cloudflare that offer DDOS mitigation protections now in the big three we have Azure DDOS AWS shield and Google Cloud

armor which are all DDOS mitigation as a service features and on at least a couple of those platforms they offer a basic tier of that service at no charge

and requiring no real configuration and we have boundary protection which deals with both Ingress and egress protections including preventing malicious traffic from entering the network

preventing malicious traffic from leaving the network protecting against data loss so data exfiltration and configuring rules and policies in your routers gateways or firewalls

and your large csps generally have a policy engine that allows you to configure centralized policies to apply to your network virtual appliances your virtual firewalls and gateways as you bring those devices or new regions

online so you don't have to configure those individual devices manually so you're really codifying your configuration as infrastructure as code

and finally cryptographic key establishment and management cryptography provides a number of security functions including confidentiality integrity and

non-repudiation and it helps to match these functions to the protections they offer so encryption tools like TLS or VPN can be used to provide confidentiality
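As a small aside, the confidentiality half can be sketched with Python's standard library ssl module; this is a minimal client-side illustration, and the TLS 1.2 minimum-version pin is an assumed policy choice, not a requirement of the module:

```python
import ssl

# create_default_context() turns on certificate validation and hostname
# checking, which is what makes TLS provide confidentiality against an
# on-path attacker rather than just encryption to "someone"
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy TLS 1.0/1.1

# In use, the context would wrap a TCP socket, e.g.:
#   with socket.create_connection(("example.com", 443)) as sock:
#       with context.wrap_socket(sock, server_hostname="example.com") as tls:
#           ...
```

The defaults here are the important part: a context built any other way may silently skip certificate or hostname checks.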

hashing can be implemented to detect unintentional data modifications that's really an Integrity function so if I Hash a file I calculate a hash I send

you the file you calculate the hash on the file you receive if the hashes match we know the file has reached you intact its integrity remains intact and

additional security measures like digital signatures or hash based message authentication code or hmac can be used

to detect intentional tampering so hmac can simultaneously verify both data integrity and message authenticity so
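The hash-then-compare flow and the HMAC tag just described can be sketched in a few lines with Python's standard library (the payload and key here are made up for illustration):

```python
import hashlib
import hmac

data = b"contents of the file being transferred"  # illustrative payload

# Integrity: sender and receiver each hash the file; matching digests
# show the file was not accidentally modified in transit
digest = hashlib.sha256(data).hexdigest()

# Integrity + authenticity: an HMAC mixes in a shared secret key, so a
# tamperer without the key cannot recompute a valid tag
key = b"shared-secret-key"  # illustrative secret
tag = hmac.new(key, data, hashlib.sha256).hexdigest()

# Receivers should compare tags in constant time to avoid timing leaks
valid = hmac.compare_digest(tag, hmac.new(key, data, hashlib.sha256).hexdigest())
```

Note the constant-time comparison: a plain `==` on MACs can leak how many leading characters matched.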

that's really an authenticity function since hmac relies on a shared key true non-repudiation requires digital signatures let's move on to identification authentication and authorization authentication sometimes abbreviated as

auth n is the process of proving that you are who you say you are that's identity authorization sometimes abbreviated auth Z is the act of granting an

authenticated party permission to do something that's access so permissions rights and privileges are granted to users based on their proven

identity for resources to which they have been assigned access and users should be granted minimum necessary permissions this is called the principle

of least privilege I want to touch on accountability which is a challenge with Cloud identity users who perform activities on a system need to be held accountable for following

policies and procedures accountability is typically enforced with adequate logging and monitoring of system activity now Cloud brings with it some challenges in enforcing accountability

for example SaaS apps used as users travel make identifying anomalous or malicious behavior much more difficult bad password practices with our users

specifically users reusing passwords across Services is a problem and the use of personal devices in BYOD or bring your own device scenarios

now modern identity as a service tools in the cloud provide solutions for these challenges which we'll talk through and I'll show you a bit in just a moment so let's start with multi-factor

authentication which works by requiring two or more of the following authentication methods something you know like a pin or a password something you have like a

trusted device or something you are a biometric Authentication that second Factor can be authenticator apps like the Microsoft authenticator or Google Authenticator

a voice call an SMS or text message though SMS is considered a very weak second factor and organizations like the cloud security Alliance have been recommending against

that for some time we have the OATH hardware token which provides a time-based one-time password and if that one-time password concept

isn't Crystal Clear think about any authenticator app you use Microsoft Google One login any third party they also generally serve as a software oath

providing that time-based one-time password in the form of a numeric sequence that typically changes every 30 seconds continuing with multi-factor authentication so two or more
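Under the hood that rotating code is just an HMAC over the current 30-second time window; here's a minimal sketch of the TOTP algorithm as specified in RFC 6238 (SHA-1 variant), verified against the RFC's own test vector:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at=None, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    counter = int(time.time() if at is None else at) // step  # time window index
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890", time 59s, 8 digits
print(totp(b"12345678901234567890", at=59, digits=8))  # → 94287082
```

Both sides (server and authenticator app) share the secret and the clock, which is why the same code appears on your phone and validates at the server with no network round trip.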

authentication factors obviously more secure than a single authentication Factor if you talk to some of the identity as a service providers you might be surprised to learn that in the

opinion of many experts passwords are the weakest form of authentication now password policies help increase their security by enforcing complexity and history requirements

smart cards are a good option which include a microprocessor and cryptographic certificate oath tokens are a stronger second Factor option creating a one-time password

whether that's a hardware token or a software token like the authenticator app on your phone biometric methods identifying users based on a fingerprint or facial

recognition every modern iPhone features facial recognition your Android phones that don't offer facial ID do have fingerprint generally speaking

so lots of options to go beyond a simple text message for that second Factor now let's shift gears and talk about conditional authentication policies this capability is increasingly common in

identity as a service platforms we've seen this in Azure active directory used with Office 365 for a lot of years now so a conditional authentication policy

will typically look at the signals around the authentication attempt the user and their location the device they're authenticating from is it a known device is it compliant with our

security policies is the application An approved application what is the real-time risk rating of this user and typically that risk rating comes from

machine learning and AI processing data from that user's past behaviors potentially some user entity behavioral analysis that tell us if conditions are unusual if risk is medium or high

potentially these signals will be processed together and then the platform will allow access block access or potentially require multi-factor authentication we can throw an

additional prompt at that user if the conditions tell us that there's something a bit unusual and if they meet the bar then they are granted access to our data
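To make that signal-combining logic concrete, here's a toy policy evaluator; the signal names and thresholds are invented for illustration and don't match any vendor's actual schema:

```python
def evaluate_sign_in(signals: dict) -> str:
    """Combine sign-in signals into allow / require_mfa / block.

    Illustrative only: real platforms weigh many more signals, often
    scored by machine learning over the user's past behavior.
    """
    if signals.get("user_risk") == "high":
        return "block"  # too risky to allow at all
    low_risk_context = (
        signals.get("trusted_location", False)
        and signals.get("compliant_device", False)
        and signals.get("user_risk") == "low"
    )
    if low_risk_context:
        return "allow"  # known-good device in a known-good location
    return "require_mfa"  # anything unusual gets an extra prompt

print(evaluate_sign_in({"user_risk": "low", "trusted_location": True,
                        "compliant_device": True}))  # → allow
```

The key idea is that the decision is conditional on context rather than a blanket MFA-on-every-login rule.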

and resources and this functionality Works seamlessly with the authenticator app on our mobile device that's ubiquitous today the authenticator application as it's also

called so it's a software based authenticator it implements two-step verification services using the time-based one-time password algorithm

and hmac based one-time password algorithm for authenticating users of software applications that's the authenticator app and we know Microsoft authenticator and

Google Authenticator are really just two of many but the authenticator apps from companies like Microsoft and Google generate one-time passcodes using these Open Standards that are developed by the

initiative for open authentication so OATH you'll hear hmac and totp tokens called OATH tokens with some of these providers just different names for the

same functionality we have push notifications where the server is pushing down the authentication information to your mobile device so you have notifications enabled on your phone and

really there's a finer grain of notifications it's time sensitive notifications so that push notification will push a notification from your authenticator app directly to you on your phone right away when you need to

respond to that second Factor but the identity platform is using the mobile device app to be able to push that message to you in real time or near real time so you can respond to that

second Factor on your phone now I'd like to take just a minute and show you conditional authentication policies in an identity as a service platform just to give you some real world context for

how that functionality increases the security around identity and access management in the cloud so I'll switch to a browser here and I'm looking at the Azure active directory

admin Center so this is Microsoft's identity as a service platform so if you've not used this with Microsoft Azure maybe you used an Azure AD account with office 365. this is the platform

that supports Office 365 for identity now I'm going to scroll down and look at the security features of azure active directory and conditional access is what

Microsoft calls their conditional authentication functionality that I was describing in the presentation now I'm going to look at an existing policy here exchange online requires compliant

device so I can see it's already configured to look at some of the signals as part of that user's authentication attempt so I can apply

this policy to all users or specific groups of users even guests and external users I can apply this to specific applications I can drill down to a

specific app or apply it to all apps now let's look at conditions so I see here I can act based on the user's location and in fact I can exclude certain locations so

I might not want to apply additional factors of authentication to trusted location so it's certainly possible that when someone is on a compliant device in

a trusted location we're going to skip this policy and I'll just exclude them and I can look at device platforms so I

can apply this to specific types of devices Windows Mac OS iOS Android Etc none or low maybe I want to apply these additional

authentication conditions now look at user risk so this is the risk level for the user itself for that identity and again giving me the option to configure

my tolerance there now I'll scroll down a bit and look at my access controls here so I can configure some conditions around access so I can choose to Grant

or block access now blocking access is a pretty straightforward decision I'm just checking block but under Grant what you'll notice here is I can require MFA I can require specific authentication

strength a compliant device a device that's hybrid Azure ad joined so join to my on-premises active directory and synced to my identity provider in the

cloud in Azure ad I can require an approved Client app and an app protection policy which would be something we'd set up in our mobile

device management platform and then you'll notice down here I can require one of these controls or all of these controls so I have a lot of flexibility in the functionality and on this platform they actually offer the option

to straight up enable that policy or to put it into report only mode which can be handy because we can assess what the

impact of the policy would be before we roll it out to live users so again just a quick look hope that gives you some context so back to our presentation let's talk

about Federation which is a collection of domains that have an established trust so the level of trust may vary it typically includes authentication and

almost always includes authorization we're typically using this for identity and access management it often includes a number of organizations that have established

trust for shared access to a set of resources for example you can Federate your on-premises environment with your Azure active directory and use this Federation for authentication and

authorization this sign in method ensures that all user authentication occurs on premises we are federating to our on-premises directory it allows administrators to implement

more rigorous levels of access control so historically we would use Federation so we could leverage certificate authentication or a key fob or a card token

some of these methods are making their way into the identity as a service platform so Federation has become less necessary in some circumstances I'd like to talk through a quick identity

Federation example I think might resonate with you so I have a website let's say it's hosted in Microsoft Azure that's my CSP so that's going to use Azure active directory as its identity as a service

that's identity provider A or IDP A I have a user who wants to authenticate with identity provider B let's say they're a Facebook

user so they don't have an Azure active directory account and I want to facilitate easy authentication of Facebook users to my website without requiring everyone to have an Azure AD

account so what I can do is configure Federation I can configure Azure active directory to trust Facebook as an identity provider so identity provider a Azure ad trust identity provider B

Facebook and that way my user can authenticate with their Facebook account and then they are granted shared access now this may be cloud or it may be on

premises we definitely see identity Federation happening between identity providers in the cloud and on premises like active directory on-prem is quite common and trust is not always

bi-directional as in this example trust only happens in One Direction and incidentally configuring Facebook as an identity provider in Azure active directory is not that difficult in fact

I'm just going to go back to the portal quickly and I'll click on external identities here just to show you all identity providers and you'll notice Facebook is right there so many of your identity as a

service platforms are going to have similar functionality to allow Facebook Google Twitter as potential identity providers

and with identity and access management audit mechanisms are top of mind we need to collect logs so we have an audit Trail and your cloud services will offer different controls over what information

is logged what they will have in common is they collect a minimum level of security relevant events like the use of privileged accounts or changes to

privileged accounts and a log aggregator like a security information event management system or sem can ingest logs from all of your

on-premises and Cloud resources for review and correlation so nist SP 800-53 and the owasp logging cheat sheet

both offer guidance on specific information to capture in audit records and good news there we covered both of these in domain two of this series

so correlation that I just mentioned refers to the ability to discover relationships between two or more events across logs this capability is commonly associated with a SIEM a security

information and event management system which correlates events and logs from many sources this is very important in investigating and managing

security incidents because we can correlate activities across a broad variety of sources to provide a more comprehensive picture of the actors activities in our environment
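The core of that correlation is just grouping events from different log sources on a shared field and ordering them in time; a minimal sketch (the event shapes and sources are made up for illustration):

```python
from collections import defaultdict

def correlate(events, key="user"):
    """Build a per-entity timeline from events emitted by many log sources."""
    timelines = defaultdict(list)
    for event in sorted(events, key=lambda e: e["time"]):
        timelines[event[key]].append((event["time"], event["source"], event["action"]))
    return dict(timelines)

# Illustrative events from three sources, out of order as logs often arrive
events = [
    {"time": 2, "source": "vpn",  "user": "alice", "action": "connect"},
    {"time": 1, "source": "idp",  "user": "alice", "action": "login_failed"},
    {"time": 3, "source": "saas", "user": "alice", "action": "mass_download"},
    {"time": 2, "source": "idp",  "user": "bob",   "action": "login"},
]
timeline = correlate(events)
# timeline["alice"] now shows failed login -> VPN connect -> mass download,
# a sequence no single log source would reveal on its own
```

Real SIEMs add normalization, clock alignment, and rule engines on top, but the group-and-order step is the foundation.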

we touched on some of the core tenets of a SIEM in domain 2 and we'll talk about SIEMs in greater depth later in this series and to round out 3.4 we'll touch on

packet capture and replay so packet capture tools are also called protocol analyzers and in the cloud Some Cloud environments may not provide any facility for capturing packets

particularly in SaaS scenarios where the customer is not responsible for anything related to the environment certainly you'll see that your csps offer some facilities for IaaS and other

foundational scenarios now Wireshark is a free open source protocol analyzer it has CLI and GUI versions windows and Linux versions it is really ubiquitous

this is the de facto standard for packet capture now some of your csps support Wireshark directly others have specialized services to perform packet

capture on Virtual networks so two good examples in Microsoft Azure there is Network Watcher which is a specialized packet capture mechanism AWS supports

Wireshark directly incidentally Network Watcher in azure produces pcap output that we can open in Wireshark so your CSP protocol analyzers can actually save

the data that they collect to a Wireshark compatible packet capture file or pcap which is the case in Azure and a couple of other platforms that
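For context on what a .pcap actually is: every classic libpcap file opens with a fixed 24-byte global header that tools like Wireshark read first. A small sketch of parsing it with Python's struct module (the sample bytes are synthetic):

```python
import struct

def parse_pcap_global_header(header: bytes) -> dict:
    """Parse the 24-byte libpcap global header that begins a .pcap file."""
    (magic,) = struct.unpack("<I", header[:4])
    # Reading magic 0xA1B2C3D4 (or the nanosecond variant 0xA1B23C4D)
    # little-endian means the file was written little-endian
    endian = "<" if magic in (0xA1B2C3D4, 0xA1B23C4D) else ">"
    magic, major, minor, _tz, _sigfigs, snaplen, linktype = struct.unpack(
        endian + "IHHiIII", header
    )
    return {"version": f"{major}.{minor}", "snaplen": snaplen, "linktype": linktype}

# Synthetic header: pcap version 2.4, snaplen 65535, link type 1 (Ethernet)
sample = struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 1)
print(parse_pcap_global_header(sample))
```

Because the format is this simple and open, a CSP's capture tool only has to emit these headers plus per-packet records for Wireshark to open the result.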

come immediately to mind and that brings us to section 3.5 plan disaster recovery and business continuity so here we'll touch on

business continuity and Disaster Recovery strategy business requirements we're going to touch on three key acronyms recovery time objective

recovery Point objective and Recovery Service level and creation implementation and testing of our business continuity and Disaster

Recovery plans a good place to start is by identifying the difference between a business continuity plan and a disaster recovery plan so the BCP focuses more on the

whole business where the disaster recovery plan focuses more on the technical aspects of recovery the business continuity plan will cover Communications and process more broadly

another way to think about that is the business continuity plan is an umbrella policy and the disaster recovery plan is part of it

so what are the goals of DRP and BCP well it's all about minimizing the effects of a disaster by improving responsiveness by the employees in

different situations reducing confusion by providing written procedures and participation in drills to ensure folks know what they are doing in the event of

an actual disaster ultimately helping the key people executing the plan to make logical decisions during a crisis

there are a few core definitions related to business continuity planning that are worth knowing for exam day so the business resumption plan this is the plan to move from the disaster recovery

site back to your business environment or back to normal operations in other words then mean time between failures that's a determination of how long a piece of IT

infrastructure will continue to work before it fails mean time to repair or sometimes mean time to recovery a time determination for how long it will take to get a piece

of Hardware or software repaired and back online Max tolerable downtime the amount of time we can be without the asset that is

unavailable before we must declare a disaster and initiate our Disaster Recovery plan so let's shift and talk about

business continuity and Disaster Recovery strategy I wanted to provide just a couple of definitions here that may come in handy on exam day so the business continuity

plan is the overall organizational plan for how to continue business after an event has occurred it's a proactive risk mitigation

strategy that contains likely scenarios that could affect the organization and guidance on how the organization should respond in other words the business

continuity plan is going to focus on the most likely scenarios this plan is sometimes called a continuity of operations plan now

depending on the sources you look at some sources will call out a subtle difference between a business continuity plan and a continuity of operations plan if you

look at the common body of knowledge for the ccsp exam these two are considered one and the same and then the disaster recovery plan again is the plan for recovering from an

I.T disaster and having the I.T

infrastructure back in operation one is business focused the other is more tech focused and the business impact assessment which we talked about earlier in this series

is used to determine which processes are critical and which are not it measures the impact of specific systems and processes and any that are

deemed critical to the organization's functioning must be prioritized in an emergency situation the business impact assessment contains

typically a cost-benefit analysis and a calculation of the return on investment and just pivoting to look at business continuity and disaster recovery from a

CSP perspective a cloud data center that's affected by a natural disaster will likely activate multiple BCPS and drps a CSP will activate both plans to

deal with the interruption to their service now one key element of the BCP is communicating incident status to relevant parties

now the customer is responsible for determining how to recover in the case of a disaster in the cloud so recovery of our application is not necessarily going to be automatic and a customer may

choose to implement backups or utilize multiple availability zones load balancers or other techniques in other words the CSP is going to give us the tools but they're not necessarily going

to do all of that design and implementation work for us we have to use the tools we're given csps can further protect customers by not

allowing two availability zones within a single physical data center within a cloud region now we talked about availability zones all the way back in

domain one so let's just briefly revisit the concept of availability zones in the cloud data center to refresh your memory here so availability zones are unique

physical locations within a region with independent power Network and Cooling and they're comprised of one or more data centers if we look at a region for

a cloud service provider like U.S east

for example that region is going to consist of multiple data centers in fairly close proximity and availability zones will provide a way for us to

spread our infrastructure within that region within those data centers to tolerate data center failures via redundancy and isolation the focus there

is really on providing redundancy within that data center region so if I put a load balancer in place with multiple web application instances I would hope

to spread those throughout the data centers in that region across availability zones so I make my load balancer Zone redundant in other words but the focus again is on data center

failures within a region so our hope is that our CSP doesn't provide availability zones that leave us stuck in a single Data Center

and your major csps have multiple data centers within a region so it can be safely assumed this is true so let's talk about the communication plan the plan that details how relevant

stakeholders will be informed in the event of an incident like a security breach it would include a plan to maintain confidentiality such as encryption to ensure that the event does

not become public knowledge at least before we're ready the contact list should be maintained that includes stakeholders from government police customers suppliers

and internal staff now compliance regulations like gdpr include notification requirements relevant parties and timelines for example gdpr

has a 72-hour time limit on the point by which certain notifications must go out but confidentiality amongst internal

stakeholders is desirable so external stakeholders can be informed in accordance with the plan you want to be the one as an organization informing

your stakeholders not allowing them to get that information from a News Bulletin so when we have an incident there are multiple groups of relevant stakeholders

that we need to inform and manage and they may include internal stakeholders a cyber insurance provider business partners customers law enforcement a

stakeholder in this case is a party with an interest in an Enterprise corporate stakeholders include investors employees customers suppliers

regulated Industries like Banking and Healthcare will have requirements driven by the regulations governing their Industries so stakeholder management and

communication plans will certainly be influenced by the industry that your organization Works in so let's talk business requirements these are the three acronyms called out

in the exam syllabus there's the recovery Point objective that's the age of data that must be recovered from backup storage for normal operations to resume if a system or a network goes

down next we have the recovery time objective or RTO which is the duration of time and a service level within which a business process must be restored after a

disaster in order to avoid unacceptable consequences associated with a break in continuity SLAs between a company and its customers will definitely influence the

RPO and the RTO in fact they will be determined based on contractual slas between a company and its customers or operating level agreements or Olas

between the IT department and other departments within the organization and finally we have the recovery service level which measures the compute

resources needed to keep production environments running during a disaster it is a percentage measure zero to one hundred of how much computing power you

will need during a disaster based upon a percentage of computing used by production environments versus other environments like development test and

QA so for example if I have a 10 web server environment and eight of those servers are used for Dev test and QA I'd only need to bring the two production servers into my Dr

environment I'm only going to migrate what I need to keep the production trains running so to speak but that Recovery Service level answers the

question what needs to be migrated to keep production running and another quick real world look this time at data backup and retention

features in platform as a service offerings this will only take a minute but it'll be a good reminder of the pros and cons the trade-offs in platform as a service

so I'm going to look at Azure SQL so Microsoft's PaaS offering for SQL Server so I'm looking at a SQL instance here and I'll go down under

data management to backups and what I see down here are my available backups but I'm going to look at my retention policies and what I want to show you here is when I look at the retention

policies for This Server we'll notice here that for PITR which is point in time restore backups I only have so many days that I can select there's a

sliding scale that gives me one to seven days and I can then look at my differential backup frequencies I have a drop down that gives me a limited number of options I have a little more control

in my long-term retention you'll see here it mentions that I can keep my long term backups for up to 10 years so I have

that long-term retention flexibility but less flexibility in some of the short-term point in time recovery options so the upside is configuration is very simple it's just a few clicks the

downside is I have to accept the limitations that come with that platform as a service offering next up is BCDR or business continuity

and Disaster Recovery plan creation implementation and testing and I'd like to talk through the process with you beginning with the design phase we design our bcdr plans based on

priorities from the business impact analysis and FEMA and InfraGard are organizations that can also advise us on likely disasters for a region so we prioritize our planning around the most

probable impact then we Implement our plan to protect critical business functions again we're always focused on valuable assets so

when we're designing plans to recover business operations and infrastructure we're focused on critical business functions first we also need to identify

key personnel as they will be the ones carrying out these BCDR plans now in the testing process we're testing to make sure our plans function as

expected and that the people involved know their roles and responsibilities and that the plans actually work testing both the BCP and DRP plans is essential

and disaster recovery and business continuity plans that are not tested seldom work as expected in live use if we haven't tested and refined them

first and when we conduct these tests we then report and revise so our business continuity and Disaster Recovery plan should be revised as necessary based on

test results and tests will definitely identify the need for revision because our business evolves and so these plans must evolve and be refined over time to

continue to align with our critical business functions and processes so let's talk through a few Disaster Recovery test scenarios we need

to test our business continuity and Disaster Recovery plans at least annually most organizations will test them in part in various forms more than once a year

common disaster scenarios would include data breach data loss power outage or other utilities Network failure so notice that not every impact is the most

significant impact we want to test a range of impacts natural disasters civil unrest or terrorism we're getting more serious now and pandemics

and the plans should also test the most likely scenarios first but can also be tested in a number of ways there are different types of tests we can carry out so for example tabletop testing

members of the disaster recovery team Gather in a large conference room and role play a disaster scenario usually the exact scenario is known only to the

test moderator who presents the details to the team at the meeting so they are responding in the moment the team members refer to the document

and discuss the appropriate responses to that particular type of disaster so a couple of benefits to this type of testing is that a tabletop test is role

play only so it has minimal impact on productivity and it's also a great way in your early revisions to identify

revisions to the plan steps when you write out that first draft of a disaster recovery or business continuity plan nobody's going to get it perfect on the first draft so the tabletop testing

can help us refine the plan so we are ready for a real impact then there's a dry run in this test some of the response measures are tested on non-critical functions so there's a

bit of doing in this case then we have a full test which involves actually shutting down operations at the primary site and shifting them to the disaster recovery site when the entire

organization takes part in an unscheduled unannounced practice scenario of full business continuity and Disaster Recovery activities and just a couple of notes on plan

implementation so implementing business continuity or Disaster Recovery processes May necessitate utilizing cloud computing for critical services so customers can take advantage of the

Cloud's High availability features like multiple availability zones automatic failover to backup regions direct connection to a cloud service provider

and most of these choices come with costs that have to be considered even if we're talking about intra-region features like availability zones protecting us against a data

center failure or if it's automatic failover to a backup region when we're implementing that type of redundancy there's going to be some infrastructure involved that has a subscription cost

but the cost of high availability in the cloud is generally less than a company trying to achieve High availability on their own but it needs to be cost effective at the end of the day the cost

of building resiliency should be less than the cost of business interruption and with that in mind let's get started with domain four cloud application security

so let's take a look at the exam Essentials those areas the official study guide advises will Factor prominently on exam day beginning with Cloud development Basics pitfalls and

vulnerabilities so here we'll talk about performance scalability portability and interoperability as they pertain to Cloud as well as the popular threat

lists from OWASP and SANS up next we'll talk about the application of the software development life cycle or as you'll hear it referred to in the

ccsp context the secure software development life cycle we'll touch on development models like agile and waterfall threat models as well as

secure coding practices and standards then applying test methodologies to application software we'll touch on functional and non-functional testing

static and dynamic testing as well as the QA process we'll touch on managing software supply chain security and secure software usage

practices common application security technology and security controls we'll look at elements of design and data encryption as well as orchestration and

virtualization and finally identity and access Management Solutions as well as common threats to Identity and access we'll start with 4.1 Advocate training

and awareness for application security and here we'll drill down on cloud development Basics common pitfalls and common Cloud vulnerabilities with some

focus on the OWASP top 10 and SANS top 25 lists we'll start with Cloud development Basics and we see three key Concepts called out in the official study guide

there is security by Design which declares security should be present throughout every step of the process various models exist to help like the Building Security In Maturity Model or BSIMM

this pairs well with devsecops then there's shared security responsibility the idea that security is the responsibility of everyone from the most Junior member of the team to Senior

Management that describes the primary principle of devsecops where security is present throughout the software development life

cycle and everyone is responsible for security and finally we have security as a business objective where risk mitigation through security control should be a key business objective

similar to customer satisfaction or Revenue this does require organization-wide security awareness and

commitment in order to be effective common pitfalls of application Security in the cloud so we'll touch on

performance scalability interoperability portability and API security you'll want to know the common pitfalls and the advantages of avoiding each of these

so we'll start with performance so Cloud software development often relies on loosely coupled services this makes designing for and meeting performance

goals more complex as multiple components May interact in unexpected ways you want to verify functionality and performance through

end-to-end load and stress testing and we have scalability one of the key features of the cloud is the ability to scale allowing applications and services

to grow and Shrink as demand fluctuates it does require developers to think about how to retain State across instances and handle faults with individual servers

scale out tends to be better than scale up in the cloud we can scale out with additional instances to meet demand and scale back during times of lesser demand

and our overall run rate is going to be less than deploying a smaller number of larger instances that keep billing us all the time for unused capacity
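to put rough numbers on that run-rate argument, here is a toy comparison, with entirely hypothetical prices, capacities, and demand figures:

```python
import math

# Hypothetical illustration of scale-out vs scale-up run rate.
SMALL_HOURLY = 0.10      # cost of one small instance per hour (made up)
SMALL_CAPACITY = 100     # requests/sec one small instance can serve (made up)
LARGE_HOURLY = 1.00      # one large instance sized for peak demand (made up)

# Demand over a simplified 4-hour window, in requests/sec
demand = [50, 400, 900, 120]

def instances_needed(rps: int) -> int:
    # Scale out: just enough small instances to cover current demand
    return max(1, math.ceil(rps / SMALL_CAPACITY))

# Scale-out bill: pay only for the instances each hour actually needs
scale_out_cost = sum(instances_needed(d) * SMALL_HOURLY for d in demand)
# Scale-up bill: the big instance runs (and bills) every hour regardless
scale_up_cost = LARGE_HOURLY * len(demand)

print(scale_out_cost, scale_up_cost)
```

with these invented numbers the elastic fleet costs less than half of the always-on large instance, which is the unused-capacity point made above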

next we have interoperability this is the ability to work across platforms services or systems and can be very important especially in multi-vendor and multi-cloud scenarios interoperability

across platforms increases service provider choice and can ultimately reduce costs and we have portability designing software that can move between on-premises and Cloud environments or

between Cloud providers that's what makes it portable portability in a hybrid scenario requires avoiding use of certain environment or provider specific apis

and tools that additional effort can make it harder to leverage Some Cloud advantages and it may require some compromises because not all tools we use on-premises

May translate to the cloud and certainly not all the features in the cloud are going to be available back on premises and finally API security so application

programming interfaces or apis are relied on throughout Cloud application design development and operation and designing apis to work well with cloud

architectures while remaining secure are both common challenges for developers and architects our API considerations need to include Access Control Data

encryption throttling and rate limiting your CSPs offer PaaS services that simplify addressing API concerns
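the throttling and rate limiting just mentioned are often implemented with a token-bucket scheme, and a minimal sketch shows the idea; class and parameter names here are illustrative, not any CSP's actual API:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter sketch for an API endpoint."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate               # tokens replenished per second
        self.capacity = capacity       # maximum burst size
        self.tokens = float(capacity)  # start with a full bucket
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1   # spend a token for this request
            return True
        return False           # caller would return HTTP 429 Too Many Requests

# Burst of 12 calls against a bucket allowing bursts of 10 at 1 request/sec
bucket = TokenBucket(rate=1, capacity=10)
results = [bucket.allow() for _ in range(12)]
print(results.count(True))  # → 10
```

the managed API gateway services the CSPs offer apply policies like this per key or per client so you don't have to build it yourself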

and you'll find the CSP offerings also address the need to present your apis in multiple regions in multiple geographies let's talk more about Cloud

vulnerabilities there are several groups that provide guidance on common application vulnerabilities and related security threats so in terms of vulnerabilities we're

talking about common Cloud vulnerabilities to avoid within the secure software development life cycle these include data breaches data

Integrity issues insecure application programming interfaces and denial of service and there are organizations that provide information on security threats

the cloud security Alliance the SANS Institute and the open web application security project or OWASP and later in this domain we will dive

into some of the top security concerns and risks as identified by some of these organizations next up is 4.2 describe the secure

software development lifecycle process here we'll focus on business requirements as well as phases and methodologies we'll touch on the phases of the secure

software development life cycle as well as waterfall and agile methodologies for managing software development efforts

so first we have business requirements so mature software development shops utilize a secure software development life cycle because it saves money and it supports repeatable quality software

development the secure software development lifecycle is fully successful only if integration of security into the organization's existing software

development life cycle is required for all development efforts that must be mandatory we have business requirements that capture what the organization needs its

information systems to do and then we have functional requirements which detail what the solution must do such as supporting max concurrent user

requirements which derive from business requirements like supporting all workers being able to access a system to perform their assigned duties in addition to these functional

requirements the organization must also consider security privacy and compliance objectives and requirements though when regulatory requirements are involved

objectives become requirements in a hurry moving on to the secure software development life cycle if we take a look at the common phases of a typical life

cycle we have planning that leads to requirements and then a design and then coding that design and then testing and then ongoing care and feeding which we

call maintenance these are the common phases of the secure software development life cycle now there are multiple variations of the secure software development life cycle I

did want to just briefly mention that if you see the acronym SSDLC that's just another way of saying secure software development life cycle and I saw that acronym called out in the common body

of knowledge now regardless of which SDLC model a company uses there are a few phases that appear in all models planning

requirements definition design and coding these are mentioned in the official study guide so I'm sure you're familiar we have planning which considers

potential development work focusing on determining need feasibility and cost do we need it and is it possible for a reasonable cost requirements definition so once an

effort has been deemed feasible user and business functionality requirements are captured this involves user customer and stakeholder input to determine desired

functionality identify current system or app functionality and the desired Improvement then the design phase focuses on designing functionality architecture integration points and

techniques data flows and business processes the solution is designed based on requirements that we gathered in the requirements definition

and then in the coding phase this is where the actual coding work happens now the ccsp exam outline mentions four phases in a secure software development

life cycle or at least it calls out these four phases of the life cycle and those phases are design Code test and maintain so these are mentioned in the official

study guide so make sure you're familiar we have the design phase again the solution is designed based on requirements gathered code this is where the coding work the

real work happens then the test phase this is testing to ensure software is functional scalable and secure and then maintain ongoing maintenance

updates patching and checks to ensure software remains functional and secure now there are only two software development models called out in the official exam syllabus and only two

likely to appear and the first is agile which places an emphasis on the needs of the customer and quickly developing new functionality that meets those needs in

an iterative fashion through iterations and then there's the waterfall model which describes a sequential development process that results in the development

of a finished product so agile is defined by its ability to allow quick response to changing requirements and Rapid iterations of

prototypes where waterfall requires clear requirements a stable environment and a low rate of change

the waterfall model has seven phases seven stages there's system requirements software requirements preliminary design detailed design

code and debug testing and operations and maintenance so these phases happen in sequence and the waterfall model only allows returning

one phase back for correction so it's a bit inflexible much like we cannot change the direction of a waterfall the waterfall model does not allow us to

easily pivot to New Directions as a result the waterfall model has declined in popularity in recent years it is relatively uncommon in Cloud development

and it's seen as Legacy by many the agile model for software development is based on the following four principles individuals and interactions over processes and tools working

software over comprehensive documentation customer collaboration over contract negotiation and responding to change

over following a plan agile was first described in the manifesto for agile software development back in 2001.

and it leverages an iterative repeating process called a Sprint so we have Sprint Planning Development testing

demonstration and then we repeat that through iterative Sprints we plan our work we do the development work the coding we test

we demonstrate we have a demo day at the end of every Sprint and then we do it over and again and a standard Sprint is two weeks

you'll see some organizations that'll run one week Sprints and a small project might be something you can complete in a single Sprint you may have large projects that take many Sprints over a

period of months but we'll repeat that process from the first Sprint to sprint in however many that is to the end of our project at which point

we have a working piece of software up next in 4.3 apply the secure software development lifecycle here we'll touch on cloud specific risks

threat modeling avoiding common vulnerabilities during the development process secure coding and will focus on some specific secure

coding standards that are expected to appear on the exam and software configuration management and versioning so Cloud specific risks are called out in section 4.3 and are actually

revisiting a concept we covered previously these were covered in domain three they come straight from the CSA website this is the CSA egregious 11 and they're covered in depth in the common

body of knowledge as well so in domain three we covered them from an architecture perspective here we will cover them briefly from a software development lifecycle or secure software

development lifecycle perspective in the context of devsecops and continuous integration and continuous delivery we have data breaches loss of sensitive

data due to a security breach and from a development perspective we'd want to implement centralized Secrets management using a vault solution in the cloud

which all the major csps offer data masking to obscure visibility of sensitive data at the database tier even from our database administrator so we

can leverage a solution like data masking to allow our database administrator to manage the data without being able to fully see sensitive data
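the masking rule itself is simple enough to sketch; this toy function shows the idea of leaving only the trailing characters visible, while real CSP data-masking services apply rules like this transparently at the database tier (the function name and sample values are hypothetical):

```python
def mask(value: str, visible: int = 4, pad: str = "X") -> str:
    """Obscure all but the last `visible` characters of a sensitive value,
    so an administrator can manage records without seeing full data."""
    if len(value) <= visible:
        return pad * len(value)        # too short: hide everything
    return pad * (len(value) - visible) + value[-visible:]

# A DBA can still troubleshoot and match records on the visible suffix
print(mask("4111111111111111"))          # → XXXXXXXXXXXX1111
print(mask("alice@example.com", visible=11))
```

the point is the database administrator works with the masked form while the application, running with different permissions, sees the real value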

and then misconfiguration and inadequate change control and here is where CI CD and infrastructure as code come into play We Define our application

infrastructure in code we deploy it hands-free through a CI CD process or a pipeline to eliminate human error and we

Implement release management so we have human approval Gates we have checkpoints to make sure that a smart person in the chair with the

right training verifies that the code we're about to release has in fact gone through all the appropriate validation procedures it's gone through testing in

QA and we know that code is functional and secure before we release it into production then there's lack of cloud security architecture and strategy we really need

to implement security from the design phase and we need to remember our obligations in the shared responsibility model so whether we're implementing an

IaaS or PaaS or leveraging a SaaS service remembering where our security responsibilities begin and end then insufficient identity credential

access and Key Management so we're really talking about identity and access management here identity providers and a great solution here is developers can

leverage identity as a service rather than building their own resulting in stronger authentication and authorization controls quite typically and certainly a more mature approach

and in this case it's really about making good decisions as to where we want to innovate and where we want to leverage existing solutions that come in the cloud

account hijacking credential theft abuse or elevation to carry out an attack again using existing identity providers identity as a service for your app reduces risk

Insider threats here we leverage separation of Duties checks and balances in the release management process like those approval Gates I just mentioned

insecure interfaces and apis so failing to secure our apis we Implement access controls like role-based access control and for our apis we'd Implement access

Keys API keys and a weak control plane weaknesses in the elements of a cloud system that enable the environment configuration and management here continuous integration

and continuous deployment we're not deploying through the web console or from the command line it's codified automated and deployed through a pipeline and that brings us to threat modeling

which allows security practitioners to identify potential threats and security vulnerabilities it's generally used as an input to risk management and it can be proactive or reactive but in either

case the goal of threat modeling is to eliminate or reduce threats there are three approaches to threat modeling three common approaches I'll

call it there's the asset focus which uses asset valuation results to identify threats to the valuable assets then there's the attacker focus identifying

potential attackers and identifying threats based on the attacker's goals or a software focus considering potential threats against software the

organization develops so either assets attackers or software so now let's look at some common threat modeling approaches

first we have stride which was developed by Microsoft and it stands for spoofing tampering repudiation information disclosure denial of service and

elevation of privilege so going back to the three approaches it's focused on potential threats against software the organization develops then we have the dread model which is

based on the answer to five questions what is the damage potential the reproducibility exploitability affected users and

discoverability so dread is looking more from an attacker's perspective what is the potential damage can it be reproduced is

it a vulnerability that can be exploited and who would our affected users be what would the scope of the exploit be and how discoverable is the

vulnerability if it can't be discovered it can't be exploited and therefore we'll worry less about it next up we have the pasta model which focuses on developing countermeasures

based on asset value the third of those three approaches to threat modeling pasta involves seven stages we have definition of objectives

then definition of the technical scope app decomposition and Analysis stage four threat analysis stage five weakness and vulnerability

analysis stage six attack modeling and simulation and Stage seven risk analysis and management so pasta will bring us to answers to

many of the same questions we saw in dread damage potential exploitability discoverability just coming at it from a different angle
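the five DREAD questions are commonly turned into a numeric risk score by rating each factor, often on a 1-to-10 scale, and averaging; a quick sketch, with the ratings themselves being hypothetical:

```python
# Rating each DREAD factor for a hypothetical vulnerability (scale 1-10)
dread_ratings = {
    "damage_potential": 8,   # what is the damage potential?
    "reproducibility": 6,    # can the attack be reliably reproduced?
    "exploitability": 5,     # how easily can it be exploited?
    "affected_users": 9,     # who and how many users are affected?
    "discoverability": 7,    # how easy is the flaw to discover?
}

# The average of the five factors gives a single comparable risk score
risk_score = sum(dread_ratings.values()) / len(dread_ratings)
print(risk_score)  # → 7.0
```

scoring each candidate threat this way lets a team rank findings and spend remediation effort on the highest averages first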

and finally we have atasm and This Acronym stands for architecture threats attack surfaces and mitigations so the process begins with analysis of the system's architecture this is followed by an examination of threats listing all

possible threats threat actors and their goals and then examining attack surfaces identifying components that are exposed to attack

and finally mitigations analyzing the existing mitigations in place those security controls already protecting the system to determine if they are indeed adequate now the thing about atasm is

it's not actually a threat model itself it's a series of process steps for performing threat modeling and because it is not actually a threat model itself

it can be used with threat models like stride dread and pasta it's some really Common Sense thinking now we're going to talk a bit about how

to avoid common Cloud vulnerabilities and like all risk mitigations a layered approach combining multiple types of controls as a best practice including training and awareness of our developers

which is critical because they make decisions about how to design and implement system components awareness of common flaws like injection attacks for example helps prevent coding mistakes and in

the next section when we talk about secure coding we'll take a look at some artifacts from organizations like owasp and sans that can help us in training our developers around awareness of those

common flaws a documented process the secure sdlc should be well documented and communicated to all team members designing developing and

operating systems it's similar to security policies and it has to be understood and followed by our developers test driven development focusing on

meeting acceptance criteria can be one way of simplifying the task of ensuring that security requirements are met having well-defined test cases for security requirements can help avoid

vulnerabilities like the owasp top 10 application security risks this ensures developers know what tests will be conducted against their code and common Cloud vulnerabilities are

well known and they're documented in lists like the owasp top 10 list and the CSA egregious 11. and we can use these to build our well-defined test cases
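as a sketch of what such a list-driven well-defined test case might look like, here's a hypothetical allow-list validator exercised against probe payloads drawn from the attack categories those lists document (the validator, the field rules, and the payloads are all illustrative assumptions):

```python
import re

def is_valid_username(value: str) -> bool:
    """Allow-list validation: letters, digits, underscore, 3-20 chars."""
    return re.fullmatch(r"[A-Za-z0-9_]{3,20}", value) is not None

# test cases derived from documented vulnerability categories:
# each pair is (payload, should_be_accepted)
cases = [
    ("alice_01", True),                    # legitimate input
    ("' OR '1'='1", False),                # classic SQL injection probe
    ("<script>alert(1)</script>", False),  # cross-site scripting probe
    ("../../etc/passwd", False),           # path traversal probe
]

results = [is_valid_username(payload) == expected for payload, expected in cases]
```

the point is that developers know up front exactly which hostile inputs their code will be tested against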

these documented vulnerabilities can guide what we're testing against and testing for let's shift gears and talk about secure coding

so this is the practice of Designing systems and software to avoid security risks it's essentially a proactive risk mitigation practice we are through

secure coding avoiding potential security risks there are multiple organizations out there that work to mature secure coding practices the syllabus calls out

three of these owasp which produces the cloud native application security top 10 list and their better-known and older

top 10 web application security risks the exam also mentions the Sans top 25 as well as safe code standards and we'll touch on all of these so let's start

with the owasp top 10 web application security risks which is an awareness document that represents a broad consensus about the most critical

security risks to web applications so quickly running through these 10 in order we have broken Access Control cryptographic failures injection

insecure design security misconfiguration you'll notice that some of these are really self-explanatory vulnerable and outdated components identification and

authentication failures software and data Integrity failures security logging and monitoring failures and server-side request forgery which is

a pretty common or at least a well-known attack two things about this list number one it changes from year to year so you'll see

these risks moving up and down the list or potentially off the list as they become less common for example server-side request forgery was higher up this list five years ago

for the exam be familiar with the meaning around these Concepts don't worry about memorizing these specific items or their order

now let's move on to the owasp cloud native application security top 10 the primary goal of this draft is to provide assistance and education for

organizations looking to adopt Cloud native applications securely and in this top 10 we have insecure Cloud container and orchestration configuration

injection flaws improper authentication and authorization CI CD Pipeline and software supply chain flaws

insecure secret storage over permissive or insecure Network policy so you'll notice again many of these are self-explanatory using

components with known vulnerabilities improper assets management inadequate compute resource quota limits and ineffective logging and monitoring

and for the exam you want to be familiar with the common Solutions around these problems and best practices which are all covered in this series so there's really not any real additional reading that you need to worry about

here so moving on to the sans top 25 most dangerous software errors this list is not specific to Cloud native

environments like the owasp cloud native app security draft we just looked at and because it's not specific to Cloud native environment it seems to get a bit

less attention in the official study guide and in the common body of knowledge but we're going to cover it anyway for reasons that'll be apparent in a moment you're going to see some

patterns developing here across these various standards so going through the top 25 these do change from year to year in

this list and for the exam you'll want to know the attack types don't memorize this list it's an even longer list don't memorize the list but I'm going to call

out the attack types for you here and we'll cover those at the end so when we get to the end of all of these mini lists you'll be ready for exam day so from number one to number 25 we have out

of bounds write which is a type of buffer overflow attack there's an improper neutralization of input a cross-site scripting attack

number three is a SQL injection number four improper input validation input validation actually prevents injection attacks if I validate the

input to a form it prevents a SQL injection for example out of bounds read is another buffer overflow type attack improper neutralization of special

elements that's an operating system command injection so another injection attack use after free another buffer overflow type attack

improper limitation of a path name that's a path or directory traversal we'd call it now this Sans top 25 list uses the

common weakness scoring system or cwss by the by so continuing down the list here number nine cross-site request forgery pretty widely explained attack

less common than it used to be unrestricted upload of file with dangerous type null pointer dereference deserialization of untrusted data on

number 12. number 13 is integer overflow or wrap around what I will tell you about numbers 11 through 13 is input validation of various sorts will fix all

three of these the type of input you're validating varies but input validation is helpful in all of these number 14 improper

Authentication number 15 use of hard-coded credentials then missing authorization then improper neutralization of special

elements another injection attack and then rounding out this page missing authentication for critical function so you'll notice authentication and authorization as a theme surfacing here

so knowing the best practices there will be important now that cwss that scoring system is actually composed of three scores I should mention there's the base finding

score the environmental score and the attack surface score I'm not sure the exam is going to get that deep but I wanted to call those out just so you have them top of mind and finishing out

the top 25 list we have improper restriction of operations within the bounds of a memory buffer that's another buffer overflow attack incorrect default permission so

identity and access related let's call it server side request forgery another fairly well-known attack less common than it used to be that's actually on

the owasp list you may remember concurrent execution of shared resource with improper synchronization a race condition as this is called another type of attack you should be familiar with

number 23 uncontrolled resource consumption so this would be a denial of service attack improper restriction of XML external entity reference so number 24 isn't a

directory traversal attack but it does result in some unauthorized local file access basically and number 25 on the list improper control of generation of code another

injection type attack so I want to just recount for you some attack types and Concepts here some themes if you will so we have injection

attacks buffer overflow attacks directory or path traversal denial of service or distributed denial of service race condition

and then authentication and authorization so an understanding of the attack types should be enough for exam day and you saw these themes across all of these

lists to varying degrees the authentication and authorization is really more a matter of knowing best practices and we're going to touch on authentication and authorization from

multiple directions in this course and in this series so I don't think you have much to worry about there but what I would like to do is just in case you're not familiar with these

types of attacks I'm going to just walk you through a simple explanation of numbers one through five so in case you haven't been exposed to these in your work life you'll have a good

foundational knowledge so let's start with injection attacks improper input handling is really the root cause here these are used to compromise web front ends and back-end databases SQL

injection is the most widely known that's where the attacker uses unexpected input to a web application to gain unauthorized access to an underlying database

it's not new it can be prevented through good code practices essentially the countermeasure is input validation and using prepared statements like stored

procedures rather than allowing SQL queries to be generated from that front-end web application we don't allow the front-end web application to issue

select statements instead we limit that front-end web application to calling stored procedures which are pre-compiled

and allow us better control but input validation is the key now another I mentioned was buffer overflows these are attacks used to exploit poorly

written software this exists when a developer does not validate user input to ensure that it's an appropriate size for example it allows an input that is too large to

overflow a memory buffer but you see a theme here right we can prevent this with input validation another attack type is directory traversal we saw on the list gaining

access to restricted directories if an attacker can gain access to restricted directories through HTTP that's what we call a directory traversal or path

traversal attack one of the simplest ways to perform directory traversal is by using a command injection attack that carries out the action and if successful

it may allow the attacker to get to the site root directory most vulnerability scanners will check for weaknesses with directory traversal

or command injection and inform you of their presence so to secure your system you should run a vulnerability scanner and keep the web server software patched it's as simple

as that another type of attack we saw called out in the list was a resource consumption attack or a denial of service there's denial of service which is a resource

consumption attack intended to prevent legitimate activity on a victimized system it essentially consumes all of the processing and memory resources on

that system there's a distributed denial of service attack or DDOS and that's a Dos attack utilizing multiple compromised systems as sources of attack

the distinction here is the distributed denial of service or DDOS involves multiple systems where denial of service may involve just a single system your countermeasures here are several

good firewalls routers intrusion prevention systems security information event management so we can identify malicious activity more quickly across

our estate disabling broadcast packets entering and leaving our trusted Network disabling Echo replies the Ping echo or

icmp Echo reply and keeping our systems patched so many of these attacks exploit known vulnerabilities that just patching systems goes a long way but denial of service and distributed

denial of service represent a class of attacks these aren't specific attacks there are many variations of denial of service and distributed denial of service so we call these a class of attacks and we've already discussed in

this series how csps provide some built-in protections or some premium tier capabilities to protect against distributed denial of service attacks we're really approaching it from a more

fundamental perspective here as some of those lists amongst owasp and Sans are not specific to Cloud Computing and certainly in a hybrid

Cloud scenario one would want to know how to protect against these sorts of attacks and the last attack type I want to cover with you is the race condition this is where the system's behavior is

dependent upon the sequence or timing of other uncontrollable events a common race condition is the time of check to time of use condition this is a timing vulnerability that

occurs when a program checks access permissions too far in advance of a resource request the problem occurs when the state of the resource changes between the time the

application checks the state of the resource and the time the application or service attempts to use that resource a great example is file locking the application checks the file to ensure

it's not locked so it can write to the file presumably and then in the time between the check and the attempt to actually access and write to the file the state of the file has changed

because some other process has locked that file so to the degree this becomes undesirable that becomes a bug in your

software and a developer can avoid that time of check to time of use problem by really checking the state of the resource it wants to access just in time

and I want to talk about the safe code standard the last that's called out in the official exam syllabus safe code was first published as fundamental practices

for secure software development it's informed by existing models including owasp the Microsoft software development lifecycle and others and it's designed to help the software

industry adopt and use these best practices effectively safe code itself covers topics like software design secure coding practices testing

validation third-party risks handling vulnerabilities I will say safe code is certainly the least well known of these standards it was last updated in 2019

and in my opinion fairly unlikely to appear on the ccsp exam and certainly not in more than one question if it does now we'll just switch over to a browser

and take a quick look at the sans owasp and safe code standards on the web so you'll know them when you see them I'll have links in the PDF that you download with this course now here's sans.org the

top 25 most dangerous software errors you see a rank you see the name and in the middle here you have a link to read about the details again the reading of

the details here is absolutely unnecessary if you've listened to my video here and next we have the owasp cloud native

application security top 10 so this one was last updated in April 2022 and then we have the owasp top 10 web application

security list and you see here they even give you some visualization of how these risks have moved up and down the list over time and then last on the list was

safe code so here is the fundamental practices for secure software development this is a PDF you'll find from safe code again at the end of this

section in the PDF you will find links to all of this additional reading if you like now we want to shift gears and talk through some foundational concepts related to software configuration management and

versioning that's called out in the syllabus so let's talk about code repositories for a moment this is where the source code and related artifacts like our libraries are stored this is

where our infrastructure as code is stored so how do we handle source code securely well we don't commit sensitive information we're not keeping secrets on

disk for example we protect access to our code repository so we have authentication and authorization gating access to our code repositories

so protected access means we're only going to allow certain people to commit code to those repositories to those repos as we call them signing your work so typically code

signing is assumed for your third-party commercial software vendors if you look at any application in Microsoft Office for example you'll see that it is signed

with a Microsoft certificate keep your development tools your IDE up to date most code repositories today use get virtually everyone is using git it's the

most widely used modern version control system created by Linus Torvalds incidentally the creator of Linux also created git the IDE if you're not familiar is

integrated development environment so that would be a tool like vs code which is the most common IDE today in use anywhere in the world

so let's move on to configuration and change management so configuration management ensures that systems are configured similarly and

configurations are known and documented this is where baselining comes in that ensures our systems are deployed with a common baseline or starting point and imaging is a common baselining method

for example in IaaS but that baselining exercise can carry over to a number of technologies we could baseline our configuration in a containerized environment for example

change management helps reduce outages or weaken security from unauthorized changes so versioning is a way we can track our

software lineage it uses a labeling or a numbering system to track changes in updated software versions now how folks do this differs there are many approaches

what I often see is a major version a minor version and a patch version so like 23.05.02 major minor patch

but all of these practices can help us prevent the shift and drift that results in security related incidents and outages so I want to dig into baselining just a bit further for a moment so

baselining is how we track how our systems are set up not just our software our applications but our virtual infrastructure as well that Baseline is effectively a snapshot of a system or an

application at a given point in time it is our starting point this process should also create artifacts that can be used to help understand our system configuration and probably include some

metadata and we also want system and component level versioning for example if I have a VM that requires a Java Library I want to make sure that

at a system level I know what that operating system is but I also know what that Java library is you know how we capture that depends on the type of underlying compute we're dealing with the infrastructure but at

the end of the day our applications depend on compute resources and other software components there are many layers between the wire between the network and that application

user interface that our end user is working in and baselining is going to help us capture all of the details in between those two points the last thing I'd like to talk to you

about here in section 4.3 is the software bill of materials which is an emerging strategy and standard in tracking software versions the sbom lists all of the components

in an application or a service including open source libraries or proprietary code libraries for that matter think of it as a full inventory of the components of an application and we really started

seeing this get a lot of attention after Solorigate where solarwinds had a breach that reached all the way into their source code but the sbom is mentioned briefly in the

official study guide it's not mentioned in the exam syllabus you may see a question on there I doubt it though but do expect that the sbom is something you will see more and more of

in the future there are multiple standards coming together and you'll see production of a software bill of materials integrated into pipelines as a future standard it will be fully integrated into the CI CD

process now you know what it is and that should be enough for exam day so moving on to section 4.4 apply Cloud software assurance and validation so here we're

going to talk about testing we'll talk about functional versus non-functional testing security testing methodologies like black box white box static and dynamic

testing and one you may not have heard of before the interactive application security test we'll talk about quality assurance and abuse case testing

before we delve into testing methodologies I want to talk for a moment on environment so secure environments for development testing and staging before moving an application into production are absolutely necessary

and environments map to phases of application development debugging testing and ultimately release so the development environment is where an application is initially coded often

through multiple iterations that's where agile comes in we iterate quickly through our sprints testing is where developers integrate all of their work into a single application so we may have developers

working independently in the development environment they need to roll their code together in testing where we can then integrate that work into a single application regression testing to ensure

functionality as expected for example can happen here and we have the staging environment where we ensure quality assurance before we roll out to production

QA happens here at minimum QA may happen before this phase but certainly in the staging environment we're going to see some QA happening often taking the form of what we'd call

uat or user acceptance testing making sure that application functions exactly as we expect in production or the changes to the application then finally production where the

application goes live and end users have the support of the IT team so let's talk testing so functional and non-functional testing

so functional testing determines if software meets functionality requirements defined earlier in the secure software development life cycle and it takes multiple forms including integration

testing that validates whether components work together regression testing that validates whether bugs were reintroduced between versions

and user acceptance testing which tests how users interact with and operate the software functional testing focuses on specific features and functionality that's

important now let's compare that to non-functional testing which focuses on the quality of the software it looks at software qualities like stability and performance

methods here include load testing stress testing recovery testing and volume testing it examines the way the system operates

as a whole not the specific functions so continuing down the thought path here functional and non-functional let's talk about functional versus

non-functional security requirements and we're revisiting this concept from domain one so what is the difference functional security requirements define a system or its components and specify

what it must do it's captured in use cases defined at a component level for example application forms must protect against injection attacks which we do

with what input validation non-functional security requirements specify the system's quality characteristics or attributes and again it applies to the whole system it's a

system level assertion an example here would be security certifications which are non-functional so if we're looking to see that an application is HIPAA compliant for

example or complies with PCI DSS that's looking at the quality of that application as a whole it's not a functional requirement again we touched on this in domain one I

just wanted to refresh your memory and bring that forward here in context so let's talk about static and dynamic testing so static application security

testing it's analysis of computer software performed without actually executing programs the tester has access to the underlying

framework design and implementation it requires source code access and then there's Dynamic application security testing which is where a program communicates

with a web application and it executes that application it's exercising the application the tester has no knowledge of the Technologies or the Frameworks

that the application is built on there's no requirement for source code access so we say static testing test the application from the inside out

looking at the source code dynamic application security testing test the application from the outside in exercising the application in Live use

throwing unexpected input at forms for example to make sure that it protects against injection attacks next we have white box testing which is conducted with full access to and

knowledge of systems code and environment static application testing is one example of white box testing remember that required source code access

and then we have black box testing which is conducted as an external attacker would access the code the systems and the environment the tester has no knowledge of any of these elements at

the outset of a test so obviously no source code required there sometimes white box testing is called full knowledge testing and black box testing is referred to as zero knowledge

testing next we have interactive application security testing iast which analyzes code for vulnerabilities while it is being used

it focuses on real-time reporting to optimize testing and analysis processes so unlike static and dynamic testing iast analyzes the internal functions of

the application while it's running so it's testing the application from the outside in but analyzing those internal functions at the same time it's often built into CI CD automated release

testing and then we have software composition analysis which is used to track the components of a software package or application

and it's a special concern for apps built with open source software components because open source components often

involve reusable code libraries and SCA tools are going to identify flaws and vulnerabilities that are included in these components and are going to ensure we're working with the latest versions

it's really automated and combines application security and Patch management so to speak it's making sure that we're using the latest versions of

the libraries and we're not exposing ourself to vulnerabilities unnecessarily by not working on the latest version or potentially working with the latest

version that has a known vulnerability and then there's quality assurance which is responsible for ensuring that the code delivered to the customer through the cloud environment is quality code

defect free and secure from a process perspective it's frequently a combination of automated and manual validation testing techniques it typically involves reviews testing

reporting and other activities to complete the QA process so it's going to be a combination of people and process the goal of QA is to ensure software

meets standards or requirements because devsecops preaches security is everyone's responsibility the role of QA is significantly expanded in a devops or devsecops team and it tends to be

embedded throughout the development process because security is an element of quality and in devsecops where security is everyone's responsibility we shift left and we start looking at

security from the very beginning of the software development life cycle and QA should be involved in many testing activities including load and performance tests stress testing as well

as vulnerability management so QA testing is looking at functionality performance and security so what is an abuse case well an abuse

case is a way to use a feature that was not expected by the implementer allowing an attacker to influence the feature or outcome of use of the feature based on

the attacker's action or input it describes unintended and malicious use scenarios of the application describing how an attacker could do this and an

abuse case test takes that abuse case and puts it into action it focuses on using features in ways that weren't intended by the developer it may exploit

weaknesses or coding flaws from the perspective of multiple personas malicious user abusive user and even an unknowing user it can help organizations

to consider security features and controls needed for an application in fact owasp provides an abuse case cheat sheet in their cheat sheet series

at owasp.org that we've looked at earlier in this series but testing generally focuses on documented abuse cases and those test cases could come from a

number of sources including our outputs from threat modeling where we're looking at the vulnerabilities of our services and the attack surface

that brings us to section 4.5 use verified software here we'll talk about securing application programming interfaces apis we'll touch on Supply Chain management

and vendor assessment and vendor assessment traditionally versus in the cloud third-party software management and validated open source software

so by the by if you're using the official study guide these last two come from chapter five of the official study guide beyond that the entirety of domain

4 is covered in chapter six of the official study guide so let's talk about apis we have soap and rest so an API is a set of exposed

interfaces that allow programmatic interaction between services that means no user or human involved soap is a standard communication protocol that uses XML

rest is an architectural model that uses https for web Communications to offer API endpoints and your security features from the CSP include API Gateway

functionality authentication IP filtering throttling quotas data validation generally offered through those PaaS services that I've mentioned in the past

in previous domains in this series so you have API Management in Azure you have the API Gateway offering in AWS you also need to make sure that you have a

plan for storage distribution and transmission of your API access Keys those need to be maintained in a secure fashion both at rest and when they're

being transmitted between parties for whatever reason so let's talk supply chain today most services are delivered through a chain of multiple entities

a secure supply chain includes vendors who are secure reliable trustworthy and reputable due diligence should be exercised in assessing vendor security

posture business practices and reliability this may include periodic attestation requiring vendors to confirm continued implementation of security practices I

can tell you firsthand this typically happens on an annual basis at least we'll typically see an annual vendor survey for Microsoft as well as

customers in regulated Industries and a vulnerable vendor in the supply chain puts not only the organization at risk but potentially other members of the supply chain so let's examine vendor

assessment in our supply chain from a couple of perspectives so traditional vendor evaluation would occur through a number of different options so on-site assessment is an option visiting the

organization interviewing personnel and observing their operating habits so in extremely sensitive scenarios or where human safety is involved that's

certainly an option we'd see exercised document exchange and review investigating the data handling and document exchange processes and policy reviews where we request

copies of their security policies processes and procedures to make sure that the security they attest is in place is actually in writing

and then third-party audit having an independent auditor provide an unbiased review of an entity's security infrastructure now these are all options

a CSP like AWS or Azure or Google might use to evaluate a vendor now in the cloud companies with hundreds or thousands of

customers like our csps cannot support direct vendor assessment so we can't perform the same type of review of these

major csps instead we're going to review audit and certification reports provided by the CSP directly these could include a third party audit a review of an

independent Auditor's unbiased review of an entity's security infrastructure provided to us by the CSP we could also review a SOC 2 Type 2 report we could

look at the ISO/IEC 27001, 27017, and 27018 reports to verify the efficacy of the CSP's physical and logical security controls

and as we saw in domain 3 your major csps generally make these reports available to customers for review on demand we went to one of the CSP portals and actually retrieved these reports we

did have to sign an NDA to get to that SOC 2 Type 2 report but as a customer that was very possible

but this is how we evaluate a cloud vendor like a CSP or a SAS offering third-party software also adds additional risk to our organization a

third party may have limited access to your systems but will often have direct access to some portion of your data if you think about Office 365 applications

storing our documents in SharePoint and OneDrive that's absolutely true limited system access but direct access to our data typical issues addressed in

software vendor assessment would include where in the cloud is the software running is this on a well-known CSP or does the provider use their own cloud service that's going to give us

some insight into reliability or at least raise additional concerns potentially to go assess this vendor further is the data encrypted at rest and in transit and what encryption

technology is used not only are they encrypting the data but have they made good decisions in terms of encryption algorithm selection
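One of those assessment questions, "have they made good decisions in encryption algorithm selection," can be checked on our own side of the connection too. A minimal sketch using Python's standard `ssl` module, showing the client-side knobs that refuse legacy protocol versions (the function name is my own, not from any vendor toolkit):

```python
import ssl

def make_strict_client_context() -> ssl.SSLContext:
    """Build a TLS client context that refuses legacy protocol versions.

    Illustrative only: a real vendor assessment probes what the *server*
    supports; this shows the client-side enforcement settings.
    """
    ctx = ssl.create_default_context()            # secure defaults: cert + hostname checks
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject SSLv3 / TLS 1.0 / TLS 1.1 outright
    return ctx

ctx = make_strict_client_context()
print(ctx.minimum_version >= ssl.TLSVersion.TLSv1_2)  # True
print(ctx.verify_mode == ssl.CERT_REQUIRED)           # True
```

Connecting with this context to a service that only offers TLS 1.0 simply fails, which is exactly the signal an assessor wants.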

how is access management handled access management is going to be tied to our identity provider there so really what is the identity and access

management system that service runs on for example with Office 365 we're talking about Azure active directory with Google Cloud platform we're talking about Google's own identity platform so

identity as a service often makes us feel a bit better a bit more comfortable what event logging can you receive we talked about the level of logging across different

Cloud deployment models and we know we're going to have greater access to logs in areas where we have greater responsibility so our log access is greater in IaaS than it is in PaaS and it's

greater in PaaS than it is in software as a service and what auditing options exist and we just talked about auditing in the cloud and we know that even when we are working with a vendor that has many

hundreds of thousands of customers and we can't audit directly generally speaking we're going to have access to their audit documentation and assertions in the form of SOC 2 Type 2

reports and the like the focus here is on risk to data security so let's talk open source software versus proprietary open source is one where the vendor makes the license

freely available they allow access to the source code though they may ask for an optional donation in exchange for access to source code there is no vendor support with open

source so you might pay a third-party company to support in a production environment we definitely see this with certain flavors of Linux another example is one of the more

popular open source firewalls pfSense now proprietary on the other hand tends to be more expensive but tends to provide more and better protection and more functionality and support albeit at

a cost so in the firewall example there are many vendors in this space including Cisco checkpoint Palo Alto barracuda and we don't get source code access in

the case of proprietary either so if we're using open source software we want to use validated open source software it must be validated in a business environment so we have some

risk when it comes to open source and some argue that open source software is more secure because the source code is available for review I would say that we

can definitively prove through evidence that that is not guaranteed now adequate validation testing through sandbox testing vulnerability scans third-party

verifications all reduce our risk and more visibility into a problem can result in better outcomes but the transparency is not itself a guarantee

of security and I can cite two great examples here that would be open source projects like openssl and Apache which have contained serious vulnerabilities

in the past just because everyone's watching doesn't mean every security issue is identified and remediated and that brings us to section 4.6

comprehend the specifics of cloud application architecture in this section we'll cover supplemental security components we're going to talk about

flavors of firewalls or gateways in the cloud we'll talk about cryptography sandboxing as well as application virtualization and orchestration

we'll start with supplemental security components which are called out here in the syllabus beginning with a web app firewall often abbreviated as WAF which protects web applications by

filtering and monitoring HTTP traffic between a web app and the internet it typically protects web applications from common attacks like cross-site scripting cross-site request forgery SQL

injection attacks on the OWASP top 10 quite often and you'll often find these WAFs

include the default OWASP core rule set and this is fairly common in your csps we'll take a look at a WAF here in just a moment and then there's the XML firewall which

is used to protect services that rely on XML based interfaces including some web apps it provides request validation and filtering rate limiting and traffic flow management I think the main thing you

need to remember here is that it Services XML traffic usually implemented as a proxy then we have database activity

monitoring abbreviated as DAM that's the associated acronym this combines network data and database audit info in real time to analyze database activity for

unwanted anomalous and unexpected Behavior it monitors application activity privileged access and it detects attacks through behavioral

analysis so there's some intelligence to this tooling and most of your csps offer some form of dam tooling as a service that you can enable
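The behavioral analysis idea behind DAM can be sketched very simply: learn a per-identity baseline of query volume, then flag activity that deviates from it. This is a toy stand-in for what real DAM products do (names and thresholds are my own illustration):

```python
from collections import Counter

def flag_anomalies(events, baseline, threshold=3.0):
    """Flag users whose query volume exceeds `threshold` x their baseline.

    `events` is a list of (user, query) tuples; `baseline` maps user -> the
    queries-per-window count considered normal for that user. A toy model
    of DAM behavioral analysis, not any vendor's algorithm.
    """
    observed = Counter(user for user, _ in events)
    return sorted(
        user for user, count in observed.items()
        if count > threshold * baseline.get(user, 1)
    )

events = [("app_svc", "SELECT ...")] * 4 + [("intern", "SELECT * FROM customers")] * 40
print(flag_anomalies(events, baseline={"app_svc": 5, "intern": 2}))  # ['intern']
```

The application service account stays within its norm, while the unusual burst of queries from a low-activity identity is surfaced for review, which mirrors the "unwanted anomalous and unexpected behavior" detection described above.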

and finally we have the API Gateway which provides traffic monitoring for your application Services exposed as API endpoints it provides authentication and key

validation services that control access to your apis in the cloud your csps Amazon offers the API Gateway Azure offers the API Management Service both

of those are PaaS offerings that I have alluded to previously in the series as providing a number of services to secure your apis okay so I'd like to do some quick Show

and Tell here this is a good opportunity we're going to look at a web app firewall in a CSP and examine the OWASP core rule set we'll just switch over to a browser here

and I'll start at the owasp.org website I wanted to point you to the OWASP core rule sets here it's just a set of generic attack detection rules for use with compatible web application

firewalls so it gives you a broad range of protection against many of those attacks defined in the owasp top 10 injection attacks cross-site scripting

cross-site request forgery just to name some of those more popular now I will switch over to my portal here so I'm at

portal.azure.com looking at the Microsoft Azure portal and I've drilled down to a web application firewall instance and if I look at managed rules

I'll see here that by default they have implemented the OWASP 3.2 core rule set and the way it tends to work in the Microsoft

scenario anyway is they support the last three versions of the OWASP core rule set it's going to vary by CSP but the point is you can deploy a firewall and by default it's going to have these

built-in web protections you can run your web traffic through here and feel pretty good about your security posture right out of the gate so just a very quick look but if you go

down the list here you'll see the rules in the set are numbered so you can go find some additional explanation for those rules out on the OWASP website if you're interested before the exam

just know that's really one of the key value propositions of a web app firewall and finishing up our discussion on firewalls obviously any time we're in an Internet connected environment the

firewall is important for filtering incoming traffic in a perimeter network but that web app firewall the WAF is one that you're going to find is very common it's really the most popular

firewall generally speaking there are a couple of reasons for that one is cost it meets a common need it's easy to configure it comes with those default OWASP core rule sets and it's less

expensive than the more function rich Next Generation firewalls and secure web Gateway firewalls now we will see those more feature-rich

firewalls implemented as well but for other reasons now the need for Network segmentation for example should be supported with appropriate traffic filtering and restriction with the firewall that's

most appropriate for the use case and when we get into heavier traffic filtering and cross-cloud scenarios or hybrid scenarios that's where you're going to step up to a Next

Generation firewall it'll have advanced functionality centralized policy capabilities real-time threat Intel connected to that device in fact

real-time threat intelligence is part of what makes a Next Generation firewall a Next Generation firewall and that firewall can filter traffic between our virtual networks and the

internet or our virtual networks and our corporate Network the common theme there is segmentation when we think about zero trust Network architecture it focuses on micro

segmentation and you'll find your firewall functionality maps to different layers in the OSI model so we have a seven layer OSI model a network firewall works at

layer 3 of the OSI model stateful packet inspection at layers 3 and 4 and many of your Cloud firewalls like the web app firewalls work at layer 7 of the

OSI model which is the application layer now cryptography is mentioned in this section of domain four and it really touches on three areas data at rest data

in motion and key management and I've talked about Key Management endlessly throughout the series so I want to just say a couple of words on data at rest and data in motion so data at rest

we need to encrypt our storage accounts and your CSP storage providers usually protect data at rest at the account level by automatically encrypting before

persisting it to manage disks blob storage file or queue storage but you have a default layer of encryption what you'll find sometimes is they offer an additional layer of

encryption what we call Double encryption so the customer can hold the keys it enables crypto shredding secure deletion in an environment where you don't own the physical storage medium we

talked about this earlier in the series and then full disk encryption this is BitLocker or dmcrypt on the Linux platform if you've ever worked in the windows ecosystem a Windows desktop

you're probably familiar with BitLocker you'll find csps offer this capability in their IaaS model when you're working with VMs then there's transparent data encryption for SQL databases and data warehouses

that give us protection against the threat of malicious activity with real-time encryption and decryption of the database backups and the transaction

log files at rest without requiring app changes these features generally include a customer managed key option and I use that acronym SQL generically

here you'll find transparent data encryption is available on more than just the Microsoft SQL platform as you look at MySQL and PostgreSQL how do we encrypt data in motion well

there are a couple of most common methods the first being transport layer security over HTTP so https and in hybrid Cloud scenarios and

cross-cloud connectivity we often see traffic tunneled over a VPN connection you'll sometimes hear TLS referred to as

SSL the terms are used interchangeably but TLS replaced SSL a long time ago shifting gears let's talk about sandboxing for a moment placing systems

or code into an isolated secured environment where testing can be performed Cloud sandboxing architectures often create independent ephemeral environments for testing and serving

multiple purposes we can enable patch and test scenarios ensuring a system is secure before putting it into a production environment it also facilitates investigating dangerous

malware and you'll see that in a couple of scenarios in email protection you'll see email attachments and URLs delivered in

messages detonated in a sandbox before they are delivered to the recipient and you'll also find some of your xdr functionality your desktop protections

your endpoint protections that will have some sort of sandboxing scenario where you can isolate a potentially infected node and investigate before restoring full network connectivity
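The core sandboxing idea, run something untrusted in an isolated environment with a hard limit before letting it near production, can be sketched with the standard library. This only shows process isolation plus a time budget; real cloud sandboxes add ephemeral VMs or containers, no network, and restricted filesystems (the function name is my own):

```python
import subprocess, sys

def run_sandboxed(code: str, timeout: float = 2.0):
    """Run untrusted Python in a separate process with a hard timeout.

    A minimal sketch of the sandboxing concept only, not a security
    boundary by itself.
    """
    try:
        proc = subprocess.run(
            [sys.executable, "-I", "-c", code],  # -I: isolated mode, ignores env and user site
            capture_output=True, text=True, timeout=timeout,
        )
        return proc.returncode, proc.stdout.strip()
    except subprocess.TimeoutExpired:
        return None, "killed: exceeded time budget"

print(run_sandboxed("print(2 + 2)"))                 # (0, '4')
print(run_sandboxed("while True: pass", timeout=1))  # (None, 'killed: exceeded time budget')
```

The misbehaving sample is killed when its budget expires, which is the same detonate-and-observe pattern email attachment sandboxes apply before delivery.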

and at the end of the day sandboxes provide an environment for evaluating the security of code without impacting other systems as well we'll round out section 4.6 with a talk

about app virtualization and orchestration so let's start with a quick refresher on containerization and I'm thinking about Docker and kubernetes here where one of your key value

propositions is containers do not have their own OS so we get greater density in our virtualization infrastructure and we can use containerization in some cases to

isolate existing applications developed to run in a VM with a dedicated operating system containerization can fool an application basically into thinking it has its own kernel when in

fact it doesn't it's sharing so in terms of hypervisors you remember we have the type 1 hypervisor which is the bare metal hypervisor VMware ESXi

Microsoft Hyper-V what you'll generally be dealing with in a CSP scenario and our VM runs atop that hypervisor

each VM has its own OS kernel and memory which results in more overhead with a container host which is a virtual machine typically

we have containers running in that single operating system so the containers are isolated but share the OS kernel as well as binaries and libraries where possible so we're

getting greater density essentially so core components in a container platform we have the orchestration or scheduling controller Network and storage a container host container

images which are functionally parallel to what we'd see with a VM template and your container registry where we store our container images the isolation is

logical isolating processes compute storage Network secret and management plane but kubernetes is a container orchestration platform for scheduling

and automating the deployment management and scaling of containerized applications and all your major csps have a managed kubernetes flavor your

container hosts are the cloud-based virtual machines this is where the containers run most of your csps offer that hosted service it's a PaaS offering and you only

pay for the agent nodes within your cluster you don't pay for the kubernetes management cluster that's why it's a managed kubernetes environment and your

major csps generally offer a monitoring solution that will identify at least some potential security concerns in the environment and those offerings are AKS in the

Microsoft World EKS on the AWS platform and GKE in Google Cloud platform and finishing out 4.6 we have Cloud orchestration which allows a customer to

manage their Cloud resources centrally in an efficient and cost-effective manner the intent of reducing effort cost and complexity and this is going to be very important in a multi-cloud

environment and we're seeing more customers move to a multi-cloud stance to reduce their risk exposure and management of the complexity of

corporate Cloud needs really only increases over time as more and more workloads move to the cloud and we see more and more multi-cloud scenarios but it allows the automation of workflows

the management of account in addition to the deployment of cloud and containerized applications and it implements Automation in a way that manages cost and enforces corporate

policy in and across clouds your major csps offer orchestration tools that work on their platform and the third parties offer multi-cloud orchestration

Solutions in fact your csps will offer orchestration capabilities for customers as well as their Service Partners who manage environments for multiple customers
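At its core, orchestration is declarative: you state the desired set of resources and the tool computes which create, update, and delete actions bring the actual environment into line. A toy sketch of that reconciliation loop (resource names are made up; real tools like Terraform or ARM/Bicep work on far richer models):

```python
def plan_changes(desired: dict, actual: dict):
    """Compute the create/update/delete actions an orchestrator would apply.

    Resources are modeled as name -> config dicts; this is a conceptual
    illustration, not any specific tool's plan format.
    """
    create = sorted(set(desired) - set(actual))
    delete = sorted(set(actual) - set(desired))
    update = sorted(n for n in set(desired) & set(actual) if desired[n] != actual[n])
    return {"create": create, "update": update, "delete": delete}

desired = {"web-vnet": {"cidr": "10.0.0.0/16"}, "waf": {"ruleset": "OWASP-3.2"}}
actual = {"web-vnet": {"cidr": "10.0.0.0/16"}, "legacy-vm": {"size": "small"}}
print(plan_changes(desired, actual))
# {'create': ['waf'], 'update': [], 'delete': ['legacy-vm']}
```

Running the same desired state against multiple clouds is essentially what multi-cloud orchestration products automate, along with enforcing corporate policy on every change.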

and that brings us to 4.7 design appropriate identity and access management solution so here we'll touch again on Federated identity identity

providers single sign-on multi-factor Authentication Cloud access security Brokers or CASBs and secrets management

so definitely topics we've seen before in the series there'll be a bit of refresher here but I'm going to interlace this with some new details let's start with Federation so Federation is a collection of domains

that have established trust the level of trust may vary but it typically includes authentication and almost always includes authorization in fact it has always included both in the scenarios

I've been exposed to and it often includes a number of organizations that have established trust for shared access to a set of resources for example you can Federate your

on-premises environment with Azure active directory and use this Federation for authentication and authorization this sign in method ensures that all user authentication occurs on premises

which allows administrators to implement more rigorous levels of access control for example historically we'd use Federation when certificate

authentication or key fob or a card based token was desired now some of this capability certificate authentication in particular is possible with some

identity as a service provider's Cloud identity providers but that was one of the core value propositions of federation when we wanted authentication and authorization to take

place on premises and we wanted to implement more rigorous methods now let's look at a federation scenario I think will resonate for you so we have a website that authenticates with identity

provider a let's say that's Azure active directory the identity provider of Office 365 and we have a user that authenticates with identity provider B let's say

that's their Twitter account so we configure Federation so identity provider a trusts identity provider B and identity provider B then has shared

access and this may be cloud or on-premises and trust is not always bi-directional but this sort of support for Federation is quite common in your identity as a

service providers in fact earlier in the series I showed you how easy it was to add identity providers like Facebook and Google and others to Azure active

directory it has some built-in support for social identities and on that note let's talk about identity providers so identity providers create maintain and manage identity

information while providing authentication services to applications for example Azure active directory is the identity provider for office 365.
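The essence of what an identity provider hands to an application is a signed assertion of who the user is, which the relying party verifies against the established trust. A self-contained sketch of that flow; real federations use asymmetric signatures via SAML or OIDC, but an HMAC shared secret keeps the illustration dependency-free (key and names are hypothetical):

```python
import base64, hashlib, hmac, json

# Shared secret established when the identity provider was federated/trusted.
TRUST_KEY = b"demo-federation-key"

def issue_assertion(subject: str) -> str:
    """Identity provider side: emit a signed token asserting the user's identity."""
    payload = base64.urlsafe_b64encode(json.dumps({"sub": subject}).encode())
    sig = hmac.new(TRUST_KEY, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def verify_assertion(token: str) -> str:
    """Relying party side: accept the identity only if the signature checks out."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(TRUST_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("untrusted assertion")
    return json.loads(base64.urlsafe_b64decode(payload))["sub"]

token = issue_assertion("alice@contoso.com")
print(verify_assertion(token))  # alice@contoso.com
```

Tampering with either the payload or the signature makes verification fail, which is why the application never needs to see the user's password, only the trusted provider's assertion.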

some of your other identity as a service options include Okta Duo OneLogin and on that topic of social identity

providers I mentioned social identity providers that support oauth like Google Facebook and apple are common in Federation scenarios think about all the cloud apps that you

subscribe to where you can log in with your Google account or your Microsoft account or your Facebook account and single sign-on is a concept where a

user doesn't have to log in to every application they use they essentially sign in once and they can use that credential for multiple applications single sign-on-based authentication

systems are sometimes called modern authentication and this is a very common user experience issue in Enterprise desktop scenarios we have to create a user experience so the user can log in

one time and more or less have access to all the applications installed on their desktop whether those are authenticating locally or to an active directory domain

or to a cloud identity provider like Azure active directory for your office suite multi-factor authentication so let's talk about MFA attack prevention for

example so we know multi-factor authentication is two or more methods combining something you know like a pin or a password with something you have like a trusted device something you are

so a biometric authentication method like fingerprints on most of your mobile devices out there and potentially facial recognition if you're using a fancier

smartphone but MFA is a preventative security control for multiple attacks so MFA can prevent phishing attacks

spear phishing keyloggers credential stuffing brute force and reverse brute force attacks man in the middle attacks in large part because we're tying that

second Factor back to something the user has or something that they are and most of your data loss your data breaches happen as a result of

credential theft and all of these attacks are used in credential theft in fact in talking to some of Microsoft's DART organization the folks who go out and help customers who have been

breached one of those Consultants told me that a lack of MFA is a causal factor in almost every security breach they've gone out to clean up and to dig a little

deeper in a concept we'd mentioned briefly before the CASB the cloud access security broker which enforces the company's data security policies

between on-premises and the cloud really anywhere a user is trying to access and share and store company data it can detect and optionally prevent data

access with unauthorized apps and data storage and unauthorized locations so if for example we wanted to prevent a user from accessing a Word document or a text

document with some third-party generic app we could stop that if we wanted to prevent storage in a third-party Cloud repository like box or Dropbox we could

Implement protections to prevent that it combines the ability to control use of services with data loss prevention and threat management features

it's often used in Enterprise scenarios where high levels of control and Assurance in Cloud usage are necessary now there are on-premises hybrid and

Cloud hosted models of CASB I really see Cloud hosted CASBs everywhere I look these days and just a quick reminder on Secrets management remember csps offer a cloud

service for centralized secure storage and access for application secrets a vault solution a secret is anything you want to control access to your API keys

passwords certificates tokens cryptographic Keys the service will almost always offer programmatic access via API to support devops and continuous

integration and continuous deployment your CI CD pipeline access control is generally offered at the Vault instance level as well as to

the secrets stored within the Vault your CI CD pipelines should leverage centralized storage of Secrets rather than hard-coded values or storage on

disks and Microsoft AWS and Google Cloud platform all have a vault for storing your secrets centrally and that does it

for domain four so moving on to domain five Cloud security operations and we'll begin with a look at the exam

Essentials those topics the official study guide promises will Factor on exam day we have how to ensure clustered host and guest availability so we'll touch on

resource scheduling and dynamic optimization to topics we haven't talked about yet in the series explaining the importance of security hygiene best

practices in particular here security baselines standard processes used for IT service Management in an organization

we will touch on roughly a dozen processes in this session change management continuity incident problem availability configuration

access control for local and remote system access here we'll discuss popular remote access options some of the finer points of security around those

network security controls as part of a cloud environment so we'll touch on a variety of network virtual appliances and security Concepts so intrusion

detection and prevention firewalls honey pots the role of the security operations center will dive into incident response and we'll dive into SIEM in this session

as promised as security information and event management is a core tool of the SOC and finally the role of change and configuration management so we'll get

into each of these and talk about how the two work together and how one influences the other so that brings us to 5.1 Implement and build physical and logical

infrastructure for a cloud environment so in this section we'll cover Hardware specific security configuration requirements and drill down on the HSM

and the TPM installation and configuration of management tools virtual Hardware specific security configuration requirements so we'll touch on a handful of topics

here some the responsibility of the CSP some the responsibility of the consumer assuming we're talking about public Cloud scenarios and installation of guest operating

system virtualization tool set so we'll start with the trusted platform module the TPM which is a chip that resides on the motherboard of the device

it's a multi-purpose device it handles functions like storage and management of keys used for full disk encryption Solutions like BitLocker like dmcrypt on the Linux platform

it provides the operating system with access to keys but it prevents Drive removal and subsequent data access you can certainly remove the drive but

without that TPM you're not going to access the data on that drive you'll also hear this called a cryptographic processor on occasion virtual TPMs are part of the hypervisor

and provided to VMs running on a virtual platform and unlike the HSM it is generally a physical component of the system hardware and it cannot be

added or removed at a later date the hardware root of trust now when certificates are used in full disk encryption they use a hardware root of trust for key storage it verifies that

the keys match before the secure boot process takes place the TPM is often used as the basis for that hardware root of trust it is usually that hardware root of trust
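The measurement side of a hardware root of trust boils down to a hash chain: each boot component extends a running value, loosely mirroring how a TPM PCR is extended, so any change anywhere in the chain changes the final measurement. A conceptual sketch only, not a real TPM interface:

```python
import hashlib

def measure(chain_value: bytes, component: bytes) -> bytes:
    """Extend the measurement: new = SHA-256(old || SHA-256(component)).

    Loosely mirrors TPM PCR extension; values here are illustrative.
    """
    return hashlib.sha256(chain_value + hashlib.sha256(component).digest()).digest()

def boot_measurement(components) -> str:
    value = b"\x00" * 32  # PCR-style all-zero starting value
    for c in components:
        value = measure(value, c)
    return value.hex()

good = boot_measurement([b"firmware-v1", b"bootloader-v1", b"kernel-v1"])
evil = boot_measurement([b"firmware-v1", b"bootloader-TAMPERED", b"kernel-v1"])
print(good != evil)  # True: any change in the chain changes the final value
```

Comparing the final value against a known-good measurement before releasing keys is the essence of verified or secure boot.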

and next we have the Hardware security module HSM this is a physical Computing device that safeguards and manages digital Keys performs encryption and decryption functions for digital signatures

strong authentication and other cryptographic functions it's like a TPM but often a removable or external device it's what I call a function specific

device it's not a component of a computer like a chip on a motherboard as a TPM is key escrow uses an HSM to store and

manage private keys cloud service providers all offer cloud-based HSM Solutions for customer managed key scenarios so the examples

there would include the dedicated HSM in azure Cloud HSM in AWS and Google KMS on the Google Cloud platform
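The customer managed key scenario those cloud HSM services enable is usually envelope encryption: a per-object data-encryption key (DEK) is wrapped by a key-encryption key (KEK) that never leaves the HSM. In this sketch XOR stands in for a real key-wrap algorithm such as AES-KW, purely to keep it dependency-free; do not use XOR as actual cryptography:

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    # Toy stand-in for a real key-wrap algorithm (e.g. AES-KW). NOT secure.
    return bytes(x ^ y for x, y in zip(a, b))

hsm_kek = secrets.token_bytes(32)     # key-encryption key: never leaves the (simulated) HSM
dek = secrets.token_bytes(32)         # per-object data-encryption key
wrapped_dek = xor(dek, hsm_kek)       # what gets stored alongside the encrypted data

unwrapped = xor(wrapped_dek, hsm_kek) # HSM unwrap operation at read time
print(unwrapped == dek)               # True

# crypto shredding: destroy the KEK and every wrapped DEK becomes unrecoverable
hsm_kek = None
```

That last step is why customer managed keys enable secure deletion on storage media you don't physically own, as discussed earlier in the series.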

software-defined networks may come up on the exam so this is a network architecture approach that enables the network to be intelligently and centrally controlled or programmed using

software an sdn enables us to reprogram the data plane at any time so if I can update the data plane using infrastructure as code or as security

conditions evolve this is going to be great for security for a micro segmentation strategy and a zero trust Network architecture
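Because the data plane is programmable, micro segmentation policy becomes data you can version and deploy like code. A default-deny sketch in the zero trust spirit, with made-up segment names for illustration:

```python
# Micro-segmentation policy as data: with SDN the data plane can be
# reprogrammed from definitions like this (segment names are hypothetical).
ALLOWED_FLOWS = {
    ("web-tier", "app-tier", 443),
    ("app-tier", "db-tier", 5432),
}

def permit(src_segment: str, dst_segment: str, port: int) -> bool:
    """Default-deny: only explicitly declared flows pass, per zero trust."""
    return (src_segment, dst_segment, port) in ALLOWED_FLOWS

print(permit("web-tier", "app-tier", 443))  # True
print(permit("web-tier", "db-tier", 5432))  # False: web may not reach the database
```

Pushing an updated flow table like this through a CI/CD pipeline is the infrastructure as code angle mentioned above.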

common use cases SD-LAN and SD-WAN so separating the control plane from the data plane opens up a number of security challenges what I'd say in short is the

sdn vulnerabilities by and large come from a malicious entity inside the network so an sdn is not really vulnerable from

outside the network but vulnerabilities can include a man in the middle attack or a denial of service attack and both of these would come from a

compromised endpoint on the network and because it's software based it supports CI CD infrastructure as code and micro segmentation implementation of virtual networks both

your public and private subnets are important elements of cloud network security we'd call that segmentation or micro segmentation in a zero trust Network architecture scenario one

concept related to segmentation is the virtual private cloud or VPC this is a virtual Network that consists of cloud resources where the VMs for one company are isolated from the resources of

another company and separate vpcs can be isolated using public and private Networks so the VPC term is applicable in AWS and

Google Cloud platform they call it a VNet in azure and we have public and private subnets The Familiar concept even with on-premises networks the environment

needs to be segmented public subnets that can access the internet directly and protected private Networks virtual networks can be connected to other networks with a VPN Gateway or

network peering So within the private networks of our Cloud subscription typically we're going to use Network peering it's going to be better in terms of performance

and you should have isolation as a customer on the CSP's backbone a VPN Gateway for scenarios like site-to-site connectivity for example where

you need encryption but generally speaking we see VPN Gateway in the site-to-site scenario and network peering within the subscription

for VDI and client scenarios a NAT Gateway for internet access usually makes sense section 5.1 also calls out installation and configuration of management tools so

there are a few considerations you should be aware of here the first is redundancy any critically important tool can be a single point of failure so adequate planning for

redundancy should be important just one real world example in hybrid Cloud we have a sync tool that synchronizes our on-premises identities with the

cloud and we typically have a backup instance on standby so if we have a problem with that primary we can bring the secondary online that way users

changing passwords group memberships that are being updated will continue to sync to the cloud and not be out of sync because our primary instance of the tool is down
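That primary/standby arrangement for a critical management tool can be sketched in a few lines. Names and behavior here are illustrative only, a toy model of the failover decision, not any real sync agent's logic:

```python
class SyncService:
    """Toy primary/standby pair for a critical management tool
    (e.g. an identity sync agent)."""
    def __init__(self, name: str, healthy: bool = True):
        self.name, self.healthy = name, healthy

def active_instance(primary: SyncService, standby: SyncService) -> SyncService:
    # Fail over to the standby when the primary stops responding, so
    # password and group-membership changes keep syncing to the cloud.
    return primary if primary.healthy else standby

primary, standby = SyncService("sync-01"), SyncService("sync-02")
print(active_instance(primary, standby).name)  # sync-01
primary.healthy = False
print(active_instance(primary, standby).name)  # sync-02
```

The point is simply that any runtime tool in the critical path needs a second instance and a defined switchover, so the tool itself is not a single point of failure.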

and that need for redundancy really comes down to is the tool we're talking about a runtime tool or is it a design time or other ad hoc tool that doesn't affect service operation

schedule downtime and maintenance downtime may not be acceptable for some tooling so we need to make sure that these tools are patched or taken offline for maintenance on a rotating schedule

or during acceptable Windows when we don't need them for example having our monitoring system that monitors our critical Services offline for an extended period of time would more

likely than not be unacceptable isolated Network and robust access control so with any management tooling we want to make sure that access to our

tools is tightly controlled with our virtualization management tools even more so because access to the physical hosts and the VMS running there certainly increases the scope of the

risk so adequate enforcement is very important we can use not only access control but need to know least privilege encryption with our tooling

like our remote desktop client for example and require VPN access into a secure access workstation in the cloud for example to get to systems that host

sensitive data and critical services now when we're talking about virtualization management tools that's a bit of a vague term if we're thinking about the physical hypervisor host in a

public Cloud that's going to be a CSP responsibility in a private Cloud that's going to be an organization responsibility configuration and change management so tools and the infrastructure that

supports them should be placed under configuration management to ensure that they stay in a known hardened state that we don't have a drift in the configuration there
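Detecting that drift is mechanically simple: compare the current settings against the hardened baseline on every compliance scan. A toy version of what configuration-management tooling does, with made-up setting names:

```python
def detect_drift(baseline: dict, current: dict) -> dict:
    """Report settings that have drifted from the hardened baseline.

    Conceptual sketch of a configuration-management compliance scan.
    """
    return {
        key: {"expected": baseline[key], "found": current.get(key)}
        for key in baseline
        if current.get(key) != baseline[key]
    }

baseline = {"ssh_root_login": "no", "tls_min_version": "1.2", "logging": "on"}
current = {"ssh_root_login": "yes", "tls_min_version": "1.2", "logging": "on"}
print(detect_drift(baseline, current))
# {'ssh_root_login': {'expected': 'no', 'found': 'yes'}}
```

An empty report means the tool is still in its known hardened state; anything else feeds the change management process for remediation.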

and then logging and monitoring audit Trail is important but logging activities can also create additional overhead so we need to moderate and balance the need for logging in a manner

that doesn't impact performance of the system we're collecting logs from there are also some virtual Hardware specific security configuration items called out because a VM shares physical

Hardware with potentially hundreds of other VMS the biggest issue related to Virtual Hardware security is enforcement for the hypervisor we need strict

segregation between the guest operating systems running on a single host in a public Cloud that is especially important because we're dealing with potentially hundreds of other customers

who are not part of our organization there are two main forms of control you should be aware of there's configuration just ensuring that the hypervisor has been configured correctly to provide the

minimum necessary functionality so disallowing inter-vm network communications if not required and encrypting snapshots now in a public Cloud scenario where you're consuming

VMs in an IaaS model we're talking about responsibilities of the CSP and since you can't audit your CSP directly that's where you go

to their portal and find the documents that give you confirmation that they have implemented the necessary controls and then there's patching so a customer

would be responsible for patching VMs in the IaaS model while the CSP patches the hypervisor and if there are VMs in a PaaS service you're running the CSP owns

VM patching there as well and in the vein of virtual Hardware there are a couple of particular concerns for virtual network security controls and a couple of Concepts we've

visited before so virtual private Cloud gives a customer a greater level of control at the network layer including managing non-routable IP addresses and

control over inter-vm communication now that's not talking about inter-vm communication at the host level but within our own network we can create

multiple vpcs to prevent groups of VMS from communicating with one another Even in our own subscriptions and this is exceedingly common in a zero

trust network architecture we'll carve out a VPC for app servers and another for the database tier

and we will restrict Ingress traffic to those networks using security groups a security group is similar to an access control list for network in fact it

looks a lot like a firewall it has distinct rules for inbound and outbound traffic and in AWS they call it a security group in Azure they call it a network security

group so in Azure the interface for configuring that Network Security Group looks a lot like a firewall in AWS they call a security group a virtual firewall

so just to give you some exposure to the concept I'm talking about there I'd like to do a quick demo and we'll take a look at securing virtual networks with security groups and notice here when I say security

groups I'm talking about a security group in this specific Network context this isn't like a security group that contains users and is used to provide

access to resources assigned to people so I'll just switch over to a browser to the Azure portal here portal.azure.com

and I'm going to take a quick look at network security groups from a couple of different angles so we'll start by looking just at the network security groups themselves and you'll notice here

that I have network security groups for FE which is front end so think front-end app server and BE back end think back-end database server so if I click on front end NSG here what it's going to

show me are the configuration elements of this NSG so I can see the inbound security rules and take a look at these rules here so we see the name of the

rule we see the priority and when we scroll over we can see the port and protocols that are applied and if it's an allow rule or a deny Rule and as you

might imagine you see the rule at the bottom there is a deny rule so if no allow rule is found all inbound traffic is denied so I'll click add here so we can

just add a rule and you'll see I can Define source Port ranges so I can make very specific rules so if I want to create an RDP rule for example for a

remote desktop I'm going to use 3389 and lock that rule down to that specific port and you'll see I can get down to IP

addresses or specific application security groups and tags that's something specific to Microsoft Azure but in any cloud provider you're going to find that

typically you can Define IP addresses or ranges of ips and then we can pick services so you'll notice here it gives me a standard list of services so I can decide what protocol I'd like to send

through so you notice when I pick https the destination Port is automatically set to 443. now in the Azure context

these network security groups can be applied in two ways they can be applied to the network interface of a VM directly which is fairly common and I can click the associate button and it will show me

any network interfaces that don't have an NSG and I can assign them to subnets this is much more common so you'll notice this

NSG is assigned to the FE subnet so the front end NSG is assigned to the front end subnet so the inbound and outbound rules apply to that entire subnet

in that case now I'm going to just give you a look from another angle here's a virtual machine and I mentioned that an NSG a network security group can apply

to a VM interface as well to its network adapter so if I scroll down and get into the networking here you can see that an

NSG has been applied to this network adapter and there's the name of the network security group and the inbound rules so notice there there's an RDP

rule remote desktop protocol and I can add inbound rules right here so what you have here looks a lot like a layer 4 stateful packet inspection

firewall so that's a quick look at nsgs I hope that gives you a better idea of what Security Group means in the network context

and finishing up section 5.1 guest operating system virtualization tool sets so the tool sets I'm talking about would come from the maker of the hypervisor and provide extended

functionality for various guest operating systems be those windows or Linux for example hyper-v integration Services enhance VM performance and provide

several useful features things like guest file copy time sync guest shutdown in the public Cloud these tool sets will

typically be provided by the CSP in some capacity that brings us to 5.2 operate physical and logical infrastructure for a cloud environment so let's look at the roughly

dozen topics we need to touch on here beginning with access controls for local and remote access so we'll touch on RDP SSH jump boxes and more

secure network configuration topics like VLAN DHCP DNS SEC and VPN and in network security controls we have

some overlap here with a discussion we're going to have in section 5.6 so I'm going to carry a few topics from this section over to 5.6. I'll let you know when we get there know that we won't

skip them I just want to consolidate this to a more natural single discussion of topics like firewalls IDS IPS and honeypots

operating system hardening through the application of baselines monitoring and remediation patch management infrastructure as code strategy

infrastructure as code is foundational in the public cloud in particular absolutely the norm so we'll dig in to the details there availability of

clustered host this delves into the physical and isn't really even specific to Cloud so this will be a bit more academic in terms of discussion availability of guest

operating systems performance and capacity monitoring Hardware monitoring again an area outside your corporate data center that is the responsibility

of the CSP configuration of host and guest operating system backup and restore functions and we'll finish up 5.2 with a

look at scheduling orchestration and maintenance in the management plane because this is such a big section of domain five I'm going to track this a

little more closely for you so we'll begin with 5.2.1 access controls for local and remote access so we have in local and remote access

remote desktop protocol the native remote access protocol for Windows we have secure shell which is the go-to for Linux operating systems very popular in

Remote Management of network devices so these are going to be utilized by the CSP and the consumer alike RDP and SSH

both support encryption and MFA secure terminal or console based access this is a system for secure local access this is going to be the realm of the CSP

in the public Cloud so that's using a KVM typically with access controls basically allowing you keyboard access only at your system through a layer of

security at that physical shared keyboard jump boxes at least what they call jump boxes in the ccsp exam you might hear these called jump servers

elsewhere but it's a Bastion host at the boundary of lower and higher security zones so csps offer services for this Azure Bastion on the Microsoft platform

the AWS Transit Gateway for Amazon then virtual client software tools that allow a remote connection to a VM for use as if it's your local machine

very common example here would be VDI virtual desktop infrastructure for contractors and access to any of these can generally

be gated with some form of privileged access management solution on the identity and access management platform used by the CSP next we have VPN so this extends a

private Network across a public network enabling users and devices to send and receive data across shared or public networks as if their devices were directly connected to the private

Network and you have split tunnel versus full tunnel so full tunnel means using VPN for all traffic both to the internet and corporate Network split tunnel uses

VPN for traffic destined for the corporate Network only and internet traffic direct through its normal route you'll see split tunnel very commonly in

work from home scenarios and then we have remote access versus site to site so site-to-site IPsec VPN uses an always-on mode where both packet header and payload are

encrypted this is ipsec tunnel mode and in a remote access scenario a connection is initiated from a user's PC or laptop for

a connection typically of shorter duration that's ipsec transport mode so data in a remote access session must

be encrypted in transit using strong protocols like TLS 1.3 and session keys that ideally are good only for that session so they are useless if

discovered for later sessions strong authentication you know maybe combined with cryptographic controls such as a shared secret key for SSH and or MFA and previously in the series we've talked

about strong MFA factors device State and other conditions of access being applied to an authentication request enhanced logging and reviews all admin

accounts should be subject to additional logging and review of activity and frequent access reviews privileged access Solutions in identity

as a service solutions often include an access reviews feature use of identity and access management tools so many csps offer that identity

as a service option that enables strong authentication and access control schemes right out of the box examples would include Azure active directory on

the Microsoft platform Google identity services for gcp and single sign-on is very important to the user experience so your identity as a service solutions typically enable

users to log into other services using their Company accounts many identity as a service solutions function as a single sign-on provider a general best practice for

administrative users is the use of a dedicated admin account for sensitive functions and a standard account for day-to-day use I want you to remember that for the ccsp exam for the real

world I want to tell you that while that's listed in the common body of knowledge for the exam it is not always true that's because increasingly identity as a service solutions offer

privileged identity management or privileged access management for just in time privilege elevation enabling us to run an account without privilege

day-to-day and when we need admin access for a few minutes or a few hours we can Elevate do our work that requires privilege and then our elevation either expires or we

self-revoke it's a more granular version of least privilege and I'll show you in a live Cloud environment here in just a moment

so solution features typically offer temporary elevation of privilege approval gates an audit trail when privilege is activated and an access review process to avoid permissions

sprawl so again the ccsp is not focused on one CSP but we'll take a look at privileged access management in one of the major

providers here just for context understanding that the features will vary by cloud service provider and I'll just switch to my browser here

and I'm in the Azure portal and I've navigated to the Azure active directory admin Center I'll click on Azure active directory and we're going to look at the privileged identity management feature

which is one of several privileged access management capabilities we find on the Microsoft platform I'll click on identity governance and under privileged

identity management I'll click Azure ad roles and I'm going to look at the global admin role so if I just type

Global administrator we have a privileged identity profile here and I see Adele has been assigned access to this profile so let's look at the settings of the profile and see what

that means when she activates This Global administrator profile well it means she gets Global administrator rights for up to eight hours at which point it's automatically revoked upon

activation we're requiring Azure multi-factor authentication we are requiring justification so she has to write us a little note in a long text box to tell us why she's activating

we can require a ticket on activation this is set to no at present and we're not requiring approval though those are features we can turn on so I'll click

edit here and you'll notice here I can turn this up to as much as 24 hours so we were at eight hours in our current

settings and I can turn on notifications a popular option when we decide not to require approval folks always behave when they know everybody else is being notified that they've

activated this privileged role so it gives us a nice audit Trail but it gives us something almost as good as approval but in that case the user doesn't have

to wait for approval and we no doubt trust the folks that we're assigning access to this privileged role and you

see here I can require the ticket I can turn those extra features on if I like now I want to just back out of here and

talk for a moment about the access review option so you'll see here under manage I have access reviews for the privileged identity management feature I

can configure access reviews for my privileged roles to see if users still need access so here's my Global admin review

I can set a start date I can set a frequency I like to review the sensitive roles at least quarterly and if I scroll down here

you'll see that I can select the role or roles that I would like to audit so I could basically configure one review that kicks off for all the roles so a user will have to review any role that

they're a member of so I see all active and eligible assignments so eligible is how we do this the user is eligible to activate the role it's not permanently activated

meaning they don't have privilege all the time and for reviewers you'll notice here that I can assign a manager I can select a specific reviewer or my

favorite let the reviewers basically self review and tell me if they still need access and when I scroll down here you'll notice that upon completion

once the review period is over if the users don't respond I can make no change I can remove their access or automatically approve their access I can

choose how I respond if the user fails to reply and I have a healthy number of notification options that I can configure here to make sure the right people are in the loop that we've gone

through that access review process so that's just a quick look again there are multiple privileged access management features in Azure ad and you'll find similar features on other identity as a service platforms I just

wanted to give you some context for some of those capabilities we were talking about earlier in this session we're going to move on to 5.2.2 secure network

configuration so we'll talk security around vlans DHCP DNS sec TLS VPN just to name a few

I want to start by revisiting the zero trust security concept which we've touched on earlier in the series a strategy where no entity is trusted by default moving on from the trust but

verify strategy of years past and IT addresses the limitations of the Legacy network perimeter-based security model where we trust everything on our trusted Network and everything outside that

perimeter firewall is untrusted now we're trusting no entity it treats user identity as the control plane and it assumes compromise or breach in

verifying every request but zero trust Network architecture is another element of a zero trust strategy

in an Enterprise so network security groups factor here along with network firewalls inbound and outbound traffic filtering inbound and outbound traffic inspection

and centralized security policy management and enforcement so the network security group provides an additional layer of

security for cloud resources it acts as a virtual firewall of sorts for virtual networks and resource instances like VMs and databases it

carries a list of security rules IP addresses and port ranges that allow or deny network traffic to resource instances on a subnet it provides a

virtual firewall for a collection of cloud Resources with the same security posture it exists in multiple csps the details are going to vary slightly with each they call it a security group in AWS

they call it a network security group in azure so when we segment our Network we can use a security group to act as an

ingress and egress filter for the segments of our network so perhaps I have a subnet for my app a subnet for my databases

a subnet for management infrastructure so that's enough for now we're going to revisit this a bit later and I'll actually get into a security group with you Hands-On so you can get a good look

at that feature so segmentation that I was just referring to restricting services that are permitted to access or be accessible from other zones using rules to control inbound and outbound

traffic so we use rules that are enforced by the IP address ranges of each subnet and within a virtual Network segmentation can be used to achieve

isolation it's Port filtering through a network security group we can do filtering through a firewall but in that micro segmentation scenario we're going to use a security group or

what they call a network security group in Azure it's a security group in AWS and our VPC our virtual private Cloud contains private subnets and each of

these subnets has its own CIDR or IP address range and cannot connect directly to the internet they could be configured to go through the NAT Gateway if outbound internet

connectivity is necessary client VMs and database servers will often be hosted in a private subnet and there are actually three private address ranges that are

predefined so private subnets are not for public services like websites but there are three private IP address ranges that are defined there's the

10.0.0.0/8 so that's a Class A you have 172.16.0.0 to 172.31.255.255 that's a Class B and then the

192.168.0.0/16 which is a Class C these private ranges are defined in RFC 1918 and not routable over the public

internet all other IP address ranges except the self-assigned 169.254.0.0/16 range are public addresses
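Python's standard-library ipaddress module already knows these ranges, so the classification just described can be checked directly — a handy sanity check when reviewing subnet plans. The sample addresses below are illustrative.

```python
import ipaddress

def classify(ip: str) -> str:
    """Label an address as RFC 1918 private, link-local (self-assigned), or public."""
    addr = ipaddress.ip_address(ip)
    if addr.is_link_local:          # the 169.254.0.0/16 self-assigned range
        return "link-local (self-assigned)"
    if addr.is_private:             # covers RFC 1918 plus a few other IANA-reserved ranges
        return "private (RFC 1918)"
    return "public"

for ip in ["10.1.2.3", "172.31.255.1", "192.168.0.10", "169.254.10.1", "8.8.8.8"]:
    print(ip, "->", classify(ip))
```

Note that `is_private` is slightly broader than RFC 1918 alone — it also flags other IANA-reserved blocks such as documentation ranges — which is usually what you want when deciding whether an address is internet-routable.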

so just as you would use these in your own data center these same private ranges are equally applicable in the cloud and used frequently so in a secure network design we have to

account for East-West traffic this is where traffic moves laterally between servers within a data center or in this case within our virtual data center which is our Cloud subscription

north-south traffic moves outside of the data center that's Ingress and egress VLAN which is a collection of devices that communicate with one another as if

they made up a single physical LAN and it creates a distinct broadcast domain that can span multiple physical Network segments

so on a switch we can assign ports to a VLAN so if I have my finance department spread across multiple floors I could have members of the finance department on their own

private VLAN so their work with sensitive data never leaves that distinct broadcast domain they're within their own virtual local area network

then we have a screened subnet that's a subnet placed between two routers or firewalls bastion hosts are often located within a subnet like this as

maybe web servers you hear this called a perimeter Network or a DMZ sometimes so to wrap up the VLAN discussion many public clouds offer the virtual private

Cloud functionality essentially a Sandbox area within the larger public Cloud it's Network space dedicated to a specific customer different csps have

different names for it Microsoft calls the VPC a v-net a virtual network but vpcs take the form of a dedicated VLAN essentially for a specific user

organization meaning other Cloud tenants are blocked from accessing those resources and a given customer can spin up multiple

vpcs within their subscription and allow or prevent communication between those vlans those essentially dedicated vlans so we have a couple of options for VPC

connectivity connecting those virtual networks so to speak so we can create a VPN connection using

L2TP or IPsec using a VPN Gateway or a Transit Gateway as it's sometimes called now to connect vpcs within our subscription we can use network peering

that's another method for connecting virtual networks in the cloud so peering is the more common option between Cloud networks within a subscription and then for hybrid

connectivity back to our corporate Network that's when we'd use a VPN we'd use a site-to-site VPN effectively creating a hybrid cloud
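One practical wrinkle worth making concrete about the peering option above: peering connects exactly the two networks involved, and in both AWS and Azure it is non-transitive by default — two spokes peered to a common hub cannot reach each other without their own peering (or a routed hub). A tiny model, with made-up VPC names:

```python
# Peering modeled as direct edges only: no transitivity.
# VPC names below are hypothetical.

def can_communicate(peerings: set, a: str, b: str) -> bool:
    """True only if a direct peering exists between the two networks."""
    return (a, b) in peerings or (b, a) in peerings

peerings = {("app-vpc", "db-vpc"), ("app-vpc", "mgmt-vpc")}

print(can_communicate(peerings, "app-vpc", "db-vpc"))   # True  — directly peered
print(can_communicate(peerings, "db-vpc", "mgmt-vpc"))  # False — no direct peering, even via app-vpc
```

That non-transitivity is often used deliberately as a segmentation control: spokes can each reach shared services without being able to reach one another.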

moving on let's talk DNS security so DNSSEC DNS security extensions the set of specifications primarily aimed at reinforcing the integrity of DNS it

achieves this by providing for cryptographic authentication of DNS data using digital signatures it provides proof of origin and it makes

cache poisoning and spoofing attacks more difficult it does not provide for confidentiality since it signs DNS data rather than encrypting it

encrypting data in motion is often achieved through transport layer security that's how it's done for secure HTTP for secure web sessions

and TLS in that web context uses an X.509

certificate with a public private key pair so for customer public facing websites you'll typically use a certificate from a trusted provider like a DigiCert or a GoDaddy

for secure sessions on internal sites within your organization many organizations will have their own public key infrastructure to issue certificates

because it only needs to be trusted by members and devices of their organization if you're preparing for the ccsp exam you're likely already familiar with the
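For a sense of what enforcing strong TLS looks like in code, here is a hedged sketch using Python's standard-library ssl module (an illustrative choice — the course doesn't prescribe a language). It builds a client context that validates the certificate chain and hostname and refuses anything below TLS 1.2; you could set the floor to TLS 1.3 where your platform's OpenSSL supports it.

```python
import ssl

# Client-side TLS policy sketch: certificate validation on, TLS floor enforced.
ctx = ssl.create_default_context()          # CERT_REQUIRED + hostname checking by default
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # raise to TLSv1_3 where supported

print(ctx.minimum_version >= ssl.TLSVersion.TLSv1_2)  # True
print(ctx.verify_mode == ssl.CERT_REQUIRED)           # True — chain is validated
print(ctx.check_hostname)                             # True — name must match the cert
```

For internal sites backed by your own PKI, the same context would simply load your organization's root CA via `ctx.load_verify_locations(...)` so that only internally issued certificates are trusted.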

core functionality of DHCP Dynamic host configuration protocol issuing IP addresses dynamically to endpoints coming onto the network there are a

couple of Niche security discussions to be had here relevant for the exam so one is the idea that the IP address associated with a system event can be used when identifying a user or a system

so our SIEM our security information and event management solution can leverage this data to track an IP address to a specific endpoint we really need to just

let our SIEM ingest those DHCP logs to leverage that data for greater context there's another Niche discussion to have

and that's that some hypervisors offer a feature to limit which network cards are eligible to perform the DHCP offer which can prevent Rogue DHCP servers from

issuing IP addresses to clients and servers so we can prevent a rogue VM from being spun up and configured as a DHCP server

so that sort of protection is going to be the responsibility of the CSP in a public Cloud scenario as the CSP is responsible for the configuration of that physical host that type 1

hypervisor and in a private cloud and a corporate data center that's going to be the responsibility of the organization you should be familiar with methods to

provide non-repudiation so the guarantee that no one can deny a transaction digital signatures prove that a digital message or document was not modified

intentionally or unintentionally from the time it was signed digital signatures use asymmetric cryptography a public-private key pair the digital equivalent of a handwritten

signature or stamped seal a couple of common methods for implementing non-repudiation include message authentication code or Mac where the two parties that are communicating

can verify non-repudiation using a session key electronic Financial transfers or electronic funds transfers frequently

use Macs to preserve data integrity and then there's hmac or hash based message authentication code which is a special type of Mac with a cryptographic

hash function and a secret cryptographic key so https some secure FTP options and other transfer protocols use hmac so just breaking down a few

cryptographic Concepts cryptography provides a number of security functions including confidentiality integrity and

non-repudiation so your encryption tools like TLS or VPN can be used to provide confidentiality hashing can be implemented to detect unintentional data modifications
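The distinction between a plain hash (catches unintentional changes) and a keyed HMAC (also catches intentional tampering and authenticates the sender) can be sketched with Python's standard library. The message and key below are illustrative only.

```python
import hashlib, hmac

message = b"transfer $100 to account 42"

# Plain hash: detects accidental corruption, but an attacker who can alter
# the message can simply recompute the hash alongside it.
digest = hashlib.sha256(message).hexdigest()

# HMAC: the same integrity check bound to a shared secret key, so a valid
# tag also proves the message came from a key holder.
key = b"shared-session-key"  # illustrative; in practice a per-session secret
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

tampered = b"transfer $900 to account 42"
print(hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).hexdigest()))   # True
print(hmac.compare_digest(tag, hmac.new(key, tampered, hashlib.sha256).hexdigest()))  # False
```

Note the use of `hmac.compare_digest` rather than `==` — constant-time comparison avoids leaking information through timing differences.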

integrity is the focus there additional security measures like digital signatures or hmac can be used

to detect intentional tampering hmac can simultaneously verify both data integrity and message authenticity and up next is network security controls

topic number three in that long list of 12 topics within section 5.2 we'll start with the last one on this list Bastion host so a Bastion host is a

host used to allow administrators to access a private network from a lower security zone so the system will have a network interface in both the lower and higher security zones and the host

itself will be secured at the same level as the higher security Zone it's connected to when you're accessing sensitive resources generally speaking the system you are accessing them from

should be secured at the same level or greater now we've talked about Bastion host by another name already in this series and that is

jump box jump box or jump server are two common names for Bastion hosts and csps offer services for this there's Azure

Bastion and AWS Transit Gateway which offer Bastion host functionality usually with additional conveniences for

example Azure Bastion allows you to connect via RDP or SSH to Windows and Linux systems directly from a browser so you don't need an endpoint client you don't need an

RDP or SSH client but in whatever form think of a Bastion host as a dedicated host for secure admin access there are several other network security controls in the list there you should

have basic familiarity with in terms of functionality and their strength so in the realm of firewalls stateless and stateful

application host and virtual firewalls web application firewalls or WAFs the Next Generation firewall or ngfw

in the intrusion detection and prevention systems host-based IDS and IPS HIDS and HIPS

network-based IDS and IPS NIDS and NIPS and hardware versus software and in terms of other security controls the honey pot and vulnerability

assessments now we're going to touch on all of these in section 5.6 and the conversation would be largely redundant so we're going to press pause

on these and we'll cover these in detail later in this session in 5.6 and for now we'll move on to the fourth item on the list operating system

hardening through the application of baselines monitoring and remediation so OS hardening is the configuration of a machine into a secure state through

application of a configuration Baseline baselines can be applied to a single VM image or we can apply that Baseline to a VM template

which we create once and then use that to deploy all our VMS and a hardened image may be a customer defined image a CSP defined image or it

could be from a third party often available through a cloud Marketplace a great example of a third party is the hardened image set you can get from the

center for Internet Security CIS who offers hardened images in CSP marketplaces in fact if you want to build your own hardened image you can

also buy the scripts from the center for Internet Security directly but I find many customers just opt to use the CIS images from the marketplace of their

chosen CSP let's dig a bit deeper into configuration baselines and related Concepts so we have the concept of a control which is a high level description of a feature or an activity

that needs to be addressed and it's not specific to a technology or an implementation a security control is an example where we describe a level of security

that needs to be achieved without discussing the specific implementation a benchmark contains security recommendations for a

specific technology like an is VM or maybe we're talking about an identity as a service provider like Azure active directory or Cisco Duo

or Google's identity services then we have a baseline which is the implementation of The Benchmark on the individual service

so a control is expressed as a benchmark and a benchmark is implemented as a baseline benchmarks describe configuration

baselines and best practices for securely configuring a system you'll often see platform or vendor specific guides released with new products so that they can be set up as

securely as possible making them less vulnerable to attack web servers for example the two main web servers used by commercial companies are

Microsoft's internet information server and the Linux based Apache because they're public facing they're prime targets for hackers and to help reduce the risk both Microsoft and

Apache provide security guides to help security teams reduce the attack surface making them more secure these guides advise that updates be in

place unneeded services be disabled and the operating system be hardened to minimize risk of security breach and just as with web servers operating

system vendors like Microsoft have guides that detail best practices for installing and securing their operating systems OS benchmarks are also available

from CIS and other third parties application service vendors produce guides on how to configure application servers like email servers or database servers to make them less vulnerable to

attack and the list goes on network infrastructure devices from companies like Cisco produce network devices and offer benchmarks for secure

configuration of their Network Hardware at the end of the day benchmarks aim to ease the process of securing a component reducing the attack footprint and minimizing the risk of

security breach and diving into some of the details of os hardening we want to minimize listening ports and running Services restricting to those that are absolutely

necessary filtering traffic disabling some ports entirely if unneeded we can block ports through firewalls we can disable listening ports entirely by

disabling the underlying Windows service many times then there's the Windows registry and access should be restricted and updates controlled through policy wherever possible
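Circling back to the minimize-listening-ports step, the audit half of that work is just a comparison of what is actually listening against what the hardened baseline allows. The port numbers and the approved baseline below are invented for illustration; in practice the observed set would come from a scan or from tooling on the host.

```python
# Listening-port audit sketch: flag anything outside the approved baseline.

def audit_ports(observed: set, approved: set) -> set:
    """Return ports that are listening but not in the hardened baseline."""
    return observed - approved

approved = {22, 443}            # e.g., SSH and HTTPS only (illustrative baseline)
observed = {22, 443, 23, 3389}  # telnet and RDP found listening

print(sorted(audit_ports(observed, approved)))  # → [23, 3389]
```

Anything flagged then gets one of the two treatments from the paragraph above: disable the underlying service entirely, or block the port at the firewall if the service must stay running.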

we always want to take a backup of the registry before we start making changes disk encryption so Drive encryption full drive encryption we call it can prevent

unwanted access to data in a variety of circumstances so full disk encryption is BitLocker on the Windows platform or dm-crypt on Linux

and Os hardening can often be implemented through security baselines that come from the vendor and they can be applied through Active Directory Group policies or management tools like

mobile device management platforms such as Microsoft Intune or AirWatch and we can implement all of these using configuration baselines

I wanted to call out a few sources for configuration baselines for OS hardening in particular that is the exam objective called out in the syllabus after all

so we have vendor supplied baselines again Microsoft VMware and Linux all offer configuration guidance for their products that point out specific Security Options and recommended

settings but they all have configuration guidelines and in the case of Microsoft for sure they offer configuration baselines you can download as a starting

point DISA STIGs so the Defense Information Systems Agency produces baseline

documents known as security technical implementation guides or STIGs now I will warn you that the DISA STIGs may include configurations that are too

restrictive for many organizations after all Their audience is government so the regulations around Security on the whole are going to be more stringent

in that space then we have NIST checklists so the National Institute of Standards and Technology maintains a repository of

configuration checklists for various OS and application software another agency focused on a government audience

so likely again some guidance you can use and some that may be a bit too stringent for the average commercial company then we have CIS Benchmark so

the center for Internet Security publishes Baseline guides for a variety of operating systems applications devices all of which incorporate many

security best practices and CIS offers Benchmark scripts that are priced based on environment size if you go to your CSP Marketplace you'll find VM images

that give you a ready-made hardened image if you want to go that route for your OS hardening but as you can see here you have a number of options available to you

next on the agenda in 5.2 is patch management there are a few Basics you want to be familiar with on exam day patch management is sometimes called update management really just two names

for the same discipline and it ensures that systems are kept up to date with current patches the process will evaluate test approve and deploy patches so we need to design

that process often we use what I call a ring strategy where we'll deploy to a small group of users usually within the

IT department then in a second ring to a broader sampling a pilot group across business units before we deploy broadly to the organization

system audits verify the deployment of the approved patches to the systems we want to make sure we patch both native OS and third-party applications

it's pretty common that organizations of lesser maturity will not get around to patching third-party apps which leaves security holes

we want to apply out-of-band updates promptly so if a software provider supplies a security patch out of band it's usually

because it is an urgent situation and cloud service providers generally provide a patch management feature tailored to their IaaS offering
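the ring strategy described above can be sketched in a few lines the ring names hosts and failure threshold here are illustrative assumptions not part of any particular product

```python
# sketch of a ring-based patch rollout: deploy to a small IT ring first,
# then a pilot ring, then broadly, halting if a ring's failure rate is
# too high (ring names, hosts and threshold are illustrative)

RINGS = [
    ("ring0-it", ["it-01", "it-02"]),
    ("ring1-pilot", ["fin-01", "hr-01", "eng-01"]),
    ("ring2-broad", ["host-%03d" % i for i in range(50)]),
]

def roll_out(patch_id, deploy_fn, max_failure_rate=0.05):
    """Deploy ring by ring; return the ring name where rollout halted,
    or None if all rings were patched successfully."""
    for ring_name, hosts in RINGS:
        failures = sum(0 if deploy_fn(patch_id, h) else 1 for h in hosts)
        if failures / len(hosts) > max_failure_rate:
            return ring_name  # halt here and investigate before continuing
    return None

result = roll_out("KB0001", lambda patch, host: True)
print(result)  # -> None, meaning the rollout completed
```

a system audit step would then verify the approved patch actually landed on every host in every ring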

up next is infrastructure as code strategy infrastructure as code is the management of infrastructure our Network VMS load

balancers and connection topology described in code just as the same source code generates the same application binary code in the

infrastructure as code Model results in the same environment every time it's applied in fact infrastructure as code is a key devops practice and it's used in

conjunction with continuous integration and continuous delivery in fact infrastructure as code is very common it's really the standard in the cloud

the csps typically offer Cloud native controls for implementing infrastructure as code Microsoft offers Azure resource manager

Amazon offers AWS cloud formation these tools make managing the respective Cloud resources easier on each platform supporting infrastructure as code but

they are separate tools for separate platforms they're platform specific now third-party tools add more flexibility functionality and multi-platform support

organizations will typically move to third-party IAC Solutions when the native Cloud Solutions do not meet their functionality needs or they become a

multi-cloud customer so for example some organizations move to terraform for infrastructure as code because it supports the major csps using

a single language and csps offer a marketplace where third parties can publish offers related to infrastructure as code now there are two distinct

characteristics of infrastructure as code that improve resiliency in IaaS and PaaS service models I want to make sure you're familiar with these for the real world if not for the exam so

the first term is declarative declarative infrastructure as code must know the current state it must know whether the infrastructure already exists to know whether or not it needs to create it

imperative deployment methodologies are unaware of current state if you write a Powershell script for example or a python script that is an imperative

deployment methodology it doesn't know if the infrastructure already exists infrastructure as code when implemented through

the CSP native tools or solutions like Terraform is also idempotent deployment of an infrastructure as code template can be applied multiple times without changing

the result for example if the template says deploy four VMS and three exist only one more is deployed but these characteristics help reduce

errors and configuration drift because we can apply the infrastructure as code template multiple times and the results will always be the same it will be an

environment exactly as is described in the infrastructure as code template up next we'll talk about the availability of clustered hosts and we're

really talking about clustered virtualization hosts the physical servers hosting our hypervisor so that's the realm of the CSP in the public cloud but our responsibility in

the corporate data center and a hybrid Cloud scenario the cluster advantages include High availability via redundancy optimized performance via distributed

workload as the cluster can push VMS to different members of the cluster to distribute the load and availability to scale resources so let's start with the cluster

management agent it's often part of the hypervisor or load balancer software it's responsible for mediating access to Shared resources in a cluster

reservations are guarantees for a certain minimum level of resources available to a specified virtual machine a limit is a maximum allocation

a share is a weighting given to a particular VM a share value is used to calculate percentage-based access to pooled resources when there is contention in

those resources distributed resource scheduling is the coordination element in a cluster of VMware ESXi hosts so DRS is VMware

specific it mediates access to the physical resources it handles resources available to a cluster reservations and limits for the VMS running on the cluster and

maintenance features Dynamic optimization is Microsoft's DRS equivalent delivered through their cluster management software

storage clusters pool storage providing reliability increase performance and possibly additional capacity
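the share-based percentage access described a moment ago is simple arithmetic each VM's slice of the contended pool is its share weight over the total weight a toy sketch with made-up numbers follows

```python
# sketch of share-based allocation under contention: each VM's slice of
# a pooled resource is its share weight divided by the total weight
# (the share values and pool size below are toy numbers)

def allocate(shares, pool_mhz):
    """Return each VM's allocation of a contended CPU pool in MHz."""
    total = sum(shares.values())
    return {vm: pool_mhz * weight / total for vm, weight in shares.items()}

# high / normal / low share weights over a 7000 MHz pool
print(allocate({"vm-a": 2000, "vm-b": 1000, "vm-c": 500}, 7000))
# -> {'vm-a': 4000.0, 'vm-b': 2000.0, 'vm-c': 1000.0}
```

reservations and limits would then clamp these computed values to each VM's guaranteed minimum and maximum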

all of this Tech is CSP owned in the public cloud and organization owned in a private or hybrid cloud next we're going to talk availability of

the guest operating system and we're really talking about the guest operating system in the IaaS context in this case we've deployed a virtual machine in the IaaS model

and it's important to recognize that once a VM is created in IaaS the CSP no longer has direct control over that guest operating system the customer can use baselines backups

and cloud storage features to provide resiliency in the guest OS using vendor supplied OS baseline templates for example or cloud storage

redundancy features like zone or Geo redundancy or backups and in virtualized Cloud infrastructure this might involve the use of snapshots fortunately your csps

offer backup features for VMs in the IaaS model resiliency is achieved by architecting systems to handle failures from the outset rather than needing to be

recovered for example virtualization host clusters with live migration provide resiliency but resiliency of the physical

hypervisor cluster networks and storage are the responsibility of the CSP so next we'll take a look at performance and capacity monitoring

now the CSP should Implement monitoring to ensure that they're able to meet customer demands and promised capacity because the cloud provides the

perception of unlimited capacity but in reality it is a highly scalable platform of finite infrastructure resources cleverly

oversubscribed so consumers should monitor to ensure the CSP is meeting their obligations in terms of performance and availability most monitoring tasks will tend to be in

support of the availability objective monitoring for service availability first and foremost alerts should be generated based on established thresholds and appropriate

response plans initiated when objectives are not being met when thresholds are breached monitoring should include utilization

performance and availability for compute for CPU memory storage and network that's what we call the core 4.
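threshold-based alerting across the core four can be sketched as a simple check the threshold values below are illustrative real values would come from your established service objectives

```python
# sketch of threshold-based alerting on the "core four" metrics:
# CPU, memory, storage and network utilization
# (threshold values are illustrative, not recommendations)

THRESHOLDS = {"cpu_pct": 85, "memory_pct": 90, "storage_pct": 80, "network_pct": 70}

def check_thresholds(sample):
    """Return the metrics whose utilization breached its threshold."""
    return [m for m, limit in THRESHOLDS.items() if sample.get(m, 0) > limit]

alerts = check_thresholds(
    {"cpu_pct": 92, "memory_pct": 60, "storage_pct": 81, "network_pct": 40})
print(alerts)  # -> ['cpu_pct', 'storage_pct']  initiate the response plan
```

each breach would then trigger the appropriate documented response plan as the transcript describes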

and just as reviews make log files impactful appropriate use of performance data is also essential if it's not used it is wasted increasing cost and

nothing more next up is Hardware monitoring so this is definitely in the public Cloud going to be the purview of the CSP in the

private Cloud that's where it falls to the customer in their corporate data center So Physical Hardware is necessary to provide all the services that enable the virtualization that enables cloud

computing and again Hardware monitoring should monitor CPU Ram fans disk drives Network

components any point of failure in that physical infrastructure environmental monitoring is also important Computing components are not designed for use in

very hot humid or wet environments so HVAC temperature and humidity monitoring are all important and in public Cloud Hardware monitoring

will be the responsibility of the CSP and not the consumer as with many topics it comes down to that shared responsibility model and knowing our role

next we'll talk configuration of host and guest operating system backup and restore functions so responsibility varies by categories

we're going to go beyond the OS for just a moment here in the SaaS model the CSP retains full control over backup and restore so if there are operating systems

behind the scenes the CSP owns it all the only customer responsibility there is typically a shared responsibility for their own data in the PaaS model shared responsibility

CSP owns the infrastructure backups consumer owns backups of their data in the IaaS model the consumer owns backup and recovery of VMs

so consumer backups may include full backups snapshots or definition files used for infrastructure as code deployments customer choice in that case

there are a few additional considerations so sensitive data may be stored in backups and in this case access controls and need-to-know principles will limit exposure

physical separation is important backups should be stored on different Hardware or availability zones so using Zone redundant or geo-redundant

cloud storage for example Integrity of all backups should be verified routinely to ensure they're usable that brings us to our final Topic in

section 5.2 the management plane the management plane in the cloud provides virtual management options analogous to physical admin options of a

legacy data center for example the ability to power VMS on and off provisioning virtual infrastructure for VMS like RAM and Storage it includes orchestration this is the

automated configuration and management of resources in bulk this would include features like patch management and VM reboots which are very commonly orchestrated tasks

and the Management console is the web-based consumer interface for managing resources and they'll typically be command line

equivalents as well it's very important though that the CSP ensure that management portal calls to the management plane only allow customer

access to their own resources up next is 5.3 implement operational controls and standards like ITIL and ISO

IEC 20000-1 that's part one in short we're talking service management and topics in 5.3 will run the gamut of service management including change management

continuity management information security management continual service improvement management incident problem

release deployment configuration service level availability and capacity management

so fully a dozen different subsections within 5.3 so what is ISO IEC 20000-1 well it specifies requirements for establishing implementing maintaining

and continually improving a service management system it supports management of the service life cycle including planning design transition delivery and

service improvement so these topics are all relevant for both the consumer and the CSP Your Role varies based on the cloud model but relevant to the CSP and the consumer

just the same so we'll start with a look at configuration change and asset management and I'm covering these three together because of their interrelated

nature one does impact the other so Change Control refers to the process of evaluating a change request within an organization and deciding if it should go ahead

requests are generally sent to a change Advisory board or a cab for short to ensure that it's beneficial to the company this typically requires changes to be

requested approved tested and documented so we have change management which is the policy that details how changes will be processed in an organization and

change control which is the process of evaluating a change request to decide if it should be implemented so change management is guidance on the process Change Control is the process in

action and in an environment that leverages CI CD and infrastructure as code change reviews may be partially automated when new code is ready for deployment the

level of automation is going to vary by maturity whether it's continuous delivery or continuous deployment but automation is quite common and this

reduces operational overhead and human error reduces security risk enables more frequent releases while maintaining a strong security posture

and if you haven't already you'll find that CI CD and infrastructure as code are the norm not the exception in the cloud configuration management ensures that

systems are configured in a similar way configurations are known and they're documented baselining ensures that systems are deployed with a common baseline or

starting point and imaging is a common baselining method whether it's in IaaS with virtual machines or in containerization imaging VM templates

or container images are very common a baseline is composed of individual settings called a configuration item change management on the other hand

reduces outages or weakened security from unauthorized changes versioning uses a labeling or numbering system to track changes in updated versions of

software and configuration management and change management together can prevent incidents and service outages continuity management focuses on the

availability aspect of the CIA Triad and there are a few standards out there related to continuity management The ccsp Exam May mention the nist risk

management framework or RMF and ISO 27000 both of which deal with business continuity and Disaster Recovery terms that fall under the larger category of continuity management

we have the health insurance portability and accountability act or HIPAA which governs Health Care data in the United States and mandates adequate data

backups Disaster Recovery planning and emergency access to healthcare data in the event of a system interruption remember your number one responsibility

as a security professional is human safety nowhere more apparent than with HIPAA and then there's ISO 22301 security and resilience and business continuity

Management Systems this specifies the requirements needed for an organization to plan implement operate and continually improve the continuity capability

so for the exam remember these are all associated with business continuity disaster recovery and availability they are in one way or another relevant for

both customer or consumer and the CSP the goal of Information Security Management is to ensure a consistent organizational approach to managing

security risks it's the approach an organization takes to preserving confidentiality integrity and availability the CIA Triad for systems and data

there are several standards that provide guidance for implementing and managing security controls in a cloud environment and those include ISO 27001

27017 27018 27701 the nist risk management framework nist SP 800-53 the nist cyber security

framework and the SOC 2 standard from the American Institute of Certified Public Accountants of all places and we've talked about the importance of SOC 2

reports already and being familiar with all of these at a high level will be good insurance on exam day and very useful to you throughout your cyber security career

so the standards we're talking about here are all related to development of Information Security Management standards for an organization so let's

just cover these a bit further at a high level we have ISO IEC 27001 which is a global standard for Information Security Management that helps organizations

protect their data from threats there's ISO 27017 which is a security standard developed for cloud service

providers for csps and users to make a safer cloud-based environment and to reduce the risk of security problems we actually cover

27017 at some depth back in domain one in section 1.5 then there's 27018 which is the first International standard about the privacy

in Cloud Computing Services it is a code of practice for protection of personally identifiable information in public clouds acting as pii

processors this will be covered in depth in domain 6 and section 6.2 so we'll get a bit further into 27018 a bit later

in this session ISO 27701 extends the guidance in 27001 to manage risks related to privacy by

implementing and managing a privacy information management system or pims I think it's best if I describe the nist RMF and CSF together that's the risk

management framework and the cyber security framework from nist so the risk management Frameworks audience is the entire federal government

and the CSF is aimed at private commercial businesses although both address cyber security risk management the RMF is mandatory and the CSF is

voluntary of course the NIST SP 800-53 provides a catalog of security and privacy controls for all U.S federal information systems

except those related to national security so it's a government audience there again government-focused and the guidance

follows FIPS 200.

and then the SOC 2 standard is a framework that's seen wide adoption among csps as well as the use of a third party to perform audits and that's important because it provides increased

assurance for business partners and customers who cannot audit the CSP directly because they have far too many customers to allow it remember earlier in the series when we went to the CSP

portals and saw we can download that SOC 2 report to gain that assurance this is another standard that will be covered in depth in domain six in

section 6.2 moving on to continual service improvement management one critical element of continual service improvement includes the areas

of monitoring and measurement which often take the form of security metrics and metrics need to be tailored to the audience they will be presented to which

often means executive friendly Business Leaders will be less interested in technical topics the metrics should be used to aggregate information and present it in an easily understood

actionable format next up is Incident Management and there are a couple of Concepts you want to be familiar with here the first is an event

events are any observable item including routine actions such as a user successfully logging into a system incidents by contrast are events that are unplanned and have an adverse impact on

the organization now all incidents should be investigated and remediated to restore the organization's normal operations and to minimize adverse impact

not all incidents will require the security team but certainly the ccsp exam focus is security so The Incident Management

framework that you can expect to see in focus on this exam is quite likely going to be nist 800-61 the computer security incident handling guide

it's a very popular standard it's called out in the common body of knowledge it's covered in depth in this course in section 5.6 manage security operations

so we'll be talking about the computer security incident handling guide from nist here very shortly greater depth I did want to mention the

incident response framework from SANS SANS 504-B that includes six steps which starts with preparation

where incident response plans are written and configurations documented identification which determines whether or not an organization has been breached

is it really an incident in other words step three is containment limiting damage and limiting the scope of the incident step four is eradication once affected systems are identified

coordinated isolation or shutdown and then rebuild and notify relevant parties step five is recovery root cause is addressed and time to

return to normal operations is estimated and executed and then step six or phase six helps prevent recurrence and improve IR

processes I wanted to share the SANS incident response phases here for two reasons number one you're going to see them

again in your cyber security career number two when we dive into nist 800-61 a bit later you're going to

notice a number of parallels problem management so in the ITIL framework problems are the causes of incidents or Adverse Events that impact

the CIA Triad problems are in essence the root cause of incidents problem management utilizes root cause analysis to identify

the underlying problems that lead to an incident it also aims to minimize the likelihood of future recurrence an unsolved problem will be documented

and tracked in a known issues or known errors database and in the world of problem management a temporary fix is called a workaround next up is release management so today

traditional release management practices have been replaced in large part with release practices in Agile development methodologies the primary change is the frequency of

releases due to the increased speed of development activities in continuous integration and continuous delivery or CI CD release scheduling may require

coordination with customers and the CSP so it may not be fully automated but it's certainly going to be partially automated the release manager is responsible for a

number of checks including ensuring change requests and approvals are complete before approving the final release gate changes that impact data

exposure may require the security team some of the release process is often automated but manual processes may be involved such as updating documentation and writing release notes

from a security perspective it's worth noting that the increased Automation and pace of release in agile and CI CD typical to the cloud necessitates automated security testing and policy

controls agile and cicd are the norm for the cloud deployment management so in more mature organizations the CD in cicd stands for

continuous deployment which further or fully automates the release process once a developer has written their code and checked it in automated testing is

triggered and if all tests pass code is integrated and deployed automatically less manual effort means lower cost fewer mistakes faster releases

although it's worth mentioning that even organizations with continuous deployment may still require some deployment management processes to deal with deployments that can't be fully automated

processes for new software and infrastructure should be documented containerization managed kubernetes is common in mature organizations supporting more frequent deployment in

public Cloud environments kubernetes is the de facto standard for containerization and fully automated deployment requires greater coordination with and

integration of information security throughout the development process so security is everyone's responsibility we call that devsecops

next we have service level management which focuses on the organization's requirements for a service as defined in a service level agreement or SLA slas are like a contract focused on

measurable outcomes of the service being provided and slas should include clear metrics that Define availability for a service and exactly what availability means slas

require routine monitoring for enforcement and this typically relies on metrics designed to indicate whether the service level is being met and as a consumer or customer of a CSP your Cloud

infrastructure decision should be made with your applications SLA in mind because defining the levels of service for your Cloud infrastructure is usually up to the cloud service provider in

public Cloud environments so you need to make sure that the IaaS PaaS and SaaS components that you choose as part of your solution architecture have slas

that will support your overall service SLA but customers should monitor their csps compliance with the slas promised with

the various services including service credits for SLA failures oftentimes your csps provide Financial backing for their SLA so you want to make sure that those

credits are received when they're due availability management now a service may be up that is to say the service is reachable but not available meaning it

cannot be used and availability and uptime are often used synonymously but there's an important distinction availability means the

specific service is up and usable for example authentication and authorization must work and request must be fulfilled if the users can't get their requests fulfilled the service is

not truly available many of the same concerns that an organization would consider in business continuity and Disaster Recovery apply equally in availability management
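the uptime versus availability distinction just described can be measured separately a minimal sketch with synthetic probe results follows

```python
# sketch of the uptime vs availability distinction: a probe can reach
# the service (up) while requests still fail (not available)
# (probe results below are synthetic)

def summarize(probes):
    """probes: list of (reachable, request_fulfilled) tuples.
    Returns (uptime fraction, availability fraction)."""
    n = len(probes)
    uptime = sum(1 for reachable, _ in probes if reachable) / n
    availability = sum(1 for reachable, ok in probes if reachable and ok) / n
    return uptime, availability

# service reachable for all ten probes, but auth failures broke two requests
up, avail = summarize([(True, True)] * 8 + [(True, False)] * 2)
print(up, avail)  # -> 1.0 0.8  the service was up but not truly available
```

a consumer monitoring a CSP's SLA would track the availability figure not just the uptime figure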

bcdr plans aim to quickly restore service availability after adverse events so bcdr and availability management align in many respects other concerns

and requirements like data residency or the use of encryption can complicate availability but customers have to configure services to meet their requirements this responsibility lies

firmly on the customer or consumer in most cases cloud consumers do have a role to play in availability management how much depends on the cloud service category

whether it's IaaS PaaS or SaaS we know the customer has the most control in the IaaS category and to round out section 5.3 we have capacity management so one of the core

concerns of availability is the amount of service capacity available compared with the amount being subscribed to for example if a service has a hundred active users but only 50 active licenses

available that means the service is over capacity and 50 users will be denied service which calls attention to the fact that capacity issues can be physical such as

the underlying csp's infrastructure or logical issues like licenses for example measured service is one of the core elements of cloud computing so metrics that illustrate demand for the service

are relatively easy to identify generally responsibility for capacity management belongs to the CSP at the platform level but belongs to the customer for deployed

apps and services so the customer the consumer must choose appropriate service tiers and design their app to scale to meet demand the cloud provides the perception of

unlimited capacity but in reality it is oversubscribed by design and our CSP must monitor how much is too much oversubscription and here again customer versus CSP

responsibility will vary in accordance with the cloud service category whether we're talking about IaaS PaaS or SaaS
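the logical capacity example above one hundred active users against fifty licenses is simple to express as a check a toy sketch follows

```python
# sketch of a logical capacity check like the licensing example above:
# 100 active users against 50 licenses puts the service over capacity

def capacity_status(active_users, licenses):
    """Return whether demand exceeds logical capacity and by how much."""
    denied = max(0, active_users - licenses)
    return {"over_capacity": denied > 0, "denied_users": denied}

print(capacity_status(100, 50))  # -> {'over_capacity': True, 'denied_users': 50}
print(capacity_status(40, 50))   # within capacity, nobody denied
```

the same shape of check applies to physical capacity metrics since measured service makes demand figures easy to collect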

up next is 5.4 support digital forensics in this section we'll talk about forensic data collection methodologies evidence management

and collecting acquiring and preserving digital evidence the ccsp exam does not expect that you're a digital forensics expert but it does assume that you're familiar with the

special challenges of forensic data collection in the cloud as well as the standards that outline best practices and processes for digital forensics

you may see questions on e-discovery so e-discovery or electronic Discovery is the identification collection preservation analysis and review of

electronic information e-discovery is usually associated with the collection of electronic information for legal purposes or in response to a security breach

there are roughly a half dozen forensic standards you should be familiar with for the exam most of these are ISO IEC standards so that's the International Organization for standardization and

there's one from the cloud security Alliance you should be familiar with as well so we'll go through each of these at a high level so ISO IEC 27037 is a guide for

collecting identifying and preserving electronic evidence next ISO IEC 27041 a guide for incident investigation

27042 a guide for digital evidence analysis and 27043 a guide for incident investigation principles and processes

ISO IEC 27050 is a four-part standard within the iso IEC 27000 family of information security standards it offers a framework governance and best

practices for forensics e-discovery and evidence management if you were going to do your own investigation this would be a standard to be familiar with but generally speaking hiring an outside forensic

expert is the best path for most organizations if they don't have a forensic expert on staff now the CSA security guidance comes in

domain 3 legal issues contracts and electronic Discovery this offers guidance on legal concerns related to security privacy and contractual

obligations it covers topics like data residency and liability of the data processor role the data processor role has a lot of

responsibility around data security storage tools collection and transfer next let's talk about some considerations around forensic data

collection number one logs are essential all activity should be logged including time the person performing the activity the tools that are used the system or data

inspected and the results you should document everything including physical or logical system States applications running any physical

configurations of Hardware as well as any security around the system including physical security physical access the person on the other side of the

conversation may be an opposing party trying to identify instability in the system state or a lack of physical security that places the data that's

been collected into question and consider volatility volatile data data that is not on durable storage requires special handling and priority

generally speaking you want to collect data from volatile sources first an example of a volatile data source would be system memory which is going to

be potentially erased over time or on system reboot we'll get a bit deeper on volatility a bit later in this section when we talk about data collection handling and

preservation there are also a handful of evidence collection best practices called out that you should be familiar with

utilize original physical media so use physical media whenever possible as copies may have unintended loss of Integrity but this is during collection

verify data integrity at multiple steps using hashing especially when you're performing operations such as copying files you'll want to run a hash on the

original file and then a hash of the file after the copy to ensure that they match that there's no loss of Integrity or data in that copy
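the verify-by-hashing workflow described above can be sketched in a few lines of Python this is a minimal illustrative example and the file names and sample data are hypothetical

```python
import hashlib
import shutil

def sha256_of(path, chunk_size=65536):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Create a stand-in "evidence" file for demonstration purposes.
with open("evidence.bin", "wb") as f:
    f.write(b"sample forensic data")

before = sha256_of("evidence.bin")          # hash the original at collection
shutil.copyfile("evidence.bin", "evidence_copy.bin")
after = sha256_of("evidence_copy.bin")      # hash the copy after the operation
assert before == after, "integrity check failed: copy does not match original"
```

in practice you would record both digests in your evidence documentation so the match can be demonstrated later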

follow documented procedures dedicated evidence custodian logging all activities leaving systems powered on to preserve volatile data

and establish and maintain Communications with relevant parties such as the CSP internal legal counsel at your organization and law enforcement in the case of a security breach for

guidance and requirements the considerations we covered right there are enough to send many organizations to an external forensics expert we will talk about communication with

relevant parties and communication plans in section 5.5 next we're going to move into evidence management and I want to just refresh your memory on a couple of Concepts we touched on in domain two the

first is legal hold which involves protecting any documents that can be used in evidence from being altered or destroyed sometimes called a litigation hold if

you see litigation hold that's just another name for legal hold generally speaking and another very important concept when it comes to forensic data collection chain of

custody this tracks the movement of evidence through its collection safeguarding and Analysis life cycle it documents each person who handled the evidence the date and time it was

collected or transferred and the purpose for that transfer it confirms appropriate collection storage and handling
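a chain of custody record can be pictured as an append-only log of who handled the evidence when and why here's a minimal Python sketch the field names and handlers are hypothetical not any standard format

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)          # frozen: entries cannot be altered once created
class CustodyEntry:
    handler: str                 # who handled the evidence
    action: str                  # e.g. "collected", "transferred", "analyzed"
    purpose: str                 # why the evidence changed hands
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Entries are only ever appended, preserving the full handling history.
chain = []
chain.append(CustodyEntry("A. Analyst", "collected", "initial seizure of VM snapshot"))
chain.append(CustodyEntry("B. Examiner", "transferred", "handoff for lab analysis"))

for entry in chain:
    print(f"{entry.timestamp} {entry.handler}: {entry.action} ({entry.purpose})")
```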

and chain of custody is of Paramount importance in legal proceedings scope of evidence is very important as well so this describes what is relevant when collecting data and in a

multi-tenant cloud environment this can be particularly important because collection from shared resources like cloud storage May expose other customers data if they did not fully

erase their data before they left and if the CSP does not adequately manage scope they may expose sensitive data of an unrelated company potentially

exposing you the consumer to unneeded liability the scope of data collection is definitely going to be a bit more challenging in the cloud for this reason alone

but it's certainly not the only challenge so the cloud comes with several challenges when it comes to forensic investigation and data collection so one of these is data location do you know

where the data is hosted and the laws of the country it's hosted in many cloud services store copies of data in multiple locations rights and responsibilities so what

rights for forensic data collection are listed in your CSP contract and if it requires CSP cooperation what is their response SLA

tools are your forensic tools suitable for a multi-tenant environment for a highly virtualized environment what is your organization's liability if you unintentionally capture another

customer's data on a shared resource because of inadequate tooling decisions that you made remnants of a previous customer's data on physical storage for example

because as we've discussed in previous domains the consumer is responsible for Data Destruction and if they don't practice

crypto shredding they may leave remnants behind for the next customer to find in a situation such as forensic data collection but these aren't the only considerations

so laws and regulations also impact a consumer's ability to perform forensic data collection in the cloud because cloud data is generally subject to the data sovereignty of the

region or country where it's stored and many countries have laws requiring businesses to store data within their borders so when we talk about that problem of knowing where your data is

many times the law requires you to know where your data is

the U.S introduced the clarifying lawful overseas use of data or CLOUD act in 2018 due to problems the FBI faced compelling Microsoft to hand over data

stored in Ireland it aids in evidence collection in the investigation of serious crimes that was the intent and in 2019 the U.S and the UK signed a

data sharing agreement to give law enforcement agencies in each country faster access to evidence held by cloud service providers so while there are certainly many laws

that are targeting consumers commercial companies with customer data there are also laws targeting the CSPs so a lot to consider there which means verifying audit and forensic data

collection rights with your CSP to ensure you understand your rights and their legal obligations before you sign contracts is very important going a bit further down the road here

forensic investigators should know their legal rights in every jurisdiction every region or country where the organization hosts data in the cloud some countries will not allow e-discovery from outside

their border so you may be required to hire an agent in country now chain of custody in traditional forensic procedures is easy to maintain an accurate history of time location

and handling due at least in part to the fact that we know where the data is located in the cloud physical location is somewhat obscure however investigators can

acquire a VM image from any workstation connected to the internet and because your cloud data centers where you store data may be hosted around the world time stamps and offsets

can be more challenging due to the varying Geographic locations and maintaining a proper chain of custody is thus more challenging in the cloud because we have to record that sequence

of events who collected the data how they collected the data what data was collected and when they collected that data and from where but with the variance in physical

location it means the where and when can be more challenging to track and breach notification laws vary by country and regulations for example gdpr

requires notification within 72 hours and that applies to all with EU customers even if it's a third party breach so if you are a company located

in the United States and your CSP experiences a breach in the EU you are responsible for notifying your customers if that breach impacts their data so

remember a first party breach begins within the company a third party breach would be outside the company but data residency and data sovereignty are certainly more challenging in the cloud

due to the many potential locations of our data centers and the fact that many cloud services will make multiple copies of our data and store them in multiple regions for

redundancy reasons so once we've managed to collect the data let's talk about the utility of evidence or the usefulness of that evidence so evidence should possess five

attributes to be useful needs to be authentic the information should be genuine and clearly correlated to the incident or crime to which it's attributed

needs to be accurate the truthfulness and integrity of the evidence should not be questionable evidence should be complete and all evidence should be presented in its

entirety even if it might negatively impact the case that's being made in fact it's illegal in most jurisdictions to hide evidence that disproves a case

evidence should be convincing so the evidence should be understandable and clearly support an assertion being made and that is to say evidence

presented to support a fact should clearly support that fact chain of events presented from audit logs for example should be clear and show the chain of events clearly

admissibility so evidence must meet the rules of the body judging it such as a court and the bar for admissibility will vary based on the body judging it hearsay evidence which is indirect

knowledge of an action or evidence that has been tampered with may be thrown out by a court courts typically set a higher standard than Regulators for

admissibility of evidence chain of custody is going to be one of the many key elements

that support or negate admissibility as for the requirements for evidence to be admissible in a court of law going one level deeper evidence must be relevant

to a fact at issue in the case makes a fact more or less probable essentially the fact must be material to the case

the evidence must be competent which means reliable it must be obtained by legal means evidence obtained by illegal means will

be thrown out by a court to Prevail in court evidence must be sufficient which means convincing without question leaving no doubt now we're going to shift gears and talk

evidence acquisition and preservation so let's start with the importance of collecting evidence so as soon as you discover an incident you should begin to collect

evidence and as much information about the incident as possible evidence can be used in subsequent legal action or in finding an attacker's identity evidence

can also assist you in determining the extent of damage and as we discussed some evidence is volatile it's not going to be there forever it will disappear over time and

with system reboot so collecting evidence as soon as you know there's an incident is very important control is important using a cloud service involves loss of some control and different

service models offer varying levels of access on the whole we have the most control as a customer or consumer in the IaaS model and the

least in the SaaS model multi-tenancy and shared resources are a factor because evidence collected while investigating a security incident May unintentionally include data from

another customer this is most likely if the CSP or delegate were performing forensic recovery from a shared physical resource like a storage array

if a previous customer did not encrypt their data or was not holding the keys there's potentially some residual data there that could be uncovered in a

forensic data collection operation data volatility and dispersion so Cloud environments support High availability techniques for data like data sharding

sharding breaks data into smaller pieces storing multiple copies of each piece across different data centers I've mentioned data volatility a few times so let's unpack that a bit further

so to determine what happened in a system you need a copy of data and volatility tells us which evidence we should collect first if it disappears in

a system reboot or a power loss or the passage of time that evidence is volatile so here's the approximate order of

volatility it starts with CPU cache and register content it goes down through routing tables live network connections memory so your RAM

temporary files all the way down to data stored on archival media and backup so pretty common sense in most cases here for the exam remember that volatile perishable information should be

collected first you don't need to remember the order of volatility I just wanted to make sure that the concept of volatility is crystal clear this is a niche topic of one subject

within a large exam but bottom line remember that volatile information should be collected first now there are four General phases of digital evidence

handling starting with collection examination analysis and Reporting and there are a number of concerns in the collection phase relevant to the ccsp exam

proper evidence handling and decision making should be part of the incident response procedures and training for team members who are performing response activities

and let's talk evidence preservation and the concerns in preserving evidence so this is really about how to retain logs Drive images VM snapshots any other

data set for Recovery or internal and forensic investigations protections for evidence storage would include locked cabinets or safes dedicated or isolated storage facilities

environment maintenance making sure that we maintain proper temperature and humidity access restrictions and documentation or tracking of activity so when evidence is

checked out there should be a record of that when evidence is checked in there should be a record of that what and who and when and blocking interference by shielding

data from wireless access and that speaks to Integrity if someone came in to investigate review evidence with a mobile device they could potentially access that data

through Wireless that's where a faraday cage comes into play if evidence is being examined that examiner would not have a mobile phone with them and bottom line here you

collect originals and you work from copy so you don't impact the Integrity of the original unintentionally let's take just a minute or two and look at a few examples of areas and

considerations around evidence acquisition and most of these examples are applicable to the IaaS model so we have the disk also known as the hard drive so

was the storage media itself damaged Random Access Memory which is volatile memory used to run applications the swap or page file which is used for

running applications when RAM is exhausted also itself somewhat volatile the operating system was there corruption of data associated with the OS or applications

the device when police are taking evidence from laptops desktops and mobile devices they take a complete system image and the original image is kept intact

a copy is installed on another computer hashed and then analyzed to find evidence of any criminal activity are you seeing the underlying theme Here

of integrity so continuing on firmware embedded code this is going to be more applicable to the virtualization host which could be reverse engineered by an attacker so an

original source code would have to be compared to code in use that really steps out of our role as the consumer down to the CSP who's hosting in a public Cloud scenario

so in this case we'd need a coding expert to compare both sets of source code in a technique called regression testing because rootkits and backdoors are concerns in this area but in a public Cloud situation this would

essentially be a third party breach this would be the csp's responsibility to deal with so we'd hope that they have incident response procedures in place and are going to be cooperative with us

if we're impacted as a customer a snapshot if the evidence is from a virtual machine a snapshot of the virtual machine can be exported for investigation

cache special high speed storage that can be either a reserved section of main memory or an independent high-speed storage device doesn't matter if it's memory cache or disk cache both are

going to be volatile Network so the OS includes command line tools like netstat that provide information that could disappear if you reboot the computer so you'll want to

run those commands soon after the incident is discovered like RAM connections are volatile and lost on reboot and in the tcpip world may be lost

before that just through the passage of time artifacts any piece of evidence including log files registry hives DNA fingerprints or fibers of clothing normally invisible to the naked eye

we're focused on cloud computing here so you know which of these apply to cloud computing but now you know what an artifact is Integrity so I've mentioned Integrity as an underlying theme here so hashes

when either the forensic copy or the system image is being analyzed the data and applications are hashed at collection the hash can be used as a checksum

to ensure Integrity later files can be hashed before and after collection to ensure a match on the original hash value to prove data

Integrity I even use hashing when I am archiving my system log files when I archive my syslog I hash the file I'm about to upload to the cloud before I

copy it and after I copy it so I know that the hashed copy of the file that arrived matches what I sent from the syslog server so it

ensures integrity provenance so data provenance effectively provides a historical record of data and its origin and forensic activities performed on it it's similar to

data lineage but it also includes the inputs entities systems and processes that influence the data in case you're not familiar with data lineage that's the process of tracking the flow of data

over time showing where the data originated how it's changed and its ultimate destination so provenance also shows us what happened to that data the

inputs the entities the systems and the processes that touched it for the exam hashing is Far and Away the most likely of these to appear on the exam so make sure you understand the importance of

hashing in integrity and just some final words on evidence preservation so data needs to be preserved in its original state so that

it can be produced as evidence in court whether that's legal proceedings or if we are pursuing legal action against an

attacker in a data breach original data must remain unaltered and pristine so what is a forensic copy well an image or exact sector by sector copy of a hard

disk or other storage device taken using specialized software preserving an exact copy of the original disk whether that is a physical disk or a copy of our

virtual VM disk which is stored on physical shared storage at our CSP deleted file slack space system files

and executables and documents renamed to mimic system files and executables are all part of a forensic image and putting a copy of the most vital evidence in a

WORM drive write once read many will prevent any tampering with the evidence because you cannot delete data from a WORM drive you could also write protect or put a legal hold

on some types of cloud storage and on that topic I want to jump into a live CSP subscription and look at log collection and retention across a few

different cloud services to talk about how that relates to preserving potential evidence we'll switch over to a browser here and take a look at Microsoft Azure

subscription so that's my primary CSP and I'll take a look at a storage account here so I'm just going to look at a pretty standard storage account and right up here under overview I can see the activity log

and if I look at the logs here I can export my activity logs it tells me that when I configure this export that I can

export different log categories and you'll notice I can choose my destination here so I can archive to another storage account so that's a form of retention I can send this over to a

log analytics workspace which would allow me to then query on that data potentially generate alerts and I can even send over to some other sources not

so important here including a partner solution so if I had a third party Sim I might send over there now in this case if I don't know exactly what these

categories mean the CSP Microsoft in this case gives me a link to learn more about those categories and they are well explained here in a web page so that's

really helpful and if we just back out of there I want to scroll down and look under redundancy here I talked more than once about the

challenge of just knowing where your data is located so in the redundancy area here I see this is a geo-redundant storage account and the CSP provides me a map to show me where my data is hosted

so I see here that my primary region is South Central U.S and its geo-redundant partner for Disaster Recovery is North Central U.S so they generally pick a

backup more than 300 miles away as we talked about back in domain one so let's switch gears here and jump over

to a PaaS service we're going to look at SQL server and here under the overview I do see an activity log

area and I can export my activity logs and again a similar interface as we saw with the IaaS service with the VM I have an option here to configure some

category exports to in fact the same locations now jumping over one level down I want to look at a virtual machine

so an IaaS scenario and there's a logging option here it appears to be more Performance Based so this is more about monitoring system

health and performance than the activities of the VM itself and we can look around here to see if we have absolute consistency in the types of

logs and sure enough we do see activity logging available here in that same export function so it's fairly uniform across the various services but if we

come over here to our cloud-based SIEM Microsoft Sentinel which is what I've shown you in previous examples if we go down to the data connectors which is how

we ingest data into a typical SIEM we can see here that if I just search for example by the word windows I see I can ingest Windows Firewall data so

that's going to give me a lot of relevant information for my SIEM in terms of the events what's coming into the Ingress and what's going out the egress

if I also search on the word security I see that when I scroll down here I can gather security events from a Windows

system using the Azure monitoring agent if I search for SQL I can expect to find an option for my Azure SQL databases so the path service

has an option here for data ingestion that eases that burden and just Switching gears one more time I want to go back to the storage account so if I decide that I'm going to Archive

data in a storage account and I do this frequently myself for example with syslog data so I come to my storage account here if I go down to Containers

which is where I would store data think of it as a folder if I come into my backups folder here for example you'll see that I can establish an access

policy and there's an option for immutable blob storage so storage that cannot be altered ensuring the Integrity is intact and I see here I can use this

for time-based retention which is something I do all the time so if I want to keep my archive logs for seven or eight years I'm going to set a retention here based on the number of days up to

the level of retention that I'm comfortable with but there's also an option here which is the legal hold a concept we talked about briefly in

this domain and prior and with a legal hold we'd typically associate this with one or more tags which is an identifier like a case ID in a legal case
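the WORM and legal hold semantics we just looked at can be simulated in a few lines of Python to make the behavior concrete this is a conceptual sketch only not a CSP API

```python
class WormStore:
    """Toy model of write-once storage with legal hold tags (illustrative only)."""

    def __init__(self):
        self._objects = {}
        self._holds = {}   # object name -> set of legal hold tags (e.g. case IDs)

    def write(self, name, data):
        if name in self._objects:
            raise PermissionError(f"{name} is write-once; overwrite denied")
        self._objects[name] = data

    def set_legal_hold(self, name, tag):
        self._holds.setdefault(name, set()).add(tag)

    def delete(self, name):
        if self._holds.get(name):
            raise PermissionError(f"{name} is under legal hold; delete denied")
        del self._objects[name]

store = WormStore()
store.write("syslog-2023-05.log", b"archived log data")
store.set_legal_hold("syslog-2023-05.log", "case-1234")
# A second write to the same name, or a delete while the hold is set,
# would now raise PermissionError.
```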

so point being in the cloud we have many options for data logging log aggregation and log retention it's up to you as the

consumer to be familiar with the options your CSP makes available to you and that brings us to 5.5 manage communication with relevant parties we'll touch on our communication

strategy with vendors customers Partners regulators and other stakeholders which will vary by situation and while best practices certainly exist for

communication plans we'll talk about the influence of company security policies and Regulatory Compliance requirements on our communication plan and just like disaster recovery and business

continuity come with a plan communication starts with a plan a plan that details how relevant stakeholders will be informed in the event of an incident like a security breach

that would include a plan to maintain confidentiality such as encryption to ensure that the event does not become public knowledge at least before we are ready that plan should include a contact list

that includes stakeholders from the government police customers suppliers our internal staff and the Order of Operations compliance regulations like gdpr include notification requirements

like relevant parties and timelines for example gdpr has a 72-hour clock on a security breach that involves sensitive data

I want to just unpack confidentiality one more time so confidentiality amongst internal stakeholders is important so our external stakeholders are informed

in accordance with our plan so they are not surprised by a news report this sort of breach could have far-reaching consequences it can affect

the stock price in the short term and it can impact customer and partner trust in the long term so I mentioned a plan needs to include

our stakeholders who we need to inform and manage and other stakeholders is that nebulous category we should unpack a stakeholder is any party with an interest in an Enterprise

for example corporate stakeholders include our investors employees customers and suppliers our supply chain and regulated Industries like health care and banking are going to have

requirements driven by the regulations governing their industries that will influence who we need to have on this list to communicate with the first step in establishing communication with vendors is an

inventory of critical third parties upon which the organization depends this inventory will drive vendor risk management activities in two ways really some vendors may be critical to the

company's ongoing function like the CSP for example Others May provide critical input to a company's Revenue generation such as a partner who processes credit card

transactions and vendor Communications may be governed by contract and service level agreement now as Cloud consumers most companies will be the recipient of communication

from their chosen csps and while customers should Define communication slas where they can they should at least monitor those of the big csps which are typically going to be

predefined Partners often have a level of access to a company's systems similar to that of the company's own employees but they are not under company control

communication needs to evolve with your partners through that relationship communication and onboarding will evolve into a maintenance mode as we have a day-to-day relationship with that

partner and then there's certainly an off-boarding communication sequence which may involve handoff to a new partner so you'll notice not all of the communication we're talking about here

is strictly incident driven but I think we can safely assume there's going to be a bit of an incident driven focus on the exam then we have Regulators most Regulators

have developed cloud-specific guidance for the compliant use of cloud services and your regulatory standards like gdpr HIPAA and PCI DSS all have communication

requirements that are well defined other stakeholders the company may need to communicate with include the public investors and the company's cyber insurance company in a crisis and

procedures for the order and timing of contact should be created so we know who we're contacting first and what that flow looks like incidentally I'm seeing increasingly that cyber insurance

providers require that they are the first point of contact in the event of a security incident in which case they may actually drive the communication sequence for you

I don't expect you're going to be tested on that last bit I just wanted to throw that Real World experience in there for you so who is responsible for communication

well if a customer has impacted data the company is always responsible for timely communication with that customer if we have a data breach the company must contact customers it doesn't matter

whose fault it is this is true regardless of the cloud service model that's in use and even if the CSP is at fault so the bottom line with timely

communication and shared responsibility is it's not really a shared responsibility so let's talk about shared responsibility for security so application security for example

responsibility varies by model and we always as a customer have the most responsibility in the IaaS model and our responsibility is less as we move through PaaS and on to software as a

service network security same thing although you'll notice there that our customer responsibility is nil for PaaS and SaaS

host infrastructure our service provider is dealing with all of the physical and most of the host level requirements even in the IaaS model

physical security that's the CSP responsibility across the board data classification is customer driven as a customer we have to classify and

protect our data that's our responsibility and then identity and access management you see that the customer has at least shared responsibility throughout all

Cloud models the bottom line here is that the customer has responsibility throughout the process when it comes to

application security and access and data protection and identity and access management the customer always plays a role in Access Control and data security

and the customer is always in the driver's seat and fully responsible when it comes to timely communication with impacted parties in the event of a

security incident up next is 5.6 manage security operations in this section we'll touch on the security operations center

intelligent monitoring of security controls with a look at firewalls IDS and IPS honeypots and more log capture and Analysis and here we're going to get further into the SIEM

function and the log management function related to the SIEM and we'll finish up 5.6 with a look at Incident Management and vulnerability assessments but let's

start with the security operations center this is a support unit designed to centralize a variety of tasks and Personnel at the Tactical and operational levels

we typically refer to the security operations center as the SOC and it's worth noting that both the CSP and a consumer should typically have a SOC function

so what are the key functions of the security operations center well they include functions like threat prevention threat detection Incident Management

continuous monitoring and Reporting alert prioritization and compliance management now your CSP dashboards like Azure status the AWS service Health

dashboard and the Google Cloud status dashboard give us a look at service Health but also the scope of the services that our major csps are

managing through their SOC function so here's another opportunity for a quick real world glance let's take a look at those Cloud health and service

status dashboards from our major csps I'll switch to a browser and we'll take a look first at the AWS Health dashboard and you see here I can look at service

Health by region by date and then by service listed in alphabetical order and that's without being logged in so anyone can see that

status and I'll switch over to the Azure status portal and if I scroll down here again I

see some service Health by region and services listed in alphabetical order and if I then switch over to Google

cloud service Health we get again a similar view and you will find that for some aspects of service Health like the aspects of service

Health that apply to your subscription and your resources you may have to log in with read access or better next we have monitoring which is really

a form of auditing that focuses on active review of log file data and monitoring can take many different perspectives we can hold subjects accountable for their actions for

example another aspect of monitoring would focus on system performance and another facet of monitoring would include tools like IDS or SIEMs to

automate monitoring and to provide real-time analysis of events from our logs and in the case of a SIEM potentially correlating events across those logs
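to make cross-log correlation concrete here's a toy Python sketch of the kind of grouping a SIEM performs at a much larger scale the log entries the field names and the five minute window are made up for illustration

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical sample events from two different log sources.
firewall_log = [
    {"time": "2023-05-01T10:00:10", "src": "203.0.113.7", "event": "denied"},
    {"time": "2023-05-01T10:01:02", "src": "203.0.113.7", "event": "denied"},
]
auth_log = [
    {"time": "2023-05-01T10:03:45", "src": "203.0.113.7", "event": "failed_login"},
    {"time": "2023-05-01T11:30:00", "src": "198.51.100.9", "event": "failed_login"},
]

def correlate(logs, window_minutes=5):
    """Return source IPs seen in more than one log source within the window."""
    by_src = defaultdict(list)
    for source_name, entries in logs.items():
        for entry in entries:
            ts = datetime.fromisoformat(entry["time"])
            by_src[entry["src"]].append((ts, source_name, entry["event"]))
    correlated = {}
    for src, hits in by_src.items():
        hits.sort()
        sources = {name for _, name, _ in hits}
        span = hits[-1][0] - hits[0][0]
        if len(sources) > 1 and span <= timedelta(minutes=window_minutes):
            correlated[src] = hits
    return correlated

suspicious = correlate({"firewall": firewall_log, "auth": auth_log})
```

here the repeated firewall denials and the failed login from the same source within a few minutes would surface as one correlated finding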

now monitoring security controls used to be an activity closely related to formal audits that occur relatively infrequently often annually or less

but monitoring is something we now do continuously and the concept of continuous monitoring is described in NIST

SP 800-37 the risk management framework which specifies the creation of a continuous monitoring strategy for getting

real-time risk information Network firewalls web app firewalls your intrusion detection and prevention systems provide critical sources of information for our network operations

center or Security Operations Center teams and your firewalls and your IDS and IPS devices are processing a lot of information and they should be continuously monitored to ensure they

are functional so we don't miss any important events monitoring for functionality would include monitoring log generation centralized log aggregation

and the device analysis of those logs let's take a look at a few different firewall Concepts and we'll start with Hardware versus software firewalls so a hardware firewall is a piece of

purpose-built network Hardware it may offer more configurable support for Lan and Wan connections versus a software firewall it's also typically going to have better throughput versus software

because it's Hardware designed for the speeds and connections common in an Enterprise Network now in the cloud a hardware firewall is virtual it's a network virtual Appliance

or NVA for short and a software-based firewall is software that you would install on your own Hardware you'd put a software firewall on a physical or virtual server for example

now this is going to give us flexibility because we can place firewalls anywhere we'd like in the organization simply by installing that software on servers workstations and you can run any sort of

host based firewall as long as you have a server to install it on the downside that comes with that flexibility is host based software firewalls are more vulnerable to being disabled by attackers you know

oftentimes they simply have to disable a service to disable that firewall if they can establish a presence on that host an application firewall caters

specifically to application Communications layer 7 in The OSI model this could be any application traffic web traffic is very common an example would be a web application

firewall or WAF for short and a host based firewall is a software firewall an application installed on a host OS like a Windows or Linux client

or server operating system you'll find host based firewalls on both the client and server flavors of Windows and Linux and then virtual firewall so in the

cloud firewalls are implemented as virtual Network appliances or VNA just a moment ago I called that a network virtual Appliance or NVA that's not an accident I wanted you to see it both

ways you'll see it referred to differently in different scenarios with different csps and these are available both from the

CSP directly and often from third-party Partners commercial firewall vendors that will be listed in some sort of Online Marketplace attached

to that csp's cloud and then we have stateless and stateful firewalls so stateless means the firewall can watch Network

traffic and restrict or block packets based on source and destination addresses or other static values it's not aware of traffic patterns or data flows typically it's faster and it

performs better under heavier traffic loads because it's doing less work frankly a stateful firewall can watch traffic streams from end to end it's aware of

communication paths and it can Implement various IP security functions such as tunnels and encryption and it's better at identifying unauthorized and forged Communications
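to make the stateless idea concrete, here's a minimal sketch in Python of rule matching on static header fields only; the rule format and field names are invented for illustration, not any vendor's syntax:

```python
# Toy stateless packet filter: each rule matches static header fields only;
# no connection state is tracked (hypothetical rule format for illustration).
RULES = [
    {"action": "allow", "dst_port": 443, "proto": "tcp"},
    {"action": "allow", "dst_port": 53,  "proto": "udp"},
]
DEFAULT_ACTION = "deny"  # implicit deny, as on most real firewalls

def filter_packet(packet):
    """Return 'allow' or 'deny' based solely on this packet's headers."""
    for rule in RULES:
        if all(packet.get(k) == v for k, v in rule.items() if k != "action"):
            return rule["action"]
    return DEFAULT_ACTION

print(filter_packet({"proto": "tcp", "dst_port": 443}))  # allow
print(filter_packet({"proto": "tcp", "dst_port": 23}))   # deny
```

a stateful firewall would additionally track which connections are established, which is exactly the extra bookkeeping this sketch omits.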

but greater work means a stateful firewall is going to require greater processing power on the whole there are several varieties of modern firewalls available in the cloud a couple that you're likely to encounter at some point

in your career are the web application firewall which protects web applications by filtering and monitoring https traffic between a web application and the internet

typically protects web applications from common attacks like cross-site scripting cross-site request forgery SQL injection the owasp top 10 threats in fact some of

these firewalls will come pre-configured with owasp rule sets what they call the owasp core rule sets we actually looked at that earlier in the series and then there's the Next Generation

firewall which is a deep packet inspection firewall that moves beyond port and protocol inspection and blocking and it adds application Level inspection

intrusion prevention and it typically brings intelligence from outside the firewall generally in the form of a threat intelligence feed that feeds real-time threat information or near

real-time threat information to the firewall enhancing its ability to block traffic coming from malicious sources and that ability to block traffic from

malicious sources with that real-time information is something you will find commonly in the native firewall features that you get on your major CSP platforms

like Azure and AWS you may see these two abbreviated the web app firewall is commonly called a WAF and the Next Generation firewall

may show up as ngfw you should also be familiar with the different types of intrusion detection and prevention so an intrusion detection system or IDs

generally responds passively by logging and sending notifications it will identify a problem and notify us but it typically does little or nothing to correct it

an IPS system or intrusion prevention is placed in line with the traffic and includes the ability to block malicious traffic before it reaches the target
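the passive-versus-inline distinction can be sketched like this; the `is_malicious` check is a stand-in for real signature or anomaly analysis:

```python
# Toy contrast between detection (passive) and prevention (inline).
def is_malicious(packet):
    return packet.get("signature") == "known-bad"  # placeholder check

def ids_process(packets):
    """IDS: passive -- all traffic passes; malicious traffic only raises alerts."""
    alerts = [p for p in packets if is_malicious(p)]
    return packets, alerts          # traffic untouched, alerts logged

def ips_process(packets):
    """IPS: inline -- malicious traffic is dropped before reaching the target."""
    passed  = [p for p in packets if not is_malicious(p)]
    blocked = [p for p in packets if is_malicious(p)]
    return passed, blocked

traffic = [{"id": 1}, {"id": 2, "signature": "known-bad"}]
print(ids_process(traffic)[0])  # both packets delivered, one alert raised
print(ips_process(traffic)[0])  # only packet 1 delivered
```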

and then we have the host based variety so HIDs are host based intrusion detection systems which can monitor activity on a single system only the

drawback is that attackers can often discover and disable these and you may have some HIDS that are Hardware based and others that are software based but the host based aspect

can be considered a weakness and then we have network based intrusion detection which can monitor activity on a network nids tends to not be as visible to

attackers incidentally the same distinction exists for intrusion prevention system so you'll also see reference to hips and

nips host-based IPS and network-based IPS next we have a honey pot so a honey pot is a system that has

pseudo flaws and fake data designed to lure Intruders as long as the attackers are in the honey pot they're not in our live Network it's worth touching on the goals of a

Honeypot a bit more specifically so it's to lure bad people into doing bad things with some limits you want to entice folks not entrap them you're not allowed

to let them download items with enticement for example allowing download of a fake payroll file would be what we call entrapment in U.S law

so to be clear the goal of a Honeypot is to distract from real assets and to isolate that threat in a padded cell until you can track them down and incidentally a group of Honey pots is

called a honeynet now monitoring tools like a security information and event management system or SIEM use AI and machine learning to automate investigations and response so I

wanted to touch on these briefly to make sure you understand the difference so artificial intelligence focuses on accomplishing smart tasks combining machine learning and deep learning to

emulate human intelligence machine learning is a subset of AI that involves computer algorithms that improve automatically through experience

in the use of data machine learning gets smarter by processing data and then deep learning is a subfield of machine learning concerned with algorithms

inspired by structure and function of the brain called artificial neural networks then we have user and entity behavior analytics or UEBA which is based on the

interaction of a user that focuses on their identity and the data they would normally access during a normal day it tracks the devices the user normally uses and the servers that they normally

visit and then we have sentiment analysis which uses artificial intelligence and machine learning to identify attacks cyber security sentiment analysis

can monitor articles on social media look at the text and analyze the sentiment behind the Articles and over time can identify a user's attitudes towards different aspects of cyber

security now we're going to move into log capture and Analysis in the context of the tooling we use in our security operations center that allows our organization to Define our incident

analysis and response procedures in a digital workflow format so we're integrating our security processes and Tooling in a central location our sock

leveraging response automation using machine learning and artificial intelligence in our SIEM and soar functions now these are faster than

humans in identifying and responding to True incidents it reduces mean time to detection and accelerates security response it uses playbooks that define

an incident and the action that will be taken the capabilities are going to vary by the situation and the SIEM vendor and your CSP for that matter but over time

it should produce faster alerting and response for the SOC team so let's break these down we have SIEM security information and event management which is a system that collects data

from many other sources within the network so it is ingesting logs from many different sources and it provides real-time monitoring analysis and

correlation as well as notification of potential attacks and then we have the soar function security orchestration Automation and response which is centralized alert and

response automation with threat specific playbooks I use that Playbook term loosely many solutions use that term Playbook but it's going to be a bit different based

on vendor and the response may be fully automated or single click what we'd call semi-automated some of these systems will do the analysis and the correlation and recommend an action but require you to

single click approve before it implements that action before it takes that response many providers deliver these SIEM and SOAR capabilities together in a single solution
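a playbook-driven response like the one described can be sketched as a lookup table; the alert types, action names, and approval flag here are all hypothetical, just to show the fully-automated versus single-click model:

```python
# Hypothetical playbook table: alert type -> response actions and whether
# the playbook may run fully automated or needs single-click approval.
PLAYBOOKS = {
    "phishing": {"actions": ["quarantine_email", "reset_password"], "auto": True},
    "malware":  {"actions": ["isolate_host", "open_ticket"],        "auto": False},
}

def respond(alert, approved=False):
    """Return the actions taken for an alert, honoring the approval model."""
    play = PLAYBOOKS.get(alert["type"])
    if play is None:
        return ["escalate_to_analyst"]   # no playbook: human triage
    if play["auto"] or approved:
        return play["actions"]           # fully or semi-automated response
    return ["await_approval"]            # semi-automated: wait for the click

print(respond({"type": "phishing"}))               # ['quarantine_email', 'reset_password']
print(respond({"type": "malware"}))                # ['await_approval']
print(respond({"type": "malware"}, approved=True)) # ['isolate_host', 'open_ticket']
```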

and with very few exceptions today they use AI machine learning and threat intelligence so I promised we'd dig a bit deeper into the SIEM function in domain

five but first I want to just do a quick recap of our introduction to the Sim function back in domain two so number one our logs are worthless if you

do nothing with the log data logs are made valuable by review whether it's human manual review or it's automated that is they're valuable only if the organization makes use of them to

identify activity that is unauthorized or compromising the SIEM function can help solve some of these problems by offering some key features log centralization and aggregation data

integrity and normalization so normalizing our logs into a common format that we can then hunt and query through automated or continuous monitoring

alerting and investigative monitoring some automation of the investigation process so let's take a look at some key Sim features necessary to optimize event

detection and visibility into scale security operations first and foremost is Log centralization and aggregation so rather than leaving log data scattered around the environment on

various hosts the Sim platform can gather logs from a variety of sources operating systems applications that can be PaaS and SaaS applications for that

matter Network appliances user devices providing a single location to support investigations and with all that log data in one location you can imagine that data Integrity is very important

the Sim should be on a separate host with its own Access Control preventing any single user from tampering with that log data so separate host really speaks

to physical or at least logical isolation and that's where a cloud SIM can solve that problem for us a cloud-based Sim is going to use cloud

storage will have its own access control and can ensure that we have that access control boundary and normalization a Sim can normalize incoming data to ensure the data from a

variety of sources is presented consistently and we can query across all of that log data from those many different sources automated or continuous monitoring so

sometimes referred to as correlation Sims use algorithms to evaluate data and identify potential attacks or compromises
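one classic correlation rule, sketched under the assumption of time-ordered, already-normalized events: several failed logins followed by a success from the same source IP suggests a successful brute-force attempt (the threshold and field names are illustrative):

```python
from collections import defaultdict

# Toy correlation rule: N+ failed logins followed by a success from the
# same source IP suggests a successful brute-force attempt.
def correlate_brute_force(events, threshold=3):
    failures = defaultdict(int)
    incidents = []
    for e in events:                      # events assumed time-ordered
        ip = e["src_ip"]
        if e["outcome"] == "failure":
            failures[ip] += 1
        elif e["outcome"] == "success":
            if failures[ip] >= threshold:
                incidents.append({"rule": "brute-force", "src_ip": ip,
                                  "failed_attempts": failures[ip]})
            failures[ip] = 0              # reset the counter after a success
    return incidents

events = ([{"src_ip": "10.0.0.5", "outcome": "failure"}] * 4
          + [{"src_ip": "10.0.0.5", "outcome": "success"}])
print(correlate_brute_force(events))  # one brute-force incident for 10.0.0.5
```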

so because we have centralized log data that's been normalized into a common format that we can query across the automated investigative capabilities are

going to have greater context because it can look at entity activity across our endpoints in our identity system with our applications on our Network

so it's going to do a better job of capturing the full scope of a potential security incident and then can alert us automatically generating alerts like emails or tickets when action is

required based on analysis of that incoming log data but not everything can be automated and that's why a Sim should support investigative monitoring so when manual

investigation is required the Sim should provide support capabilities like querying log files and generating reports but the Sim is giving

us visibility across our entire technology estate our data apps identities endpoints and infrastructure through that log centralization and

aggregation that broad Sim visibility across the environment means better context in log searches and security investigations and it really allows us

to get our arms around the full scope of a potential security breach and the key to that visibility is log collection of course it will vary by Sim solution

but let's talk through some common log collection methodologies we see with a Sim so a Sim typically has built-in log collector tooling that can collect

information both from a syslog server and multiple other servers often we can place an agent on a device that can collect log information and parse and restructure the data and then

pass that to the SIM for aggregation ingestion might be with an agent such as on a Windows or a Linux server or a syslog server we can capture that syslog

data and forward that and in some cases we'll see that data capture that log aggregation happening through an API pretty common with SaaS services that API

is our route for aggregation but that aggregation is really correlating and aggregating events so that duplicates are filtered and a better understanding of network events is achieved to help

identify potential attacks and then packet capture we can capture packets and analyze them to identify threats as soon as they reach your network providing immediate alert to the

security team if desired and while I see that called out in some discussions you go packet capture is really more of a network construct we're going to see that packet level Focus

happening with our IDs and IPS Solutions and rolling some of that data up through our logs so with the SIEM we're really looking at the data coming from those

devices that are at the front lines of the packet analysis and then data inputs our SIM can collect a massive amount of data from a variety of sources like our

network devices our identity management system our mobile device management system or CASB the cloud access security broker or extended detection

and response function at our endpoints and really many more so let's just talk about log ingestion

with a Sim here's an example so we have our Sim it can collect logs from our SQL servers for example both IaaS and PaaS now how that happens is going to vary by the

solution for IaaS commonly we'll see an agent installed on that system and for PaaS we may be consuming those logs from storage or through an API

our identity as a service solution typically via an API connector our Network virtual appliances quite commonly we're collecting via a syslog

connector of some sort one of the solutions I work with very commonly we install an agent on that syslog server and the agent then proxies that syslog

data over to our Sim or xdr solution that's our endpoint activity data when we see a best and sweet solution where the vendor that

gives us our xdr functionality and our sembender are one and the same sometimes we'll see that the xdr will simply forward alert data over then we have our infrastructure as a service our is our virtual machines

we're often collecting via a local agent then we have our CASB solution our Cloud access security broker that's usage alerts events

related to how users are accessing and using our data with apps so the data we collect from a CASB might be events it might be alerts it might be incidents

and again A lot of times what we're collecting there depends on if the SIEM vendor and the CASB vendor are the same vendor for example if Microsoft Azure is where

you Source your sim solution Microsoft Sentinel and then Microsoft's CASB solution since you have a single vendor there the vendor knows what data they've already processed on the CASB side so

maybe they'll just forward over the resulting alert instead of sending over all the Raw event data again just an example the ccsp is cloud

agnostic CSP agnostic we're not focused on one vendor here but with all that functionality it's no wonder that the Sim is really a core tool of the security operations center
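here's a minimal sketch of that centralization and normalization idea: two differently shaped sources, a syslog-style text line and a cloud-API-style event, mapped into one common schema so a single query works across both (both input formats are invented for illustration):

```python
import re

# Hypothetical syslog-style line: "<timestamp> <host> <process>: <message>"
SYSLOG_RE = re.compile(r"^(?P<ts>\S+) (?P<host>\S+) (?P<proc>\w+): (?P<msg>.*)$")

def normalize_syslog(line):
    m = SYSLOG_RE.match(line)
    return {"timestamp": m["ts"], "source": m["host"],
            "event": m["proc"], "detail": m["msg"]}

def normalize_api_event(event):
    # cloud-API-style dict with different field names, mapped to the same schema
    return {"timestamp": event["time"], "source": event["resourceId"],
            "event": event["operationName"], "detail": event.get("status", "")}

# once normalized, the same query runs across both sources
records = [
    normalize_syslog("2023-04-01T10:00:00Z fw01 kernel: dropped packet from 10.9.9.9"),
    normalize_api_event({"time": "2023-04-01T10:00:02Z", "resourceId": "vm01",
                         "operationName": "SignIn", "status": "Failure"}),
]
print(sorted(r["source"] for r in records))  # ['fw01', 'vm01']
```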

let's take a minute and go a little deeper on some of the log file data that our Sim solution might be ingesting because in any given environment data is recorded in a variety of databases and different types of log files

system logs security logs application logs firewall logs proxy logs syslog and that data should be protected by centrally storing that log data and

using permissions to restrict access that's one of the functions of our Sim and archived logs should be set to read only to prevent modification but log files play a core role in

providing evidence for our investigations you want to be familiar with the many different types of log files a typical Sim solution might ingest a network log this log file can identify

the IP and Mac addresses of devices that are attached to your network this data is commonly sent to a syslog server often a central syslog server our

network-based intrusion detection and prevention can be important in identifying threats and anomalies from these log files logs from a proxy server can reveal

our users who are visiting malicious sites intentionally or otherwise the collective Insight may be useful in stopping a distributed denial of service attack when we have eyes on all of that network

data across our Network segments and devices we can see common patterns in there it allows our sem to investigate with greater context web server logs can provide many types

of information about web requests so evidence of potential threats and attacks will be visible here information collected about each web session

IP address request date and time the method that we see in HTTP like get or post the browser that's used what we call the user agent and the HTTP status

code for example the 400 series HTTP response codes are client-side errors and the 500 series response codes are server-side errors but these logs must be fed

to our Sim in order for it to analyze that data and these files may exist on client systems as well as server systems so sending these to a Sim can help

establish that Central audit Trail across all of our endpoints to give us greater visibility greater context into the scope of the attack so on a Windows system for example we'll

have a system log that contains information about Hardware changes updates to devices date and time synchronization Group Policy application we have the

application log that has information about software applications when they're launched success or fail warnings about potential problems or errors and then the security log that contains

information about successful logins as well as unauthorized attempts to access the system and its resources it can identify attackers trying to log into

your computer systems it captures information on file access and can determine who has downloaded certain data you will find these log files with these

names in The Event Viewer on any Windows client or server machine as the administrator of your client and server systems you are responsible for dialing up or down the level of security

event logging to make sure that you are at the very least capturing the minimal audit Trail virtually every DNS server will log server level activity like Zone transfer

DNS server errors caching and DNS SEC events most of your DNS servers will have query logging disabled by default due to the sheer volume of DNS queries

that come in to the typical DNS server authentication logs capture information about login events logging success or failure and can come from a variety of sources

depending on your identity and access model those sources might include the radius server for your VPN access active directory domain controllers and Cloud

providers like Azure active directory and Google's identity provider if you have a hybrid Cloud environment and log files related to voice applications can even be valuable in

identifying anomalous activity unauthorized users and even potential attacks I'm a bit in the weeds here but your VoIP and call managers capture information on the calls being made and

the devices they originate from and they may capture call Quality by logging a mean opinion score and jitter data and significant loss in quality May indicate

attack so typically from these call managers we would want to be capturing these potential events and alerts indicative of attack this may come from a syslog but we'd want to capture some

of that data and each call often is logged inbound and outbound the person making the call and receiving that including long distance calls that goes beyond what you typically

collect in a Sim but you have another level you could go to for some manual investigation and your session initiation protocol information this is used for internet-based calls and the

log files generally show the invite that initiates the connection the 100 series provisional events that relate to the ringing and the 200 OK event

which is followed by an acknowledgment a large number of calls not connecting May indicate attack at the end of the day VoIP phones are embedded systems it's an embedded

computer of sorts that must be secured and the logs generated here can be significant we might just be capturing this data via syslog but it's another source of

information another source of context for our Sim solution okay moving on to reporting so a Sim typically includes dashboards and

collects report data that can be viewed regularly to ensure that the policies have been enforced and the environment is compliant and they can also highlight whether the Sim system is effective and working

properly are the incidents raised true positives or are we seeing a lot of false positives in there false positives may arise because the wrong input filters are being used or

the wrong hosts are being monitored or some hosts are not being monitored that should be and Sim Solutions will typically have dashboards that include views into the status of log ingestion as well as

potential security concerns identified through correlation and Analysis of the logs the Sim has ingested so this is a good opportunity to take

another detour and a quick look at a real Sim solution we're going to take a look at a cloud-based Sim just to give you some context into SIM functionality

in case you're not familiar so I'll just switch over to the Microsoft Azure portal here and I'm going to look at Microsoft Sentinel which is Microsoft's cloud-based Sim and I'm looking at the

Sentinel portal here just the central dashboard and if I scroll down I see functions here such as a view into incidents under threat management I see

a hunting interface where I can make raw queries against that normalized data and in fact this solution provides many canned queries that I can simply enable

or pull in from a gallery and if I scroll down a bit further we see data connectors this is what I wanted to talk to you about and that is that log collection Focus so you'll notice here

it mentions there are 127 connectors they appear to be listed alphabetically I can filter them by providers for example and you'll notice a wide variety

of providers here now I'll just search on some keywords to show you some themes let's search on the word firewall we see here Azure firewall Microsoft native

firewall the Microsoft WAF but also a variety of third-party Solutions I can also search for syslog and just as you'd imagine there's a syslog connector

that allows me to ingest data from my central syslog solution and you'll even notice here in the A's that we see Amazon web services so I can

ingest data from another csp's platform for common web servers there's Apache scrolling down here Azure active directory I can get into my identity

provider some DDOS data from Azure DDOS logs from my key Vault solution and what you'll find with some of these connectors depending on the solution

you're working with if it's collecting data from a service on the same CSP platform the connector may just require a couple of clicks so I'm going to look at the Azure active directory connector

and I'll open that connector page and see what sort of configuration is required and I notice here it's quite simple I can tell it which Azure ad logs

I would like to collect and it's going to begin collecting those for me all I have to do is apply those changes so that's just a quick look but you see now that

with a modern Enterprise Sim solution with a cloud Sim in particular we're going to have some built-in connectivity to a wide variety of sources that make

that wide ingestion of log data much less work for us we're going to shift gears now and talk incident response and the ccsp common

body of knowledge explores nist SP 800-61 the computer security incident handling guide so that's the methodology I'd suggest you focus on for the exam

now first party incidents are internal to the organization these are incidents that begin inside our organization and we are principally responsible for

handling third party incidents affect an external entity like our CSP or a vendor in our supply chain we certainly may have a role in incident response there

although it may be simply as an informed party the first phase in the nist model is preparation this refers to the organization's preparation necessary to

ensure they can respond to a security incident including tools processes competencies and readiness so those details should be documented in

a security incident response plan that is regularly reviewed and updated typically plan review happens multiple times per year in a walk through what we

call a tabletop exercise where we walk through the plan together in a sample scenario to make sure that the steps we need documented in our response are

present and we are familiar with our role then we have detection and Analysis the activity to detect a security incident in a production environment and to

analyze all events to confirm the authenticity of the security incident in other words do we really have a security incident on our hands

next is containment eradication and recovery so in containment the required and appropriate actions taken to contain the security incident based on the

analysis done in the previous phase in detection and Analysis this limits the damage the scope of the incident we're containing eradication is the process of

eliminating the root cause of the security incident with a high degree of confidence and during the incident our focus is on protecting and restoring business

critical processes recovery should happen after the adversary has been evicted from the environment and known vulnerabilities

have been remediated recovery Returns the environment to its normal fully functional original state prior to the incident

and a post-mortem analysis is often performed after the recovery of a security incident and actions performed during the process

are reviewed to determine if any changes need to be made in the preparation or detection and Analysis phases basically how can we improve our incident response

process and those Lessons Learned Drive continuous Improvement ensuring effective and efficient incident response we're going to talk about vulnerability

assessments now but first I want to touch on our right to audit in the cloud so when we're talking about vulnerability scanners the use of scanners and Pen testers may be limited

by your csp's terms of service and you should understand the type and frequency of testing the CSP allows now the good news is csps typically have

penetration testing and scanning Rules of Engagement in fact I'll just switch over to a browser here and I'll show you these if you just go search for AWS pen

testing Rules of Engagement do the same for Azure and Google you'll find Pages like this this is aws's customer support policy for penetration testing

the Microsoft version in fact is listed as the pen testing Rules of Engagement so this will let you know what is okay and not okay in terms of turning your

vulnerability scanner on your csps platform our vulnerability management process includes routine vulnerability scans and

periodic vulnerability assessment we use a vulnerability scanner a tool that can detect known security vulnerabilities and weaknesses and absence of patches or

weak passwords on the systems in our environment and we can use that scanner to facilitate a vulnerability assessment to extend just beyond technical scans

and include review and audit to detect vulnerabilities and to further assess their severity so going a little deeper on vulnerability scans a scan can assess

possible security vulnerabilities in computers networks and equipment that can be exploited and this scanning can sometimes require authentication for

Access so a credentialed scan is typically a much more powerful version of the vulnerability scan because it has higher privilege than a non-credentialed scan this can spot vulnerabilities that

require privilege like non-expiring passwords a non-credentialed scan has lower privileges than that credentialed alternative and it will identify vulnerabilities that an attacker would

easily find a non-credentialed scan is going to find missing patches some protocol vulnerabilities but the credentialed scan is going to

allow you to go a level deeper and we can perform non-intrusive scans which are passive and merely report vulnerabilities they don't cause damage to your system

we can perform intrusive scans that can cause damage as they try to exploit the vulnerability and should be used in a sandbox not your live production system of course

and then we have a configuration review now configuration compliance scanners like Desired State Configuration in PowerShell for example ensure no deviations are made to the security

configuration of a system it allows us to catch shift and drift in our configuration so to speak but the combination of techniques can

reveal which vulnerabilities are most easily exploitable in a live environment so Network scans these are scans that look at computers and devices on the

network and help identify weaknesses in their security we have application scans so before applications are released coding experts perform regression testing that will

check code for deficiencies but we can also turn a scanner on those applications before they go live web application scans will crawl through a website as if they are a search engine

looking for vulnerabilities can perform an automated check for site and application vulnerabilities like cross-site scripting and SQL injection there are many sophisticated web

application scanners out there owing in part to mass adoption of cloud computing you'll also want to know the difference between static application security

testing and dynamic application security testing for the exam we covered that in a previous domain and common vulnerabilities and exposures and

common vulnerability scoring system so cve and CVSs if you've spent any time in the security World you've probably seen these acronyms

CVSs is the overall score assigned to a vulnerability it indicates a severity and it's used by many vulnerability scanning tools if you're using a vulnerability scanner you're almost

certainly going to see that CVSs scoring metric and cve is simply a list of all publicly disclosed

vulnerabilities included is the cve ID a description dates and comments both of these are used broadly in vulnerability scanners

the CVSs score is not reported in the cve listing you actually have to use the national vulnerability database to find CVSs scores
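the CVSS v3.x qualitative severity bands are fixed by the specification, so mapping a score to a rating is a simple lookup:

```python
# CVSS v3.x qualitative severity ratings, per the FIRST CVSS v3.1 specification:
# 0.0 None, 0.1-3.9 Low, 4.0-6.9 Medium, 7.0-8.9 High, 9.0-10.0 Critical.
def cvss_severity(score):
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_severity(9.8))  # Critical
print(cvss_severity(5.3))  # Medium
```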

the cve list feeds into the national vulnerability database and the national vulnerability database is a database maintained by nist that is

synchronized with the miter cve list I do not expect the exam to go this deep on cve and CVSs I just thought it would be helpful for you to know and

appreciate the relationship between the two so a vulnerability scanner can identify and Report various vulnerabilities before they're exploited

so examples here would be software flaws missing patches open ports services that should not be running weak passwords this is going to help companies avoid

known attacks like SQL injection or buffer overflows denial of service other types of common malicious attacks and that credentialed vulnerability scan

is really going to be the most effective because it gives us more information than any other variety of scan and it's going a layer beyond what a typical attacker will have available to

them in their initial passes in our environment so a scan assesses the possibility of the exploits and when we get that report

we'll see sometimes false positives which is where the scan believes that there is a vulnerability but when we physically check it it's not there a false negative when there is a vulnerability but the scanner doesn't

detect it the true positive which is where the result of the system scan agrees with the manual inspection we perform after the scan but the fact that we have false

positives and false negatives points to the reality that log reviews are very important after the scan it's important we look at the log files and reports that come from our scanner
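
The scan outcomes just described can be sketched as a simple comparison of the scanner's verdict against the manual verification that follows. The function name `classify_finding` is hypothetical, chosen for illustration:

```python
# Minimal sketch of the scan-result outcomes described above: compare the
# scanner's verdict against the result of a manual follow-up inspection.
# classify_finding is a hypothetical helper name, not a real tool's API.
def classify_finding(scanner_flagged: bool, actually_vulnerable: bool) -> str:
    if scanner_flagged and actually_vulnerable:
        return "true positive"    # scan and manual check agree: real issue
    if scanner_flagged and not actually_vulnerable:
        return "false positive"   # scanner reported it, but it isn't there
    if not scanner_flagged and actually_vulnerable:
        return "false negative"   # real vulnerability the scanner missed
    return "true negative"        # nothing reported, nothing present

print(classify_finding(True, False))  # false positive
```

The false-negative case is the one a log review is least likely to surface, which is one reason credentialed scans and multiple techniques are combined.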

and that's it for domain five moving on to our final domain and one of the most important areas of cloud knowledge in my opinion legal risk and compliance and as always we're going to begin with

a look at the exam Essentials those topics the official study guide promises will Factor on exam day and domain six is some of the most important content not only for this exam

but for your cyber security career we'll touch on the different sources of law in the United States we'll have a look at the difference between criminal and civil liability and what liability is

exactly the four elements of the tort of negligence then we'll get into e-discovery issues we'll talk chain of custody when it

comes to digital evidence knowing the purpose of e-discovery the role of iso 27050 and some guidance from the cloud security Alliance

relevant Frameworks that help guide our efforts in e-discovery describing the sensitive information types as well as the major laws that govern

security and privacy in the cloud we're going to take a look at many different Frameworks with heavy focus on digital forensics incident response

and risk management common policies used in an organization's security program we'll spend a fair bit of time on vendors

supply chain and external risk and risk management strategies that an organization May adopt and here we'll spend some time on what we call risk treatment talking through responses to

risk like mitigation avoidance transference acceptance so we'll start with 6.1 articulate legal requirements and unique risks within the

cloud environment here we'll cover conflicting International legislation evaluation of legal risks specific to cloud computing

legal Frameworks and guidelines e-discovery and forensic requirements let's start with conflicting

International legislation it's important to be aware of the various laws and regulations that govern cloud computing and remembering that our presence in the cloud is quite often global

our customers and customer data may be stored in multiple countries and laws can introduce risks to a business fines penalties even the loss of the ability

to do business in a certain place it's important to identify these risks and make recommendations to mitigate them just like any other risk so there's a really easy example I can

cite where two laws conflict in the cloud or at least May conflict in the wrong situation so for example gdpr an EU law that forbids the transfer of data to

countries that lack adequate privacy protections I can promise you the EU is none too excited about sensitive information being transferred to the United States however

the clarifying lawful overseas use of data or Cloud act requires csps like Microsoft Amazon and Google to hand over data to aid in investigation of serious

crimes even if that data is stored in another country and for a customer that raises a very serious question which law prevails when the two are in conflict

things can get complicated here and as with many aspects of security legal compliance requires collaboration legal counsel should be part of the evaluation of any cloud-specific risks

legal requests and the company's response to these remembering that the consumer is responsible for navigating these challenges the CSP will give you third-party audit

documents and other Assurance documents explaining how they will respond in particular situations but ultimately legal responsibility falls to the consumer

and whether we say consumer or customer we're talking about the organization who is the customer of the CSP a couple of high-level concepts related to encryption and privacy I want to

mention so computer export controls U.S

companies can't export certain computer tech to what are deemed Rogue Nations Cuba Iran North Korea Sudan and Syria and the Department of Commerce also

details some limitations on export of encryption products outside the U.S I'm

not sure either of those will come up on the exam it is worth mentioning that the basis for privacy rights in the U.S is the fourth amendment of our constitution

and you'll likely see gdpr multiple times on the exam it's not a U.S law but it's very likely to be mentioned because it is the gold standard when it comes to

privacy protections for users and it applies to any company with customers in the EU so it doesn't matter the country in which the company is based if they

have customers in the EU then they are subject to gdpr regulation if they want to do business in the EU and we'll look at gdpr from a couple of

different angles in this domain so moving on Cloud practitioners do need to be aware of multiple sets of laws and regulations and the risks introduced in conflicting

legislation across jurisdictions so I gave you an example but let's just talk through some of the scenarios where conflicts can come into play copyright and intellectual property law

particularly the jurisdictions that companies need to deal with local versus International to protect and enforce their IP not every country respects intellectual

property rights as the United States does safeguards and security controls required for privacy compliance particularly details of data residency

or the ability to move data between countries as well as varying requirements of due care in different jurisdictions you know as we've talked about a couple

of times already with gdpr we have a pretty high bar of due care around data privacy in the EU

data breaches and their aftermath particularly breach notification a bit later we'll call out the laws that include a breach notification requirement

and finally International Import and Export laws particularly technologies that may be sensitive or illegal to import or export

under various International agreements so when we are consuming services and running subscriptions in multiple countries we need to be familiar

with the guard rails that the laws of each country impose upon us so for the exam you'll want to know the difference between laws regulations

standards and Frameworks so we'll break the difference down here quickly first we have laws which are the legal rules created by government entities

like the U.S Congress

and we have regulations which are the rules created by governmental agencies these will include rules for regulated Industries like financial services and health care

laws and regulations both have to be followed as they can result in civil or criminal penalties for the organization for failing to comply then we have standards which dictate a

reasonable level of performance for example we'll talk a bit later about ISO 31000 which includes several standards around creating and operating

a risk management program they can be created by an organization for its own purposes so internally or they can come from an industry body or a

trade group an external group PCI DSS for example which came from the four major credit card companies coming together to create a standard

and finally Frameworks which are a set of guidelines helping organizations improve their security posture we'll touch on Frameworks for e-discovery for risk management from organizations

like nist from the cloud security Alliance but just commit these Concepts to memory and you're going to see plenty of examples in this session you'll also want to be familiar with

types of law for the exam so for example criminal law contains prohibitions against acts such as murder assault robbery and arson

not our primary focus for the ccsp exam civil law examples would include contract disputes real estate transactions employment matters estate

and probate procedures but contract disputes when we're talking about agreements between an organization and a CSP you can imagine civil law is something we may think about vendor

contracts fall into this category and then there's administrative law policies and procedures and regulations that govern daily operations of government and government agencies

regulations like HIPAA fall into this category and a fourth type of law to be familiar with is constitutional law the U.S

Constitution is the highest possible source of law in the United States and no laws from any other source May conflict with the provisions in the Constitution

in fact if Congress passes a law that is later found to be in conflict with the Constitution the law is declared unconstitutional

and can be struck down by the courts so I quickly want to touch on the seven articles of the U.S Constitution and point to what you want to remember for exam day so article one of the

Constitution establishes the legislative branch of government that includes Our House of Representatives and the Senate Article 2 establishes the Executive Branch the Office of the President

article 3 establishes the judicial branch that's our courts Article 4 defines a relationship between the federal government and state governments

Article 5 creates a process for amending the Constitution itself amendments are not something that happened very often we've had two amendments in the last

53 years article 6 contains the supremacy clause establishing that the constitution is the supreme law of the land and article

7 sets forth the process for initial establishment of the federal government for exam day remember this one the Constitution is

the supreme law of the land what I said in the very first sentence on this topic which is that it's the highest possible source of Law and no laws from other

sources May conflict with the Constitution continuing with types of law we have case law interpretations made by courts over time establish a body of law that other courts May refer

to in making their own decisions and in many cases the case law decisions made by courts are binding on both that court and any subordinate courts those

are lesser courts in the hierarchy of the judicial system and we have common law which is a set of judicial precedents passed down as case law through many generations

and stand as examples cited in future court cases contract law violations of a contract generally do not involve law enforcement agencies so they're treated as a private

dispute between parties and handled in civil court a violation is known as a breach of contract and courts may take action to enforce the terms of a contract if one

of the parties fails to honor the terms of the contract they agreed to and signed related to types of law are types of

legal liability liable means responsible or answerable in law legally obligated and that can mean legal obligation to do something or obligation to not do

something for purposes of our discussion there are two types of legal liability you want to be familiar with criminal liability which occurs when a

person violates a criminal law and civil liability which occurs when one person claims that another person has failed to carry out a legal Duty that they were responsible for

civil cases are brought to court by one party called the claimant who is accusing another party of a violation called the respondent the claimant may be an individual or

Corporation or the government as may be the respondent you're also expected to be familiar with legal risks specific to cloud computing

legal Regulatory and compliance risks in the cloud can be significant for certain types of data or Industries so there are differing legal requirements to consider for example

State and provincial laws in the United States and Canada have different requirements for data breach notifications such as the time frames different legal systems and Frameworks

in different countries in some countries clear written legislation exists in others legal precedent is more important

precedent refers to the Judgment in past cases and is subject to change over time with less advance notice than updates to legislation we talked about precedent when we were

discussing common law and case law in the U.S just a bit earlier

and conflicting laws the European Union's gdpr and the U.S

cloud act directly conflict on the topic of data transfer as we saw in the example we looked at earlier but these unique legal risks specific to the cloud are a direct result of the

global nature of the cloud the fact that as a cloud consumer we're very likely to have data and services residing in data

centers in multiple countries around the world subject to the unique legal Regulatory and compliance restrictions

of the jurisdiction where they reside so a few things to bear in mind when it comes to the bottom line on legal risks specific to cloud computing

responsibility for compliance with laws and regulations researching and planning response in case of conflicting laws ensuring

necessary audit and incident response data is logged and retained and any additional due diligence and due care

are all the responsibility of the cloud consumer the customer there are several legal Frameworks and guidelines you should be familiar with

for the exam that affect cloud computing environment one of those is the organization for economic cooperation and development the oecd which is an international organization

that's comprised of 38 members including the United States but members from around the world and it publishes guidelines on data privacy many of its principles are aligned with European

Privacy Law including consent transparency accuracy security and accountability then there's the Asia Pacific economic

cooperation privacy framework or Apec which is comprised of 21 member economies in the Pacific Rim Apec incorporates many standard privacy practices into their guidance including

preventing harm notice consent security and accountability many of the same standards that we see represented in oecd and gdpr but Apec promotes the

smooth cross-border flow of information between Apec member nations that is the scope of their focus then we have the European Union's gdpr the general data protection regulation

which is perhaps the most far-reaching and comprehensive set of laws ever written to protect data privacy it mandates privacy for individuals it defines a company's duty to protect

personal data and it prescribes punishments for companies violating these laws it includes mandatory notification timelines in the event of data breach

and for this exam I expect you'll need awareness of Standards laws and regulations that include mandatory notification timelines for data breach I

don't believe you'll be quizzed on any specific timeline limits for example in gdpr that timeline is 72 hours I don't believe the exam is going

to get that deep on you gdpr does formally Define many data roles as well related to privacy and security like subject controller and processor we will touch on those later

in this session you will want to be familiar understand the difference and understand who is liable in the event of data breach who the owner is

some additional legal Frameworks and standards likely to get mentioned on the exam include health insurance portability and accountability act commonly referred to Simply as HIPAA

it's a law that regulates privacy and control of health information data in the U.S

the U.S payment card industry data security standard or PCI DSS which is an industry standard for companies that accept process or receive payment card

transactions next is privacy Shield which exists to solve the lack of a U.S equivalent to gdpr which impacts rights and

obligations around data transfer and sarbanes Oxley commonly called Socks there's a law enacted in 2002 and it

sets requirements for U.S public

companies to protect financial data when it's stored and used so this exam does not expect you to be a legal expert but you'll notice here when I'm calling out legal Frameworks standards and guidelines I'm giving you

some identifying characteristics you don't need to be an expert on HIPAA or PCI or SOX to pass this exam but you do need to know what their content

covers and where they apply if you see a question about U.S law and protected health information HIPAA is quite likely going to be an answer if

you hear anything about protecting financial data in a publicly traded company SOX is the first regulation I'd think of you'll also need to know the difference

between statutory Regulatory and contractual requirements statutory requirements are required by

law for example HIPAA gdpr and FERPA are three statutory requirements then we have regulatory requirements which may also be required by law but refer to

rules issued by a regulatory body that is appointed by a government entity fisma and fedramp are two good examples and then we have contractual requirements which are required by a

legal contract between private parties and these agreements often specify a set of security controls or a compliance framework that must be implemented by a

vendor for example the contract may require that we leverage SOC or generally accepted privacy principles or

csa's Cloud controls Matrix PCI DSS is a good example of a contractually enforced regulatory requirement and there are some challenges and

complexities that we need to consider in the cloud especially when it comes to e-discovery and our supply chain so an organization investigating an incident May lack the ability to compel the CSP

to turn over Vital Information needed to investigate this is where a good contract with your CSP is going to be important the information may be housed in a

country where jurisdictional issues make the data more difficult to access like the EU where gdpr applies maintaining a chain of custody is more difficult because there are more

entities involved in the process and their physical locations are more geographically dispersed on the whole three important considerations include

vendor selection architecture and understanding your due care obligations going into the situation as we're evaluating a CSP selection or a vendor

selection we need to think about the architecture that they're working with and our due care obligations because that will impact our ability in an

e-discovery scenario to capture the data we need for response let's unpack those three a bit further starting with vendor selection so when considering a cloud vendor e-discovery

should be considered a security requirement during the selection and contract negotiation phases we know we're going to be limited on our

ability to compel a CSP to produce data during e-discovery unless it is mandated in writing in a contract architecture considerations we know data

residency and system architecture are important because our data is going to tend to be distributed in the cloud we need to think about the impact to

e-discovery proactively such as when designing or deploying a system or a business process so we're thinking about how data privacy

regulations and e-discovery are going to impact us before they are impacting us and due care considerations Cloud security practitioners must inform their

organization of any risks and require due care and due diligence related to cloud computing as security practitioners we need to

ensure the organization is prepared for digital forensics and incident response on the topic of e-discovery it's important to remember that csps may not

preserve essential data for the required period of time to support historical investigations in fact they may not even log all of the data relevant to support

an investigation this shifts the burden of recording and preserving potential evidence onto the consumer that's the theme we're seeing as we move through here right

so consumers must identify and Implement their own data collection there are e-discovery Frameworks that include Cloud specific guidance that may help so let's touch again on some of those

complexities that we see in terms of digital forensics and e-discovery in the cloud and then talk about some of those Frameworks so in the cloud we know it's difficult or impossible to perform

physical search and seizure of cloud resources like storage or hard drives organizations like ISO IEC and the cloud security Alliance provide guidance on

best practices for collecting digital evidence and conducting forensics investigations in the cloud every security practitioner should be familiar with the following standards

even if they don't specialize in forensics we touched on all the relevant standards in domain 5 we're going to revisit them again here in this context

and I did want to call out nist IR 8006 so nist ir8006 cloud computing forensic science challenges so nist IR is an acronym that may not be familiar to you

it stands for nist interagency or internal reports it addresses common issues and solutions needed to address digital forensics and

incident response in Cloud environments so dfir just make sure you're familiar with that acronym for the exam if I were to just quote the summary of

nist ir 8006 from the abstract it summarizes research performed by the members of the nist cloud computing forensic science working group and it Aggregates categorizes and

discusses the forensics challenges faced by experts when responding to incidents that have occurred in a cloud computing ecosystem

in short it is guidance for dfir in the cloud and that's the only net new framework I wanted to call out here so let's revisit

domain five we had ISO IEC 27050 which is a four-part standard within the iso 27000 family of information security standards it offers a framework

governance and best practices for forensics e-discovery and evidence management hiring an outside forensic expert is something we should all recognize as

potentially the best path for many organizations if you don't have an expert on staff an expert makes sense because there are

legal implications in digital forensics such as chain of custody such as how we process the evidence capturing the original but working on a

copy so we don't unintentionally modify the original many details in that process that can go wrong if you don't have the expertise and then there's a cloud security

Alliance Security guidance there's free guidance in there domain three legal issues contracts and electronic Discovery it offers guidance on legal

concerns related to security privacy and contractual obligations it covers topics like data residency and liability of the data processor role

in fact if we just call out all the forensic investigation standards that may come up on the exam you see the iso family here from 27037

27041 27042 27043 and 27050 that we just mentioned and then the CSA guidance so recapping the iso family here 27037

focuses on collecting identifying and preserving electronic evidence 27041 is a guide for incident investigation

27042 covers digital evidence analysis and 27043 covers investigation principles and processes

again you don't have to be an expert on the details of these standards you do need to know in summary the focus of each of these standards so I'm trying to call out the summarization that'll be

relevant for you on exam day that brings us to 6.2 understand privacy issues here we'll take a look at the difference between contractual and

regulated private data country-specific legislation related to private data jurisdictional differences in data privacy which gets interesting in the cloud

where our data is generally hosted in multiple regions in different countries quite often standard privacy requirements

so here we'll dig into gdpr a bit further ISO 27018 as well as the generally accepted privacy principles and we'll take a look at privacy impact

assessments let's start with a look at types of private data at the highest level first we have personally identifiable information or pii

which is any information that can identify an individual name birth date and place social security number biometric data

this is defined by nist special publication 800-122 then we have protected health information or Phi which is health

related information that can be related to a specific person it must be protected by strong controls and access audited

it's regulated by HIPAA and HITRUST HIPAA is the original Healthcare privacy regulation and HITRUST came along later and specifically updated HIPAA

regulations and the third type of private data is payment data So Think Credit Card data allowable storage of information related

to credit card and debit card and transactions is defined and regulated by PCI DSS and it is contractual

it applies to those who are processing the transactions and because it's contractual when you decide to become a credit card processor when you're processing transactions the contract you

sign includes your contractual agreement to be regulated by PCI DSS standards to effectively secure this data the

security team must understand what types of data an organization is processing where it is being processed and any Associated requirements like contractual obligations

and in any cloud computing environment the legal responsibility for data privacy and protection rests with the cloud consumer and the individual in the data

controller role is always responsible for ensuring that the requirements for protection and compliance are met even if that data is processed in a csp's cloud service

the data controller cannot transfer responsibility but risk can be mitigated and you will find that components of a contract may include requirements and

restrictions on how data is processed security controls the deletion of data physical location audit requirements the use of subcontractors and if

subcontractors are allowed it may restrict their physical location and all of these considerations fall back to the

data controller as responsible next we have the Australian Privacy Act which allows that organizations May process data belonging to Australian

citizens offshore at the same time it demands that the transferring entity the data owner must ensure that the receiver of the data holds and processes it in accordance with the principles of Australian

Privacy Law the data owner the controller again is responsible for data privacy compliance is often achieved contractually through contracts that

require recipients to maintain or exceed the data owner's privacy standards however The Entity transferring the data out of Australia remains responsible for any

data breaches by or on behalf of the recipient entities so even if the data owner and their organization have a contract with an entity processing that data they

are responsible for that entity's compliance with these standards so again the data owner the controller can mitigate the risk but they cannot

transfer the responsibility and Canada also has a Privacy Law when it comes to private data that's the personal information protection and electronic documents Act

it's a national level law that restricts how commercial businesses May collect use and disclose personal information and it covers information about an individual that is identifiable to that

specific individual DNA Age Medical Data education employment information any identifying numbers information about their religion

race or ethnic origin financial information it's quite thorough in its coverage of what falls under personally identifiable information and it includes a data

breach notification requirement as well and it's worth noting that the Pepita standard may also be superseded by province specific laws that are deemed substantially similar to the national

law next we have gdpr General data protection regulation this is the law of data privacy in the European Union and it includes the following on data

subject privacy rights the data subject is the individual about whom data is being collected it includes the right to be informed the

right of access the right to rectification the right to Erasure the right to restrict processing the right to data portability the right to object

and rights in relation to automated decision making and profiling so in short this all adds up to a lot of control for the individual to understand

what information is being collected how it is being processed to ask for a copy of that data to ask an entity processing that data to stop and to erase their data

the right to correct any inaccuracies it's really considered the gold standard when it comes to data Privacy Law and other private data types in gdpr race or

ethnic origin political affiliations or opinions religious or philosophical beliefs and sexual orientation to summarize gdpr

it deals with the handling of data while maintaining privacy and rights of an individual it's International because it was created by the European Union which has 27 different countries as its members

and gdpr applies to any company with customers in the EU without regard of where that company is located so if you're a us-based company with

customers in the EU gdpr compliance applies to you and gdpr includes a 72-hour notification deadline in the case of data breach
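
As a quick worked example of that 72-hour window: the notification deadline is simply the moment of discovery plus 72 hours. The timestamp below is made up purely for illustration:

```python
# Worked example of the 72-hour GDPR breach-notification window described
# above: deadline = moment the breach is discovered + 72 hours.
# The discovery timestamp here is a made-up value for illustration.
from datetime import datetime, timedelta, timezone

discovered = datetime(2024, 3, 1, 9, 30, tzinfo=timezone.utc)
deadline = discovered + timedelta(hours=72)
print(deadline.isoformat())  # 2024-03-04T09:30:00+00:00
```

Working in UTC avoids any ambiguity when the organization and the supervisory authority sit in different time zones.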

we'll shift Focus to some us-based laws beginning with the Gramm-Leach-Bliley Act of 1999 which focuses on the services of banks lenders and insurance companies

glba severely limits the services they can provide and the information these entities can share with each other this act consists of three main sections

the financial Privacy Rule which regulates the collection and disclosure of Private Financial information the safeguards rule which stipulates that financial institutions must

Implement Security Programs to protect such information and the pre-texting provisions which prohibit the practice of pre-texting which is accessing private information

using false pretenses in other words when these entities are accessing your private information they must State a true and accurate reason for that access

next we have privacy Shield an international agreement between the U.S

and the European Union which allows the transfer of personal data from the European economic area to the U.S by U.S

based companies but is not an indicator of gdpr compliance so organizations under privacy Shield commit to seven principles of the

agreement notice Choice security access accountability for onward transfer data

integrity and purpose limitation and finally recourse enforcement and liability so all things said privacy Shield extends transparency and control

to the data subject similar to what we see in gdpr next we have the stored Communications

Act of 1986 an early effort in data privacy in the electronic realm it created privacy protection for electronic communications like email or other digital Communications stored on

the internet it effectively extends the Fourth Amendment of the U.S Constitution to the electronic realm so the fourth amendment

is where individual privacy has its root the Fourth Amendment details the people's right to be secure in their persons houses papers and effects against unreasonable searches and

seizures so this act outlines that private data is protected from unauthorized access or interception by private parties or the government

next we have the health insurance portability and accountability Act of 1996 commonly known as HIPAA which implements privacy and security

regulations requiring strict security measures for hospitals Physicians and insurance companies HIPAA covered entities are those organizations that collect or generate

protected health information or PHI under HIPAA there are separate rules for privacy security and breach notification

and the flow of these rules down to third parties is important because that tells us that when data is transferred it does not relieve the data controller of

responsibility under HIPAA Phi may be stored by cloud service providers provided that the data is adequately protected and finally we have the clarifying

lawful overseas use of data or Cloud act which aids evidence collection in investigations of serious crimes it was created in 2018 due to the

problems the FBI faced forcing Microsoft to hand over data stored in Ireland in the prosecution of a crime in the United States the cloud act essentially requires

us-based companies to respond to Legal requests for data no matter where the data is physically located so it's not hard to imagine how the cloud act could certainly come into

conflict with the eu's gdpr which country or countries have jurisdiction and therefore which laws apply in data security may depend on the

location of the data subject which is the individual about whom data is being collected the data collector the cloud service provider

subcontractors processing that data or even the headquarters of the entities involved and this raises some legal concerns these can impact the

utilization of a particular cloud service provider add costs and time to Market and drive changes to technical architectures required to deliver the services

in other words which laws are going to apply in a given situation may substantially impact how we deliver a service and from where

and may significantly impact the cost and level of effort based on changes we make to Technical architectures and legal red tape we never replace compliance with

convenience when evaluating Services as this increases risks so even if it proves inconvenient or expensive we can never

skimp on compliance because many privacy laws impose fines or other action for non-compliance that will far outpace the money we save let's shift gears and have a look at a couple of those data privacy

standards called out in the syllabus starting with ISO IEC 27018 which was published in July 2014 as a

code of practice extending the iso 27002 controls for PII in the public cloud adherence to the Privacy principles in the 27000 family enables customer trust in a CSP

major csps like Microsoft Google and Amazon all maintain ISO 27018 compliance which can provide a high level of assurance so digging into some of the principles

in 27018 consent personal data obtained by a CSP may not be used for marketing purposes unless expressly permitted by the subject

a customer should be permitted to use a service without requiring this consent control customers shall have explicit control of their own data and how that

data is used by the CSP transparency csps must inform customers of where their data resides and any subcontractors that may process their personal data

communication auditing should be in place and any incidents should be communicated to customers and audit companies the CSP in this case

must subject themselves to an independent audit on an annual basis and that's the key phrase there independent audit that annual audit from an

independent and trustworthy Source takes us to a high level of assurance with the iso IEC 27018 standard next let's talk generally accepted

privacy principles GAPP is a framework of privacy principles created by the American Institute of certified public accountants

GAPP is widely incorporated into the SOC 2 framework as an optional Criterion and organizations that pursue a SOC 2 audit can include these privacy controls if it's appropriate whether or

not it makes sense will generally depend on the type of service they're providing the principles we see in the generally accepted privacy principles are similar

to ISO 27018 which is an optional extension of the controls defined in ISO 27002 and an audit of these controls results

in a report that can be shared with customers or potential customers who can use it to assess a service provider's ability to protect sensitive data you'll

remember from previous domains where we went to the CSP portals and we pulled down a SOC 2 type 2 audit it can increase assurance

now I want to cover with you the categories of the 10 main privacy principles covered in the generally accepted privacy principles not because you need to memorize these for the exam

but understanding these principles will make tackling data privacy questions on the exam easier and it's going to make you better at your job going forward so let's get into it here we'll start with

management the organization defines documents communicates and assigns accountability for its privacy policies and procedures

remember responsibility goes back to the data controller to the owner notice the organization provides notice of its privacy policies and procedures the organization identifies the purpose for

which personal information is collected used and retained choice and consent the organization describes the choices available to the individual and secures implicit or

explicit consent regarding the collection use and disclosure of the personal data collection personal information is collected only

for the purposes identified in the notice provided to the individual and use retention and Disposal the personal information is limited to

the purposes identified in the notice the individual consented to which covers why the org can retain data and when they must dispose of that data but you should notice some themes in

here when we're looking at these standards and the laws around data privacy gdpr may be the gold standard but you'll notice some themes in terms of the Privacy principles that Frameworks like

the generally accepted privacy principles put out here so these are the first five and moving on we have access the organization provides individuals with access to their personal information for

review or update disclosure to third parties personal information is disclosed to third parties only for the identified purposes with implicit or explicit consent of the

individual security for privacy personal information is protected against both physical and logical unauthorized access

quality the organization maintains accurate complete and relevant personal information that is necessary for the purposes identified and monitoring and enforcement the

organization monitors compliance with its privacy policies and procedures and it also has procedures in place to address privacy related complaints and disputes
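principles like use retention and disposal lend themselves to mechanical enforcement here's a minimal sketch of that idea (the purposes, field names, and retention periods are hypothetical illustrations, not from any standard) flagging records held past a purpose-bound retention period:

```python
from datetime import date, timedelta

# Hypothetical retention rules: purpose -> maximum retention period.
RETENTION_RULES = {
    "billing": timedelta(days=7 * 365),   # e.g. tax-driven, long retention
    "marketing": timedelta(days=365),     # consent-driven, short-lived
}

def disposal_due(collected_on: date, purpose: str) -> date:
    """Return the date by which a record must be disposed of."""
    return collected_on + RETENTION_RULES[purpose]

def overdue_records(records: list[dict], today: date) -> list[dict]:
    """Flag records held past the retention period for their stated purpose."""
    return [r for r in records
            if disposal_due(r["collected_on"], r["purpose"]) < today]

records = [
    {"id": 1, "purpose": "marketing", "collected_on": date(2021, 1, 1)},
    {"id": 2, "purpose": "billing",   "collected_on": date(2021, 1, 1)},
]
print([r["id"] for r in overdue_records(records, date(2023, 1, 1))])  # → [1]
```

the point is simply that retention and disposal obligations can be expressed as data plus a routine check rather than left to memory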

we see a lot of themes here very similar to the rights outlined for data subjects in gdpr which is a good thing so six seven eight nine and ten right here

and again you don't need to memorize these as the GAPP privacy principles but understanding these Concepts as they broadly apply across many laws

Frameworks and standards it's going to make the exam easier and it's going to make you more effective in your job going forward to polish off section 6.2

we have the Privacy impact assessment so what is a Pia it is designed to identify privacy data being collected processed or stored by a

system and to assess the effects of a data breach so when is a Pia necessary well several privacy laws explicitly require pias as a planning tool for

identifying and implementing required privacy controls that would include gdpr and HIPAA conducting a Pia typically begins when a

system or process is being evaluated so before implementation however evolving privacy regulation often necessitates assessment of existing systems

to conduct an effective Pia you have to define the assessment scope the data collection method and plan for data retention and in fact the International Association of privacy professionals has

published guides and resources related to privacy efforts including conducting a privacy impact assessment and that brings us to section 6.3

understand audit process methodologies and required adaptations for a cloud environment there's quite a lot of ground to cover in section 6.3 we'll talk about internal

and external audit controls the impact of audit requirements identifying Assurance challenges of virtualization in Cloud

types of audit reports restrictions of audit scope statements Gap analysis audit planning

internal information security management system or isms internal information security control system

policies the identification and involvement of relevant stakeholders specialized compliance requirements for highly regulated Industries

and finally the impact of distributed Information Technology models really speaking to the geographically diverse nature of the cloud we'll start with a look at a few core

auditing Concepts beginning with the question what is auditing so auditing is the methodical examination of an environment to ensure compliance with regulations detect abnormalities

unauthorized occurrences or outright crimes the process of auditing is a detective control frequency is based on risk

and the degree of that risk also affects how often an audit is performed so when we think about external independent audits often annual is the

frequency but internal audits should be happening much more often on the whole so the organization relies on internal audits to identify issues before we expose our environment

to external auditors and audits are an element of due care security audit and Effectiveness reviews are key elements in displaying due care because without them Senior Management

would likely be held accountable and liable for any asset losses that occur that due care obligation is very important we have to demonstrate that we're acting with common sense prudent

management and taking responsible action to address risk and some of those due care obligations that come with regulation roll up to your Executives in terms of

responsibility just as the data controller is responsible for any data breach and ensuring that due care is taken to

secure data ultimate responsibility for compliance of an organization rolls up to its leadership and security and audit reviews serve important internal functions they help

ensure that management programs are effective and being followed they're commonly associated with account management practices to prevent violations of least privilege or need to know principles

can also be performed to oversee many programs and processes as a layer of governance patch management vulnerability management change management

configuration management all important processes that impact security and all processes that can be audited or reviewed on a periodic basis to ensure

they are still relevant and effective in our environment and just a side note from The Real World about controlling access to audit reports because audit reports often

contain sensitive information they often include the purpose and scope of the audit and reveal the results that were discovered and we may not want that to be widespread knowledge

throughout the organization they can include sensitive information like problems standards causes and recommendations and details about security deficiencies that

have been discovered for example so only people with sufficient privilege and need should have access for example senior security administrators would see

the full detail of an audit particularly if they are responsible for closing the gaps your Senior Management would get a high level summary Senior Management would want to know if

deviations or deficiencies have been discovered and that there was a game plan to close those gaps so the internal auditor acts as a trusted advisor to the organization on

risk educating stakeholders assessing compliance compliance May mean company policies or Regulatory Compliance the definition is going to vary based on the company and

the environment and an internal audit can provide more continuous monitoring of control Effectiveness and policy compliance more

so than an annual audit and it enables the organization to catch and fix any issues before they show up on a formal audit report internal audits can also mitigate Risk by examining Cloud

architectures to provide insights into an organization's Cloud governance data classification strategy identity and access management Effectiveness Regulatory Compliance

privacy compliance cyber threats and overall security posture an internal auditor is an independent entity though who can provide facts

without fear of reprisal and some legal and Regulatory Frameworks require the use of an independent auditor others demand a third-party auditor but that's an important implementation detail that

even an internal auditor should be independent which in this case means essentially free to speak their mind the requirement to conduct audits can

have a large procedural and financial impact on a company as well in regulated Industries for example we see numerous auditing requirements for Banks critical

infrastructure providers and health care so more Auditors and more specialized audit requirements are going to increase that cost with multinational companies audit complexity may be higher due to

conflicting requirements conflicting laws for example and in large environments we'll see representative Samples used to assess

compliance on a manageable scale so a random sample rather than an explicit check of every one of a hundred servers we'll see a representative sample of 20 of those servers pulled for example to

ensure that configuration is consistent across the sample multi-region data dispersion in the cloud and dynamic VM failover across hypervisors can definitely also

complicate the audit process for the simple reason that it can be difficult to locate exactly where that virtual infrastructure was hosted so getting to the audit Trail itself can

be a challenge with that being said you may see questions around Assurance challenges with virtualization and Cloud on the exam because the cloud is made possible

by virtualization technologies that enable Dynamic environments needed for a global provider platform and it's that Dynamic nature that can make audit very challenging because depending on the

cloud architecture employed the cloud security professional may need to go through multiple layers of auditing and to be effective the auditor must understand the virtualization

architecture of the cloud provider in fact it will be absolutely necessary in tracing the true sequence of events and finding that true audit trail

so the provider the CSP really owns the audits of controls over the hypervisor so Microsoft Amazon Google they're basically in control of the logging and monitoring of the physical

virtualization infrastructure and the customer has VMS deployed on top of that hardware and those are usually owned managed and audited by the customer the cloud consumer let's switch gears and

talk through a few types of audit reports some audit standards and we'll start with a statement on standards for attestation engagements the ssae 18 is a set of Standards defined by the American

Institute of CPAs it's designed to enhance the quality and usefulness of system and organization control or sock reports it includes audit standards and

suggested report format to guide and assist auditors and you want to be familiar with the SSAE report types so there's the SOC 1 which deals mainly with financial

controls and these are used primarily by CPAs auditing financial statements where you want to focus is on the SOC 2 so there's the SOC 2 type 1 which

is a report that assesses the design of security processes at a specific point in time there's the SOC 2 type 2 often written

as type 2 with Roman numerals assesses how effective those controls are over time by observing operations for at least six months

it often requires an NDA in order to see that report due to sensitive contents in fact you'll see that with your major csps as a customer you'll have access to a SOC 2 type 2 report since you can't

perform a direct audit but you will typically have to agree to an NDA before that report is served up to you in the cloud portal then there's the SOC 3 report which

contains only the auditor's General opinions and generally non-sensitive data and is shareable publicly so the SSAE is us-based of course but

SOC 2 has become something of a de facto global standard when it comes to audit especially in the technical realm and in the cloud

SOC 2 type 2 gives us that high Assurance we're looking for as a cloud consumer and next we have the international standard on Assurance engagements the

isae this is the international auditing and Assurance Standards Board which issues the isae report and this board

and its standards are similar to what we see in the SSAE the ISAE 3402 standard is roughly equivalent to the SOC reports produced under SSAE 18

just used less frequently then we have the cloud security Alliance which has the security trust assurance and risk certification program or Star

program it's called and this can be used by cloud service providers Cloud customers Auditors Consultants it's designed to demonstrate compliance to a

desired level of assurance and star consists of two levels of certification which provides increasing levels of assurance for breaking that down just a bit

further level one is self-assessment a complementary offering that documents the security controls provided by the CSP level two would be a third party audit which requires the CSP to engage

an independent external auditor to evaluate the csp's controls against the CSA standard so of course that level 2 external audit is going to be stronger

as it's conducted by an external definitely independent trained qualified auditor audit scope statements provide the reader with details on what was actually

included in the audit and what was not an audit scope statement generally includes a statement of purpose and objectives the scope of the audit and

explicit exclusions the type of audit security assessment requirements assessment criteria and the rating scale that's going to be used in the report

the criteria for acceptance expected deliverables so what are the outputs of the audit and classification which is going to determine who gets access

how restrictive we are with visibility of the outcome of this audit and setting parameters for an audit is known as audit scope restrictions

so who determines audit scope well audit scope is usually a joint activity performed by the organization being audited and their auditor

and that's not to say that that auditor won't have their limits especially in a third party audit so why limit the scope of an audit well audits are expensive Endeavors that can

engage highly trained and highly paid content experts auditing of systems can affect system performance and in some cases require the downtime of production systems

and a new system not yet in production without all the planned controls in place is not ready to audit anyway and in other cases the cost of implementing controls and auditing some systems is

too high relative to the revenue the service generates next up is gap analysis which identifies where an organization does not currently meet requirements and provides important

information to help the it organization remediate gaps particularly before a third party audit the main purpose is to compare the organization's current

practices against a specified framework and to identify gaps between the two and it may be performed by either internal or external parties and that is to say

some organizations especially in regulated Industries will hire an external auditor to come assess their Readiness before the third party

independent auditor comes in to perform the actual audit the choice of which is usually driven by cost and the need for objectivity

so know when a gap analysis is useful on exam day you know as a precursor to a formal audit process so the organization can close gaps before that third party

external audit or when assessing the impact of changes to Regulatory and compliance framework which introduce newer modified requirements
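at its core a gap analysis is a comparison of two control sets the framework's requirements and what the organization has evidence for a minimal sketch (the control IDs below are hypothetical placeholders loosely styled on ISO 27002 numbering):

```python
# Framework controls in scope for the assessment (hypothetical IDs).
required = {"5.1", "5.9", "8.7", "8.16", "8.24"}
# Controls the organization can evidence as implemented.
implemented = {"5.1", "8.7", "8.24"}

gaps = sorted(required - implemented)      # remediate before the formal audit
extras = sorted(implemented - required)    # in place but outside assessment scope

print("gaps:", gaps)      # → gaps: ['5.9', '8.16']
print("extras:", extras)  # → extras: []
```

real gap analyses of course weigh partial implementation and control maturity not just presence or absence but the set comparison is the skeleton of the exercise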

so ISO 27002 and the nist cyber security framework are two frameworks commonly used for Gap analysis let's have a look at audit planning and audit phases

so the audit process can generally be broken down into four phases starting with audit planning audit planning includes documenting and defining the audit program objectives and this is

collaborative internal planning of audit scope and objectives this will involve the security organization key business stakeholders potentially legal in

regulatory situations Gap analysis or Readiness assessment basically assessing the organization's ability to undergo that full audit defining audit objectives and

deliverables that's going to be important to identify the expected outputs from the audit and finally identifying Auditors and qualifications compliance and audit

Frameworks usually specify the type of auditor you need then there are phases to the audit itself and in fact there are three major phases of an audit which include the audit field work which

involves the actual work the Auditors perform to gather test and evaluate the organization audit reporting and that report writing begins as the Auditors conduct their field work capturing their notes and any

findings they're going to put into their final report and the audit follow-up the activities that may be conducted after the audit including addressing any identified weaknesses that come in that

audit report you'll want to be familiar with information security management system is Ms for the exam which is a systematic approach to information security it

focuses on processes technology and people and it's designed to help protect and manage an organization's information ISO 27001 addresses need and approaches

to implementing isms isms functions include quantifying risk developing and executing risk mitigation strategies

and providing formal reporting on status of mitigation efforts there are several benefits to an isms as well including improving data security

increased organizational resilience to cyber attacks Central Information Security Management and formal risk management

and then we have internal information security control systems it sounds quite a lot like Information Security Management systems but you don't want to get these two mixed up so an information

security control system provides guidance for mitigating the risks identified as part of the isms risk management processes and there are several Frameworks to choose from for

your information security control system scoping of controls refers to reviewing controls in the framework to identify which controls apply to the organization

and which do not tailoring is a process of matching applicable controls with the organization's specific circumstances to which they apply

and organizations implementing ISO 27001 isms will find that the iso 27002 controls are very easy to use because they're actually designed to

work together they fit together other control Frameworks include nist SP 800-53 the nist cyber security framework

the secure controls framework and the cloud security Alliance Cloud controls Matrix or CCM you'll want to be familiar with the function of policies

and a couple of specific policy types for the exam particularly organizational versus functional policies the policies are a key part of any data security

strategy and they facilitate a number of capabilities for an organization for one they provide users and organizations with a way to understand and enforce requirements in a systematic and

consistent way they make employees and management aware of their roles and responsibilities they standardize secure practices throughout the organization

you want to know the difference between organizational and functional policies and how they should be applied to the cloud so let's dive into those just a bit further starting with organizational

policies so companies use policies to outline rules and guidelines usually complemented by documentation such as procedures and job aids organizations

will typically Define policies related to proper use of company resources like expense reimbursements and travel policies are a proactive risk mitigation

tool designed to reduce the likelihood of risks like Financial losses data loss or leakage reputational damage statutory

and Regulatory Compliance issues abuse or misuse of computing systems and resources and to that effect employees should generally sign policies to

acknowledge acceptance and we can juxtapose the organizational policy to a functional policy so what is a functional policy well it's a set of

standardized definitions for employees that describe how they make use of systems or data they typically guide specific activities crucial to the organization like appropriate handling

of data vulnerability management and other security activities for example functional policies typically codify requirements identified in the isms and they align to your chosen control

framework so a few examples of functional policies not an exhaustive list but to give you an idea for the exam acceptable use what

is and is not acceptable to do on company Hardware Networks email use what is and is not acceptable to do on email accounts password and access management

notes on password complexity expiration reuse requirements for MFA and requirements for access management tools like a password manager incident response

details on how incidents are handled and requirements for defining an incident response plan data classification which would identify types of data and how each should be handled

Network Services how issues like remote access and network security are handled vulnerability scanning the routines and limitations on internal scanning and penetration testing and Patch management

how equipment is patched and on what schedule so as you can see a lot of very function specific policies policies are even more important when we move to the

cloud in part due to ease of use the ease of deploying Cloud resources without governance results in what we call Shadow I.T basically resources deployed without it approval and

sometimes without I.T knowledge this can create security risks like data loss or leakage through unauthorized use of cloud storage services and cloud storage

to my recollection is really where we saw Shadow I.T first crop up with the widespread use of Dropbox and box and OneDrive

when our non-IT users discovered that it was an easy way to collaborate with people in the organization and even at other organizations it also creates

Financial risks such as spending being more difficult to measure and control and these Financial risks are real I remember one organization where the CIO was told he was out of budget and he

said no here's my budget we're well within range but when the expense reports came in it turned out that the development organization was using a

massive amount of public Cloud on their own expense accounts in order to get their work done more quickly and that's true of Shadow I.T it's

generally not a malicious activity it's simply well-meaning users trying to be more effective in getting their job done and potentially working around the

delays of I.T so cloud services should be included in organization policies and requirements for use clearly documented in fact you want to sanction or approve

which Services can be used for which functions which public Cloud are you going to use for IaaS for example what will be your cloud storage vendors what are you going

to use for a password vault but policies should Define requirements users must adhere to and specify which services are approved for those various

uses and in fact a cloud access security broker can help identify and stop Shadow I.T we'll use a CASB to monitor our

users use of our data and their use of apps to identify unsanctioned or unapproved apps

and potential oversharing of data just as a couple of examples in fact it is frequently a good way to identify Insider threats identify

Mass deletion or mass download of documents of sensitive data we see identification and involvement of relevant stakeholders called out

explicitly in the syllabus and one key challenge of the audit process is the inclusion of any relevant stakeholders so who are relevant stakeholders exactly

well the organization's management who will likely be paying for the audit security practitioners responsible for facilitating the audit employees who will be called on to

provide evidence to Auditors in the form of documentation artifacts or even sitting for interviews and we'll see often times that cloud computing environments can include more

stakeholders than on-premises or even multiple csps simply more parties involved in Service delivery and infrastructure management you may see some questions around

requirements for highly regulated Industries and many csps have compliance focused cloud service offerings which meet the requirements of specific regulatory or legal Frameworks in fact

it is a big selling point that those big csps will Leverage for example nerc requirements the North American Electric reliability Corporation critical infrastructure

protection regulates organizations involved in power generation and distribution so you can imagine requirements are very stringent where human safety is involved

on that note HIPAA and HITECH both deal with PHI and Implement specific requirements for security and privacy protections as well as breach notification requirements

and HIPAA and HITECH don't specifically address cloud computing HITECH came along later and updated HIPAA but it's very much a regulation that your major

csps will address in providing their certifications to prospective customers then we have PCI DSS which specifies protections for payment card transaction

data also no specific mention of cloud here although we can certainly expect that will change over time as these laws and standards are revised

csps generally make the controls available but remember responsibility for compliance to any relevant regulations ultimately rests with the cloud consumer the syllabus explicitly

calls out the impact of the distributed it model because cloud computing enables distributed it Service delivery with systems that can automatically replicate

data globally so just one impact of the distributed model is the additional Geographic locations Auditors must consider when they're performing an audit and we've talked about some of the

potential legal conflicts this can generate a common technique in Cloud audit is sampling which is the act of picking a subset of the system's

physical infrastructure to inspect in fact we looked at an example of this a bit earlier in this session csps have found ways to collect evidence that provide Auditors with sufficient

assurance that they've collected a representative sample for example we talked about sampling 20 servers of 100 servers across many regions to save time and expense while maintaining accuracy

of the audit process that does it for 6.3 so we're on to 6.4 understand implications of cloud to Enterprise risk management

topics called out in the syllabus for 6.4 include assess providers risk management programs in particular we'll talk about risk profiles and appetite the difference between the data owner or

controller role versus data custodian or processor regulatory transparency requirement in regulatory standards we'll talk about breach notification and some of the

requirements we see in regulations like socks or gdpr risk treatment responses to risk in other words different risk Frameworks we can use

metrics for risk management and assessment of the risk environment so assessing providers risk management programs and reviewing provider controls

can be particularly challenging in the cloud so prior to establishing a relationship with a cloud provider with a CSP a customer needs to analyze the risks associated with adopting that

Provider Services and rather than performing a direct audit the customer generally has to rely on their supply chain risk management processes

and the third party audit reports that a CSP will provide so the primary areas of focus of a supply chain risk management process include determining whether a

supplier has a risk management program in place and if so whether the risks identified by that program are being adequately mitigated but again unlike traditional risk

management activities we'd see on premises SCRM in a CSP scenario often requires customers to take that indirect approach by reviewing audit reports and

again we've seen this in previous domains major csps all make available their SOC 2 ISO 27001 FedRAMP or CSA STAR

audit reports in lieu of a direct audit providing that high level of assurance without the need for the cloud consumer the organization to audit the CSP

directly so in reviewing an audit report from a CSP there are several key elements of the report you want to focus on such as the scoping information or the

description of the audit Target this is going to tell us how comprehensive the audit was in the report we're reading some compliance Frameworks allow audits

to be very narrowly scoped like a SOC 2 but if the CSP's SOC 2 audit did not cover a specific service that a customer wants to adopt then the audit finding

doesn't provide any real value that report may be assessing risk but it's not that particular customer's risk if it doesn't have that specific service in

scope and this may drive changes like enhanced customer side controls tracking the csp's mitigation and resolution efforts or migrating to another CSP altogether

there are some resources out there that can help organizations build or enhance their supply chain risk management program you'll want to be familiar with nist has a resource library that

includes working groups Publications and a number of other resources you can get that URL from the PDF that comes with this course and then we have ISO 28000

which is a security and resilience management system standard with particular focus on Supply Chain management now the risk profile describes the risk

present in the organization based on all the identified risks and any associated mitigations in place and the risk appetite describes the

amount of risk an organization is willing to accept without mitigating and what an organization is willing to accept really depends on the type of business they're in

and the degree of risk we're dealing with regulated Industries will be more apt to mitigation transference and avoidance of risk altogether smaller organizations and startups will

be more apt to Simply accept risks to avoid cost of treatment you can imagine that an early stage startup without a lot of cash is going

to opt for spending less where they can for GDPR data roles and responsibilities we saw specifically in the syllabus the call to knowing the difference between

the processor and custodian roles so the data processor is anyone who processes personal data on behalf of the data controller

so the data processor is also the data custodian in other standards gdpr calls that role the data processor and the processor is responsible for the

safe and private custody transport and storage of the data the data controller is the person or entity that controls processing of the data the owner so what

gdpr calls the data controller role would be the data owner in certain other Frameworks they own the data and the risks associated with any data breaches

when data controllers use processors they must ensure that the security requirements follow the data and to be Crystal Clear while the data processor

is acting on behalf of the controller the data controller ultimately owns responsibility gdpr also defines the data Protection

Officer who ensures the organization complies with data regulations under gdpr the DPO is a mandatory appointment and the data subject again is the

individual or entity that is the subject of the personal data the person about whom data has been collected because they're called out in the syllabus you'll want to be sure you're able to

identify each of these data roles so the data owner usually a member of Senior Management can delegate some day-to-day duties but cannot delegate total

responsibility the data custodian usually someone in the IT department does implement controls for the data owner but does not

decide what controls are needed in fact on the exam if the question mentions day to day it's likely data custodian is your answer and remember

for gdpr the data owner is the data controller and the custodian is the data processor transparency requirements are called out

in the syllabus as well which speaks to breach notification so a cloud security professional should definitely be aware of the transparency requirements imposed on data controllers by various

regulations and laws around the world most recent privacy laws include a mandatory breach notification and there are some variations amongst the laws how

long an organization has to respond in fact mainly around issues of timing of the notification and who must be notified will vary across standards but

regulations that require breach notifications include gdpr HIPAA GLBA and the Canadian PIPEDA regulation

in fact incident response plans and procedures should include relevant information about the time period for reporting as well as the required contacts in the event of a data breach

essentially who should be notified and how quickly sarbanes-oxley so if a company is publicly traded in the United States they're going to be subject to

transparency requirements called out in the sarbanes-oxley act so under SOX specifically as data owners these companies have to consider the following

section 802 it's a crime to destroy change or hide documents to prevent their use in official legal processes section 804 companies must keep audit

related records for a minimum of five years Sox compliance is often an issue with both data breaches and ransomware incidents at publicly traded companies

the loss of data related to compliance due to external actors does not protect a company from their legal obligations that's kind of uh the dog ate my

homework defense that doesn't protect the organization likewise gdpr has some explicit transparency requirements for companies doing business in the European Union or

with citizens of the EU transparency requirements under gdpr are laid out in article 12. there's a link in the PDF

with the course if you'd like to take a look but gdpr states that a data controller must be able to demonstrate

that personal data are processed in a manner transparent to the data subject the obligations for transparency begin at the data collection stage and apply

throughout the life cycle of processing in fact it stipulates that communication to data subjects must be concise transparent intelligible and easily

accessible and the use of clear and plain language which means an organization cannot hide behind confusing jargon to

take power away from the data subject or to fool them in any way meeting the requirements for transparency also requires processes for providing data

subjects with access to their data in gdpr the subject has the right to ask an organization to correct the data If it's incorrect and they can also ask to

be forgotten basically remove my data risk treatment is also called out in the syllabus the practice of modifying risks generally lowering risk it typically

begins with identifying and assessing risks by measuring the likelihood and the impact risks most likely to occur and most impactful would be prioritized

for treatment in a nutshell risk treatment is the organization's response to risk and you'll want to be familiar with these potential responses for the exam we have

risk avoidance where the organization changes business practices to completely eliminate the potential that a particular risk will materialize this can negatively impact business

opportunities because the organization May avoid certain business opportunities entirely to avoid the risk associated with them there's risk mitigation which is the

process of applying security controls to reduce the probability and or the magnitude of a risk there's risk transference which shifts

some of the impact of the risk from the organization experiencing the risk to another entity for example cyber insurance and then there's risk acceptance

deliberately choosing to take no other risk management strategy and to Simply continue operations as normal in the face of a risk common when the cost of

mitigation is greater than the cost of the impact of the risk itself the mitigation would not be cost effective so it is therefore unnecessary you want to know these Concepts and be ready to

recognize examples on the exam also called out in the syllabus is risk appetite sometimes called risk tolerance it's the amount of risk a company is

willing to accept now these terms risk appetite and risk tolerance are sometimes used interchangeably there are definitely experts out there that can articulate a

subtle difference for purposes of this exam risk appetite and risk tolerance are the same regulations that affect risk posture so

regulations addressing data privacy and security that influence an organization's risk posture would include gdpr socks HIPAA and PCI DSS

just to name a few and all called out in the exam syllabus in multiple places so I mentioned security controls are used in Risk mitigation they are risk

treatments for countering and minimizing loss or unavailability of services or apps due to vulnerabilities now the terms safeguards and countermeasures often seem to be used

interchangeably technically safeguards are proactive they reduce the likelihood of occurrence countermeasures are reactive they reduce

the impact after occurrence and there are definitely some risk management Frameworks available for security practitioners to use as guides when they're designing a risk management

program and in the cloud computing Arena I'd suggest being familiar with these risk Frameworks at minimum for the exam we have ISO 31000

ENISA's cloud computing risk assessment nist 800-37 the risk management framework and another worth mentioning

is nist 800-146 the cloud computing synopsis and recommendations this is not a dedicated risk management standard but does mention the various risks and

benefits associated with different deployment and service models let's go a bit deeper on these starting with ISO 31000 which actually contains several standards related to building and

running a risk management program there's ISO 31000 risk management guidelines which provides the foundation of an organization's risk management

function you have IEC 31010 risk management risk assessment techniques provides guidance on

conducting a risk assessment and ISO guide 73 risk management vocabulary which provides a standard set of terminology used throughout the other

documents and it's useful for defining elements of the risk management program good for making sure everyone is speaking the same language so to speak

and from nist we have nist special publication 800-37 the risk management framework we have nist special publication

800-146 cloud computing synopsis and recommendations which provides definitions of various cloud computing terms and from ENISA ENISA produces several

useful resources related to Cloud specific risks that organizations should be aware of and plan for when they're designing cloud computing systems the guide from ENISA identifies various

categories of risks and recommendations for organizations to consider when evaluating cloud computing and these include research recommendations to advance the field of

cloud computing legal risks security risks ENISA is a rough equivalent to

the U.S National Institute of Standards and Technology ENISA is the European Union Agency for cyber security so it's the European equivalent of nist more or less
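before moving on to metrics, the risk treatment logic covered a moment ago (prioritize risks by likelihood and impact, and accept a risk when mitigation would cost more than the expected loss) can be sketched in Python. the risk names and figures below are invented purely for illustration:

```python
# Illustrative sketch of risk prioritization and a simple treatment
# decision. Likelihood is a probability (0-1); impact and mitigation
# cost are in dollars. All numbers are made up for the example.

def expected_loss(likelihood, impact):
    # Expected loss: probability of occurrence times impact
    return likelihood * impact

def choose_treatment(likelihood, impact, mitigation_cost):
    # Accept the risk when mitigation costs more than the expected
    # loss (it would not be cost effective); otherwise mitigate.
    if mitigation_cost > expected_loss(likelihood, impact):
        return "accept"
    return "mitigate"

risks = [
    {"name": "unpatched vm", "likelihood": 0.4, "impact": 100_000, "mitigation_cost": 10_000},
    {"name": "dns outage",   "likelihood": 0.1, "impact": 20_000,  "mitigation_cost": 5_000},
]

# Risks most likely to occur and most impactful are prioritized first
ranked = sorted(risks, key=lambda r: expected_loss(r["likelihood"], r["impact"]), reverse=True)

for r in ranked:
    r["treatment"] = choose_treatment(r["likelihood"], r["impact"], r["mitigation_cost"])
# "unpatched vm": expected loss 40,000 > 10,000 cost -> mitigate
# "dns outage":   expected loss  2,000 < 5,000 cost  -> accept
```

this is only a toy model of course. real programs also weigh transference (like cyber insurance) and avoidance, but the cost-versus-expected-loss comparison is exactly the acceptance rationale the exam expects you to recognize.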

risk metrics are called out in the syllabus and there are some key cyber security metrics that companies can track to present measurable data to company stakeholders for example

patching levels how many devices are fully patched and up to date unpatched devices often contain exploitable vulnerabilities and quarterly reports I

like to show not only our patching levels for devices but to call out some little details like the fact that we're patching firmware for example that we're patching our network devices that were

patching not only the core operating system or Microsoft software but all of our third-party software time to deploy patches how many devices received required patches in the defined

time frames this is a useful measure of how effective a patch Management program is at reducing the risk of known vulnerabilities and getting some of those out of band emergency zero day

type patches out the door quickly intrusion attempts how many times have known actors tried to breach Cloud systems and how many of those attacks

were effective in some way increased intrusion attempts can be an indicator of an increased likelihood of risk and then some common acronyms mean time

to detect mean time to contain and mean time to resolve how long does it take for security teams to become aware of a potential security incident to contain the damage and

resolve the incident inadequate tools or resources for reactive risk mitigation can also increase the impact of risks occurring
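as a quick sketch of how those mean-time metrics might actually be computed from incident records (the timestamps and targets here are invented for illustration, not real data):

```python
# Hypothetical incident log: hours from incident start to detection,
# containment, and resolution. Mean-time metrics summarize how quickly
# the security team detects, contains, and resolves incidents.

incidents = [
    {"detect": 2.0, "contain": 6.0,  "resolve": 30.0},
    {"detect": 4.0, "contain": 10.0, "resolve": 50.0},
    {"detect": 3.0, "contain": 8.0,  "resolve": 40.0},
]

def mean(values):
    values = list(values)
    return sum(values) / len(values)

mttd = mean(i["detect"] for i in incidents)   # mean time to detect  -> 3.0
mttc = mean(i["contain"] for i in incidents)  # mean time to contain -> 8.0
mttr = mean(i["resolve"] for i in incidents)  # mean time to resolve -> 40.0

# Compare each metric against an expected parameter; deviations
# suggest the related mitigations should be reviewed
expected = {"mttd": 4.0, "mttc": 12.0, "mttr": 36.0}
observed = {"mttd": mttd, "mttc": mttc, "mttr": mttr}
needs_review = [name for name, value in observed.items() if value > expected[name]]
# here only mttr (40 hours against a 36-hour target) is flagged
```

the flagging step mirrors the point the narration makes next: metrics within expected parameters indicate mitigations are working, and metrics that deviate call for review.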

cyber security metrics provide absolutely Vital Information for decision makers in the organization in prioritizing their treatment of risk

and their need to evolve their strategy in particular areas at the end of the day cyber security metrics within expected parameters

indicate risk mitigations are effective metrics that deviate from the expected parameters indicate mitigations are no longer effective and should be reviewed

assessment of risk environment is called out in the syllabus and the cloud being a critical operating component for many organizations it's very important to identify and understand the risks posed

by the CSP because the greater the dependency on the CSP the greater the risk we are handing over responsibility for elements of our compute environment

and with that some level of control over our compute environment and our ability to respond and collect data in security incident circumstances it's important to

ask a number of questions when considering a cloud service a vendor or an infrastructure provider for example is the provider subject to takeover or acquisition are we going to see an

ownership change that may result in a change to our contract terms how financially stable is the provider will they be around for the long term

in what legal jurisdictions are the provider's offices located in other words what regulations and laws are we likely to be subjected to as a customer

are there outstanding lawsuits against the provider that may affect their financial stability and their long-term presence what pricing protections are in

place for services we're Contracting how will a provider satisfy any regulatory or legal compliance requirements do they have those audit reports from third parties that give us

that high level of assurance and what does failover backup and Recovery look like for the provider do they have Regional support to give us that Dr

capability in a sustainable fashion designing a supply chain risk management program to assess CSP or vendor risks is a due diligence practice

actually performing the assessment is an example of due care remember the customer organization is responsible and any organization that uses cloud services without adequately

mitigating the risks is likely to be found negligent in a breach which is going to pose problems for the data controller to guide their risk assessment process customers can

leverage ISO/IEC 15408-1 also known as the common criteria it enables an objective evaluation to validate that a particular product or system satisfies a

defined set of security requirements it assures customers that security products they purchase have been thoroughly tested by independent third-party

testers and meet the customer's requirements this certification of the product only certifies product capabilities

misconfigured or mismanaged software is no more secure than anything else the customer might use so again as with the CSP a software company may put the capability there but leave it up to

the customer to properly configure it's designed to provide assurances for security claims by vendors the evaluation is often done through Testing Laboratories where the product

or platform is evaluated against a standard set of criteria the result is an evaluation Assurance level which defines how robust the security capabilities are in the evaluated

product most csps do not have common criteria evaluations over their entire environment but many cloud-based products SaaS products may

it's up to the customer to review details of the common criteria assurances to make sure that the scope of the evaluation and the level of assurance

meet their requirements the cloud security Alliance offers up STAR Security Trust Assurance and Risk

which is their Assurance framework so when evaluating risks in a specific CSP or other cloud service the star can be a useful lightweight method for

ascertaining risks it contains evaluations of cloud services against csa's Cloud controls Matrix organizations can opt for self-assessed

or third-party assessed cloud services now that will affect the level of assurance whether it's low Assurance in the case of self-assessment or high

Assurance in the case of third party overall CSA star is considered lightweight lower Assurance certification for the csps that use it

another option is the EU cyber security certification scheme on cloud services or EUCS so ENISA has published a

standard for certifying the cyber security practices present in Cloud environments and that framework is the eucs it defines a set of evaluation

criteria for various cloud service and deployment models the goal is producing security evaluation results that allow comparison of the security posture

across different Cloud providers this standard was still under development as of 2022 so adoption is not yet widespread I would expect any

coverage on the exam is also going to be similarly Limited and that does it for 6.4 so that brings us to 6.5 understand

Outsourcing and Cloud contract design so here we'll cover business requirements like SLAs MSAs and SOWs

vendor management contract management and the Clauses that should be present in your contracts with csps and similar vendors and Supply

Chain management and one thing these topics all share in common is that they pertain to customer dealings with third parties so let's start with a quick look at third party risks

first we have the supply chain and supply chain security has become a significant concern for organizations in recent years this includes suppliers manufacturers Distributors and even

customers when we think Downstream in the supply chain and a breach at any Link in the supply chain can result in business impact and then there's vendor management many

organizations today are actually reducing the number of vendors they work with and requiring stricter onboarding procedures every customer I work with has some sort

of vendor self-assessment or survey so they can gather an initial round of data from a potential vendor to assess the risk they may pose to the organization

and vendors may be required to submit to an external audit and agree to strict communication and Reporting requirements in the event of potential breach certainly when business critical

infrastructure and services are involved this is going to be true a compromised vendor opens the organization to the risk of an island-hopping attack where a bad actor

attacks the organization from the perch of a compromised vendor where they've established a presence then we have system integration so system integration Partners working on

systems have privileged remote or physical access often necessitating security measures and process controls beyond the norm

the potential for increased risk of Insider attack is one of many concerns here so you may simply think of systems integrators as it Consultants

so let's talk business requirements specifically SLA MSA and SOW so starting with the master service agreement in legal terms a cloud customer and a CSP

enter into a master service agreement this is defined as any contract that two or more parties enter into as a service agreement and the MSA should address

compliance and process requirements the customer is passing along to the CSP the MSA should include breach notification CSP duty to inform the

customer of a breach within a specific period of time after detection legal counsel is most often responsible for contracts but security should be

involved to share requirements to ensure legal captures all of the necessary elements and concerns in the MSA and other contracts
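that breach notification duty can be made concrete: given a detection time and the notification window the contract specifies, you can compute the deadline and check whether notice was on time. a minimal Python sketch follows; the 72-hour window is just an example (GDPR uses 72 hours for supervisory-authority notification, but a given MSA may specify a different period):

```python
from datetime import datetime, timedelta

# Sketch of checking an MSA's breach-notification clause. The 72-hour
# window is illustrative; the actual window comes from your contract.

def notification_deadline(detected_at, window_hours=72):
    """Latest time the CSP may notify the customer under the clause."""
    return detected_at + timedelta(hours=window_hours)

def notified_on_time(detected_at, notified_at, window_hours=72):
    return notified_at <= notification_deadline(detected_at, window_hours)

detected = datetime(2023, 5, 1, 9, 0)
notice   = datetime(2023, 5, 3, 17, 0)   # about 56 hours after detection

on_time = notified_on_time(detected, notice)   # True: within 72 hours
```

incident response plans should carry the same information in human form: the reporting time period and the required contacts, exactly as the narration describes.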

next we have the service level agreement or SLA so slas stipulate performance expectations such as maximum downtime and often include penalties if the

vendor doesn't meet expectations these are generally used with external vendors like the CSP and an SLA is legally binding more specifically an SLA

often includes Financial penalties for non-performance and may even allow a customer to terminate their contract early let's go a level deeper on slas so SLA should be written to ensure that the

organization's service level requirements are met and we need to make sure that in the SLA we're defining recurring discrete measurable items that the parties agree

on as a clear measure of whether the SLA has been met or not common elements documented in slas include uptime guarantees

SLA violation penalties SLA violation penalty exclusions and limitations so limiting the size of a penalty potentially

suspension of service clauses provider liability data protection and management disaster recovery and Recovery Point

objective so RTO and RPO security and privacy notifications and time frames just as an audit can be too narrowly

scoped to be useful to a customer an SLA can similarly be too narrowly scoped to be useful to a customer when they need it remember you're not only handing

responsibility over to the CSP you're handing over some elements of control and contracts including those around service levels give back a level of control to the

customer leverage of a fashion to ensure that the CSP or other vendor for that matter meets their obligations so the statement of work so this is a

legal document usually created after an MSA has been executed and it governs a specific unit of work the MSA documents services and prices

but a SOW covers requirements expectations and deliverables for a project so in other words the MSA focuses overall ongoing and a SOW is

time limited and specific a non-disclosure agreement this is a contract with vendors and suppliers not to disclose the company's confidential information

a mutual NDA actually binds both parties in the agreement and I do find those tend to be more common vendor management also called out in the syllabus managing risk is complicated

when parts of the organization's I.T

infrastructure exist outside the organization's Direct Control as is the case in cloud computing and the practices of supply chain risk

management and vendor management overlap significantly however in many cases vendor management will include more activities related to

operational risks cloud computing involves Outsourcing ongoing organizational processes and infrastructure to a service provider therefore the cloud requires more

continuous management activities to Monitor and manage that vendor relationship we're handing over a level of responsibility and a level of control

that then requires continuous oversight on our part to manage our risk exposure cloud professionals also need strong project and people management skills to

effectively perform vendor management activities Key activities would be the initial vendor assessment where security practitioners should be involved in that

initial selection process which involves assessing the security risks present in the CSP and related services for many customers this process will

entail reviewing security reports like a sock 2 on an annual basis after the CSP has undergone their yearly audit that indirect assessment through third-party

audit documents and we also need to assess vendor lock-in risks this assessment will require knowledge of not only the csps offerings but the architecture and

strategy the customer organization intends to use using any unique CSP offerings like artificial intelligence and machine learning platforms can result in a

service that is dependent on that specific CSP we also need to assess vendor viability this is often a process not conducted by

the security team as it deals with operational risk rather than security risk assessing the viability of vendors may involve reviews of public information like financial

statements the csp's performance history and reputation or even formal reports like a SOC 1 a SOC 1 being a report that's more financially focused

but all of these identify potential weaknesses that could impact the csp's ability to continue operations and then there are escrow options so

escrow is a legal term used when a trusted third party holds something on behalf of two or more other parties such as source code or encryption keys

so let's just go through a common escrow scenario a software development company may wish to protect the intellectual property of their source code however

if they go out of business their customers are left with an unmaintainable system and customers want assurance in this scenario an escrow provider could hold a copy of the source code and

release it to customers in the event the provider is no longer in business contract management is another concern organizations need to employ adequate governance structures to monitor

contract terms and performance to be aware of outages and any violation of stated agreements and that's where contract Clauses come into play a contract clause is a

specific article of related information that specifies the agreement between the Contracting parties some common contract Clauses that should be considered for

any CSP or other data service provider include the right to audit metrics definitions termination

litigation assurance compliance and access to the cloud or to our cloud data so let's go through these

at another level of detail so the right to audit a customer can request the right to audit the service provider to ensure compliance with the security requirements agreed to in the

contract many of your csps write into their contract that you can rely on their standard third-party audits their sock 2

their ISO 27001 certification to be used in place of a customer performed audit so an indirect but High Assurance metrics if there are

any specific indicators that the service provider must provide to the customer they can be documented in a contract and should be metrics tell you how compliance with the agreement will be

measured definitions so a contract is a legal agreement between multiple parties essential that all parties share a common understanding of the terms and

expectations in that contract defining key terms like security privacy key practices breach notifications can all avoid misunderstandings when problems

arise termination so this refers to ending the contractual agreement this Clause will typically Define conditions under which either party May

terminate the contract it may also specify consequences if the contract is terminated early litigation this is an area where legal

counsel really must be consulted it's agreeing to terms for litigation and can severely restrict the organization's ability to pursue damages

if something goes wrong some contracts for example will mandate arbitration before litigation

Assurance so this is defining assurance and these requirements set expectations for both the provider and the customer many contracts specify that a provider

must furnish a SOC 2 or equivalent to the customer on an annual basis as that level of assurance then there's compliance so any customer

compliance requirements that flow to the provider must be documented and agreed upon in the contract data controllers that use cloud providers as data processors have to

ensure that adequate security safeguards are available for that data in the cloud access to the cloud or data so Clauses dealing with customer access can be used

to avoid risks often associated with vendor lock-in in the vein of contract management you'll want to be familiar with cyber Risk insurance so cyber Risk insurance is

designed to help an organization reduce the financial impact of Risk by transferring it to an insurance carrier in the event of a security incident the insurance carrier can help offset

Associated costs like digital forensics and investigation data recovery system restoration they may even cover legal or regulatory fines associated with the

incident though that extra coverage you can bet will be reflected in the insurance premiums cyber insurance carriers are in the business of risk management and as a

result they're unlikely to offer coverage to an organization lacking controls to mitigate risk in fact most will have specific requirements in terms of security

controls they expect to be in place language they expect to be in your contracts and cyber Insurance requires the organization to pay a premium for the insurance plan so they have to keep

those premium payments up to date and most plans will have a limit of coverage that caps how much the insurance carrier pays

in fact there may also be sub-limits which cap the amount that will be paid for specific types of incidents such as ransomware or phishing an insurance broker can be a useful

resource when investigating Insurance options for your organization's circumstances including identifying the amount of coverage the organization needs different types of coverage that are

available such as business interruption or cyber extortion security controls that the insurance carrier requires such as multi-factor authentication for example

now cyber Risk insurance usually covers costs associated with investigation Direct business losses recovery costs

legal notifications lawsuits extortion and even food and related expenses so let's dig into these

Clauses that we'd see in a typical cyber Risk insurance contract so investigation these are costs associated with the forensic investigation to determine the extent of

an incident this often includes cost for third-party investigators and at least one of the Cyber risk insurers that I work with requires that they are the

first point of contact when an incident is detected and they help manage the process including the required communication Direct business losses

these refer to direct monetary losses associated with downtime or data recovery overtime for employees and oftentimes reputational damages to the organization

recovery costs these may include costs associated with replacing Hardware or provisioning temporary Cloud environments during contingency operations they may also include services like

forensic data recovery or negotiations with attackers to assist in recovery legal notifications so costs are associated with required privacy and

breach notifications required by relevant laws and lawsuits policies can be written to cover losses and payouts due to class action or other lawsuits against a

company after a cyber incident the insurance company may pay out ransomware demands and this extortion clause is growing in popularity this may

include direct payments to ensure data privacy or accessibility by the company we don't like to encourage payout of Ransom demands as a practice but that

extortion option is available food and related expenses this is pretty simple actually incidents often require employees to work extended hours or to travel to contingency sites so these are

just costs associated with incident response including catering lodging and it may be covered even though they're not usually thought of as I.T costs

and to wrap up 6.5 let's talk about supply chain management managing risk in the supply chain focuses on both operational risk ensuring that

suppliers are capable of providing the needed services and security risk the supply chain should always be considered in any business continuity or

Disaster Recovery planning proactive measures include contract language and Assurance processes that can be used to quantify the risks associated with using suppliers like csps

as well as to gauge the effectiveness of these suppliers risk management programs so there are some standards we can lean

on here there's ISO IEC 27036 which is cyber security supplier relationships the iso 27000 family of Standards includes a specific ISO standard

dedicated to supply chain cyber security risk management and that is 27036 it provides a set of practices and guidance for managing cyber security

risks in supplier relationships the standard is particularly useful for organizations that use ISO 27001 for

building an isms or ISO 31000 for risk management they're building on the concepts found in those standards in ISO

IEC 27036 and ISO 27036 comprises four parts including overview and concepts

requirements guidelines for information and communication technology supply chain security and guidelines for security of cloud

services so we see cloud services get specific mention here and ISO 27036 like the other ISO standards is not a free resource there's generally a cost

associated with getting your hands on that document so let's look at the four parts beginning with overview and Concepts which provides an overview and foundation for a supply chain management

capability part two covers a set of best practices and techniques for Designing and implementing the Supply Chain management function

part 3 is a particular concern for security practitioners as it lays out practices and techniques specific to managing security risks in the supply chain

and part four which is most relevant to cloud security practitioners in particular deals with practices and requirements for managing

supply chain security risk specific to cloud computing and the CSP and some additional resources worth a

mention when we're talking about supply chain there's NIST IR 8276 which is key practices in cyber supply chain risk management

NIST SP 800-161 which is cybersecurity supply chain risk management practices for systems and organizations

and the ENISA publication supply chain integrity an overview of the ICT supply chain risks and challenges and vision for the way forward that was published

back in 2015. congratulations you've

reached the end of the ccsp exam cram I hope you've gotten value from the course if you have any questions as you make your final preparations for the exam leave a question in the comments or

reach out on LinkedIn chat and until next time take care and stay safe
