- 1. take away
- 2. chat
- 3. lab setup
- 4. intro
- 5. intro to cloud computing
- 6. lab 1: setting up an ssh client
- 7. key based authentication
- 8. lab 2: creating a keypair for use with EC2
- 9. overview of EC2
- 10. lab 3: create an EC2 instance and connect
- 11. Setting up a web site on EC2
- 12. understanding firewall basics
- 13. AWS Budgets
- 14. managing users and authentication
- 15. Virtual Private Cloud (VPC)
- 16. network ACL
- 17. block vs. object storage
- 18. introduction to elastic block store (EBS)
- 19. instance store volumes
- 20. introduction to Elastic Load Balancer
- 21. AWS Tags
- 22. Auto scaling
- 23. AWS Simple Storage Service (S3)
- 23.1. use case: storage capacity
- 23.2. Cloud storage providers
- 23.3. introduction to S3
- 23.4. S3 terminology
- 23.5. demo
- 23.6. S3 storage classes
- 23.7. static website hosting in S3
- 23.8. S3 lifecycle policies
- 24. overview of databases
- 25. understanding cloudwatch
- 26. simple notification service (SNS)
- 27. DNS
- 28. understanding serverless & lambda
- 29. Amazon CloudFront (Content Delivery Network (CDN))
- 30. S3 transfer acceleration
- 31. infrastructure as code (IaC)
- 32. aws rekognition
- 33. Elastic Beanstalk
- 34. code commit
- 35. cloudwatch logs
- 36. simple queue service
- 37. aws snowball
- 38. AWS ElastiCache
- 39. AWS storage gateway
- 40. DR techniques
- 41. AWS Global Accelerator
- 42. amazon polly
- 43. elastic file system
- 44. well-architected framework
- 45. AWS personal health dashboard
- 46. AWS pricing model
- 47. EC2 pricing
- 48. AWS support plans
- 49. total cost of ownership
- 50. AWS whitepapers and documentation
- 51. consolidated billing
- 52. AWS marketplace
- 53. AWS cost explorer
- 54. business intelligence
- 55. AWS Partner Network
- 56. understanding the shared responsibility model
- 57. IAM
- 58. AWS CLI
- 59. compliance
- 60. AWS Artifact
- 61. AWS Config
- 62. AWS trusted advisor
- 63. AWS cloudtrail
- 64. Denial of service / AWS Shield
- 65. AWS Direct Connect (DX)
- 66. baseline security items
- 67. security breaches
- 68. AWS Abuse reports
- 69. amazon machine image (AMI)
- 70. AWS Macie
- 71. vulnerability, exploit, payload
- 72. AWS inspector
- 73. amazon athena
- 74. patching activity
- 75. VPC Flow Logs
- 76. AWS security hub
- 77. AWS Systems Manager (SSM)
- 78. virtual private network
- 79. intro to cryptography
- 80. understanding communication protocols
- 81. understanding disk level encryption schemes
- 82. AWS CloudHSM
- 83. AWS Key management service (KMS)
- 84. AWS control tower
- 85. AWS Outposts
- 86. Amazon Cognito
- 87. exam prep: part 1, core services
- 88. exam prep part 2: security
- 88.1. shared responsibility model
- 88.2. IAM
- 88.3. AWS shield
- 88.4. Trusted Advisor WEAKNESS: must define each of the categories
- 88.5. CloudTrail
- 88.6. AWS Artifact
- 88.7. security breach response
- 88.8. AWS Config
- 88.9. AWS Partner Network (APN)
- 88.10. firewalls
- 88.11. DDoS protection
- 88.12. AWS Classroom Training
- 88.13. AWS Professional services
- 89. exam prep part 3: deployment specific services
- 89.1. cloudformation
- 89.2. elasticbeanstalk
- 89.3. Serverless services
- 89.4. CloudFront
- 89.5. Databases
- 89.6. auto scaling
- 89.7. AWS Access Options
- 89.8. AWS CloudWatch
- 89.9. AWS ElastiCache
- 89.10. SQS
- 89.11. Serverless Computing
- 89.12. Health dashboard
- 89.13. route53
- 89.14. Data reading/writing
- 89.15. select a region for resources:
- 89.16. some services are region specific
- 89.17. storage options
- 89.18. other items
- 90. exam prep part 4: billing
- 91. exam prep part 5
- 91.1. Abuse Reports
- 91.2. DR techniques
- 91.3. AWS Athena
- 91.4. AWS Inspector
- 91.5. AWS Macie
- 91.6. Well-Architected framework
- 91.7. Storage gateway
- 91.8. IAM Groups
- 91.9. CloudFront
- 91.10. Compute service
- 91.11. hybrid connectivity to AWS
- 91.12. Costs of AWS
- 91.13. more info
- 91.14. dealing with suspended AWS account
- 91.15. dealing with billing issues
- 91.16. public block access
- 91.17. cloudwatch
- 91.18. transit gateway
- 91.19. KMS
- 91.20. economies of scale
- 91.21. auto scaling
- 91.22. more
- 91.23. aws organization for policy management
- 91.24. well-architected framework
- 91.25. global accelerator
- 91.26. VPC flow logs
- 91.27. more
- https://idbbank.udemy.com/course/aws-certified-cloud-practitioner/learn/lecture/19202366#overview
- daily report of resources that are created and don't have tags.
- for example, you can use tags to designate owners.
- verify with the team manager that the instance is owned by them
- do we use S3 Intelligent-Tiering to move items to IA? do we use lifecycle configuration?
- do we leverage STA edge locations to upload to S3?
- personal health dashboard alerting
- trend micro leads security in AWS
- consume AWS Config logs
- root user console: https://console.aws.amazon.com/ec2/v2/home?region=us-east-1#Instances:
- IAM user console: https://[acct].signin.aws.amazon.com/console
- for the lab, you might want to create an email at some provider like protonmail.com
- register for an AWS account so you can access the free tier services.
- go to https://aws.amazon.com/free
- click create a free account
- provide contact info
- I write the labs assuming familiarity with key management in an ssh client. If this isn't true, you likely want to install MobaXterm: https://mobaxterm.mobatek.net/download-home-edition.html
- I chose to use the "native" openssh client available in later Windows 10 builds.
- docs:
- basically, you need to do this:
Get-Service ssh-agent | Set-Service -StartupType Manual
Start-Service ssh-agent
- then get the key pair and add the key with:
ssh-add .\thisisthepemfile.pem
ssh-add -l  # list the key fingerprint
ssh-add -L  # list the openssh public key (which can be added to authorized_keys on a target service/server)
ssh -A ec2-user@[globalip]  # connect with agent forwarding
- save the pem somewhere secure (like your keepass)
- delete the pem file
- exam blueprint has four domains
- cloud concepts
- security and compliance
- technology
- billing and pricing
- on premises vs. hosted
- arranging infrastructure yourself:
- data center (racked) or hosting provider (VPS/dedicated).
- you need to send them specifications (power, hvac, etc).
- they will send you pricing and negotiations occur.
- when issues arise, you have to go on site to the DC.
- example:
- need more ram?
- Data center: buy RAM, go there, install it, 3-12 days.
- Hosting provider: raise ticket, 15 mins-12 hours, data center will resize your server.
- cloud provider: stop the server, change the instance size.
- resizing:
- under Instances, go to actions/instance settings, change instance type, select the instance type, click apply.
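- a rough AWS CLI equivalent of the console resize (a sketch; the instance ID and target type are placeholders):
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --instance-type '{"Value":"t2.small"}'
aws ec2 start-instances --instance-ids i-0123456789abcdef0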
- a model in which computing resources are available as a service.
- important characteristics of cloud computing:
- on demand and self service: any time launch without manual intervention
- elasticity: can scale up and down anytime (vertically or horizontally)... ex: DigitalOcean is pay per hour.
- hosting provider: you are paying for the instances you've bought.
- measured service: pay per use
- SaaS: the running software is delivered to you directly via a UI (web browser, or heavy client)
- ex: google docs, Office 365
- PaaS: deploying code directly to a server; the service provides everything up to hosting the framework (as in uploading code to a host)
- ex: google app engine, heroku
- IaaS: offers OS instances
- ex: AWS, linode, digital ocean
- would you consider network services as IaaS?
- AWS is a very comprehensive cloud provider
- all model options (SaaS, PaaS, IaaS)
- if you depend on AWS for every component, you will pay more.
- Digital Ocean and Linode are cheaper.
- ex: VMs on Digital Ocean, services on AWS.
- the cloud physically lives in data centers; the OS is virtualized.
- allows for:
- on-demand and self serviced, elasticity, pay per use.
- allows you to run multiple OS on a single hardware platform
- example: vmware, KVM, xen, virtualbox
- AWS used to leverage xen, but they are migrating to KVM.
- you create volumes within Elastic Block Store (left menu)
- EBS\volumes, create volume button, specify the volume size, assign the volume type, click create volume
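- the equivalent AWS CLI call (a sketch; the AZ and size are placeholders, and the volume must live in the AZ of the instance it will attach to):
aws ec2 create-volume --availability-zone us-east-1a --size 10 --volume-type gp2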
- a person can provision resources in the cloud whenever needed without requiring any human interaction with a service provider.
- on-demand combined with automation makes self-service seamless.
- on-demand does not always mean that you will be able to launch instances at any time.
- error when starting instance: "error starting instances, insufficient capacity".
- elasticity: adding and removing capacity whenever it is needed.
- example: you are a retailer on black friday, you can scale up to handle the load, then scale back down later.
- capacity generally refers mostly to processing & RAM.
- scalability: similar to and often interchangeable with elasticity
- horizontal scaling: adding or removing instances from a pool (like a cluster/farm)
- ex: adding more servers.
- vertical scaling: adding or removing hardware resources for existing servers.
- ex: adding more RAM... in AWS this is "changing an instance type"
- single point of failure risk:
- vertical scaling: restricts services to a single point of failure.
- lack of software compatibility with compute node distribution
- horizontal scaling: some software can't run / workloads can't be distributed and coordinated across multiple nodes (via some method).
- ex: maybe a database
- shutdown of server is required
- vertical scaling requires the VM to be shut down (change instance type)
- scaling servers on-demand is possible
- the feature is called "Auto Scaling"
- CPU load:
- CPU load is >70%, spin up two new servers.
- CPU load is >30%, spin down two servers.
- various other hardware metrics can be used.
- within this config you can see two control attributes:
- this will scale between 1 and 5 instances (never over 5 instances)
- the metric type observed is "Average CPU utilization"
- scaling up will occur when the metric value goes above 70%
- top public cloud service provider, more than 140 services.
- global distribution of data centers
- Gartner Magic Quadrant leader
- broad range of services across many categories: compute, storage, database, analytics, encryption, deployment, and many more.
- all services are delivered as pay-as-you-go.
- service category: machine learning
- amazon rekognition: processes photos and identifies the objects in a picture and attributes of those objects.
- service category: backup of data
- auto-scaling data eliminates the need for "backups": S3 will distribute the files so you get eleven-9s (99.999999999%) durability.
- pay just as you consume services (generally hourly).
- this combines SaaS or other items with licensing so you can pay-as-you-go with licensing as well as hardware resources, etc.
- example: you can find nginx and spin it up, paying just for this as a non-AWS native SaaS.
- data centers are distributed across the globe.
- reduce latency to users by placing services closer to them physically
- each data center can have thousands of servers
- aws data centers are organized into availability zones (AZ).
- there are multiple AZs, each geographically separated from the others.
- there are 81 Availability Zones worldwide.
- each AZ is independent of other AZs.
- each AZ is physically separated.
- all AZs are interconnected via high speed private links.
- each AZ is located in low risk locations (from natural disaster, etc).
- example: distribute services across multiple availability zones.
- each region contains two or more AZs
- AWS has 25 regions worldwide
- there can be multiple methods for auth against a system.
- password based auth is the simplest form.
- you provide a username and a password during authentication.
- password based auth is not that secure.
- many users write down their passwords
- most users don't create complex passwords
- there is a key pair generated (public and private key)
- the public key is stored on the server and is used during authentication; only the private key is used by the client to authenticate.
- key pairs are regional; they must exist in the region where the services are used (EC2 for example)
- login to the AWS management console
- services> EC2> left menu> network & security\key pairs
- name the key, select the pem option, and click the create key pair button; the key pair (PEM) will be downloaded
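- the same thing via the AWS CLI (a sketch; the key name is a placeholder):
aws ec2 create-key-pair --key-name demo-key --query 'KeyMaterial' --output text > demo-key.pem
chmod 400 demo-key.pem  # ssh refuses keys with loose permissions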
- EC2 is Elastic Compute Cloud.
- EC2 is a VM.
- cpu and memory size
- OS
- storage
- authentication key
- security group: a set of firewall rules that control the traffic for your instance.
- you can access via an SSH client.
- you can access via the browser based console directly from the management console.
- log in to AWS management console
- top search bar enter "EC2"
- go to left menu> instances\instances --> launch instances
- step 1: choose an AMI
- amazon linux 2 AMI
- step 2: choose an instance type
- each instance type has its own CPU and memory associated.
- select t2.micro for free tier eligible
- step 3: many options
- number of instances: we should opt to use 1 right now
- step 4: add storage
- accept default
- step 5: tags (will be covered later)
- step 6: security group
- accept default setting, to "create a new security group" with an ssh rule present.
- click "review and launch instance", associate with the key previously created and launch.
- on the resulting window, click the instance ID. You can also navigate to all running instances via the left pane menu Instances\Instances.
- note that the instance state is pending (during spin up)
- instance states: pending, running, terminated.
- note that "terminated" means powered off and destroyed. AWS will keep the instance around for a "short period" in a terminated state. This "short period" isn't really that short (over 1 hour).
- update the "name" of the instance
- instance state graduates to running, after which ec2 will perform some status checks.
- refresh the list using the refresh icon; while "status check" is initializing, go grab a coffee. this will graduate to N/N checks passed...
- note that you should stop the instance as run time is calculated and billed.
- to access the web based console, check off the EC2 instance in the instances list, then click the Connect button at the top of the screen.
- this will leverage SSO to authenticate, no key management needed.
- you must add the pem to the client's key store, or reference it directly with your client when connecting.
- obtain the global IP of your instance by navigating to the Instances\Instances[then the EC2 instance] page
- launch the client (meta note: I'm using openssh via Windows 10's "native" openssh)
ssh ec2-user@[globalip]
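- for reference, a rough CLI equivalent of the launch wizard (a sketch; the AMI ID and key name are placeholders):
aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t2.micro --key-name demo-key --count 1
aws ec2 describe-instances --query 'Reservations[].Instances[].[InstanceId,State.Name,PublicIpAddress]' --output table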
- need EC2 instance
- need to run web server
- connect to the EC2 instance
- obtain the name of the package
yum install nginx
- install the package
sudo amazon-linux-extras install nginx1 -y
- start the server
sudo systemctl start nginx
- configure a firewall policy
- go to aws ec2 management console
- go to instances
- go to the running target instance that we were interacting with
- go to the security tab
- click on the security group
- under inbound rules, click edit inbound rules
- add a Custom TCP rule for tcp port 80 with a source of 0.0.0.0/0
- test the site access
http://54.221.45.200/
- on the ssh console, navigate to the homedir of the site, and review the index.html contents
cd /usr/share/nginx/html
cat index.html
- change the index.html (as root; this empties the file)
echo -n > /usr/share/nginx/html/index.html
- load the site in your browser
http://54.221.45.200/
- define a port:
- a port is a logical entity which acts as an endpoint of communication to identify a given process.
- identify the process bound to port 80
sudo netstat -ntlp | grep :80
- define a firewall:
- a firewall is a network security system that monitors and controls incoming and outgoing network traffic based on predetermined security rules, allowing connections from trusted sources only.
- define a security group:
- a security group acts as a virtual firewall for instances to control inbound and outbound traffic.
- security groups are associated with EC2 instances.
- navigate to EC2 instance
- observe the security group under the instance info, and note the name.
- on left pane menu, navigate to
Network & Security\Security Groups
- in the list, locate the security group you identified earlier.
- check off that rule to access its settings.
- click on the Inbound rules tab and review.
- if you wish to test, you can remove the port 80 rule and try to access the nginx listener.
- discover your global IP.
- navigate to the security group.
- navigate to the inbound rules and the specific rule for tcp 22.
- edit inbound rules
- modify the source ip to only allow your global ip. Save.
- note that this will also block the browser based console.
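- a CLI sketch of the same lockdown (the security group ID is a placeholder):
MYIP=$(curl -s https://checkip.amazonaws.com)
aws ec2 revoke-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr ${MYIP}/32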
- ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount.
- cost budget: a specified dollar amount.
- usage budget: a specified usage types or use type groups.
- savings plan budget: track the utilization or coverage associated with your savings plans
- reservation budget: track the utilization or coverage associated with your reservations.
- cost explorer allows AWS to access your billing data, which allows AWS to report on your usage via budgets.
- You do not need to enable cost explorer for "cost budgets."
- load the management console
- navigate to billing (enter into top search bar)
- on the main billing dashboard, scroll down, locate the Top Free Tier services by usage, and click View All.
- review and compare current vs. forecasted.
- load the management console
- navigate to billing (enter into top search bar)
- in the cost management section, click budgets, then "create a budget" (note that creating a budget will automatically enable cost explorer within 24 hours).
- with cost budget selected, click next.
- interval: monthly.
- recurring.
- enter a budgeted amount, like 2.00.
- scroll past budget scoping, enter a name for the budget.
- click next
- click "add an alert threshold"
- specify a threshold, specify whether the alert is on actual or forecasted, then specify the email for the alerts.
- click next, next, then create a budget
- navigate back to the Budget billing screen and review the budget that was just created (via left pane menu). note the budget is empty.
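- a rough CLI sketch of the same budget (the account ID is a placeholder):
aws budgets create-budget --account-id 111122223333 --budget '{"BudgetName":"demo-monthly","BudgetLimit":{"Amount":"2.0","Unit":"USD"},"BudgetType":"COST","TimeUnit":"MONTHLY"}'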
- root users have unrestricted access to everything. IAM users can have roles assigned to them.
- navigate to the management console
- in the organization drop down, navigate to "my security credentials".
- Note the warning that states that account credentials (root users) have access to ALL AWS resources and to use IAM users to grant limited access to manage services.
- on the main security credentials page, expand the MFA section, activate MFA and proceed.
- virtual MFA -> soft token, QR code, confirm two MFA codes.
- navigate to the management console
- on the search bar, enter IAM and navigate to IAM.
- on the left menu within the IAM module, click on Users.
- click "Add User".
- populate a name, then select password access, leave autogenerated password, and check "user must create a new password at next sign-in", then click next.
- within the set permissions screen, click "Attach existing policies directly", check off AdministratorAccess, click next, review, create user.
- Save the link that is displayed as it is specific to the organization's IAM users, and note that the password is not emailed; you must record it.
- the link contains the Account ID (https://104530835947.signin.aws.amazon.com/console)
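- a CLI sketch of the same user setup (user name and password are placeholders):
aws iam create-user --user-name demo-admin
aws iam attach-user-policy --user-name demo-admin --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
aws iam create-login-profile --user-name demo-admin --password 'TempPassw0rd!' --password-reset-required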
- navigate to the management console
- on the search bar, enter IAM and navigate to IAM.
- on the left menu within the IAM module, click on Users.
- click on the user in the users list.
- click on "Security credentials" tab
- next to "Assigned MFA device" where it says "not assigned," click manage.
- Add a virtual MFA and perform the soft token process.
- a VPC is a logical container where items run.
- VPCs can be partitioned by subnet.
- security can be administered at the subnet level.
- routing can be used for this
- whenever any EC2 instance is created, it is automatically placed into a VPC.
- a default VPC is created per AWS region... meaning, in your account, each region has at least one VPC to assign items to.
- when launching an EC2 instance, you can associate it with a given VPC and subnet.
- navigate to the management console
- navigate to "VPC" via the menu
- on the left pane menu, navigate to Virtual Private Cloud/Your VPCs. note the VPC is created.
- on the left pane menu, navigate to Virtual Private Cloud/Subnets. note the subnets created (including the mask, etc).
- on the left pane menu, navigate to Virtual Private Cloud/Routes. note the routes on the routes tab.
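- the same review via the CLI:
aws ec2 describe-vpcs --query 'Vpcs[].[VpcId,CidrBlock,IsDefault]' --output table
aws ec2 describe-subnets --query 'Subnets[].[SubnetId,CidrBlock,AvailabilityZone]' --output table
aws ec2 describe-route-tables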
- I'd suggest watching this video through: section 23, as Zeal covers node reachability.
- a network connection between two VPCs that enables the communication between both VPCs, including across regions (it used to not be possible) and across accounts(!).
- prerequisites to VPC peering: routing policies, acceptance and more. Configuring these items is covered in the SysOps exam, and is not needed for the Cloud Practitioner exam.
- cannot create peering connections between VPCs that have overlapping CIDR subnets.
- does not act like a Transit VPC.
- above, the VPC A does not act as a transit... meaning, VPC B can't use VPC A's routes to communicate with VPC C.
- navigate to the management console
- navigate to "VPC" via the menu
- on the left pane menu, navigate to Virtual Private Cloud/Peering Connections. note the peering connection is created.
- set "requster VPC CIDRs"
- set "accepter VPC CIDRs"
- network ACLs are stateless
- network ACLs operate at the subnet level, not at the instance level (Security Groups operate at the instance level).
- all subnets in a VPC must be associated with a NACL.
- by default, NACL contains a full allow in INBOUND and OUTBOUND.
- block a single IP or subnet from an entire VPC. Using Security Groups for this would be very challenging because there may be MANY servers.
- Also Security Groups generally do not have Deny policies! They are purposefully set up to only have Allow policies.
- when configuring NACLs, the lower the rule number, the higher the priority.
- when creating a custom NACL, it will DENY all by default. default NACLs (say, when creating an EC2 instance) will ALLOW all.
- note: to allow ICMP, you must do so on the security group level.
- navigate to VPC manager.
- select the VPC where an EC2 instance is created.
- navigate to Security/Network ACLs.
- note that the NACL is associated with subnets within a VPC (we know this already).
- navigate to inbound rules
- add a rule to DENY some access (maybe your global IP) and assign its rule number lower than 100.
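- a CLI sketch of the deny rule (the ACL ID and IP are placeholders):
aws ec2 create-network-acl-entry --network-acl-id acl-0123456789abcdef0 --rule-number 90 --protocol -1 --rule-action deny --ingress --cidr-block 203.0.113.10/32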
- in block storage, the data is stored and retrieved in blocks.
- operations occur on an entire block (the contents).
- most file systems are based on block devices.
- every block has an address, and applications can retrieve a block via a SCSI call to its address.
- there is no storage side meta-data associated with the block except the address.
- thus a block has no description or owner.
- ex: NTFS
- object storage is a data storage architecture that manages data as objects as opposed to blocks of storage.
- an object is defined as data (ex: file) along with all its meta-data.
- this object is given an ID which is calculated from the content of the object (both data and metadata). applications can then retrieve an object by its unique object ID.
- ex: S3
- object storage:
- store virtually unlimited files.
- maintain file revisions.
- HTTPS based interface (API, etc).
- files are distributed in different physical nodes.
- block storage:
- file is split and stored in fixed sized blocks.
- capacity can be increased by adding more nodes.
- suitable for applications which require high IOPS such as database, transactional data.
- aws EBS is a persistent block storage volume for use with EC2 (persists across restarts).
- EBS volumes are external to an EC2 instance and can be dynamically attached.
- each EBS volume is designed for 99.9999% availability and is automatically replicated within its availability zone.
- EBS is elastic in nature: it supports dynamically increasing capacity and performance, and changing the volume type of live volumes.
- AWS EC2 is compute, referring to memory and CPU.
- storage options, although attached to instances, may differ ("EBS-only", etc).
- go to instances
- on the left side menu pane, click on an instance, then on the boot device (path).
- this will bring you to Elastic Block Store/Volumes section on the left side menu pane.
- you can manage the volume here (such as size, volume type [which affects IOPS]).
- remember, EBS volumes are mounted over the network to the EC2 instance.
- when the EC2 instance is stopped or started, the VM might migrate to a different host entirely. since EBS is mounted via the network, it can be re-attached as needed, no matter where (within the availability zone)
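- attaching an EBS volume from the CLI (a sketch; the IDs are placeholders, and the volume must be in the instance's AZ):
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdf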
- AWS Instance Store provides temporary block storage volumes for use with EC2.
- This storage is located on the disks that are physically attached to the host computer.
- The size of instance store varies depending on your instance type.
- data in instance store is lost in the following situations:
- the underlying disks fail
- the EC2 instance is stopped or terminated
- instance store is included in the cost of EC2 instance, so they are quite cost-effective.
- if planning to use instance store, make sure you backup your data to central storage, like S3.
- if you go through this as a lab, it will cost you money. It's a review, so you don't need to launch the instance.
- go to instances, and click launch instance.
- note the root device type differences, and filtering on the left pane, under Community AMIs: instance store or EBS.
- filter on instance store.
- Locate an AMI, and "select"
- Note that only certain instance types (RAM and CPU) support instance store. Note that you cannot change the capacity of the instance store volume.
- after starting an EC2 instances with instance store, you only have two options for instance state: reboot and terminate. You cannot Stop.
- single point of failure should be avoided
- ELB will scale up and down automatically. There are other load balancers available as SaaS from AWS
- ELB allows distribution of incoming traffic to multiple instances similar to traditional load balancers.
- ELB is capable of handling rapid change in the network traffic patterns (volume).
- Customers don't need to worry about managing the internals and other HA concerns.
- stand up two instances (adjusting the number of instances to 2). name the instances.
- add port 80 to the security group.
- install nginx on both
sudo su
yum -y install nginx  # on amazon linux 2, use: amazon-linux-extras install nginx1 -y
echo $(hostname) > /usr/share/nginx/html/index.html
service nginx start
- on the left side menu pane, go to Load Balancing/Load Balancers, Create Load Balancer.
- select classic load balancer (see the bottom of the page), provide a name, specify the load balancer port, and the instance port (the real target/listening port of the service on the Instances).
- assign security group. (accept defaults)
- create a health check.
- associate the instances.
- create.
- go back to the Load Balancers list, select the new load balancer, and then the instances tab. ELB will perform the health check on the service on the Instances and return the status. Wait for the instance members to be "in service." This may take longer than you expect.
- Remember that if you restricted the security group or VPC, the ELB poller will not work.
- If you make the above correction (setting security group to src 0.0.0.0/0), edit the health check on the ELB, and save to refresh.
- go to the description tab, obtain the DNS name, and navigate to the DNS name via http.
- when testing is completed, delete the ELB instance.
- i had problems getting access to the ELB DNS name:
- the instances are InService.
- the instances' security groups allow inbound from 0.0.0.0/0 to tcp 80.
- the VPC security group allows all traffic.
- the subnet where the instances are is shared with the ELB (obviously because the ELB can reach the instances).
- this same subnet has a route for 0.0.0.0/0 to an internet gateway.
- I had to explicitly add an entry for tcp 80 inbound on the security group that applied to the VPC resource from 0.0.0.0/0. The only pre-existing allowed src was the VPC sg itself.
- a tag is a label that you assign to an AWS resource.
- a tag is a key-value pair.
- ex: there are three EC2 instances running.
- you can apply labels
- navigate to Instances
- add a tag to an instance
- You can adjust the Name value here.
- each resource can have multiple tags
- not all services support tagging
- max tags per resource is 50
- key-value pair is case-sensitive
- tags can be used within Billing
- associating with a team and owner who will be charged.
- tags can be used with IAM to control access to resources
- items to associate with:
- owner (billing)
- role (IAM)
- upper and lower environments (IAM)
- case sensitivity enforcement
- likely lower-case
- review https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html
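- a CLI sketch for tagging (the instance ID and tag values are placeholders):
aws ec2 create-tags --resources i-0123456789abcdef0 --tags Key=Owner,Value=teamA Key=Role,Value=web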
- change capacity according to needs
- traffic patterns
- you can launch and remove servers based on load
- ex:
- if average CPU utilization > 60% --> add two more instances
- if average CPU utilization is <30% --> remove two instances
- ex:
- EC2 auto scaling helps maintain app availability and automatically add or remove according to conditions.
- minimum instances
- maximum instances
- threshold
- scheduled scaling: associate scaling with time based schedule
- dynamic scaling: follow load (reactively)
- predictive scaling: understands historic load patterns and scales (proactively)
- navigate to Auto Scaling/auto scaling groups
- review the automatic scaling policy
- connect to the instance
- generate cpu load with
dd if=/dev/zero of=/dev/null
- wait 15-30 minutes
- verify that a new instance was spun up
- stop the dd
- wait 15-30 minutes
- verify that the new instance was spun down
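- a rough CLI sketch of a 70% CPU target tracking policy (the ASG name is a placeholder):
aws autoscaling put-scaling-policy --auto-scaling-group-name demo-asg --policy-name cpu-target-70 --policy-type TargetTrackingScaling --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":70.0}'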
- longer term storage
- replicated / backed up
- there are multiple cloud storage providers available; which fits best depends on your use-case:
- mediafire
- onedrive
- S3
- google drive
- object storage designed to store and retrieve any amount of data from anywhere
- designed for 99.999999999% durability and 99.99% availability
- the aspect that makes AWS S3 so powerful is its feature set:
- versioning
- encryption
- logging
- transfer acceleration
- cross region replication
- events
- requester pays
- static website hosting
- tagging
- bucket: central folder where data is stored; bucket names are universally unique across all of S3.
- object: actual files uploaded to the bucket.
- locate s3 in the console
- left side menu> buckets
- create bucket
- search and find the bucket in the bucket list
- open the bucket
- upload
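- the same demo via the CLI (the bucket name is a placeholder; names are globally unique):
aws s3 mb s3://demo-bucket-name-12345
aws s3 cp ./file.txt s3://demo-bucket-name-12345/
aws s3 ls s3://demo-bucket-name-12345/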
- backup is taken care of
- each customer might have different requirements, so S3 offers tiered storage classes with corresponding costs
- most expensive
- offers high durability, availability and performance object storage for frequently accessed data.
- default storage class
- data is stored in minimum of 3 availability zones
- 99.999999999% eleven-nines durability
- for data that is accessed less frequently, but requires rapid access when needed.
- 99.999999999% eleven-nines durability
- comparing storage cost of 1TB of data stored in S3 based on accessibility patterns:
- low-cost storage class for data archiving and long term storage.
- ideally meant for data that needs to be archived for years.
- can accelerate downloads with pay-as-you-go
- designed to optimize cost by automatically moving data to the most cost-effective tier
- data is stored in minimum of 3 availability zones
- example
- 1TB of data stored:
- standard S3 = $23.44
- standard IA = $12.80
- 1TB of data stored:
- organization stores terabytes of data in S3.
- S3 monitors access patterns of objects in S3 Intelligent-Tiering and moves ones that have not been accessed for 30 consecutive days to the infrequent access tier.
- if an object in the infrequent access tier is accessed, it is automatically moved back to the frequent access tier.
- a monthly monitoring and automation fee is charged per object.
- upload a file to a bucket.
- select Storage Options> intelligent tiering
- remember: storage classes like S3 Standard and Standard-IA are stored across at least 3 availability zones.
- in One Zone Infrequent Access, data is stored only in one AZ
- OZIA costs 20% less than S3 Standard-IA
- data will be lost in the case where one AZ is destroyed.
- use case: storing secondary backup copies of on-premises data or easily recreatable data.
- low cost, cloud-archive storage service that provides secure and durable storage for data archiving and online backup.
- options:
- glacier
- glacier deep archive
- you must make this decision when you upload data.
- upload a file
- you can select the storage class: glacier and glacier deep-archive
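- choosing a storage class at upload time via the CLI (bucket/file names are placeholders):
aws s3 cp ./archive.tar s3://demo-bucket-name-12345/ --storage-class DEEP_ARCHIVE
aws s3 cp ./report.pdf s3://demo-bucket-name-12345/ --storage-class INTELLIGENT_TIERING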
- old:
- if you want to host a static web site, you need to create and manage the entire server infrastructure on a cloud provider: EC2, SSM, ELB.
- new:
- S3 now has a feature where you can upload a static site and it will automatically host.
- static website: individual webpages include static content. They might also contain client-side scripts.
- dynamic website: relies on server-side processing, including server-side scripts such as PHP, JSP, or ASP.NET.
- create an index.html like
<h1>this is my web site</h1>
<p> yep</p>
- create a bucket
- open the bucket, upload the index.html file (standard upload).
- go to the bucket> properties> scroll down and access the static web site hosting.
- index document: "index.html"
- identify the "bucket website endpoint"
- go to the permissions tab, and uncheck "block all public access"
- you must also check off index.html in the objects tab, and then set it to "make public" via the actions menu
- consider making a CNAME for the "bucket website endpoint"
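- the hosting config can also be set from the CLI (a sketch; the bucket name is a placeholder, and public access must still be allowed as above):
aws s3 website s3://demo-bucket-name-12345/ --index-document index.html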
- organizations tend to keep terabytes of data in S3. Cost becomes a primary concern.
- storing the data directly into S3 standard is not usually the best approach to storage. Depending on the access patterns and criticality of data, data should be transitioned to the appropriate storage class.
- example of transition actions
- store 1 month of logs in S3 standard
- move logs older than 1 month to S3 standard IA
- move logs older than 6 months to glacier
- go to the bucket> management tab
- lifecycle rules> create
- choose a rule scope
- add rule actions.
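- a CLI sketch of a comparable lifecycle rule (bucket name, prefix and day counts are placeholders):
aws s3api put-bucket-lifecycle-configuration --bucket demo-bucket-name-12345 --lifecycle-configuration '{"Rules":[{"ID":"archive-logs","Status":"Enabled","Filter":{"Prefix":"logs/"},"Transitions":[{"Days":30,"StorageClass":"STANDARD_IA"},{"Days":180,"StorageClass":"GLACIER"}]}]}'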
- databases exist.
- flat file: excel
- relational database: mysql/mariadb (sql)
- NoSQL database: MongoDB, DynamoDB (ex: key-value pair)
- can manage databases on own EC2 instance or leverage SaaS via AWS RDS
- provisioning database.
- host level security: patching, hardening, others.
- configure replicas, HA, upgrading, monitoring, etc.
- provision database within UI
- host level security: patching, hardening, others. taken care of by provider
- managed by provider: replicas, HA, upgrading, monitoring, etc.
- go to RDS
- create a database with an engine
- access database
- maintaining DBs in EC2 instances, you have to: provision the DB, host security (patching, hardening and others), configure replicas, HA, updating and others.
- these items are usually outsourced to specific people
- fully managed relational database service in cloud
- AWS manages underlying hardware, OS, security, software patching, automated HA all for you.
- A client connects directly to the DB service itself.
- provisioning:
- resize hardware on demand
- multi-AZ deployments
- create read replicas
- go to mgmt console> services
- RDS service
- DB instances
- create database
- select mysql
- select the version
- select templates
- production: HA is already selected
- Dev/Test
- free tier
- input a DB instance name
- set up master user username (input a password)
- select the DB instance size
- sizes may be restricted depending on template in use
- select the storage
- autoscaling: storage can re-size if your disk is getting full
- select the VPC, security group, and publicly accessible.
- audit logs and backups are configurable
- you can review the estimated monthly costs
- create database: the database enters the "creating" state (which may take 5-10 mins)
- endpoint is the DNS name, port is the listener.
- create an EC2 instance in the same region as the RDS instance.
- install mysql client
yum -y install mysql
- go to the RDS/databases, locate the endpoint, and copy.
- you must modify the VPC security group that is bound to the RDS database and allow inbound 3306.
- invoke connection
mysql -h [target FQDN] -P 3306 -u admin -p
#enter the password
- run some commands
show databases;
create database firstdb;
show databases;
- provides enhanced availability and durability for DB instance
- within multi-AZ deployments, the service automatically creates a standby DB instance in a different AZ and synchronously replicates data from the primary DB instance.
- you have a primary and standby database
- HA failover occurs without any problem
- the endpoint remains the same
- create a mysql database
- use case: dev/test
- create multi-AZ: enable; cost goes to 200% (use of multi-AZ is not free tier)
- leave all as default, create database
- automatic failover occurs under any one of the following circumstances
- loss of availability in primary AZ
- loss of network connectivity to primary
- compute unit failure on primary
- storage failure on primary
- latency increases
- you can review failures within RDS console via the Events section on the left side.
- select the RDS instance, actions menu> reboot
- select reboot with failover
- Review Events:
- "multi-az instance failover started"
- "multi-az instance failover completed"
- you can receive notices when specific Events occur
- on the RDS console, select Event subscriptions
- Create event subscription
- target: ARN, new email topic, new SMS topic
- source: instances
- you can specify a single instance, or all instances
- you can specify event categories (like "failover", "failures")
- write is committed to master
- write is transmitted to slave
- synchronous replication: used by Multi AZ deployments
- write is not committed unless it is written on both replicas.
- results in:
- with the benefit of higher durability than async
- with the cost of higher transaction latency
- asynchronous replication: used by read replicas
- writes do not occur in real time to master and replica.
- write is committed to master, then replicated to replica(s).
- replica can fall behind master, determined by replication lag (latency, etc).
- writes do not occur in real time to master and replica.
- in case of infrastructure failure, RDS performs automated failover from primary to standby.
- endpoints remains the same after failover, no need to modify clients' configurations.
- multi-AZ is supported for: mysql, mariadb, postgresql, and oracle.
- multi-az is based on synchronous repl while Read Replica is based on asynchronous repl.
- RDS instance type that's a closed source database.
- available databases are divided into two types:
- open source
- closed source
- commercial offerings do come with some advantages, like better HA etc
- mysql and postgresql compatible relational database built for cloud.
- combines performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open source databases.
- Aurora:
- is up to five times faster than standard mysql databases, three times faster than standard postgresql databases.
- provides the security, availability, and reliability of commercial databases at 1/10th the cost.
- note that there is no storage selection when building aurora, because the storage automatically scales.
- monitoring service to monitor instances/services
- collect and monitor log files, set alarms, automatically react to changes in AWS resources.
- there is a cloudwatch console.
- cloudwatch handles:
- alarms
- events
- logs
- metrics
- go to an EC2 instance, go to monitoring tab.
- you can see graphs of perf metrics.
- fully managed messaging and mobile notification service for delivering messages to subscribed endpoints.
- SNS follows a pub-sub architecture
- examples of pub-sub:
- mailing list subscription
- "multicast" subscription
- examples of pub-sub:
- SNS publishers might be:
- anything that can send data to an SNS Topic.
- SNS subscribers might be:
- sms
- http(s)
- lambda
- SQS
- Platform application endpoint
- cloudwatch integrates well with SNS
- ex: metrics based alerting
- push notifications can be sent
- there are only specific supported regions
- SMS delivery can fail; the failure reason ("providerResponse") can be one of many
- login to mgmt console
- go to SNS console/topics
- create new topic
- go to subscriptions of the new topic and create a new subscription
- create an email subscription -> pending confirmation
- create an sms subscription (include international dial code) -> automatically confirmed
- You can then access the topic and click publish message, or click publish message from the sns console.
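- the same flow via the CLI (topic name, account ID and email are placeholders):
aws sns create-topic --name demo-topic
aws sns subscribe --topic-arn arn:aws:sns:us-east-1:111122223333:demo-topic --protocol email --notification-endpoint you@example.com
aws sns publish --topic-arn arn:aws:sns:us-east-1:111122223333:demo-topic --message "hello from sns"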
- attacks, DDoS
- unmanaged: user configures and maintains their own DNS server. Good for learning, not recommended long term.
- managed: service managed.
- scalable DNS web service with HA.
- go to route53 console
- hosted zones
- create a zone
- create a record
- data center exists
- hypervisor
- VMs
- services
- virtualization allows us to run multiple OS on a single hardware platform
- understand the capacity requirements
- launch EC2 instance in HA
- install python packages, pip, package dependencies, etc
- take care of security, patching, monitoring, auto-scaling.
- ex: google cloud platform, heroku
- select a language to run your app
- upload code
- run code
- whenever a configured event occurs, the lambda function runs
- the user only gets charged for the compute time the function consumed.
- ex:
- a user uploads a video
- the video needs to be converted
- the conversion process runs
- you are only charged for this function
- go to lambda console
- use example lambda function
- select the function via its radio button
- test
- input
- observe duration
- fully managed compute service in response to an event (including time based).
- free tier: first 1 million requests per month and 400,000 GB-seconds of compute time per month (the duration allowance differs per assigned memory).
- no need to:
- worry about servers
- worry about capacity
- deploy across nodes
- worry about scaling & HA
- worry about OS updates, security
- what you need:
- bring your own code
- pay-per-execution only; never pay for idle resources.
- hello world application
- program language: node
- purpose: hello world
- IAM role required: none
- Inside VPC: none
- navigate to lambda
- create a function
- author from scratch > create function
- provide a name "demo-hello-world"
- leave node.js as runtime
- click create function
- on the new function, go to General Configuration tab> note the Memory (128MB) and timeout (3 seconds)
- click test and in the upper right corner of the window note the "max memory used" and the "time" ("billed duration").
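- you can also invoke the function from the CLI and check the response (a sketch):
aws lambda invoke --function-name demo-hello-world /tmp/out.json
cat /tmp/out.json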
- performance
- as the number of visitors increases, performance can go down.
- if a website has 1 image and 1000 users are visiting it, the same image will need to be sent to 1000 users.
- solution: scale the server.
- security
- a web app faces various types of attacks, ranging from DoS to web-app attacks.
- solution: configure protections (DDoS, WAF).
- high availability
- if the backend content provider is down, the CDN can still provide resources (static web pages, images, etc) to requestors.
- purpose built system; acts as a proxy and forwards requests to the backend system.
- has built in features: DDoS, WAF, cache and more.
- examples:
- cloudflare
- Akamai
- Amazon CloudFront
- note that the backend source storage can be an S3 bucket; so place at least an index.html and admin.txt on an S3 bucket.
- go to management console
- go to cloudfront/distributions, create distribution
- assign the origin to the s3 bucket.
- when created, you will be given a domain name for the distribution. You can navigate to this and access the resources.
- navigate to WAF & Shield, create a web ACL, attach it to associated aws resources (select the cloudfront distribution), and create a WAF rule.
- in this example, create a block rule for "admin.txt".
- attempt to access [cloudfront distribution domain name]/admin.txt which should be blocked by the web ACL rule.
- To verify that a resource is cached on the cloudfront CDN, load an image that is served off the S3 bucket, inspect the page via browser development tools, go to the network tab, click into the name in the left column, and go to headers.
- Locate the x-cache header. "Miss from cloudfront" == not cached on CDN; "Hit from cloudfront" == cached on CDN.
- multiple hits of a resource usually result in CDN caching
- restrict from origin countries
- CDN is generally used for: content caching, WAF, DDoS mitigation.
- Edge Locations are geographically distributed content caches that reduce latency from users to resources.
- The number of Edge Locations you choose to cache resources at affects the charge.
- navigate to cloudfront management
- access a distribution
- under the distribution setting
- review price class, and you can minimize edge locations.
- this was covered above
- create a server with sample HTML web site
- use an s3 bucket
- you should switch the s3 bucket to publicly accessible.
- you must select for everyone to read object permissions.
- create cloudfront distribution
- specify the origin domain name: this will allow you to select the s3 bucket.
- switch the price class to use only us, canada, europe
- specify the default root object: index.html
- deploy, 10-15 minutes.
- load the website from cloudfront
- navigate to the site
- explore various features of cloudfront
- note that if the x-cache header shows a hit, this means the response came from a CDN/edge location.
- allows users to accelerate data uploads from all over the world to a centralized s3 bucket.
- transfers are accelerated by routing data to the closest CloudFront Edge location.
- access an S3 bucket
- locate transfer acceleration setting> edit> enable
- you can now use the "accelerated endpoint", instead of the direct Object URL of the publicly shared S3 resource.
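- a CLI sketch for enabling and using acceleration (the bucket name is a placeholder):
aws s3api put-bucket-accelerate-configuration --bucket demo-bucket-name-12345 --accelerate-configuration Status=Enabled
aws s3 cp ./bigfile.bin s3://demo-bucket-name-12345/ --endpoint-url https://s3-accelerate.amazonaws.com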
- there are two ways to build a server:
- manually interacting with AWS console to build an EC2 instance
- through automation, CloudFormation
- the demo here is for terraform
- allows you to build entire stack needed for an instance to be functional.
- ex: development environment needs to be built, then test environment needs to be built.
- reusable code
- managing infrastructure via source control
- enable collaboration
- navigate to management console
- navigate to CloudFormation> Create new stack
- you can select the LAMP stack template
- you can review the Designer, and review the resources available on the left pane.
- next> provide: stack name "demo", dbname, username, password, key> next> create
- you can view the Events tab to watch progress; when finished, review the Outputs tab.
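- a rough CLI sketch of the same stack workflow (stack name, template file and parameters are placeholders):
aws cloudformation create-stack --stack-name demo --template-body file://lamp.yaml --parameters ParameterKey=DBName,ParameterValue=demo ParameterKey=DBUser,ParameterValue=admin
aws cloudformation describe-stacks --stack-name demo --query 'Stacks[0].Outputs'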
- deep learning visual analysis service
- identifies things that are in images
- PaaS
- automatically creates all resources needed to run code (like ELB, EC2, auto scaling group, cloudwatch, etc)
- automatically installs software and creates necessary configurations
- use case: deploy hello world app
- resources need to be created:
- service that deploys and scales web apps and services
- simply upload code and elastic beanstalk automatically handles build of infrastructure and services.
- managed source control service provided by AWS for hosting git repo.
- go to console -> "code commit"
- note that you don't get all features as the organization root user; work as an IAM user instead
- create repo
- must have a git client
- create an IAM user with proper access
- there are three IAM policies for "codecommit"
- read only
- full access
- full access less repo deletion
- record the AWS CLI credential file with access key and secret key from IAM user.
- clone
- obtain http link from repo you created
- run
git clone http:///
- prompted for username and password
- go back to IAM user, click Security Credentials and scroll down and generate HTTPS git creds for AWS CodeCommit and copy username and password
- make changes, then commit changes
- status:
git status
- add:
git add test.txt
- commit:
git commit -m "adding test.txt"
- push:
git push origin master
- prompts for username and password
- a server can contain a lot of log files, from system logs to app logs.
- it's useful to have logs on hand.
- if you need to tshoot, you need to give an individual access to the server.
- this is a problem.
- if the server gets terminated, the logs are lost. (will occur with auto scaling)
- no way to setup an alarm on certain conditions or create complex filters.
- create a central log server
- there are multiple approaches to logging data centrally
- linux comes with a default logging daemon called rsyslog
- commercial log monitoring: splunk, elk
- services like aws cloudwatch logs also provide basic capabilities
- cloudwatch logs can be used to monitor, store and access logs from amazon EC2 instances, aws cloudtrail, route 53, and other sources.
- cloudwatch logs are highly available.
- the cloudwatch service name is awslogd
- log groups: each for specific kind of logs
- like "/var/log/messages"
- log stream: each client that pushes is a log stream (EC2 instance IDs)
- an app for restoring an image
- image gatherer: takes the images from the user via an upload button.
- image enhancer: receives the image from the image gatherer
- in response to user load, many new servers hosting the image enhancer are built
- image gatherer must be configured to handle this logic.
- leverage a pub-sub architecture
- the image gatherer sends image to a queue
- the image enhancer(s) grab images from the queue and enhance!
- the queue must be highly available
- fast, reliable, scalable, fully managed message queuing service.
- SQS makes it simple and cost effective to decouple components of a specific application.
- go to "sqs"
- create a queue
- create two ec2 instances
- on one instance, run ./send-messages.sh
- on the other instance, run ./receive-messages.sh
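- the course scripts aren't shown; a minimal CLI equivalent might look like this (the queue URL is a placeholder):
aws sqs send-message --queue-url https://sqs.us-east-1.amazonaws.com/111122223333/message-queue --message-body "hello"
aws sqs receive-message --queue-url https://sqs.us-east-1.amazonaws.com/111122223333/message-queue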
- tightly coupled system: the producer sends messages directly to a system. If the target system is down, then the messages can't/won't be received.
- loosely coupled system: the producer sends messages to a message queuing service, and consumers grab messages from the queuing service.
- go to sqs console
- create a queue
- select type:
- standard (select this)
- FIFO
- name the queue "message-queue"
- note under Configuration,
- you can change the "message retention period" which sets how long a message sent to the queue is retained for consumption.
- maximum message size (1-256kb)
- create the queue
- click "send and receive messages" and send a message
- if you go back to the queue list page, you can see a message is "available"
- go back into the queue "message-queue" and click "poll for messages". Note the receive count.
- note that the message is still in the queue after being retrieved.
- purging messages can be explicitly done.
- message expiration can be set per message.
- you can check off the message and click delete.
- data transport solution that accelerates moving terabytes to petabytes of data into and out of AWS using storage devices designed to be secure for physical transport.
- Snowball helps to eliminate challenges that can be encountered with large-scale data transfers including high network costs, long transfer times, and security concerns.
- supported jobs
- aws snowball supports two major options while creating a new job: import or export to S3
- go to "snowball" console
- create a job
- plan a job
- import or export
- add a new address for the snowball device
- select device for import job, and specify the target s3 bucket.
- you can set a KMS key for encryption.
- create an IAM role
- you can create a new SNS topic (for an email notification)
- create job and receive snowball
- connect snowball device to your network
- copy data
- send back to AWS and they will upload to S3.
- snowball 50TB: $200
- snowball 80TB: $250
- 10 days of onsite usage is free, each extra day is $15
- inbound (import): $0
- outbound (export): [differs per region]
- shipping charge
- s3 charge (data transfer out, if for export)
- with snowball you can transfer hundreds of terabytes or petabytes of data between on-prem data centers and S3
- in the US region, snowball devices come in two sizes (50 and 80TB). all other regions have 80TB only.
- encryption is enforced, protecting your data at rest and in transit.
- you don't have to buy or maintain your own hardware.
- if you want to transfer less than 10TB of data between your on-prem data centers and amazon S3, snowball is probably not the best choice.
- many users use the same query to a database
- with a caching solution, you can cache the response associated with the frequent queries
- this allows better response time and decreases the load on the database server.
- two solutions: memcached and redis
- you must configure, optimize and secure these engines.
- elasticache is a fully managed service that makes it easy to deploy, operate and scale an in-memory data store and cache in the cloud.
- provides HA, compensating for failed nodes.
- go to the elasticache dashboard
- create a new cluster, select redis
- how many replicas
- multi AZ failover
- backups
- hybrid storage service that allows the on-prem apps to easily use cloud storage
- an appliance that provides access via natively supported protocols (NFS, iSCSI) to on-prem devices (including a specific disk on an OS instance).
- uses NFS or iSCSI which the app connects to and stores data
- when you write a file to disk, the storage gateway will push the file to the backend (which can be S3, Glacier, or incremental EBS snapshots).
- data is stored primarily locally while async backing up to AWS.
- used when on-prem devices need to access large amounts of data.
- data is stored primarily on AWS S3 with cache of recently read and written data locally (data is NOT stored locally).
- used when on-prem devices need to access small amounts of data.
- virtual tape stored in S3 with frequently accessed data stored on-prem
- tape backup is the practice of periodically copying data from a primary storage device to a tape cartridge so data can be recovered if there is any crash or failure on the primary device.
- tape solutions remain the most cost effective solution to date.
- used when on prem data can tolerate tape storage and retrieval... this exactly mimics tape based storage.
- there can be various DR designs that we can implement; which one depends directly on how quickly we want to recover from a disaster, in short RTO and RPO.
- broadly classified into four types:
- backup & restore: constantly backup data and store it to S3 (for example), and restore when a DR scenario occurs.
- for on-prem server with large amount of data, you can then use technology like direct connect or import/export to backup their data to AWS.
- pilot light: minimal version of server in stopped state (or even just have the AMI present to build the server)
- cost: lowest, recovery: slowest.
- warm standby: servers are running at minimum sizes
- when DR scenario occurs, the servers are scaled up for prod.
- cost: moderate, recovery: okay.
- multi-site: complete 1:1 mirror of prod environment
- cost: high, recovery: short.
- S3
- Glacier
- data import/export
- EBS
- storage gateway
- direct connect
- RDS
- VM import/export
- elastic beanstalk
- Route 53
- a service that improves the availability and performance of your apps for local and global users. provides static IP addresses that act as a fixed entry point to your app endpoints in a single or multiple AWS regions.
- geo routing to most local AZ
- start two EC2 instances in different regions serving nginx site (or ELB instances)
- configure the global accelerator
- associate with the sites
- you are given global IPs, and these IPs will direct you to the most local region.
- service that turns text into speech
- call a contact center and respond with an account status
- go to polly console
- can enter text, select voice and click to speak.
- wordpress plugin
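- the same demo from the CLI (voice and text are placeholders):
aws polly synthesize-speech --output-format mp3 --voice-id Joanna --text "hello from polly" hello.mp3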
- shared file storage
- highly available
- security
- scalability
- Amazon EFS provides a simple scalable elastic file system for linux based workloads for use with AWS cloud services and on-premises resources.
- built to scale on demand to petabytes without disrupting apps, growing and shrinking automatically.
- provides massively parallel shared access
- go to EFS console
- create a file system
- create two EC2 instances
- present the EFS file system to both EC2 instances
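- a minimal sketch of the mount step on each instance (hypothetical file system ID; assumes the amazon-efs-utils package on Amazon Linux):
```sh
# install the EFS mount helper and mount the shared file system
sudo yum install -y amazon-efs-utils
sudo mkdir -p /mnt/efs
sudo mount -t efs fs-12345678:/ /mnt/efs

# plain NFSv4.1 works too:
# sudo mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs
```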
- developed to help architects build secure, high-performing, resilient, and efficient infrastructure for their apps.
- five pillars
- operational excellence: focuses on running and monitoring systems so that they provide business value
- perform operations as code
- annotate documentation
- make frequent, small, reversible changes
- refine operations procedures frequently
- anticipate failure
- learn from operational failures
- example
- production server accidentally taken down. If you utilize IaC, you will have a personal or peer review of infrastructure changes.
- do a pre-mortem
- test various failure scenarios before
- security: focuses on protecting info and systems
- implement a strong identity foundation
- enable traceability
- apply security at all layers
- automate security best practices
- protect data in transit and at rest
- keep people away from data
- prepare for security events
- reliability: focuses on the ability to prevent and quickly recover from failures to meet business and customer demand.
- test recovery procedure
- automatically recover from failure
- scale horizontally to increase aggregate system availability
- stop guessing capacity
- auto scaling
- manage change in automation
- performance efficiency: focuses on using IT and computing resources efficiently
- democratize advanced technologies
- go global in minutes
- use serverless architectures
- experiment more often
- mechanical sympathy
- cost optimization: avoid unneeded costs
- adopt a consumption model
- measure overall efficiency
- stop spending money on data center operations
- analyze and attribute expenditure
- anticipate failure
- use managed services to reduce cost of ownership
- visibility of errors
- the service health dashboard presents backend (AWS-side) issues
- the personal health dashboard displays issues that are impacting your resources, or potentially impacting services that you use, in your AWS account
- visible via "bell icon" on top toolbar
- dashboard
- you can set up email alerts
- directs to cloudwatch rules engine
- SNS topic, lambda, etc.
- event log keeps historic list of issues
- aws has more than 50 services.
- each is pay-as-you-go
- pay less when we reserve
- if you commit to paying for a certain amount over time, the charge is less.
- versus pricing on-demand (pay-as-you-go)
- you can specifically select a reserved instance of EC2, for example
- if you commit to paying for a certain amount over time, the charge is less.
- pay even less per unit by using more
- pay even less when AWS grows
- allows customers to only pay for the resources they utilize
- benefits
- no large upfront expenses
- pay for only what is being used
- pay only as long as you need
- practical benefits
- no long term contracts
- no licensed pricing dependencies
- allows us to have resources based on need and not forecast.
- for certain services like EC2 and RDS you can purchase reserved capacity depending on the predictive usage that an org might have
- allows you to save up to 75% versus on-demand
- you can pay for reserved instances in three ways
- all upfront (largest discount)
- partial upfront (lower discount)
- no upfront (smallest discount)
- volume based discounts as usage increases
- services like S3 or EC2
- since 2006, AWS has lowered pricing 44 times.
- competition
- payment options
- with on-demand instances, you pay for compute capacity per hour or per second.
- no up front payments
- increase or decrease capacity whenever it's needed
- can lead to unexpected issues
- the AWS side doesn't have a clear picture of needed capacity.
- this impacts customers: if you scale up and there are no hardware resources for your VM to utilize, you receive a "launch fail: insufficient capacity" error.
- discounted up to 75% vs. on-demand
- you are committing to run a specific instance for a certain period of time
- assigned to a specific AZ and capacity.
- if you purchase but do not run a matching instance, the reservation is still billed, so the discount is wasted.
- allows us to bid on spare EC2 compute capacity at up to 90% off the on-demand cost.
- basically, they allow you to bid on spare compute capacity. This instance is not guaranteed to run.
- This instance may stop at ANY time.
- go to EC2
- spot request on the left side panel
- savings plans: allowed for EC2 and fargate
- commit to a consistent amount of usage for a period of time.
- a physical EC2 server dedicated for your use.
- available as: on-demand or reservation
- high pricing
- go to pricing calculator
- you can select the pricing strategy (as covered above)
- the most direct way to interact with AWS cloud support engineers
- helps understand and troubleshoot problems
- for users who:
- are experimenting with AWS
- have production use of AWS
- need help when an issue is caused by AWS
- have business-critical use of AWS
- select the proper support plan that meets your use case needs
- see documentation "compare AWS support plan"
- go to support center
- free
- no technical support
- not free
- can raise support tickets via email
- $29/month
- not free
- 24/7 access to support via email, chat, and phone
- cloud support engineers
- $100/month
- not free
- 24/7 access to support via email, chat, and phone
- senior cloud support engineers
- dedicated TAM
- shorter SLA response times for cases
- business always needs a goal that has been set
- business demand:
- know your current business demand
- how do you expect to grow in the long term
- is there existing flexibility to meet that demand in a cost-effective way?
- capacity planning:
- what is the average server utilization?
- how much are we over-provisioned to meet peak load?
- operational challenges:
- is storage capacity sufficient?
- HA?
- benefits of AWS:
- pay as you go
- stop guessing capacity
- lower overall cost
- agility to scale
- go global in minutes
- the TCO calculator is available to understand the cost comparison between using AWS versus on-prem/data center over time.
- each service has its own documentation
- HTML, PDF
- a lot of best practices
- well-architected framework
- security
- pricing
- ...and more
- many aws accounts
- common practice to set up accounts for each environment: prod, dev, testing.
- consolidated billing allows you to associate many accounts with a central paying account via AWS Organizations.
- go to AWS Organizations
- enable, and select only for consolidated billing
- enter account IDs
- one bill covering all linked AWS accounts
- easier to understand project costs for budgeting
- easier to predict volume pricing discounts
- easier to predict reserved instance pricing
- volume pricing is combined together!
- reserved instances can be shared between accounts
- this means that if you have reserved instances on one account but on-demand instances running on another account, the reserved instances can be logically shifted to the other account to assume the "role" of the on-demand instances... hence lowering costs.
- max of 20 linked accounts by default
- use the paying account for ONLY paying, not for ANYTHING else
- even when linked, the paying account doesn't have access to the resources of any linked account.
- central catalog from various software vendors that allows you to easily browse and deploy software on instances.
- pay as you use.
- you pay for the software per period of use. If you use it for 10 hours, you only pay for software licensing for 10 hours.
- can see a monthly estimate
- can subscribe to software
- can launch an EC2 instance with the software
- easy-to-use interface to visualize, understand, and manage costs and usage.
- you go to the billing console and then enable cost explorer
- go to cost explorer
- will show daily cost and monthly cost
- can break down cost per region
- can break down cost by instance type
- can download CSV
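- the same data is available from the CLI; a minimal sketch (dates are arbitrary):
```sh
# daily unblended cost for January, grouped by service
aws ce get-cost-and-usage \
  --time-period Start=2024-01-01,End=2024-02-01 \
  --granularity DAILY \
  --metrics UnblendedCost \
  --group-by Type=DIMENSION,Key=SERVICE
```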
- comprises data, tech, analytics, and human intelligence to provide insight that results in more successful business outcomes.
- a list of questions to be answered, whose metrics are linked to business performance
- ETL is data integration process that combines data from multiple data sources into a data warehouse or other system.
- You should use AWS Glue to perform ETL
- QuickSight: scalable, serverless, machine-learning-powered business intelligence built for the cloud
- provides various integrations natively.
- group of external vendors that have received an endorsement from AWS regarding their expertise in building and implementing solutions on AWS
- design, architecting, managing workloads and apps... accelerating their journey to AWS.
- have their own products/services that will deploy on top of AWS
- can search partner solutions
- what AWS is responsible for and what the customer is responsible for?
- AWS handles physical and some logical security, power, etc.
- customer handles some logical security, applications, etc.
- examples:
- CPU bug, etc.
- OS patching: if an EC2 instance... the customer's responsibility, of course.
- classifications: IaaS, container services, abstracted services
- EBS, EC2, VPC
- user responsibility: OS, encryption, app
- AWS responsibility: hardware
- RDS, elastic mapreduce
- user: data, firewall, encryption of data, IAM
- AWS: OS management, patching, etc.
- SES, SQS, SNS, S3
- user: data, IAM
- aws: OS management, backup and encryption
- security is always shared
- AWS secures the cloud
- user secures data in the cloud
- framework of policies for ensuring that the proper users in an enterprise have the appropriate access to technology resources.
- components: IAM user--> IAM policies--> AWS resources
- entity that you create in AWS to represent the person or app that is used to access resources.
- by default, an IAM user has no access associated.
- go to IAM console
- create new user
- note the policy name. The only default policy allows the user to change their own password.
- an object that specifies permissions on a specific resource
- go to IAM console
- review the permission
- you can review the existing policies
- collection of IAM users.
- policies can be attached to a group.
- similar to an IAM user, but is used for AWS services to access AWS resources.
- AWS resource -> IAM role -> IAM policy -> AWS resource
- IAM roles are assigned to an EC2 instance, for example
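- a minimal CLI sketch of users, policies, and groups (names are hypothetical):
```sh
# create a user and grant it an AWS managed policy
aws iam create-user --user-name demo-user
aws iam attach-user-policy --user-name demo-user \
  --policy-arn arn:aws:iam::aws:policy/ReadOnlyAccess

# groups: attach the policy once, then add users to the group
aws iam create-group --group-name demo-readonly
aws iam attach-group-policy --group-name demo-readonly \
  --policy-arn arn:aws:iam::aws:policy/ReadOnlyAccess
aws iam add-user-to-group --group-name demo-readonly --user-name demo-user
```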
- GUI vs. CLI
- why?
- it's always faster and used for automation
- must download and configure the aws cli package
- must have an IAM user and then access keys configured
- AWS CLI tools come installed on the Amazon Linux AMI. You want to work with AWS CLI v2.
- run aws configure
- then enter the access key and secret key of an IAM user who has some level of administrative access.
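- a quick sketch of the flow:
```sh
# one-time setup: prompts for access key, secret key, default region, output format
aws configure

# verify which IAM identity the CLI is now using
aws sts get-caller-identity
```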
- PCI DSS
- portal for audit artifacts related to compliance
- PCI DSS, HIPAA, and others
- you can go here to obtain documents... attestation of compliance (AOC)
- records resource config changes over time
- audit and compliance
- displays compliance to a configured rule set
- example: AMI IDs
- Resource inventory
- resource timeline:
- you can compare configuration snapshots (will show an actual diff)
- Conformance Pack
- rules and remediation actions
- pay $0.003 per config item recorded per region.
- rule evaluations are recorded and charged.
- conformance packs are also charged.
- access AWS config
- best practice recommendations:
- cost optimization: recommendations that help lower costs
- security:
- fault tolerance: RDS HA verification, etc.
- performance: performance improvement recommendations
- service limits: alerts when usage exceeds 80% of a specific service limit
- depends on subscription
- basic support plan
- offers:
- security group and specific port unrestricted
- IAM use
- MFA on root acct
- performance and service limit checks
- business and enterprise support plan
- navigate to the console and review the module
- can set a weekly update report
- captures AWS-related events specifically. This is not CloudWatch; CloudTrail reports specifically on AWS admin (API) activity.
- enabled by default, but CloudTrail event history retention is limited (90 days).
- locate cloudtrail
- track user activity and api usage
- review event history
- can filter against resource type or resource name, etc.
- increased customization: store logs in S3 or CloudWatch, etc.
- create trail
- choose the storage location
- you can assign a KMS key
- you can enable CloudWatch Logs
types of events:
- data events
- management events
- read events and/or write events
- can exclude rds data API, and KMS
- events appear with a 15-20 min delay
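- a minimal CLI sketch (trail and bucket names hypothetical):
```sh
# create and start a custom trail delivering to S3
aws cloudtrail create-trail --name demo-trail \
  --s3-bucket-name demo-trail-logs
aws cloudtrail start-logging --name demo-trail

# query recent events from the default 90-day event history
aws cloudtrail lookup-events --max-results 5
```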
- protects against DDoS
- provides visibility to DDoS
- provides 24x7 access to AWS DDoS Response Team (DRT)
- advanced == $3,000/month
- business or enterprise support only
- advanced == AWS will return your money for resource utilization costs incurred during a DDoS
- normally, connectivity takes place over the internet
- Direct Connect is a dedicated connection to Direct Connect PoP avoiding the internet
- is a global service, and connection occurs to region
- a Direct Connect provider is decided upon... this is where the PoP is... you need a Letter of Authorization (LoA).
- consistent network performance
- reduces bandwidth costs
- private connectivity to VPCs
- review firewall
- review WAF
- server hardening
- FIM (file integrity monitoring) should be used
- patch management
- scan with a web app scanner
- monitor for open ports and logs
- AWS access secret keys are leaked
- EC2 instance is hacked
- S3 bucket data is leaked
- change AWS root account password
- respond to any notice from AWS support
- rotate and delete all root and AWS IAM access keys
- delete potentially compromised IAM users, and change the password for all other IAM users.
- delete any resources on your account you didn't create, such as EC2 instances, AMIs, EBS volumes, snapshots, and IAM users
- AWS will send an abuse report when it concludes that a node is being used for abusive purposes
- review the AWS Acceptable Usage Policy
- src ip is provided
- web UI for submitting the abuse report
- they want a log file, etc.
- the master image from which you can launch EC2 instances
- you can use a specific baseline AMI, then harden it as needed, then create a new image.
- snapshotting occurs within the console
- action> image> create image
- navigate to the AMI section on the left side panel
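- the same snapshot-to-image flow from the CLI (instance ID hypothetical):
```sh
# bake a new AMI from a hardened instance
aws ec2 create-image --instance-id i-0123456789abcdef0 \
  --name "hardened-base-v1" \
  --description "baseline AMI plus hardening steps"

# list the AMIs owned by this account
aws ec2 describe-images --owners self
```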
- machine learning based security (DLP, and policy checks)
- goal is to search data and look for secret items
- PII, secret keys, data backups, SSL private keys, etc.
- bucket policies and other policies
- bank account numbers
- credit card numbers
- etc
- can create your own regex
- public access
- encryption
- sharing
- top finding types:
- credentials
- policy findings and replication, encryption etc
- access macie console
- get started
- enable
- within ~5 minutes Macie will review IAM-related items, such as public access, access to encryption keys, etc.
- upload a file that will hit a definition
- by default, it is looking for regex for creds
- create new (scan) job (might take 15-20 mins)
- remember to disable macie via settings
- vulnerability scanner
- relies on an agent (the SSM agent)
- scans EC2 and ECR (container image scanning)
- query logs that live on an s3 bucket via SQL-like query
- if the logs are output from AWS services, then the schema is known
- cloudtrail logs in S3 and you want to see who has logged in within the past 10 days
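- a sketch of that query via the CLI (table name follows the documented CloudTrail/Athena setup; results bucket is hypothetical):
```sh
# run a SQL query against a CloudTrail table defined in Athena
aws athena start-query-execution \
  --query-string "SELECT useridentity.username, eventtime \
    FROM cloudtrail_logs \
    WHERE eventname = 'ConsoleLogin' AND eventtime > '2024-01-01'" \
  --result-configuration OutputLocation=s3://demo-athena-results/
```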
- apply newly released security updates, e.g. on Amazon Linux:
yum update --security
- EC2 instance: customer
- dynamoDb: AWS
- RDS: AWS (but customer enables)
- generates and captures metadata about network connections
- traffic info
- inbound
- outbound
- dashboards are available
- access VPC console
- flow logs tab
- stored in cloudwatch
- create flow log
- but must enable cloudwatch logs
- create new log group
- within the VPC flow log config, give the CloudWatch log group name
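- the same setup from the CLI (IDs and role ARN hypothetical; the role must allow delivery to CloudWatch Logs):
```sh
# enable flow logs for a VPC, delivered to a CloudWatch Logs group
aws ec2 create-flow-logs \
  --resource-type VPC \
  --resource-ids vpc-0123456789abcdef0 \
  --traffic-type ALL \
  --log-group-name demo-vpc-flow-logs \
  --deliver-logs-permission-arn arn:aws:iam::111122223333:role/flow-logs-role
```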
- ingests data from
- guard duty
- inspector
- macie
- more... including third parties (like vuln scanners, etc)!
- supports compliance standards
- CIS AWS Foundation
- PCI DSS
- access security hub console
- review summary
- you must setup AWS Config
- only records created after security hub is enabled will be shown (historic/backfill data will not be rendered)
- it will not show you inter-region data, only one region
- a group of services that allows for better visibility and control in a centralized way
- run command
- parameter store (no need for SSM agent)
- sessions manager
- patch manager
- compliance
- inventory
- managed instances
- hybrid activations
- state manager
- distributor
- more...
- SSM agent installed on EC2 instances (systemd unit: amazon-ssm-agent); you provide specific tasks to be executed locally via this agent
- systems manager console
- create an EC2 instance
- install SSM agent
- review managed instances on SSM console
- review sessions manager on SSM console
- check out run command; you can run specific commands across all instances
- check out patch compliance; it checks for updates
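- a minimal run-command sketch (instance ID hypothetical; requires the SSM agent and an instance role):
```sh
# run a shell command on a managed instance via SSM
aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --targets "Key=instanceids,Values=i-0123456789abcdef0" \
  --parameters 'commands=["uptime"]'

# review the results
aws ssm list-command-invocations --details
```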
- routes traffic from server to another enclave
- you place a VPN server in public AWS, and then an EC2 instance in a private subnet can use this VPN server to access another machine.
- symmetric encryption: a shared secret key is used for encryption/decryption
- there are some protocols that establish communication
- examples: FTP, HTTP, DNS, TCP, IP, SFTP
- you can encrypt an entire disk
- examples: BitLocker and LUKS
- HSMs are devices that provide extra layers of access control, tamper resistance, etc., for sensitive data (keys, secrets)
- cloud-based HSM
- you can manage your own keys using FIPS 140-2 level 3 validated HSM
- access CloudHSM console
- create a cluster, and designate VPC and AZ
- perform encryption and decryption
- access KMS console
- create a key and obtain the key ID
- perform encryption of a file:
aws kms encrypt --key-id {keyid} --plaintext fileb://{file}
- a base64 CiphertextBlob is returned
- perform decryption of the file:
aws kms decrypt --ciphertext-blob fileb://{ciphertextblob file}
- Plaintext is returned in base64
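- a full round trip, decoding the base64 on the way (file names arbitrary; key ID hypothetical):
```sh
# encrypt: extract the base64 CiphertextBlob and decode it to a binary file
aws kms encrypt --key-id 1234abcd-12ab-34cd-56ef-1234567890ab \
  --plaintext fileb://secret.txt \
  --query CiphertextBlob --output text | base64 --decode > secret.enc

# decrypt: the Plaintext field comes back base64-encoded as well
aws kms decrypt --ciphertext-blob fileb://secret.enc \
  --query Plaintext --output text | base64 --decode
```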
- integrates with various AWS services:
- s3, dynamodb, ebs, and more
- go to KMS console
- review AWS managed keys
- many orgs have many AWS accounts (in many regions)
- solves for challenges
- identity management per account?
- solution 1: AWS SSO
- security hardening
- control all of these items with cloudformation stacksets:
- enable AWS config
- enable aws organizations and SCP
- centralize logging (cloudtrail, guard duty)
- centralized console for security compliance and other info
- Control Tower
- set up and govern an AWS multi-account environment following best practices
- uses various services to solve for the above challenges in multiple accounts
- AWS Organizations
- CloudFormation StackSets
- AWS SSO
- Config aggregators
- Best practices
- if you create a user account via control tower, the account gets created and falls under the management and visibility of all the other items listed above
- go to control tower console
- enable and control tower will create several items
- review enrolled accounts
- it will enroll the account you are currently using
- it will create an audit account
- it will create a log archive account
- review guardrails
- guardrails are various sets of policies that govern the accounts
- SSO will be created
- go to Users and access on left pane -> Federated access
- SSO will be configured
- access the user portal URL
- login as one of your SSO users
- then you will be able to access console
- review Account factory
- network configs
- log in to SSO, then access aws service catalog and provision a product which in turn creates a user account
- general cloud design has some drawbacks because all servers and services are hosted within AWS datacenters (latency, etc.)
- to solve for this, AWS offers AWS Outposts, which allows on-prem AWS managed/"hosted" service instances.
- low latency requirements
- data residency
- financial, healthcare, defense, etc
- local data processing
- go to outposts management console
- review the catalog
- you can see EC2 capacity for example
- Create a site (location)
- federation
- provides authentication, authorization, and user management for web and mobile apps
- sign up with new creds
- can log in with an external authentication provider (FB, Twitter, Google)
- post-sign-up OTP for verification
- MFA
- Account recovery
- go to cognito console
- create a user pool
- policies are set up here
- go to app integration > app client list and create an app client
- access the app client and then access the hosted UI
- you can assign redirect
- you can then sign up.
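- a minimal user-pool sketch from the CLI (pool and client names hypothetical):
```sh
# create a user pool, then an app client for the hosted UI / sign-in flows
aws cognito-idp create-user-pool --pool-name demo-pool
aws cognito-idp create-user-pool-client \
  --user-pool-id <pool-id-from-previous-call> \
  --client-name demo-app-client
```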
- user pools: take care of the entire authentication and authorization process
- identity pools: provide federation functionality for users in the user pools.
- this manages tokens for users in the user pool.
- Cognito identity pools (aka AWS Cognito Federated Identities) allow devs to authorize users of an application to use various AWS services via a token.
- example:
- if a user takes a quiz, and the results should be stored in DynamoDB
- you do not want to hard-code access keys; the identity pool vends temporary credentials instead.
- elasticity:
- allows to scale service based on demand
- makes the workload more cost effective under "dynamic user demand"
- auto-scaling groups
- on-demand:
- launch services and servers whenever they want
- high availability: accommodate failure of any component
- least priv: grant only access to resources needed to perform a task
- regions:
- physical locations across the globe to host data
- AZ
- a combination of one or more DCs in a region
- minimum of 2 AZs needed for HA.
- edge locations
- an edge location is where end users access services.
- delivers content physically close to users
- caches responses to reduce traffic; CloudFront can be used
- improves overall latency and improves performance of sites.
- EC2 instances are launched from an AMI
- AMI can be AWS provided or user provided.
- EBS:
- is persistent storage for any EC2 instance
- Instance Store
- fast perf, but is not persistent
- Storage classes:
- General purpose: recommended for frequently accessed data
- Infrequent access: long-lived, infrequently accessed data
- Reduced Redundancy: frequently accessed, non-critical data
- Intelligent-Tiering: long-lived data with unknown access patterns
- One Zone-IA: long-lived, infrequently accessed, non-critical data
- Glacier Deep Archive: archive data that rarely needs to be accessed (retrieval time in hours)
- Glacier: archive data with retrieval time in minutes to hours; suitable for use cases where durable low-cost storage is needed
- know the difference between a bucket and an object: a bucket is like a folder; an object is the thing stored in buckets
- S3 is a durable storage system and is based on object storage
- S3 can also be used for storing RDS backups
- S3 can also be used to host simple/static web sites (which is low cost)
- defines the customer network for resources.
- implement specific controls.
- VPC peering allows resources within two VPCs to communicate.
- Hybrid cloud architecture: a combination of on-prem (or other cloud providers) and AWS
- Direct Connect is used to connect an AWS VPC to data center environments, providing a dedicated network connection between on-prem and AWS.
- Route53 and Virtual Private Gateways (VGW) can be used in hybrid designs.
- services like Classic Load Balancer and Auto Scaling are not supported in hybrid designs.
- Amazon Machine Image (AMI) is the master image from which new EC2 instances are launched
- snowball: a data transport solution that moves terabytes to petabytes of data into and out of AWS using a physical storage device designed to be secure for physical transport.
- snowmobile: transfers exabytes, like snowball at a larger scale.
- allows distribution of traffic across multiple servers (EC2)
- ELB will automatically scale depending on the traffic pattern
- type of load balancers:
- application load balancer: layer 7 traffic
- network load balancer: very fast performance, can associate static ip addr.
- classic load balancer.
- AWS is responsible for the physical security of their facilities and infrastructure (compute, database, storage, and networking)
- customer is responsible for software, data and access that sits on top of the infrastructure.
- "Awareness and training" are shared between customer and AWS.
- Customer's responsibility examples:
- encrypting data
- updating the server's OS, patching EC2
- firewall configs (SG, NACL)
- AWS's responsibility:
- anything related to hardware
- Physical Security
- ensuring AWS services are available.
- Training data center staff
- patching the OS for managed services like RDS, ElastiCache, Fargate.
- users, groups, roles, policies
- if an IAM user wants to access a specific AWS service (or instance), assign an IAM policy that grants the user access to that service.
- Access/secret keys can be used for AWS CLI operations (having been associated with an IAM user)
- an IAM role is assigned to an application/service. For example, to grant an EC2 instance access to a service, you assign a role.
- IAM Policies allow an admin to control which user can do what operations on a given resource.
- always use MFA.
- if you want to apply a set of policies to a large group of users, use an IAM group.
- Analyzes your AWS environment and provides best practice recommendations in five major categories:
- Cost Optimization
- performance
- security
- fault tolerance
- service limits
- Support level required: business and enterprise.
- enables governance, compliance, operational auditing and risk auditing in AWS account.
- records the activities in your AWS account so that admins can track which user performed which operations.
- provides visibility/access to security and compliance documents, aka "audit artifacts"
- assists with SOC, PCI DSS, HIPAA and others.
- what do you do when you suspect an AWS account is compromised?
- change the AWS root user password
- respond to any notices you received from AWS support through the AWS Support portal.
- rotate and delete all root and AWS IAM access keys
- delete potentially compromised IAM users, and change the password for all other IAM users
- Delete any resources on the account that no one can attest to.
- used to audit and monitor changes to AWS resources
- this is used for change management purposes.
- vendors who are endorsed by AWS.
- they may design or build workloads in AWS ("APN Consulting Partners")
- APN Technology Partners: do not design, build, or manage workloads
- Security Groups: acts as a virtual firewall for EC2 instance.
- Network ACL applies at a subnet level instead of the EC2 instance level.
- the customer manages both
- AWS Shield, CloudFront and WAF can protect against DDoS.
- AWS Shield is dedicated to DDoS
- instructor-led training in person is available.
- guides orgs when a customer needs special assistance and will work with APNs.
- IaC solution
- Supports almost all AWS services
- cloudFormation is free
- PaaS (for hosting code)
- Deploys code, handles:
- auto scaling
- load balancing
- capacity provisioning
- app health monitoring
- can manage and deploy:
- EC2
- RDS
- Load balancers
- security groups
- AWS Lambda can execute code without provisioning or managing servers.
- pay only for compute time you consume; no charge when code is not running.
- upload code and all underlying stuff (HA, etc.) is handled. You can select AZs, etc.
- the "Serverless Platform" does not include EC2 or EMR (elastic mapreduce).
- services which are part of the AWS serverless platform include SNS, DynamoDB and others.
- CDN, globally distributed
- assists with DDoS
- caches frequently accessed static assets, placing them within edge locations, leading to lower request latency for users.
- RDBMS: RDS
- NoSQL DB: DynamoDB
- data warehouse: Redshift
- in-memory database: Redis and Memcached (ElastiCache)
- RDS is fully managed, simplified database admin tasks.
- supports a wide variety of RDBMS engines
- MySQL
- PostgreSQL
- MariaDB
- MSSQL
- Amazon Aurora
- Oracle
- note that the listed engines could be hosted within EC2 instances if there is a desire to be "customer managed."
- aurora is one of the database services that can easily scale
- as your data grows, your cluster volume storage grows as well.
- if you need a globally redundant database, RDS Read Replicas can be used.
- auto-scaling allows AWS to automatically scale up or down depending on demand.
- can assign min and max.
- EC2 is a good example of this.
- accessing is possible:
- AWS Console (web UI)
- AWS CLI
- AWS SDK
- Access/Secret keys are used with an IAM user to authenticate with the AWS CLI to manage or access resources.
- the SDK allows access to AWS resources from code.
- primarily used to monitor CPU, disk, network utilization, etc.
- CloudWatch Logs allows users to centrally upload logs from all the servers.
- CloudWatch Logs allows real time monitoring as well as adjustable retention.
- fully managed for deployment, operation and scalability of in-memory data store and cache in cloud
- used for storing results associated with frequently accessed queries.
- de-coupling the components of a specific app.
- used for architectural design where loosely coupled components are needed
- "loosely coupled system" vs "tightly coupled"
- AWS provides a set of fully managed services that you can use to build and run serverless apps:
- Lambda: compute
- S3: storage
- DynamoDB: data store
- SQS and SNS: app integration
- the personal health dashboard provides alerts and remediation guidance when AWS is experiencing events that will impact you specifically.
- AWS publishes timely info on availability.
- the service health dashboard is used for global (overall) status.
- DNS service on AWS
- if frequent (fast) IOPS are needed, use block/file-based storage (EBS, EFS), not object-based stores (Glacier, S3)
- choose the region closest to your customers/users
- compliance
- regulations
- examples of
- regional services: ELB, EC2, Auto-scaling, RDS, DynamoDB, S3
- global service: Route53, CloudFront
- Redshift: data warehouse
- EFS: shared file storage solution that can be used across EC2 instance and on prem servers.
- Storage Gateway: used by on prem apps that need to use AWS storage solutions.
- Version Control: AWS Code Commit (git)
- AWS Rekognition: automatically detects objects within an image with a specific probability
- AWS Code Deploy and AWS OpsWorks: services that can deploy apps in on-prem servers.
- AWS Organizations services
- consolidates multiple AWS accounts, and can utilize volume discounts
- Reserved Instances can be applied across all accounts within an Org.
- on-demand instances: pay a fixed rate (time based: hourly or per-second) without any commitment.
- reserved instances: reserve capacity ahead of time with a term of 1 to 3 years. A significant discount is available.
- good for non interruptible workloads
- convertible RI (reserved instances): allows to change attributes of the RI.
- Highest discount applies to: three year, pay upfront, standard RI
- Spot instances: good for apps with flexible start and end times (interruptible workloads).
- up to 90% discount.
- dedicated hosts: physical EC2 server dedicated for the use of a single customer
- generally used when licenses are server bound
- CloudFormation
- IAM
- Auto-scaling
- elasticbeanstalk
- AWS VPC
- Consolidated Billing
- AWS Forums, whitepapers, docs, blogs
- Enterprise support has a TAM
- Business support: the minimum support plan with chat/phone support
- one hour target response time: Business and enterprise
- Infrastructure Event Management: available to Enterprise, but additional fee for business
- used for specific service cost projections
- can be used to forecast costs of workloads
- used for comparing costs of running on-prem vs AWS
- generate executive reports
- be clear on what costs are included in TCO
- the Shared Responsibility Model of AWS can reduce the overall TCO.
- use RIs when calculating pricing.
- AV licensing costs are not included, but data center security costs can be part.
- you can tag resources, then you can enable them in cost reports
- allows customers to visualize costs over time
- provides out-of-the-box reporting associated with RIs and the benefits of switching to RIs
- Amazon Partner network (APN): architecting, migrating and workload management
- Amazon Technology Partner (ATN): developing products within AWS.
- software listing of third parties that are available.
- allows setting custom budgets, and alerts when the projected costs will exceed the budget amount.
- you can contact the AWS Abuse team by filing a report
- classified into four types
- backup and restore
- pilot light
- warm standby
- multi-site: best architecture, cost is highest
- allows the analysis of S3 stored logs using SQL like query language
- vuln scanner that runs specific assessment rules and provides the results
- scans and locates PII data, DB backups, keys, etc.
- as per regex
- the five pillars
- operational excellence
- security
- reliability
- performance efficiency
- cost optimization
- hybrid storage service where an on prem application can most easily use AWS storage options.
- to associate a specific set of perms with multiple IAM users, you can add the perms to a group, then associate the IAM users with that group.
- distributes resources globally so that local users access locally served resources
- cloudfront uses edge locations and caching at these locations
- is a Content Delivery Network
- AWS EC2
- lambda ("managed compute service")
- auto-scaling
- elastic beanstalk
- AWS Direct Connect
- VPN
- capital expenditure is the money an org spends to buy, maintain, or improve fixed assets (data center, servers, HDDs, firewalls, etc.)
- operational expense is the ongoing cost of running a system
- cloud computing allows customers to fully trade their capex with opex
- ways to identify costs associated with each department
- tag the resource and use the tags to analyze the cost
- use multiple AWS accounts for each dept.
- remember: upfront payment options provide the largest discount for EC2 reserved instances.
- Directory Service can be used to enable SSO to AWS console.
- Asset Management is much easier in cloud.
- RDS can also be deployed in multiple AZs, which provides HA, protecting from failures.
- increasing security for AWS accounts: use MFA
- self-managed database == EC2
- AWS Shield == DDoS protection, detection and mitigation.
- IAM role is an entity that defines a set of perms for use with an AWS resource
- elastic beanstalk: capacity provisioning, app health monitoring, load balancing as well as auto-scaling.
- Network ACL is part of VPC
- you can block an attacker at Network ACL or WAF.
- detecting bad ACLs == Trusted Advisor service
- KMS == EBS volume encryption
- AWS Organizations == managing accounts
- patching is part of the shared responsibility
- amazon polly: speak synthesis service
- patching of RDS == AWS, but the customer must enable it
- non payment
- aws support can assist
- if billing is incorrect, use AWS support
- S3 Block Public Access
- you can block public access to specific buckets
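- a one-liner sketch (bucket name hypothetical):
```sh
# turn on all four public-access blocks for a bucket
aws s3api put-public-access-block --bucket demo-bucket \
  --public-access-block-configuration \
  BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
```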
- push from various services to CloudWatch
- once in cloudwatch, you can create alerts
- for example, when a root user has logged in
- transit gateway: interconnects VPCs and on-prem networks.
- it allows the building of apps spanning many VPCs.
- generate and manage keys for enc/dec of data
- AWS cloud pricing reduces as there are more customers
- scaling can automatically occur: EC2, dynamoDB, aurora
- trusted advisor monitors limits
- the AWS support API is available for customers with business and enterprise support plans
- aws compute services: ec2, lambda, batch, lightsail
- policies can be managed across multiple accounts via AWS orgs
- operational excellence
- security:
- AWS Config is used for auditing and evaluating changes
- reliability
- performance efficiency
- Cost optimization
- service that improves availability and perf for local and global users
- captures incoming and outgoing VPC packet info
- snowball and the database migration service (DMS) can be used to move data from on-prem to AWS
- cost explorer == forecasting
- aws has a set of solutions to help with cost management and optimization
- creating budgets and notification
- config management == OpsWorks
- container service == Elastic Container Service
- automate and secure multi-account aws environments == AWS Control Tower
- per-core software license == dedicated hosts
- Amazon Connect for phones
- Amazon QuickSight for BI and dashboards
- internet gateway allows traffic from the internet to reach a VPC
- AWS Glue: ETL
- Amazon EMR: managed hadoop
- Amazon Neptune: graph database, processes large amounts of data for graph queries