(I) Overview of AWS:
- Cloud Computing: It's the on-demand delivery of IT resources and applications via the Internet with pay-as-you-go pricing. Cloud computing provides a simple way to access servers, storage, databases, and a broad set of application services over the Internet. Cloud computing providers such as AWS own and maintain the network-connected hardware required for these application services, while you provision and use what you need via a web application.
- Types of Cloud Computing:
- Infrastructure as a Service (IaaS): Infrastructure as a Service, sometimes abbreviated as IaaS, contains the basic building blocks for cloud IT and typically provides access to computers (virtual or on dedicated hardware), data storage space, and networking features. e.g. Amazon EC2, Windows Azure, Google Compute Engine, Rackspace.
- Platform as a Service (PaaS): Platform as a Service removes the need for organizations to manage the underlying infrastructure (usually hardware and operating systems) and allows you to focus on the deployment and management of your applications. e.g. AWS RDS, Elastic Beanstalk, Windows Azure, Google App Engine.
- Software as a Service (SaaS): Software as a Service provides you with a completed product that is run and managed by the service provider. In most cases, people referring to Software as a Service are referring to end-user applications. e.g. Gmail, Microsoft Office 365.
- Cloud Deployment Models:
- Cloud: A cloud-based application is fully deployed in the cloud and all parts of the application run in the cloud.
- Hybrid: A hybrid deployment is a way to connect infrastructure and applications between cloud-based resources and existing on-premises resources.
- On-premises (private cloud): Deploying resources on-premises, using virtualization and resource management tools, is sometimes called “private cloud”.
- Advantages of Cloud Computing:
- Trade Capital Expenses for variable expenses.
- Benefit from massive economies of scale.
- Stop guessing about capacity.
- Increase speed and agility.
- Stop spending money running and maintaining data centers.
- Go global in minutes.
- Security and Compliance:
- State of the art electronic surveillance and multi factor access control systems.
- Staffed 24x7 by security guards
- Access is authorized on a “least privilege basis”
- SOC 1/SSAE 16/ISAE 3402 (formerly SAS 70 Type II), SOC 2, SOC3
- FISMA, DIACAP, FedRAMP, PCI DSS Level 1, ISO 27001, ISO 9001, ITAR, FIPS 140-2
- HIPAA, Cloud Security Alliance (CSA), Motion Picture Association of America (MPAA)
(II) Overview of Security Process:
- AWS follows a Shared Security Responsibility Model, i.e. Amazon is responsible for securing the underlying infrastructure that supports the cloud, and you are responsible for anything you put in the cloud or connect to the cloud.
- AWS Security Responsibilities:
- Amazon Web Services is responsible for protecting the compute, storage, database, and networking services and the data center facilities (i.e. Regions, Availability Zones, Edge Locations) that run all of the services in the AWS cloud.
- AWS is responsible for security configuration of its managed services such as Amazon DynamoDB, RDS, Redshift, Elastic MapReduce, WorkSpaces.
- Customer Security Responsibilities:
- Customer is responsible for Customer Data, Platform, Applications, IAM, Operating System, Network & Firewall Configuration, Client and Server Side Data Encryption, Network Traffic Protection.
- IaaS services such as Amazon VPC, EC2, and S3 are completely under your control and require you to perform all of the necessary security configuration and management tasks.
- For managed services, AWS is responsible for patching, antivirus, etc.; however, you are responsible for account management and user access. It is recommended that MFA be implemented, that you connect to these services using SSL/TLS, and that API and user activity be logged using AWS CloudTrail.
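- As a hedged illustration of the CloudTrail recommendation above, the following sketch (Python with boto3) creates a trail that delivers API activity logs to an S3 bucket; the bucket name and region are placeholders, and the bucket is assumed to already have a bucket policy that allows CloudTrail to write to it.

```python
import boto3

# Assumption: "my-cloudtrail-logs" already exists and its bucket policy
# allows CloudTrail to deliver log files into it.
cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# Create a trail that records API and user activity across all regions.
cloudtrail.create_trail(
    Name="account-activity-trail",
    S3BucketName="my-cloudtrail-logs",
    IsMultiRegionTrail=True,
)

# Creating a trail does not start recording; logging must be started explicitly.
cloudtrail.start_logging(Name="account-activity-trail")
```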
- Storage Decommissioning:
- When a storage device has reached the end of its useful life, AWS procedures include a decommissioning process that is designed to prevent customer data from being exposed to unauthorized individuals.
- AWS uses the techniques detailed in DoD 5220.22-M (National Industrial Security Program Operational Manual) or NIST 800-88 (Guidelines for Media Sanitization) to destroy data as part of the decommissioning process.
- All decommissioned magnetic storage devices are degaussed and physically destroyed in accordance with industry-standard practices.
- Network Security:
- Transmission Protection: You can connect to AWS services using HTTP and HTTPS. AWS also offers Amazon VPC which provides a private subnet within AWS cloud, and the ability to use an IPsec VPN connection between Amazon VPC and your on-premises data center.
- Amazon Corporate Segregation: Logically, the AWS Production network is segregated from the Amazon Corporate network by means of a complex set of network security segregation devices.
- Network Monitoring and Protection:
- The AWS network provides protection against:
- DDoS
- Man in the middle attacks (MITM)
- Port Scanning
- Packet Sniffing by other tenants
- IP Spoofing: AWS-controlled, host-based firewall infrastructure will not permit an instance to send traffic with a source IP or MAC address other than its own.
- Unauthorized port scans by Amazon EC2 customers are a violation of the AWS Acceptable Use Policy. You may request permission to conduct vulnerability scans as required to meet your specific compliance requirements.
- These scans must be limited to your own instances and must not violate the AWS Acceptable Use Policy. You must request vulnerability scan in advance.
- AWS Credentials:
- Passwords: Used for AWS root account or IAM user account login to the AWS Management Console. AWS passwords must be 6-128 characters long.
- Multi-Factor Authentication (MFA): A six-digit, single-use code that is required in addition to your password to log in to your AWS root account or IAM user account.
- Access Keys: Used to digitally sign programmatic requests to AWS APIs (via the AWS SDK, CLI, or REST/Query APIs). An access key consists of an Access Key ID and a Secret Access Key (see the sketch after this list).
- Key Pairs: Used for SSH login to EC2 instances and for CloudFront signed URLs. A key pair is required to connect to an EC2 instance launched from a public AMI. They are 1024-bit SSH-2 RSA keys. AWS can automatically generate a key pair when you launch an EC2 instance, or you can upload your own before launching the instance.
- X.509 Certificates: Used for digitally signed SOAP requests to AWS APIs (for S3) and as SSL server certificates for HTTPS. You can have AWS create an X.509 certificate and private key, or you can upload your own certificate using the Security Credentials page.
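- A brief sketch of how access keys are used programmatically, assuming Python with boto3; the key values shown are placeholders, and in practice IAM roles or environment variables are preferable to hard-coding credentials.

```python
import boto3

# Placeholder credentials: an Access Key ID and Secret Access Key pair.
# boto3 signs every request with these keys (Signature Version 4) on your behalf.
session = boto3.Session(
    aws_access_key_id="AKIAEXAMPLEEXAMPLE",          # placeholder
    aws_secret_access_key="wJalrEXAMPLEKEYEXAMPLE",  # placeholder
    region_name="us-east-1",
)

s3 = session.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```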
- AWS Trusted Advisor:
- Trusted Advisor inspects your AWS environment and makes recommendations when opportunities may exist to save money, improve system performance or close security gaps.
- It provides alerts on several of the most common security misconfigurations that can occur, including:
- Leaving certain ports open that make you vulnerable to hacking and unauthorized access
- Neglecting to create IAM accounts for your internal users
- Allowing public access to Amazon S3 buckets
- Not turning on user activity logging (AWS CloudTrail)
- Not using MFA on your root AWS account.
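- The following sketch shows the kind of check Trusted Advisor automates for the "open ports" item above: it scans security groups for rules that allow inbound traffic from anywhere (0.0.0.0/0), assuming Python with boto3 and default credentials.

```python
import boto3

ec2 = boto3.client("ec2")

# Flag any security group rule that allows inbound traffic from the whole Internet.
for group in ec2.describe_security_groups()["SecurityGroups"]:
    for permission in group["IpPermissions"]:
        for ip_range in permission.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                print(
                    f"{group['GroupId']} exposes ports "
                    f"{permission.get('FromPort', 'all')}-{permission.get('ToPort', 'all')} "
                    "to 0.0.0.0/0"
                )
```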
- Instance Isolation:
- Different instances running on the same physical machines are isolated from each other via the Xen hypervisor. In addition, the AWS firewall resides within the hypervisor layer, between the physical network interface and the instance’s virtual interface.
- All packets must pass through this layer, so an instance's neighbors have no more access to that instance than any other host on the Internet and can be treated as if they are on separate physical hosts. The physical RAM is separated using similar mechanisms.
- Customer instances have no access to raw disk devices, but instead are presented with virtualized disks. The AWS proprietary disk virtualization layer automatically resets every block of storage used by the customers, so that one customer’s data is never unintentionally exposed to another.
- In addition, memory allocated to guests is scrubbed (set to zero) by the hypervisor when it is no longer allocated to a guest. The memory is not returned to the pool of free memory available for new allocations until the scrubbing is complete.
- Guest Operating System: Virtual instances are completely controlled by the customer. You have full root or administrative access over accounts, services and applications. AWS doesn’t have any access rights to your instances or the guest OS.
- Encryption of sensitive data is generally a good security practice, and AWS provides the ability to encrypt EBS volumes and their snapshots with AES-256.
- In order to do this efficiently and with low latency, the EBS encryption feature is only available on EC2's more powerful instance types (e.g. M3, C3, R3, G2).
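- A minimal sketch of creating an AES-256 encrypted EBS volume with boto3; the Availability Zone and size are placeholders, and omitting KmsKeyId means the account's default EBS KMS key is used.

```python
import boto3

ec2 = boto3.client("ec2")

# Create a 100 GiB encrypted volume; encryption uses the default EBS KMS key
# unless a specific KmsKeyId is supplied.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",  # placeholder AZ
    Size=100,
    VolumeType="gp2",
    Encrypted=True,
)
print(volume["VolumeId"], volume["Encrypted"])
```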
- Firewall: Amazon EC2 provides a complete firewall solution. This mandatory inbound firewall is configured in a default deny-all mode, and Amazon EC2 customers must explicitly open the ports needed to allow inbound traffic. All ingress traffic is blocked and egress traffic is allowed by default.
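- Because inbound traffic is denied by default, each required port has to be opened explicitly. A sketch of opening HTTPS (443) on a security group with boto3; the group ID is a placeholder.

```python
import boto3

ec2 = boto3.client("ec2")

# Explicitly allow inbound HTTPS; everything else stays blocked by default.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS from anywhere"}],
    }],
)
```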
- Elastic Load Balancing: SSL termination on the load balancer is supported. It allows you to identify the originating IP address of a client connecting to your servers, whether you are using HTTPS or TCP load balancing.
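- With HTTP(S) load balancing, the original client IP is typically carried in the X-Forwarded-For header (for TCP listeners, Proxy Protocol serves a similar purpose). A small, framework-agnostic sketch of reading it:

```python
def client_ip(headers, remote_addr):
    """Return the originating client IP behind an HTTP(S) load balancer.

    The left-most entry of X-Forwarded-For is the original client;
    fall back to the socket peer address when the header is absent.
    """
    forwarded = headers.get("X-Forwarded-For", "")
    return forwarded.split(",")[0].strip() if forwarded else remote_addr
```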
- Direct Connect: Bypass Internet service providers in your network path. You can procure rack space within the facility housing the AWS Direct Connect location and deploy your equipment nearby. Once deployed, you can connect this equipment to AWS Direct Connect using a cross-connect.
- Using industry standard 802.1q VLANs, the dedicated connection can be partitioned into multiple virtual interfaces. This allows you to use the same connection to access public resources such as objects stored in Amazon S3 using public IP address space, and private resources such as EC2 instances running within a VPC using private IP space, while maintaining network separation between the public and private environment.
(III) AWS Risk and Compliance:
- Risk: AWS management has developed a strategic business plan which includes risk identification and the implementation of controls to mitigate or manage risks. AWS management re-evaluates the strategic business plan at least biannually.
- This process requires management to identify risks within its areas of responsibility and to implement appropriate measures designed to address those risks.
- AWS Security regularly scans all Internet-facing service endpoint IP addresses for vulnerabilities (these scans do not include customer instances). AWS Security notifies the appropriate parties to remediate any identified vulnerabilities. In addition, external vulnerability threat assessments are performed regularly by independent security firms.
- Findings and recommendations resulting from these assessments are categorized and delivered to AWS leadership. These scans assess the health and viability of the underlying AWS infrastructure and are not meant to replace the customer's own vulnerability scans required to meet their specific compliance requirements.
- Customers can request permission to conduct scans of their cloud infrastructure as long as they are limited to the customer’s instances and don’t violate the AWS Acceptable Use Policy.
(IV) Storage Options in the AWS cloud:
(V) Architecting for the AWS Cloud: Best Practices:
- Business Benefits of Cloud:
- Almost zero upfront infrastructure investment
- Just-in-time Infrastructure
- More efficient resource utilization
- Usage-based costing
- Reduced time to market
- Technical Benefits of Cloud:
- Automation – Scriptable infrastructure
- Auto-scaling
- Proactive scaling
- More Efficient Development lifecycle
- Improved Testability
- Disaster Recovery and Business Continuity
- Overflow the traffic to the cloud
- Design For Failure:
- Rule of thumb: Be a pessimist when designing architectures in the cloud; assume things will fail. In other words, always design, implement, and deploy for automated recovery from failure.
- In particular, assume that your hardware or software will fail, outages will occur, some disaster will strike, requests will increase.
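- One concrete way to design for failure is to retry transient failures with exponential backoff and jitter. The sketch below wraps a hypothetical call_downstream_service() function and is not tied to any specific AWS API (boto3 also ships built-in retry modes via botocore's Config).

```python
import random
import time

def with_backoff(operation, max_attempts=5, base_delay=0.5):
    """Retry a flaky operation with exponential backoff and jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts:
                raise  # give up after the final attempt
            # Sleep 0.5s, 1s, 2s, ... plus random jitter to avoid thundering herds.
            time.sleep(base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.1))

# Usage with a hypothetical downstream call:
# result = with_backoff(lambda: call_downstream_service(request))
```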
- Decouple Your Components:
- The key is to build components that don’t have tight dependencies on each other, so that if one component were to die, sleep or remain busy for some reason, the other components in the system are built so as to continue to work as if no failure is happening.
- In essence, loose coupling isolates the various layers and components of your application so that each component interacts asynchronously with the others and treats them as black boxes.
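- A common way to achieve this loose coupling on AWS is to place a queue between components; the sketch below uses Amazon SQS via boto3, with a placeholder queue URL.

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # placeholder

# Producer: publish work without knowing who consumes it or when.
sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps({"order_id": 42}))

# Consumer: poll independently; if this component is down, messages simply wait.
response = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
for message in response.get("Messages", []):
    payload = json.loads(message["Body"])  # hypothetical processing happens here
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])
```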
- Implement Elasticity:
- The cloud brings a new concept of elasticity to your applications. Elasticity can be implemented in three ways:
- Proactive Cyclic Scaling: Periodic scaling that occurs at a fixed interval (daily, weekly, monthly, quarterly)
- Proactive Event-based Scaling: Scaling just when you are expecting a big surge of traffic requests due to a scheduled business event (new product launch, marketing campaigns, black friday sale)
- Auto-scaling based on demand: By using monitoring service, your system can send triggers to take appropriate actions so that it scales up or down based on metrics (utilization of servers or network i/o for instance)
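- A sketch of the first and third approaches above with boto3, assuming an existing Auto Scaling group named "web-asg": a recurring scheduled action for proactive cyclic scaling, and a target-tracking policy for demand-based scaling.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Proactive cyclic scaling: add capacity every weekday morning (cron, in UTC).
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",  # placeholder group
    ScheduledActionName="weekday-morning-scale-up",
    Recurrence="0 8 * * 1-5",
    MinSize=4, MaxSize=12, DesiredCapacity=8,
)

# Demand-based auto-scaling: keep average CPU utilization near 50%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,
    },
)
```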
(VI) AWS Well-Architected Framework:
- Well-Architected Framework is a set of questions that you can use to evaluate how well your architecture is aligned to AWS best practices and it consists of 5 pillars: Security, Reliability, Performance Efficiency, Cost Optimization and Operational Excellence.
- General Design Principles:
- Stop guessing your capacity needs.
- Test systems at production scale.
- Automate to make architectural experimentation easier.
- Allow for evolutionary architectures.
- Data-Driven architectures
- Improve through game days (such as black Friday)
- WAF Security Pillar:
- Design Principles:
- Apply Security at all layers.
- Enable traceability
- Automate responses to security events.
- Focus on securing your system
- Automate security best practices
- Definition: Security in the cloud consists of 4 areas:
- Identity and Access management: It ensures that only authorized and authenticated users are able to access your resources, and only in a manner that’s intended. It includes:
- Protecting AWS Credentials: AWS Security Token Service(STS), IAM instance profiles for EC2 instances.
- Fine-Grained Authorization: AWS Organizations.
- How are you protecting access to and use of the AWS root account credentials?
- How are you defining roles and responsibilities of system users to control human access to the AWS Management Console and APIs?
- How are you limiting automated access (such as apps, scripts or third-party tools) to AWS resources?
- How are you managing keys and credentials?
- Key AWS Services: IAM, MFA
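- A sketch of fine-grained authorization with boto3: creating a least-privilege IAM policy that only allows reading a single (placeholder) S3 bucket, which could then be attached to a role or group.

```python
import json
import boto3

iam = boto3.client("iam")

# Least privilege: read-only access to one named bucket (placeholder ARNs).
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-reports-bucket",
            "arn:aws:s3:::example-reports-bucket/*",
        ],
    }],
}

iam.create_policy(
    PolicyName="reports-read-only",
    PolicyDocument=json.dumps(policy_document),
)
```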
- Detective controls: It can be used to detect or identify a security breach.
- Capture and Analyze Logs: AWS Config, Elasticsearch Service, CloudWatch Logs, EMR, S3 and Glacier, Athena
- Integrate Auditing Controls with Notification and Workflow: Config Rules, CloudWatch API, AWS SDKs, AWS Inspector.
- How are you capturing and analyzing AWS logs?
- Key AWS Services: AWS CloudTrail, CloudWatch, AWS Config, S3, Glacier
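- As one example of capturing and analyzing logs, the sketch below queries recent console sign-in events from CloudTrail with boto3; the event name filter is only an illustration.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Look up recent console sign-ins recorded by CloudTrail.
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
    MaxResults=50,
)

for event in events["Events"]:
    print(event["EventTime"], event.get("Username", "unknown"), event["EventName"])
```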
- Infrastructure protection: Outside of the cloud, this is how you protect your data center: RFID controls, security, lockable cabinets, CCTV, etc. Within the AWS cloud, all of this is handled by Amazon, so as a customer your infrastructure protection exists at the VPC level only.
- Protecting Network and Host-Level Boundaries:
- System Security Configuration and Maintenance:
- Enforcing Service-Level Protection:
- How are you enforcing network and host-level boundary protection?
- How are you enforcing AWS service level protection?
- How are you protecting the integrity of the operating systems on your EC2 instances?
- Key AWS Services: VPC
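- A sketch of establishing a network boundary at the VPC level with boto3: a new VPC with a subnet that stays private because no Internet gateway or public route is attached (CIDR ranges and AZ are placeholders).

```python
import boto3

ec2 = boto3.client("ec2")

# Create an isolated network boundary.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]

# Without an Internet gateway and a public route, this subnet remains private.
subnet = ec2.create_subnet(
    VpcId=vpc["VpcId"],
    CidrBlock="10.0.1.0/24",
    AvailabilityZone="us-east-1a",  # placeholder AZ
)
print(vpc["VpcId"], subnet["Subnet"]["SubnetId"])
```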
- Data Protection:
- Data Classification:
- Encryption/Tokenization:
- Protecting Data at Rest:
- Protecting Data in Transit:
- Data Backup/Replication/Recovery:
- Basic data classification should be in place.
- Implement a least privilege access system.
- Encrypt everything where possible i.e at rest or in transit.
- AWS customers maintain full control over their data.
- AWS makes it easier for you to encrypt your data and manage keys.
- Detailed logging is available that contains important content, such as file access and changes.
- AWS has designed storage systems for exceptional resiliency.
- Versioning, which can be part of a larger data lifecycle-management process, can protect against accidental overwrites, deletes and similar harms.
- AWS never initiates the movement of data between regions; customers have to explicitly enable a feature or leverage a service that provides that functionality.
- How are you encrypting and protecting your data at rest and in transit?
- Key AWS Services: ELB, EBS, S3 & RDS.
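- A sketch of protecting data at rest with boto3: uploading an object to a placeholder bucket with server-side encryption under a KMS-managed key (the HTTPS request itself also protects the data in transit).

```python
import boto3

s3 = boto3.client("s3")

# Server-side encryption with the account's default KMS key for S3;
# pass SSEKMSKeyId to use a specific customer-managed key instead.
s3.put_object(
    Bucket="example-sensitive-data",  # placeholder bucket
    Key="reports/2017-q4.csv",
    Body=b"col1,col2\n1,2\n",
    ServerSideEncryption="aws:kms",
)
```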
- Incident response: Your organisation should implement a plan to respond to and mitigate security incidents.
- Clean Room: By using tags to properly describe your AWS resources, incident responders can quickly determine the potential impact of an incident.
- Key AWS Services: IAM, AWS CloudFormation, EC2 APIs, AWS Step Functions.
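- A sketch of the tagging idea with boto3: tag resources up front, then filter by tag during an incident to scope the potential impact (instance ID and tag values are placeholders).

```python
import boto3

ec2 = boto3.client("ec2")

# Tag resources ahead of time so responders can scope an incident quickly.
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],  # placeholder instance
    Tags=[{"Key": "Owner", "Value": "payments-team"},
          {"Key": "Environment", "Value": "production"}],
)

# During an incident: list everything owned by the affected team.
reservations = ec2.describe_instances(
    Filters=[{"Name": "tag:Owner", "Values": ["payments-team"]}]
)["Reservations"]
for reservation in reservations:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])
```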
- WAF Reliability Pillar: It covers the ability of a system to recover from service or infrastructure outages/disruptions, as well as the ability to dynamically acquire computing resources to meet demand (see the Multi-AZ sketch after this outline).
- Design Principles:
- Test Recovery Procedures.
- Automatically recover from failure
- Scale horizontally to increase aggregate system availability
- Stop guessing capacity
- Definition:
- Foundation – Networking:
- Application Design for High Availability:
- Understanding Availability Needs
- Application Design for Availability
- Operational Considerations for Availability
- Key AWS Services: AWS CloudTrail
- Example Implementations for Availability Goals:
- Dependency Selection
- Single Region Scenarios
- Multi-Region Scenarios
- Key AWS Services: AWS CloudFormation
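- The Multi-AZ sketch referenced above, illustrating the "automatically recover from failure" principle: provisioning an RDS instance with Multi-AZ enabled via boto3 so a standby in another Availability Zone takes over on failure (identifiers and credentials are placeholders).

```python
import boto3

rds = boto3.client("rds")

# Multi-AZ keeps a synchronous standby in a second Availability Zone and
# fails over to it automatically if the primary becomes unavailable.
rds.create_db_instance(
    DBInstanceIdentifier="orders-db",          # placeholder
    DBInstanceClass="db.m5.large",
    Engine="mysql",
    MasterUsername="admin",                    # placeholder
    MasterUserPassword="use-a-secret-store",   # placeholder; never hard-code real secrets
    AllocatedStorage=100,
    MultiAZ=True,
)
```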
- WAF Performance Efficiency Pillar: It focuses on how to use computing resources efficiently to meet your requirements and how to maintain that efficiency as demand changes and technology evolves.
- Design Principles:
- Democratize advanced technologies
- Go global in minutes
- Use serverless architectures
- Experiment more often
- Definitions:
- Selection:
- Compute: Autoscaling
- Storage: EBS, S3, Glacier
- Database: RDS, DynamoDB, Redshift
- Network:
- Review:
- Benchmarking
- Load Testing
- Monitoring:
- Active and Passive
- Phases
- Trade-Offs:
- Caching: ElastiCache, CloudFront, Direct Connect, RDS Read Replicas (see the cache-aside sketch after this outline)
- Partitioning or Sharding
- Compression
- Buffering
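- The cache-aside sketch referenced above, assuming an ElastiCache for Redis endpoint (placeholder hostname) and the third-party redis-py client; it trades a little staleness (the TTL) for fewer reads against the database.

```python
import redis  # third-party redis-py client

# Placeholder ElastiCache for Redis endpoint.
cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)

def get_product(product_id, ttl_seconds=300):
    """Cache-aside: serve from Redis when possible, else fall back to the database."""
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return cached.decode("utf-8")

    value = load_product_from_database(product_id)  # hypothetical RDS query
    cache.setex(key, ttl_seconds, value)            # expire to bound staleness
    return value
```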
- WAF Cost Optimization Pillar: You should use cost optimization pillar to reduce your costs to a minimum and use those savings for other parts of your business. A cost-optimized system allows you to pay the lowest price possible while still achieving your business objectives.
- Design Principles:
- Transparently attribute expenditure
- Use managed services to reduce cost of ownership
- Trade capital expense for operating expense
- Benefit from economies of scale
- Stop spending money on data center operations
- Definitions:
- Cost-Effective Resources: EC2 (reserved instances), AWS Trusted Advisor
- Appropriately Provisioned
- Right Sizing
- Purchasing Options
- Geographic Selection
- Managed Services
- Matching Supply and Demand: Autoscaling
- Demand-Based
- Buffer-Based
- Time-Based
- Expenditure Awareness: CloudWatch Alarms, SNS (see the billing-alarm sketch after this outline)
- Stakeholders
- Visibility and Controls
- Cost Attribution
- Tagging
- Entity Lifecycle Tracking
- Optimizing Over Time: AWS Blog, AWS Trusted Advisor
- Measure, Monitor, and Improve
- Staying Ever Green
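- The billing-alarm sketch referenced above, using boto3: a CloudWatch alarm on estimated charges that notifies an SNS topic (placeholder ARN). Billing metrics are published only in us-east-1 and must first be enabled in the billing preferences.

```python
import boto3

# Billing metrics are published only in us-east-1.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="estimated-charges-over-100-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,              # 6 hours
    EvaluationPeriods=1,
    Threshold=100.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # placeholder topic
)
```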
- WAF Operational Excellence Pillar: It includes the operational practices and procedures used to manage production workloads. In addition, it covers how planned changes are executed, as well as responses to unexpected operational events.
- Design Principles:
- Perform operations with code (see the CloudFormation sketch after this outline)
- Align operations processes to business objectives
- Make regular, small, incremental changes
- Test for responses to unexpected events
- Learn from operational events and failures
- Keep operations procedures current
- Definitions:
- Prepare
- Operational Priorities
- Design for Operations
- Operational Readiness
- Operate
- Understanding Operational Health
- Responding to Events
- Evolve
- Learning from Experience
- Share Learnings
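- The CloudFormation sketch referenced above, illustrating "perform operations with code": a deliberately minimal template (here just an S3 bucket) is deployed as a stack with boto3, so the change is versionable and repeatable. Subsequent changes would go through stack updates or change sets rather than ad-hoc console edits.

```python
import json
import boto3

cloudformation = boto3.client("cloudformation")

# A deliberately tiny template: one S3 bucket managed as code.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {"AppBucket": {"Type": "AWS::S3::Bucket"}},
}

cloudformation.create_stack(
    StackName="ops-as-code-demo",  # placeholder stack name
    TemplateBody=json.dumps(template),
)
```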