Azure Virtual Networks

  • Virtual Network: Azure Virtual Network is a fundamental component that acts as an organization’s network in Azure. Organizations can use virtual networks to connect resources. Virtual networks in Microsoft Azure are network overlays that you can use to configure and control connectivity between Azure resources such as VMs and load balancers.
    • vNet: A virtual network is a logical isolation of the Azure cloud dedicated to your subscription.
      • You can divide a VNet into multiple subnets for organization and security; a VNet contains at least one subnet.
      • You can create up to 50 virtual networks per subscription per region, although you can increase this limit to 500 by contacting Azure support.
      • If you connect to a VNet with a VPN or ExpressRoute, you must ensure that the address space is unique and does not overlap any of the ranges that are already in use on-premises or in other VNets.
      • Always plan to use an address space that is not already in use in your organization, either on-premises or in other VNets.
      • Even if you plan for a VNet to be cloud-only, you may want to make a VPN connection to it later. If there is any overlap in address spaces at that point, you will have to reconfigure or recreate the VNet.
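The overlap check above can be sketched with Python's `ipaddress` module (the network ranges here are illustrative):

```python
# Sketch: pre-flight check that a planned VNet address space does not
# overlap on-premises or existing VNet ranges (ranges are illustrative).
import ipaddress

def overlaps_existing(planned: str, in_use: list[str]) -> list[str]:
    """Return the in-use ranges that overlap the planned VNet space."""
    planned_net = ipaddress.ip_network(planned)
    return [r for r in in_use if ipaddress.ip_network(r).overlaps(planned_net)]

# 10.1.0.0/16 collides with an on-premises 10.0.0.0/8 supernet:
print(overlaps_existing("10.1.0.0/16", ["10.0.0.0/8", "192.168.0.0/24"]))
# → ['10.0.0.0/8']
# 172.16.0.0/16 is clear of both:
print(overlaps_existing("172.16.0.0/16", ["10.0.0.0/8", "192.168.0.0/24"]))
# → []
```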
      • Virtual networks (VNets) can contain both public and private (RFC 1918 address blocks) IP address spaces. When you add a public IP address range, it will be treated as part of the private VNet IP address space that is only reachable within the VNet, interconnected VNets, and from your on-premises location.
      • You add a public IP address range the same way you would add a private IP address range: either by using a netcfg file or by adding the configuration in the Azure portal.
      • dhcpOptions: Object that contains a single required property, dnsServers.
      • dnsServers: Array of DNS servers used by the VNet. If no server is specified, Azure internal name resolution is used.
    • Subnets: A subnet is a child resource of a vNet, and helps define segments of address spaces within a CIDR block range, using IP address prefixes.
      • You can further divide your network by using subnets for logical and security isolation of Azure resources. Each subnet contains a range of IP addresses that fall within the virtual network address space.
      • Can be one or more subnets in a single vNet.
      • The smallest subnet you can specify is /29.
      • The first four IP addresses and the last IP address are not available for use within a subnet; Azure reserves them.
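Since Azure reserves the first four addresses and the last address of every subnet, the usable host count can be computed with `ipaddress` (a minimal sketch):

```python
# Sketch: Azure reserves the first four addresses and the last address of
# every subnet, so a /29 (8 addresses) leaves only 3 usable hosts.
import ipaddress

def usable_hosts(cidr: str) -> list[str]:
    addrs = list(ipaddress.ip_network(cidr))
    return [str(a) for a in addrs[4:-1]]  # skip first four and the last

print(usable_hosts("10.0.0.0/29"))  # → ['10.0.0.4', '10.0.0.5', '10.0.0.6']
print(len(usable_hosts("10.0.0.0/24")))  # → 251 (256 minus 5 reserved)
```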
      • NICs can be added to subnets, and connected to VMs, providing connectivity for various workloads.
      • Routing between subnets is automatic, and ICMP is allowed between subnets that belong to the same VNet.
      • You can also configure route tables and NSGs to a subnet.
      • Keep static and dynamic subnets separate for ease of management.
      • VMs and PaaS role instances deployed to subnets (same or different) within a VNet can communicate with each other without any extra configuration.
      • location: Azure location or region where subnet is defined.
      • addressPrefix: Single address prefix in CIDR notation that makes up the subnet.
      • networkSecurityGroup: NSG applied to the subnet.
      • routeTable: Route table applied to subnet.
      • ipConfigurations: Collection of IP configuration objects used by NICs which are connected to the subnet.
      • In IaaSv2, you add the subnet to a NIC, rather than adding the VM to a subnet as you do for IaaSv1.
      • You can create user-defined routes that specify your virtual appliance as the next hop for packets flowing to a specific subnet, and enable IP forwarding on the VM running as the virtual appliance.
    • Network Security Group (NSG): List of rules that can allow or deny traffic to Azure resources.
      • Can be associated with either Subnets or Network Interfaces.
      • You can use network security groups to provide network isolation for Azure resources by defining rules that can allow or deny specific traffic to individual VMs or subnets.
      • Source: Any|IP Addresses|Service Tag
      • Source Port: port | range (1-65535) | *
      • Destination: Any|IP Addresses|VirtualNetwork
      • Destination Port: port | range (1-65535) | *
      • Protocol: Any|TCP|UDP
      • Action: Allow|Deny
      • They control inbound and outbound traffic passing through a network interface card (NIC) (Resource Manager deployment model), a VM (classic deployment model), or a subnet (both deployment models).
        • If the VM has multiple NICs, network security group rules are NOT automatically applied to traffic that is designated to other NICs.
      • The following are default rules that you cannot delete but can override, because they have the lowest priority:
        • Allow all inbound and outbound traffic within a virtual network.
        • Allow outbound traffic to the Internet.
        • Allow inbound traffic from the Azure load balancer.
        • Deny all other inbound and outbound traffic, at the lowest priority of all.
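NSG evaluation can be sketched as a first-match walk in ascending priority order, with the default DenyAll rule as the backstop (rule shapes here are illustrative, not the Azure resource schema):

```python
# Sketch of NSG rule evaluation: rules are tried in ascending priority
# (lower number wins) and processing stops at the first match. Default
# rules sit at priorities 65000-65500, so any user rule overrides them.
def evaluate(rules, packet):
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["match"](packet):
            return rule["action"]
    return "Deny"  # unreachable in practice: default DenyAll always matches

rules = [
    {"priority": 100,   "action": "Allow",
     "match": lambda p: p["dst_port"] == 443},
    {"priority": 65500, "action": "Deny",  # stand-in for default DenyAll
     "match": lambda p: True},
]

print(evaluate(rules, {"dst_port": 443}))  # → Allow
print(evaluate(rules, {"dst_port": 22}))   # → Deny
```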
      • By default you can create 100 NSGs per region per subscription. You can raise this limit to 400 by contacting Azure support.
      • You can apply only one NSG to a NIC (RM deployment model), subnet, or VM (Classic deployment model).
      • By default, you can have up to 200 rules in a single NSG. You can raise this limit to 500 by contacting Azure support.
      • You can apply an NSG to multiple resources in the same or different resource groups.
      • If one NSG is applied to the subnet and another to the NIC, inbound traffic is evaluated by the subnet NSG first and then by the NIC NSG (the order is reversed for outbound traffic); traffic must be allowed by both NSGs to pass.
    • IP addresses: VMs, Azure load balancers, and application gateways in a single virtual network require unique IP addresses in the same way as clients in an on-premises subnet do. This enables these resources to communicate with each other.
      • Private IP addresses: A private IP address is allocated to a VM dynamically or statically from the defined scope of IP addresses in the virtual network.
        • Used for communication within an Azure virtual network (VNet), and your on-premises network when you use a VPN gateway or ExpressRoute circuit to extend your network to Azure.
        • A private IP address is allocated from the address range of the subnet to which the resource is attached. The address range of the subnet itself is a part of the VNet’s address range.
        • An IP address that is allocated by DHCP has an infinite lease and is released only when you deallocate (stop) the VM.
        • The RFC 1918 standard defines three private address spaces that are never used for addressing on the Internet.
        • Administrators use these ranges behind Network Address Translation (NAT) devices to ensure unique addresses used within intranets never prevent communication with Internet servers.
        • There are two methods in which a private IP address is allocated: dynamic or static.
        • The default allocation method is dynamic, where the IP address is automatically allocated from the resource’s subnet (using DHCP). This IP address can change when you stop and start the resource.
        • You can set the allocation method to static to ensure the IP address remains the same. In this case, you also need to provide a valid IP address that is part of the resource’s subnet. Static private IP addresses are commonly used for:
          • VMs that act as domain controllers or DNS servers.
          • Resources that require firewall rules using IP addresses.
          • Resources accessed by other apps/resources through an IP address.
        • These three address spaces are the only ones that are supported within an Azure VNet.
          • 10.0.0.0/8: 10.0.0.0 – 10.255.255.255.
          • 172.16.0.0/12: 172.16.0.0 – 172.31.255.255.
          • 192.168.0.0/16: 192.168.0.0 – 192.168.255.255.
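A quick membership test against the three RFC 1918 blocks (a minimal sketch using the standard library):

```python
# Sketch: validate that an address falls in one of the three RFC 1918
# blocks before using it as a VNet private address.
import ipaddress

RFC1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in RFC1918)

print(is_rfc1918("172.31.255.254"))  # → True (top of 172.16.0.0/12)
print(is_rfc1918("172.32.0.1"))      # → False (just outside it)
```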
        • Private IP addresses can be associated with the following:
          • Virtual Machine (Dynamic|Static): NIC.
          • Load balancer – Internal (Dynamic|Static): Front-end configuration.
          • Application Gateway (Dynamic): Front-end configuration.
        • When you create a VM, a mapping for the hostname to its private IP address is added to the Azure-managed DNS servers.
        • In case of a multi-network interface VM, the hostname is mapped to the private IP address of the primary network interface.
        • You cannot set a static private IP address during the creation of a VM in the Resource Manager deployment model by using the Azure portal. You must create the VM first, then set its private IP to be static.
      • Public IP addresses: Public IP addresses allow Azure resources to communicate with external clients and the Internet.
        • Public IP addresses allow Azure resources to communicate with the Internet and Azure public-facing services such as Azure Redis Cache, Azure Event Hubs, SQL databases, and Azure storage.
        • Public IP addresses are allocated dynamically when you create a VM, and are bound to the NICs.
        • For VMs with multiple NICs, only the primary network interface can have a public IP address, and any interface that has a public IP address also has a private IP address.
        • The list of IP ranges from which public IP addresses (dynamic/static) are allocated to Azure resources is published at Azure Datacenter IP ranges.
        • Public IP can be assigned to the following:
          • Virtual Machine (Dynamic|Static): Associated with NIC.
          • Load Balancer – External (Dynamic|Static): Front-end configuration.
          • Application Gateway (Dynamic): Front end configuration.
          • VPN Gateway (Dynamic): Gateway IP configuration.
        • Dynamic allocation: The default allocation method, where the public IP address is allocated when you start (or create) the associated resource (such as a VM or load balancer) and released when you stop (or delete) the resource. This causes the IP address to change when you stop and start a resource.
        • Static allocation: To ensure the IP address for the associated resource remains the same, you can set the allocation method explicitly to static. In this case an IP address is assigned immediately. It is released only when you delete the resource or change its allocation method to dynamic. Static public IP addresses are commonly used in the following scenarios:
          • End-users need to update firewall rules to communicate with your Azure resources.
          • DNS name resolution, where a change in IP address would require updating A records.
          • Your Azure resources communicate with other apps or services that use an IP address-based security model.
          • You use SSL certificates linked to an IP address.
        • Basic: Can be assigned to VMs, load balancers, Application Gateways, and VPN Gateways, in specific zones.
        • Standard: Static allocation only; can be assigned to NICs and load balancers only. Zone-redundant by default.
        • For public IP resource, you can specify a DNS domain label, FQDN and CNAME record.
        • The DNS domain name label for a public IP resource, which creates a mapping for label.location.cloudapp.azure.com to the public IP address in the Azure-managed DNS servers.
          • You can use this FQDN to create a custom domain CNAME record pointing to the public IP address in Azure.
          • Each domain name label created must be unique within its Azure location.
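Building the `label.location.cloudapp.azure.com` FQDN is simple string assembly plus basic DNS-label validation; a minimal sketch (uniqueness within the location is enforced by Azure, not checked here, and the regex is an approximation of DNS label rules):

```python
# Sketch: the DNS name label on a public IP resource becomes
# <label>.<location>.cloudapp.azure.com. The label must be a valid DNS
# label and unique within its Azure location (uniqueness not checked here).
import re

LABEL_RE = re.compile(r"^[a-z][a-z0-9-]{1,61}[a-z0-9]$")  # approximation

def public_ip_fqdn(label: str, location: str) -> str:
    if not LABEL_RE.match(label):
        raise ValueError(f"invalid DNS label: {label!r}")
    return f"{label}.{location}.cloudapp.azure.com"

print(public_ip_fqdn("contoso-web", "westeurope"))
# → contoso-web.westeurope.cloudapp.azure.com
```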
    • Network Interface: Network interfaces are used to configure the IP addresses, virtual network settings, and DNS servers that will be assigned to a VM. Azure supports attaching multiple NICs to a VM.
      • VMs communicate with other VMs and other resources on the network by using virtual network interface cards (NICs). Virtual NICs configure VMs with a private IP address and an optional public IP address. VMs can have more than one NIC for different network configurations.
      • You can apply only one NSG to a NIC at a time.
      • Multiple NICs in Virtual Machines:
        • You can create virtual machines (VMs) in Azure and attach multiple network interfaces (NICs) to each of your VMs.
        • Multi NIC is a requirement for many network virtual appliances, such as application delivery and WAN optimization solutions.
        • Multi NIC also provides more network traffic management functionality, including isolation of traffic between a front end NIC and back end NIC(s), or separation of data plane traffic from management plane traffic.
        • The address for each NIC on each VM must fall within a subnet, and multiple NICs on a single VM can each be assigned addresses in the same subnet.
        • The VM size determines the number of NICs that you can create for a VM.
        • The following limitations are applicable when using the multi NIC feature:
          • Multi NIC VMs must be created in Azure virtual networks (VNets). Non-VNet VMs cannot be configured with Multi NICs.
          • All VMs in an availability set need to use either multi NIC or single NIC. There cannot be a mixture of multi NIC VMs and single NIC VMs within an availability set. Same rules apply for VMs in a cloud service.
          • A VM with single NIC cannot be configured with multi NICs (and vice-versa) once it is deployed, without deleting and re-creating it.
    • User Defined Routes: User Defined Routes (UDR) control network traffic by defining routes that specify the next hop of the traffic flow. You can assign User Defined Routes to virtual network subnets.
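Route selection for a UDR table follows longest-prefix matching; a minimal sketch with an illustrative route table (addresses and next-hop names are hypothetical):

```python
# Sketch of user-defined routing: pick the route whose prefix most
# specifically matches the destination, then forward to its next hop
# (e.g. a virtual appliance with IP forwarding enabled).
import ipaddress

routes = [  # hypothetical route table
    ("0.0.0.0/0",   "Internet"),
    ("10.0.0.0/8",  "VirtualAppliance 10.1.0.4"),
    ("10.2.0.0/16", "VnetLocal"),
]

def next_hop(dest: str) -> str:
    ip = ipaddress.ip_address(dest)
    matches = [(ipaddress.ip_network(p), hop) for p, hop in routes
               if ip in ipaddress.ip_network(p)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]  # longest prefix wins

print(next_hop("10.2.3.4"))  # → VnetLocal (most specific match, /16)
print(next_hop("10.9.9.9"))  # → VirtualAppliance 10.1.0.4
print(next_hop("8.8.8.8"))   # → Internet
```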
    • Forced Tunneling: With forced tunneling you can redirect internet bound traffic back to the company’s on-premises infrastructure. Forced tunneling is commonly used in scenario where organizations want to implement packet inspection or corporate audit.
    • VNet-to-VNet (VNet Peering): Used to connect two Azure virtual networks to each other.
      • Connecting a virtual network to another virtual network (VNet-to-VNet) is similar to connecting a virtual network to an on-premises site location.
      • Both connectivity types use a VPN gateway to provide a secure tunnel using IPsec/IKE.
      • The VNets you connect can be in different subscriptions and different regions.
      • You can combine VNet to VNet communication with multi-site configurations. This lets you establish network topologies that combine cross-premises connectivity with inter-virtual network connectivity.
      • All traffic is routed to Azure network
      • Peering is faster and easier to set up than a VPN, and no public IP address is required.
      • Peering relationships are not transitive.
      • Changes to the address space require re-creating the peering.
      • Peered VNets cannot have overlapping address spaces.
    • Cloud-Only Virtual Networks: When you create a VM or cloud service, you can specify endpoints that external clients can connect to. An endpoint is a VIP and a port number. Therefore an endpoint can be used only for a specific protocol, such as connecting a Remote Desktop Protocol (RDP) client or browsing a website. These VNets are known as cloud-only virtual networks.
      • A dynamic routing gateway is not required in the VNet.
      • Endpoints are published to the Internet, so they can be used by anyone with an Internet connection, including your on-premises computers.
  • Load Balancer: Deliver high availability and network performance to your applications.
    • Azure load balancer is a transport layer (layer 4) load balancer that distributes incoming traffic across healthy virtual machine instances in the same data center using a hash-based distribution algorithm.
    • By default, Azure load balancers use a 5-tuple (source IP, source port, destination IP, destination port, protocol type) hash to distribute traffic across available servers.
    • Azure load balancers also support network address translation (NAT) to route traffic between public and private IP addresses.
    • You can configure the load balancer to: Load balance incoming traffic across your virtual machines OR Forward traffic to and from a specific virtual machine using NAT rules.
    • SKUs: The Standard load balancer supports up to 1,000 backend instances, greater backend pool flexibility, HA ports, and zone-redundant scenarios.
    • Load balance traffic between virtual machines in a virtual network, between virtual machines in cloud services, or between on-premises computers and virtual machines in a cross-premises virtual network. This is called internal load balancing.
    • An endpoint listens on a public port and forwards traffic to an internal port. You can map the same ports for an internal or external endpoint or use a different port for them. The creation of a public endpoint triggers the creation of a load balancer instance.
    • Hash based distribution: The load balancer uses a hash-based distribution algorithm.
      • By default, it uses a 5-tuple (source IP, source port, destination IP, destination port, and protocol type) hash to map traffic to available servers.
      • It provides stickiness only within a transport session.
      • Packets in the same TCP or UDP session will be directed to the same datacenter IP instance behind the load-balanced endpoint.
      • When the client closes and reopens the connection or starts a new session from the same source IP, the source port changes. This may cause the traffic to go to a different datacenter IP endpoint.
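The stickiness behavior above can be sketched as follows. The real hash algorithm is internal to Azure; SHA-256 here is a deterministic stand-in, and all addresses are illustrative:

```python
# Sketch of 5-tuple hash distribution: the same flow always maps to the
# same backend, but a new source port (new session) may map elsewhere.
import hashlib

def pick_backend(src_ip, src_port, dst_ip, dst_port, proto, backends):
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{proto}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return backends[digest % len(backends)]

backends = ["10.0.0.4", "10.0.0.5", "10.0.0.6"]
flow = ("203.0.113.7", 50123, "198.51.100.1", 443, "TCP")

# The same 5-tuple is sticky to one backend for the life of the session:
print(pick_backend(*flow, backends) == pick_backend(*flow, backends))  # → True
```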
    • Port Forwarding: The load balancer gives you control over how inbound communication is managed. This communication can include traffic that’s initiated from Internet hosts or virtual machines in other cloud services or virtual networks. This control is represented by an endpoint (also called an input endpoint).
      • An endpoint listens on a public port and forwards traffic to an internal port. You can map the same ports for an internal or external endpoint or use a different port for them.
    • Service Monitoring: The load balancer can probe the health of the various server instances. When a probe fails to respond, the load balancer stops sending new connections to the unhealthy instances. Existing connections are not impacted. There are following types of probes:
      • Guest agent probe (on PaaS VMs only): The load balancer utilizes the guest agent inside the virtual machine. It listens and responds with an HTTP 200 OK response only when the instance is in the ready state (i.e. the instance is not in a state like busy, recycling, or stopping). If the guest agent fails to respond with an HTTP 200 OK, the load balancer marks the instance as unresponsive and stops sending traffic to that instance.
        • The load balancer will continue to ping the instance. If the guest agent responds with an HTTP 200, the load balancer will send traffic to that instance again.
        • When you’re using a web role, your website code typically runs in w3wp.exe, which is not monitored by the Azure fabric or guest agent. This means that failures in w3wp.exe (e.g. HTTP 500 responses) will not be reported to the guest agent, and the load balancer will not know to take that instance out of rotation.
      • HTTP custom probe: This probe overrides the default (guest agent) probe. You can use it to create your own custom logic to determine the health of the role instance. The load balancer will regularly probe your endpoint (every 15 seconds, by default). The instance will be considered in rotation if it responds with a TCP ACK or HTTP 200 within the timeout period (default of 31 seconds).
      • TCP custom probe: This probe relies on successful TCP session establishment to a defined probe port.
    • Source NAT: All outbound traffic to the Internet that originates from your service undergoes source NAT (SNAT) by using the same VIP address as for incoming traffic.
      • It enables easy upgrade and disaster recovery of services, since the VIP can be dynamically mapped to another instance of the service.
      • It makes access control list (ACL) management easier, since ACLs can be expressed in terms of VIPs and hence do not change as services scale up or down or get redeployed.
      • The load balancer configuration supports full cone NAT for UDP. Full cone NAT is a type of NAT where the port allows inbound connections from any external host (in response to an outbound request).
      • For each new outbound connection that a VM initiates, an outbound port is also allocated by the load balancer. The external host will see traffic with a virtual IP (VIP)-allocated port. If your scenarios require a large number of outbound connections, we recommend that the VMs use instance-level public IP addresses so that they have a dedicated outbound IP address for SNAT. This will reduce the risk of port exhaustion.
      • The maximum number of ports that can be used by the VIP or an instance-level public IP (PIP) is 64,000. This is a TCP standard limitation.
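The port-exhaustion risk can be illustrated with simple division. This is a deliberate simplification (real SNAT port allocation uses preallocated tiers per backend count), but it shows why large pools benefit from instance-level public IPs:

```python
# Sketch: roughly 64,000 SNAT ports are available behind one VIP, so the
# more VMs share the VIP, the fewer concurrent outbound flows each VM
# gets. Numbers are illustrative, not Azure's exact allocation scheme.
MAX_SNAT_PORTS = 64_000

def ports_per_vm(vm_count: int) -> int:
    return MAX_SNAT_PORTS // vm_count

print(ports_per_vm(10))   # → 6400 concurrent outbound flows per VM, at most
print(ports_per_vm(100))  # → 640 -- a candidate for instance-level public IPs
```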
    • Public load balancer: A public load balancer maps the public IP address and port number of incoming traffic to the private IP address and port number of the virtual machine and vice versa for the response traffic from the virtual machine.
      • You can associate a public IP address with an Azure Load Balancer, by assigning it to the load balancer frontend configuration.
      • The public IP address serves as a load-balanced virtual IP address (VIP).
      • You can assign either a dynamic or a static public IP address to a load balancer front-end.
      •  You can also assign multiple public IP addresses to a load balancer front-end, which enables multi-VIP scenarios like a multi-tenant environment with SSL-based websites.
    • Internal load balancer: Internal Load Balancer only directs traffic to resources that are inside a virtual network or that use a VPN to access Azure infrastructure.
      • You can assign a private IP address to the front end configuration of an Azure Internal Load Balancer (ILB) or an Azure Application Gateway.
      • This private IP address serves as an internal endpoint, accessible only to the resources within its virtual network (VNet) and the remote networks connected to the VNet.
      • You can assign either a dynamic or a static private IP address to the front-end configuration.
  • Application Gateway: Build secure, scalable, and highly available web front ends in Azure.
    • Works at the application layer (layer 7) as a reverse proxy service.
    • Application gateways provide load-balanced solutions for network traffic that is based on the HTTP/HTTPS protocol.
    • They use routing rules as application-level policies that can offload Secure Sockets Layer (SSL) processing from load-balanced VMs.
    • In addition, you can use application gateways for a cookie-based session affinity scenario.
    • You can associate a public IP address with an Azure Application Gateway, by assigning it to the gateway’s frontend configuration. This public IP address serves as a load-balanced VIP. Currently, you can only assign a dynamic public IP address to an application gateway frontend configuration.
    • The default health monitoring tests servers every 30 seconds for a healthy HTTP response. A healthy HTTP response has a status code between 200 and 399.
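The default healthy-response rule is just a status-code range check; a minimal sketch (probe interval and failure thresholds are not modeled):

```python
# Sketch of the default Application Gateway health rule: an HTTP status
# between 200 and 399 counts as healthy.
def is_healthy(status_code: int) -> bool:
    return 200 <= status_code <= 399

print([c for c in (200, 302, 399, 404, 500) if is_healthy(c)])
# → [200, 302, 399]
```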
    • HTTP layer 7 load balancing is useful for:
      • HTTP load balancing: Applications, such as a content delivery network, that requires multiple HTTP requests on the same long-running TCP connection to be routed or load balanced to different back-end servers.
        • URL-based content routing
        • Multi-site routing
      • Cookie-based session affinity: Applications that require requests from the same user/client session to reach the same back-end virtual machine. Examples of these applications would be shopping cart apps and web mail servers.
      • Secure Sockets Layer (SSL) offload: Applications that want to free web server farms from SSL termination overhead.
    • Application Gateway Requirements:
      • Back-end server pool: The list of IP addresses of the back-end servers. The IP addresses listed should either belong to the virtual network subnet or should be a public IP/VIP.
      • Back-end server pool settings: Every pool has settings like port, protocol, and cookie-based affinity. These settings are tied to a pool and are applied to all servers within the pool.
      • Front-end port: This port is the public port that is opened on the application gateway. Traffic hits this port, and then gets redirected to one of the back-end servers.
      • Listener: The listener has a front-end port, a protocol (Http or Https, these are case-sensitive), and the SSL certificate name (if configuring SSL offload).
      • Rule: The rule binds the listener and the back-end server pool and defines which back-end server pool the traffic should be directed to when it hits a particular listener.
  • Traffic Manager: Route incoming traffic for high performance and availability.
    • You can use Traffic Manager to load balance between endpoints that are located in different Azure regions, at hosted providers, or in on-premises datacenters. These endpoints can include Azure VMs, Web Apps, and cloud services.
    • You can configure this load-balancing service to support priority or to ensure that users connect to an endpoint that is close to their physical location for faster response.
    • Traffic Manager works at the DNS level. Traffic Manager uses DNS to direct end users to particular service endpoints. Clients then connect to the selected endpoint directly. Traffic Manager is not a proxy, and does not see the traffic passing between the client and the service.
    • When using a vanity domain with Azure Traffic Manager, you must use a CNAME to point your vanity domain name to your Traffic Manager domain name. Due to a restriction of the DNS standards, a CNAME cannot be created at the ‘apex’ (or root) of a domain.
      • Thus you cannot create a CNAME for ‘contoso.com’ (sometimes called a ‘naked’ domain). You can only create a CNAME for a domain under ‘contoso.com’, such as ‘www.contoso.com’.
      • Thus you cannot use Traffic Manager directly with a naked domain. To work around this, we recommend using a simple HTTP re-direct to direct requests for ‘contoso.com’ to an alternative name such as ‘www.contoso.com’.
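The apex restriction reduces to a simple name check, sketched here: a CNAME is only valid for names strictly below the zone, never at the zone apex itself.

```python
# Sketch: a CNAME may not be created at the zone apex, so only names
# strictly below the zone can alias the Traffic Manager domain name.
def can_cname(record_name: str, zone: str) -> bool:
    return record_name != zone and record_name.endswith("." + zone)

print(can_cname("www.contoso.com", "contoso.com"))  # → True
print(can_cname("contoso.com", "contoso.com"))      # → False: apex/naked domain
```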
    • Traffic Manager provides two key benefits:
      • Distribution of traffic according to one of several traffic-routing methods
        • Ensure the user is directed to the closest version of an app
      • Continuous monitoring of endpoint health and automatic failover when endpoints fail
        • Stop routing users to a location if traffic goes down
  • Azure DNS: Host your DNS domain in Azure.
    • The Domain Name System (DNS) enables clients to resolve user-friendly fully qualified domain names (FQDNs), such as www.adatum.com, to IP addresses.
    • Names of resources that are created in Azure can be resolved by using Azure-provided name resolution or by using a customer provided DNS server.
    • Azure provides a DNS system to support many name resolution scenarios. However, in some cases, such as hybrid connections, you might need to configure an external DNS system to provide name resolution for virtual machines on a virtual network.
    • In a hybrid scenario where your on-premises network is connected to an Azure virtual network through a VPN or ExpressRoute circuit, an on-premises computer cannot resolve the name of a VM in an Azure virtual network until you configure the DNS servers with a record for the VM.
    • Resources created in the same virtual network and deployed with Azure Resource Manager (ARM) share the same DNS suffix; therefore, in most cases name resolution by using FQDN is not required.
    • For virtual networks that are deployed by using the Azure classic deployment model, the DNS suffix is shared among VMs that belong to the same cloud service. Therefore, name resolution between VMs that belong to different cloud services in the same virtual network require the use of FQDN.
    • A DNS server should have a static IP address because clients may not be able to locate it if its address changes.
    • Azure DNS does not currently support purchasing of domain names. If you want to purchase domains, you’ll need to use a third-party domain name registrar. The domains can then be hosted in Azure DNS for management of DNS records.
    • Azure uses Anycast networking so that each DNS query is answered by the closest available DNS server. This provides both fast performance and high availability for your domain.
    • The root zone contains NS records for ‘com‘ and shows the name servers for the ‘com’ zone. In turn, the ‘com’ zone contains NS records for ‘contoso.com’, which shows the name servers for the ‘contoso.com’ zone. Setting up the NS records for a child zone in a parent zone is called delegating the domain.
    • Name Resolution Scenarios:
      • VMs in the same cloud service. VMs can resolve the names of all other VMs in the same cloud service automatically by using the internal Azure name resolution.
      • VMs in the same VNet. If the VMs are in different cloud services but within a single VNet, those VMs can resolve IP addresses for each other by using the internal Azure name resolution service and their Fully Qualified Domain Names (FQDNs). This is supported only for the first 100 cloud services in the VNet. Alternatively, use your own DNS system to support this scenario.
      • Between VMs in a VNet and on-premises computers. To support this scenario you must use your own DNS system.
      • Between VMs in different VNets. To support this scenario you must use your own DNS system.
      • Between on-premises computers and public endpoints. If you publish an endpoint from a VM in an Azure VNet, the Azure-provided external name resolution service will resolve the public VIP. This also applies for any internet-connected computers that are not on your premises.
      • If two VMs are deployed in different IaaS cloud services but not in a VNet, they cannot communicate at all, even by using DIPs. Therefore name resolution is not applicable.
    • If you are planning to use your own DNS system, you must ensure that all computers can reach a DNS server for registering and resolving IP addresses. You can either deploy DNS on a VM in the Azure VNet or have VMs register their addresses with an on-premises DNS server.
    •  Your DNS server must meet the following requirements:
      • The server must support Dynamic DNS (DDNS) registration.
      • The server must have record scavenging switched off. Because DHCP leases in an Azure VNet are infinite, record scavenging can remove records that have not been renewed but are still correct.
      • The server must have DNS recursion enabled.
      • The server must be accessible on TCP/UDP port 53 from all clients.
    • Domains and Zones: The Domain Name System is a hierarchy of domains.
      • The hierarchy starts from the ‘root’ domain, whose name is simply ‘.’.
      • Below this come top-level domains, such as com, net, org, uk or jp.
      • Below these are second-level domains, such as org.uk or co.jp. And so on.
      • The domains in the DNS hierarchy are hosted using separate DNS zones. These zones are globally distributed, hosted by DNS name servers around the world.
    • Domain registrar: A domain registrar is a company that can provide Internet domain names.
      • They will verify if the Internet domain you want to use is available and allow you to purchase it.
      • Once the domain name is registered, you will be the legal owner for the domain name. If you already have an Internet domain, you will use the current domain registrar to delegate to Azure DNS.
    • Zone Delegation: Azure DNS allows you to host a DNS zone and manage the DNS records for a domain in Azure. In order for DNS queries for a domain to reach Azure DNS, the domain has to be delegated to Azure DNS from the parent domain.
      • To delegate authority to a DNS zone hosted in Azure, contact the registrar for your domain name and replace the configured name server (NS) records with the NS records listed in the zone created in Azure.
      • NS records are created by default when the DNS zone is created.
      • Each delegation actually has two copies of the NS records; one in the parent zone pointing to the child, and another in the child zone itself.  The abc.com zone contains the NS records for abc.com (in addition to the NS records in com).
      • Each registrar has their own DNS management tools to change the name server records for a domain. In the registrar’s DNS management page, edit the NS records and replace the NS records with the ones Azure DNS created.
    • Record: A property of a record set; the list of individual records.
    • Record Sets: A child resource of a DNS zone; the collection of records with the same name and type (for example, a record set named www with type A, containing multiple IP addresses).
    • DNS Zones: Used to host the DNS records for a particular domain.
      • A domain is a unique name in the Domain Name System, for example contoso.com. A DNS zone is used to host the DNS records for a particular domain. For example, the domain contoso.com may contain a number of DNS records such as mail.contoso.com (for a mail server) and www.contoso.com (for a website).
      • You do not have to own a domain in order to create a DNS zone with that domain name in Azure DNS. However, you do need to own the domain to set up the delegation to Azure DNS with the registrar.
    • DNS Servers:
      • Authoritative DNS server hosts DNS zones. It answers DNS queries for records in those zones only.
      • Recursive DNS server does not host DNS zones. It answers all DNS queries by calling authoritative DNS servers to gather the data it needs.
      • Azure DNS provides an authoritative DNS service. It does not provide a recursive DNS service.
      • Cloud Services and VMs in Azure are automatically configured to use a recursive DNS that is provided separately as part of Azure’s infrastructure.
    • Azure DNS supports ETags to prevent accidental concurrent changes.
  • VPN Gateway: Establish secure, cross-premises connectivity.
    • Azure VPN Gateway is used to connect an Azure virtual network (VNet) to other Azure VNets or to an on-premises network.
    • Virtual networks in Microsoft Azure also enable you to extend your on-premises networks to the cloud. To extend your on-premises network, you can create a virtual private network (VPN) between your on-premises computers or networks and an Azure virtual network.
    • The VPN gateway routes traffic between VMs and PaaS cloud services in the virtual network, and computers at the other end of the connection.
    • A virtual network gateway is the software VPN device for your Azure virtual network.
    •  You need to assign a public IP address to its IP configuration to enable it to communicate with the remote network. Currently, you can only assign a dynamic public IP address to a VPN gateway.
    • All VPN connections require a virtual gateway in the virtual network, which routes traffic to the on-premises computers.
    • The following VPN connections are available: Point-to-site, Site-to-site, VNet-to-VNet, IaaS v1 VNet-to-IaaS v2 VNet, Multisite, and ExpressRoute.
    • Considerations for inter-site connection:
      • Azure supports a maximum of 30 VPN tunnels per VPN gateway. Each point-to-site VPN, site-to-site VPN, or VNet-to-VNet VPN counts as one of those VPN tunnels. A single VPN gateway can support up to 128 connections from client computers.
      • Address spaces must not overlap.
      • Redundant tunnels are not supported.
      • All VPN tunnels to a virtual network share the available bandwidth on the Azure VPN gateway.
      • VPN devices must meet certain requirements. These requirements are listed on the Microsoft website.
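The overlap rule above can be verified locally before creating a connection; a minimal sketch using Python's ipaddress module (the CIDR ranges are examples):

```python
import ipaddress

def overlaps(cidr_a, cidr_b):
    """Return True if two CIDR address spaces share any addresses."""
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

# Proposed VNet range vs. an on-premises range
print(overlaps("10.0.0.0/16", "10.0.1.0/24"))    # True  - conflict, choose another space
print(overlaps("10.0.0.0/16", "172.16.0.0/24"))  # False - safe to use
```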
    • VPN Type:
      • Route-based: Point-to-site, inter-virtual network, and multiple site-to-site VPN connections are only supported with a route-based virtual network gateway.
        • In addition, if you are creating a VPN-type gateway to coexist with an ExpressRoute gateway, the VPN gateway must be route-based and the ExpressRoute gateway must be created first.
        • SKU: Route-based VPN gateway types are offered in three SKUs: Basic, Standard, and High Performance. Standard or High Performance must be chosen if the gateway is being created to coexist with an ExpressRoute gateway.
      • Policy-based:
        • SKU: Basic only
    • Point-to-Site VPN: This is a VPN that connects individual computers to an Azure virtual network.
      • No extra hardware is required but you must complete the configuration procedure on every computer that you want to connect to the VNet.
      • Can be used by the client computer to connect to a VNet from any location with an Internet connection.
      • Configuration:
        • Configure an IP address space for clients
        • Configure a virtual gateway
        • Create root and client certificates
        • Create and install VPN client configuration
        • Connect to the VPN
    • Site-to-Site VPN: This is a VPN that connects an on-premises network and all its computers to an Azure virtual network.
      • To create this connection, you must configure a gateway and IP routing in the on-premises network; it is not necessary to configure individual on-premises computers.
      • You can use a Windows Server 2012 computer running RRAS as a gateway to the VNet. Alternatively, there is a range of third-party VPN devices that are known to be compatible.
      •  If you have a VPN device that is not on the known compatible list, you may be able to use it if it satisfies the list of gateway requirements.
  • ExpressRoute: Dedicated private fiber network connections to Azure.
    • You can use ExpressRoute to provide a dedicated connection to an Azure virtual network that does not cross the Internet.
    • Because ExpressRoute connections are dedicated, they can offer faster speeds, higher security, lower latencies, and higher reliability than VPNs.
  • Content Delivery Network: Ensure secure, reliable network delivery with broad global reach.
  • Network Watcher: Network performance monitoring and diagnostic solution.
  • DDOS Protection: Protect your applications from Distributed Denial of Service (DDoS) attacks.
Posted in azure

Azure Virtual Machines

  • Azure Virtual Machines:
    • Azure Compute Unit (ACU) provides a way of comparing compute (CPU) performance across Azure SKUs and identifying which SKU is most likely to satisfy your performance needs.
    • VM Series:
      • A Series: Entry level for Dev/Test
      • D Series: General purpose/Balanced (Infrastructure)
      • F Series: Optimized for compute (Web Apps, Analytics, Gaming)
      • G Series: Optimized for memory and storage (SQL, ERP, SAP)
      • H Series: High performance (Modeling, AI)
      • L Series: Storage optimized (Data warehousing, Cassandra, MongoDB)
      • N Series: GPU enabled (Graphics, Video)
    • VM Sizes:
      • General Purpose: (B, DSv3, DSv2, Dv3, Dv2, DS, D, Av2, A0-7)
        • Balanced CPU-to-memory ratio.
        • Ideal for testing and development, small to medium databases, and low to medium traffic web servers.
      • Compute optimized: (Fsv2, Fs, F)
        • High CPU-to-memory ratio.
        • Good for medium traffic web servers, network appliances, batch processes, and application servers.
      • Memory optimized: (Esv3, Ev3, M, GS, G, DSv2, DS, Dv2, D)
        • High memory-to-CPU ratio.
        • Great for relational database servers, medium to large caches, and in-memory analytics.
      • Storage optimized: (Ls)
        • High disk throughput and I/O.
        • Ideal for Big Data, SQL, and NoSQL databases.
      • GPU: (NV, NC)
        • Specialized virtual machines targeted for heavy graphic rendering and video editing. Available with single or multiple GPUs.
      • High performance compute: (H, A8-11)
        • Fastest and most powerful CPU virtual machines with optional high-throughput network interfaces (RDMA).
    • Deploying VMs:
      • Azure portal: Used to create IaaS v2 (ARM) virtual machines.
      • Azure PowerShell: Can be used to create virtual machines using either Azure Service Manager (ASM, IaaS v1) or Azure Resource Manager (IaaS v2).
      • Azure CLI
      • Visual Studio
      • Azure Resource Manager templates:
        • ARM template max size can be 1 MB.
    • Connecting VMs to your infrastructure:
      • Site to Site VPN
      • Azure vNet with multiple VMs
      • Azure based DC or on-premises DC
      • ExpressRoute
    • VM images:
      • Image Types:
        • OS images: Includes only a generalized OS
        • VM images: Includes an OS and all disks attached
      • Image Sources:
        • Azure Marketplace: Contains recent version for Windows Server and Linux distributions
        • VM Depot: Community managed repository of Linux and FreeBSD VM images
      • Custom image:
        • Image you create and upload for use in Azure.
    • Availability Sets: A logical group of virtual machines that are deployed across fault domains and update domains. Availability sets make sure that your application is not affected by single points of failure, like the network switch or the power unit of a rack of servers.
      • For redundancy, configure multiple virtual machines in an Availability Set.
      • Configure each application tier into separate Availability Sets.
      • Combine a Load Balancer with Availability Sets.
      • Microsoft has a 99.95% SLA in place for availability sets; for single-instance VMs using Premium disks, the SLA is 99.9%.
      • Each virtual machine in an availability set is placed in one update domain and one fault domain.
      • Update domain: Virtual machines in the same update domain will be restarted together during planned maintenance. Azure never restarts more than one update domain at a time.
        • It allows Azure to perform incremental or rolling upgrades across a deployment.  Each update domain contains a set of virtual machines and associated physical hardware that can be updated and rebooted at the same time.
        • During planned maintenance, only one update domain is rebooted at a time. By default there are five update domains, but you can configure up to twenty update domains.
      • Fault domain: Virtual machines in the same fault domain share a common power source and physical network switch.
        • It defines a group of virtual machines that share a common set of hardware, switches, and more that share a single point of failure.
        • For example, a server rack serviced by a set of power or networking switches.
        • VMs in an availability set are placed in at least two fault domains. This mitigates against the effects of hardware failures, network outages, power interruptions, or software updates.
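Azure performs this placement automatically, but the round-robin idea can be sketched in Python (five update domains and two fault domains, matching the defaults described above; the layout is illustrative, not Azure's actual algorithm):

```python
def place_vms(n_vms, update_domains=5, fault_domains=2):
    """Assign each VM an (update domain, fault domain) pair round-robin."""
    return [(i % update_domains, i % fault_domains) for i in range(n_vms)]

for vm, (ud, fd) in enumerate(place_vms(4)):
    print(f"vm{vm}: UD{ud} FD{fd}")
# With two or more instances, no single planned reboot (one UD at a
# time) or hardware failure (one FD) takes down every VM at once.
```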
    • Virtual Machine Scale Set is an Azure compute resource you can use to deploy and manage a set of identical VMs. With all VMs configured the same, VM scale sets are designed to support true auto-scale – no pre-provisioning of VMs is required – making it easier to build large-scale services targeting resource-intensive compute, data, and containerized workloads.
      • You can create both Linux and Windows VM Scale Sets from the Azure Portal. These scale sets are automatically created with load balancer NAT rules to enable SSH or RDP connections.
      • You can set the maximum, minimum and default number of VMs, and define triggers – action rules based on resource consumption.
      • When you increase the number of virtual machines in a scale set, VMs are balanced across update and fault domains to ensure maximum availability. Similarly, when you scale in, VMs are removed with maximum availability in mind.
      • Scale sets are often integrated with Azure Insights, Load Balancer, and NAT rules. Azure Insights is used to determine when to scale out or scale in. The Load Balancer and NAT rules work together to spread the workload over the available machines as they are added.
      • Azure Resource Explorer is a great tool to view and modify resources you have created in your subscription. The tool is web-based and uses your Azure portal logon credentials. This tool is particularly useful in viewing Azure scale sets. With the tool you can see the individual virtual machines and their properties.
    • Types of IP Addresses:
      • Public IP addresses: Used for communication with the Internet, including Azure public-facing services, like SQL Services. You can associate public IP addresses with virtual machines, internet facing load balancers, VPN gateways, and application gateways.
        • Dynamic allocation: the IP address is not allocated at the time of its creation. Instead, the public IP address is allocated when you start (or create) the associated resource (like a VM or load balancer). The IP address is released when you stop (or delete) the resource. This means the IP address can change.
        • Static allocation: the IP address for the associated resource does not change. In this case an IP address is assigned immediately. It is released only when you delete the resource or change its allocation method to dynamic.
      • Private IP addresses: Used for communication within an Azure virtual network (VNet), and your on-premises network when you use a VPN gateway or ExpressRoute circuit to extend your network to Azure. You can associate private IP addresses with virtual machines, internal load balancers, and application gateways.
        • The default allocation method is dynamic, where the IP address is automatically allocated from the resource’s subnet (using DHCP). This IP address can change when you stop and start the resource.
        • You can set the allocation method to static to ensure the IP address remains the same. In this case, you also need to provide a valid IP address that is part of the resource’s subnet.
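The "valid IP address that is part of the resource's subnet" requirement can be checked with Python's ipaddress module; a small sketch (subnet and addresses are examples):

```python
import ipaddress

def valid_static_ip(ip, subnet_cidr):
    """True when the requested static private IP lies inside the subnet."""
    return ipaddress.ip_address(ip) in ipaddress.ip_network(subnet_cidr)

print(valid_static_ip("10.0.1.10", "10.0.1.0/24"))  # True
print(valid_static_ip("10.0.2.10", "10.0.1.0/24"))  # False - outside the subnet
```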
    • Configuration Management Tools: Deploying and maintaining the desired state of your servers and application resources can be tedious and error prone. Azure supports several configuration management systems.
      • Desired State Configuration (DSC): With Azure Automation Desired State Configuration (DSC), you can consistently deploy, reliably monitor, and automatically update the desired state of all your IT resources, at scale from the cloud. DSC is a VM agent extension and works on both Windows and Linux. DSC supports ARM templates, Azure PowerShell, and XPLAT-CLI.
      • Chef and Puppet: Chef and Puppet are other configuration management tools that let you automate the entire lifecycle of your Azure infrastructure, from initial provisioning through application deployment. Both are popular Linux tools and VM agent extensions.
      • Ansible: Ansible is an open source, clientless automation tool that automates software and OS feature provisioning, configuration management, and application deployment. Ansible includes a suite of modules for interacting with Azure Resource Manager, making it possible to create and orchestrate infrastructure in Azure.
    • Monitoring and Diagnostics: The administrator enables and configures VM diagnostics from the Monitoring area of the new portal VM blade. An administrator can enable diagnostic logging for: basic metrics, network and web metrics, .NET metrics, Windows event system logs, Windows event security logs, Windows event application logs, and diagnostic infrastructure logs.
      • You can access host-level metrics from VMs (Azure Resource Manager-based) and virtual machine scale sets without any additional diagnostic setup.
      • These new host-level metrics are available for Windows and Linux instances. These metrics are not to be confused with the Guest-OS-level metrics that you have access to when you turn on Azure Diagnostics on your VMs or virtual machine scale sets.
      • Alerts: You can receive an alert notification when the value of an alert rule crosses an assigned threshold; the alert rule becomes active and can send a notification. For an alert rule on events, a rule can send a notification on every event, or only when a certain number of events happen.
        • When you create an alert rule, you can select options to send an email notification to the service administrator and co-administrators or to another administrator that you can specify. A notification email is sent when the rule becomes active, and when an alert condition is resolved.
        • For example, this alert rule will trigger when the CPU percentage guest OS value is greater than 75% in a five minute period.
        • Alerts can be delivered via SMS, email, and webhook (Automation Runbook, Function, Logic App, third-party URL).
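The CPU example above (alert when the average guest-OS CPU exceeds 75% over a five-minute period) can be modelled as a simple window check; a sketch with made-up samples:

```python
def alert_active(samples, threshold=75.0):
    """True when the average of the metric window exceeds the threshold."""
    return sum(samples) / len(samples) > threshold

# One CPU-percentage sample per minute over a five-minute window
print(alert_active([70, 80, 90, 85, 78]))  # True  -> rule active, notification sent
print(alert_active([60, 65, 70, 72, 68]))  # False -> condition resolved
```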
    • Security Groups: A group of rules that can be applied to network interfaces and subnets.
    • Backups: Based on a Recovery Services vault. The retention range can be a daily, weekly, or monthly backup point.
    • Enable Update Management: It's enabled per VM. Create an Operations Management Suite (OMS) workspace and install the OMS agents. Takes 15-30 minutes to report results; scans every 12 hours by default.
    • Change Tracking: Helps identify changes in your environment (Windows and Linux) for debugging, troubleshooting, and compliance. Tracks software, files, registry keys (Windows), and services/daemons. All data is sent to the Log Analytics service.
    • Extensions: VM Agent Extensions are software components that extend the VM functionality and management operations. An administrator can install multiple extensions on a VM. Currently available extensions include management tools such as Desired State Configuration (DSC), Chef, and Puppet.
      • VM Agent: A secure, lightweight process that installs, configures, and removes VM extensions on instances of Azure virtual machines. It is intended to bootstrap these additional extensions, offered by both Microsoft and partners. The extensions that the agent loads provide specific features to increase your productivity using the instance.
    • Azure Cross-Platform Command-Line Interface (XPLAT-CLI) provides a set of open source, cross-platform commands for working with Azure. Although available for all platforms, XPLAT-CLI is primarily for use with Linux-based VMs, as Windows VMs are usually managed with Azure PowerShell commands.
    • Although Azure virtual machines are based on Windows Server Hyper-V, not all Hyper-V features are supported. For example, Multipath I/O and Network Load Balancing are not currently supported.
    • Upgrade of the Windows operating system of a Microsoft Azure virtual machine is not supported. Instead, you should create a new Azure virtual machine that is running the supported version of the operating system that is required and then migrate the workload.
    • Endorsed Linux distributions support an upgrade of the operating system of an Azure virtual machine when the distribution is fully open source. If a licensed Linux distribution is used, follow the partner-specific rules to upgrade (BYOL or other).
    • Some of the physical hosts in the Azure data centers may not support larger virtual machine sizes, such as A5 to A11. If that happens you may get an error message such as Failed to configure virtual machine or Failed to create virtual machine.
    • Only fixed-size VHD disks can be uploaded to Azure (the VHDX format is not supported, but you can convert it to VHD using the Convert-VHD PowerShell cmdlet).
    • Source virtual machines must be generalized using Sysprep. If using ARM, you must also set the status of the virtual machine to generalized using the Set-AzureRmVm cmdlet.
    • For capturing IaaS v2 Azure virtual machines, you must use PowerShell or Azure Resource Explorer.
    • All Linux distributions available in the Azure image gallery are endorsed distributions, and the Azure platform SLA (e.g., uptime availability) applies only to endorsed Linux distributions. Non-endorsed distributions may run on Azure if they meet a number of prerequisites.
    • Azure provides a large image gallery in the Marketplace. The gallery includes recent operating system images of Windows Server, Linux, and SQL Server. You can also store your own images in Azure, by capturing an existing virtual machine and uploading the image.

Azure PowerShell and CLI

  • Install PowerShell on Ubuntu 16:
  • Powershell basic cmdlets:
    • pwsh                                                  (Start the PowerShell console)
    • verb-noun -param Arg1, Arg2  (General syntax)
    • Help:
      • Get-Help *process                 (help is an alias for Get-Help)
      • Get-Command                        (List all available commands)
      • Update-Help -force                (Update PowerShell help)
    • Get-Process -Id 123 | Stop-Process
    • Get-Process -Name sshd | Out-File  processes.txt
      • cat processes.txt
    • Set-Location (sl) /tmp
    • pwd
    • $psversiontable                    (Show version)
    • Get-WindowsFeature | Where-Object Installed -eq $true
    • Get-WindowsFeature web-server | Install-WindowsFeature
  • Azure PowerShell cmdlets:
    • Service Management Mode (Azure module):
      • It's the default module; it includes cmdlets for managing Azure services as individual resources.
      • It's used to view, create, and manage individual Azure services in your subscription.
      • Get-Command -Module Azure | Get-Help | Format-Table Name, Synopsis
    • Resource Manager Mode (AzureResourceManager module):
      • It includes cmdlets that enable you to manage related services as a single unit. All cmdlets have RM as part of the command.
      • You can use PowerShell to create and manage Azure resources in resource groups. This approach makes it easier to manage related sets of resources as a unit.
      • Get-Command -Module AzureRM | Get-Help | Format-Table Name, Synopsis
    • Install Azure PowerShell on Windows:
      • Run Windows PowerShell ‘Run as administrator’
      • Install-Module Azure          (For untrusted repository type Y)
      • Import-Module Azure 
      • Install-Module AzureRM    (Install and import the NuGet provider, Type Y)
      • Import-Module AzureRM
      • Get-Module -ListAvailable AzureRM    (Verify Azure module)
      • Get-Command *rmstorage*
      • Get-Help New-AzureStorageAccount -full
      • Note: if you see the error "running scripts on your computer has been disabled":
        • Set-ExecutionPolicy Unrestricted
        • Import-Module AzureRM
        • Set-ExecutionPolicy Restricted
    • Basic Azure Cmdlets:
      • Get-AzureRmVM  [-ResourceGroupName rg1  -VMName vm1]
        • (Get-AzureRmVM vm1).StorageProfile.OsDisk
      • Get-AzureRmVMSize -ResourceGroupName rg1  -VMName vm1
      • Update-AzureRmVM -ResourceGroupName rg1  -VM $vm
      • Stop-AzureRmVM -ResourceGroupName rg1  -Name vm1
      • Start-AzureRmVM -ResourceGroupName rg1  -Name vm1
      • New-AzureRmResourceGroup -Name 'rg1' -Location 'East US'
      • Get-AzureRmResourceGroup
      • Remove-AzureRmResourceGroup -Name 'rg1'
      • Enter-PSSession -ComputerName 164.54.67.43 -Credential (Get-Credential) -UseSSL -SessionOption (New-PSSessionOption -SkipCACheck -SkipCNCheck)
      • ConvertTo-AzureRmManagedDisk -ResourceGroupName rg1  -VMName vm1
      • Add-AzureRmVhd -ResourceGroupName disks -Destination "https://vmstorecjh.blob.core.windows.net/vhd/mydata.vhd" -LocalFilePath D:\mydata.vhd -Verbose
      • $publicIP = New-AzureRmPublicIpAddress -Name pubIp -ResourceGroupName rg1 -Location "east us" -AllocationMethod Static -DomainNameLabel loadbalancernrp
      • New-AzureRmVirtualNetwork
      • Azure DNS:
        • PS> $zone = Get-AzureRmDnsZone -Name abc.com -ResourceGroupName rg1
        • Get-AzureRmDnsRecordSet -Name "@" -RecordType NS -Zone $zone
        • New-AzureRmDnsRecordSet -Name abc -ZoneName abc.com -ResourceGroupName rg1 -RecordType A -DnsRecords (New-AzureRmDnsRecordConfig -Ipv4Address "1.2.3.4")
        • Resolve-DnsName -Server ns1-07.azure-dns.com -Name abc.com 
    • Create a VM using PowerShell:
      • Default latest Windows Server 2016 VM:
        1. Login-AzureRmAccount | Connect-AzureRmAccount (Set-ExecutionPolicy Unrestricted)
        2. Get-AzureRmSubscription | sort SubscriptionName | Select SubscriptionName
        3. Select-AzureRmSubscription -SubscriptionName “Free Trial”
        4. New-AzureRmResourceGroup -Name "rg1" -Location "EastUS"
        5. $cred = Get-Credential
        6. New-AzureRmVm -Name “vm1” -ResourceGroupName “rg1” -Location “EastUS” -VirtualNetworkName “vnet1” -SubnetName “subnet1” -SecurityGroupName “nsg1” -PublicIpAddressName “ip1” -Credential $cred
        7. Get-AzureRmVM -ResourceGroupName rg1 -Name vm1 -Status
        8. Get-AzureRmPublicIpAddress -ResourceGroupName “rg1” | Select IpAddress
        9. mstsc  /v:<publicIpAddress>
      • Detailed steps:
        1. Login-AzureRmAccount | Connect-AzureRmAccount (Set-ExecutionPolicy Unrestricted)
        2. Get-AzureRmSubscription | sort SubscriptionName | Select SubscriptionName
        3. Select-AzureRmSubscription -SubscriptionName "Free Trial"
        4. $stName = "<chosen storage account name>"
        5. $locName = “East US”
        6. $rgName = “rg1”

        7. New-AzureRmResourceGroup -Name $rgName -Location $locName
        8. $storageAcc = New-AzureRmStorageAccount -ResourceGroupName $rgName -Name $stName -Type “Standard_GRS” -Location $locName
        $subnet = New-AzureRmVirtualNetworkSubnetConfig -Name subnet1 -AddressPrefix 10.0.0.0/24
        9. $vnet = New-AzureRmVirtualNetwork -Name net1 -ResourceGroupName $rgName -Location $locName -AddressPrefix 10.0.0.0/16 -Subnet $subnet
        10. $pip = New-AzureRmPublicIpAddress -Name ip1 -ResourceGroupName $rgName -Location $locName -AllocationMethod Dynamic
        11. $nic = New-AzureRmNetworkInterface -Name nic1 -ResourceGroupName $rgName -Location $locName -SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $pip.Id
        12. $cred = Get-Credential -Message "Type administrator credentials"
        13. $vm = New-AzureRmVMConfig -VMName WindowsVM -VMSize “Standard_A1”
        14. $vm = Set-AzureRmVMOperatingSystem -VM $vm -Windows -ComputerName MyWindowsVM -Credential $cred -ProvisionVMAgent -EnableAutoUpdate
        15. $vm = Set-AzureRmVMSourceImage -VM $vm -PublisherName MicrosoftWindowsServer -Offer WindowsServer -Skus 2012-R2-Datacenter -Version “latest”
        16. $vm = Add-AzureRmVMNetworkInterface -VM $vm -Id $nic.Id
        $osDiskUri = $storageAcc.PrimaryEndpoints.Blob.ToString() + “vhds/WindowsVMosDisk.vhd”
        17. $vm = Set-AzureRmVMOSDisk -VM $vm -Name “windowsvmosdisk” -VhdUri $osDiskUri -CreateOption fromImage
        18. New-AzureRmVM -ResourceGroupName $rgName -Location $locName -VM $vm

  • Azure CLI 2.0
    • az login
    • az account list
    • az vm list
    • az group create -n "rg1" -l "east us"
    • az group list
    • az group export -n rg1
    • az vm user update -g rg -n vm1 -u LinuxAdmin -p NewPassw0rd2
    • az vm availability-set create
    • az network vnet create --resource-group "rg1" --name "cliNet" --address-prefix "10.0.0.0/16" --subnet-name "subnet1" --subnet-prefix "10.0.1.0/24"
    • az network dns record-set ns show --resource-group rg1 --zone-name abc.com --name "@"



Azure Storage

  • Azure Storage services:
    • Blob: Scalable object storage for documents, videos, pictures, and unstructured text or binary data.
      • Choose from Hot, Cool, or Archive tiers.
      • Binary Large Objects (Blobs), Unstructured object data
      • Page (VHD): Used for VM VHDs, including unmanaged disks. Page blobs are optimized for writes at random locations within a blob.
      • Block (S3): Optimized for object streaming, unstructured data
      • Append: Optimized for append operations, e.g., logging.
    • Managed Disks: Persistent, secured disks that support simple and scalable virtual machine deployment. Designed for 99.999% availability. Choose Premium (SSD) disks for low latency and high throughput.
    • Queue: Store and retrieve messages up to 64 KB in size. Used to store messages to be processed asynchronously.
      • Pub/sub messaging data
      • Reliable Message Workflow processing
    • File: Fully managed file shares in the cloud, accessible via standard Server Message Block (SMB) protocol. Enables sharing files between applications using Windows APIs or REST API.
    • Table: Table storage provides a NoSQL Key-Value semi-structured store for massive scale structured data.
  • Data Redundancy/Replication:
    • Locally Redundant Storage (LRS): Designed to provide at least 99.999999999% (11 9's) durability of objects over a given year by keeping multiple copies of your data in one data center.
      • 3 copies within a single data center in a region
      • For premium storage account, you would be limited to LRS only.
    • Zone-Redundant Storage (ZRS): Designed to provide at least 99.9999999999% (12 9's) durability of objects over a given year by keeping multiple copies of your data across multiple datacenters or across regions.
      • 3 copies within 2-3 data centers in a region
      • Block blobs only
      • Available only during Service Account creation
    • Geo-Redundant Storage (GRS): Designed to provide at least 99.99999999999999 % (16 9’s) durability of objects over a given year by keeping multiple copies of the data in one region, and asynchronously replicating to a second region.
      • 3 copies in a primary region and 3 copies in a secondary region.
      • Can access data in secondary region only after fail-over
    • Read-Access Geo-Redundant Storage (RA-GRS): Designed to provide at least 99.99999999999999% (16 9's) durability of objects over a given year and 99.99% read availability by allowing read access from the second region used for GRS.
      • 3 copies in a primary region and 3 copies in a secondary region.
      • Provides you read-only access onto secondary region storage copies
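The durability figures above translate directly into an expected annual loss probability per object; a purely arithmetic sketch:

```python
def loss_probability(nines):
    """Annual object-loss probability implied by N nines of durability."""
    return 10.0 ** -nines

for option, nines in [("LRS", 11), ("ZRS", 12), ("GRS/RA-GRS", 16)]:
    print(f"{option}: ~{loss_probability(nines):.0e} chance of loss per object-year")
```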
  • Disks used by VMs:
    • Operating Systems Disks (Automatic):
      • Every virtual machine has one attached operating system disk. It’s registered as a SATA drive.
      • Labeled as the C: drive for Windows and /dev/sda for Linux virtual machines.
      • This disk has a maximum capacity of 2048 gigabytes (2 TB).
    • Temporary Disks (Automatic):
      • Each VM contains a temporary disk that is automatically created for you.
      • Don’t store data on the temporary disk. The temporary disk provides short-term storage for applications and processes and is intended to only store data such as page or swap files.
      • Data on the temporary disk may be lost during a maintenance event or when you redeploy a VM. Whereas, during a standard reboot of the VM, the data on the temporary drive should persist.
      • On Windows virtual machines, the temporary disk is labeled as the D: drive by default and is used for storing pagefile.sys. The size of the temporary disk varies, based on the size of the virtual machine.
      • On Linux virtual machines, the disk is typically /dev/sdb and is formatted and mounted to /mnt by the Azure Linux Agent.
    • Data Disks (User Defined):
      • A data disk is a VHD that’s attached to a virtual machine to store application data, or other data you need to keep.
      • Data disks are registered as SCSI drives and are labeled with a letter that you choose.
      • Each data disk has a maximum capacity of 4095 GB (4 TB).
      • You can add data disks to a virtual machine at any time, by attaching the disk to the virtual machine.
      • Data disks are stored in a BLOB in an Azure storage account.
  • Disk Types: Azure offers three types of storage accounts: General Purpose v2, General Purpose v1, and Blob Storage. The storage account type determines eligibility for certain storage services and features, and each is priced differently.
    • Each subscription is allowed 200 storage accounts and 500 TB per account.
    • Premium SSD disks: Backed by SSDs; deliver high-performance, low-latency disk support for VMs running I/O-intensive workloads. Typically, Premium SSD disks can be used with VM sizes that include an “s” in the series name; for example, the Dsv3-series supports Premium SSD disks while the Dv3-series does not.
      • SSD-based, high-performance, low-latency disk support for VMs running I/O-intensive workloads or hosting mission-critical production environments.
      • Great for I/O intensive workloads such as production and performance sensitive workloads, for example Dynamics, Exchange Server, SQL Server, SharePoint Server
      • Max Throughput per Disk = 250 MiB/s
      • Max IOPS per Disk = 7500 IOPS
      • Requires a DS-, DSv2-, GS-, or FS-series VM
      • Available in 3 sizes (128 GB, 512 GB, 1024 GB)
      • Not available in all regions
      • Page (VM) blobs only
      • Pre-configured disk size
    • Standard SSD disks (Preview): Standard SSD disks are designed to address the same kind of workloads as Standard HDD disks, but offer more consistent performance and reliability than HDD.
      • Standard SSD disks combine elements of Premium SSD disks and Standard HDD disks to form a cost-effective solution best suited for applications like web servers that do not need high IOPS on disks.
      • Where available, Standard SSD disks are the recommended deployment option for most workloads.
      • Standard SSD disks are only available as Managed Disks, and while in preview are only available in select regions and with the locally redundant storage (LRS) resiliency type.
      • Suitable for Web servers, lightly used enterprise applications and Dev/Test
      • Max Throughput per Disk = 60 MiB/s
      • Max IOPS per Disk = 500 IOPS
    • Standard HDD disks: Backed by HDDs; deliver cost-effective storage. Standard HDD storage can be replicated locally in one datacenter, or be geo-redundant with primary and secondary data centers.
      • HDD-based, cost-effective disk support for Dev/Test virtual machines, backups, and non-critical, infrequently accessed data
      • Magnetic-based disks with low IOPS and moderate latency
      • Default for some instance sizes; others use SSD
      • Quoted IOPS values represent maximums
      • Can use any redundancy option
      • General Purpose: Blob, Table, Queue, File
      • Blob Storage: Hot access tier, Cool access tier
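The per-disk ceilings above lend themselves to a simple tier check. The sketch below is illustrative only, using the IOPS/throughput limits quoted in these notes; the function name is hypothetical, not an Azure API:

```python
# Illustrative tier picker using the per-disk ceilings from the notes above:
# Standard SSD = 500 IOPS / 60 MiB/s, Premium SSD = 7500 IOPS / 250 MiB/s.
def pick_disk_tier(required_iops, required_mibps):
    """Return the lowest tier whose single-disk limits cover the workload,
    or None if the workload needs striping across multiple disks."""
    tiers = [
        ("Standard SSD", 500, 60),
        ("Premium SSD", 7500, 250),
    ]
    for name, max_iops, max_mibps in tiers:
        if required_iops <= max_iops and required_mibps <= max_mibps:
            return name
    return None
```

For example, a lightly used web server needing 300 IOPS fits Standard SSD, while a SQL Server workload at 5,000 IOPS requires Premium SSD.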
    • Managed: Managed Disks handle storage account creation and management in the background for you, ensuring that you do not have to worry about the scalability limits of the storage account.
      • You simply specify the disk size and the performance tier (Standard/Premium), and Azure creates and manages the disk for you.
      • As you add disks or scale the VM up and down, you don’t have to worry about the storage being used.
      • RBAC, tags and Locks: Disk level.
      • Replication: LRS
      • Encryption: ADE, SSE on by default
      • Simple: Abstracts storage accounts away from the customer
      • Granular access control: Top-level ARM resource; apply Azure RBAC
      • Storage account limits don’t apply: Enables scaling free of storage account limitations
      • Secure: No public access to the underlying blob
      • Supports VM-level disk encryption: Secures data at rest
      • Better storage resiliency: Prevents single points of failure due to storage
    • Unmanaged disk: Unmanaged disks are the traditional type of disks that have been used by VMs.
      •  With these disks, you create your own storage account and specify that storage account when you create the disk.
      • Make sure you don’t put too many disks in the same storage account, because you could exceed the scalability targets of the storage account (20,000 IOPS, for example), resulting in the VMs being throttled.
      • With unmanaged disks, you have to figure out how to maximize the use of one or more storage accounts to get the best performance out of your VMs.
      • If you use unmanaged standard disks (HDD), you should enable TRIM. TRIM discards unused blocks on the disk so you are only billed for storage that you are actually using. This can save on costs if you create large files and then delete them.
        • fsutil behavior query DisableDeleteNotify   (If shows 1, then set it 0)
        • fsutil behavior set DisableDeleteNotify 0
      • RBAC, tags and Locks: Storage account level.
      • Replication: LRS, GRS, RA-GRS
      • Encryption: ADE, SSE
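The 20,000 IOPS per storage account target above implies a hard cap on how many unmanaged disks you should co-locate. A rough planning sketch, using the 500 IOPS per standard disk figure quoted earlier in these notes:

```python
# Back-of-the-envelope planning for unmanaged disks, using limits quoted in
# these notes: ~20,000 IOPS per storage account, 500 IOPS per standard disk.
ACCOUNT_IOPS_TARGET = 20_000
STANDARD_DISK_IOPS = 500

def disks_per_account():
    """Fully loaded standard disks that fit under one account's IOPS target."""
    return ACCOUNT_IOPS_TARGET // STANDARD_DISK_IOPS

def accounts_needed(total_disks):
    """Storage accounts required to host total_disks without throttling."""
    return -(-total_disks // disks_per_account())  # ceiling division
```

That works out to at most 40 fully loaded standard disks per account; 100 such disks would need 3 accounts, comfortably under the 200-accounts-per-subscription limit.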
  • Virtual Network Service Endpoints:
    • Associate a storage account with a VNet
    • Limit access to storage account from VNet
    • NAT rules to support on-premises connections
  • Host Caching:
    • None: Use on IaaS Domain Controllers (DCs). Good for random I/O
    • Read only: Write-through. Stored on disk and in RAM of the physical host OS
    • Read/Write: Write-back. Stored in memory of the physical host OS
  • Azure Import/Export: Azure Import/Export service enables you to transfer large amounts of data to and from Azure using hard disk drives.
    • Azure Import/Export service is used to securely import large amounts of data to Azure Blob storage and Azure Files by shipping disk drives to an Azure datacenter.
    • Import: Data from one or more disks can be imported either to Azure Blob storage or Azure Files.
    • Export: This service can also be used to transfer data from Azure Blob storage to disk drives and ship to your on-premises sites.
    • Migrating data to the cloud: Move large amounts of data to Azure quickly and cost effectively.
    • Content distribution: Quickly send data to your customer sites.
    • Backup: Take backups of your on-premises data to store in Azure blob storage.
    • Data recovery: Recover large amounts of data stored in blob storage and have it delivered to your on-premises location.
  • Storage Explorer Tools:
    • Azure Storage Explorer
    • CloudBerry
  • Basic Facts:
    • You can upload an existing VHD file from your on-premises data center to Azure with Azure Storage Explorer or the Add-AzureRmVhd cmdlet.
    • Storage Spaces can be used to combine multiple disks into a single larger, high-performance volume.
  • Databases:
    • Azure SQL Database (AWS RDS):
      • OLTP engine
      • Active geo-replication
      • Automatic backups
      • Database Transaction Unit (DTU) performance metric
      • Elastic pools
    • Azure SQL Database Managed Instance:
      • Greater access to the underlying virtual machine
      • Graphical user management
      • SQL Server Agent
    • Azure SQL Data Warehouse (AWS Redshift):
      • Massively parallel processing OLAP engine
      • Data Warehouse Unit (DWU) performance metric
    • MySQL Database (ClearDB):
      • MS partner hosted option (Aventra)
      • Can be deployed alone or behind Azure App Service app
    • Azure Database for MySQL:
      • Native MySQL option in Azure
      • MySQL Community edition
      • No-downtime scaling
    • Azure Database for PostgreSQL:
      • No downtime scaling
      • Compute Units
    • Azure CosmosDB (AWS DynamoDB):
      • Multi-model NoSQL database system
      • SQL (Document DB)
      • Table (key-value)
      • MongoDB
      • Gremlin (graphs)
      • Turnkey global distribution
    • Azure HDInsight (AWS EMR):
      • Big data
      • Managed Hadoop clusters
      • HDFS distributed storages
      • MapReduce distributed processing
      • Pig scripting
      • Hive query
    • Azure Redis Cache (AWS ElastiCache):
      • Super fast in-memory key-value database
      • Can persist data
      • Good choice for rapidly changing dynamic data
      • Session state
  • Windows tools:
    • diskmgmt.msc     (Run > diskmgmt)
    • sysprep                 (C:\Windows\System32\Sysprep) 

Azure basics

  • Cloud Infrastructure Models:
    • The Cloud: A model for providing infrastructure, platform, and application services on-demand to consumers.
    • Public Cloud: All services exist on the Internet; multi-tenancy. e.g. Azure
    • Private Cloud: All services exist in a private network; complex System Center setup. e.g. Azure Stack.
    • Hybrid Cloud: Secure, private connection between public and private clouds via VPN or ExpressRoute.
  • Cloud Delivery Model:
    • IaaS: Infrastructure as a Service; provides access to compute, storage, and networking. Targeted at sysadmins. e.g. Azure Virtual Machines.
      • Customer manages: Applications, Data, Runtime, Middleware, OS
      • Vendor manages: Virtualization, Servers, Storage, Networking
      • It includes:
        • Data center physical plant/building
        • Networking firewalls/security
        • Servers and storage
    • PaaS: Platform as a Service; provides the ability to develop applications using web-based tools. Targeted at developers. e.g. Azure App Service
      • Customer manages: Applications, Data
      • Vendor manages: Runtime, Middleware, OS, Virtualization, Servers, Storage, Networking
      • It includes IaaS plus the following:
        • Operating system
        • Development tools, database management, business analytics
    • SaaS: Software as a Service; provides access to complete applications running in the cloud. Targeted at end users. e.g. Office 365
      • It includes IaaS, PaaS and Hosted apps
    • Package or higher level services:
      • DBaaS: Database as a Service
      • IDaaS: Identity as a Service
      • DRaaS: Disaster Recovery as a Service
  • Azure global infrastructure:
    • Regions: A region is a set of datacenters deployed within a latency-defined perimeter and connected through a dedicated regional low-latency network.
    • Geographies: A geography is a discrete market, typically containing two or more regions, that preserves data residency and compliance boundaries.
      • Azure regions are organized into geographies. An Azure geography ensures that data residency, sovereignty, compliance, and resiliency requirements are honored within geographical boundaries.
      • Geographies allow customers with specific data-residency and compliance needs to keep their data and applications close. Geographies are fault-tolerant to withstand complete region failure through their connection to our dedicated high-capacity networking infrastructure.
    • Availability Zones: Availability Zones are physically separate locations within an Azure region. Each Availability Zone is made up of one or more datacenters equipped with independent power, cooling, and networking.
      • Availability Zones allow customers to run mission-critical applications with high availability and low-latency replication.
      • Availability Zones are unique physical locations with independent power, network, and cooling. Each Availability Zone is comprised of one or more datacenters and houses infrastructure to support highly available, mission critical applications. Availability Zones are tolerant to datacenter failures through redundancy and logical isolation of services.
      • Azure Availability Set is a group of virtual machines that are deployed across fault domains and update domains. Availability sets make sure that your application is not affected by single points of failure, like the network switch or the power unit of a rack of servers.
      • Fault Domain: Virtual machines in the same fault domain share a common power source and physical network switch.
      • Update Domain: Virtual machines in the same update domain will be restarted together during planned maintenance. Azure never restarts more than one update domain at a time.
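Azure spreads the VMs in an availability set round-robin across these domains. A minimal sketch, assuming 3 fault domains and 5 update domains (common defaults used here only as assumptions; the real counts are configurable):

```python
# Round-robin placement of availability-set VMs across fault domains (FDs)
# and update domains (UDs). 3 FDs / 5 UDs are assumed defaults for this sketch.
FAULT_DOMAINS = 3
UPDATE_DOMAINS = 5

def place(vm_index):
    """Return (fault_domain, update_domain) for the nth VM added to the set."""
    return vm_index % FAULT_DOMAINS, vm_index % UPDATE_DOMAINS
```

The first three VMs land in distinct fault domains, so a single rack-level power or switch failure cannot take down all of them; likewise, the first five VMs land in distinct update domains, so planned maintenance restarts at most one of them at a time.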
    • Fabric Controller: The fabric controller is a distributed application with many responsibilities. It allocates services, monitors the health of the server and the services running on it, and heals servers when they fail.
    • Front end: Each instance of the fabric controller is connected to another set of servers running cloud orchestration software, typically known as a front end. The front end hosts the web services, RESTful APIs, and internal Azure databases used for all functions the cloud performs.
  • Azure Resource Manager (ARM) Deployment Model: Azure Resource Manager contains a network provider that provides advanced control and network management capabilities.
    • With Azure Resource Manager, you can benefit from:
      •  Faster configuration due to resources being grouped.
      • Easier management.
      • Customization and deployment based on JavaScript Object Notation (JSON) templates.
      • Networking resources such as IP addresses, DNS settings, or NICs are managed independently and can be assigned to VMs, Azure load balancers, or application gateways.
    • Resource Group: A container to organize your resources in Azure.
      • When provisioning Azure services, you can group related services that exist in multiple regions to more easily manage those services. Resource groups are logical groups and can therefore span multiple regions.
  • Azure Service Management (ASM) deployment model: The classic (legacy) deployment model that predates Azure Resource Manager.
  • Azure Storage Account provides a unique namespace to store and access your Azure storage data objects. All objects in an Azure Storage Account are billed together as a group. By default, the data in your account is available only to you, the account owner.
    • Storage account names must be unique across all existing storage account names in Azure. They must be 3-24 characters long and can only contain lowercase letters and numbers.
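The naming rule above is easy to check locally before attempting to create the account. A small sketch (global uniqueness across Azure still requires a live lookup, which this deliberately does not do):

```python
import re

# Storage account naming rule from the notes above: 3-24 characters,
# lowercase letters and digits only. Global uniqueness is NOT checked here.
ACCOUNT_NAME_RE = re.compile(r"^[a-z0-9]{3,24}$")

def is_valid_account_name(name):
    return bool(ACCOUNT_NAME_RE.match(name))
```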

AWS Well Architected Framework and White papers

(I) Overview of AWS:

  • Cloud Computing: It’s the on-demand delivery of IT resources and applications via the Internet with pay-as-you-go pricing. Cloud computing provides a simple way to access servers, storage, databases, and a broad set of application services over the Internet. Cloud computing providers such as AWS own and maintain the network-connected hardware required for these application services, while you provision and use what you need via a web application.
    • Types of Cloud Computing:
      • Infrastructure as a Service (IaaS): Contains the basic building blocks for cloud IT and typically provides access to computers (virtual or on dedicated hardware), data storage space, and networking features. e.g. Amazon EC2, Windows Azure, Google Compute Engine, Rackspace.
      • Platform as a Service (PaaS): Removes the need for organizations to manage the underlying infrastructure (usually hardware and operating systems) and allows you to focus on the deployment and management of your applications. e.g. AWS RDS, Elastic Beanstalk, Windows Azure, Google App Engine.
      • Software as a Service (SaaS): Provides you with a completed product that is run and managed by the service provider. In most cases, people referring to Software as a Service mean end-user applications. e.g. Gmail, Microsoft Office 365, AWS DynamoDB and S3.
    • Cloud Deployment Models:
      • Cloud: A cloud-based application is fully deployed in the cloud and all parts of the application run in the cloud.
      • Hybrid: A hybrid deployment is a way to connect infrastructure and applications between cloud-based resources and existing on-premises resources.
      • On-premises (private cloud): Deploying resources on-premises, using virtualization and resource management tools, is sometimes called “private cloud”.
  • Advantages:
    • Trade Capital Expenses for variable expenses.
    • Benefit from massive economies of scale.
    • Stop guessing about capacity.
    • Increase speed and agility.
    • Stop spending money running and maintaining data centers.
    • Go global in minutes.
  • Security and Compliance:
    • State of the art electronic surveillance and multi factor access control systems.
    • Staffed 24x7 by security guards
    • Access is authorized on a “least privilege” basis
    • SOC 1/SSAE 16/ISAE 3402 (formerly SAS 70 Type II), SOC 2, SOC 3
    • FISMA, DIACAP, FedRAMP, PCI DSS Level 1, ISO 27001, ISO 9001, ITAR, FIPS 140-2
    • HIPAA, Cloud Security Alliance (CSA), Motion Picture Association of America (MPAA)

(II) Overview of Security Process:

  • AWS Shared Security Responsibilities Model:
    • It describes what AWS is responsible for and what the customer is responsible for when it comes to security. Amazon is responsible for securing the underlying infrastructure that supports the cloud (i.e. security of the cloud), and you are responsible for anything you put in the cloud or connect to the cloud (i.e. security in the cloud).
      • Infrastructure Services:
        • This includes AWS services like VPC, EC2, EBS and Auto Scaling.
        • Amazon is responsible for security of the cloud.
          • The Global Infrastructure (Regions, AZs, Edge Locations)
          • The Foundation Services (Compute, Storage, Database, Networking)
        • The customer is responsible for security in the cloud.
          • Customer Data
          • Platforms and Applications
          • OS and  Network configs (patching, security groups, network ACLs)
          • Customer IAM (password, access keys, permissions)
      • Container Services:
        • These include services like RDS, ECS and EMR.
        • AWS is responsible for:
          • The Global Infrastructure (Regions, AZs, Edge Locations)
          • The Foundation Services (Compute, Storage, Database, Networking)
          • Platforms and Applications
          • OS and network configs
        • The customer is responsible for:
          • Customer Data
          • Customer IAM (password, access keys, permissions)
      • Abstracted Services:
        • These include services like S3, DynamoDB and Lambda.
        • AWS is responsible for:
          • The Global Infrastructure (Regions, AZs, Edge Locations)
          • The Foundation Services (Compute, Storage, Database, Networking)
          • Platforms and Applications
          • OS and network configs
          • Network traffic protection
        • The customer is responsible for:
          • Customer IAM
          • Data in transit and client-side
        • Additional Services:
          • Data encryption
          • Data integrity
  • AWS Security Responsibilities:
    • Amazon is responsible for protecting the Compute, Storage, Database, Networking and Data Center facilities (i.e. Regions, Availability Zones, Edge Locations) that run all of the services in the AWS cloud.
    • AWS is also responsible for security configuration of its managed services such as RDS, DynamoDB, S3, Redshift, EMR, WorkSpaces.
  • Customer Security Responsibilities:
    • Customer is responsible for Customer Data, IAM, Platform, Applications,  Operating System, Network & Firewall Configuration, Client and Server Side Data Encryption, Network Traffic Protection.
    • IaaS offerings such as VPC and EC2 are completely under your control and require you to perform all of the necessary security configuration and management tasks.
    • For managed services (RDS, S3, DynamoDB), AWS is responsible for patching, antivirus, etc.; however, you are responsible for account management and user access. It’s recommended that MFA be implemented, that you connect to these services using SSL/TLS, and that API and user activity be logged using CloudTrail.
  • Storage Decommissioning:
    • When a storage device has reached the end of its useful life, AWS procedures include a decommissioning process that is designed to prevent customer data from being exposed to unauthorized individuals.
    • AWS uses the techniques detailed in DoD 5220.22-M (Department of Defense) or NIST 800-88, “Guidelines for Media Sanitization” (National Institute of Standards and Technology), to destroy data as part of the decommissioning process.
    • All decommissioned magnetic storage devices are degaussed and physically destroyed in accordance with industry-standard practices.
  • Network Security:
    • Transmission Protection: You can connect to AWS  services using HTTP and HTTPS. AWS also offers Amazon VPC which provides a private subnet within AWS cloud, and the ability to use an IPsec VPN connection between AWS VPC and your on-premises data center.
    • Amazon Corporate Segregation: Logically, the AWS public cloud network is segregated from the Amazon corporate network by means of a complex set of network security segregation devices.
  • Network Monitoring and Protection:
    • It protects from:
      • DDoS (Distributed Denial of Service)
      • Man in the middle attacks (MITM)
      • Port scanning
      • Packet sniffing by other tenants
      • IP Spoofing: AWS-controlled, host-based firewall infrastructure will not permit an instance to send traffic with a source IP or MAC address other than its own.
    • Unauthorized port scans by AWS customers on their or others’ EC2 instances are a violation of the AWS Acceptable Use Policy. You must request permission from AWS in advance to conduct vulnerability scans on your own EC2 instances (t2.micro or t2.small instance types are not allowed) as required to meet your specific compliance requirements.
  • AWS Credentials:
    • Passwords: Used for AWS root account or IAM user account login to the AWS Management Console. AWS passwords must be 6-128 characters long.
    • Multi-Factor Authentication (MFA): A six-digit single-use code required in addition to your password to log in to your AWS root account or IAM user account.
    • Access Keys: Used to digitally sign programmatic requests to AWS APIs (via the AWS SDK, CLI, or REST/Query APIs). An access key consists of an Access Key ID and a Secret Access Key.
    • Key Pairs: Used for SSH login to EC2 instances and for CloudFront signed URLs. A key pair is required to connect to an EC2 instance launched from a public AMI. They are 1024-bit SSH-2 RSA keys. You can have AWS generate a key pair automatically when you launch an EC2 instance, or upload your own before launching the instance.
    • X.509 Certificates: Used for digitally signed SOAP requests to AWS APIs (for S3) and SSL server certificate for HTTPS. You can have AWS create X.509 certificate and private key or you can upload your own certificate using Security Credentials page.
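To make “digitally signed requests” concrete, the sketch below shows the Signature Version 4 key-derivation chain that turns a secret access key into a per-day, per-region, per-service signing key. It covers only the derivation step; a real request also needs the canonical request and string-to-sign, which the SDKs handle for you (the example key and date values are hypothetical):

```python
import hashlib
import hmac

def sigv4_signing_key(secret_key, date_stamp, region, service):
    """Derive the SigV4 signing key via an HMAC-SHA256 chain over
    date, region, and service, terminated by the fixed string."""
    def sign(key, msg):
        return hmac.new(key, msg.encode(), hashlib.sha256).digest()
    k_date = sign(("AWS4" + secret_key).encode(), date_stamp)  # e.g. "20240115"
    k_region = sign(k_date, region)                            # e.g. "us-east-1"
    k_service = sign(k_region, service)                        # e.g. "s3"
    return sign(k_service, "aws4_request")
```

Because the secret key itself is never transmitted, AWS can recompute the same signing key server-side and verify the request signature.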
  • AWS Trusted Advisor: It analyzes your AWS environment and provides best practice recommendations in following five categories:
    • Security
    • Performance
    • Cost Optimization
    • Fault Tolerance
    • Service Limits
    • Available to all customers for access to seven core checks:
      • Security (security groups, IAM MFA on root account, EBS and RDS public snapshots)
      • Performance (service limits)
    • Available to Business and Enterprise support plans:
      • Access to full set of checks
      • Notifications (weekly updates)
      • Programmatic access (retrieve results from AWS Support API)
    • Trusted Advisor inspects your AWS environment and makes recommendations when opportunities may exist to save money, improve system performance or close security gaps.
    • It provides alerts on several of the most common security misconfigurations that can occur, including:
      • Not using MFA on your root AWS account.
      • Neglecting to create IAM accounts for your internal users
      • Allowing public access to S3 buckets
      • Leaving certain ports open that make you vulnerable to hacking and unauthorized access
      • Not turning on user activity logging (AWS CloudTrail)
  • Instance Isolation:
    • Different instances running on the same physical machines are isolated from each other via the Xen hypervisor. In addition, the AWS firewall resides within the hypervisor layer, between the physical network interface and the instance’s virtual interface.
    • All packets must pass through this layer, thus an instance’s neighbors have no more access to that instance than any other host on the Internet, and can be treated as if they are on separate physical hosts. The physical RAM is separated using a similar mechanism.
    • Customer instances have no access to raw disk devices, but instead are presented with virtualized disks. The AWS proprietary disk virtualization layer automatically resets every block of storage used by the customers, so that one customer’s data is never unintentionally exposed to another.
    • In addition, memory allocated to guests is scrubbed (set to zero) by the hypervisor when it is deallocated from a guest. The memory is not returned to the pool of free memory available for new allocations until the scrubbing is complete.
  • Guest Operating System: Virtual instances are completely controlled by the customer. You have full root or administrative access over accounts, services and applications. AWS doesn’t have any access rights to your instances or the guest OS.
    • Encryption of sensitive data is generally a good security practice, and AWS provides the ability to encrypt EBS volumes and their snapshots with AES-256.
    • In order to do this efficiently and with low latency, the EBS encryption feature is only available on EC2’s more powerful instance types (e.g. M3, C3, R3, G2).
  • Firewall: AWS EC2 provides a complete firewall solution, this mandatory inbound firewall is configured in a default deny-all mode and AWS EC2 customers must explicitly open the ports needed to allow inbound traffic. All ingress traffic is blocked and egress traffic is allowed by default.
  • Elastic Load Balancing: SSL termination on the load balancer is supported. Allows you to identify the originating IP address of a client connecting to your servers, whether you are using HTTPS or TCP load balancing.
  • Direct Connect: Bypass Internet service providers in your network path. You can procure rack space within the facility housing the AWS Direct Connect location and deploy your equipment nearby. Once deployed, you can connect this equipment to AWS Direct Connect using a cross-connect.
    • Using industry-standard 802.1Q VLANs, the dedicated connection can be partitioned into multiple virtual interfaces. This allows you to use the same connection to access public resources (e.g. S3 buckets) using public IPs, and private resources (e.g. EC2 instances in a VPC) using private IPs, while maintaining network separation between the public and private environments.

(III) AWS Risk and Compliance:

  • Risk: AWS management has developed a strategic business plan which includes risk identification and the implementation of controls to mitigate or manage risks. AWS management re-evaluates the strategic business plan at least biannually.
    • This process requires management to identify risks within its areas of responsibility and to implement appropriate measures designed to address those risks.
    • AWS Security regularly scans all Internet facing service endpoint IP addresses for vulnerability (these scans don’t include customer instances). AWS Security notifies the appropriate parties to re-mediate any identified vulnerabilities. In addition, external vulnerability threat assessments are performed regularly by independent security firms.
    • Findings and recommendations resulting from these assessments are categorized and delivered to AWS leadership. These scans are performed to maintain the health and viability of the underlying AWS infrastructure and are not meant to replace the customer’s own vulnerability scans required to meet their specific compliance requirements.
    • Customers can request permission to conduct scans of their cloud infrastructure as long as they are limited to the customer’s own instances and don’t violate the AWS Acceptable Use Policy.

(IV) Storage Options in the AWS cloud:

(V) Architecting for the AWS Cloud: Best Practices:

  • Business Benefits of Cloud:
    • Almost zero upfront infrastructure investment
    • Just-in-time Infrastructure
    • More efficient resource utilization
    • Usage-based costing
    • Reduced time to market
  • Technical Benefits of Cloud:
    • Automation – Scriptable infrastructure
    • Auto-scaling
    • Proactive scaling
    • More Efficient Development lifecycle
    • Improved Testability
    • Disaster Recovery and Business Continuity
    • Overflow the traffic to the cloud
  • Design For Failure:
    • Rule of thumb: Be a pessimist when designing architectures in the cloud, assume things will fail. In other words always design, implement and deploy for automated recovery from failure.
    • In particular, assume that your hardware or software will fail, outages will occur, some disaster will strike, traffic will increase.
  • Decouple Your Components:
    • The key is to build components that don’t have tight dependencies on each other, so that if one component were to die, sleep or remain busy for some reason, the other components in the system are built so as to continue to work as if no failure is happening.
    • In essence, loose coupling isolates the various layers and components of your application so that each component interacts asynchronously with the others and treats them as a black box.
  • Implement Elasticity:
    • The cloud brings a new concept of elasticity to your applications. Elasticity can be implemented in three ways:
      • Proactive Cyclic Scaling: Periodic scaling that occurs at fixed interval (daily – during working hours, weekly – weekdays, monthly, quarterly)
      • Proactive Event-based Scaling: Scaling just when you are expecting a big surge of traffic requests due to a scheduled business event (new product launch, marketing campaigns, black friday sale)
      • Auto-scaling based on demand: By using monitoring service, your system can send triggers to take appropriate actions so that it scales up or down based on metrics (cpu utilization of servers or network i/o for instance).
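The demand-based option reduces to a feedback loop over a monitored metric. A minimal sketch of the scaling decision (the thresholds, step size, and bounds below are illustrative assumptions, not AWS defaults):

```python
# Threshold-based scaling decision on average CPU, bounded by a min/max
# instance count. All thresholds here are illustrative assumptions.
def scaling_decision(avg_cpu, current, minimum=1, maximum=10,
                     scale_out_above=70.0, scale_in_below=30.0):
    """Return the desired instance count for the next evaluation period."""
    if avg_cpu > scale_out_above and current < maximum:
        return current + 1          # scale out one step
    if avg_cpu < scale_in_below and current > minimum:
        return current - 1          # scale in one step
    return current                  # hold steady
```

In practice a monitoring service (e.g. CloudWatch alarms) feeds the metric in and a scaling group applies the resulting capacity change.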

(VI) AWS Well-Architected Framework:

  • Well-Architected Framework is a set of questions that you can use to evaluate how well your architecture is aligned to AWS best practices. It consists of five pillars: Operational Excellence, Security, Reliability, Performance Efficiency, and Cost Optimization.
  • General Design Principles:
    • Stop guessing your capacity needs.
    • Test systems at production scale.
    • Automate to make architectural experimentation easier.
    • Allow for evolutionary architectures.
    • Data-Driven architectures
    • Improve through game days (such as black Friday)
  • (1) Operational Excellence: It includes the operational practices and procedures used to manage production workloads. In addition, it covers how planned changes are executed, as well as responses to unexpected operational events.
    • Design Principles:
      • Perform operations with code
      • Annotate documentation
      • Make frequent, small, reversible changes
      • Refine operations procedures frequently
      • Anticipate failure
      • Learn from operational failures
    • Best Practices:
      • Prepare: AWS Config and rules can be used to create standards for workloads and to determine if environments are compliant with those standards before being put into production. 
        • Operational Priorities
        • Design for Operations
        • Operational Readiness
      • Operate: CloudWatch allows you to watch operational health of a workload.
        • Understanding Operational Health
        • Responding to Events
      • Evolve: Elasticsearch Service (Amazon ES) allows you to analyze your log data to gain actionable insight quickly and securely.
        • Learning from Experience
        • Share Learnings
  • (2) Security: It includes the ability to protect information, systems and assets while delivering business value through risk assessments and mitigation strategies.
    • Design Principles:
      • Implement a strong identity foundation
      • Enable traceability
      • Apply security at all layers
      • Automate security best practices
      • Protect data in transit and at rest
      • Prepare for security events
    • Best Practices:
      • Identity and Access management
      • Detective Controls
      • Infrastructure Protection
      • Data Protection
      • Incident Response
  • (3) Reliability: It covers the ability of a system to recover from service or infrastructure outages/disruptions, as well as the ability to dynamically acquire computing resources to meet demand and to mitigate disruptions such as misconfigurations or transient network issues.
    • Design Principles:
      • Test recovery procedures
      • Automatically recover from failure
      • Scale horizontally to increase aggregate system availability
      • Stop guessing capacity
      • Manage change in automation
    • Best Practices:
      • Foundations
      • Change Management
      • Failure Management
  • (4) Performance Efficiency: It focuses on how to use computing resources efficiently to meet your requirements and how to maintain that efficiency as demand changes and technology evolves.
    • Design Principles:
      • Democratize advanced technologies
      • Go global in minutes
      • Use serverless architectures
      • Experiment more often
      • Mechanical sympathy
    • Best Practices:
      • Selection
        • Compute
        • Storage
        • Database
        • Network
      • Review
      • Monitoring
      • Trade-Offs
  • (5) Cost Optimization: The Cost Optimization pillar includes the ability to avoid or eliminate unneeded cost or sub-optimal resources.
    • Design Principles:
      • Adopt a consumption model
      • Measure overall efficiency
      • Stop spending money on data center operations
      • Analyze and attribute expenditure
      • Use managed services to reduce cost of ownership
    • Best Practices:
      • Cost-Effective Resources:
        • Appropriately Provisioned
        • Right Sizing
        • Purchasing Options
        • Geographic Selection
        • Managed Services
      • Matching Supply and Demand:
        • Demand-Based
        • Buffer-Based
        • Time-Based
      • Expenditure Awareness:
        • Stakeholders
        • Visibility and Controls
        • Cost Attribution
        • Tagging
        • Entity Lifecycle Tracking
      • Optimizing Over Time:
        • Measure, Monitor, and Improve
        • Staying Ever Green
Posted in aws, cloud

AWS Monitoring, Management and Deployment Services

  1. CloudWatch (Monitor Resources and Applications):
    • It's AWS's proprietary, integrated performance monitoring service. It allows for comprehensive and granular monitoring of all AWS-provisioned resources, with the added ability to trigger alarms/events based on metric thresholds.
    • It monitors operational and performance metrics for your AWS services (EC2, EBS, ELB, and S3) and your applications.
    • You monitor your environment by configuring and viewing CloudWatch metrics.
    • Alarms can be created to trigger alerts based on thresholds you set on metrics.
    • You can create alarms that stop and start an instance based on status-check results.
    • Auto Scaling heavily utilizes CloudWatch, relying on thresholds and alarms to trigger the addition or removal of instances from an Auto Scaling group.
    • Metrics are specific to each AWS service or resource, and such metrics include:
      • EC2 per-instance metrics: CPUUtilization, CPUCreditUsage
      • S3 Metrics: NumberOfObjects, BucketSizeBytes
      • ELB Metrics: RequestCount, UnhealthyHostCount
    • Detailed vs Basic level monitoring:
      • Basic/Standard (5 min): With basic monitoring of EC2 instances, the CPU load, disk I/O, and network I/O metrics are collected at 5-minute intervals and stored for 2 weeks.
      • Detailed (1 min): Data is available in 1-minute periods for an additional fee.
    • CloudWatch EC2 monitoring:
      • System (hypervisor) status checks: Things that are outside of our control.
        • Loss of network connectivity or system power.
        • Hardware or Software issues on physical host
        • How to solve: Generally, restarting the instance will fix the issue. This causes the instance to launch on a different physical hardware device.
      • Instance status checks: Software issues that we control.
        • Failed system status checks.
        • Misconfigured networking or startup configuration
        • Exhausted memory, corrupted filesystem or incompatible kernel
        • How to solve: Generally fixed by a reboot or by resolving the file system or configuration issue.
      • Default metrics: CloudWatch automatically monitors metrics that can be viewed at the host level (not the software level), such as:
        • CPU Utilization, DiskReadOps, DiskWriteOps, Network In/Out, StatusCheckFailed, Instance/System.
      • Custom metrics: OS-level metrics that require a third-party script (Perl, provided by AWS) to be installed:
        • Memory utilization, memory used and available.
        • Disk space utilization, disk space used and available.
        • Disk swap utilization
    • CloudWatch Alarms: Allow you to set alarms that notify you when particular thresholds are hit. For example, you can set up an alarm to be triggered whenever the CPUUtilization metric on an EC2 instance goes above 70%.
      • Alarms can be used to trigger other events in AWS, like publishing to an SNS topic or triggering Auto Scaling.
      • Using CloudWatch alarm actions, you can create alarms that automatically stop, terminate, reboot or recover your EC2 instances.
      • You can use stop or terminate actions to help you save money when you no longer need an instance to be running.
      • You can use the reboot and recover actions to automatically reboot those instances or recover them onto new hardware if a system impairment occurs.
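As a sketch, the 70% CPUUtilization example above maps onto CloudWatch's PutMetricAlarm API roughly like this, shown as a plain parameter dict (with boto3 you would pass it to `client.put_metric_alarm(**params)`); the instance ID and SNS topic ARN are hypothetical:

```python
import json

# Sketch of PutMetricAlarm parameters: notify a (hypothetical) SNS
# topic when average CPUUtilization on one instance stays above 70%
# for two consecutive 5-minute periods. Swapping the AlarmActions ARN
# for the built-in "arn:aws:automate:<region>:ec2:stop" action would
# stop the instance instead, as described above.
params = {
    "AlarmName": "high-cpu",
    "Namespace": "AWS/EC2",
    "MetricName": "CPUUtilization",
    "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    "Statistic": "Average",
    "Period": 300,                 # seconds per evaluation period
    "EvaluationPeriods": 2,
    "Threshold": 70.0,
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
}
print(json.dumps(params, indent=2))
```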
    • CloudWatch Events: Helps you to respond to state changes in your AWS resources. When your resources change state they automatically send events into an event stream. You can create rules that match selected events in the stream and route them to targets to take action. You can also use rules to take action on a pre-determined schedule. For example, you can configure rules to:
      • Automatically invoke a Lambda function to update DNS entries when an event notifies you that an EC2 instance enters the Running state
      • Direct specific API records from CloudTrail to a Kinesis stream for detailed analysis of potential security or availability risks
      • Take a snapshot of an EBS volume on a schedule
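As a sketch, the rule for the first example above would use an event pattern like the following; the matcher function is only a simplified illustration of how a pattern selects events (real pattern matching supports more operators, such as prefix and numeric matching), and the instance ID is hypothetical:

```python
# Pattern for a rule matching "EC2 instance enters the Running state".
pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["EC2 Instance State-change Notification"],
    "detail": {"state": ["running"]},
}

# A sample event the rule would match (instance ID is hypothetical).
event = {
    "source": "aws.ec2",
    "detail-type": "EC2 Instance State-change Notification",
    "detail": {"instance-id": "i-0123456789abcdef0", "state": "running"},
}

def matches(pattern, event):
    """Simplified matcher: every pattern field must appear in the event
    with a value taken from the pattern's allowed list."""
    for key, allowed in pattern.items():
        value = event.get(key)
        if isinstance(allowed, dict):
            if not isinstance(value, dict) or not matches(allowed, value):
                return False
        elif value not in allowed:
            return False
    return True

print(matches(pattern, event))  # True
```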
    • CloudWatch Logs: Helps you aggregate, monitor, and store logs from EC2 instances, CloudTrail, and other sources. For example:
      • Logs are encrypted at rest and in transit.
      • You can check CloudWatch logs for error keywords, create an alarm and then restart the instance.
      • Monitor HTTP response codes in Apache logs
      • Receive alarms for errors in kernel logs
      • Count exceptions in application logs
      • CloudWatch Logs Agent:
        • A plug-in to the AWS CLI that pushes log data to CloudWatch Logs
        • A script (daemon) that initiates process to push data to CloudWatch Logs
        • A cron job that ensures that the daemon is always running
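As a sketch of what that agent's configuration looks like, a minimal `/etc/awslogs/awslogs.conf` shipping an Apache access log (to match the HTTP response code example above) might read as follows; the log group name is hypothetical and exact file paths vary by distribution:

```ini
[general]
state_file = /var/lib/awslogs/agent-state

[/var/log/httpd/access_log]
file = /var/log/httpd/access_log
log_group_name = apache/access
log_stream_name = {instance_id}
datetime_format = %d/%b/%Y:%H:%M:%S
```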
    • Retention period:
      • Datapoints at less than 1-minute resolution are available for 3 hours
      • 1-minute datapoints are available for 15 days
      • 5-minute datapoints are available for 63 days
      • 1-hour datapoints are available for 455 days (15 months)
    • VPC Flow Logs: Allow you to collect information about the IP traffic going to and from network interfaces in your VPC.
      • VPC Flow Log data is stored in a log group in CloudWatch and can be accessed from Logs in CloudWatch.
      • Flow logs can be created on:
        • VPC
        • Subnet (i.e. including all network interfaces in it)
        • Network interface; each interface has its own unique log stream.
      • The logs can be set to capture Accepted, Rejected, or All traffic.
      • Flow logs are not captured in real time; data is captured in an approximately 10-minute window and then published.
      • They can be used to troubleshoot why certain traffic is not reaching an EC2 instance.
      • A VPC flow log record includes the specific 5-tuple of the network traffic:
        1. Source IP address and port number
        2. Destination IP address and port number
        3. Protocol
      • Following traffic is not captured by VPC Flow logs:
        • DNS traffic: Traffic between EC2 instances and AWS DNS server
        • DHCP traffic
        • Instance metadata (169.254.169.254) requests
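A minimal sketch of parsing one default-format (v2) flow log record, assuming the documented field order (version, account ID, interface ID, the 5-tuple fields, packet/byte counts, start/end timestamps, action, and log status); the interface ID and addresses below are sample values:

```python
# Field order of a default-format (v2) VPC flow log record.
FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

# A sample record: accepted SSH traffic (protocol 6 is TCP, port 22).
sample = ("2 123456789010 eni-0a1b2c3d 172.31.16.139 172.31.16.21 "
          "20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK")

record = dict(zip(FIELDS, sample.split()))

# The 5-tuple described in the notes above:
five_tuple = (record["srcaddr"], record["srcport"],
              record["dstaddr"], record["dstport"], record["protocol"])
print(five_tuple)
```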
  2. CloudTrail (Track User Activity and API Usage):
    • It's an auditing service for compliance, which logs all API calls made via the AWS Console, AWS CLI, or SDKs to your AWS resources. It provides centralized logging (stored in S3) so that each action taken in our environment is logged and stored for later use if needed.
    • With CloudTrail, you can view events for your AWS account and create a trail to retain a record of these events. With a trail, you can also create event metrics, trigger alerts, and create event workflows.
    • The recorded logs include the following information:
      • start time of the AWS API call
      • identity of the user
      • source IP address
      • request parameters
      • response elements returned by the service.
    • CloudTrail logs are placed into a designated S3 bucket and are encrypted by default using SSE-S3 (AES-256) keys, but you may use an SSE-KMS key as well. In addition, you can apply S3 lifecycle rules to archive CloudTrail logs into Glacier or delete them. You may also set up SNS notifications about log delivery and validation.
    • CloudTrail logs help address security concerns by allowing you to view what actions users on your AWS account have performed.
    • Since AWS is just one big API, CloudTrail can log every single action taken in your account.
    • You can turn on a trail across all regions in your AWS account. CloudTrail will deliver log files from all regions to the S3 bucket and, optionally, the CloudWatch Logs log group you specify.
    • Additionally, when AWS launches a new region, CloudTrail will create the same trail in the new region. As a result, you will receive the log files containing API activity for the new region without taking any action.
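As a sketch, pulling the fields listed above out of a delivered log file might look like this; the record below is a trimmed, hypothetical example of the `{"Records": [...]}` JSON shape CloudTrail writes to S3:

```python
import json

# A trimmed, hypothetical CloudTrail log file with a single record,
# showing the fields described above: start time, user identity,
# source IP, request parameters, and response elements.
log_file = json.loads("""
{"Records": [{
  "eventTime": "2020-01-15T21:03:01Z",
  "eventSource": "ec2.amazonaws.com",
  "eventName": "StartInstances",
  "userIdentity": {"type": "IAMUser", "userName": "alice"},
  "sourceIPAddress": "203.0.113.10",
  "requestParameters": {"instancesSet": {"items": [{"instanceId": "i-0123456789abcdef0"}]}},
  "responseElements": null
}]}
""")

# Print who did what, from where, and when.
for rec in log_file["Records"]:
    print(rec["eventTime"], rec["userIdentity"]["userName"],
          rec["sourceIPAddress"], rec["eventName"])
```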
  3. CloudFormation (Create and Manage Resources with Templates):
    • CloudFormation allows you to quickly and easily deploy your infrastructure resources and applications on AWS. It allows you to turn infrastructure into code, which provides numerous benefits including quick deployments, infrastructure version control, and disaster recovery solutions.
    • You can convert your architecture into a JSON-formatted template, and that template can be used to deploy updated or replicated copies of that architecture into multiple regions.
    • It automates deployments and saves time by deploying architectures into multiple regions.
    • It can be used to version-control your infrastructure, allowing for rollbacks to previous versions if a new version has issues.
    • It allows for backups of your infrastructure and is a great solution for disaster recovery.
    • There are no additional charges for CloudFormation itself; you are only charged for the underlying resources created by a CloudFormation template.
    • Stack: A stack is a group of related resources that you manage as a single unit. You can use one of the templates provided by AWS to get started quickly with applications like WordPress or Drupal or create your own template.
    • StackSet: A StackSet is a container for CloudFormation stacks that lets you provision stacks across AWS accounts and regions by using a single AWS CloudFormation template.
    • Template: Templates tell CloudFormation which AWS resources to provision and how to provision them. When you create a CloudFormation stack, you must submit a template.
      • If you already have AWS resources running, the CloudFormer tool can create a template from your existing resources. This means you can capture and redeploy applications you already have running.
      • To build and view templates, you can use the drag-and-drop tool called CloudFormation Designer. You drag-and-drop the resources that you want to add to your template and drag lines between resources to create connections.
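As a minimal sketch of the template format (shown here in YAML, which CloudFormation accepts alongside JSON), the following hypothetical template provisions a single S3 bucket; deploying it with `aws cloudformation create-stack` creates a stack containing that bucket:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal sketch - one S3 bucket, with its name exported as an output
Resources:
  LogBucket:
    Type: AWS::S3::Bucket
Outputs:
  BucketName:
    Value: !Ref LogBucket
```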
  4. AWS Config (Track Resource Inventory and Changes):
    • AWS Config provides an inventory of your AWS resources and a history of configuration changes to these resources. You can use AWS Config to define rules that evaluate these configurations for compliance.
    • AWS Config automatically discovers your AWS resources and starts recording configuration changes. You can create Config rules from a set of pre-built managed rules to get started.
    • You can configure any of the pre-built rules to suit your needs, or create your own rules using AWS Lambda to check configurations for compliance.
    • AWS Config continuously records configuration changes to resources and automatically evaluates these changes against relevant rules. You can use a dashboard to assess overall configuration compliance.
    • Evaluate resource configurations for desired settings
    • Get a snapshot of the current configurations associated with your account.
    • You can retrieve current and historical configurations of resources in your account.
    • Retrieve a notification for creations, deletions, and modifications.
    • View relationships between resources (e.g member of a security groups)
    • Administrating resources: Notification when a resource violates config rules (e.g a user launches an EC2 instance in a prohibited region).
    • Auditing and Compliance: Historical records of configs are sometimes needed in auditing.
    • Configuration management and troubleshooting: Configuration changes on one resource might affect others, can help find these issues quickly and restore last known good configurations.
    • Security Analysis: Allows for historical records of IAM policies and security group configurations. e.g what permissions a user had at the time of an issue.
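Illustratively, the "prohibited region" example above boils down to logic like the following. This is only a local sketch: a real custom Config rule runs as a Lambda function and reports its results back through Config's PutEvaluations API, and the allowed-region list here is hypothetical.

```python
# Illustrative only: the kind of compliance check an AWS Config custom
# rule performs. A real rule is a Lambda function that returns
# COMPLIANT / NON_COMPLIANT / NOT_APPLICABLE via PutEvaluations.
ALLOWED_REGIONS = {"us-east-1", "eu-west-1"}   # hypothetical policy

def evaluate(resource):
    """Flag EC2 instances launched outside the allowed regions."""
    if resource["type"] != "AWS::EC2::Instance":
        return "NOT_APPLICABLE"
    if resource["region"] in ALLOWED_REGIONS:
        return "COMPLIANT"
    return "NON_COMPLIANT"

print(evaluate({"type": "AWS::EC2::Instance", "region": "ap-southeast-2"}))
# NON_COMPLIANT
```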
  5. Systems Manager (Central place to View and Manage AWS Resources):
    • With Systems Manager you can gain operational insight and take Action on AWS resources. View operational data for groups of resources, so you can quickly identify and act on any issues that might impact applications that use those resources.
    • It allows for grouping resources and performing automation, patching, and running commands.
    • Resource Group: Allows you to group your resources logically (e.g PRD, STG, TST). Make sense out of your AWS footprint by grouping your resources into applications.
    • Insights & Dashboard: View account-level and group-related insights through operational dashboards. Aggregates CloudTrail, CloudWatch, TrustedAdvisor, and more into a single dashboard for each resource group.
    • Software Inventory: Collect software catalog and configuration for your instances. A listing of your instances and the software installed on them. It can collect data on applications, files, network configs, services, and more.
    • Automations: Use built-in automations or build your own to accomplish complex operational tasks at scale. Automate IT operations and management tasks through scheduling, triggering from an alarm or directly.
    • Run Command: Safe and secure remote execution across instances at scale without SSH or PowerShell. Secure remote management replacing need for bastion hosts or SSH.
    • Patch Manager: Helps deploys OS and software patches across EC2 or on-premises.
    • Maintenance Window: Allows for scheduling administrative and maintenance tasks.
    • State Manager and Parameter Store: Centralized hierarchical store for managing secrets or plain-text data. Used for configuration management.
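As a sketch, a Run Command invocation maps onto Systems Manager's SendCommand API roughly as follows, using the AWS-managed AWS-RunShellScript document (shown as a plain parameter dict; with boto3 you would pass it to `ssm.send_command(**params)`; the instance ID is hypothetical):

```python
import json

# Sketch of SendCommand parameters: run two shell commands on one
# (hypothetical) instance via the SSM agent, with no SSH involved.
params = {
    "InstanceIds": ["i-0123456789abcdef0"],
    "DocumentName": "AWS-RunShellScript",
    "Parameters": {"commands": ["uptime", "df -h"]},
    "Comment": "health check via Run Command, no SSH needed",
}
print(json.dumps(params, indent=2))
```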
  6. OpsWorks (Configuration Management with Chef and Puppet):
    • AWS OpsWorks is a configuration management service that helps you build and operate highly dynamic applications, and propagate changes instantly.
    • AWS OpsWorks provides three solutions to configure your infrastructure:
      • OpsWorks Stacks: Define, group, provision, deploy, and operate your applications in AWS by using Chef in local mode.
        • It lets you manage applications and servers on AWS and on-premises.
        • With OpsWorks Stacks, you can model your application as a stack containing different layers, such as load balancing, database, and application servers. You can deploy and configure EC2 instances in each layer or connect other resources, such as RDS databases.
      • OpsWorks for Chef Automate: Create Chef servers that include Chef Automate premium features, and use the Chef DK or any Chef tooling to manage them.
      • OpsWorks for Puppet Enterprise: Create Puppet servers that include Puppet Enterprise features. Inspect, deliver, update, monitor, and secure your infrastructure.
  7. Elastic Beanstalk (Run and Manage Web Apps):
    • With Elastic Beanstalk, you can deploy, monitor, and scale an application quickly and easily.
    • Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, Python, PHP, Node.js, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS.
    • It allows for quick creation of simple single-tier application infrastructure and the deployment of code into that infrastructure, taking advantage of AWS services such as EC2, Auto Scaling, ELB, RDS, SQS, and CloudFront.
    • It's designed to make it easy to deploy less complex applications and to reduce the management required for building and deploying those applications.
    • It is used to quickly provision an AWS environment that requires little to no management, provided the application fits within the parameters of the Beanstalk service.
    • It can deploy from repositories or from uploaded code files. Easily update applications by uploading new code files or requesting a pull from a repository.
    • It can host Docker containers and supports the deployment of web applications from Docker containers.
    • It stores your application files and, optionally, server log files in an S3 bucket. You may configure Elastic Beanstalk (by editing environment config settings) to copy your server log files to an S3 bucket every hour.
  8. WorkSpaces (Desktops in the Cloud)
    • Amazon WorkSpaces is a fully managed, secure Desktop-as-a-Service (DaaS) solution which runs on AWS.
    • With Amazon WorkSpaces, you can easily provision virtual, cloud-based Microsoft Windows 7 Experience (provided by Windows Server 2008 R2) desktops for your users, providing them access to the documents, applications, and resources they need, anywhere, anytime, from any supported device.
    • With Amazon WorkSpaces, you pay either monthly or hourly just for the AWS WorkSpaces you launch, which helps you save money when compared to traditional desktops and on-premises Virtual Desktop Infrastructure (VDI) solutions.
    • You don’t need an AWS account to log in to WorkSpaces, and users are given local administrator access by default.
    • WorkSpaces are persistent, and all data on the D:\ drive is backed up every 12 hours.