Each client page is an object in Amazon S3 that is addressable by a unique URL such as https://s3.amazonaws.com/foo/bar.html, where s3.amazonaws.com resolves to the IP address of the S3 endpoint and /foo/bar.html identifies the bucket and object key in S3.
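As a minimal sketch with the AWS CLI, using the placeholder bucket and key from above:

# Upload a page as a public object, then fetch it by its S3 URL
aws s3api put-object --bucket foo --key bar.html \
    --body bar.html --acl public-read
curl https://s3.amazonaws.com/foo/bar.html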
Blog posts to help enterprises run applications in the cloud, with entries on cloud migrations as Fortune 1000 companies embark on moving to the cloud.
Thursday, June 26, 2014
Oracle Amazon Linux updates
The Amazon Linux AMI repositories are available in S3, configured such that instances with EC2 IP addresses can access the repositories and download packages onto the Amazon Linux AMI instances. Once a package has been downloaded from the Amazon Linux AMI repository to an instance, any further actions taken with that package are up to the customer who launched the instance.
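For example, on an Amazon Linux instance you can list those repositories and pull a package down with yum (httpd is just an example package):

# Show the Amazon Linux repositories the instance is configured with
yum repolist
# Download and install a package from them
sudo yum install -y httpd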
Amazon EMR termination using Data Pipeline
Data Pipeline provides “terminateAfter” functionality for all activities, including EmrActivity. You can set terminateAfter relative to the start time. It is also possible to wrap your existing EMR jobflow in a Data Pipeline EmrActivity and then set terminateAfter on the EmrCluster object, as in the sketch below.
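A minimal sketch of that wrapping, assuming an existing pipeline; the pipeline id, object ids, and step string are placeholders, and the definition is trimmed to the fields relevant here (a real pipeline also needs a schedule):

# Pipeline definition with terminateAfter set on the EmrCluster object
cat > pipeline.json <<'EOF'
{
  "objects": [
    {"id": "MyCluster", "name": "MyCluster", "type": "EmrCluster",
     "terminateAfter": "2 Hours"},
    {"id": "MyEmrActivity", "name": "MyEmrActivity", "type": "EmrActivity",
     "runsOn": {"ref": "MyCluster"},
     "step": "s3://mybucket/jobs/myjob.jar,arg1,arg2"}
  ]
}
EOF
aws datapipeline put-pipeline-definition \
    --pipeline-id df-0123456789ABCDEF \
    --pipeline-definition file://pipeline.json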
Determining country of origin for directing web traffic at the edge
The MaxMind API (http://www.maxmind.com/en/country) can be used either as an Nginx module or as a web service. MaxMind rates the country data at 99.98% accuracy, and the API does not detect proxies.
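A quick sketch of the local-database route, assuming the GeoIP command-line tools are installed (the GeoLite download URL was current as of this writing and may have moved):

# Fetch the free GeoLite country database and look up an address
wget http://geolite.maxmind.com/download/geoip/database/GeoLiteCountry/GeoIP.dat.gz
gunzip GeoIP.dat.gz
geoiplookup -f GeoIP.dat 8.8.8.8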
Labels:
aws,
country of origin,
nginx,
origin,
redirect,
web traffic
AWS AMI hardening
AWS AMI hardening procedures and industry standards can be found in this AMI hardening article: https://aws.amazon.com/articles/9001172542712674
The client is responsible for the initial security posture of the machine images distributed. Private AMIs need to be configured in a secure way that does not violate the AWS Acceptable Use Policy. Software referenced should be up to date with relevant security patches and adherent to the following:
All AMIs
•Disable services and protocols that authenticate users in clear text (e.g., Telnet and FTP).
•Do not start unnecessary network services on launch. Only administrative services (SSH/RDP) and the services required for your application should be started.
•Securely delete (using Sysinternals SDelete or Eraser) all AWS credentials from disk and configuration files (a Linux sketch follows this section).
•Securely delete any third-party credentials from disk and configuration files.
•Securely delete any additional certificates or key material from the system.
•Ensure that software installed on your AMI does not have default internal accounts and passwords (e.g., database servers with a default admin username and password).
•Ensure that the system does not violate the Amazon Web Services Acceptable Use Policy. Examples include open SMTP relays or proxy servers.
Windows specific
•Ensure that all enabled user accounts have new randomly generated passwords on instance creation. The EC2Config service can be set to do this for the Administrator account on next boot, but you must explicitly enable this before bundling the image.
•Ensure that the guest account is disabled.
•Clear the Windows event log.
•Do not join the instance to a Windows domain.
•Do not enable any file share points that are accessible by unauthenticated users. It is recommended to completely disable file sharing.
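On Linux AMIs, a minimal sketch of the credential cleanup items above (the paths are examples; adjust them for the users that actually exist on your image):

# Securely remove key material and shell history before creating the AMI
shred -u /root/.ssh/authorized_keys /home/ec2-user/.ssh/authorized_keys
shred -u /root/.bash_history /home/ec2-user/.bash_history
# Remove any AWS credentials left in configuration files
shred -u /root/.aws/config /home/ec2-user/.aws/config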
AWS encrypting data at rest
Here is a good white paper on encrypting data at rest on AWS:
http://media.amazonwebservices.com/AWS_Securing_Data_at_Rest_with_Encryption.pdf
Amazon now offers native Amazon EBS encryption: http://aws.amazon.com/about-aws/whats-new/2014/05/21/Amazon-EBS-encryption-now-available/
S3 offers server-side encryption (SSE), client-side encryption, and SSE with keys managed by you: http://aws.amazon.com/blogs/aws/s3-encryption-with-your-keys/
All data in Glacier is automatically encrypted, and Redshift clusters can be launched with encryption enabled.
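A couple of one-line sketches of these options with the AWS CLI (the volume size, availability zone, bucket, and file names are placeholders):

# Create an encrypted EBS volume
aws ec2 create-volume --size 100 --availability-zone us-east-1a --encrypted
# Upload an object to S3 with server-side encryption
aws s3 cp backup.tar.gz s3://mybucket/backup.tar.gz --sse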
Amazon SNS : Mobile push and SMS messaging
Here are some specifics on how Amazon SNS works. Amazon SNS allows you to use one notification system regardless of the device: http://docs.aws.amazon.com/sns/latest/dg/SNSMobilePush.html
You also asked about code samples. Here is code that uses the AWS SDK for Java to publish a message to a GCM mobile endpoint: http://docs.aws.amazon.com/sns/latest/dg/mobile-push-gcm.html#SNSMobilePushGCMAPI
Here is an example using the REST/Query API : http://docs.aws.amazon.com/sns/2010-03-31/api/API_Publish.html
There is no limit on how many messages can be sent through a single topic. The only SNS limit is listed here: http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_sns : a default limit of 3,000 topics (meaning it can be raised; it is not a hard limit). Use one topic, or at most one topic per device type, because a single topic can support deliveries to multiple endpoint types. For example, you can group together iOS, Android, and SMS recipients; when you publish once to a topic, SNS delivers appropriately formatted copies of your message to each subscriber.
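A short sketch of that one-topic pattern with the AWS CLI (the account id, topic name, and phone number are placeholders):

# Create a single topic, subscribe an SMS endpoint, and publish once;
# SNS delivers a formatted copy to every subscriber
aws sns create-topic --name order-events
aws sns subscribe --topic-arn arn:aws:sns:us-east-1:123456789012:order-events \
    --protocol sms --notification-endpoint 1-555-555-0100
aws sns publish --topic-arn arn:aws:sns:us-east-1:123456789012:order-events \
    --message "Your order has shipped"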
Cloud Foundry on AWS
It is possible to run Cloud Foundry on AWS; here is a good blog post: http://www.activestate.com/blog/2013/07/how-deploy-cloud-foundry-v2-aws-vagrant.
BOSH is a tool for deploying Cloud Foundry to AWS, with code on GitHub: https://github.com/cloudfoundry-community/bosh-bootstrap
Amazon Redshift - What is new
Here are some new things for Redshift:
- Increased concurrency to 50 slots — run more simultaneous queries and balance large and small queries
- COPY from EMR — easily load data in parallel from EMR clusters into Redshift
- COPY from JSON — load in parallel directly from JSON files on S3 (see the sketch after this list)
- REGEXP_SUBSTR() — use regular expressions directly in SQL queries
- Resize progress indicator — monitor resize progress in the console or via the API
- Max result set size — choose cursor counts and result set sizes
- FedRAMP approval — Redshift is now FedRAMP approved
- Support for the ECDHE-RSA and ECDHE-ECDSA SSL cipher suites — perfect forward secrecy for SSL connections
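As a sketch of the new COPY from JSON form (the cluster endpoint, database, table, bucket, and credentials are placeholders):

# Run COPY ... JSON 'auto' against a Redshift cluster via psql
psql -h mycluster.abc123.us-east-1.redshift.amazonaws.com -p 5439 -d dev -U admin \
    -c "COPY events FROM 's3://mybucket/events/' CREDENTIALS 'aws_access_key_id=<KEY>;aws_secret_access_key=<SECRET>' JSON 'auto';"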
Monday, June 23, 2014
EC2 Instance create date and time
Sometimes you may want to retrieve the creation date and time of an EC2 instance.
From the docs, an EC2 instance has the property launchTime (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-ItemType-RunningInstancesItemType.html). You can easily build a boto script that queries all instances and reports their launchTime (http://boto.readthedocs.org/en/latest/ref/ec2.html#module-boto.ec2.instance).
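If you'd rather not write the boto script, the AWS CLI produces the same report in one line:

# List every instance id with its launch time
aws ec2 describe-instances \
    --query 'Reservations[].Instances[].[InstanceId,LaunchTime]' --output text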
Route 53 weighted record sets and the records returned
When processing a DNS query, Amazon Route 53 searches for a resource record set that matches the specified name and type. If a group of resource record sets have the same name and type, Amazon Route 53 selects one from that group. The probability of any one resource record set being selected depends on its weight as a proportion of the total weight for all resource record sets in the group:
For example, suppose you create three resource record sets for www.example.com. The three A records have weights of 1, 1, and 3 (sum = 5). On average, Amazon Route 53 selects each of the first two resource record sets one-fifth of the time, and returns the third resource record set three-fifths of the time.
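A sketch of creating one of those weighted record sets with the AWS CLI (the hosted zone id and IP address are placeholders); the other two records would be created the same way with Weight 1 and their own SetIdentifier:

aws route53 change-resource-record-sets --hosted-zone-id Z1EXAMPLE --change-batch '{
  "Changes": [{
    "Action": "CREATE",
    "ResourceRecordSet": {
      "Name": "www.example.com.", "Type": "A",
      "SetIdentifier": "backend-3", "Weight": 3, "TTL": 300,
      "ResourceRecords": [{"Value": "192.0.2.3"}]
    }
  }]
}'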
Labels:
a records,
amazon,
average,
aws,
DNS,
dns query,
external dns,
query,
record sets,
records,
route 53,
route53,
weighted average
AWS CLI multiple profiles
Details on multiple AWS CLI profiles can be found here: http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-multiple-profiles
Here is a summary of the highlights:
1. Location and name of the configuration file: the directory is ~/.aws and the file name is config.
2. Define a profile by typing this at the command prompt: aws configure --profile tomlaszeast (tomlaszeast is the name of the profile)
3. Use that profile by passing the same flag to any command, for example:
aws ec2 describe-instances --profile tomlaszeast
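For reference, after running the configure command above, ~/.aws/config will contain something like this (the region and output values are examples):

[default]
region = us-east-1
output = json

[profile tomlaszeast]
region = us-west-2
output = json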
Labels:
access key,
amazon,
aws,
cli,
config,
configuration,
multiple profiles,
profiles,
regions,
secret key,
setting,
users
AWS EC2 user data
You can perform any bootstrapping action you would like using user data. Here is an example that installs Apache, PHP, and the MySQL client packages, enables and starts Apache, and then deploys a sample application under the Apache document root.
#!/bin/sh
# Install Apache, PHP, and the MySQL client packages
yum -y install httpd php mysql php-mysql
# Start Apache now and on every boot
chkconfig httpd on
/etc/init.d/httpd start
# Download and deploy the sample application
cd /tmp
wget http://us-east-1-aws-training.s3.amazonaws.com/self-paced-lab-4/examplefiles-as.zip
unzip examplefiles-as.zip
mv examplefiles-as/* /var/www/html
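To pass this script as user data at launch (the AMI id, key name, and security group are placeholders), save it as userdata.sh and run:

aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t1.micro \
    --key-name mykey --security-groups my-sg --user-data file://userdata.sh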
Labels:
amazon,
apache,
aws,
boot strapping,
booting,
configuration,
ec2,
installation,
instance,
mysql,
php,
user data
Sunday, June 22, 2014
AWS EC2 instance user name
When logging into an EC2 instance using SSH, you may receive an error even after checking that you have the right instance name, IP address, PEM key, etc. You may simply be using the incorrect user name. The EC2 instance user names by OS are listed here:
http://alestic.com/2014/01/ec2-ssh-username
For example, Ubuntu servers use “ubuntu@”, Amazon Linux uses “ec2-user@”, and others such as Debian use “root@”.
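For example (the key file and IP address are placeholders):

ssh -i mykey.pem ec2-user@54.0.0.10   # Amazon Linux
ssh -i mykey.pem ubuntu@54.0.0.10     # Ubuntu
ssh -i mykey.pem root@54.0.0.10       # Debian and some others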
ELB Health Check return code
You can have TCP and HTTP health checks. A TCP health check only verifies that a TCP connection can be established on the configured port, while an HTTP health check requires the target page to return a 200 status code to pass. More can be found here:
http://stackoverflow.com/questions/23689333/how-to-instruct-an-aws-elb-to-consider-a-health-check-that-returns-a-403-code-as
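A sketch of configuring an HTTP health check on a classic ELB with the AWS CLI (the load balancer name and path are placeholders); the target must return a 200 for instances to be marked healthy:

aws elb configure-health-check --load-balancer-name my-elb \
    --health-check Target=HTTP:80/healthcheck.html,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=3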
AWS multiple PEM keys for an EC2 instance
You will probably want a different PEM file for each developer who will be accessing EC2 instances. This is good practice: when a developer leaves the company, you can revoke their SSH access to an EC2 instance without removing anyone else's.
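A minimal sketch, assuming the AWS CLI is configured and an instance is already reachable with an existing key (key names, paths, and the IP are placeholders): create a second key pair, then authorize its public key on the instance.

# Create a new key pair for developer "alice" and save the private key
aws ec2 create-key-pair --key-name dev-alice \
    --query KeyMaterial --output text > dev-alice.pem
chmod 400 dev-alice.pem
# Derive the public key and append it to authorized_keys on the instance
# (run from a machine that already has SSH access)
ssh-keygen -y -f dev-alice.pem > dev-alice.pub
cat dev-alice.pub | ssh -i existing.pem ec2-user@<instance-ip> \
    'cat >> ~/.ssh/authorized_keys'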
AWS ELB instances tied to an ELB and their IP addresses
The Amazon Elastic Load Balancer takes care of scaling out the number of underlying EC2 instances that make up the software virtual load balancer. However, you may want to determine the number of instances servicing your requests, for example when preparing for a large promotion or a spike in users. Determining the number of underlying instances is easy: simply use the host or dig commands with the ELB DNS name. There will be a minimum of one EC2 instance for each AZ that the ELB is servicing.
1. [ec2-user@ip-10-0-0-50 ~]$host ReachForceAutoScalingELB-1342887266.us-west-2.elb.amazonaws.com
ReachForceAutoScalingELB-1342887266.us-west-2.elb.amazonaws.com has address 50.112.140.192
ReachForceAutoScalingELB-1342887266.us-west-2.elb.amazonaws.com has address 50.112.183.144
2. [ec2-user@ip-10-0-0-50 ~]$ dig ReachForceAutoScalingELB-1342887266.us-west-2.elb.amazonaws.com
; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.28.amzn1 <<>> ReachForceAutoScalingELB-1342887266.us-west-2.elb.amazonaws.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 813
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;ReachForceAutoScalingELB-1142887282.us-west-2.elb.amazonaws.com. IN A
;; ANSWER SECTION:
ReachForceAutoScalingELB-1342887266.us-west-2.elb.amazonaws.com. 30 IN A 50.112.183.144
ReachForceAutoScalingELB-1342887266.us-west-2.elb.amazonaws.com. 30 IN A 50.112.140.192
;; Query time: 1 msec
;; SERVER: 10.0.0.2#53(10.0.0.2)
;; WHEN: Fri Jun 6 12:31:34 2014
;; MSG SIZE rcvd: 113
Labels:
amazon,
aws,
ec2,
elb,
instance,
ip,
ip addresses,
load balancer
Amazon RDS using private IP to connect to database - not the right approach
You should always connect to your Amazon RDS instance using the RDS endpoint shown in the AWS console. However, some IT folks choose to use the private IP address of the RDS instance instead. It is easy to determine the private IP address of your RDS instance using the host or dig commands, as shown below (keep in mind this is not recommended; it simply shows how easily IT personnel who don't want to use the RDS endpoint can avoid it):
[ec2-user@ip-10-0-0-50 ~]$ host postgres.cyve56loidht.us-west-2.rds.amazonaws.com
postgres.cyve56loidht.us-west-2.rds.amazonaws.com is an alias for ec2-54-201-99-99.us-west-2.compute.amazonaws.com.
ec2-54-201-75-58.us-west-2.compute.amazonaws.com has address 10.0.5.204
[ec2-user@ip-10-0-0-50 ~]$ ping 10.0.5.204
PING 10.0.5.204 (10.0.5.204) 56(84) bytes of data.
^C
--- 10.0.5.204 ping statistics ---
10 packets transmitted, 0 received, 100% packet loss, time 9792ms
[ec2-user@ip-10-0-0-50 ~]$ dig postgres.cyve56loidht.us-west-2.rds.amazonaws.com
; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.28.amzn1 <<>> postgres.cyve56loidht.us-west-2.rds.amazonaws.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 25864
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;postgres.cyve56loidht.us-west-2.rds.amazonaws.com. IN A
;; ANSWER SECTION:
postgres.cyve56loidht.us-west-2.rds.amazonaws.com. 5 IN CNAME ec2-54-201-99-99.us-west-2.compute.amazonaws.com.
ec2-54-201-99-99.us-west-2.compute.amazonaws.com. 60 IN A 10.0.5.204
;; Query time: 19 msec
;; SERVER: 10.0.0.2#53(10.0.0.2)
;; WHEN: Fri Jun 6 12:28:44 2014
;; MSG SIZE rcvd: 132
Network interoperability lab
I found out about a place where networking interoperability is the main focus. It happens to be where I live in New Hampshire.
www.iol.unh.edu
S3 read consistency
Here is a great blog post on Amazon S3 read consistency. S3 read consistency has a big impact on how other AWS services (such as EMR) and applications use S3.
http://techblog.netflix.com/2014/01/s3mper-consistency-in-cloud.html
Labels:
amazon,
aws,
consistency,
eventual,
read consistency,
s3
AWS Web site on S3 and Pen testing
You're not allowed to pen test AWS API endpoints; you may only test your own EC2/VPC instances and configuration. More information is in this blog post: http://cloudconclave.blogspot.com/2014/06/aws-penetration-testing-without-having.html
If you are hosting a static site on S3, you should read the risk and security white papers (http://aws.amazon.com/security/security-resources/). They discuss how AWS regularly scans S3 for vulnerabilities and performs regular penetration testing; the ISO 27001 certification also validates this.
Labels:
amazon,
API,
aws,
endpoints,
pen testing,
penetration,
s3,
testing
AWS Penetration Testing without having to fill out the pen testing form
Penetration testing is something customers like to do when running on AWS. You must be pre-approved to run a pen test on AWS unless you use an Amazon Marketplace AMI from Tenable.
You can read about the Tenable solution here:
http://static.tenable.com/partners/Tenable_AWS_Solution_Brief.pdf
Here is the Amazon Marketplace AMI:
https://aws.amazon.com/marketplace/pp/B00G9A5MS0/ref=srh_res_product_title?ie=UTF8&sr=0-3&qid=1402498041820#support
Here is the form if you were not using the Tenable solution:
http://aws.amazon.com/security/penetration-testing/
Labels:
amazon,
aws,
ec2,
instances,
pen testing,
penetration,
scanning,
testing,
vulnerability
OpenSwan on AWS
A common use case for a third-party VPN solution such as OpenSwan is connecting the VPCs in two regions through an IPsec VPN server.
First, set up a VPC in each region. Here is what I did:
Region 1 (US-West-2) - VPC 10.0.0.0/16 with private subnet 10.0.0.0/24
Region 2 (Australia) - VPC 172.0.0.0/16 with private subnet 172.0.0.0/24
==================================================================================================================
Configure the VPN server software for the EC2 instances - Region 1
==================================================================================================================
Step 1
------
sudo yum install openswan
Step 2
------
nano /etc/ipsec.conf (make sure the line that includes /etc/ipsec.d/*.conf is present and uncommented)
Step 3
------
sudo vi /etc/ipsec.d/vpc1-to-vpc2.conf
Step 4
------
conn vpc1-to-vpc2
type=tunnel
authby=secret
left=%defaultroute
leftid=<EIP1>
leftnexthop=%defaultroute
leftsubnet=<VPC1 CIDR>
right=<EIP2>
rightsubnet=<VPC2 CIDR>
pfs=yes
auto=start
Step 5
------
sudo vi /etc/ipsec.d/vpc1-to-vpc2.secrets
Step 6
------
<EIP1> <EIP2>: PSK "<TYPE A KEY HERE>"
==================================================================================================================
Configure the VPN server software for the EC2 instances - Region 2
==================================================================================================================
Step 7
------
sudo vi /etc/ipsec.d/vpc2-to-vpc1.conf
Step 8
------
conn vpc2-to-vpc1
type=tunnel
authby=secret
left=%defaultroute
leftid=<EIP2>
leftnexthop=%defaultroute
leftsubnet=<VPC2 CIDR>
right=<EIP1>
rightsubnet=<VPC1 CIDR>
pfs=yes
auto=start
Note the CIDR needs to include the block range. For example: 10.0.0.0/16
Step 9
------
sudo vi /etc/ipsec.d/vpc2-to-vpc1.secrets
Step 10
-------
<EIP2> <EIP1>: PSK "<TYPE THE SAME KEY FROM STEP 6 HERE>"
==================================================================================================================
Configuration in each region
==================================================================================================================
Step 11
-------
a- sudo service ipsec start
b- sudo chkconfig ipsec on
c- sudo vi /etc/sysctl.conf and set:
net.ipv4.ip_forward = 1
d- sudo service network restart
==================================================================================================================
Test your connections
==================================================================================================================
Step 1 - Region 1
------
ping 172.0.0.50
Step 2 - Region 2
------
ping 10.0.0.50
Labels:
account,
amazon,
aws,
cross region,
ec2,
network,
openswan,
private,
private subnet,
region to region,
regions,
subnet,
tunnel,
VPC,
VPN
AWS EBS Performance
Here is a very good presentation on Amazon EBS performance from re:Invent 2013:
http://www.slideshare.net/AmazonWebServices/ebs-webinarfinal