
Thursday, April 17, 2014

AWS Elastic Beanstalk and NAT instance

An Elastic Beanstalk environment launched in a VPC with a private subnet requires a NAT. Each instance needs to be able to talk to the Internet in order to answer the wait condition. Connectivity can be provided through a NAT instance, but there does have to be access to the Internet.

The following shows a VPC configuration with a private subnet. It is the connectivity to the Elastic Beanstalk endpoint that is needed; as you can see, the endpoint sits outside the VPC.
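For illustration, here is a minimal sketch of the NAT plumbing using today's AWS CLI (the instance and route table IDs are placeholders): disable source/destination checking on the NAT instance, then point the private subnet's default route at it.

# Allow the NAT instance to forward traffic that is not addressed to it.
aws ec2 modify-instance-attribute --instance-id i-1a2b3c4d --no-source-dest-check
# Send the private subnet's Internet-bound traffic through the NAT instance.
aws ec2 create-route --route-table-id rtb-1a2b3c4d --destination-cidr-block 0.0.0.0/0 --instance-id i-1a2b3c4d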

You can find more detailed instructions for creating and configuring a NAT instance here:

Sunday, March 30, 2014

AWS public and elastic IPs

If you launch an instance in EC2-Classic, it is assigned a public IP address by default. This is not an optional configuration.

If you launch an instance into a VPC, a public IP addressing feature is available that lets you control whether your instance is assigned a public IP address. In the AWS console, this is the check box labeled Auto-assign Public IP (formerly Assign Public IP). The public IP address is assigned to the network interface with device index eth0.

A public IP address is assigned to your instance from Amazon's pool of public IP addresses, and is not associated with your AWS account. When a public IP address is disassociated from your instance, it is released back into the public IP address pool, and you cannot reuse it.

Whether you assign a public IP address to your instance during launch or not, you can associate an Elastic IP address with your instance after it's launched. 
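For example, with the unified AWS CLI (the instance and allocation IDs below are placeholders), allocating a VPC Elastic IP and attaching it to a running instance looks roughly like this:

# Allocate an Elastic IP for use in a VPC; the call returns an AllocationId.
aws ec2 allocate-address --domain vpc
# Associate it with the instance using the returned AllocationId.
aws ec2 associate-address --instance-id i-1a2b3c4d --allocation-id eipalloc-1a2b3c4d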

Thursday, November 7, 2013

Fixed MAC address for EC2 instances

Some software licenses require a fixed MAC address to exist.  Running on AWS EC2, an instance will get a new MAC address on a reboot or a shutdown and restart.  A way to ensure the new EC2 instance will get the same MAC address is to use an Elastic Network Interface (ENI).  An ENI is like an additional network card that has a fixed IP address (internal and external) and a fixed MAC address.
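A rough sketch with the AWS CLI (the subnet, instance, and ENI IDs are placeholders): create the ENI, then attach it to the instance as a second interface. The ENI keeps its MAC address for its lifetime, even as it moves between instances.

# Create an ENI in the target subnet; the output includes its fixed MacAddress.
aws ec2 create-network-interface --subnet-id subnet-1a2b3c4d --description "license MAC" --private-ip-address 10.0.0.25
# Attach it as eth1 (device index 1) on the instance that needs the fixed MAC.
aws ec2 attach-network-interface --network-interface-id eni-1a2b3c4d --instance-id i-1a2b3c4d --device-index 1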

Monday, September 30, 2013

AWS EBS PIOPS : block size and IOPS


Having spent more time in the database world than in the web development world, I am accustomed to measuring (database) performance/throughput in terms of IOPS or TPS.  The web/video/image world likes to use MB/sec.  Why am I saying this? Because it relates to the conversation about getting a certain level of PIOPS (based upon a 16 KB block) on AWS EBS and how this affects MB/sec.  MB/sec, I am beginning to understand (and maybe moving to the 'dark side'), is the ultimate measure of disk 'performance'.

Example: A 2,000 Provisioned IOPS volume can handle:
• 2,000 16 KB reads/writes per second, or 1,000 32 KB reads/writes per second, or 500 64 KB reads/writes per second
• You will get a consistent 32 MB/sec of throughput (with 16 KB or larger I/Os)
• If an index creation sends 32 KB I/Os, the delivered IOPS becomes 1,000, but you still get 32 MB/sec of throughput
• On a best-effort basis, you may get up to 40 MB/sec of throughput

So you may be better off using a 64 KB block size: your PIOPS number will show up as lower, but your MB/sec could be better.
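The bullets above are just multiplication, throughput ≈ IOPS × I/O size; a quick sanity check in the shell (using decimal MB, i.e. 1,000 KB):

echo $(( 2000 * 16 / 1000 ))   # 2000 IOPS x 16 KB -> 32 MB/sec
echo $(( 1000 * 32 / 1000 ))   # 1000 IOPS x 32 KB -> 32 MB/sec
echo $((  500 * 64 / 1000 ))   #  500 IOPS x 64 KB -> 32 MB/sec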

Monday, September 9, 2013

Hadoop Cluster and Amazon EMR

Accenture calculated the total cost of ownership of a bare-metal Hadoop cluster and derived the capacity of nine different cloud-based Hadoop clusters at the matched TCO. The performance of each option was then compared by running three real-world Hadoop applications.  The full report is here:

http://www.accenture.com/us-en/Pages/insight-hadoop-deployment-comparison.aspx

The report compares the performance of a bare-metal Hadoop cluster and Amazon Elastic MapReduce (Amazon EMR).

Friday, August 30, 2013

AWS re:Invent enterprise sessions


DAT202 - Using Amazon RDS to Power Enterprise Applications
Amazon Relational Database Service (Amazon RDS) makes it cheap and easy to deploy, manage, and scale relational databases using a familiar MySQL, Oracle, or MS SQL Server database engine. Amazon RDS can be an excellent choice for running many large, off-the-shelf enterprise applications from companies like JD Edwards, Oracle, PeopleSoft, and Siebel. Sign up for this session to learn how to best leverage Amazon RDS for use with enterprise applications and learn about best practices and data migration strategies.


DAT401 - Advanced Data Migration Techniques for Amazon RDS
Migrating data from existing environments to AWS is a key part of the overall migration to Amazon RDS for most customers. Moving data into Amazon Relational Database Service (Amazon RDS) from existing production systems in a reliable, synchronized manner with minimum downtime requires careful planning and the use of appropriate tools and technologies. Because each migration scenario is different in terms of source and target systems, tools, and data sizes, you'll need to customize your data migration strategy to achieve the best outcome. In this session we will do a deep dive into the various methods, tools, and technologies that can be put to use for a successful and timely data migration to Amazon RDS.


STG301 - AWS Storage Tiers for Enterprise Workloads - Best Practices
Enterprise environments utilize many different kinds of storage technologies from different vendors to fulfill various needs in their IT landscape. These are often very expensive, and procurement cycles quite lengthy. They also need specialized expertise in each vendor's storage technologies to configure and integrate them into the ecosystem, again resulting in prolonged project cycles and added cost. AWS provides end-to-end storage solutions that fulfill all these needs of enterprise environments and are easily manageable, extremely cost effective, fully integrated, and totally on demand. These storage technologies include Elastic Block Store (EBS) for instance-attached block storage, Amazon Simple Storage Service (Amazon S3) for object (file) storage, and Amazon Glacier for archival. An enterprise database environment is an excellent example of a system that could use all these storage technologies to implement an end-to-end solution: striped PIOPS volumes for data files, standard EBS volumes for log files, S3 for database backup using Oracle Secure Backup, and Glacier for long-term archival from S3 based on time-lapse rules. In this session, we will explore the best practices for utilizing AWS storage tiers for enterprise workloads.

STG305 - Disaster Recovery Site on AWS - Minimal Cost Maximum Efficiency
Implementing a disaster recovery (DR) site is crucial for the business continuity of any enterprise. Due to the fundamental nature of features like elasticity, scalability, and geographic distribution, DR implementation on AWS can be done at 10-50% of the conventional cost. In this session, we will do a deep dive into proven DR architectures on AWS and the best practices, tools, and techniques to get the most out of them.


STG303 - Running Microsoft and Oracle Stacks on Elastic Block Store
Run your enterprise applications on Elastic Block Store (EBS). This session will discuss how you can leverage the block storage platform (EBS) as you move your Microsoft (SQL Server, Exchange, SharePoint) and Oracle (Database, E-Business Suite, Business Intelligence) workloads onto Amazon Web Services (AWS). The session will cover high availability, performance, and backup/restore best practices.

ENT303 - Migrating Enterprise Applications to AWS - Best Practices, Tools and Techniques
In this session we will discuss strategies, tools, and techniques for migrating enterprise software systems to AWS. We'll consider applications like Oracle E-Business Suite, SAP, PeopleSoft, JD Edwards, and Siebel. These applications are complex by themselves; they are frequently customized; they have many touch points on other systems in the enterprise; and they often have large associated databases. Nevertheless, running enterprise applications in the cloud affords powerful benefits. We'll identify success factors and best practices.

Friday, June 7, 2013

AWS EMR : Getting started for Oracle DBAs


Newer technologies such as MapReduce (AWS EMR, Hadoop) and NoSQL (MongoDB, AWS DynamoDB...) can be confusing to Oracle DBAs.  This blog post takes a quick look at AWS Elastic MapReduce (EMR) and attempts to demystify it for Oracle DBAs.  Going back before RDBMS products, MapReduce is like a mainframe batch job with no built-in restart capability.  MapReduce facilitates the processing of large volumes of data in one large batch.  This one large batch, however, is broken into tens or hundreds of smaller pieces of work and processed by MapReduce worker nodes.  This makes MapReduce a great solution for processing web logs, sensor data, genome data, large volumes of transactions, telephone call detail records, vote ballots, and other instances where large volumes of data need to be processed once and the results stored.

MapReduce is a framework, so you have to write to an API in your application in order to take advantage of it.  There are a number of implementations of this framework, including Apache Hadoop and AWS Elastic MapReduce (EMR).  Apache Hadoop has no native data store associated with it (although the Hadoop Distributed File System, HDFS, can be used natively).

As mentioned, you need to code your own application using the MapReduce framework. AWS makes getting started with MapReduce easier by providing sample applications for EMR.   One of the five sample EMR applications is a Java application for processing AWS CloudFront logs.   The CloudFront LogAnalyzer is a Java application that uses Cascading to analyze and generate usage reports from Amazon CloudFront HTTP access logs.   You specify the EMR input source (the CloudFront log location in S3) in the JAR arguments, and you also specify the S3 bucket that will hold the results (output).


For the CloudFront HTTP LogAnalyzer, the input and output files use S3.  However, HDFS or AWS DynamoDB are commonly used as input sources and sometimes as output sources.  You may want to use DynamoDB as an output source if you wish to load the results into Redshift or do future BI analysis on the results.  You could also send the results to an AWS SQS queue to be handled later for processing into S3, DynamoDB, RDS, or some other persistent data store.
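To make this concrete, here is a hypothetical sketch with the current AWS CLI of running a custom JAR step that reads its input from S3 and writes its output to S3 (the bucket names, JAR path, and arguments are placeholders, not the actual sample's):

# Launch a small cluster, run one custom JAR step, and shut down when done.
aws emr create-cluster --name "loganalyzer" --ami-version 3.11.0 --instance-type m1.large --instance-count 3 --auto-terminate --steps Type=CUSTOM_JAR,Name=LogAnalyzer,Jar=s3://mybucket/loganalyzer.jar,Args=[s3://mybucket/input,s3://mybucket/output]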

Tuesday, May 28, 2013

Auto Scaling Script


Auto Scaling with VPC 
This example uses the CLI version 1.0.61.2 (API 2011-01-01).  There is now a newer version of the CLI.

Overall notes:
1. Create an image (aka: gold image) that has a health check on it that consists of a simple static HTML page such as a ping.html or an index.html. 
2. The ELB (you need an ELB before creating the Auto Scaling group) that has the instances running from your gold image AMI created in step 1. You could (nothing is preventing you from doing so) create an Auto Scaling launch configuration (step 6 in the process below) with a different AMI, or with a different instance type (instance type is another parameter of your launch configuration), than the instances that will be part of your Auto Scaling group.  Both are possible, but most will use the same AMI (the gold image, so you know you have a health check) and the same instance type.
3. Before you create the ELB you should have instances running from the gold image to make sure the ELB is working properly.  To provide true HA, make sure to put the instances in two AZs (for example, us-west-2a and us-west-2b).
4. When creating an ELB that uses subnets, note the ELB limitation that only one subnet per AZ is allowed.
5. When creating the Auto Scaling group (as-create-auto-scaling-group), make sure to specify the VPC subnets and the AZs. The VPC subnets need to be in the AZs given in the availability-zones parameter of the command.

Steps:
Prework:
1. Identify the VPC : For example, VPC: vpc-9b120cf2
2. Add subnets: in this case, using two public subnets. I had to create the second subnet, which was private by default, and I had to add an IGW route to make it public.
3. Subnets -
            A. Subnet: subnet-9f120cf6
            CIDR: 10.0.0.0/24   VPC: vpc-9b120cf2   Availability Zone: us-west-2a
            B. Subnet: subnet-8c130de5
            CIDR: 10.0.5.0/24   VPC: vpc-9b120cf2   Availability Zone: us-west-2b
4. Launch two instances from the AMI that has Apache installed, with ping.html (or some other file) as the health check: one instance in subnet A and the other in subnet B.
5. Create an ELB with the two instances across subnets.  The ELB is healthy with two instances running in two AZs in two different public subnets. The ELB limitation is one subnet per AZ.
6. Auto Scaling:
A. as-create-launch-config vpcautoscaling-as-lc --image-id ami-dcd344ec --instance-type t1.micro --key AutoScalingKey
B. as-create-auto-scaling-group vpcautoscaling-as-grp --launch-configuration vpcautoscaling-as-lc  --min-size 4 --max-size 12 --load-balancers VpcautoscalingAutoScalingELB --vpc-zone-identifier subnet-9f120cf6,subnet-8c130de5 --availability-zones us-west-2a,us-west-2b
C. as-describe-auto-scaling-groups --headers
output of command:
INSTANCE  INSTANCE-ID  AVAILABILITY-ZONE  STATE      STATUS   LAUNCH-CONFIG
INSTANCE  i-3af6e708   us-west-2b         InService  Healthy  vpcautoscaling-as-lc
INSTANCE  i-3cf6e70e   us-west-2a         InService  Healthy  vpcautoscaling-as-lc
INSTANCE  i-3ef6e70c   us-west-2a         InService  Healthy  vpcautoscaling-as-lc
INSTANCE  i-38f6e70a   us-west-2b         InService  Healthy  vpcautoscaling-as-lc
D. as-put-scaling-policy vpcautoscaling-scale-out-policy --auto-scaling-group vpcautoscaling-as-grp --adjustment=30 --type PercentChangeInCapacity
ARN: arn:aws:autoscaling:us-west-2:649163059618:scalingPolicy:69270ce4-3350-48f4-9d6f-71bc64225554:autoScalingGroupName/vpcautoscaling-as-grp:policyName/vpcautoscaling-scale-out-policy
E.  as-put-scaling-policy vpcautoscaling-scale-in-policy --auto-scaling-group vpcautoscaling-as-grp --adjustment=-1 --type PercentChangeInCapacity
(Note the negative adjustment: a scale-in policy must remove capacity.)
ARN: arn:aws:autoscaling:us-west-2:649163059618:scalingPolicy:d9a6780b-3319-4b00-98f1-e98dcfc68d82:autoScalingGroupName/vpcautoscaling-as-grp:policyName/vpcautoscaling-scale-in-policy
F. mon-put-metric-alarm --alarm-name AddCapacity --metric-name CPUUtilization --namespace "AWS/EC2" --statistic "Average" --evaluation-periods 6 --period 120 --threshold 80 --comparison-operator GreaterThanOrEqualToThreshold --dimensions "AutoScalingGroupName=vpcautoscaling-as-grp"  --alarm-actions arn:aws:autoscaling:us-west-2:649163059618:scalingPolicy:69270ce4-3350-48f4-9d6f-71bc64225554:autoScalingGroupName/vpcautoscaling-as-grp:policyName/vpcautoscaling-scale-out-policy
G. mon-put-metric-alarm --alarm-name RemoveCapacity --metric-name CPUUtilization --namespace "AWS/EC2" --statistic "Average" --evaluation-periods 2 --period 120 --threshold 40 --comparison-operator LessThanOrEqualToThreshold --dimensions "AutoScalingGroupName=vpcautoscaling-as-grp"  --alarm-actions arn:aws:autoscaling:us-west-2:649163059618:scalingPolicy:d9a6780b-3319-4b00-98f1-e98dcfc68d82:autoScalingGroupName/vpcautoscaling-as-grp:policyName/vpcautoscaling-scale-in-policy
H. as-describe-policies --auto-scaling-group vpcautoscaling-as-grp --headers
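The as-* and mon-* tools used above have since been replaced by the unified AWS CLI; a rough, untested equivalent of steps A and B (reusing the same names and IDs from this example) would be:

aws autoscaling create-launch-configuration --launch-configuration-name vpcautoscaling-as-lc --image-id ami-dcd344ec --instance-type t1.micro --key-name AutoScalingKey
aws autoscaling create-auto-scaling-group --auto-scaling-group-name vpcautoscaling-as-grp --launch-configuration-name vpcautoscaling-as-lc --min-size 4 --max-size 12 --load-balancer-names VpcautoscalingAutoScalingELB --vpc-zone-identifier "subnet-9f120cf6,subnet-8c130de5" --availability-zones us-west-2a us-west-2b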

Thursday, April 25, 2013

CloudFormation, OpsWorks and Elastic Beanstalk

Now that AWS has announced OpsWorks, and also has CloudFormation and Elastic Beanstalk, there are questions about which application and infrastructure management tool is best for you.  Throw in things like Chef and Puppet and it gets really interesting. More can be found here:

https://aws.amazon.com/application-management/

Monday, April 22, 2013

Oracle specific protocol support on AWS ELB

Some Oracle customers run Oracle-specific protocols, and when they move to AWS they would like to know whether AWS services (such as ELB) support these protocols.  AWS ELB supports the HTTP, TCP, HTTPS, and SSL protocols.  The Oracle UCM IDC protocol is not supported with AWS ELB.  Looking at common on-premises software and hardware load balancers (F5, Riverbed, etc.), I do not see any that support Intradoc (IDC).  The more important question to ask is: "What load balancer do you use on premises to load balance IDC traffic?"  If it is a software load balancer, you could run it on an EC2 instance.
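For reference, those supported protocols show up as listener configurations on the ELB; a hedged sketch with the modern AWS CLI creating a classic ELB with a plain TCP listener (the load balancer name, port, and subnet below are placeholders):

# A TCP listener passes traffic through without protocol-level handling.
aws elb create-load-balancer --load-balancer-name tcp-lb --listeners "Protocol=TCP,LoadBalancerPort=4444,InstanceProtocol=TCP,InstancePort=4444" --subnets subnet-1a2b3c4d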

Another protocol that some Oracle customers run is Sockets Direct Protocol (SDP). SDP is an industry-standard wire protocol between InfiniBand network peers. When used over an InfiniBand network, SDP reduces TCP/IP overhead by eliminating intermediate replication of data and transferring most of the messaging burden away from the CPU and onto the network hardware.  This protocol is not supported on AWS. 

Monday, December 17, 2012

Oracle Enterprise Manager and AWS EC2 instance restart


Question: An EC2 restart causes OEM to lose the IP address of the OEM management console. Is there a way to attach a static IP address to an EC2 instance instead of the default one?
Response: If you assign an Elastic IP address to the EC2 instance before installing OEM, then you should have no problem. Once the EC2 instance has been created (without an EIP) and OEM has been installed, there is no other way to do it except to use the workaround described in the "Oracle Enterprise Manager on EC2" post below.

Oracle Database PIOPS and striping


A common question on PIOPS and striping:
Question: Striping with PIOPS - Is it necessary to stripe data when using PIOPS?
Response: It depends on how many PIOPS you are looking to achieve.  Since the maximum per volume is 2,000 PIOPS, if you need more than 2,000 PIOPS you will need to stripe across multiple volumes. At 2,000 PIOPS or less, striping is not required.
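A minimal striping sketch with LVM on Linux (the device names are placeholders for two attached PIOPS volumes); the logical volume's IOPS ceiling becomes the sum of the two volumes':

# Register both EBS volumes with LVM and group them into one volume group.
pvcreate /dev/xvdf /dev/xvdg
vgcreate datavg /dev/xvdf /dev/xvdg
# -i 2 stripes across both volumes; -I 64 uses a 64 KB stripe size.
lvcreate -i 2 -I 64 -l 100%FREE -n datalv datavg
mkfs -t ext4 /dev/datavg/datalv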

Tuesday, October 30, 2012

AWS Route 53 domain name configuration

When entering the A records for your hosted zone (for example, migrating2cloud.com), you need to create two type A (IPv4 address) record sets: one for www.migrating2cloud.com and the other for migrating2cloud.com.  These A record sets should point to your EC2 Elastic IP address or to the ELB's CNAME (or its hosted zone ID).
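For the Elastic IP case, creating the bare-domain record set with the AWS CLI might look like the sketch below (the hosted zone ID and IP address are placeholders); repeat it with "www.migrating2cloud.com." for the www record set:

aws route53 change-resource-record-sets --hosted-zone-id Z1EXAMPLE --change-batch '{"Changes":[{"Action":"CREATE","ResourceRecordSet":{"Name":"migrating2cloud.com.","Type":"A","TTL":300,"ResourceRecords":[{"Value":"203.0.113.10"}]}}]}'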

Wednesday, October 10, 2012

Oracle Enterprise Manager on EC2

The situation: every time you reboot an AWS server (an EC2 instance; you may need to do this to reattach an EBS volume that is in an LVM), Oracle Enterprise Manager gets confused by the temporary host name the instance receives (since the Elastic IP is detached on restart). So when you assign the Elastic IP back to the instance, Enterprise Manager is still referring to the temporary hostname it got when the server rebooted. What is the recommended path to handling a server restart on Elastic IPs without breaking OEM?
Solution:
I found these two items on the web which may be useful:
Both mention using this process at instance boot time:
1. export ORACLE_HOSTNAME=`wget -q -O - http://169.254.169.254/latest/meta-data/public-hostname`
2. export EMKEY_LOCATION=/u02/admin/$ORACLE_SID/dbs
3. emca -config dbcontrol db
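Pulled together, a hypothetical boot-time script (the SID, Oracle home, and paths are placeholders for your installation) might look like:

#!/bin/bash
# Re-point OEM Database Control at the instance's current public hostname.
export ORACLE_SID=orcl
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
# The EC2 metadata service returns the hostname assigned at this boot.
export ORACLE_HOSTNAME=$(wget -q -O - http://169.254.169.254/latest/meta-data/public-hostname)
export EMKEY_LOCATION=/u02/admin/$ORACLE_SID/dbs
# Recreate the Database Control configuration for the new hostname.
$ORACLE_HOME/bin/emca -config dbcontrol db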