Saturday, June 29, 2013

Inheriting the Oracle user environment


Here is the command:

sudo -H -u oracle bash --login

You must have permission to sudo directly to "oracle" for this to work (a sample sudoers entry is sketched below). This is preferred over sudo'ing to root and then su'ing to oracle (su - oracle).
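For reference, a minimal sudoers sketch that would grant this access (the dba group name is illustrative, not from the original post; edit with visudo):

# /etc/sudoers.d/oracle-access
# Allow members of the dba group to run commands as the oracle user;
# sudo -H then sets HOME to oracle's home directory for the login shell.
%dba ALL=(oracle) NOPASSWD: ALL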

Friday, June 28, 2013

Oracle Enterprise Applications on AWS

In case you missed this webinar:
Come join us for a good introduction to running Oracle databases and applications on the Amazon Web Services platform. Hear answers to commonly asked questions such as: Which Oracle products are available and possible to run on AWS? What do the support, pricing, and licensing models look like? How do you size Oracle on AWS?
You can view it here:


The solutions page for AWS and Oracle can be found here:


Many of our partners have hosted test drives for Oracle on AWS, which can be invoked from the portal page below:


Thursday, June 27, 2013

Oracle RAC on AWS

Oracle Real Application Clusters (RAC) is not natively supported on AWS. The word natively is used because it is possible to run Oracle RAC in an AWS Direct Connect facility http://aws.amazon.com/directconnect/.

There are a number of options when migrating an Oracle RAC database to AWS.  The option you use depends upon the reason RAC is being used.  For HA and failover, AWS offers multi-AZ capabilities that can provide the same level of service.  For very large databases that require high transaction throughput that cannot be achieved on a single-instance database, Direct Connect would be the solution.  Details on these options are as follows:
1. RDS with multi-AZ :  Oracle RDS is the managed database service from AWS.  Oracle RDS has built-in multi-AZ capabilities.  Because RDS is a managed service, AWS takes care of installation, configuration, and management of the secondary database, the replication between AZs, and the failover and failback of the database instance.  (A CLI sketch for this option follows the list.)
2. EC2 with multi-AZ :  Running on EC2 requires the customer or partner to install, configure, and manage the database and take care of the replication.  Oracle Data Guard or GoldenGate can be used for replication.
3. Direct Connect : The AWS partner Datapipe runs RAC in a managed service model using Direct Connect.
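As an illustration of option 1, here is a hedged sketch of creating a multi-AZ Oracle instance with the AWS CLI (the identifier, instance class, engine edition, and credentials are placeholders):

# Create an Oracle RDS instance with Multi-AZ enabled; RDS manages the standby
# in a second Availability Zone and handles failover automatically.
aws rds create-db-instance \
  --db-instance-identifier my-oracle-db \
  --db-instance-class db.m3.large \
  --engine oracle-se1 \
  --license-model license-included \
  --allocated-storage 100 \
  --master-username admin \
  --master-user-password 'ChangeMe123!' \
  --multi-az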


Note: Remember that AWS RDS only supports databases up to 6 TB in size. This limit changed to 6 TB in June 2015 after being 2 TB. https://aws.amazon.com/about-aws/whats-new/2015/06/amazon-rds-increases-storage-limits-to-6TB-for-piops-and-gp2/. Check the What's New web page for any updates related to Oracle on RDS.

Note:  The reason Oracle Real Application Clusters (RAC) is not supported on AWS is:
1. Multicast is not supported on the AWS network. An overlay network is possible on AWS: http://cloudconclave.blogspot.com/2013/06/overlay-networks-on-aws.html
2. AWS EBS is not a shared disk / clustered file system.
So, even if you use a solution such as Amazon EFS, GlusterFS, Zadara, SoftNAS, or custom NFS for shared disk, you cannot use RAC on AWS because you still need multicast support.  More on Amazon Elastic File System (EFS): https://aws.amazon.com/blogs/aws/amazon-elastic-file-system-shared-file-storage-for-amazon-ec2/

Tuesday, June 25, 2013

Migrating enterprise applications to AWS

There are many infrastructure artifacts that need to be addressed when migrating enterprise workloads to AWS: compute, storage, database, security, load balancers, firewalls, data encryption, management, monitoring, patching, anti-virus, and intrusion detection to name a few.  There are also development artifacts that need to be addressed: packaging, testing, deployment, versioning, and upgrades to name a few.  These on-premises constructs need to be mapped to the same, equivalent, or new constructs on AWS.

There are also a number of migration approaches that can be employed: lift and shift, leverage and extend, or re-engineering.  The approach taken depends upon the application attributes - Oracle and SAP Commercial Off-The-Shelf (COTS) applications, Microsoft and other third-party packaged applications, or custom systems.  Each approach has strengths and weaknesses.

AWS offers services that are not readily accessible in your on-premises environment.  There are services, including CloudFront and Auto Scaling, that can be used to increase performance and reduce cost without changing a line of application code.

In future blogs, I will begin to identify and address the mappings, approaches, and services that can be used to migrate enterprise workloads to AWS.

Wednesday, June 19, 2013

EMaaS : Oracle OEM 12c as a Service webinar


Learn how Oracle Enterprise Manager 12c Cloud Control enables you to monitor and manage the complete enterprise application stack, web tier to storage, for both dedicated and cloud-based infrastructure environments.
Join Amazon Web Services (AWS) and Apps Associates, an AWS Partner Network (APN) Advanced Consulting Partner, to hear how you can leverage the AWS platform to run a centralized OEM 12c environment and free up your administrative resources.

Register here:


https://event.on24.com/eventRegistration/EventLobbyServlet?target=registration.jsp&eventid=641039&sessionid=1&key=C3BFA66F187A7E2A5DD0C6865B83A3E6&partnerref=AWSemail_v1&mkt_tok=3RkMMJWWfF9wsRow5%2FmYJoDpwmWGd5mht7VzDtPj1OY6hBkuIryWflL6iuNDGZQ7fbSfDAMKHJ52xQ5LH%2FKDdYNF6Q%3D%3D&sourcepage=register

Replay can be found here:
http://www.youtube.com/watch?v=XSBND55sghc

Friday, June 14, 2013

Tunnelblick not working on Mac

Using this version of Tunnelblick: Tunnelblick 3.2.9 (build 2891.3328)  -  OpenVPN 2.2.1

The log file kept showing this error: "/sbin/ifconfig tun0 delete
ifconfig: ioctl (SIOCDIFADDR): Can't assign requested address".  I was able to connect to my OpenVPN EC2 instance over VPN but not to any of the other instances in my VPC.

The easiest fix is to download Viscosity and use that client product instead.

OpenVPN Server on AWS EC2


OpenVPN is a popular way to create an encrypted IPSec or SSL tunnel from client machines to AWS.  However, there is not much documentation on the web that walks through setting up OpenVPN on AWS along with the necessary client tools and configuration.  Here are step-by-step instructions for creating an encrypted SSL tunnel, with caveats included:

1. Create the OpenVPN instance on AWS: Spin up an Amazon Linux server (m1.small is fine) in a public subnet in the VPC you want to connect to. The VPC has to be a 10.0.0.0/16 network or you'll have to adjust these instructions a bit.  Put it in a separate security group with TCP 443 inbound from everywhere (for VPN connections) and TCP 22 inbound only from IPs you trust (for SSH administration).
      Note:
·   Need to create a VPC with a public subnet.  I created a VPC with a public and a private subnet, as the whole idea behind this exercise is to keep instances locked down from the outside world by placing them in private subnets.
·   Need to create a security group.  I created a new security group, OpenVPNConfig.  TCP port 443 (the OpenVPN server port) needs a custom TCP rule for addresses in 10.0.0.0/16 (a CLI sketch of the security group setup follows these notes).
·   Open port 22 (SSH) to 0.0.0.0/0 for now, just to be able to work with the instance and configure it properly.  Once the OpenVPN server is running and tested, you will connect via VPN only, so you should remove this rule.
·   Create a new key pair if desired.
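A hedged sketch of this security group setup with the AWS CLI (the VPC and group IDs are placeholders; this opens 443 to the world per step 1 above):

# Create the security group in the target VPC
aws ec2 create-security-group --group-name OpenVPNConfig \
  --description "OpenVPN server access" --vpc-id vpc-12345678

# Allow OpenVPN over TCP 443, plus temporary SSH for initial configuration
aws ec2 authorize-security-group-ingress --group-id sg-11111111 \
  --protocol tcp --port 443 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-11111111 \
  --protocol tcp --port 22 --cidr 0.0.0.0/0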

2. Give the OpenVPN instance an EIP. You can do this by allocating an Elastic IP and associating it with the instance (its ENI). A CLI sketch follows.
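A hedged CLI sketch of the EIP step (the instance and allocation IDs are placeholders):

# Allocate a VPC Elastic IP and associate it with the OpenVPN instance
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-0abc1234 --allocation-id eipalloc-12345678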

3. Log in, and install OpenVPN with yum:

      A. sudo yum -y install openvpn

4. Do this: http://www.openlogic.com/wazi/bid/188052/From-Zero-to-OpenVPN-in-30-Minutes, with one caveat: when you run the build-dh command it will generate a dh1024.pem file – that's the one you need, not 01.pem. (The easy-rsa commands are sketched after the notes below.)
NOTE:
1.    The instructions say the location is /usr/share/doc/openvpn/examples/easy-rsa/2.0, but it is actually /usr/share/openvpn/easy-rsa/2.0
2.    Need to execute as root: sudo su
3.    Command used: cp -r /usr/share/openvpn/easy-rsa/2.0 /etc/openvpn/
4.    I actually got 01.pem, 02.pem, and dh1024.pem
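For reference, a hedged sketch of the easy-rsa 2.0 key-generation steps from that guide (run as root from /etc/openvpn/2.0; the certificate names match the files referenced later in this post):

cd /etc/openvpn/2.0
source ./vars                        # load the easy-rsa environment variables
./clean-all                          # wipe any previous keys (destructive)
./build-ca                           # create the certificate authority (ca.crt)
./build-key-server openvpn-system    # server certificate and key
./build-key tom                      # client certificate and key
./build-dh                           # Diffie-Hellman parameters (dh1024.pem)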
Server Configuration – The rest is the same as the web page above, starting at the section called Server Configuration.

5. Adjust the /etc/openvpn/openvpn.conf file to be something like this.  Note this uses TCP 443 instead of UDP so it'll get through the AWS firewall.
port 443
proto tcp-server
dev tun
ca /etc/openvpn/keys/ca.crt
cert /etc/openvpn/keys/test-system.crt
key /etc/openvpn/keys/test-system.key
dh /etc/openvpn/keys/dh1024.pem
cipher BF-CBC
server 10.8.0.0 255.255.255.0
push "route 10.0.0.0 255.255.0.0"
comp-lzo
verb 6
ifconfig-pool-persist /etc/openvpn/ipp.txt
keepalive 10 120
status openvpn-status.log

My file:
port 443
proto tcp-server
dev tun
ca /etc/openvpn/2.0/keys/ca.crt
cert /etc/openvpn/2.0/keys/openvpn-system.crt
key /etc/openvpn/2.0/keys/openvpn-system.key
dh /etc/openvpn/2.0/keys/dh1024.pem
cipher BF-CBC
server 10.8.0.0 255.255.255.0
push "route 10.0.0.0 255.255.0.0"
comp-lzo
verb 6
ifconfig-pool-persist /etc/openvpn/ipp.txt
keepalive 10 120
status openvpn-status.log

6. Make it auto-start: sudo chkconfig openvpn on && sudo service openvpn start
Note:
1. The startup failed the first time I tried it because of an error in my config file.  I had to run openvpn without chkconfig and with --config <config file location and name> to find out what the error was (see the sketch below).
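A hedged sketch of running the daemon in the foreground so the config error prints to the terminal (path assumes the file above):

# Run OpenVPN interactively to surface configuration errors
sudo openvpn --config /etc/openvpn/openvpn.conf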

7. Copy the client key, client certificate, and ca.crt from the server (or whatever you generated with build-key, etc., in the instructions you followed above) to your Mac.
            A. My files were: ca.crt, openvpn-system.crt, tom.crt, tom.key, 01.pem (seems to be associated with openvpn-system.crt), and 02.pem (seems to be associated with tom.crt)
B. And Diffie-Hellman pem files: dh1024.pem
C. Copy using scp: scp -i /Users/tomlasz/Documents/Documents/EC2KeyPairs/OpenVPN.pem ec2-user@<elastic ip address>:/etc/openvpn/2.0/keys/tom.crt .
D. I needed to do a chmod 777 on the keys directory to get scp to work.
E. I needed to do a chmod 644 on the tom.key file to get scp to work on that file.

8. Set up iptables on the OpenVPN server so that it will NAT out to the VPC for clients connecting to the VPN.  Here are all the commands you need, assuming you used the instructions above.  As root (sudo -s), run these on the server:

iptables -I FORWARD -i tun0 -o eth0 -s 10.8.0.0/24 -d 10.0.0.0/16 -m conntrack --ctstate NEW -j ACCEPT
iptables -I FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -t nat -I POSTROUTING -o eth0 -s 10.8.0.0/24 -j MASQUERADE
iptables -t nat -I POSTROUTING -o eth0 -s 10.0.0.0/16 -j MASQUERADE

9. Make these rules auto-start by adding those lines to a file such as /etc/iptables.conf and then adding this line to /etc/rc.local:

iptables-restore < /etc/iptables.conf
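A hedged sketch of capturing the current rules and wiring up the restore (same paths as above):

# Save the active iptables rules to the file referenced in /etc/rc.local
sudo sh -c 'iptables-save > /etc/iptables.conf'
# Append the restore command so the rules come back after a reboot
echo 'iptables-restore < /etc/iptables.conf' | sudo tee -a /etc/rc.local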

10. You're done with the server. 
11. On your Mac (client machine), install Tunnelblick and use this client config, changing the location of the keys to the files you copied from the server and changing the public IP to match the EIP of your OpenVPN server.

client
dev tun
proto tcp-client

# enter the server's hostname
# or IP address here, and port number
remote <elastic ip> 443

#resolv-retry infinite
nobind
persist-key
persist-tun

# Use the full filepaths to your
# certificates and keys
ca /Users/myan/.openvpn/ca.crt
cert /Users/myan/.openvpn/client1.crt
key /Users/myan/.openvpn/client1.key

ns-cert-type server
comp-lzo
#verb 6


12. Follow the Tunnelblick instructions using the OpenVPN client config above, and you're good.  Tunnelblick can have issues on the Mac; if you do run into issues, try Viscosity.

13. Connect.  Now you can connect to all the 10.x.y.z private addresses in your VPC, provided that their security group allows inbound connections from the security group that you created for the OpenVPN server.

Note: I changed SSH in the security group of my OpenVPN instance to 10.0.0.0/16 from open to the world (0.0.0.0/0) now that I know it works.


14. Once it's working, roll up that OpenVPN server into an AMI; then you can launch it into any VPC with a 10.0.0.0/16 network and connect to its EIP from Tunnelblick, giving you access to all EC2 instances in the VPC through their private addresses.  No jump box, no EIPs – easy.  (Provided your security groups let in connections from the VPN server, which I do by default in all VPCs now.)

Friday, June 7, 2013

MFA made easy

AWS accounts can use a virtual MFA device (iPhone, Android, etc.).  This allows you to secure your AWS account with more than just a simple password.  For the iPhone, the Google Authenticator application is certified for use with AWS MFA.  http://aws.amazon.com/mfa/

AWS EMR : Getting started for Oracle DBAs


Newer technologies such as MapReduce (AWS EMR, Hadoop) and NoSQL (MongoDB, AWS DynamoDB, ...) can be confusing to Oracle DBAs.  This blog post takes a quick look at AWS Elastic MapReduce (EMR) and attempts to demystify it for Oracle DBAs.  Going back to before RDBMS products, MapReduce is like a mainframe batch job with no restartability built in.  MapReduce facilitates the processing of large volumes of data in one large batch.  This one large batch, however, is broken into tens or hundreds of smaller pieces of work and processed by MapReduce worker nodes.  This makes MapReduce a great solution for processing web logs, sensor data, genome data, large volumes of transactions, telephone call detail records, vote ballots, and other cases where large volumes of data need to be processed once and the results stored.

MapReduce is a framework, so you have to write to an API in your application in order to take advantage of it.  There are a number of implementations of this framework, including Apache Hadoop and AWS Elastic MapReduce (EMR).  Apache Hadoop has no native data store associated with it (although the Hadoop Distributed File System - HDFS - can be used natively).

As mentioned, you need to code your own application using the MapReduce framework.  AWS makes getting started with MapReduce easier by providing sample applications for EMR.  One of the five sample EMR applications is a Java application for processing AWS CloudFront logs.  The CloudFront HTTP LogAnalyzer is a Java application that uses Cascading to analyze and generate usage reports from Amazon CloudFront HTTP access logs.  You specify the EMR input source (the CloudFront log location in S3) in the JAR arguments, and you also specify the S3 bucket that will hold the results (output).


For the CloudFront HTTP LogAnalyzer the input and output files use S3.  However, HDFS and AWS DynamoDB are commonly used as input sources and sometimes as output targets.  You may want to use DynamoDB as an output target if you wish to load the results into Redshift or do future BI analysis on the results.  You could also send the results to an AWS SQS queue to be processed later into S3, DynamoDB, RDS, or some other persistent data store.
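As an illustration, here is a hedged sketch of running a custom JAR step like the LogAnalyzer with the AWS CLI (the AMI version, bucket names, JAR path, and arguments are placeholders, not the actual sample's values):

# Launch a small EMR cluster that runs one custom JAR step and then terminates
aws emr create-cluster \
  --name "cloudfront-log-analyzer" \
  --ami-version 3.11.0 \
  --instance-type m1.large --instance-count 3 \
  --auto-terminate \
  --steps Type=CUSTOM_JAR,Name=LogAnalyzer,Jar=s3://my-bucket/loganalyzer.jar,Args=[s3://my-logs/cloudfront/,s3://my-bucket/output/]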

Memcache on AWS


ElastiCache is protocol-compliant with Memcached. Therefore, you can use standard Memcached operations like get, set, incr, and decr in exactly the same way as you would in your existing Memcached deployments.  Existing applications do not need to change and can simply be 'pointed at' the ElastiCache service.  ElastiCache is exposed on the same port (11211) as the default Memcached port.  To configure and manage the cache servers for your application, you use the AWS Management Console, CLI, SDK, or web services.
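A hedged sketch of exercising an ElastiCache node directly with the Memcached text protocol (the endpoint is a placeholder; run this from an EC2 instance permitted by the cache security group):

# Set a 5-byte value with a 900-second TTL, then read it back
printf 'set greeting 0 900 5\r\nhello\r\nget greeting\r\nquit\r\n' \
  | nc my-cache.abc123.0001.use1.cache.amazonaws.com 11211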

AWS security with iptables


AWS Security Groups are one of the key ways to secure your AWS environment. iptables adds another layer of security on top of security groups.  For example, security groups are static rule sets - you cannot have them respond automatically to an attack.  iptables is well suited to more dynamic rules, either adapting to certain scenarios or providing finer-grained conditional control.
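A hedged example of the kind of dynamic rule security groups cannot express - rate-limiting new SSH connections per source IP with the iptables recent module (the thresholds are illustrative):

# Track new SSH connections and drop a source that opens more than 4 in 60 seconds
iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -m recent --set --name SSH
iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -m recent --update --seconds 60 --hitcount 4 --name SSH -j DROP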

EC2 instances adding SSH users



Each EC2 Linux instance has an ec2-user account that can be used to access the instance via SSH.  You can add additional SSH users by following these instructions:  http://www.e-zest.net/blog/how-to-add-ssh-users-in-amazon-linux/
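A hedged sketch of the usual steps behind those instructions (the username and key path are placeholders):

# Create the user and install their public key for SSH access
sudo adduser newuser
sudo mkdir -p /home/newuser/.ssh
sudo cp /path/to/newuser.pub /home/newuser/.ssh/authorized_keys
sudo chown -R newuser:newuser /home/newuser/.ssh
sudo chmod 700 /home/newuser/.ssh
sudo chmod 600 /home/newuser/.ssh/authorized_keys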


ELB and Security Groups


The ELB security group for EC2-Classic cannot be changed, and the ELB security group name is the same across all ELBs launched in an account. This means there is no way to control the security group for different apps, since the same security group name applies across the account.  VPC changes this: each ELB gets its own security group.

Oracle Database on EC2 HA environment


Oracle Data Guard can be used to set up one or several standby databases, which are the foundation of a highly available environment. It maintains the standby databases as transaction-consistent copies of the primary database.  These instances can be placed in several Availability Zones. Then, if the production database becomes unavailable because of a planned or unplanned outage of the instance or of the full Availability Zone, Data Guard can switch any standby database to the production role, minimizing the downtime associated with the outage.  It has three protection modes, allowing customers to maximize protection, availability, or performance. The Active Data Guard option enables read-only access to the standby databases, thereby allowing customers to run read queries and reports on the standby instances, and to perform backups from a standby instance.

You can have zero data loss by using the appropriate data protection mode:

http://docs.oracle.com/cd/B28359_01/server.111/b28294/protection.htm#CHDEDGIF
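A hedged sketch of setting the protection mode with the Data Guard broker CLI (connection details are placeholders; maximum availability also requires SYNC redo transport to the standby):

# From a host with the Data Guard broker configuration in place
dgmgrl sys/password@primary
DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MaxAvailability;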

DynamoDB cross region support


DynamoDB does not currently support cross-region replication. Therefore, if you want to store your data in two (or more) regions, you have to roll your own replication implementation.

SNS cross region support


SNS is available in all AWS regions.  It is possible to publish to SNS topics in one region and have subscribers in another region.
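A hedged CLI sketch (the topic name, account ID, and endpoint are placeholders):

# Create a topic in us-east-1
aws sns create-topic --name cross-region-demo --region us-east-1

# Subscribe an HTTPS endpoint hosted in another region to that topic
aws sns subscribe --region us-east-1 \
  --topic-arn arn:aws:sns:us-east-1:123456789012:cross-region-demo \
  --protocol https --notification-endpoint https://app-eu-west-1.example.com/sns

# Publish from anywhere by targeting the topic's home region
aws sns publish --region us-east-1 \
  --topic-arn arn:aws:sns:us-east-1:123456789012:cross-region-demo \
  --message "hello from another region"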

ELB time out


The default timeout on an AWS ELB is 60 seconds.  There is a 17-minute maximum for ELB timeouts, but the connection from the ELB to the backend app servers is persistent by default. You have to manually request a change to adjust the front-facing timeout value.  Details here: https://forums.aws.amazon.com/thread.jspa?threadID=110299.

Oracle Database on ephemeral drives


Using EC2 ephemeral storage (either disk or SSD) is a way to achieve higher IO throughput.
You could use the design pattern Redshift uses (Redshift runs on HS1.* instances, which have storage characteristics similar to the hi1.4xlarge instance) - "the first line of defense consists of two replicated copies of your data, spread out over up to 24 drives on different nodes within your data warehouse cluster".  This includes:

  1.  All data written to a node in your cluster is automatically replicated to other nodes within the cluster
  2.  All data is continuously backed up to Amazon S3

SSD-backed ephemeral storage is the recommendation for getting the highest level of IO when running Oracle on EC2.

Wednesday, June 5, 2013

Overlay networks on AWS

Software Defined Networking (SDN) is all the rage, and for good reason.  A couple of very good reasons when running on AWS are:
1. Supporting multicast, which AWS does not natively support
2. Dealing with overlapping network addresses when using AWS VPC

There are a number of alternatives, for example OpenPGM: https://code.google.com/p/openpgm/
This one is certified to run on AWS:
http://www.cohesiveft.com/products/vns3

Here is a very comprehensive post on multicast on AWS:

How to support on-premises and EBS volume backup to S3 with one tool

There are many solutions for moving data (backup, replication, and synchronization) between your on-premises environment and AWS S3. However, when you are looking for a solution that can back up your AWS EBS volumes to S3, there are not as many.  There are some vendor-specific products (e.g., Oracle Secure Backup) but nothing that natively (at a file system or raw partition level) backs up EBS to S3.  Such a solution would also have the added value of letting you back up on-premises disk and EBS volumes with the same tool/product and method.  One company that offers such a solution is Ctera (http://www.ctera.com).

Tuesday, June 4, 2013

Bulk loading data to AWS from on premises


I spoke about bulk loading data to AWS in this blog post:


A couple other options I did not mention are:

1. Aspera - Aspera has developed a proprietary file transfer protocol based on UDP, which has been shown to deliver very high-speed file transfers over the Internet.

2. http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/UsingEMR_s3distcp.html   Apache DistCp is an open-source tool you can use to copy large amounts of data. DistCp uses MapReduce to copy in a distributed manner.  You can also use S3DistCp to copy data between Amazon S3 buckets or from HDFS to Amazon S3.  Since it is based upon MapReduce, it is most applicable when moving HDFS and other MapReduce files, as you will obviously have MapReduce installed in your environment.  (A sketch follows.)
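A hedged sketch of running S3DistCp as a step on an existing EMR cluster with the AWS CLI (the cluster ID, paths, and JAR location are placeholders; the JAR location varies by EMR version):

# Copy a directory from HDFS on the cluster to S3
aws emr add-steps --cluster-id j-XXXXXXXXXXXX \
  --steps Type=CUSTOM_JAR,Name=S3DistCp,Jar=/home/hadoop/lib/emr-s3distcp-1.0.jar,Args=[--src,hdfs:///data/,--dest,s3://my-bucket/data/]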




Monday, June 3, 2013

OEM 12c test drive on AWS

Now you can try out Oracle Enterprise Manager 12c on AWS: https://blogs.oracle.com/oem/entry/test_drive_oracle_enterprise_manager

AWS internal facing ELB


When you create a load balancer in your VPC, you can specify whether the load balancer is internet-facing (the default) or internal. If you select internal, you do not need an internet gateway to reach the load balancer, and the private IP addresses of the load balancer will be used in the load balancer's DNS record.  You can also specify a security group specific to that ELB; in EC2-Classic, you don't have control over the security group name.
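A hedged sketch of creating an internal load balancer in a VPC with the AWS CLI (the name, subnet, and security group are placeholders):

# Create an internal ELB that is only reachable from inside the VPC
aws elb create-load-balancer --load-balancer-name internal-app-elb \
  --scheme internal \
  --subnets subnet-12345678 \
  --security-groups sg-12345678 \
  --listeners Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80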

AWS Direct Connect active active with failover

A common question is: "Can you use Direct Connect connections as active/active but also for failover?"
    • Yes, Active/Active (BGP multipath). Network traffic is load balanced across both connections. If one connection becomes unavailable, all traffic is routed through the other. This is the default configuration.

AWS VPN connection as a Direct Connect fail over connection


Another question that comes up often: