Thursday, October 31, 2013

Oracle Secure Backup details

If you are just getting started with Oracle Secure Backup for AWS, here are some resources to help:


Replication of on-premises Oracle to Oracle RDS

When Oracle databases are moved from on-premises environments to AWS Oracle RDS, there may still be on-premises Oracle (and non-Oracle) databases that need to replicate data to Oracle RDS. Options such as Oracle GoldenGate and Oracle Streams can be used. There are also some other options:

SymmetricDS (open source): http://www.symmetricds.org
Dbvisit Replicate: http://www.dbvisit.com/solutions/data_distribution/
SharePlex: http://www.quest.com/shareplex-for-oracle/
Pervasive: http://integration.pervasive.com/Integration-Scenarios/Data-Migration-ETL.aspx

Tuesday, October 22, 2013

AWS Trend Micro webcast replay

You can view the replay of the Trend Micro webinar on Deep Security as a Service, which covers deploying intrusion detection and prevention, anti-malware, anti-virus, integrity monitoring, firewall, and web reputation capabilities on AWS:

http://www.youtube.com/watch?v=zOWcTC45ruw&feature=youtu.be

Wednesday, October 9, 2013

Loading data from an on-premises Oracle Database to Oracle RDS using RMAN

An RMAN backup set cannot be loaded directly into RDS. However, you can still use RMAN as a mechanism to move an Oracle database from on premises to RDS.

First, move the RMAN backup set using Tsunami, Aspera, Attunity CloudBeam, or standard protocols such as FTP(S) or HTTP(S), which will be slower. More on moving data from on premises to AWS here: http://cloudconclave.blogspot.com/2013/04/on-premise-application-replication-and.html
OR
You can use Oracle Secure Backup to back up the database directly to S3.

Then restore the data into an Oracle Database instance running on EC2, using RMAN (if the backup set is on EBS volumes) or Oracle Secure Backup (if it is in S3).

You then load the data into RDS from the EC2 instance using one of these options:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Oracle.Procedural.Importing.html

You may ask...why go through all this trouble of setting up an Oracle database on EC2 when you could simply export the data from the on-premises Oracle database, move it to EBS or S3, and then load it into RDS? Because an RMAN backup is much faster to produce on the on-premises side, and RMAN yields a more compressed and portable database format to move across the internet to AWS.
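For reference, the on-premises backup step might look something like this in RMAN (the backup destination and format string are illustrative, not from any particular setup):

$ rman target /
RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE FORMAT '/backup/ora_%U.bkp';

The compressed backup set is what you would then push to AWS using one of the transfer options above.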

You can also load the data using Data Pump and the UTL_FILE-based copy process described here:
http://cloudconclave.blogspot.com/2013/10/process-to-load-oracle-data-from-on.html

Native replication for moving data into and out of RDS was announced for MySQL in September 2013, and details can be found here:
http://aws.typepad.com/aws/2013/09/migrate-mysql-data-to-amazon-rds-and-back.html
This is not yet available for Oracle RDS.

Tuesday, October 8, 2013

CloudWatch custom metric examples

Creating a custom metric requires some custom coding or scripting using the AWS Command Line Interface or an SDK; as the name implies, these metrics are custom, so CloudWatch does not collect them for you. Here are some places to start that provide examples of creating custom metrics:
CloudWatch custom metric for MySQL using PHP:
http://aws.typepad.com/aws/2011/05/how-to-use-the-new-cloudwatch-user-defined-metrics-feature-with-php-1.html

Sample Linux scripts that publish custom CloudWatch metrics for memory, disk space, and swap space:
http://aws.amazon.com/code/8720044071969977

More examples using the CloudWatch command line tools to create custom metrics:
http://blogs.clogeny.com/custom-metrics-in-amazon-cloudwatch/

Sample code for instance memory reporting:
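A minimal sketch using the AWS CLI (the namespace, metric name, and free-memory parsing are illustrative choices; the PHP and Perl samples linked above are more complete):

#!/bin/bash
# Publish free memory (in MB) for this EC2 instance as a custom CloudWatch metric.
# Assumes the AWS CLI is installed and configured with credentials and a default region.
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
FREE_MB=$(free -m | awk '/^Mem:/ {print $4}')
aws cloudwatch put-metric-data \
  --namespace "System/Linux" \
  --metric-name MemoryFree \
  --dimensions InstanceId=$INSTANCE_ID \
  --value $FREE_MB \
  --unit Megabytes

Run it from cron (for example, every five minutes) and the metric will appear in the CloudWatch console under the System/Linux namespace.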



Sunday, October 6, 2013

AWS Cost Allocation Reporting


https://event.on24.com/eventRegistration/EventLobbyServlet?target=registration.jsp&eventid=681641&sessionid=1&key=6926138D02D575421B054D213DEE4E7C&partnerref=AWSevents_v1&sourcepage=register

Simplify Cost Allocation Reporting with CloudCheckr on Amazon Web Services

As deployments scale, utilization and purchasing strategies become increasingly important. Amazon Web Services (AWS) and CloudCheckr, an AWS Technology Partner, invite you to join this presentation to learn strategies that can enable finer control and cost-management structures to improve your price-to-performance metrics.
We'll provide an overview of how customers today evaluate resource distribution and configurations to efficiently scale deployments using CloudCheckr. In particular, we'll address the differences between on-demand, reserved, and spot instances and the most appropriate use for each.

Process to load Oracle data from on premises to RDS


RDS does not allow access to the native OS. However, RDS exposes an Oracle directory object (DATA_PUMP_DIR) that you can write to remotely using the UTL_FILE package. Here are the steps:

Local DB configuration
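A minimal sketch of this step (the local SID is an assumption, not from the original environment):

$ export ORACLE_SID=LOCALDB
$ sqlplus / as sysdba
SQL> SELECT name, log_mode FROM v$database;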


Local data pump directory
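Confirming (or creating) the Data Pump directory on the local database looks something like this; the file system path is an assumption:

SQL> CREATE OR REPLACE DIRECTORY data_pump_dir AS '/u01/app/oracle/dpdump';
SQL> SELECT directory_name, directory_path FROM dba_directories WHERE directory_name = 'DATA_PUMP_DIR';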





Here I run the export from my local DB
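The export command looks something like this; the dump file name and credentials are assumptions (schema DEEP is the one imported later in this post):

$ expdp system/manager SCHEMAS=DEEP DIRECTORY=DATA_PUMP_DIR DUMPFILE=deep.dmp LOGFILE=deep_exp.log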






Now update the Perl script (shown at the end of this post) with the dump file name; the UTL_FILE package takes care of copying the file to the RDS instance. It took 3 minutes for a 250 MB file.


Login to RDS database
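Using the connection details from the Perl script below, the login and a check that the dump file arrived might look like this (rdsadmin.rds_file_util.listdir is the RDS-supplied way to list a directory, if your RDS version provides it):

$ sqlplus orauser/orapwd@//myrdshost.xxx.us-east-1-devo.rds-dev.amazonaws.com:4080/myoradb
SQL> SELECT filename, filesize FROM table(rdsadmin.rds_file_util.listdir('DATA_PUMP_DIR'));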






Ran the import
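For example, running impdp from the client machine against the RDS endpoint (the dump file name matches the assumption above):

$ impdp orauser/orapwd@//myrdshost.xxx.us-east-1-devo.rds-dev.amazonaws.com:4080/myoradb SCHEMAS=DEEP DIRECTORY=DATA_PUMP_DIR DUMPFILE=deep.dmp LOGFILE=deep_imp.log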





And here we are – schema DEEP is now present in the RDS database TESTDB.
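A quick check along these lines confirms the schema and its objects:

SQL> SELECT username FROM dba_users WHERE username = 'DEEP';
SQL> SELECT object_type, COUNT(*) FROM dba_objects WHERE owner = 'DEEP' GROUP BY object_type;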





Here is the Perl code:

#!/usr/bin/perl -w

use strict;
use DBI;

# RDS instance info
my $RDS_PORT=4080;
my $RDS_HOST="myrdshost.xxx.us-east-1-devo.rds-dev.amazonaws.com";
my $RDS_LOGIN="orauser/orapwd";
my $RDS_SID="myoradb";

my $SQL_INTEGER         =  4;
my $SQL_VARCHAR         = 12;
my $SQL_LONGRAW         = 24;

# Oracle destination directory and file name (these could become parameters)
my $dirname = "DATA_PUMP_DIR";
my $fname   = "test.bin";

my $data  = "test";
my $chunk = 8192;

my $sql_open   = "BEGIN perl_global.fh  := utl_file.fopen(:dirname, :fname, 'wb', :chunk); END;";
my $sql_write  = "BEGIN utl_file.put_raw(perl_global.fh, :data, true); END;";
my $sql_close  = "BEGIN utl_file.fclose(perl_global.fh); END;";
my $sql_global = "create or replace package perl_global as fh utl_file.file_type; end;";

my $conn = DBI->connect('dbi:Oracle:host='.$RDS_HOST.';sid='.$RDS_SID.';port='.$RDS_PORT,$RDS_LOGIN, '')
|| die ( $DBI::errstr . "\n") ;

# create a package just to have a global variable
my $updated=$conn->do($sql_global);

# open the file for writing
my $stmt = $conn->prepare ($sql_open);
$stmt->bind_param_inout(":dirname", \$dirname, $SQL_VARCHAR);
$stmt->bind_param_inout(":fname", \$fname, $SQL_VARCHAR);
$stmt->bind_param_inout(":chunk", \$chunk, $SQL_INTEGER);
$stmt->execute() || die ( $DBI::errstr . "\n");

open (INF, '<', $fname) ||  die "\nCan't open $fname for reading: $!\n";
binmode(INF);
$stmt = $conn->prepare ($sql_write);
my %attrib = ('ora_type',$SQL_LONGRAW);
# stream the file to the RDS directory in $chunk-sized pieces
while (1) {
  my $len = read (INF, $data, $chunk);
  die "Problem reading $fname: $!\n" unless defined $len;  # read returned an error
  last if $len == 0;                                       # end of file
  $stmt->bind_param(":data", $data, \%attrib);
  $stmt->execute() || die ( $DBI::errstr . "\n") ;
}

close INF || die "Can't close $fname: $!\n";

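# close the file in the Oracle directory object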
$stmt = $conn->prepare ($sql_close);
$stmt->execute() || die ( $DBI::errstr . "\n") ;