AWS Import/Export can handle storage devices larger than 1TB destined for Amazon EBS volumes, but there is an intermediate step through S3. If your storage device's capacity is less than or equal to the maximum Amazon EBS volume size of 1TB, its contents are loaded directly into an Amazon EBS snapshot, so in practice there is no size limit. AWS does not mount the file system on your storage device, nor is a file system required to be present; AWS Import/Export performs a block-for-block copy from your device to an Amazon EBS snapshot. If your storage device's capacity exceeds 1TB, a device image is stored in your specified Amazon S3 log bucket. You can then create a RAID of EBS volumes using software such as Logical Volume Manager (LVM) and copy the image from Amazon S3 to this new volume.
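The last step can be sketched as follows. This is a sketch under assumptions, not a definitive procedure: it assumes two EBS volumes have been attached to the instance at /dev/xvdf and /dev/xvdg (device names, the volume group name, and the bucket/key placeholders are all illustrations):

```shell
# Run as root. Stripe two attached EBS volumes into one LVM logical volume.
pvcreate /dev/xvdf /dev/xvdg
vgcreate vg_import /dev/xvdf /dev/xvdg
# -i 2 stripes the logical volume across both physical volumes.
lvcreate -i 2 -l 100%FREE -n lv_import vg_import
# Stream the device image from the S3 log bucket and write it block-for-block.
aws s3 cp s3://<your-log-bucket>/<device-image> - | dd of=/dev/vg_import/lv_import bs=1M
```

Because the import was a block-for-block copy, the logical volume then carries whatever file system the original device had and can be mounted directly.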
Blog posts to help enterprises run applications in the cloud, with entries on cloud migrations as Fortune 1000 companies embark on moving to the cloud.
Showing posts with label copy. Show all posts
Thursday, April 24, 2014
Monday, December 9, 2013
Redshift example
This is a great place to get started with Redshift: http://docs.aws.amazon.com/redshift/latest/gsg/getting-started.html. There are a couple of pieces of information in the step-by-step instructions I wanted to elaborate on:
1. Creating a cluster subnet group is done in the Redshift area of the console as seen below. This is not evident in the instructions:
This of course assumes a VPC with two public subnets and a route to an IGW has been created.
2. The copy command is issued from SQL Workbench:
copy venue from 's3://awssampledb/tickit/venue_pipe.txt' CREDENTIALS 'aws_access_key_id=<your access key>;aws_secret_access_key=<your secret key>' delimiter '|';
3. Make sure to set auto commit; otherwise you need to commit each statement or block of statements, and the example does not include commit commands.

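The autocommit point can also be sidestepped by committing explicitly. A minimal sketch using psql, which can connect to Redshift since Redshift speaks the PostgreSQL protocol; the endpoint, port, database, and user below are placeholders, not values from the guide:

```shell
# Placeholders: substitute your cluster endpoint, database, user, and password.
PGPASSWORD='<your password>' psql \
  -h examplecluster.abc123.us-east-1.redshift.amazonaws.com -p 5439 \
  -U masteruser -d dev <<'SQL'
copy venue from 's3://awssampledb/tickit/venue_pipe.txt'
credentials 'aws_access_key_id=<your access key>;aws_secret_access_key=<your secret key>'
delimiter '|';
commit;  -- harmless when autocommit is on, required when it is off
SQL
```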
Tuesday, December 3, 2013
Copying key pairs to your bastion host
When using a bastion host to protect your Oracle database instance from anyone on the internet getting into it, you will need to copy the 1024-bit SSH-2 RSA key to your bastion EC2 instance. This can be done using Linux secure copy (scp):
scp -i /Users/tom/EC2KeyPairs/AWSThreeDayIAM.pem DBSysOPS.pem ec2-user@54.22.37.178:~/.
Where:
1. -i /Users/tom/EC2KeyPairs/AWSThreeDayIAM.pem is the key to the bastion host.
2. DBSysOPS.pem is the key file for your Oracle database server.
3. ec2-user@54.22.37.178:~/. is your EC2 instance and the location where the key file will be copied.
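The full hop can be sketched as follows; the key paths and the bastion address are the post's own examples, while the database server's private IP is an assumption for illustration (the post does not give one):

```shell
# Copy the database key to the bastion, then SSH through it.
# 10.0.1.25 is an assumed private IP for the Oracle database instance.
scp -i /Users/tom/EC2KeyPairs/AWSThreeDayIAM.pem DBSysOPS.pem ec2-user@54.22.37.178:~/.
ssh -i /Users/tom/EC2KeyPairs/AWSThreeDayIAM.pem ec2-user@54.22.37.178
# On the bastion: tighten key permissions (SSH refuses world-readable keys),
# then reach the database server on its private address.
chmod 400 DBSysOPS.pem
ssh -i DBSysOPS.pem ec2-user@10.0.1.25
```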
Tuesday, July 23, 2013
Redshift loading data and compression
Redshift is column based, and by virtue of this its data compresses well. Redshift runs on instances with high-density local disk storage. You can tweak the compression encoding for columns once you know your data better. More on this here: http://docs.aws.amazon.com/redshift/latest/dg/t_Compressing_data_on_disk.html
Redshift is designed to load data in quickly. The best approach is using the COPY command to load large amounts of data; using individual INSERT statements to populate a table might be prohibitively slow. Your data needs to be in the proper format for loading into your Amazon Redshift table. This section presents guidelines for preparing and verifying your data before the load and for validating a COPY statement before you execute it.
You should definitely break the input file into manageable chunks and load from gzipped micro-slices on S3.
Be aware when using ETL tools: unless an ETL tool is integrated with Redshift/S3, it may use INSERT statements instead of the COPY command.
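Splitting and gzipping slices before the load can be sketched as below. The file name, slice size, bucket, and table are illustrations, not values from the post (a tiny sample file stands in for a real extract):

```shell
# Create a small sample pipe-delimited file standing in for a large extract.
seq 1 100 | awk '{print $1 "|venue" $1 "|city" $1}' > venue_pipe.txt
# Split into fixed-size slices (real loads use far larger slices, sized to your cluster).
split -l 25 venue_pipe.txt venue_part_
# Gzip each slice.
gzip -f venue_part_*
ls venue_part_*.gz
# Upload the slices and load them all with one COPY over the common prefix
# (commented out; requires AWS credentials and a live cluster):
# aws s3 cp . s3://<your-bucket>/tickit/ --recursive --exclude "*" --include "venue_part_*.gz"
# copy venue from 's3://<your-bucket>/tickit/venue_part_' credentials '...' delimiter '|' gzip;
```

COPY treats the S3 path as a prefix, so one command loads every slice, and Redshift parallelizes the load across them.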
Here is a very good YouTube video on Redshift and data loading:
Here is the place in the video that discusses the copy command:
Wednesday, July 11, 2012
AWS EC2 amazon sftp only for web site upload pem ppk ftp ssh
The most difficult part of creating my AWS EC2 web site (other than the 'coding' part ... more on this tomorrow) was getting the files loaded.
1. Only SFTP can be used; FTP will not work! Discussion is here: SFTP with EC2 (or you can use scp)
2. Convert the PEM file to a .PPK file - I used Filezilla as described here: Converting PEM to PPK file. It seems as though sftp does not use a PEM file the way SSH does.
3. Log in to the EC2 instance using SSH, change user to root (sudo su), and change the permissions on the directory holding your web pages (/var/www/html) to 755 (chmod -R 755 /var/www/html). This is because you will be using the ec2-user account to upload via SFTP.
4. Log in to SFTP using ec2-user and your instance name: sftp ec2-user@
Note: These files go into the directory /var/www/html.
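The steps above can be sketched from the command line; the key file, host name, and uploaded file are placeholders. Note that OpenSSH's command-line sftp does accept a .pem identity file via -i, so the PPK conversion is only needed for GUI clients such as Filezilla:

```shell
# Placeholders: substitute your key file and your instance's public DNS name.
ssh  -i mysite.pem ec2-user@ec2-203-0-113-10.compute-1.amazonaws.com \
     'sudo chmod -R 755 /var/www/html'
sftp -i mysite.pem ec2-user@ec2-203-0-113-10.compute-1.amazonaws.com <<'EOF'
cd /var/www/html
put index.html
EOF
```

One caveat: chmod 755 only grants write access to the directory's owner, so for the put to succeed ec2-user must own /var/www/html; many setups run chown -R ec2-user /var/www/html instead.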