While Synchronize works quite well, it is not flexible enough for what I had in mind. To integrate Amazon S3 into a real-life backup and DRP policy, I want a set of atomic functions that let me transfer individual data objects between S3 and the server.
Using the rich set of S3 objects in Jets3t and the samples provided by its author, I found it quite easy to write additional proof-of-concept tools.
I now have Create Bucket, Upload Single File, and Download Single File programs. These are all bare-bones tools; use them at your own risk.
All the tools use the iUtils helper, which provides central management of parameters and credentials. In fact, iUtils reads exactly the same configuration file Synchronize does.
I timed the Upload and Download tools with a 500MB save file; on my network each direction took less than 30 minutes.
The usage is quite simple:
- CreateBucket requires a single parameter, the new bucket name.
- uploadSingleFile requires two parameters: the target bucket name and the file name to upload.
- DownLoadSingleFile requires two parameters: the source bucket name and the file name to download.
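For illustration, here is how the three tools might be driven from CL through QShell, assuming each Java program is wrapped in a small shell script. Only cpytos3.sh appears later in this post as the upload wrapper; crtbuckets3.sh and cpyfroms3.sh are hypothetical names for wrappers around CreateBucket and DownLoadSingleFile.
/* create a new bucket (crtbuckets3.sh is a hypothetical wrapper around CreateBucket) */
qsh cmd('crtbuckets3.sh mybackup')
/* upload a single file into the bucket */
qsh cmd('cpytos3.sh mybackup /backups/s3wk200901.zip')
/* download it again when needed (cpyfroms3.sh is a hypothetical wrapper around DownLoadSingleFile) */
qsh cmd('cpyfroms3.sh mybackup /backups/s3wk200901.zip')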
Using Amazon S3 in an iSeries backup/recovery scenario
S3 can be used as an offsite backup for some of your data, maybe even for all of your data. A library can easily be saved to a save file, and the save file uploaded to S3 storage until needed, at which time it can be downloaded and restored to your iSeries.
For example, look at the following set of commands, which saves changed objects to a save file, zips it, and sends it to S3. The extra zipping step is required because the data compression built into the iSeries save commands is not very efficient, and because I have not yet implemented compression as an integral part of the upload tool (although the functionality exists in the Jets3t package).
/* create save file */
crtsavf s3wk200901
/* backup objects changed since December 31 to save file */
savchgobj obj(*all) lib(mylib) dev(*savf) savf(s3wk200901) +
          refdate('31/12/2008') clear(*all) dtacpr(*high)
/* copy save file to IFS */
cpytostmf frommbr('/qsys.lib/mylib.lib/s3wk200901.file') +
          tostmf('/backups/s3wk200901.savf') stmfopt(*replace)
/* further compress save file */
qsh cmd('jar cMf /backups/s3wk200901.zip /backups/s3wk200901.savf')
/* upload compressed save file to S3 */
qsh cmd('cpytos3.sh mybackup /backups/s3wk200901.zip')
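Recovery is simply the reverse path: download the archive, unzip it, copy it back into a save file, and restore. The following is only a minimal sketch; cpyfroms3.sh is the same hypothetical download wrapper used in the invocation example above, and the restore parameters will of course depend on your own scenario.
/* download the compressed save file from S3 */
qsh cmd('cpyfroms3.sh mybackup /backups/s3wk200901.zip')
/* unzip it; jar stored the path without the leading slash, so extract from the root */
qsh cmd('cd / && jar xf /backups/s3wk200901.zip')
/* create the save file to restore into, if it does not already exist */
crtsavf s3wk200901
/* copy the stream file from the IFS back into the save file */
cpyfrmstmf fromstmf('/backups/s3wk200901.savf') +
           tombr('/qsys.lib/mylib.lib/s3wk200901.file') mbropt(*replace)
/* restore the saved objects into the library */
rstobj obj(*all) savlib(mylib) dev(*savf) savf(s3wk200901) mbropt(*all)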
Comments
Try using DTACPR(*MEDIUM) on the SAVxxx command in order to eliminate the need to ZIP the file.
Also, be very sure you understand the SAVCHGOBJ command. For example, I noticed that you're not specifying OBJJRN(*YES), which means you may not be getting what you think you are getting.
Charles
Thanks, Charles.
Does DTACPR(*MEDIUM) produce smaller archives than DTACPR(*HIGH)? I used the *HIGH option in my example, and yet zipping the save file decreased the size by an additional 10%. I will compare the options to see which one is best.
Using OBJJRN depends on the actual backup/restore scenario. The code in this article is an example of what you may actually run on your server. I assume that my readers are either familiar with the save and restore commands, or that they will take the time to learn them before blindly following instructions.
- Shalom
Years after your post, sorry.
I've created a couple of CLs which create a SAVF, compress it using p7zip (7za) with encryption or jar (zip), and then use SCP, FTP, or SFTP to transfer it to a remote Linux box or even to SoftLayer object storage.
I use DTACPR(*NO) because the SAVF is more compressible that way (with 7za, from 10:1 to 20:1).
Again, sorry for the late-late-late post.
You can see an example here:
http://diego-k.blogspot.mx/2014/12/ibm-i-iseries-as400-haciendo-respaldos.html
PS: Sorry, it is in Spanish.