diff --git a/NEWS b/NEWS index dfef070..9dc7d18 100644 --- a/NEWS +++ b/NEWS @@ -1,59 +1,37 @@ -s3cmd 0.9.9-rc3 - 2009-02-02 -=============== -* Fixed crash in S3Error().__str__() (typically Amazon's Internal - errors, etc). +s3cmd 0.9.9 - 2009-02-17 +=========== +New commands: +* Commands for copying and moving objects, within or + between buckets: [cp] and [mv] (Andrew Ryan) +* CloudFront support through [cfcreate], [cfdelete], + [cfmodify] and [cfinfo] commands. (sponsored by Joseph Denne) +* New command [setacl] for setting ACL on existing objects, + use together with --acl-public/--acl-private (sponsored by + Joseph Denne) -s3cmd 0.9.9-rc2 - 2009-01-30 -=============== -* Fixed s3cmd crash when put / get / sync found - zero files to transfer. -* In --dry-run output files to be deleted only - with --delete-removed, otherwise users get scared. - -s3cmd 0.9.9-rc1 - 2009-01-27 -=============== -* CloudFront support through cfcreate, cfdelete, - cfmodify and cfinfo commands. +Other major features: +* Improved source dirname handling for [put], [get] and [sync]. +* Recursive and wildcard support for [put], [get] and [del]. +* Support for non-recursive [ls]. +* Enabled --dry-run for [put], [get] and [sync]. +* Allowed removal of non-empty buckets with [rb --force]. +* Implemented progress meter (--progress / --no-progress) * Added --include / --rinclude / --(r)include-from options to override --exclude exclusions. -* Enabled --dry-run for [put] and [get]. +* Added --add-header option for [put], [sync], [cp] and [mv]. + Good for setting e.g. Expires or Cache-control headers. +* Added --list-md5 option for [ls]. +* Continue [get] partially downloaded files with --continue +* New option --skip-existing for [get] and [sync]. + +Minor features and bugfixes: * Fixed GPG (--encrypt) compatibility with Python 2.6. - -s3cmd 0.9.9-pre5 - 2009-01-22 -================ -* New command 'setacl' for setting ACL on existing objects. -* Recursive [put] with a slightly different semantic. -* Multiple sources for [sync] and slightly different semantics. -* Support for --dry-run with [sync] - -s3cmd 0.9.9-pre4 - 2008-12-30 -================ -* Support for non-recursive [ls] -* Support for multiple sources and recursive [get]. -* Improved wildcard [get]. -* New option --skip-existing for [get] and [sync]. -* Improved Progress class (fixes Mac OS X) +* Always send Content-Length header to satisfy some http proxies. * Fixed installation on Windows and Mac OS X. * Don't print nasty backtrace on KeyboardInterrupt. * Should work fine on non-UTF8 systems, provided all - the files are in current system encoding. + the files are in current system encoding. * System encoding can be overriden using --encoding. - -s3cmd 0.9.9-pre3 - 2008-12-01 -================ -* Bugfixes only - - Fixed sync from S3 to local - - Fixed progress meter with Unicode chars - -s3cmd 0.9.9-pre2 - 2008-11-24 -================ -* Implemented progress meter (--progress / --no-progress) -* Removing of non-empty buckets with --force -* Recursively remove objects from buckets with a given - prefix with --recursive (-r) -* Copying and moving objects, within or between buckets. - (Andrew Ryan) -* Continue getting partially downloaded files with --continue * Improved resistance to communication errors (Connection reset by peer, etc.) 
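For illustration, the new commands and options listed above combine like this (the bucket and file names below are made up, shown only to sketch the syntax):

    s3cmd cp s3://bucket-a/report.pdf s3://bucket-b/report.pdf
    s3cmd mv s3://bucket-a/draft.txt s3://bucket-a/final.txt
    s3cmd setacl --acl-public s3://bucket-a/report.pdf
    s3cmd put --add-header "Cache-Control: max-age=86400" index.html s3://bucket-a/
    s3cmd ls --list-md5 s3://bucket-a
    s3cmd get --continue s3://bucket-a/big-image.iso
    s3cmd rb --force s3://bucket-a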
diff --git a/PKG-INFO b/PKG-INFO index 2548f99..513e5c7 100644 --- a/PKG-INFO +++ b/PKG-INFO @@ -1,8 +1,8 @@ Metadata-Version: 1.0 Name: s3cmd -Version: 0.9.9-rc3 +Version: 0.9.9 Summary: Command line tool for managing Amazon S3 and CloudFront services -Home-page: http://s3tools.logix.cz +Home-page: http://s3tools.org Author: Michal Ludvig Author-email: michal@logix.cz License: GPL version 2 diff --git a/README b/README index 3544de5..cffcba6 100644 --- a/README +++ b/README @@ -5,10 +5,17 @@ Michal Ludvig S3tools / S3cmd project homepage: - http://s3tools.sourceforge.net - -S3tools / S3cmd mailing list: - s3tools-general@lists.sourceforge.net + http://s3tools.org + +S3tools / S3cmd mailing lists: + * Announcements of new releases: + s3tools-announce@lists.sourceforge.net + + * General questions and discussion about usage + s3tools-general@lists.sourceforge.net + + * Bug reports + s3tools-bugs@lists.sourceforge.net Amazon S3 homepage: http://aws.amazon.com/s3 @@ -79,41 +86,84 @@ That's the pricing model of Amazon S3 in a nutshell. Check Amazon S3 homepage at http://aws.amazon.com/s3 for more -details. +details. Needless to say that all this money is charged by Amazon itself; there is obviously no charge for using S3cmd :-) Amazon S3 basics ---------------- -Files stored in S3 are called "objects" and their names are -officially called "keys". Each object belongs to exactly one -"bucket". Buckets are kind of directories or folders with -some restrictions: 1) each user can only have 100 buckets at -the most, 2) bucket names must be unique amongst all users -of S3, 3) buckets can not be nested into a deeper -hierarchy and 4) a name of a bucket can only consist of basic -alphanumeric characters plus dot (.) and dash (-). No spaces, -no accented or UTF-8 letters, etc. - -On the other hand there are almost no restrictions on object -names ("keys"). These can be any UTF-8 strings of up to 1024 -bytes long. Interestingly enough the object name can contain -forward slash character (/) thus a "my/funny/picture.jpg" is -a valid object name. Note that there are not directories nor +Files stored in S3 are called "objects" and their names are +officially called "keys". Since this is sometimes confusing +for the users we often refer to the objects as "files" or +"remote files". Each object belongs to exactly one "bucket". + +To describe objects in S3 storage we invented a URI-like +schema in the following form: + + s3://BUCKET +or + s3://BUCKET/OBJECT + +Buckets +------- +Buckets are sort of like directories or folders with some +restrictions: +1) each user can only have 100 buckets at the most, +2) bucket names must be unique amongst all users of S3, +3) buckets can not be nested into a deeper hierarchy and +4) a name of a bucket can only consist of basic alphanumeric + characters plus dot (.) and dash (-). No spaces, no accented + or UTF-8 letters, etc. + +It is a good idea to use DNS-compatible bucket names. That +for instance means you should not use upper case characters. +While DNS compliance is not strictly required some features +described below are not available for DNS-incompatible named +buckets. One step further is using a fully qualified +domain name (FQDN) for a bucket - that has even more benefits. + +* For example "s3://--My-Bucket--" is not DNS compatible. +* On the other hand "s3://my-bucket" is DNS compatible but + is not FQDN.
+* Finally "s3://my-bucket.s3tools.org" is DNS compatible + and FQDN provided you own the s3tools.org domain and can + create the domain record for "my-bucket.s3tools.org". + +Look for "Virtual Hosts" later in this text for more details +regarding FQDN named buckets. + +Objects (files stored in Amazon S3) +----------------------------------- +Unlike for buckets there are almost no restrictions on object +names. These can be any UTF-8 strings of up to 1024 bytes long. +Interestingly enough the object name can contain forward +slash character (/) thus a "my/funny/picture.jpg" is a valid +object name. Note that there are not directories nor buckets called "my" and "funny" - it is really a single object name called "my/funny/picture.jpg" and S3 does not care at all that it _looks_ like a directory structure. -To describe objects in S3 storage we invented a URI-like -schema in the following form: - - s3://BUCKET/OBJECT - -See the HowTo later in this document for example usages of -this S3-URI schema. - -Simple S3cmd HowTo +The full URI of such an image could be, for example: + + s3://my-bucket/my/funny/picture.jpg + +Public vs Private files +----------------------- +The files stored in S3 can be either Private or Public. The +Private ones are readable only by the user who uploaded them +while the Public ones can be read by anyone. Additionally the +Public files can be accessed using HTTP protocol, not only +using s3cmd or a similar tool. + +The ACL (Access Control List) of a file can be set at the +time of upload using --acl-public or --acl-private options +with 's3cmd put' or 's3cmd sync' commands (see below). + +Alternatively the ACL can be altered for existing remote files +with 's3cmd setacl --acl-public' (or --acl-private) command. + +Simple s3cmd HowTo ------------------ 1) Register for Amazon AWS / S3 Go to http://aws.amazon.com/s3, click the "Sign up @@ -121,7 +171,7 @@ through the registration. You will have to supply your Credit Card details in order to allow Amazon charge you for S3 usage. - At the end you should posses your Access and Secret Keys + At the end you should have your Access and Secret Keys 2) Run "s3cmd --configure" You will be asked for the two keys - copy and paste @@ -135,66 +185,137 @@ you as of now. So the output will be empty. 4) Make a bucket with "s3cmd mb s3://my-new-bucket-name" - As mentioned above bucket names must be unique amongst + As mentioned above the bucket names must be unique amongst _all_ users of S3. That means the simple names like "test" or "asdf" are already taken and you must make up something - more original. I sometimes prefix my bucket names with - my e-mail domain name (logix.cz) leading to a bucket name, - for instance, 'logix.cz-test': - - ~$ s3cmd mb s3://logix.cz-test - Bucket 'logix.cz-test' created + more original. To demonstrate as many features as possible + let's create a FQDN-named bucket s3://public.s3tools.org: + + ~$ s3cmd mb s3://public.s3tools.org + Bucket 's3://public.s3tools.org' created 5) List your buckets again with "s3cmd ls" Now you should see your freshly created bucket ~$ s3cmd ls - 2007-01-19 01:41 s3://logix.cz-test + 2009-01-28 12:34 s3://public.s3tools.org 6) List the contents of the bucket - ~$ s3cmd ls s3://logix.cz-test - Bucket 'logix.cz-test': + ~$ s3cmd ls s3://public.s3tools.org ~$ It's empty, indeed. 
-7) Upload a file into the bucket - - ~$ s3cmd put addressbook.xml s3://logix.cz-test/addrbook.xml - File 'addressbook.xml' stored as s3://logix.cz-test/addrbook.xml (123456 bytes) - -8) Now we can list the bucket contents again - - ~$ s3cmd ls s3://logix.cz-test - Bucket 'logix.cz-test': - 2007-01-19 01:46 120k s3://logix.cz-test/addrbook.xml - -9) Retrieve the file back and verify that its hasn't been - corrupted - - ~$ s3cmd get s3://logix.cz-test/addrbook.xml addressbook-2.xml - Object s3://logix.cz-test/addrbook.xml saved as 'addressbook-2.xml' (123456 bytes) - - ~$ md5sum addressbook.xml addressbook-2.xml - 39bcb6992e461b269b95b3bda303addf addressbook.xml - 39bcb6992e461b269b95b3bda303addf addressbook-2.xml +7) Upload a single file into the bucket: + + ~$ s3cmd put some-file.xml s3://public.s3tools.org/somefile.xml + some-file.xml -> s3://public.s3tools.org/somefile.xml [1 of 1] + 123456 of 123456 100% in 2s 51.75 kB/s done + + Upload two directory trees into the bucket's virtual 'directory': + + ~$ s3cmd put --recursive dir1 dir2 s3://public.s3tools.org/somewhere/ + File 'dir1/file1-1.txt' stored as 's3://public.s3tools.org/somewhere/dir1/file1-1.txt' [1 of 5] + File 'dir1/file1-2.txt' stored as 's3://public.s3tools.org/somewhere/dir1/file1-2.txt' [2 of 5] + File 'dir1/file1-3.log' stored as 's3://public.s3tools.org/somewhere/dir1/file1-3.log' [3 of 5] + File 'dir2/file2-1.bin' stored as 's3://public.s3tools.org/somewhere/dir2/file2-1.bin' [4 of 5] + File 'dir2/file2-2.txt' stored as 's3://public.s3tools.org/somewhere/dir2/file2-2.txt' [5 of 5] + + As you can see we didn't have to create the /somewhere + 'directory'. In fact it's only a filename prefix, not + a real directory and it doesn't have to be created in + any way beforehand. + +8) Now list the bucket contents again: + + ~$ s3cmd ls s3://public.s3tools.org + DIR s3://public.s3tools.org/somewhere/ + 2009-02-10 05:10 123456 s3://public.s3tools.org/somefile.xml + + Use --recursive (or -r) to list all the remote files: + + ~$ s3cmd ls --recursive s3://public.s3tools.org + 2009-02-10 05:10 123456 s3://public.s3tools.org/somefile.xml + 2009-02-10 05:13 18 s3://public.s3tools.org/somewhere/dir1/file1-1.txt + 2009-02-10 05:13 8 s3://public.s3tools.org/somewhere/dir1/file1-2.txt + 2009-02-10 05:13 16 s3://public.s3tools.org/somewhere/dir1/file1-3.log + 2009-02-10 05:13 11 s3://public.s3tools.org/somewhere/dir2/file2-1.bin + 2009-02-10 05:13 8 s3://public.s3tools.org/somewhere/dir2/file2-2.txt + +9) Retrieve one of the files back and verify that it hasn't been + corrupted: + + ~$ s3cmd get s3://public.s3tools.org/somefile.xml some-file-2.xml + s3://public.s3tools.org/somefile.xml -> some-file-2.xml [1 of 1] + 123456 of 123456 100% in 3s 35.75 kB/s done + + ~$ md5sum some-file.xml some-file-2.xml + 39bcb6992e461b269b95b3bda303addf some-file.xml + 39bcb6992e461b269b95b3bda303addf some-file-2.xml Checksums of the original file matches the one of the retrieved one. Looks like it worked :-) -10) Clean up: delete the object and remove the bucket - - ~$ s3cmd rb s3://logix.cz-test - ERROR: S3 error: 409 (Conflict): BucketNotEmpty - - Ouch, we can only remove empty buckets!
- - ~$ s3cmd del s3://logix.cz-test/addrbook.xml - Object s3://logix.cz-test/addrbook.xml deleted - - ~$ s3cmd rb s3://logix.cz-test - Bucket 'logix.cz-test' removed + To retrieve a whole 'directory tree' from S3 use recursive get: + + ~$ s3cmd get --recursive s3://public.s3tools.org/somewhere + File s3://public.s3tools.org/somewhere/dir1/file1-1.txt saved as './somewhere/dir1/file1-1.txt' + File s3://public.s3tools.org/somewhere/dir1/file1-2.txt saved as './somewhere/dir1/file1-2.txt' + File s3://public.s3tools.org/somewhere/dir1/file1-3.log saved as './somewhere/dir1/file1-3.log' + File s3://public.s3tools.org/somewhere/dir2/file2-1.bin saved as './somewhere/dir2/file2-1.bin' + File s3://public.s3tools.org/somewhere/dir2/file2-2.txt saved as './somewhere/dir2/file2-2.txt' + + Since the destination directory wasn't specified s3cmd + saved the directory structure in the current working + directory ('.'). + + There is an important difference between: + get s3://public.s3tools.org/somewhere + and + get s3://public.s3tools.org/somewhere/ + (note the trailing slash) + S3cmd always uses the last path part, i.e. the word + after the last slash, for naming files. + + In the case of s3://.../somewhere the last path part + is 'somewhere' and therefore the recursive get names + the local files as somewhere/dir1, somewhere/dir2, etc. + + On the other hand in s3://.../somewhere/ the last path + part is empty and s3cmd will only create 'dir1' and 'dir2' + without the 'somewhere/' prefix: + + ~$ s3cmd get --recursive s3://public.s3tools.org/somewhere/ /tmp + File s3://public.s3tools.org/somewhere/dir1/file1-1.txt saved as '/tmp/dir1/file1-1.txt' + File s3://public.s3tools.org/somewhere/dir1/file1-2.txt saved as '/tmp/dir1/file1-2.txt' + File s3://public.s3tools.org/somewhere/dir1/file1-3.log saved as '/tmp/dir1/file1-3.log' + File s3://public.s3tools.org/somewhere/dir2/file2-1.bin saved as '/tmp/dir2/file2-1.bin' + + See? It's /tmp/dir1 and not /tmp/somewhere/dir1 as it + was in the previous example. + +10) Clean up - delete the remote files and remove the bucket: + + Remove everything under s3://public.s3tools.org/somewhere/ + + ~$ s3cmd del --recursive s3://public.s3tools.org/somewhere/ + File s3://public.s3tools.org/somewhere/dir1/file1-1.txt deleted + File s3://public.s3tools.org/somewhere/dir1/file1-2.txt deleted + ... + + Now try to remove the bucket: + + ~$ s3cmd rb s3://public.s3tools.org + ERROR: S3 error: 409 (BucketNotEmpty): The bucket you tried to delete is not empty + + Ouch, we forgot about s3://public.s3tools.org/somefile.xml + We can force the bucket removal anyway: + + ~$ s3cmd rb --force s3://public.s3tools.org/ + WARNING: Bucket is not empty. Removing all the objects from it first. This may take some time... + File s3://public.s3tools.org/somefile.xml deleted + Bucket 's3://public.s3tools.org/' removed Hints ----- @@ -207,44 +328,10 @@ After configuring it with --configure all available options are spitted into your ~/.s3cfg file. It's a text file ready -to be modified in your favourite text editor. - -Multiple local files may be specified for "s3cmd put" -operation.
In that case the S3 URI should only include -the bucket name, not the object part: - -~$ s3cmd put file-* s3://logix.cz-test/ -File 'file-one.txt' stored as s3://logix.cz-test/file-one.txt (4 bytes) -File 'file-two.txt' stored as s3://logix.cz-test/file-two.txt (4 bytes) - -Alternatively if you specify the object part as well it -will be treated as a prefix and all filenames given on the -command line will be appended to the prefix making up -the object name. However --force option is required in this -case: - -~$ s3cmd put --force file-* s3://logix.cz-test/prefixed: -File 'file-one.txt' stored as s3://logix.cz-test/prefixed:file-one.txt (4 bytes) -File 'file-two.txt' stored as s3://logix.cz-test/prefixed:file-two.txt (4 bytes) - -This prefixing mode works with "s3cmd ls" as well: - -~$ s3cmd ls s3://logix.cz-test -Bucket 'logix.cz-test': -2007-01-19 02:12 4 s3://logix.cz-test/file-one.txt -2007-01-19 02:12 4 s3://logix.cz-test/file-two.txt -2007-01-19 02:12 4 s3://logix.cz-test/prefixed:file-one.txt -2007-01-19 02:12 4 s3://logix.cz-test/prefixed:file-two.txt - -Now with a prefix to list only names beginning with "file-": - -~$ s3cmd ls s3://logix.cz-test/file-* -Bucket 'logix.cz-test': -2007-01-19 02:12 4 s3://logix.cz-test/file-one.txt -2007-01-19 02:12 4 s3://logix.cz-test/file-two.txt +to be modified in your favourite text editor. For more information refer to: -* S3cmd / S3tools homepage at http://s3tools.sourceforge.net +* S3cmd / S3tools homepage at http://s3tools.org * Amazon S3 homepage at http://aws.amazon.com/s3 Enjoy! diff --git a/S3/Config.py b/S3/Config.py index c47ba9f..7ff8e6e 100644 --- a/S3/Config.py +++ b/S3/Config.py @@ -7,6 +7,7 @@ from logging import debug, info, warning, error import re import Progress +from SortedDict import SortedDict class Config(object): _instance = None @@ -24,7 +25,9 @@ progress_class = Progress.ProgressCR send_chunk = 4096 recv_chunk = 4096 + list_md5 = False human_readable_sizes = False + extra_headers = SortedDict() force = False get_continue = False skip_existing = False @@ -56,7 +59,6 @@ bucket_location = "US" default_mime_type = "binary/octet-stream" guess_mime_type = True - debug_syncmatch = False # List of checks to be performed for 'sync' sync_checks = ['size', 'md5'] # 'weak-timestamp' # List of compiled REGEXPs diff --git a/S3/PkgInfo.py b/S3/PkgInfo.py index d8c48c1..8569ef0 100644 --- a/S3/PkgInfo.py +++ b/S3/PkgInfo.py @@ -1,6 +1,6 @@ package = "s3cmd" -version = "0.9.9-rc3" -url = "http://s3tools.logix.cz" +version = "0.9.9" +url = "http://s3tools.org" license = "GPL version 2" short_description = "Command line tool for managing Amazon S3 and CloudFront services" long_description = """ diff --git a/S3/S3.py b/S3/S3.py index 62f9fc7..09a7407 100644 --- a/S3/S3.py +++ b/S3/S3.py @@ -154,7 +154,6 @@ self.check_bucket_name(bucket, dns_strict = True) else: self.check_bucket_name(bucket, dns_strict = False) - headers["content-length"] = len(body) if self.config.acl_public: headers["x-amz-acl"] = "public-read" request = self.create_request("BUCKET_CREATE", bucket = bucket, headers = headers) @@ -327,6 +326,8 @@ if not headers: headers = SortedDict() + debug("headers: %s" % headers) + if headers.has_key("date"): if not headers.has_key("x-amz-date"): headers["x-amz-date"] = headers["date"] @@ -356,6 +357,8 @@ def send_request(self, request, body = None, retries = _max_retries): method_string, resource, headers = request debug("Processing request, please wait...") + if not headers.has_key('content-length'): + headers['content-length'] = 
body and len(body) or 0 try: conn = self.get_connection(resource['bucket']) conn.request(method_string, self.format_uri(resource), body, headers) diff --git a/s3cmd b/s3cmd index 562e8f9..22fe874 100755 --- a/s3cmd +++ b/s3cmd @@ -6,6 +6,11 @@ ## License: GPL Version 2 import sys + +if float("%d.%d" %(sys.version_info[0], sys.version_info[1])) < 2.4: + sys.stderr.write("ERROR: Python 2.4 or higher required, sorry.\n") + sys.exit(1) + import logging import time import os @@ -118,18 +123,28 @@ else: raise + if cfg.list_md5: + format_string = u"%(timestamp)16s %(size)9s%(coeff)1s %(md5)32s %(uri)s" + else: + format_string = u"%(timestamp)16s %(size)9s%(coeff)1s %(uri)s" + for prefix in response['common_prefixes']: - output(u"%s %s" % ( - "DIR".rjust(26), - uri.compose_uri(bucket, prefix["Prefix"]))) + output(format_string % { + "timestamp": "", + "size": "DIR", + "coeff": "", + "md5": "", + "uri": uri.compose_uri(bucket, prefix["Prefix"])}) for object in response["list"]: size, size_coeff = formatSize(object["Size"], Config().human_readable_sizes) - output(u"%s %s%s %s" % ( - formatDateTime(object["LastModified"]), - str(size).rjust(8), size_coeff.ljust(1), - uri.compose_uri(bucket, object["Key"]), - )) + output(format_string % { + "timestamp": formatDateTime(object["LastModified"]), + "size" : str(size), + "coeff": size_coeff, + "md5" : object['ETag'].strip('"'), + "uri": uri.compose_uri(bucket, object["Key"]), + }) def cmd_bucket_create(args): s3 = S3(Config()) @@ -310,7 +325,7 @@ uri_final = S3Uri(local_list[key]['remote_uri']) - extra_headers = {} + extra_headers = copy(cfg.extra_headers) full_name_orig = local_list[key]['full_name'] full_name = full_name_orig seq_label = "[%d of %d]" % (seq, local_count) @@ -475,7 +490,7 @@ if Config().recursive and not Config().force: raise ParameterError("Please use --force to delete ALL contents of %s" % uri) elif not Config().recursive: - raise ParameterError("Object name required, not only the bucket name") + raise ParameterError("File name required, not only the bucket name") subcmd_object_del_uri(uri) def subcmd_object_del_uri(uri, recursive = None): @@ -494,7 +509,7 @@ uri_list.append(uri) for _uri in uri_list: response = s3.object_delete(_uri) - output(u"Object %s deleted" % _uri) + output(u"File %s deleted" % _uri) def subcmd_cp_mv(args, process_fce, message): src_uri = S3Uri(args.pop(0)) @@ -508,19 +523,20 @@ if dst_uri.object() == "": dst_uri = S3Uri(dst_uri.uri() + src_uri.object()) - - response = process_fce(src_uri, dst_uri) + + extra_headers = copy(cfg.extra_headers) + response = process_fce(src_uri, dst_uri, extra_headers) output(message % { "src" : src_uri, "dst" : dst_uri}) if Config().acl_public: output(u"Public URL is: %s" % dst_uri.public_url()) def cmd_cp(args): s3 = S3(Config()) - subcmd_cp_mv(args, s3.object_copy, "Object %(src)s copied to %(dst)s") + subcmd_cp_mv(args, s3.object_copy, "File %(src)s copied to %(dst)s") def cmd_mv(args): s3 = S3(Config()) - subcmd_cp_mv(args, s3.object_move, "Object %(src)s moved to %(dst)s") + subcmd_cp_mv(args, s3.object_move, "File %(src)s moved to %(dst)s") def cmd_info(args): s3 = S3(Config()) @@ -676,12 +692,9 @@ info(u"Verifying attributes...") cfg = Config() exists_list = SortedDict() - if cfg.debug_syncmatch: - logging.root.setLevel(logging.DEBUG) for file in src_list.keys(): - if not cfg.debug_syncmatch: - debug(u"CHECK: %s" % file) + debug(u"CHECK: %s" % file) if dst_list.has_key(file): ## Was --skip-existing requested? 
if cfg.skip_existing: @@ -719,10 +732,6 @@ ## Remove from destination-list, all that is left there will be deleted del(dst_list[file]) - - if cfg.debug_syncmatch: - warning(u"Exiting because of --debug-syncmatch") - sys.exit(1) return src_list, dst_list, exists_list @@ -969,12 +978,13 @@ src = item['full_name'] uri = S3Uri(item['remote_uri']) seq_label = "[%d of %d]" % (seq, local_count) - attr_header = None + extra_headers = copy(cfg.extra_headers) if cfg.preserve_attrs: attr_header = _build_attr_header(src) - debug(attr_header) + debug(u"attr_header: %s" % attr_header) + extra_headers.update(attr_header) try: - response = s3.object_put(src, uri, attr_header, extra_label = seq_label) + response = s3.object_put(src, uri, extra_headers, extra_label = seq_label) except S3UploadError, e: error(u"%s: upload failed too many times. Skipping that file." % item['full_name_unicode']) continue @@ -1267,10 +1277,10 @@ #{"cmd":"mkdir", "label":"Make a virtual S3 directory", "param":"s3://BUCKET/path/to/dir", "func":cmd_mkdir, "argc":1}, {"cmd":"sync", "label":"Synchronize a directory tree to S3", "param":"LOCAL_DIR s3://BUCKET[/PREFIX] or s3://BUCKET[/PREFIX] LOCAL_DIR", "func":cmd_sync, "argc":2}, {"cmd":"du", "label":"Disk usage by buckets", "param":"[s3://BUCKET[/PREFIX]]", "func":cmd_du, "argc":0}, - {"cmd":"info", "label":"Get various information about Buckets or Objects", "param":"s3://BUCKET[/OBJECT]", "func":cmd_info, "argc":1}, + {"cmd":"info", "label":"Get various information about Buckets or Files", "param":"s3://BUCKET[/OBJECT]", "func":cmd_info, "argc":1}, {"cmd":"cp", "label":"Copy object", "param":"s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]", "func":cmd_cp, "argc":2}, {"cmd":"mv", "label":"Move object", "param":"s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]", "func":cmd_mv, "argc":2}, - {"cmd":"setacl", "label":"Modify Access control list for Bucket or Object", "param":"s3://BUCKET[/OBJECT]", "func":cmd_setacl, "argc":1}, + {"cmd":"setacl", "label":"Modify Access control list for Bucket or Files", "param":"s3://BUCKET[/OBJECT]", "func":cmd_setacl, "argc":1}, ## CloudFront commands {"cmd":"cflist", "label":"List CloudFront distribution points", "param":"", "func":CfCmd.info, "argc":0}, {"cmd":"cfinfo", "label":"Display CloudFront distribution point parameters", "param":"[cf://DIST_ID]", "func":CfCmd.info, "argc":0}, @@ -1304,9 +1314,6 @@ def main(): global cfg - if float("%d.%d" %(sys.version_info[0], sys.version_info[1])) < 2.4: - sys.stderr.write("ERROR: Python 2.4 or higher required, sorry.\n") - sys.exit(1) commands_list = get_commands_list() commands = {} @@ -1336,7 +1343,7 @@ optparser.add_option("-c", "--config", dest="config", metavar="FILE", help="Config file name. Defaults to %default") optparser.add_option( "--dump-config", dest="dump_config", action="store_true", help="Dump current configuration after parsing config files and command line options and exit.") - optparser.add_option("-n", "--dry-run", dest="dry_run", action="store_true", help="Only show what should be uploaded or downloaded but don't actually do it. May still perform S3 requests to get bucket listings and other information though (only for [sync] command)") + optparser.add_option("-n", "--dry-run", dest="dry_run", action="store_true", help="Only show what should be uploaded or downloaded but don't actually do it. 
May still perform S3 requests to get bucket listings and other information though (only for file transfer commands)") optparser.add_option("-e", "--encrypt", dest="encrypt", action="store_true", help="Encrypt files before uploading to S3.") optparser.add_option( "--no-encrypt", dest="encrypt", action="store_false", help="Don't encrypt files.") @@ -1358,16 +1365,18 @@ optparser.add_option( "--include-from", dest="include_from", action="append", metavar="FILE", help="Read --include GLOBs from FILE") optparser.add_option( "--rinclude", dest="rinclude", action="append", metavar="REGEXP", help="Same as --include but uses REGEXP (regular expression) instead of GLOB") optparser.add_option( "--rinclude-from", dest="rinclude_from", action="append", metavar="FILE", help="Read --rinclude REGEXPs from FILE") - optparser.add_option( "--debug-syncmatch", "--debug-exclude", dest="debug_syncmatch", action="store_true", help="Output detailed information about remote vs. local filelist matching and --exclude processing and then exit") optparser.add_option( "--bucket-location", dest="bucket_location", help="Datacentre to create bucket in. Either EU or US (default)") optparser.add_option("-m", "--mime-type", dest="default_mime_type", type="mimetype", metavar="MIME/TYPE", help="Default MIME-type to be set for objects stored.") optparser.add_option("-M", "--guess-mime-type", dest="guess_mime_type", action="store_true", help="Guess MIME-type of files by their extension. Falls back to default MIME-Type as specified by --mime-type option") + optparser.add_option( "--add-header", dest="add_header", action="append", metavar="NAME:VALUE", help="Add a given HTTP header to the upload request. Can be used multiple times. For instance set 'Expires' or 'Cache-Control' headers (or both) using this options if you like.") + optparser.add_option( "--encoding", dest="encoding", metavar="ENCODING", help="Override autodetected terminal and filesystem encoding (character set). Autodetected: %s" % preferred_encoding) - optparser.add_option("-H", "--human-readable-sizes", dest="human_readable_sizes", action="store_true", help="Print sizes in human readable form.") + optparser.add_option( "--list-md5", dest="list_md5", action="store_true", help="Include MD5 sums in bucket listings (only for 'ls' command).") + optparser.add_option("-H", "--human-readable-sizes", dest="human_readable_sizes", action="store_true", help="Print sizes in human readable form (eg 1kB instead of 1234).") optparser.add_option( "--progress", dest="progress_meter", action="store_true", help="Display progress meter (default on TTY).") optparser.add_option( "--no-progress", dest="progress_meter", action="store_false", help="Don't display progress meter (default on non-TTY).") @@ -1434,6 +1443,21 @@ if cfg.progress_meter: error(u"Option --progress is not yet supported on MS Windows platform. 
Assuming --no-progress.") cfg.progress_meter = False + + ## Pre-process --add-header's and put them to Config.extra_headers SortedDict() + if options.add_header: + for hdr in options.add_header: + try: + key, val = hdr.split(":", 1) + except ValueError: + raise ParameterError("Invalid header format: %s" % hdr) + key_inval = re.sub("[a-zA-Z0-9-.]", "", key) + if key_inval: + key_inval = key_inval.replace(" ", "") + key_inval = key_inval.replace("\t", "") + raise ParameterError("Invalid character(s) in header name '%s': \"%s\"" % (key, key_inval)) + debug(u"Updating Config.Config extra_headers[%s] -> %s" % (key.strip(), val.strip())) + cfg.extra_headers[key.strip()] = val.strip() ## Update Config with other parameters for option in cfg.option_list(): @@ -1519,9 +1543,6 @@ except S3Error, e: error(u"S3 error: %s" % e) sys.exit(1) - except ParameterError, e: - error(u"Parameter problem: %s" % e) - sys.exit(1) if __name__ == '__main__': try: @@ -1540,6 +1561,10 @@ main() sys.exit(0) + + except ParameterError, e: + error(u"Parameter problem: %s" % e) + sys.exit(1) except SystemExit, e: sys.exit(e.code) diff --git a/s3cmd.1 b/s3cmd.1 index a0f2730..c064ce9 100644 --- a/s3cmd.1 +++ b/s3cmd.1 @@ -1,6 +1,6 @@ .TH s3cmd 1 .SH NAME -s3cmd \- tool for managing Amazon S3 storage space +s3cmd \- tool for managing Amazon S3 storage space and Amazon CloudFront content delivery network .SH SYNOPSIS .B s3cmd [\fIOPTIONS\fR] \fICOMMAND\fR [\fIPARAMETERS\fR] @@ -42,13 +42,16 @@ \fBsync\fR \fIs3://BUCKET[/PREFIX] LOCAL_DIR\fR Restore a tree from S3 to local directory .TP -\fBcp\fR \fIs3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]\fR -\fBmv\fR \fIs3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]\fR +\fBcp\fR \fIs3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]\fR, \fBmv\fR \fIs3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]\fR Make a copy of a file (\fIcp\fR) or move a file (\fImv\fR). Destination can be in the same bucket with a different name or in another bucket with the same or different name. Adding \fI\-\-acl\-public\fR will make the destination object publicly accessible (see below). +.TP +\fBsetacl\fR \fIs3://BUCKET[/OBJECT]\fR +Modify \fIAccess control list\fI for Bucket or Files. Use with +\fI\-\-acl\-public\fR or \fI\-\-acl\-private\fR .TP \fBinfo\fR \fIs3://BUCKET[/OBJECT]\fR Get various information about a Bucket or Object @@ -56,6 +59,24 @@ \fBdu\fR \fI[s3://BUCKET[/PREFIX]]\fR Disk usage \- amount of data stored in S3 +.PP +Commands for CloudFront management +.TP +\fBcflist\fR +List CloudFront distribution points +.TP +\fBcfinfo\fR [\fIcf://DIST_ID\fR] +Display CloudFront distribution point parameters +.TP +\fBcfcreate\fR \fIs3://BUCKET\fR +Create CloudFront distribution point +.TP +\fBcfdelete\fR \fIcf://DIST_ID\fR +Delete CloudFront distribution point +.TP +\fBcfmodify\fR \fIcf://DIST_ID\fR +Change CloudFront distribution point parameters + .SH OPTIONS .PP Some of the below specified options can have their default @@ -63,9 +84,9 @@ .B s3cmd config file (by default $HOME/.s3cmd). As it's a simple text file feel free to open it with your favorite text editor and do any -changes you like. -.PP -Config file related options. +changes you like. +.PP +\fIConfig file related options.\fR .TP \fB\-\-configure\fR Invoke interactive (re)configuration tool. Don't worry, you won't @@ -78,24 +99,26 @@ Dump current configuration after parsing config files and command line options and exit. .PP -Most options can have a default value set in the above specified config file. 
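A note on the --add-header pre-processing added to the s3cmd script earlier in this patch: it boils down to splitting each NAME:VALUE argument once and rejecting header names that contain illegal characters. A minimal standalone sketch of the same idea in Python (the function name is hypothetical and a plain dict stands in for Config.extra_headers):

    import re

    def parse_add_headers(add_header_args):
        ## Turn a list of "Name: value" strings (as passed via --add-header)
        ## into a dict of stripped header names and values.
        extra_headers = {}
        for hdr in add_header_args:
            try:
                key, val = hdr.split(":", 1)
            except ValueError:
                raise ValueError("Invalid header format: %s" % hdr)
            ## Whatever is left after removing letters, digits, dashes
            ## and dots is an illegal character in the header name.
            key_inval = re.sub("[a-zA-Z0-9-.]", "", key)
            if key_inval:
                raise ValueError("Invalid character(s) in header name '%s'" % key)
            extra_headers[key.strip()] = val.strip()
        return extra_headers

    ## parse_add_headers(["Cache-Control: max-age=3600"])
    ## -> {'Cache-Control': 'max-age=3600'}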
-.PP -Options specific to \fBsync\fR command: +\fIOptions specific for \fIfile transfer commands\fR (\fBsync\fR, \fBput\fR and \fBget\fR): +.TP +\fB\-n\fR, \fB\-\-dry\-run\fR +Only show what should be uploaded or downloaded but don't actually do it. May still perform S3 requests to get bucket listings and other +information though. .TP \fB\-\-delete\-removed\fR Delete remote objects with no corresponding local file when \fIsync\fRing \fBto\fR S3 or delete local files with no corresponding object in S3 when \fIsync\fRing \fBfrom\fR S3. .TP \fB\-\-no\-delete\-removed\fR -Don't delete remote objects. Default for 'sync' command. +Don't delete remote objects. Default for \fIsync\fR command. .TP \fB\-p\fR, \fB\-\-preserve\fR -Preserve filesystem attributes (mode, ownership, timestamps). Default for 'sync' command. +Preserve filesystem attributes (mode, ownership, timestamps). Default for \fIsync\fR command. .TP \fB\-\-no\-preserve\fR Don't store filesystem attributes with uploaded files. .TP \fB\-\-exclude GLOB\fR -Exclude files matching GLOB (a.k.a. shell-style wildcard) from \fIsync\fI. See SYNC COMMAND section for more information. +Exclude files matching GLOB (a.k.a. shell-style wildcard) from \fIsync\fR. See FILE TRANSFERS section and \fIhttp://s3tools.org/s3cmd-sync\fR for more information. .TP \fB\-\-exclude\-from FILE\fR Same as \-\-exclude but reads GLOBs from the given FILE instead of expecting them on the command line. @@ -106,31 +129,14 @@ \fB\-\-rexclude\-from FILE\fR Same as \-\-exclude\-from but works with REGEXPs. .TP -\fB\-\-debug\-syncmatch\fR or \fB\-\-debug\-exclude\fR (alias) -Display detailed information about matching file names against exclude\-rules as well as information about remote vs local filelists matching. S3cmd exits after performing the match and no actual transfer takes place. -.\".TP -.\"\fB\-n\fR, \fB\-\-dry\-run\fR -.\"Only show what would be uploaded or downloaded but don't actually do it. May still perform S3 requests to get bucket listings and other information though. -.PP -Options common for all commands (where it makes sense indeed): -.TP -\fB\-f\fR, \fB\-\-force\fR -Force overwrite and other dangerous operations. +\fB\-\-include=GLOB\fR, \fB\-\-include\-from=FILE\fR, \fB\-\-rinclude=REGEXP\fR, \fB\-\-rinclude\-from=FILE\fR +Filenames and paths matching GLOB or REGEXP will be included even if previously excluded by one of \-\-(r)exclude(\-from) patterns. .TP \fB\-\-continue\fR -Continue getting a partially downloaded file (only for \fIget\fR command). This comes handy once download of a large file, say an ISO image, from a S3 bucket fails and a partially downloaded file is left on the disk. Unfortunately \fIput\fR command doesn't support restarting of failed upload due to Amazon S3 limitation. -.TP -\fB\-P\fR, \fB\-\-acl\-public\fR -Store objects with permissions allowing read for anyone. -.TP -\fB\-\-acl\-private\fR -Store objects with default ACL allowing access for you only. -.TP -\fB\-\-bucket\-location\fR=BUCKET_LOCATION -Specify datacentre where to create the bucket. Possible values are \fIUS\fR (default) or \fIEU\fR. -.TP -\fB\-e\fR, \fB\-\-encrypt\fR -Use GPG encryption to protect stored objects from unauthorized access. +Continue getting a partially downloaded file (only for \fIget\fR command). This comes handy once download of a large file, say an ISO image, from a S3 bucket fails and a partially downloaded file is left on the disk. Unfortunately \fIput\fR command doesn't support restarting of failed upload due to Amazon S3 limitations.
+.TP +\fB\-\-skip\-existing\fR +Skip over files that exist at the destination (only for \fIget\fR and \fIsync\fR commands). .TP \fB\-m\fR MIME/TYPE, \fB\-\-mime\-type\fR=MIME/TYPE Default MIME\-type to be set for objects stored. @@ -140,15 +146,65 @@ back to default MIME\(hyType as specified by \fB\-\-mime\-type\fR option .TP +\fB\-\-add\-header=NAME:VALUE\fR +Add a given HTTP header to the upload request. Can be used multiple times with different header names. For instance set 'Expires' or 'Cache-Control' headers (or both) using this option if you like. +.TP +\fB\-P\fR, \fB\-\-acl\-public\fR +Store objects with permissions allowing read for anyone. See \fIhttp://s3tools.org/s3cmd-public\fR for details and hints for storing publicly accessible files. +.TP +\fB\-\-acl\-private\fR +Store objects with default ACL allowing access for you only. +.TP +\fB\-e\fR, \fB\-\-encrypt\fR +Use GPG encryption to protect stored objects from unauthorized access. See \fIhttp://s3tools.org/s3cmd-public\fR for details about encryption. +.TP +\fB\-\-no\-encrypt\fR +Don't encrypt files. +.PP +\fIOptions for CloudFront commands\fR: +.PP +See \fIhttp://s3tools.org/s3cmd-cloudfront\fR for more details. +.TP +\fB\-\-enable\fR +Enable given CloudFront distribution (only for \fIcfmodify\fR command) +.TP +\fB\-\-disable\fR +Disable given CloudFront distribution (only for \fIcfmodify\fR command) +.TP +\fB\-\-cf\-add\-cname=CNAME\fR +Add given CNAME to a CloudFront distribution (only for \fIcfcreate\fR and \fIcfmodify\fR commands) +.TP +\fB\-\-cf\-remove\-cname=CNAME\fR +Remove given CNAME from a CloudFront distribution (only for \fIcfmodify\fR command) +.TP +\fB\-\-cf\-comment=COMMENT\fR +Set COMMENT for a given CloudFront distribution (only for \fIcfcreate\fR and \fIcfmodify\fR commands) +.PP +\fIOptions common for all commands\fR (where it makes sense indeed): +.TP +\fB\-r\fR, \fB\-\-recursive\fR +Recursive upload, download or removal. When used with \fIdel\fR it can +remove all the files in a bucket. +.TP +\fB\-f\fR, \fB\-\-force\fR +Force overwrite and other dangerous operations. Can be used to remove +a non\-empty bucket with \fIs3cmd rb \-\-force s3://bkt\fR +.TP +\fB\-\-bucket\-location\fR=BUCKET_LOCATION +Specify datacentre where to create the bucket. Possible values are \fIUS\fR (default) or \fIEU\fR. +.TP \fB\-H\fR, \fB\-\-human\-readable\-sizes\fR Print sizes in human readable form. -.\".TP -.\"\fB\-u\fR, \fB\-\-show\-uri\fR -.\"Show complete S3 URI in listings. +.TP +\fB\-\-list\-md5\fR +Include MD5 sums in bucket listings (only for \fIls\fR command). .TP \fB\-\-progress\fR, \fB\-\-no\-progress\fR Display or don't display progress meter. When running on TTY (e.g. console or xterm) the default is to display progress meter. If not on TTY (e.g. output is redirected somewhere or running from cron) the default is to not display progress meter. .TP +\fB\-\-encoding=ENCODING\fR +Override autodetected terminal and filesystem encoding (character set). +.TP \fB\-v\fR, \fB\-\-verbose\fR Enable verbose output. .TP @@ -163,77 +219,101 @@ .B s3cmd version and exit. -.SH SYNC COMMAND +.SH FILE TRANSFERS One of the most powerful commands of \fIs3cmd\fR is \fBs3cmd sync\fR used for -synchronising complete directory trees to or from remote S3 storage. +synchronising complete directory trees to or from remote S3 storage. To some extent +\fBs3cmd put\fR and \fBs3cmd get\fR share a similar behaviour with \fBsync\fR.
.PP Basic usage common in backup scenarios is as simple as: .nf - s3cmd sync /local/path s3://test-bucket/backup + s3cmd sync /local/path/ s3://test-bucket/backup/ .fi .PP This command will find all files under /local/path directory and copy them to corresponding paths under s3://test-bucket/backup on the remote side. For example: .nf -/local/path\fB/file1.ext\fR \-> s3://test-bucket/backup\fB/file1.ext\fR -/local/path\fB/dir123/file2.bin\fR \-> s3://test-bucket/backup\fB/dir123/file2.bin\fR -.fi - + /local/path/\fBfile1.ext\fR \-> s3://bucket/backup/\fBfile1.ext\fR + /local/path/\fBdir123/file2.bin\fR \-> s3://bucket/backup/\fBdir123/file2.bin\fR +.fi +.PP +However if the local path doesn't end with a slash the last directory's name +is used on the remote side as well. Compare these with the previous example: +.nf + s3cmd sync /local/path s3://test-bucket/backup/ +.fi +will sync: +.nf + /local/\fBpath/file1.ext\fR \-> s3://bucket/backup/\fBpath/file1.ext\fR + /local/\fBpath/dir123/file2.bin\fR \-> s3://bucket/backup/\fBpath/dir123/file2.bin\fR +.fi +.PP To retrieve the files back from S3 use inverted syntax: .nf - s3cmd sync s3://test-bucket/backup/ /tmp/restore + s3cmd sync s3://test-bucket/backup/ /tmp/restore/ .fi that will download files: .nf -s3://test-bucket/backup\fB/file1.ext\fR \-> /tmp/restore\fB/file1.ext\fR -s3://test-bucket/backup\fB/dir123/file2.bin\fR \-> /tmp/restore\fB/dir123/file2.bin\fR -.fi - -For the purpose of \fB\-\-exclude\fR and \fB\-\-exclude\-from\fR matching the file name -\fIalways\fR begins with \fB/\fR (slash) and has the local or remote common part removed. -For instance in the previous example the file names tested against \-\-exclude list -will be \fB/\fRfile1.ext and \fB/\fRdir123/file2.bin, that is both with the leading -slash regardless whether you specified s3://test-bucket/backup or -s3://test-bucket/backup/ (note the trailing slash) on the command line. - -Both \fB\-\-exclude\fR and \fB\-\-exclude\-from\fR work with shell-style wildcards (a.k.a. GLOB). + s3://bucket/backup/\fBfile1.ext\fR \-> /tmp/restore/\fBfile1.ext\fR + s3://bucket/backup/\fBdir123/file2.bin\fR \-> /tmp/restore/\fBdir123/file2.bin\fR +.fi +.PP +Without the trailing slash on source the behaviour is similar to +what has been demonstrated with upload: +.nf + s3cmd sync s3://test-bucket/backup /tmp/restore/ +.fi +will download the files as: +.nf + s3://bucket/\fBbackup/file1.ext\fR \-> /tmp/restore/\fBbackup/file1.ext\fR + s3://bucket/\fBbackup/dir123/file2.bin\fR \-> /tmp/restore/\fBbackup/dir123/file2.bin\fR +.fi +.PP +All source file names, the bold ones above, are matched against \fBexclude\fR +rules and those that match are then re\-checked against \fBinclude\fR rules to see +whether they should be excluded or kept in the source list. +.PP +For the purpose of \fB\-\-exclude\fR and \fB\-\-include\fR matching only the +bold file names above are used. For instance only \fBpath/file1.ext\fR is tested +against the patterns, not \fI/local/\fBpath/file1.ext\fR +.PP +Both \fB\-\-exclude\fR and \fB\-\-include\fR work with shell-style wildcards (a.k.a. GLOB). For a greater flexibility s3cmd provides Regular-expression versions of the two exclude options -named \fB\-\-rexclude\fR and \fB\-\-rexclude\-from\fR. - -Run s3cmd with \fB\-\-debug\-syncmatch\fR to get detailed information -about matching file names against exclude rules. 
- -For example to exclude all files with ".bin" extension with a REGEXP use: -.PP - \-\-rexclude '\.bin$' -.PP -to exclude all hidden files and subdirectories (i.e. those whose name begins with dot ".") use GLOB: -.PP - \-\-exclude '/.*' -.PP -on the other hand to exclude only hidden files but not hidden subdirectories use REGEXP: -.PP - \-\-rexclude '/\.[^/]*$' -.PP -etc... - -.SH AUTHOR -Written by Michal Ludvig -.SH REPORTING BUGS -Report bugs to -.I s3tools\-general@lists.sourceforge.net -.SH COPYRIGHT -Copyright \(co 2007,2008 Michal Ludvig -.br -This is free software. You may redistribute copies of it under the terms of -the GNU General Public License version 2 . -There is NO WARRANTY, to the extent permitted by law. +named \fB\-\-rexclude\fR and \fB\-\-rinclude\fR. +The options with ...\fB\-from\fR suffix (e.g. \-\-rinclude\-from) expect a filename as +an argument. Each line of such a file is treated as one pattern. +.PP +There is only one set of patterns built from all \fB\-\-(r)exclude(\-from)\fR options +and similarly for the include variant. Any file excluded with e.g. \-\-exclude can +be put back with a pattern found in the \-\-rinclude\-from list. +.PP +Run s3cmd with \fB\-\-dry\-run\fR to verify that your rules work as expected. +Use together with \fB\-\-debug\fR to get detailed information +about matching file names against exclude and include rules. +.PP +For example to exclude all files with ".jpg" extension except those beginning with a number use: +.PP + \-\-exclude '*.jpg' \-\-rinclude '[0-9].*\.jpg' + .SH SEE ALSO For the most up to date list of options run .B s3cmd \-\-help .br For more info about usage, examples and other related info visit project homepage at .br -.B http://s3tools.logix.cz - +.B http://s3tools.org + +.SH AUTHOR +Written by Michal Ludvig +.SH CONTACT, SUPPORT +The preferred way to get support is our mailing list: +.I s3tools\-general@lists.sourceforge.net +.SH REPORTING BUGS +Report bugs to +.I s3tools\-bugs@lists.sourceforge.net +.SH COPYRIGHT +Copyright \(co 2007,2008,2009 Michal Ludvig +.br +This is free software. You may redistribute copies of it under the terms of +the GNU General Public License version 2 . +There is NO WARRANTY, to the extent permitted by law.
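To make the exclude/include interplay described in the FILE TRANSFERS section above concrete, here is a minimal sketch of the matching logic in Python. It is a simplified stand-in for what s3cmd does internally, not the actual implementation; the function names are made up, and fnmatch.translate() converts the shell-style GLOBs into regular expressions:

    import re, fnmatch

    def matches_any(compiled_patterns, name):
        for pattern in compiled_patterns:
            if pattern.match(name):
                return True
        return False

    def filter_source_list(filenames, exclude_globs, include_globs):
        ## One combined set of exclude patterns and one of include patterns,
        ## GLOBs converted to regular expressions first.
        exclude_res = [re.compile(fnmatch.translate(g)) for g in exclude_globs]
        include_res = [re.compile(fnmatch.translate(g)) for g in include_globs]
        kept = []
        for name in filenames:
            ## A file matching an exclude rule is re-checked against the
            ## include rules and kept if any of them matches.
            if matches_any(exclude_res, name) and not matches_any(include_res, name):
                continue
            kept.append(name)
        return kept

    ## filter_source_list(["1-cover.jpg", "photo.jpg", "notes.txt"],
    ##                    ["*.jpg"], ["[0-9]*.jpg"])
    ## -> ['1-cover.jpg', 'notes.txt']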