s3cmd / e053bd3
Imported Upstream version 0.9.9 (Gianfranco Costamagna)
8 changed file(s) with 464 addition(s) and 289 deletion(s).
NEWS  (+26, -48)
0 s3cmd 0.9.9-rc3 - 2009-02-02
1 ===============
2 * Fixed crash in S3Error().__str__() (typically Amazon's Internal
3 errors, etc).
0 s3cmd 0.9.9 - 2009-02-17
1 ===========
2 New commands:
3 * Commands for copying and moving objects, within or
4 between buckets: [cp] and [mv] (Andrew Ryan)
5 * CloudFront support through [cfcreate], [cfdelete],
6 [cfmodify] and [cfinfo] commands. (sponsored by Joseph Denne)
7 * New command [setacl] for setting ACL on existing objects,
8 use together with --acl-public/--acl-private (sponsored by
9 Joseph Denne)
410
5 s3cmd 0.9.9-rc2 - 2009-01-30
6 ===============
7 * Fixed s3cmd crash when put / get / sync found
8 zero files to transfer.
9 * In --dry-run output files to be deleted only
10 with --delete-removed, otherwise users get scared.
11
12 s3cmd 0.9.9-rc1 - 2009-01-27
13 ===============
14 * CloudFront support through cfcreate, cfdelete,
15 cfmodify and cfinfo commands.
11 Other major features:
12 * Improved source dirname handling for [put], [get] and [sync].
13 * Recursive and wildcard support for [put], [get] and [del].
14 * Support for non-recursive [ls].
15 * Enabled --dry-run for [put], [get] and [sync].
16 * Allowed removal of non-empty buckets with [rb --force].
17 * Implemented progress meter (--progress / --no-progress)
1618 * Added --include / --rinclude / --(r)include-from
1719 options to override --exclude exclusions.
18 * Enabled --dry-run for [put] and [get].
20 * Added --add-header option for [put], [sync], [cp] and [mv].
21 Good for setting e.g. Expires or Cache-control headers.
22 * Added --list-md5 option for [ls].
23 * Continue [get] partially downloaded files with --continue
24 * New option --skip-existing for [get] and [sync].
25
26 Minor features and bugfixes:
1927 * Fixed GPG (--encrypt) compatibility with Python 2.6.
20
21 s3cmd 0.9.9-pre5 - 2009-01-22
22 ================
23 * New command 'setacl' for setting ACL on existing objects.
24 * Recursive [put] with a slightly different semantic.
25 * Multiple sources for [sync] and slightly different semantics.
26 * Support for --dry-run with [sync]
27
28 s3cmd 0.9.9-pre4 - 2008-12-30
29 ================
30 * Support for non-recursive [ls]
31 * Support for multiple sources and recursive [get].
32 * Improved wildcard [get].
33 * New option --skip-existing for [get] and [sync].
34 * Improved Progress class (fixes Mac OS X)
28 * Always send Content-Length header to satisfy some http proxies.
3529 * Fixed installation on Windows and Mac OS X.
3630 * Don't print nasty backtrace on KeyboardInterrupt.
3731 * Should work fine on non-UTF8 systems, provided all
38 the files are in current system encoding.
32 the files are in current system encoding.
3933 * System encoding can be overriden using --encoding.
40
41 s3cmd 0.9.9-pre3 - 2008-12-01
42 ================
43 * Bugfixes only
44 - Fixed sync from S3 to local
45 - Fixed progress meter with Unicode chars
46
47 s3cmd 0.9.9-pre2 - 2008-11-24
48 ================
49 * Implemented progress meter (--progress / --no-progress)
50 * Removing of non-empty buckets with --force
51 * Recursively remove objects from buckets with a given
52 prefix with --recursive (-r)
53 * Copying and moving objects, within or between buckets.
54 (Andrew Ryan)
55 * Continue getting partially downloaded files with --continue
5634 * Improved resistance to communication errors (Connection
5735 reset by peer, etc.)
5836
00 Metadata-Version: 1.0
11 Name: s3cmd
2 Version: 0.9.9-rc3
2 Version: 0.9.9
33 Summary: Command line tool for managing Amazon S3 and CloudFront services
4 Home-page: http://s3tools.logix.cz
4 Home-page: http://s3tools.org
55 Author: Michal Ludvig
66 Author-email: michal@logix.cz
77 License: GPL version 2
README  (+195, -108)
44 Michal Ludvig <michal@logix.cz>
55
66 S3tools / S3cmd project homepage:
7 http://s3tools.sourceforge.net
8
9 S3tools / S3cmd mailing list:
10 s3tools-general@lists.sourceforge.net
7 http://s3tools.org
8
9 S3tools / S3cmd mailing lists:
10 * Announcements of new releases:
11 s3tools-announce@lists.sourceforge.net
12
13 * General questions and discussion about usage
14 s3tools-general@lists.sourceforge.net
15
16 * Bug reports
17 s3tools-bugs@lists.sourceforge.net
1118
1219 Amazon S3 homepage:
1320 http://aws.amazon.com/s3
7885
7986 That's the pricing model of Amazon S3 in a nutshell. Check
8087 Amazon S3 homepage at http://aws.amazon.com/s3 for more
81 details.
88 details.
8289
8390 Needless to say, all this money is charged by Amazon
8491 itself; there is obviously no payment for using S3cmd :-)
8592
8693 Amazon S3 basics
8794 ----------------
88 Files stored in S3 are called "objects" and their names are
89 officially called "keys". Each object belongs to exactly one
90 "bucket". Buckets are kind of directories or folders with
91 some restrictions: 1) each user can only have 100 buckets at
92 the most, 2) bucket names must be unique amongst all users
93 of S3, 3) buckets can not be nested into a deeper
94 hierarchy and 4) a name of a bucket can only consist of basic
95 alphanumeric characters plus dot (.) and dash (-). No spaces,
96 no accented or UTF-8 letters, etc.
97
98 On the other hand there are almost no restrictions on object
99 names ("keys"). These can be any UTF-8 strings of up to 1024
100 bytes long. Interestingly enough the object name can contain
101 forward slash character (/) thus a "my/funny/picture.jpg" is
102 a valid object name. Note that there are not directories nor
95 Files stored in S3 are called "objects" and their names are
96 officially called "keys". Since this is sometimes confusing
97 for the users we often refer to the objects as "files" or
98 "remote files". Each object belongs to exactly one "bucket".
99
100 To describe objects in S3 storage we invented a URI-like
101 schema in the following form:
102
103 s3://BUCKET
104 or
105 s3://BUCKET/OBJECT
106
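As a rough illustration, such a URI can be split into its bucket
and object parts with a few lines of Python. The function name
below is made up for the example; it is not the parser s3cmd
uses internally:

   import re

   def parse_s3_uri(uri):
       # Accept both s3://BUCKET and s3://BUCKET/OBJECT forms.
       match = re.match(r"^s3://([^/]+)/?(.*)$", uri)
       if not match:
           raise ValueError("Not an s3:// URI: %s" % uri)
       return match.group(1), match.group(2)

   print(parse_s3_uri("s3://my-bucket/my/funny/picture.jpg"))
   # -> ('my-bucket', 'my/funny/picture.jpg')
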
107 Buckets
108 -------
109 Buckets are sort of like directories or folders with some
110 restrictions:
111 1) each user can only have 100 buckets at the most,
112 2) bucket names must be unique amongst all users of S3,
113 3) buckets can not be nested into a deeper hierarchy and
114 4) a name of a bucket can only consist of basic alphanumeric
115 characters plus dot (.) and dash (-). No spaces, no accented
116 or UTF-8 letters, etc.
117
118 It is a good idea to use DNS-compatible bucket names. That
119 for instance means you should not use upper case characters.
120 While DNS compliance is not strictly required some features
121 described below are not available for DNS-incompatible named
122 buckets. One step further is using a fully qualified
123 domain name (FQDN) for a bucket - that has even more benefits.
124
125 * For example "s3://--My-Bucket--" is not DNS compatible.
126 * On the other hand "s3://my-bucket" is DNS compatible but
127 is not FQDN.
128 * Finally "s3://my-bucket.s3tools.org" is DNS compatible
129 and FQDN provided you own the s3tools.org domain and can
130 create the domain record for "my-bucket.s3tools.org".
131
132 Look for "Virtual Hosts" later in this text for more details
133 regarding FQDN named buckets.
134
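A quick way to sanity-check a name against the rules above is
a short Python snippet like the one below. It is only a rough
approximation of the rules, not the exact validation s3cmd
performs:

   import re

   def looks_dns_compatible(bucket):
       # Lowercase letters, digits, dots and dashes only, and the
       # name must not start with a dot or a dash.
       return re.match(r"^[a-z0-9][a-z0-9.-]*$", bucket) is not None

   looks_dns_compatible("--My-Bucket--")          # False
   looks_dns_compatible("my-bucket")              # True
   looks_dns_compatible("my-bucket.s3tools.org")  # True
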
135 Objects (files stored in Amazon S3)
136 -----------------------------------
137 Unlike for buckets there are almost no restrictions on object
138 names. These can be any UTF-8 strings up to 1024 bytes long.
139 Interestingly enough the object name can contain a forward
140 slash character (/), thus "my/funny/picture.jpg" is a valid
141 object name. Note that there are no directories or
103142 buckets called "my" and "funny" - it is really a single object
104143 name called "my/funny/picture.jpg" and S3 does not care at
105144 all that it _looks_ like a directory structure.
106145
107 To describe objects in S3 storage we invented a URI-like
108 schema in the following form:
109
110 s3://BUCKET/OBJECT
111
112 See the HowTo later in this document for example usages of
113 this S3-URI schema.
114
115 Simple S3cmd HowTo
146 The full URI of such an image could be, for example:
147
148 s3://my-bucket/my/funny/picture.jpg
149
150 Public vs Private files
151 -----------------------
152 The files stored in S3 can be either Private or Public. The
153 Private ones are readable only by the user who uploaded them
154 while the Public ones can be read by anyone. Additionally the
155 Public files can be accessed using HTTP protocol, not only
156 using s3cmd or a similar tool.
157
158 The ACL (Access Control List) of a file can be set at the
159 time of upload using --acl-public or --acl-private options
160 with 's3cmd put' or 's3cmd sync' commands (see below).
161
162 Alternatively the ACL can be altered for existing remote files
163 with 's3cmd setacl --acl-public' (or --acl-private) command.
164
165 Simple s3cmd HowTo
116166 ------------------
117167 1) Register for Amazon AWS / S3
118168 Go to http://aws.amazon.com/s3, click the "Sign up
120170 through the registration. You will have to supply
121171 your Credit Card details in order to allow Amazon
122172 charge you for S3 usage.
123 At the end you should posses your Access and Secret Keys
173 At the end you should have your Access and Secret Keys
124174
125175 2) Run "s3cmd --configure"
126176 You will be asked for the two keys - copy and paste
134184 you as of now. So the output will be empty.
135185
136186 4) Make a bucket with "s3cmd mb s3://my-new-bucket-name"
137 As mentioned above bucket names must be unique amongst
187 As mentioned above the bucket names must be unique amongst
138188 _all_ users of S3. That means the simple names like "test"
139189 or "asdf" are already taken and you must make up something
140 more original. I sometimes prefix my bucket names with
141 my e-mail domain name (logix.cz) leading to a bucket name,
142 for instance, 'logix.cz-test':
143
144 ~$ s3cmd mb s3://logix.cz-test
145 Bucket 'logix.cz-test' created
190 more original. To demonstrate as many features as possible
191 let's create a FQDN-named bucket s3://public.s3tools.org:
192
193 ~$ s3cmd mb s3://public.s3tools.org
194 Bucket 's3://public.s3tools.org' created
146195
147196 5) List your buckets again with "s3cmd ls"
148197 Now you should see your freshly created bucket
149198
150199 ~$ s3cmd ls
151 2007-01-19 01:41 s3://logix.cz-test
200 2009-01-28 12:34 s3://public.s3tools.org
152201
153202 6) List the contents of the bucket
154203
155 ~$ s3cmd ls s3://logix.cz-test
156 Bucket 'logix.cz-test':
204 ~$ s3cmd ls s3://public.s3tools.org
157205 ~$
158206
159207 It's empty, indeed.
160208
161 7) Upload a file into the bucket
162
163 ~$ s3cmd put addressbook.xml s3://logix.cz-test/addrbook.xml
164 File 'addressbook.xml' stored as s3://logix.cz-test/addrbook.xml (123456 bytes)
165
166 8) Now we can list the bucket contents again
167
168 ~$ s3cmd ls s3://logix.cz-test
169 Bucket 'logix.cz-test':
170 2007-01-19 01:46 120k s3://logix.cz-test/addrbook.xml
171
172 9) Retrieve the file back and verify that its hasn't been
173 corrupted
174
175 ~$ s3cmd get s3://logix.cz-test/addrbook.xml addressbook-2.xml
176 Object s3://logix.cz-test/addrbook.xml saved as 'addressbook-2.xml' (123456 bytes)
177
178 ~$ md5sum addressbook.xml addressbook-2.xml
179 39bcb6992e461b269b95b3bda303addf addressbook.xml
180 39bcb6992e461b269b95b3bda303addf addressbook-2.xml
209 7) Upload a single file into the bucket:
210
211 ~$ s3cmd put some-file.xml s3://public.s3tools.org/somefile.xml
212 some-file.xml -> s3://public.s3tools.org/somefile.xml [1 of 1]
213 123456 of 123456 100% in 2s 51.75 kB/s done
214
215 Upload two directory trees into the bucket's virtual 'directory':
216
217 ~$ s3cmd put --recursive dir1 dir2 s3://public.s3tools.org/somewhere/
218 File 'dir1/file1-1.txt' stored as 's3://public.s3tools.org/somewhere/dir1/file1-1.txt' [1 of 5]
219 File 'dir1/file1-2.txt' stored as 's3://public.s3tools.org/somewhere/dir1/file1-2.txt' [2 of 5]
220 File 'dir1/file1-3.log' stored as 's3://public.s3tools.org/somewhere/dir1/file1-3.log' [3 of 5]
221 File 'dir2/file2-1.bin' stored as 's3://public.s3tools.org/somewhere/dir2/file2-1.bin' [4 of 5]
222 File 'dir2/file2-2.txt' stored as 's3://public.s3tools.org/somewhere/dir2/file2-2.txt' [5 of 5]
223
224 As you can see we didn't have to create the /somewhere
225 'directory'. In fact it's only a filename prefix, not
226 a real directory and it doesn't have to be created in
227 any way beforehand.
228
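To make this concrete, here is a tiny plain-Python illustration
(nothing s3cmd-specific) of how a flat key space still produces
the 'directory'-like prefixes you will see in the listing below:

   # Object names are flat strings; 'directories' are only common prefixes.
   keys = [
       "somewhere/dir1/file1-1.txt",
       "somewhere/dir1/file1-2.txt",
       "somewhere/dir2/file2-1.bin",
   ]
   prefixes = sorted(set(k.split("/", 1)[0] + "/" for k in keys))
   print(prefixes)   # ['somewhere/']
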
229 8) Now list the bucket contents again:
230
231 ~$ s3cmd ls s3://public.s3tools.org
232 DIR s3://public.s3tools.org/somewhere/
233 2009-02-10 05:10 123456 s3://public.s3tools.org/somefile.xml
234
235 Use --recursive (or -r) to list all the remote files:
236
237 ~$ s3cmd ls --recursive s3://public.s3tools.org
238 2009-02-10 05:10 123456 s3://public.s3tools.org/somefile.xml
239 2009-02-10 05:13 18 s3://public.s3tools.org/somewhere/dir1/file1-1.txt
240 2009-02-10 05:13 8 s3://public.s3tools.org/somewhere/dir1/file1-2.txt
241 2009-02-10 05:13 16 s3://public.s3tools.org/somewhere/dir1/file1-3.log
242 2009-02-10 05:13 11 s3://public.s3tools.org/somewhere/dir2/file2-1.bin
243 2009-02-10 05:13 8 s3://public.s3tools.org/somewhere/dir2/file2-2.txt
244
245 9) Retrieve one of the files back and verify that it hasn't been
246 corrupted:
247
248 ~$ s3cmd get s3://public.s3tools.org/somefile.xml some-file-2.xml
249 s3://public.s3tools.org/somefile.xml -> some-file-2.xml [1 of 1]
250 123456 of 123456 100% in 3s 35.75 kB/s done
251
252 ~$ md5sum some-file.xml some-file-2.xml
253 39bcb6992e461b269b95b3bda303addf some-file.xml
254 39bcb6992e461b269b95b3bda303addf some-file-2.xml
181255
182256 The checksum of the original file matches that of the
183257 retrieved one. Looks like it worked :-)
184258
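If you prefer to verify from a script rather than with md5sum,
a small Python helper can compute the same digest. This is just
a sketch; for files uploaded in one piece the remote sum shown
by 's3cmd ls --list-md5' should match it:

   import hashlib

   def file_md5(path):
       # Same hex digest as the 'md5sum' tool prints (reads the whole
       # file into memory, which is fine for small example files).
       data = open(path, "rb").read()
       return hashlib.md5(data).hexdigest()

   print(file_md5("some-file.xml"))
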
185 10) Clean up: delete the object and remove the bucket
186
187 ~$ s3cmd rb s3://logix.cz-test
188 ERROR: S3 error: 409 (Conflict): BucketNotEmpty
189
190 Ouch, we can only remove empty buckets!
191
192 ~$ s3cmd del s3://logix.cz-test/addrbook.xml
193 Object s3://logix.cz-test/addrbook.xml deleted
194
195 ~$ s3cmd rb s3://logix.cz-test
196 Bucket 'logix.cz-test' removed
259 To retrieve a whole 'directory tree' from S3 use recursive get:
260
261 ~$ s3cmd get --recursive s3://public.s3tools.org/somewhere
262 File s3://public.s3tools.org/somewhere/dir1/file1-1.txt saved as './somewhere/dir1/file1-1.txt'
263 File s3://public.s3tools.org/somewhere/dir1/file1-2.txt saved as './somewhere/dir1/file1-2.txt'
264 File s3://public.s3tools.org/somewhere/dir1/file1-3.log saved as './somewhere/dir1/file1-3.log'
265 File s3://public.s3tools.org/somewhere/dir2/file2-1.bin saved as './somewhere/dir2/file2-1.bin'
266 File s3://public.s3tools.org/somewhere/dir2/file2-2.txt saved as './somewhere/dir2/file2-2.txt'
267
268 Since the destination directory wasn't specified s3cmd
269 saved the directory structure in the current working
270 directory ('.').
271
272 There is an important difference between:
273 get s3://public.s3tools.org/somewhere
274 and
275 get s3://public.s3tools.org/somewhere/
276 (note the trailing slash)
277 S3cmd always uses the last path part, i.e. the word
278 after the last slash, for naming files.
279
280 In the case of s3://.../somewhere the last path part
281 is 'somewhere' and therefore the recursive get names
282 the local files as somewhere/dir1, somewhere/dir2, etc.
283
284 On the other hand in s3://.../somewhere/ the last path
285 part is empty and s3cmd will only create 'dir1' and 'dir2'
286 without the 'somewhere/' prefix:
287
288 ~$ s3cmd get --recursive s3://public.s3tools.org/somewhere/ /tmp
289 File s3://public.s3tools.org/somewhere/dir1/file1-1.txt saved as '/tmp/dir1/file1-1.txt'
290 File s3://public.s3tools.org/somewhere/dir1/file1-2.txt saved as '/tmp/dir1/file1-2.txt'
291 File s3://public.s3tools.org/somewhere/dir1/file1-3.log saved as '/tmp/dir1/file1-3.log'
292 File s3://public.s3tools.org/somewhere/dir2/file2-1.bin saved as '/tmp/dir2/file2-1.bin'
293
294 See? It's /tmp/dir1 and not /tmp/somewhere/dir1 as it
295 was in the previous example.
296
297 10) Clean up - delete the remote files and remove the bucket:
298
299 Remove everything under s3://public.s3tools.org/somewhere/
300
301 ~$ s3cmd del --recursive s3://public.s3tools.org/somewhere/
302 File s3://public.s3tools.org/somewhere/dir1/file1-1.txt deleted
303 File s3://public.s3tools.org/somewhere/dir1/file1-2.txt deleted
304 ...
305
306 Now try to remove the bucket:
307
308 ~$ s3cmd rb s3://public.s3tools.org
309 ERROR: S3 error: 409 (BucketNotEmpty): The bucket you tried to delete is not empty
310
311 Ouch, we forgot about s3://public.s3tools.org/somefile.xml
312 We can force the bucket removal anyway:
313
314 ~$ s3cmd rb --force s3://public.s3tools.org/
315 WARNING: Bucket is not empty. Removing all the objects from it first. This may take some time...
316 File s3://public.s3tools.org/somefile.xml deleted
317 Bucket 's3://public.s3tools.org/' removed
197318
198319 Hints
199320 -----
206327
207328 After configuring it with --configure all available options
208329 are written into your ~/.s3cfg file. It's a text file ready
209 to be modified in your favourite text editor.
210
211 Multiple local files may be specified for "s3cmd put"
212 operation. In that case the S3 URI should only include
213 the bucket name, not the object part:
214
215 ~$ s3cmd put file-* s3://logix.cz-test/
216 File 'file-one.txt' stored as s3://logix.cz-test/file-one.txt (4 bytes)
217 File 'file-two.txt' stored as s3://logix.cz-test/file-two.txt (4 bytes)
218
219 Alternatively if you specify the object part as well it
220 will be treated as a prefix and all filenames given on the
221 command line will be appended to the prefix making up
222 the object name. However --force option is required in this
223 case:
224
225 ~$ s3cmd put --force file-* s3://logix.cz-test/prefixed:
226 File 'file-one.txt' stored as s3://logix.cz-test/prefixed:file-one.txt (4 bytes)
227 File 'file-two.txt' stored as s3://logix.cz-test/prefixed:file-two.txt (4 bytes)
228
229 This prefixing mode works with "s3cmd ls" as well:
230
231 ~$ s3cmd ls s3://logix.cz-test
232 Bucket 'logix.cz-test':
233 2007-01-19 02:12 4 s3://logix.cz-test/file-one.txt
234 2007-01-19 02:12 4 s3://logix.cz-test/file-two.txt
235 2007-01-19 02:12 4 s3://logix.cz-test/prefixed:file-one.txt
236 2007-01-19 02:12 4 s3://logix.cz-test/prefixed:file-two.txt
237
238 Now with a prefix to list only names beginning with "file-":
239
240 ~$ s3cmd ls s3://logix.cz-test/file-*
241 Bucket 'logix.cz-test':
242 2007-01-19 02:12 4 s3://logix.cz-test/file-one.txt
243 2007-01-19 02:12 4 s3://logix.cz-test/file-two.txt
330 to be modified in your favourite text editor.
244331
245332 For more information refer to:
246 * S3cmd / S3tools homepage at http://s3tools.sourceforge.net
333 * S3cmd / S3tools homepage at http://s3tools.org
247334 * Amazon S3 homepage at http://aws.amazon.com/s3
248335
249336 Enjoy!
66 from logging import debug, info, warning, error
77 import re
88 import Progress
9 from SortedDict import SortedDict
910
1011 class Config(object):
1112 _instance = None
2324 progress_class = Progress.ProgressCR
2425 send_chunk = 4096
2526 recv_chunk = 4096
27 list_md5 = False
2628 human_readable_sizes = False
29 extra_headers = SortedDict()
2730 force = False
2831 get_continue = False
2932 skip_existing = False
5558 bucket_location = "US"
5659 default_mime_type = "binary/octet-stream"
5760 guess_mime_type = True
58 debug_syncmatch = False
5961 # List of checks to be performed for 'sync'
6062 sync_checks = ['size', 'md5'] # 'weak-timestamp'
6163 # List of compiled REGEXPs
00 package = "s3cmd"
1 version = "0.9.9-rc3"
2 url = "http://s3tools.logix.cz"
1 version = "0.9.9"
2 url = "http://s3tools.org"
33 license = "GPL version 2"
44 short_description = "Command line tool for managing Amazon S3 and CloudFront services"
55 long_description = """
153153 self.check_bucket_name(bucket, dns_strict = True)
154154 else:
155155 self.check_bucket_name(bucket, dns_strict = False)
156 headers["content-length"] = len(body)
157156 if self.config.acl_public:
158157 headers["x-amz-acl"] = "public-read"
159158 request = self.create_request("BUCKET_CREATE", bucket = bucket, headers = headers)
326325 if not headers:
327326 headers = SortedDict()
328327
328 debug("headers: %s" % headers)
329
329330 if headers.has_key("date"):
330331 if not headers.has_key("x-amz-date"):
331332 headers["x-amz-date"] = headers["date"]
355356 def send_request(self, request, body = None, retries = _max_retries):
356357 method_string, resource, headers = request
357358 debug("Processing request, please wait...")
359 if not headers.has_key('content-length'):
360 headers['content-length'] = body and len(body) or 0
358361 try:
359362 conn = self.get_connection(resource['bucket'])
360363 conn.request(method_string, self.format_uri(resource), body, headers)
s3cmd  (+62, -37)
55 ## License: GPL Version 2
66
77 import sys
8
9 if float("%d.%d" %(sys.version_info[0], sys.version_info[1])) < 2.4:
10 sys.stderr.write("ERROR: Python 2.4 or higher required, sorry.\n")
11 sys.exit(1)
12
813 import logging
914 import time
1015 import os
117122 else:
118123 raise
119124
125 if cfg.list_md5:
126 format_string = u"%(timestamp)16s %(size)9s%(coeff)1s %(md5)32s %(uri)s"
127 else:
128 format_string = u"%(timestamp)16s %(size)9s%(coeff)1s %(uri)s"
129
120130 for prefix in response['common_prefixes']:
121 output(u"%s %s" % (
122 "DIR".rjust(26),
123 uri.compose_uri(bucket, prefix["Prefix"])))
131 output(format_string % {
132 "timestamp": "",
133 "size": "DIR",
134 "coeff": "",
135 "md5": "",
136 "uri": uri.compose_uri(bucket, prefix["Prefix"])})
124137
125138 for object in response["list"]:
126139 size, size_coeff = formatSize(object["Size"], Config().human_readable_sizes)
127 output(u"%s %s%s %s" % (
128 formatDateTime(object["LastModified"]),
129 str(size).rjust(8), size_coeff.ljust(1),
130 uri.compose_uri(bucket, object["Key"]),
131 ))
140 output(format_string % {
141 "timestamp": formatDateTime(object["LastModified"]),
142 "size" : str(size),
143 "coeff": size_coeff,
144 "md5" : object['ETag'].strip('"'),
145 "uri": uri.compose_uri(bucket, object["Key"]),
146 })
132147
133148 def cmd_bucket_create(args):
134149 s3 = S3(Config())
309324
310325 uri_final = S3Uri(local_list[key]['remote_uri'])
311326
312 extra_headers = {}
327 extra_headers = copy(cfg.extra_headers)
313328 full_name_orig = local_list[key]['full_name']
314329 full_name = full_name_orig
315330 seq_label = "[%d of %d]" % (seq, local_count)
474489 if Config().recursive and not Config().force:
475490 raise ParameterError("Please use --force to delete ALL contents of %s" % uri)
476491 elif not Config().recursive:
477 raise ParameterError("Object name required, not only the bucket name")
492 raise ParameterError("File name required, not only the bucket name")
478493 subcmd_object_del_uri(uri)
479494
480495 def subcmd_object_del_uri(uri, recursive = None):
493508 uri_list.append(uri)
494509 for _uri in uri_list:
495510 response = s3.object_delete(_uri)
496 output(u"Object %s deleted" % _uri)
511 output(u"File %s deleted" % _uri)
497512
498513 def subcmd_cp_mv(args, process_fce, message):
499514 src_uri = S3Uri(args.pop(0))
507522
508523 if dst_uri.object() == "":
509524 dst_uri = S3Uri(dst_uri.uri() + src_uri.object())
510
511 response = process_fce(src_uri, dst_uri)
525
526 extra_headers = copy(cfg.extra_headers)
527 response = process_fce(src_uri, dst_uri, extra_headers)
512528 output(message % { "src" : src_uri, "dst" : dst_uri})
513529 if Config().acl_public:
514530 output(u"Public URL is: %s" % dst_uri.public_url())
515531
516532 def cmd_cp(args):
517533 s3 = S3(Config())
518 subcmd_cp_mv(args, s3.object_copy, "Object %(src)s copied to %(dst)s")
534 subcmd_cp_mv(args, s3.object_copy, "File %(src)s copied to %(dst)s")
519535
520536 def cmd_mv(args):
521537 s3 = S3(Config())
522 subcmd_cp_mv(args, s3.object_move, "Object %(src)s moved to %(dst)s")
538 subcmd_cp_mv(args, s3.object_move, "File %(src)s moved to %(dst)s")
523539
524540 def cmd_info(args):
525541 s3 = S3(Config())
675691 info(u"Verifying attributes...")
676692 cfg = Config()
677693 exists_list = SortedDict()
678 if cfg.debug_syncmatch:
679 logging.root.setLevel(logging.DEBUG)
680694
681695 for file in src_list.keys():
682 if not cfg.debug_syncmatch:
683 debug(u"CHECK: %s" % file)
696 debug(u"CHECK: %s" % file)
684697 if dst_list.has_key(file):
685698 ## Was --skip-existing requested?
686699 if cfg.skip_existing:
718731
719732 ## Remove from destination-list, all that is left there will be deleted
720733 del(dst_list[file])
721
722 if cfg.debug_syncmatch:
723 warning(u"Exiting because of --debug-syncmatch")
724 sys.exit(1)
725734
726735 return src_list, dst_list, exists_list
727736
968977 src = item['full_name']
969978 uri = S3Uri(item['remote_uri'])
970979 seq_label = "[%d of %d]" % (seq, local_count)
971 attr_header = None
980 extra_headers = copy(cfg.extra_headers)
972981 if cfg.preserve_attrs:
973982 attr_header = _build_attr_header(src)
974 debug(attr_header)
983 debug(u"attr_header: %s" % attr_header)
984 extra_headers.update(attr_header)
975985 try:
976 response = s3.object_put(src, uri, attr_header, extra_label = seq_label)
986 response = s3.object_put(src, uri, extra_headers, extra_label = seq_label)
977987 except S3UploadError, e:
978988 error(u"%s: upload failed too many times. Skipping that file." % item['full_name_unicode'])
979989 continue
12661276 #{"cmd":"mkdir", "label":"Make a virtual S3 directory", "param":"s3://BUCKET/path/to/dir", "func":cmd_mkdir, "argc":1},
12671277 {"cmd":"sync", "label":"Synchronize a directory tree to S3", "param":"LOCAL_DIR s3://BUCKET[/PREFIX] or s3://BUCKET[/PREFIX] LOCAL_DIR", "func":cmd_sync, "argc":2},
12681278 {"cmd":"du", "label":"Disk usage by buckets", "param":"[s3://BUCKET[/PREFIX]]", "func":cmd_du, "argc":0},
1269 {"cmd":"info", "label":"Get various information about Buckets or Objects", "param":"s3://BUCKET[/OBJECT]", "func":cmd_info, "argc":1},
1279 {"cmd":"info", "label":"Get various information about Buckets or Files", "param":"s3://BUCKET[/OBJECT]", "func":cmd_info, "argc":1},
12701280 {"cmd":"cp", "label":"Copy object", "param":"s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]", "func":cmd_cp, "argc":2},
12711281 {"cmd":"mv", "label":"Move object", "param":"s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]", "func":cmd_mv, "argc":2},
1272 {"cmd":"setacl", "label":"Modify Access control list for Bucket or Object", "param":"s3://BUCKET[/OBJECT]", "func":cmd_setacl, "argc":1},
1282 {"cmd":"setacl", "label":"Modify Access control list for Bucket or Files", "param":"s3://BUCKET[/OBJECT]", "func":cmd_setacl, "argc":1},
12731283 ## CloudFront commands
12741284 {"cmd":"cflist", "label":"List CloudFront distribution points", "param":"", "func":CfCmd.info, "argc":0},
12751285 {"cmd":"cfinfo", "label":"Display CloudFront distribution point parameters", "param":"[cf://DIST_ID]", "func":CfCmd.info, "argc":0},
13031313
13041314 def main():
13051315 global cfg
1306 if float("%d.%d" %(sys.version_info[0], sys.version_info[1])) < 2.4:
1307 sys.stderr.write("ERROR: Python 2.4 or higher required, sorry.\n")
1308 sys.exit(1)
13091316
13101317 commands_list = get_commands_list()
13111318 commands = {}
13351342 optparser.add_option("-c", "--config", dest="config", metavar="FILE", help="Config file name. Defaults to %default")
13361343 optparser.add_option( "--dump-config", dest="dump_config", action="store_true", help="Dump current configuration after parsing config files and command line options and exit.")
13371344
1338 optparser.add_option("-n", "--dry-run", dest="dry_run", action="store_true", help="Only show what should be uploaded or downloaded but don't actually do it. May still perform S3 requests to get bucket listings and other information though (only for [sync] command)")
1345 optparser.add_option("-n", "--dry-run", dest="dry_run", action="store_true", help="Only show what should be uploaded or downloaded but don't actually do it. May still perform S3 requests to get bucket listings and other information though (only for file transfer commands)")
13391346
13401347 optparser.add_option("-e", "--encrypt", dest="encrypt", action="store_true", help="Encrypt files before uploading to S3.")
13411348 optparser.add_option( "--no-encrypt", dest="encrypt", action="store_false", help="Don't encrypt files.")
13571364 optparser.add_option( "--include-from", dest="include_from", action="append", metavar="FILE", help="Read --include GLOBs from FILE")
13581365 optparser.add_option( "--rinclude", dest="rinclude", action="append", metavar="REGEXP", help="Same as --include but uses REGEXP (regular expression) instead of GLOB")
13591366 optparser.add_option( "--rinclude-from", dest="rinclude_from", action="append", metavar="FILE", help="Read --rinclude REGEXPs from FILE")
1360 optparser.add_option( "--debug-syncmatch", "--debug-exclude", dest="debug_syncmatch", action="store_true", help="Output detailed information about remote vs. local filelist matching and --exclude processing and then exit")
13611367
13621368 optparser.add_option( "--bucket-location", dest="bucket_location", help="Datacentre to create bucket in. Either EU or US (default)")
13631369
13641370 optparser.add_option("-m", "--mime-type", dest="default_mime_type", type="mimetype", metavar="MIME/TYPE", help="Default MIME-type to be set for objects stored.")
13651371 optparser.add_option("-M", "--guess-mime-type", dest="guess_mime_type", action="store_true", help="Guess MIME-type of files by their extension. Falls back to default MIME-Type as specified by --mime-type option")
13661372
1373 optparser.add_option( "--add-header", dest="add_header", action="append", metavar="NAME:VALUE", help="Add a given HTTP header to the upload request. Can be used multiple times. For instance set 'Expires' or 'Cache-Control' headers (or both) using this options if you like.")
1374
13671375 optparser.add_option( "--encoding", dest="encoding", metavar="ENCODING", help="Override autodetected terminal and filesystem encoding (character set). Autodetected: %s" % preferred_encoding)
13681376
1369 optparser.add_option("-H", "--human-readable-sizes", dest="human_readable_sizes", action="store_true", help="Print sizes in human readable form.")
1377 optparser.add_option( "--list-md5", dest="list_md5", action="store_true", help="Include MD5 sums in bucket listings (only for 'ls' command).")
1378 optparser.add_option("-H", "--human-readable-sizes", dest="human_readable_sizes", action="store_true", help="Print sizes in human readable form (eg 1kB instead of 1234).")
13701379
13711380 optparser.add_option( "--progress", dest="progress_meter", action="store_true", help="Display progress meter (default on TTY).")
13721381 optparser.add_option( "--no-progress", dest="progress_meter", action="store_false", help="Don't display progress meter (default on non-TTY).")
14331442 if cfg.progress_meter:
14341443 error(u"Option --progress is not yet supported on MS Windows platform. Assuming --no-progress.")
14351444 cfg.progress_meter = False
1445
1446 ## Pre-process --add-header's and put them to Config.extra_headers SortedDict()
1447 if options.add_header:
1448 for hdr in options.add_header:
1449 try:
1450 key, val = hdr.split(":", 1)
1451 except ValueError:
1452 raise ParameterError("Invalid header format: %s" % hdr)
1453 key_inval = re.sub("[a-zA-Z0-9-.]", "", key)
1454 if key_inval:
1455 key_inval = key_inval.replace(" ", "<space>")
1456 key_inval = key_inval.replace("\t", "<tab>")
1457 raise ParameterError("Invalid character(s) in header name '%s': \"%s\"" % (key, key_inval))
1458 debug(u"Updating Config.Config extra_headers[%s] -> %s" % (key.strip(), val.strip()))
1459 cfg.extra_headers[key.strip()] = val.strip()
14361460
14371461 ## Update Config with other parameters
14381462 for option in cfg.option_list():
15181542 except S3Error, e:
15191543 error(u"S3 error: %s" % e)
15201544 sys.exit(1)
1521 except ParameterError, e:
1522 error(u"Parameter problem: %s" % e)
1523 sys.exit(1)
15241545
15251546 if __name__ == '__main__':
15261547 try:
15391560
15401561 main()
15411562 sys.exit(0)
1563
1564 except ParameterError, e:
1565 error(u"Parameter problem: %s" % e)
1566 sys.exit(1)
15421567
15431568 except SystemExit, e:
15441569 sys.exit(e.code)
00 .TH s3cmd 1
11 .SH NAME
2 s3cmd \- tool for managing Amazon S3 storage space
2 s3cmd \- tool for managing Amazon S3 storage space and Amazon CloudFront content delivery network
33 .SH SYNOPSIS
44 .B s3cmd
55 [\fIOPTIONS\fR] \fICOMMAND\fR [\fIPARAMETERS\fR]
4141 \fBsync\fR \fIs3://BUCKET[/PREFIX] LOCAL_DIR\fR
4242 Restore a tree from S3 to local directory
4343 .TP
44 \fBcp\fR \fIs3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]\fR
45 \fBmv\fR \fIs3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]\fR
44 \fBcp\fR \fIs3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]\fR, \fBmv\fR \fIs3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]\fR
4645 Make a copy of a file (\fIcp\fR) or move a file (\fImv\fR).
4746 Destination can be in the same bucket with a different name
4847 or in another bucket with the same or different name.
4948 Adding \fI\-\-acl\-public\fR will make the destination object
5049 publicly accessible (see below).
50 .TP
51 \fBsetacl\fR \fIs3://BUCKET[/OBJECT]\fR
52 Modify \fIAccess control list\fR for Bucket or Files. Use with
53 \fI\-\-acl\-public\fR or \fI\-\-acl\-private\fR
5154 .TP
5255 \fBinfo\fR \fIs3://BUCKET[/OBJECT]\fR
5356 Get various information about a Bucket or Object
5558 \fBdu\fR \fI[s3://BUCKET[/PREFIX]]\fR
5659 Disk usage \- amount of data stored in S3
5760
61 .PP
62 Commands for CloudFront management
63 .TP
64 \fBcflist\fR
65 List CloudFront distribution points
66 .TP
67 \fBcfinfo\fR [\fIcf://DIST_ID\fR]
68 Display CloudFront distribution point parameters
69 .TP
70 \fBcfcreate\fR \fIs3://BUCKET\fR
71 Create CloudFront distribution point
72 .TP
73 \fBcfdelete\fR \fIcf://DIST_ID\fR
74 Delete CloudFront distribution point
75 .TP
76 \fBcfmodify\fR \fIcf://DIST_ID\fR
77 Change CloudFront distribution point parameters
78
5879 .SH OPTIONS
5980 .PP
6081 Some of the below specified options can have their default
6283 .B s3cmd
6384 config file (by default $HOME/.s3cfg). As it's a simple text file
6485 feel free to open it with your favorite text editor and do any
65 changes you like.
66 .PP
67 Config file related options.
86 changes you like.
87 .PP
88 \fIConfig file related options.\fR
6889 .TP
6990 \fB\-\-configure\fR
7091 Invoke interactive (re)configuration tool. Don't worry, you won't
7798 Dump current configuration after parsing config files
7899 and command line options and exit.
79100 .PP
80 Most options can have a default value set in the above specified config file.
81 .PP
82 Options specific to \fBsync\fR command:
101 \fIOptions specific for file transfer commands\fR (\fBsync\fR, \fBput\fR and \fBget\fR):
102 .TP
103 \fB\-n\fR, \fB\-\-dry\-run\fR
104 Only show what should be uploaded or downloaded but don't actually do it. May still perform S3 requests to get bucket
105 listings and other information though.
83106 .TP
84107 \fB\-\-delete\-removed\fR
85108 Delete remote objects with no corresponding local file when \fIsync\fRing \fBto\fR S3 or delete local files with no corresponding object in S3 when \fIsync\fRing \fBfrom\fR S3.
86109 .TP
87110 \fB\-\-no\-delete\-removed\fR
88 Don't delete remote objects. Default for 'sync' command.
111 Don't delete remote objects. Default for \fIsync\fR command.
89112 .TP
90113 \fB\-p\fR, \fB\-\-preserve\fR
91 Preserve filesystem attributes (mode, ownership, timestamps). Default for 'sync' command.
114 Preserve filesystem attributes (mode, ownership, timestamps). Default for \fIsync\fR command.
92115 .TP
93116 \fB\-\-no\-preserve\fR
94117 Don't store filesystem attributes with uploaded files.
95118 .TP
96119 \fB\-\-exclude GLOB\fR
97 Exclude files matching GLOB (a.k.a. shell-style wildcard) from \fIsync\fI. See SYNC COMMAND section for more information.
120 Exclude files matching GLOB (a.k.a. shell-style wildcard) from \fIsync\fR. See FILE TRANSFERS section and \fIhttp://s3tools.org/s3cmd-sync\fR for more information.
98121 .TP
99122 \fB\-\-exclude\-from FILE\fR
100123 Same as \-\-exclude but reads GLOBs from the given FILE instead of expecting them on the command line.
105128 \fB\-\-rexclude\-from FILE\fR
106129 Same as \-\-exclude\-from but works with REGEXPs.
107130 .TP
108 \fB\-\-debug\-syncmatch\fR or \fB\-\-debug\-exclude\fR (alias)
109 Display detailed information about matching file names against exclude\-rules as well as information about remote vs local filelists matching. S3cmd exits after performing the match and no actual transfer takes place.
110 .\".TP
111 .\"\fB\-n\fR, \fB\-\-dry\-run\fR
112 .\"Only show what would be uploaded or downloaded but don't actually do it. May still perform S3 requests to get bucket listings and other information though.
113 .PP
114 Options common for all commands (where it makes sense indeed):
115 .TP
116 \fB\-f\fR, \fB\-\-force\fR
117 Force overwrite and other dangerous operations.
131 \fB\-\-include=GLOB\fR, \fB\-\-include\-from=FILE\fR, \fB\-\-rinclude=REGEXP\fR, \fB\-\-rinclude\-from=FILE\fR
132 Filenames and paths matching GLOB or REGEXP will be included even if previously excluded by one of \-\-(r)exclude(\-from) patterns
118133 .TP
119134 \fB\-\-continue\fR
120 Continue getting a partially downloaded file (only for \fIget\fR command). This comes handy once download of a large file, say an ISO image, from a S3 bucket fails and a partially downloaded file is left on the disk. Unfortunately \fIput\fR command doesn't support restarting of failed upload due to Amazon S3 limitation.
121 .TP
122 \fB\-P\fR, \fB\-\-acl\-public\fR
123 Store objects with permissions allowing read for anyone.
124 .TP
125 \fB\-\-acl\-private\fR
126 Store objects with default ACL allowing access for you only.
127 .TP
128 \fB\-\-bucket\-location\fR=BUCKET_LOCATION
129 Specify datacentre where to create the bucket. Possible values are \fIUS\fR (default) or \fIEU\fR.
130 .TP
131 \fB\-e\fR, \fB\-\-encrypt\fR
132 Use GPG encryption to protect stored objects from unauthorized access.
135 Continue getting a partially downloaded file (only for \fIget\fR command). This comes in handy when a download of a large file, say an ISO image, from an S3 bucket fails and a partially downloaded file is left on the disk. Unfortunately the \fIput\fR command doesn't support restarting a failed upload due to Amazon S3 limitations.
136 .TP
137 \fB\-\-skip\-existing\fR
138 Skip over files that exist at the destination (only for \fIget\fR and \fIsync\fR commands).
133139 .TP
134140 \fB\-m\fR MIME/TYPE, \fB\-\-mime\-type\fR=MIME/TYPE
135141 Default MIME\-type to be set for objects stored.
139145 back to default MIME\(hyType as specified by \fB\-\-mime\-type\fR
140146 option
141147 .TP
148 \fB\-\-add\-header=NAME:VALUE\fR
149 Add a given HTTP header to the upload request. Can be used multiple times with different header names. For instance set 'Expires' or 'Cache-Control' headers (or both) using this options if you like.
150 .TP
151 \fB\-P\fR, \fB\-\-acl\-public\fR
152 Store objects with permissions allowing read for anyone. See \fIhttp://s3tools.org/s3cmd-public\fR for details and hints for storing publicly accessible files.
153 .TP
154 \fB\-\-acl\-private\fR
155 Store objects with default ACL allowing access for you only.
156 .TP
157 \fB\-e\fR, \fB\-\-encrypt\fR
158 Use GPG encryption to protect stored objects from unauthorized access. See \fIhttp://s3tools.org/s3cmd-public\fR for details about encryption.
159 .TP
160 \fB\-\-no\-encrypt\fR
161 Don't encrypt files.
162 .PP
163 \fIOptions for CloudFront commands\fR:
164 .PP
165 See \fIhttp://s3tools.org/s3cmd-cloudfront\fR for more details.
166 .TP
167 \fB\-\-enable\fR
168 Enable given CloudFront distribution (only for \fIcfmodify\fR command)
169 .TP
170 \fB\-\-disable\fR
171 Disable given CloudFront distribution (only for \fIcfmodify\fR command)
172 .TP
173 \fB\-\-cf\-add\-cname=CNAME\fR
174 Add given CNAME to a CloudFront distribution (only for \fIcfcreate\fR and \fIcfmodify\fR commands)
175 .TP
176 \fB\-\-cf\-remove\-cname=CNAME\fR
177 Remove given CNAME from a CloudFront distribution (only for \fIcfmodify\fR command)
178 .TP
179 \fB\-\-cf\-comment=COMMENT\fR
180 Set COMMENT for a given CloudFront distribution (only for \fIcfcreate\fR and \fIcfmodify\fR commands)
181 .PP
182 \fIOptions common for all commands\fR (where it makes sense indeed):
183 .TP
184 \fB\-r\fR, \fB\-\-recursive\fR
185 Recursive upload, download or removal. When used with \fIdel\fR it can
186 remove all the files in a bucket.
187 .TP
188 \fB\-f\fR, \fB\-\-force\fR
189 Force overwrite and other dangerous operations. Can be used to remove
190 a non\-empty bucket with \fIs3cmd rb \-\-force s3://bkt\fR
191 .TP
192 \fB\-\-bucket\-location\fR=BUCKET_LOCATION
193 Specify datacentre where to create the bucket. Possible values are \fIUS\fR (default) or \fIEU\fR.
194 .TP
142195 \fB\-H\fR, \fB\-\-human\-readable\-sizes\fR
143196 Print sizes in human readable form.
144 .\".TP
145 .\"\fB\-u\fR, \fB\-\-show\-uri\fR
146 .\"Show complete S3 URI in listings.
197 .TP
198 \fB\-\-list\-md5\fR
199 Include MD5 sums in bucket listings (only for \fIls\fR command).
147200 .TP
148201 \fB\-\-progress\fR, \fB\-\-no\-progress\fR
149202 Display or don't display progress meter. When running on TTY (e.g. console or xterm) the default is to display progress meter. If not on TTY (e.g. output is redirected somewhere or running from cron) the default is to not display progress meter.
150203 .TP
204 \fB\-\-encoding=ENCODING\fR
205 Override autodetected terminal and filesystem encoding (character set).
206 .TP
151207 \fB\-v\fR, \fB\-\-verbose\fR
152208 Enable verbose output.
153209 .TP
162218 .B s3cmd
163219 version and exit.
164220
165 .SH SYNC COMMAND
221 .SH FILE TRANSFERS
166222 One of the most powerful commands of \fIs3cmd\fR is \fBs3cmd sync\fR used for
167 synchronising complete directory trees to or from remote S3 storage.
223 synchronising complete directory trees to or from remote S3 storage. To some extent
224 \fBs3cmd put\fR and \fBs3cmd get\fR share a similar behaviour with \fBsync\fR.
168225 .PP
169226 Basic usage common in backup scenarios is as simple as:
170227 .nf
171 s3cmd sync /local/path s3://test-bucket/backup
228 s3cmd sync /local/path/ s3://test-bucket/backup/
172229 .fi
173230 .PP
174231 This command will find all files under /local/path directory and copy them
175232 to corresponding paths under s3://test-bucket/backup on the remote side.
176233 For example:
177234 .nf
178 /local/path\fB/file1.ext\fR \-> s3://test-bucket/backup\fB/file1.ext\fR
179 /local/path\fB/dir123/file2.bin\fR \-> s3://test-bucket/backup\fB/dir123/file2.bin\fR
180 .fi
181
235 /local/path/\fBfile1.ext\fR \-> s3://bucket/backup/\fBfile1.ext\fR
236 /local/path/\fBdir123/file2.bin\fR \-> s3://bucket/backup/\fBdir123/file2.bin\fR
237 .fi
238 .PP
239 However if the local path doesn't end with a slash the last directory's name
240 is used on the remote side as well. Compare these with the previous example:
241 .nf
242 s3cmd sync /local/path s3://test-bucket/backup/
243 .fi
244 will sync:
245 .nf
246 /local/\fBpath/file1.ext\fR \-> s3://bucket/backup/\fBpath/file1.ext\fR
247 /local/\fBpath/dir123/file2.bin\fR \-> s3://bucket/backup/\fBpath/dir123/file2.bin\fR
248 .fi
249 .PP
182250 To retrieve the files back from S3 use inverted syntax:
183251 .nf
184 s3cmd sync s3://test-bucket/backup/ /tmp/restore
252 s3cmd sync s3://test-bucket/backup/ /tmp/restore/
185253 .fi
186254 that will download files:
187255 .nf
188 s3://test-bucket/backup\fB/file1.ext\fR \-> /tmp/restore\fB/file1.ext\fR
189 s3://test-bucket/backup\fB/dir123/file2.bin\fR \-> /tmp/restore\fB/dir123/file2.bin\fR
190 .fi
191
192 For the purpose of \fB\-\-exclude\fR and \fB\-\-exclude\-from\fR matching the file name
193 \fIalways\fR begins with \fB/\fR (slash) and has the local or remote common part removed.
194 For instance in the previous example the file names tested against \-\-exclude list
195 will be \fB/\fRfile1.ext and \fB/\fRdir123/file2.bin, that is both with the leading
196 slash regardless whether you specified s3://test-bucket/backup or
197 s3://test-bucket/backup/ (note the trailing slash) on the command line.
198
199 Both \fB\-\-exclude\fR and \fB\-\-exclude\-from\fR work with shell-style wildcards (a.k.a. GLOB).
256 s3://bucket/backup/\fBfile1.ext\fR \-> /tmp/restore/\fBfile1.ext\fR
257 s3://bucket/backup/\fBdir123/file2.bin\fR \-> /tmp/restore/\fBdir123/file2.bin\fR
258 .fi
259 .PP
260 Without the trailing slash on source the behaviour is similar to
261 what has been demonstrated with upload:
262 .nf
263 s3cmd sync s3://test-bucket/backup /tmp/restore/
264 .fi
265 will download the files as:
266 .nf
267 s3://bucket/\fBbackup/file1.ext\fR \-> /tmp/restore/\fBbackup/file1.ext\fR
268 s3://bucket/\fBbackup/dir123/file2.bin\fR \-> /tmp/restore/\fBbackup/dir123/file2.bin\fR
269 .fi
270 .PP
271 All source file names, the bold ones above, are matched against \fBexclude\fR
272 rules and those that match are then re\-checked against \fBinclude\fR rules to see
273 whether they should be excluded or kept in the source list.
274 .PP
275 For the purpose of \fB\-\-exclude\fR and \fB\-\-include\fR matching only the
276 bold file names above are used. For instance only \fBpath/file1.ext\fR is tested
277 against the patterns, not \fI/local/\fBpath/file1.ext\fR
278 .PP
279 Both \fB\-\-exclude\fR and \fB\-\-include\fR work with shell-style wildcards (a.k.a. GLOB).
200280 For greater flexibility s3cmd provides regular-expression versions of the exclude and include options
201 named \fB\-\-rexclude\fR and \fB\-\-rexclude\-from\fR.
202
203 Run s3cmd with \fB\-\-debug\-syncmatch\fR to get detailed information
204 about matching file names against exclude rules.
205
206 For example to exclude all files with ".bin" extension with a REGEXP use:
207 .PP
208 \-\-rexclude '\.bin$'
209 .PP
210 to exclude all hidden files and subdirectories (i.e. those whose name begins with dot ".") use GLOB:
211 .PP
212 \-\-exclude '/.*'
213 .PP
214 on the other hand to exclude only hidden files but not hidden subdirectories use REGEXP:
215 .PP
216 \-\-rexclude '/\.[^/]*$'
217 .PP
218 etc...
219
220 .SH AUTHOR
221 Written by Michal Ludvig <michal@logix.cz>
222 .SH REPORTING BUGS
223 Report bugs to
224 .I s3tools\-general@lists.sourceforge.net
225 .SH COPYRIGHT
226 Copyright \(co 2007,2008 Michal Ludvig <http://www.logix.cz/michal>
227 .br
228 This is free software. You may redistribute copies of it under the terms of
229 the GNU General Public License version 2 <http://www.gnu.org/licenses/gpl.html>.
230 There is NO WARRANTY, to the extent permitted by law.
281 named \fB\-\-rexclude\fR and \fB\-\-rinclude\fR.
282 The options with ...\fB\-from\fR suffix (e.g. \-\-rinclude\-from) expect a filename as
283 an argument. Each line of such a file is treated as one pattern.
284 .PP
285 There is only one set of patterns built from all \fB\-\-(r)exclude(\-from)\fR options
286 and similarly for the include variant. Any file excluded with e.g. \-\-exclude can
287 be put back with a pattern found in the \-\-rinclude\-from list.
288 .PP
289 Run s3cmd with \fB\-\-dry\-run\fR to verify that your rules work as expected.
290 Use together with \fB\-\-debug\fR get detailed information
291 about matching file names against exclude and include rules.
292 .PP
293 For example to exclude all files with ".jpg" extension except those beginning with a number use:
294 .PP
295 \-\-exclude '*.jpg' \-\-rinclude '[0-9].*\.jpg'
296
231297 .SH SEE ALSO
232298 For the most up to date list of options run
233299 .B s3cmd \-\-help
234300 .br
235301 For more info about usage, examples and other related info visit project homepage at
236302 .br
237 .B http://s3tools.logix.cz
238
303 .B http://s3tools.org
304
305 .SH AUTHOR
306 Written by Michal Ludvig <michal@logix.cz>
307 .SH CONTACT, SUPPORT
308 The preferred way to get support is our mailing list:
309 .I s3tools\-general@lists.sourceforge.net
310 .SH REPORTING BUGS
311 Report bugs to
312 .I s3tools\-bugs@lists.sourceforge.net
313 .SH COPYRIGHT
314 Copyright \(co 2007,2008,2009 Michal Ludvig <http://www.logix.cz/michal>
315 .br
316 This is free software. You may redistribute copies of it under the terms of
317 the GNU General Public License version 2 <http://www.gnu.org/licenses/gpl.html>.
318 There is NO WARRANTY, to the extent permitted by law.