Imported Debian patch 1.5.0~rc1-1 (authored by Mikhail Gusarov, committed by Gianfranco Costamagna)
## Run 'svn propset svn:ignore -F .svnignore .' after you change this list
*.pyc
tst.*
MANIFEST
dist
build
.*.swp
s3cmd.1.gz
2011-06-06 Michal Ludvig <mludvig@logix.net.nz>

===== Migrated to GIT =====

No longer keeping ChangeLog up to date, use git log instead!

Two "official" repositories (both the same content):

* git://github.com/s3tools/s3cmd.git (primary)
* git://s3tools.git.sourceforge.net/gitroot/s3tools/s3cmd.git

2011-04-11 Michal Ludvig <mludvig@logix.net.nz>

* S3/S3Uri.py: Fixed cf:// uri parsing.
* S3/CloudFront.py: Don't fail if there are no cfinval
requests.

2011-04-11 Michal Ludvig <mludvig@logix.net.nz>

* S3/PkgInfo.py: Updated to 1.1.0-beta1
* NEWS: Updated.
* s3cmd.1: Regenerated.

2011-04-11 Michal Ludvig <mludvig@logix.net.nz>

* S3/Config.py: Increase socket_timeout from 10 secs to 5 mins.

2011-04-10 Michal Ludvig <mludvig@logix.net.nz>

* s3cmd, S3/CloudFront.py, S3/S3Uri.py: Support for checking
status of CF Invalidation Requests [cfinvalinfo].
* s3cmd, S3/CloudFront.py, S3/Config.py: Support for CloudFront
invalidation using [sync --cf-invalidate] command.
* S3/Utils.py: getDictFromTree() now recurses into
sub-trees.

2011-03-30 Michal Ludvig <mludvig@logix.net.nz>

* S3/CloudFront.py: Fix warning with Python 2.7
* S3/CloudFront.py: Cmd._get_dist_name_for_bucket() moved to
CloudFront class.

2011-01-13 Michal Ludvig <mludvig@logix.net.nz>

* s3cmd, S3/FileLists.py: Move file/object listing functions
to S3/FileLists.py

2011-01-09 Michal Ludvig <mludvig@logix.net.nz>

* Released version 1.0.0
----------------------

* S3/PkgInfo.py: Updated to 1.0.0
* NEWS: Updated.

2011-01-02 Michal Ludvig <mludvig@logix.net.nz>

* s3cmd: Improved r457 (Don't crash when file disappears
before checking MD5).
* s3cmd, s3cmd.1, format-manpage.pl: Improved --help text
and manpage.
* s3cmd: Removed explicit processing of --follow-symlinks
(it is caught by the default / main loop).

2010-12-24 Michal Ludvig <mludvig@logix.net.nz>

* s3cmd: Set 10s socket timeout for read()/write().
* s3cmd: Added --(no-)check-md5 for [sync].
* run-tests.py, testsuite.tar.gz: Added testsuite for
the above.
* NEWS: Document the above.
* s3cmd: Don't crash when file disappears before
checking MD5.

2010-12-09 Michal Ludvig <mludvig@logix.net.nz>

* Released version 1.0.0-rc2
--------------------------

* S3/PkgInfo.py: Updated to 1.0.0-rc2
* NEWS, TODO, s3cmd.1: Updated.

2010-11-13 Michal Ludvig <mludvig@logix.net.nz>

* s3cmd: Added support for remote-to-remote sync.
(Based on patch from Sundar Raman - thanks!)
* run-tests.py: Testsuite for the above.

2010-11-12 Michal Ludvig <mludvig@logix.net.nz>

* s3cmd: Fixed typo in "s3cmd du" error path.

2010-11-12 Michal Ludvig <mludvig@logix.net.nz>

* format-manpage.pl: new manpage auto-formatter
* s3cmd.1: Updated using the above helper script
* setup.py: Warn if manpage is too old.

2010-10-27 Michal Ludvig <mludvig@logix.net.nz>

* run-tests.py, testsuite.tar.gz: Keep the testsuite in
SVN as a tarball. There are too many "strange" things
in the directory for it to be kept in SVN.

2010-10-27 Michal Ludvig <mludvig@logix.net.nz>

* TODO: Updated.
* upload-to-sf.sh: Updated for new SF.net system

2010-10-26 Michal Ludvig <mludvig@logix.net.nz>

* Released version 1.0.0-rc1
--------------------------

* S3/PkgInfo.py: Updated to 1.0.0-rc1
* NEWS, TODO: Updated.

2010-10-26 Michal Ludvig <mludvig@logix.net.nz>

* s3cmd, S3/CloudFront.py, S3/Config.py: Added support
for CloudFront DefaultRootObject. Thanks to Luke Andrew.

2010-10-25 Michal Ludvig <mludvig@logix.net.nz>

* s3cmd: Improved 'fixbucket' command. Thanks to Srinivasa
Moorthy.
* s3cmd: Read config file even if User Profile directory on
Windows contains non-ascii symbols. Thx Slava Vishnyakov

2010-10-25 Michal Ludvig <mludvig@logix.net.nz>

* s3cmd: Don't fail when a local node is a directory
and a file was expected (e.g. if /etc/passwd
happens to be a directory).

2010-10-25 Michal Ludvig <mludvig@logix.net.nz>

* s3cmd, S3/S3.py: Ignore inaccessible (and missing) files
on upload.
* run-tests.py: Extended [sync] test to verify correct
handling of inaccessible files.
* testsuite/permission-tests: New testsuite files.

2010-10-24 Michal Ludvig <mludvig@logix.net.nz>

* S3/S3.py: "Stringify" all headers. Httplib should do
it but some Python 2.7 users reported problems that should
now be fixed.
* run-tests.py: Fixed test #6

2010-07-25 Aaron Maxwell <amax@resymbol.net>

* S3/Config.py, testsuite/etc/, run-tests.py, s3cmd.1, s3cmd:
Option to follow local symlinks for sync and
put (--follow-symlinks option), including tests and documentation
* run-tests.py: --bucket-prefix option, to allow different
developers to run tests in their own sandbox

2010-07-08 Michal Ludvig <mludvig@logix.net.nz>

* run-tests.py, testsuite/crappy-file-name.tar.gz:
Updated testsuite, work around a problem with [s3cmd cp]
when the source file contains '?' or '\x7f'
(where the inability to copy '?' is especially annoying).

2010-07-08 Michal Ludvig <mludvig@logix.net.nz>

* S3/Utils.py, S3/S3Uri.py: Fixed names after moving
functions between modules.

2010-06-29 Timothee Groleau <kde@timotheegroleau.com>

* S3/ACL.py: Fix isAnonRead method on Grantees
* ChangeLog: Update name of contributor for Timothee Groleau

2010-06-13 Michal Ludvig <mludvig@logix.net.nz>

* s3cmd, S3/CloudFront.py: Both [accesslog] and [cfmodify]
access logging can now be disabled with --no-access-logging

2010-06-13 Michal Ludvig <mludvig@logix.net.nz>

* S3/CloudFront.py: Allow s3:// URI as well as cf:// URI
for most CloudFront-related commands.

2010-06-12 Michal Ludvig <mludvig@logix.net.nz>

* s3cmd, S3/CloudFront.py, S3/Config.py: Support access
logging for CloudFront distributions.
* S3/S3.py, S3/Utils.py: Moved some functions to Utils.py
to make them available to CloudFront.py
* NEWS: Document the above.

2010-05-27 Michal Ludvig <mludvig@logix.net.nz>

* S3/S3.py: Fix bucket listing for buckets with
over 1000 prefixes. (contributed by Timothee Groleau)
* S3/S3.py: Fixed code formatting.

2010-05-21 Michal Ludvig <mludvig@logix.net.nz>

* s3cmd, S3/S3.py: Added support for bucket locations
outside US/EU (i.e. us-west-1 and ap-southeast-1 as of now).

2010-05-21 Michal Ludvig <mludvig@logix.net.nz>

* s3cmd, S3/S3.py, S3/Config.py: Added --reduced-redundancy
switch for Reduced Redundancy Storage.

2010-05-20 Michal Ludvig <mludvig@logix.net.nz>

* s3cmd, S3/ACL.py, S3/Config.py: Support for --acl-grant
and --acl-revoke (contributed by Timothee Groleau)
* s3cmd: Couple of fixes on top of the above commit.
* s3cmd: Pre-parse ACL parameters in OptionS3ACL()

2010-05-20 Michal Ludvig <mludvig@logix.net.nz>

* S3/Exceptions.py, S3/S3.py: Some HTTP_400 exceptions
are retriable.

2010-03-19 Michal Ludvig <mludvig@logix.net.nz>

* s3cmd, S3/ACL.py: Print all ACLs for a Grantee
(one Grantee can have multiple different Grant entries)

2010-03-19 Michal Ludvig <mludvig@logix.net.nz>

* s3cmd: Enable bucket-level ACL setting
* s3cmd, S3/AccessLog.py, ...: Added [accesslog] command.
* s3cmd: Fix imports from S3.Utils

2009-12-10 Michal Ludvig <mludvig@logix.net.nz>

* s3cmd: Path separator conversion on Windows hosts.

2009-10-08 Michal Ludvig <mludvig@logix.net.nz>

* Released version 0.9.9.91
-------------------------

* S3/PkgInfo.py: Updated to 0.9.9.91
* NEWS: News for 0.9.9.91

2009-10-08 Michal Ludvig <mludvig@logix.net.nz>

* S3/S3.py: fixed reference to _max_retries.

2009-10-06 Michal Ludvig <mludvig@logix.net.nz>

* Released version 0.9.9.90
-------------------------

* S3/PkgInfo.py: Updated to 0.9.9.90
* NEWS: News for 0.9.9.90

2009-10-06 Michal Ludvig <mludvig@logix.net.nz>

* S3/S3.py: Introduce throttling on upload only after
second failure. I.e. first retry at full speed.
* TODO: Updated with new ideas.

2009-06-02 Michal Ludvig <michal@logix.cz>

* s3cmd: New [fixbucket] command for fixing invalid object
names in a given Bucket. For instance names with  in
them (not sure how people manage to upload them but they do).
* S3/S3.py, S3/Utils.py, S3/Config.py: Support methods for
the above, plus advise user to run 'fixbucket' when XML parsing
fails.
* NEWS: Updated.

2009-05-29 Michal Ludvig <michal@logix.cz>

* S3/Utils.py: New function replace_nonprintables()
* s3cmd: Filter local filenames through the above function
to avoid problems with uploaded filenames containing invalid
XML entities, eg 
* S3/S3.py: Warn if a non-printables char is passed to
urlencode_string() - they should have been replaced earlier
in the processing.
* run-tests.py, TODO, NEWS: Updated.
* testsuite/crappy-file-name.tar.gz: Tarball with a crappy-named
file. Untar for the testsuite.

2009-05-29 Michal Ludvig <michal@logix.cz>

* testsuite/blahBlah/*: Added files needed for run-tests.py

2009-05-28 Michal Ludvig <michal@logix.cz>

* S3/Utils.py (dateS3toPython): Be more relaxed about
timestamp formats.

2009-05-28 Michal Ludvig <michal@logix.cz>

* s3cmd, run-test.py, TODO, NEWS: Added --dry-run
and --exclude/--include for [setacl].
* s3cmd, run-test.py, TODO, NEWS: Added --dry-run
and --exclude/--include for [del].

2009-05-28 Michal Ludvig <michal@logix.cz>

* s3cmd: Support for recursive [cp] and [mv], including
multiple-source arguments, --include/--exclude,
--dry-run, etc.
* run-tests.py: Tests for the above.
* S3/S3.py: Preserve metadata (eg ACL or MIME type)
during [cp] and [mv].
* NEWS, TODO: Updated.

2009-05-28 Michal Ludvig <michal@logix.cz>

* run-tests.py: Added --verbose mode.

2009-05-27 Michal Ludvig <michal@logix.cz>

* NEWS: Added info about --verbatim.
* TODO: Added more tasks.

2009-05-27 Michal Ludvig <michal@logix.cz>

* S3/SortedDict.py: Add case-sensitive mode.
* s3cmd, S3/S3.py, S3/Config.py: Use SortedDict() in
case-sensitive mode to avoid dropping filenames
differing only in capitalisation
* run-tests.py: Testsuite for the above.
* NEWS: Updated.

2009-03-20 Michal Ludvig <michal@logix.cz>

* S3/S3.py: Re-sign requests before retrial to avoid
RequestTimeTooSkewed errors on failed long-running
uploads.
BTW 'request' now has its own class S3Request.

2009-03-04 Michal Ludvig <michal@logix.cz>

* s3cmd, S3/Config.py, S3/S3.py: Support for --verbatim.

2009-02-25 Michal Ludvig <michal@logix.cz>

* s3cmd: Fixed "put file.ext s3://bkt" (ie just the bucket name).
* s3cmd: Fixed reporting of ImportError of S3 modules.
* s3cmd: Fixed Error: global name 'real_filename' is not defined

2009-02-24 Michal Ludvig <michal@logix.cz>

* s3cmd: New command [sign]
* S3/Utils.py: New function sign_string()
* S3/S3.py, S3/CloudFront.py: Use sign_string().
* NEWS: Updated.

2009-02-17 Michal Ludvig <michal@logix.cz>

* Released version 0.9.9
----------------------

* S3/PkgInfo.py: Updated to 0.9.9
* NEWS: Compile a big news list for 0.9.9

2009-02-17 Michal Ludvig <michal@logix.cz>

* s3cmd.1: Document all the new options and commands.
* s3cmd, S3/Config.py: Updated some help texts. Removed
option --debug-syncmatch along the way (because --dry-run
with --debug is good enough).
* TODO: Updated.

2009-02-16 Michal Ludvig <michal@logix.cz>

* s3cmd: Check Python version >= 2.4 as soon as possible.

2009-02-14 Michal Ludvig <michal@logix.cz>

* s3cmd, S3/Config.py, S3/S3.py: Added --add-header option.
* NEWS: Documented --add-header.
* run-tests.py: Fixed for new messages.

2009-02-14 Michal Ludvig <michal@logix.cz>

* README: Updated for 0.9.9
* s3cmd, S3/PkgInfo.py, s3cmd.1: Replaced project
URLs with http://s3tools.org
* NEWS: Improved message.

2009-02-12 Michal Ludvig <michal@logix.cz>

* s3cmd: Added --list-md5 for 'ls' command.
* S3/Config.py: New setting list_md5

2009-02-12 Michal Ludvig <michal@logix.cz>

* s3cmd: Set Content-Length header for requests with 'body'.
* s3cmd: And send it for requests with no body as well...

2009-02-02 Michal Ludvig <michal@logix.cz>

* Released version 0.9.9-rc3
--------------------------

* S3/PkgInfo.py, NEWS: Updated for 0.9.9-rc3

2009-02-01 Michal Ludvig <michal@logix.cz>

* S3/Exceptions.py: Correct S3Exception.__str__() to
avoid crash in S3Error() subclass. Reported by '~t2~'.
* NEWS: Updated.

2009-01-30 Michal Ludvig <michal@logix.cz>

* Released version 0.9.9-rc2
--------------------------

* S3/PkgInfo.py, NEWS, TODO: Updated for 0.9.9-rc2

2009-01-30 Michal Ludvig <michal@logix.cz>

* s3cmd: Under some circumstances s3cmd crashed
when put/get/sync had 0 files to transmit. Fixed now.

2009-01-28 Michal Ludvig <michal@logix.cz>

* s3cmd: Output 'delete:' in --dry-run only when
used together with --delete-removed. Otherwise
the user will think that without --dry-run it
would really delete the files.

2009-01-27 Michal Ludvig <michal@logix.cz>

* Released version 0.9.9-rc1
--------------------------

* S3/PkgInfo.py, NEWS, TODO: Updated for 0.9.9-rc1

2009-01-26 Michal Ludvig <michal@logix.cz>

* Merged CloudFront support from branches/s3cmd-airlock
See the ChangeLog in that branch for details.

2009-01-25 W. Tell <w_tell -at- sourceforge>

* s3cmd: Implemented --include and friends.

2009-01-25 Michal Ludvig <michal@logix.cz>

* s3cmd: Enabled --dry-run and --exclude for 'put' and 'get'.
* S3/Exceptions.py: Remove DeprecationWarning about
BaseException.message in Python 2.6
* s3cmd: Rewritten gpg_command() to use subprocess.Popen()
instead of os.popen4() deprecated in 2.6
* TODO: Note about failing GPG.

2009-01-22 Michal Ludvig <michal@logix.cz>

* S3/Config.py: guess_mime_type = True (will affect new
installations only).

2009-01-22 Michal Ludvig <michal@logix.cz>

* Released version 0.9.9-pre5
---------------------------

* S3/PkgInfo.py, NEWS, TODO: Updated for 0.9.9-pre5

2009-01-22 Michal Ludvig <michal@logix.cz>

* run-tests.py: Updated paths for the new sync
semantics.
* s3cmd, S3/S3.py: Small fixes to make testsuite happy.

2009-01-21 Michal Ludvig <michal@logix.cz>

* s3cmd: Migrated 'sync' local->remote to the new
scheme with fetch_{local,remote}_list().
Enabled --dry-run for 'sync'.

2009-01-20 Michal Ludvig <michal@logix.cz>

* s3cmd: Migrated 'sync' remote->local to the new
scheme with fetch_{local,remote}_list().
Changed fetch_remote_list() to return dict() compatible
with fetch_local_list().
Re-implemented --exclude / --include processing.
* S3/Utils.py: functions for parsing RFC822 dates (for HTTP
header responses).
* S3/Config.py: placeholders for --include.

2009-01-15 Michal Ludvig <michal@logix.cz>

* s3cmd, S3/S3Uri.py, NEWS: Support for recursive 'put'.

2009-01-13 Michal Ludvig <michal@logix.cz>

* TODO: Updated.
* s3cmd: renamed (fetch_)remote_keys to remote_list and
a few other renames for consistency.

2009-01-08 Michal Ludvig <michal@logix.cz>

* S3/S3.py: Some errors during file upload were incorrectly
interpreted as MD5 mismatch. (bug #2384990)
* S3/ACL.py: Move attributes from class to instance.
* run-tests.py: Tests for ACL.
* s3cmd: Minor message changes.

2009-01-07 Michal Ludvig <michal@logix.cz>

* s3cmd: New command 'setacl'.
* S3/S3.py: Implemented set_acl().
* S3/ACL.py: Fill in <Owner/> tag in ACL XML.
* NEWS: Info about 'setacl'.

2009-01-07 Michal Ludvig <michal@logix.cz>

* s3cmd: Factored remote_keys generation from cmd_object_get()
to fetch_remote_keys().
* s3cmd: Display Public URL in 'info' for AnonRead objects.
* S3/ACL.py: Generate XML from a current list of Grantees

2009-01-07 Michal Ludvig <michal@logix.cz>

* S3/ACL.py: Keep ACL internally as a list of 'Grantee' objects.
* S3/Utils.py: Fix crash in stripNameSpace() when the XML has no NS.

2009-01-07 Michal Ludvig <michal@logix.cz>

* S3/ACL.py: New object for handling ACL issues.
* S3/S3.py: Moved most of S3.get_acl() to ACL class.
* S3/Utils.py: Reworked XML helpers - remove XMLNS before
parsing the input XML to avoid having all Tags prefixed
with {XMLNS} by ElementTree.

2009-01-03 Michal Ludvig <michal@logix.cz>

* s3cmd: Don't fail when neither $HOME nor %USERPROFILE% is set.
(fixes #2483388)

2009-01-01 W. Tell <w_tell -at- sourceforge>

* S3/S3.py, S3/Utils.py: Use 'hashlib' instead of md5 and sha
modules to avoid Python 2.6 warnings.

2008-12-31 Michal Ludvig <michal@logix.cz>

* Released version 0.9.9-pre4
---------------------------

2008-12-31 Michal Ludvig <michal@logix.cz>

* s3cmd: Reworked internal handling of unicode vs encoded filenames.
Should replace unknown characters with '?' instead of bailing out.

2008-12-31 Michal Ludvig <michal@logix.cz>

* run-tests.py: Display system encoding in use.
* s3cmd: Print a nice error message when --exclude-from
file is not readable.
* S3/PkgInfo.py: Bumped up version to 0.9.9-pre4
* S3/Exceptions.py: Added missing imports.
* NEWS: Updated.
* testsuite: reorganised UTF-8 files, added GBK encoding files,
moved encoding-specific files to 'tar.gz' archives, removed
unicode dir.
* run-tests.py: Adapted to the above change.
* run-tests.sh: removed.
* testsuite/exclude.encodings: Added.
* run-tests.py: Don't assume utf-8, use preferred encoding
instead.
* s3cmd, S3/Utils.py, S3/Exceptions.py, S3/Progress.py,
S3/Config.py, S3/S3.py: Added --encoding switch and
Config.encoding variable. Don't assume utf-8 for filesystem
and terminal output anymore.
* s3cmd: Avoid ZeroDivisionError on fast links.
* s3cmd: Unicodised all info() output.

2008-12-30 Michal Ludvig <michal@logix.cz>

* s3cmd: Replace unknown Unicode characters with '?'
to avoid UnicodeEncodeError's. Also make all output strings
unicode.
* run-tests.py: Exit on failed test. Fixed order of tests.

2008-12-29 Michal Ludvig <michal@logix.cz>

* TODO, NEWS: Updated
* s3cmd: Improved wildcard get.
* run-tests.py: Improved testsuite, added parameters support
to run only specified tests, cleaned up win/posix integration.
* S3/Exceptions.py: Python 2.4 doesn't automatically set
Exception.message.

2008-12-29 Michal Ludvig <michal@logix.cz>

* s3cmd, run-tests.py: Make it work on Windows.

2008-12-26 Michal Ludvig <michal@logix.cz>

* setup.cfg: Remove explicit install prefix. That should fix
Mac OS X and Windows "setup.py install" runs.

2008-12-22 Michal Ludvig <michal@logix.cz>

* s3cmd, S3/S3.py, S3/Progress.py: Display "[X of Y]"
in --progress mode.
* s3cmd, S3/Config.py: Implemented recursive [get].
Added --skip-existing option for [get] and [sync].

2008-12-17 Michal Ludvig <michal@logix.cz>

* TODO: Updated

2008-12-14 Michal Ludvig <michal@logix.cz>

* S3/Progress.py: Restructured import Utils to avoid import
conflicts.

2008-12-12 Michal Ludvig <michal@logix.cz>

* s3cmd: Better Exception output. Print sys.path on ImportError,
don't print backtrace on KeyboardInterrupt

2008-12-11 Michal Ludvig <michal@logix.cz>

* s3cmd: Support for multiple sources in 'get' command.

2008-12-10 Michal Ludvig <michal@logix.cz>

* TODO: Updated list.
* s3cmd: Don't display download/upload completed message
in --progress mode.
* S3/S3.py: Pass src/dst names down to Progress class.
* S3/Progress.py: added new class ProgressCR - apparently
ProgressANSI doesn't work on MacOS-X (and perhaps elsewhere).
* S3/Config.py: Default progress meter is now ProgressCR
* s3cmd: Updated email address for reporting bugs.

2008-12-02 Michal Ludvig <michal@logix.cz>

* s3cmd, S3/S3.py, NEWS: Support for (non-)recursive 'ls'

2008-12-01 Michal Ludvig <michal@logix.cz>

* Released version 0.9.9-pre3
---------------------------

* S3/PkgInfo.py: Bumped up version to 0.9.9-pre3

2008-12-01 Michal Ludvig <michal@logix.cz>

* run-tests.py: Added a lot of new tests.
* testsuite/etc/logo.png: New file.

2008-11-30 Michal Ludvig <michal@logix.cz>

* S3/S3.py: object_get() -- make start_position argument optional.

2008-11-29 Michal Ludvig <michal@logix.cz>

* s3cmd: Delete local files with "sync --delete-removed"

2008-11-25 Michal Ludvig <michal@logix.cz>

* s3cmd, S3/Progress.py: Fixed Unicode output in Progress meter.
* s3cmd: Fixed 'del --recursive' without prefix (i.e. all objects).
* TODO: Updated list.
* upload-to-sf.sh: Helper script.
* S3/PkgInfo.py: Bumped up version to 0.9.9-pre2+svn

2008-11-24 Michal Ludvig <michal@logix.cz>

* Released version 0.9.9-pre2
---------------------------

* S3/PkgInfo.py: Bumped up version to 0.9.9-pre2
* NEWS: Added 0.9.9-pre2

2008-11-24 Michal Ludvig <michal@logix.cz>

* s3cmd, s3cmd.1, S3/S3.py: Whether the progress meter is displayed
by default depends on whether we're on a TTY (console) or not.

2008-11-24 Michal Ludvig <michal@logix.cz>

* s3cmd: Fixed 'get' conflict.
* s3cmd.1, TODO: Document 'mv' command.

2008-11-24 Michal Ludvig <michal@logix.cz>

* S3/S3.py, s3cmd, S3/Config.py, s3cmd.1: Added --continue for
'get' command, improved 'get' failure resiliency.
* S3/Progress.py: Support for progress meter not starting at 0.
* S3/S3.py: improved retrying in send_request() and send_file()

2008-11-24 Michal Ludvig <michal@logix.cz>

* s3cmd, S3/S3.py, NEWS: "s3cmd mv" for moving objects

2008-11-24 Michal Ludvig <michal@logix.cz>

* S3/Utils.py: Common XML parser.
* s3cmd, S3/Exceptions.py: Print info message on Error.

2008-11-21 Michal Ludvig <michal@logix.cz>

* s3cmd: Support for 'cp' command.
* S3/S3.py: Added S3.object.copy() method.
* s3cmd.1: Document 'cp' command.
* NEWS: Let everyone know ;-)
Thanks Andrew Ryan for a patch proposal!
https://sourceforge.net/forum/forum.php?thread_id=2346987&forum_id=618865

2008-11-17 Michal Ludvig <michal@logix.cz>

* S3/Progress.py: Two progress meter implementations.
* S3/Config.py, s3cmd: New --progress / --no-progress parameters
and Config() members.
* S3/S3.py: Call Progress() in send_file()/recv_file()
* NEWS: Let everyone know ;-)

2008-11-16 Michal Ludvig <michal@logix.cz>

* NEWS: Fetch 0.9.8.4 release news from 0.9.8.x branch.

2008-11-16 Michal Ludvig <michal@logix.cz>

Merge from 0.9.8.x branch, rel 251:
* S3/S3.py: Adjusting previous commit (orig 249) - it's not a good idea
to retry ALL failures. Especially not those code=4xx where AmazonS3
servers are not happy with our requests.
Merge from 0.9.8.x branch, rel 249:
* S3/S3.py, S3/Exceptions.py: Re-issue failed requests in S3.send_request()
Merge from 0.9.8.x branch, rel 248:
* s3cmd: Don't leak open filehandles in sync. Thx Patrick Linskey for report.
Merge from 0.9.8.x branch, rel 247:
* s3cmd: Re-raise the right exception.
Merge from 0.9.8.x branch, rel 246:
* s3cmd, S3/S3.py, S3/Exceptions.py: Don't abort 'sync' or 'put' on files
that can't be opened (e.g. Permission denied). Print a warning and skip over
them instead.
Merge from 0.9.8.x branch, rel 245:
* S3/S3.py: Escape parameters in strings. Fixes sync to and
ls of directories with spaces. (Thx Lubomir Rintel from Fedora Project)
Merge from 0.9.8.x branch, rel 244:
* s3cmd: Unicode brainfuck again. This time force all output
in UTF-8, will see how many complaints we'll get...

2008-09-16 Michal Ludvig <michal@logix.cz>

* NEWS: s3cmd 0.9.8.4 released from branches/0.9.8.x SVN branch.

2008-09-16 Michal Ludvig <michal@logix.cz>

* S3/S3.py: Don't run into ZeroDivisionError when speed counter
returns 0s elapsed on upload/download file.

2008-09-15 Michal Ludvig <michal@logix.cz>

* s3cmd, S3/S3.py, S3/Utils.py, S3/S3Uri.py, S3/Exceptions.py:
Yet another Unicode round. Unicodised all command line arguments
before processing.

2008-09-15 Michal Ludvig <michal@logix.cz>

* S3/S3.py: "s3cmd mb" can create upper-case buckets again
in US. Non-US (e.g. EU) bucket names must conform to strict
DNS-rules.
* S3/S3Uri.py: Display public URLs correctly for non-DNS buckets.

2008-09-10 Michal Ludvig <michal@logix.cz>

* testsuite, run-tests.py: Added testsuite with first few tests.

2008-09-10 Michal Ludvig <michal@logix.cz>

* s3cmd, S3/S3Uri.py, S3/S3.py: All internal representations of
S3Uri()s are Unicode (i.e. not UTF-8 but type()==unicode). It
still doesn't work on non-UTF8 systems though.

2008-09-04 Michal Ludvig <michal@logix.cz>

* s3cmd: Rework UTF-8 output to keep sys.stdout untouched (or it'd
break 's3cmd get' to stdout for binary files).

2008-09-03 Michal Ludvig <michal@logix.cz>

* s3cmd, S3/S3.py, S3/Config.py: Removed --use-old-connect-method
again. Autodetect the need for old connect method instead.

2008-09-03 Michal Ludvig <michal@logix.cz>

* s3cmd, S3/S3.py: Make --verbose mode more useful and default
mode less verbose.

2008-09-03 Michal Ludvig <michal@logix.cz>

* s3cmd, S3/Config.py: [rb] Allow removal of non-empty buckets
with --force.
[mb, rb] Allow multiple arguments, i.e. create or remove
multiple buckets at once.
[del] Perform recursive removal with --recursive (or -r).

2008-09-01 Michal Ludvig <michal@logix.cz>

* s3cmd: Refuse 'sync' together with '--encrypt'.
* S3/S3.py: removed object_{get,put,delete}_uri() functions
and made object_{get,put,delete}() accept URI instead of
bucket/object parameters.

2008-09-01 Michal Ludvig <michal@logix.cz>

* S3/PkgInfo.py: Bumped up version to 0.9.9-pre1

2008-09-01 Michal Ludvig <michal@logix.cz>

* s3cmd, S3/S3.py, S3/Config.py: Allow access to upper-case
named buckets again with --use-old-connect-method
(uses http://s3.amazonaws.com/bucket/object instead of
http://bucket.s3.amazonaws.com/object)

2008-08-19 Michal Ludvig <michal@logix.cz>

* s3cmd: Always output UTF-8, even on output redirects.

2008-08-01 Michal Ludvig <michal@logix.cz>

* TODO: Add some items

2008-07-29 Michal Ludvig <michal@logix.cz>

* Released version 0.9.8.3
------------------------

2008-07-29 Michal Ludvig <michal@logix.cz>

* S3/PkgInfo.py: Bumped up version to 0.9.8.3
* NEWS: Added 0.9.8.3

2008-07-29 Michal Ludvig <michal@logix.cz>

840 | * S3/Utils.py (hash_file_md5): Hash files in 32kB chunks | |
841 | instead of reading them whole into memory first, to avoid | |
842 | OOM on large files. | |
843 | ||
844 | 2008-07-07 Michal Ludvig <michal@logix.cz> | |
845 | ||
846 | * s3cmd.1: Couple of syntax fixes from Mikhail Gusarov | |
847 | ||
848 | 2008-07-03 Michal Ludvig <michal@logix.cz> | |
849 | ||
850 | * Released version 0.9.8.2 | |
851 | ------------------------ | |
852 | ||
853 | 2008-07-03 Michal Ludvig <michal@logix.cz> | |
854 | ||
855 | * S3/PkgInfo.py: Bumped up version to 0.9.8.2 | |
856 | * NEWS: Added 0.9.8.2 | |
857 | * s3cmd: Print version info on 'unexpected error' output. | |
858 | ||
859 | 2008-06-30 Michal Ludvig <michal@logix.cz> | |
860 | ||
861 | * S3/S3.py: Re-upload when Amazon doesn't send ETag | |
862 | in PUT response. It happens from time to time for | |
863 | unknown reasons. Thanks to "Burtc" for the report and | |
864 | "hermzz" for the fix. | |
865 | ||
866 | 2008-06-27 Michal Ludvig <michal@logix.cz> | |
867 | ||
868 | * Released version 0.9.8.1 | |
869 | ------------------------ | |
870 | ||
871 | 2008-06-27 Michal Ludvig <michal@logix.cz> | |
872 | ||
873 | * S3/PkgInfo.py: Bumped up version to 0.9.8.1 | |
874 | * NEWS: Added 0.9.8.1 | |
875 | * s3cmd: make 'cfg' global | |
876 | * run-tests.sh: Sort-of testsuite | |
877 | ||
878 | 2008-06-23 Michal Ludvig <michal@logix.cz> | |
879 | ||
880 | * Released version 0.9.8 | |
881 | ---------------------- | |
882 | ||
883 | 2008-06-23 Michal Ludvig <michal@logix.cz> | |
884 | ||
885 | * S3/PkgInfo.py: Bumped up version to 0.9.8 | |
886 | * NEWS: Added 0.9.8 | |
887 | * TODO: Removed completed tasks | |
888 | ||
889 | 2008-06-23 Michal Ludvig <michal@logix.cz> | |
890 | ||
891 | * s3cmd: Last-minute compatibility fixes for Python 2.4 | |
892 | * s3cmd, s3cmd.1: --debug-exclude is an alias for --debug-syncmatch | |
893 | * s3cmd: Don't require $HOME env variable to be set. | |
894 | Fixes #2000133 | |
895 | * s3cmd: Wrapped all execution in a try/except block | |
896 | to catch all exceptions and ask for a report. | |
897 | ||
898 | 2008-06-18 Michal Ludvig <michal@logix.cz> | |
899 | ||
900 | * S3/PkgInfo.py: Version 0.9.8-rc3 | |
901 | ||
902 | 2008-06-18 Michal Ludvig <michal@logix.cz> | |
903 | ||
904 | * S3/S3.py: Bucket name can't contain upper-case letters (S3/DNS limitation). | |
905 | ||
906 | 2008-06-12 Michal Ludvig <michal@logix.cz> | |
907 | ||
908 | * S3/PkgInfo.py: Version 0.9.8-rc2 | |
909 | ||
910 | 2008-06-12 Michal Ludvig <michal@logix.cz> | |
911 | ||
912 | * s3cmd, s3cmd.1: Added GLOB (shell-style wildcard) exclude, renamed | |
913 | the original regexp-style --exclude to --rexclude | |
914 | ||
915 | 2008-06-11 Michal Ludvig <michal@logix.cz> | |
916 | ||
917 | * S3/PkgInfo.py: Version 0.9.8-rc1 | |
918 | ||
919 | 2008-06-11 Michal Ludvig <michal@logix.cz> | |
920 | ||
921 | * s3cmd: Remove python 2.5 specific code (try/except/finally | |
922 | block) and make s3cmd compatible with python 2.4 again. | |
923 | * s3cmd, S3/Config.py, s3cmd.1: Added --exclude-from and --debug-syncmatch | |
924 | switches for sync. | |
925 | ||
926 | 2008-06-10 Michal Ludvig <michal@logix.cz> | |
927 | ||
928 | * s3cmd: Added --exclude switch for sync. | |
929 | * s3cmd.1, NEWS: Document --exclude | |
930 | ||
931 | 2008-06-05 Michal Ludvig <michal@logix.cz> | |
932 | ||
933 | * Released version 0.9.7 | |
934 | ---------------------- | |
935 | ||
936 | 2008-06-05 Michal Ludvig <michal@logix.cz> | |
937 | ||
938 | * S3/PkgInfo.py: Bumped up version to 0.9.7 | |
939 | * NEWS: Added 0.9.7 | |
940 | * TODO: Removed completed tasks | |
941 | * s3cmd, s3cmd.1: Updated help texts, | |
942 | removed --dry-run option as it's not implemented. | |
943 | ||
944 | 2008-06-05 Michal Ludvig <michal@logix.cz> | |
945 | ||
946 | * S3/Config.py: Store more file attributes in sync to S3. | |
947 | * s3cmd: Make sync remote2local more error-resilient. | |
948 | ||
949 | 2008-06-04 Michal Ludvig <michal@logix.cz> | |
950 | ||
951 | * s3cmd: Implemented cmd_sync_remote2local() for restoring | |
952 | backup from S3 to a local filesystem | |
953 | * S3/S3.py: S3.object_get_uri() now requires writable stream | |
954 | and not a path name. | |
955 | * S3/Utils.py: Added mkdir_with_parents() | |
956 | ||
957 | 2008-06-04 Michal Ludvig <michal@logix.cz> | |
958 | ||
959 | * s3cmd: Refactored cmd_sync() in preparation | |
960 | for remote->local sync. | |
961 | ||
962 | 2008-04-30 Michal Ludvig <michal@logix.cz> | |
963 | ||
964 | * s3db, S3/SimpleDB.py: Implemented almost full SimpleDB API. | |
965 | ||
966 | 2008-04-29 Michal Ludvig <michal@logix.cz> | |
967 | ||
968 | * s3db, S3/SimpleDB.py: Initial support for Amazon SimpleDB. | |
969 | For now implements ListDomains() call and most of the | |
970 | infrastructure required for request creation. | |
971 | ||
972 | 2008-04-29 Michal Ludvig <michal@logix.cz> | |
973 | ||
974 | * S3/Exceptions.py: Exceptions moved out of S3.S3 | |
975 | * S3/SortedDict.py: rewritten from scratch to preserve | |
976 | case of keys while still sorting case-insensitively. | |
977 | ||
978 | 2008-04-28 Michal Ludvig <michal@logix.cz> | |
979 | ||
980 | * S3/S3.py: send_file() now computes MD5 sum of the file | |
981 | being uploaded, compares with ETag returned by Amazon | |
982 | and retries upload if they don't match. | |
983 | ||
984 | 2008-03-05 Michal Ludvig <michal@logix.cz> | |
985 | ||
986 | * s3cmd, S3/S3.py, S3/Utils.py: Throttle upload speed and retry | |
987 | when upload failed. | |
988 | Report download/upload speed and time elapsed. | |
989 | ||
990 | 2008-02-28 Michal Ludvig <michal@logix.cz> | |
991 | ||
992 | * Released version 0.9.6 | |
993 | ---------------------- | |
994 | ||
995 | 2008-02-28 Michal Ludvig <michal@logix.cz> | |
996 | ||
997 | * S3/PkgInfo.py: bumped up version to 0.9.6 | |
998 | * NEWS: What's new in 0.9.6 | |
999 | ||
1000 | 2008-02-27 Michal Ludvig <michal@logix.cz> | |
1001 | ||
1002 | * s3cmd, s3cmd.1: Updated help and man page. | |
1003 | * S3/S3.py, S3/Utils.py, s3cmd: Support for 's3cmd info' command. | |
1004 | * s3cmd: Fix crash when 'sync'ing files with unresolvable owner uid/gid. | |
1005 | * S3/S3.py, S3/Utils.py: open files in binary mode (otherwise windows | |
1006 | users have problems). | |
1007 | * S3/S3.py: modify 'x-amz-date' format (problems reported on MacOS X). | |
1008 | Thanks Jon Larkowski for fix. | |
1009 | ||
1010 | 2008-02-27 Michal Ludvig <michal@logix.cz> | |
1011 | ||
1012 | * TODO: Updated wishlist. | |
1013 | ||
1014 | 2008-02-11 Michal Ludvig <michal@logix.cz> | |
1015 | ||
1016 | * S3/S3.py: Properly follow RedirectPermanent responses for EU buckets | |
1017 | * S3/S3.py: Create public buckets with -P (#1837328) | |
1018 | * S3/S3.py, s3cmd: Correctly display public URL on uploads. | |
1019 | * S3/S3.py, S3/Config.py: Support for MIME types. Both | |
1020 | default and guessing. Fixes bug #1872192 (Thanks Martin Herr) | |
1021 | ||
1022 | 2007-11-13 Michal Ludvig <michal@logix.cz> | |
1023 | ||
1024 | * Released version 0.9.5 | |
1025 | ---------------------- | |
1026 | ||
1027 | 2007-11-13 Michal Ludvig <michal@logix.cz> | |
1028 | ||
1029 | * S3/S3.py: Support for buckets stored in Europe, access now | |
1030 | goes via <bucket>.s3.amazonaws.com where possible. | |
1031 | ||
1032 | 2007-11-12 Michal Ludvig <michal@logix.cz> | |
1033 | ||
1034 | * s3cmd: Support for storing file attributes (like ownership, | |
1035 | mode, etc) in sync operation. | |
1036 | * s3cmd, S3/S3.py: New command 'ib' to get information about | |
1037 | bucket (only 'LocationConstraint' supported for now). | |
1038 | ||
1039 | 2007-10-01 Michal Ludvig <michal@logix.cz> | |
1040 | ||
1041 | * s3cmd: Fix typo in argument name (patch | |
1042 | from Kim-Minh KAPLAN, SF #1804808) | |
1043 | ||
1044 | 2007-09-25 Michal Ludvig <michal@logix.cz> | |
1045 | ||
1046 | * s3cmd: Exit with error code on error (patch | |
1047 | from Kim-Minh KAPLAN, SF #1800583) | |
1048 | ||
1049 | 2007-09-25 Michal Ludvig <michal@logix.cz> | |
1050 | ||
1051 | * S3/S3.py: Don't fail if bucket listing doesn't have | |
1052 | <IsTruncated> node. | |
1053 | * s3cmd: Create ~/.s3cfg with 0600 permissions. | |
1054 | ||
1055 | 2007-09-13 Michal Ludvig <michal@logix.cz> | |
1056 | ||
1057 | * s3cmd: Improved 'sync' | |
1058 | * S3/S3.py: Support for buckets with over 1000 objects. | |
1059 | ||
1060 | 2007-09-03 Michal Ludvig <michal@logix.cz> | |
1061 | ||
1062 | * s3cmd: Small tweaks to --configure workflow. | |
1063 | ||
1064 | 2007-09-02 Michal Ludvig <michal@logix.cz> | |
1065 | ||
1066 | * s3cmd: Initial support for 'sync' operation. For | |
1067 | now only local->s3 direction. This version doesn't | |
1068 | work well with non-ASCII filenames and doesn't support | |
1069 | encryption. | |
1070 | ||
1071 | 2007-08-24 Michal Ludvig <michal@logix.cz> | |
1072 | ||
1073 | * s3cmd, S3/Util.py: More ElementTree imports cleanup | |
1074 | ||
1075 | 2007-08-19 Michal Ludvig <michal@logix.cz> | |
1076 | ||
1077 | * NEWS: Added news for 0.9.5 | |
1078 | ||
1079 | 2007-08-19 Michal Ludvig <michal@logix.cz> | |
1080 | ||
1081 | * s3cmd: Better handling of multiple arguments for put, get and del | |
1082 | ||
1083 | 2007-08-14 Michal Ludvig <michal@logix.cz> | |
1084 | ||
1085 | * setup.py, S3/Utils.py: Try import xml.etree.ElementTree | |
1086 | or elementtree.ElementTree module. | |
1087 | ||
1088 | 2007-08-14 Michal Ludvig <michal@logix.cz> | |
1089 | ||
1090 | * s3cmd.1: Add info about --encrypt parameter. | |
1091 | ||
1092 | 2007-08-14 Michal Ludvig <michal@logix.cz> | |
1093 | ||
1094 | * S3/PkgInfo.py: Bump up version to 0.9.5-pre | |
1095 | ||
1096 | 2007-08-13 Michal Ludvig <michal@logix.cz> | |
1097 | ||
1098 | * Released version 0.9.4 | |
1099 | ---------------------- | |
1100 | ||
1101 | 2007-08-13 Michal Ludvig <michal@logix.cz> | |
1102 | ||
1103 | * S3/S3.py: Added function urlencode_string() that encodes | |
1104 | non-ASCII characters in object names before sending them to S3. | |
1105 | ||
1106 | 2007-08-13 Michal Ludvig <michal@logix.cz> | |
1107 | ||
1108 | * README: Updated Amazon S3 pricing overview | |
1109 | ||
1110 | 2007-08-13 Michal Ludvig <michal@logix.cz> | |
1111 | ||
1112 | * s3cmd, S3/Config.py, S3/S3.py: HTTPS support | |
1113 | ||
1114 | 2007-07-20 Michal Ludvig <michal@logix.cz> | |
1115 | ||
1116 | * setup.py: Check correct Python version and ElementTree availability. | |
1117 | ||
1118 | 2007-07-05 Michal Ludvig <michal@logix.cz> | |
1119 | ||
1120 | * s3cmd: --configure support for Proxy | |
1121 | * S3/S3.py: HTTP proxy support from | |
1122 | John D. Rowell <jdrowell@exerciseyourbrain.com> | |
1123 | ||
1124 | 2007-06-19 Michal Ludvig <michal@logix.cz> | |
1125 | ||
1126 | * setup.py: Check for S3CMD_PACKAGING and don't install | |
1127 | manpages and docs if defined. | |
1128 | * INSTALL: Document the above change. | |
1129 | * MANIFEST.in: Include uncompressed manpage | |
1130 | ||
1131 | 2007-06-17 Michal Ludvig <michal@logix.cz> | |
1132 | ||
1133 | * s3cmd: Added encryption key support to --configure | |
1134 | * S3/PkgInfo.py: Bump up version to 0.9.4-pre | |
1135 | * setup.py: Cleaned up some rpm-specific stuff that | |
1136 | caused problems to Debian packager Mikhail Gusarov | |
1137 | * setup.cfg: Removed [bdist_rpm] section | |
1138 | * MANIFEST.in: Include S3/*.py | |
1139 | ||
1140 | 2007-06-16 Michal Ludvig <michal@logix.cz> | |
1141 | ||
1142 | * s3cmd.1: Syntax fixes from Mikhail Gusarov <dottedmag@dottedmag.net> | |
1143 | ||
1144 | 2007-05-27 Michal Ludvig <michal@logix.cz> | |
1145 | ||
1146 | * Support for on-the-fly GPG encryption. | |
1147 | ||
1148 | 2007-05-26 Michal Ludvig <michal@logix.cz> | |
1149 | ||
1150 | * s3cmd.1: Add info about "s3cmd du" command. | |
1151 | ||
1152 | 2007-05-26 Michal Ludvig <michal@logix.cz> | |
1153 | ||
1154 | * Released version 0.9.3 | |
1155 | ---------------------- | |
1156 | ||
1157 | 2007-05-26 Michal Ludvig <michal@logix.cz> | |
1158 | ||
1159 | * s3cmd: Patch from Basil Shubin <basil.shubin@gmail.com> | |
1160 | adding support for "s3cmd du" command. | |
1161 | * s3cmd: Modified output format of "s3cmd du" to conform | |
1162 | with unix "du". | |
1163 | * setup.cfg: Require Python 2.5 in RPM. Otherwise it needs | |
1164 | to require additional python modules (e.g. ElementTree) | |
1165 | which may have different names in different distros. It's | |
1166 | indeed still possible to manually install s3cmd with | |
1167 | Python 2.4 and appropriate modules. | |
1168 | ||
1169 | 2007-04-09 Michal Ludvig <michal@logix.cz> | |
1170 | ||
1171 | * Released version 0.9.2 | |
1172 | ---------------------- | |
1173 | ||
1174 | 2007-04-09 Michal Ludvig <michal@logix.cz> | |
1175 | ||
1176 | * s3cmd.1: Added manpage | |
1177 | * Updated infrastructure files to create "better" | |
1178 | distribution archives. | |
1179 | ||
1180 | 2007-03-26 Michal Ludvig <michal@logix.cz> | |
1181 | ||
1182 | * setup.py, S3/PkgInfo.py: Move package info out of setup.py | |
1183 | * s3cmd: new parameter --version | |
1184 | * s3cmd, S3/S3Uri.py: Output public HTTP URL for objects | |
1185 | stored with Public ACL. | |
1186 | ||
1187 | 2007-02-28 Michal Ludvig <michal@logix.cz> | |
1188 | ||
1189 | * s3cmd: Verify supplied accesskey and secretkey | |
1190 | in interactive configuration path. | |
1191 | * S3/Config.py: Hide access key and secret key | |
1192 | from debug output. | |
1193 | * S3/S3.py: Modify S3Error exception to work | |
1194 | in python 2.4 (i.e. don't assume Exception is | |
1195 | a new-style class). | |
1196 | * s3cmd: Updated for the above change. | |
1197 | ||
1198 | 2007-02-19 Michal Ludvig <michal@logix.cz> | |
1199 | ||
1200 | * NEWS, INSTALL, README, setup.py: Added | |
1201 | more documentation. | |
1202 | ||
1203 | 2007-02-19 Michal Ludvig <michal@logix.cz> | |
1204 | ||
1205 | * S3/S3.py, s3cmd: New feature - allow "get" to stdout | |
1206 | ||
1207 | 2007-02-19 Michal Ludvig <michal@logix.cz> | |
1208 | ||
1209 | * S3/S3fs.py: Removed (development moved to branch s3fs-devel). | |
1210 | ||
1211 | 2007-02-08 Michal Ludvig <michal@logix.cz> | |
1212 | ||
1213 | * S3/S3fs.py: | |
1214 | - Implemented mknod() | |
1215 | - Can create directory structure | |
1216 | - Rewritten to use SQLite3. Currently can create | |
1217 | the filesystem, and a root inode. | |
1218 | ||
1219 | 2007-02-07 Michal Ludvig <michal@logix.cz> | |
1220 | ||
1221 | * s3cmd (from /s3py:74): Renamed SVN top-level project | |
1222 | s3py to s3cmd | |
1223 | ||
1224 | 2007-02-07 Michal Ludvig <michal@logix.cz> | |
1225 | ||
1226 | * setup.cfg: Only require Python 2.4, not 2.5 | |
1227 | * S3/Config.py: Removed show_uri - no longer needed, | |
1228 | it's now default | |
1229 | ||
1230 | 2007-02-07 Michal Ludvig <michal@logix.cz> | |
1231 | ||
1232 | * setup.py | |
1233 | - Version 0.9.1 | |
1234 | ||
1235 | 2007-02-07 Michal Ludvig <michal@logix.cz> | |
1236 | ||
1237 | * s3cmd: Change all "exit()" calls to "sys.exit()" | |
1238 | and allow for python 2.4 | |
1239 | * S3/S3.py: Removed dependency on hashlib -> allow for python 2.4 | |
1240 | ||
1241 | 2007-01-27 Michal Ludvig <michal@logix.cz> | |
1242 | ||
1243 | * S3/S3.py, S3/S3Uri.py: Case insensitive regex in S3Uri.py | |
1244 | ||
1245 | 2007-01-26 Michal Ludvig <michal@logix.cz> | |
1246 | ||
1247 | * S3/S3fs.py: Added support for storing/loading inodes. | |
1248 | No data yet however. | |
1249 | ||
1250 | 2007-01-26 Michal Ludvig <michal@logix.cz> | |
1251 | ||
1252 | * S3/S3fs.py: Initial version of S3fs module. | |
1253 | Can create filesystem via "S3fs.mkfs()" | |
1254 | ||
1255 | 2007-01-26 Michal Ludvig <michal@logix.cz> | |
1256 | ||
1257 | * S3/BidirMap.py, S3/Config.py, S3/S3.py, S3/S3Uri.py, | |
1258 | S3/SortedDict.py, S3/Utils.py, s3cmd: Added headers with | |
1259 | copyright to all files | |
1260 | * S3/S3.py, S3/S3Uri.py: Removed S3.compose_uri(), introduced | |
1261 | S3UriS3.compose_uri() instead. | |
1262 | ||
1263 | 2007-01-26 Michal Ludvig <michal@logix.cz> | |
1264 | ||
1265 | * S3/S3.py, S3/S3Uri.py, s3cmd: | |
1266 | - Converted all users of parse_uri to S3Uri class API | |
1267 | - Removed "cp" command again. Will have to use 'put' | |
1268 | and 'get' for now. | |
1269 | ||
1270 | 2007-01-25 Michal Ludvig <michal@logix.cz> | |
1271 | ||
1272 | * S3/S3Uri.py: New module S3/S3Uri.py | |
1273 | * S3/S3.py, s3cmd: Converted "put" operation to use | |
1274 | the new S3Uri class. | |
1275 | ||
1276 | 2007-01-24 Michal Ludvig <michal@logix.cz> | |
1277 | ||
1278 | * S3/S3.py | |
1279 | * s3cmd | |
1280 | - Added 'cp' command | |
1281 | - Renamed parse_s3_uri to parse_uri (this will go away anyway) | |
1282 | ||
1283 | 2007-01-19 Michal Ludvig <michal@logix.cz> | |
1284 | ||
1285 | * setup.cfg | |
1286 | * setup.py | |
1287 | - Include README into tarballs | |
1288 | ||
1289 | 2007-01-19 Michal Ludvig <michal@logix.cz> | |
1290 | ||
1291 | * README | |
1292 | - Added comprehensive README file | |
1293 | ||
1294 | 2007-01-19 Michal Ludvig <michal@logix.cz> | |
1295 | ||
1296 | * setup.cfg | |
1297 | * setup.py | |
1298 | - Added configuration for setup.py sdist | |
1299 | ||
1300 | 2007-01-19 Michal Ludvig <michal@logix.cz> | |
1301 | ||
1302 | * S3/Config.py | |
1303 | * s3cmd | |
1304 | - Added interactive configurator (--configure) | |
1305 | - Added config dumper (--dump-config) | |
1306 | - Improved --help output | |
1307 | ||
1308 | 2007-01-19 Michal Ludvig <michal@logix.cz> | |
1309 | ||
1310 | * setup.cfg | |
1311 | * setup.py | |
1312 | Added info for building RPM packages. | |
1313 | ||
1314 | 2007-01-18 Michal Ludvig <michal@logix.cz> | |
1315 | ||
1316 | * S3/Config.py | |
1317 | * S3/S3.py | |
1318 | * s3cmd | |
1319 | Moved class Config from S3/S3.py to S3/Config.py | |
1320 | ||
1321 | 2007-01-18 Michal Ludvig <michal@logix.cz> | |
1322 | ||
1323 | * S3/Config.py (from /s3py/trunk/S3/ConfigParser.py:47) | |
1324 | * S3/ConfigParser.py | |
1325 | * S3/S3.py | |
1326 | Renamed S3/ConfigParser.py to S3/Config.py | |
1327 | ||
1328 | 2007-01-18 Michal Ludvig <michal@logix.cz> | |
1329 | ||
1330 | * s3cmd | |
1331 | Added info about homepage | |
1332 | ||
1333 | 2007-01-17 Michal Ludvig <michal@logix.cz> | |
1334 | ||
1335 | * S3/S3.py | |
1336 | * s3cmd | |
1337 | - Use prefix for listings if specified. | |
1338 | - List all commands in --help | |
1339 | ||
1340 | 2007-01-16 Michal Ludvig <michal@logix.cz> | |
1341 | ||
1342 | * S3/S3.py | |
1343 | * s3cmd | |
1344 | Major rework of Config class: | |
1345 | - Renamed from AwsConfig to Config | |
1346 | - Converted to Singleton (see Config.__new__() and an article on | |
1347 | Wikipedia) | |
1348 | - No more explicit listing of options - use introspection to get them | |
1349 | (class variables that of type str, int or bool that don't start with | |
1350 | underscore) | |
1351 | - Check values read from config file and verify their type. | |
1352 | ||
1353 | Added OptionMimeType and -m/-M options. Not yet implemented | |
1354 | functionality in the rest of S3/S3.py | |
1355 | ||
1356 | 2007-01-15 Michal Ludvig <michal@logix.cz> | |
1357 | ||
1358 | * S3/S3.py | |
1359 | * s3cmd | |
1360 | - Merged list-buckets and bucket-list-objects operations into | |
1361 | a single 'ls' command. | |
1362 | - New parameter -P for uploading publicly readable objects | |
1363 | ||
1364 | 2007-01-14 Michal Ludvig <michal@logix.cz> | |
1365 | ||
1366 | * s3.py | |
1367 | * setup.py | |
1368 | Renamed s3.py to s3cmd (take 2) | |
1369 | ||
1370 | 2007-01-14 Michal Ludvig <michal@logix.cz> | |
1371 | ||
1372 | * s3cmd (from /s3py/trunk/s3.py:45) | |
1373 | Renamed s3.py to s3cmd | |
1374 | ||
1375 | 2007-01-14 Michal Ludvig <michal@logix.cz> | |
1376 | ||
1377 | * S3 | |
1378 | * S3/S3.py | |
1379 | * s3.py | |
1380 | * setup.py | |
1381 | All classes from s3.py go to S3/S3.py | |
1382 | Added setup.py | |
1383 | ||
1384 | 2007-01-14 Michal Ludvig <michal@logix.cz> | |
1385 | ||
1386 | * s3.py | |
1387 | Minor fix S3.utils -> S3.Utils | |
1388 | ||
1389 | 2007-01-14 Michal Ludvig <michal@logix.cz> | |
1390 | ||
1391 | * .svnignore | |
1392 | * BidirMap.py | |
1393 | * ConfigParser.py | |
1394 | * S3 | |
1395 | * S3/BidirMap.py (from /s3py/trunk/BidirMap.py:35) | |
1396 | * S3/ConfigParser.py (from /s3py/trunk/ConfigParser.py:38) | |
1397 | * S3/SortedDict.py (from /s3py/trunk/SortedDict.py:35) | |
1398 | * S3/Utils.py (from /s3py/trunk/utils.py:39) | |
1399 | * S3/__init__.py | |
1400 | * SortedDict.py | |
1401 | * s3.py | |
1402 | * utils.py | |
1403 | Moved modules to their own package | |
1404 | ||
1405 | 2007-01-12 Michal Ludvig <michal@logix.cz> | |
1406 | ||
1407 | * s3.py | |
1408 | Added "del" command | |
1409 | Converted all (?) commands to accept s3-uri | |
1410 | Added -u/--show-uri parameter | |
1411 | ||
1412 | 2007-01-11 Michal Ludvig <michal@logix.cz> | |
1413 | ||
1414 | * s3.py | |
1415 | Verify MD5 on received files | |
1416 | Improved upload of multiple files | |
1417 | Initial S3-URI support (more tbd) | |
1418 | ||
1419 | 2007-01-11 Michal Ludvig <michal@logix.cz> | |
1420 | ||
1421 | * s3.py | |
1422 | Minor fixes: | |
1423 | - store names of parsed files in AwsConfig | |
1424 | - Print total size with upload/download | |
1425 | ||
1426 | 2007-01-11 Michal Ludvig <michal@logix.cz> | |
1427 | ||
1428 | * s3.py | |
1429 | * utils.py | |
1430 | Added support for sending and receiving files. | |
1431 | ||
1432 | 2007-01-11 Michal Ludvig <michal@logix.cz> | |
1433 | ||
1434 | * ConfigParser.py | |
1435 | * s3.py | |
1436 | List all Objects in all Buckets command | |
1437 | Yet another logging improvement | |
1438 | Version check for Python 2.5 or higher | |
1439 | ||
1440 | 2007-01-11 Michal Ludvig <michal@logix.cz> | |
1441 | ||
1442 | * ConfigParser.py | |
1443 | * s3.py | |
1444 | * utils.py | |
1445 | Added ConfigParser | |
1446 | Improved setting logging levels | |
1447 | It can now quite reliably list buckets and objects | |
1448 | ||
1449 | 2007-01-11 Michal Ludvig <michal@logix.cz> | |
1450 | ||
1451 | * .svnignore | |
1452 | Added ignore list | |
1453 | ||
1454 | 2007-01-11 Michal Ludvig <michal@logix.cz> | |
1455 | ||
1456 | * .svnignore | |
1457 | * BidirMap.py | |
1458 | * SortedDict.py | |
1459 | * s3.py | |
1460 | * utils.py | |
1461 | Initial import |
0 | 0 | Installation of s3cmd package |
1 | 1 | ============================= |
2 | 2 | |
3 | Author: | |
4 | Michal Ludvig <michal@logix.cz> | |
3 | Copyright: | |
4 | TGRMN Software and contributors | |
5 | 5 | |
6 | 6 | S3tools / S3cmd project homepage: |
7 | http://s3tools.sourceforge.net | |
8 | ||
9 | Amazon S3 homepage: | |
10 | http://aws.amazon.com/s3 | |
7 | http://s3tools.org | |
11 | 8 | |
12 | 9 | !!! |
13 | 10 | !!! Please consult README file for setup, usage and examples! |
16 | 13 | Package formats |
17 | 14 | --------------- |
18 | 15 | S3cmd is distributed in two formats: |
16 | ||
19 | 17 | 1) Prebuilt RPM file - should work on most RPM-based |
20 | 18 | distributions |
19 | ||
21 | 20 | 2) Source .tar.gz package |
22 | ||
23 | 21 | |
24 | 22 | |
25 | 23 | Installation of RPM package |
35 | 33 | distribution documentation on ways to solve the problem. |
36 | 34 | |
37 | 35 | |
38 | Installation of source .tar.gz package | |
39 | -------------------------------------- | |
36 | Installation from zip file | |
37 | -------------------------- | |
40 | 38 | There are three options to run s3cmd from source tarball: |
41 | 39 | |
42 | 1) S3cmd program as distributed in s3cmd-X.Y.Z.tar.gz | |
43 | can be run directly from where you untar'ed the package. | |
40 | 1) The S3cmd program, as distributed in s3cmd-X.Y.Z.tar.gz | |
41 | on SourceForge or in master.zip on GitHub, can be run directly | |
42 | from where you unzipped the package. | |
44 | 43 | |
45 | 44 | 2) Or you may want to move "s3cmd" file and "S3" subdirectory |
46 | 45 | to some other path. Make sure that "S3" subdirectory ends up |
50 | 49 | you will have $HOME/bin/s3cmd file and $HOME/bin/S3 directory |
51 | 50 | with a number of support files. |
52 | 51 | |
53 | 3) The cleanest and most recommended approach is to run | |
52 | 3) The cleanest and most recommended approach is to unzip the | |
53 | package and then just run: | |
54 | 54 | |
55 | 55 | python setup.py install |
56 | 56 | |
64 | 64 | Again, consult your distribution documentation on how to |
65 | 65 | find out the actual package name and how to install it then. |
66 | 66 | |
67 | Note that on Linux, if you are not "root" already, you may | |
68 | need to run: | |
69 | ||
70 | sudo python setup.py install | |
67 | 71 | |
68 | Note to distibutions package maintainers | |
72 | instead. | |
73 | ||
74 | ||
75 | Note to distributions package maintainers | |
69 | 76 | ---------------------------------------- |
70 | 77 | Define shell environment variable S3CMD_PACKAGING=yes if you |
71 | 78 | don't want setup.py to install manpages and doc files. You'll |
85 | 92 | |
86 | 93 | s3tools-general@lists.sourceforge.net |
87 | 94 | |
88 | For more information refer to: | |
89 | * S3cmd / S3tools homepage at http://s3tools.sourceforge.net | |
95 | or visit the S3cmd / S3tools homepage at: | |
90 | 96 | |
91 | Enjoy! | |
92 | ||
93 | Michal Ludvig | |
94 | * michal@logix.cz | |
95 | * http://www.logix.cz/michal | |
96 | ||
97 | http://s3tools.org |
0 | GNU GENERAL PUBLIC LICENSE | |
1 | Version 2, June 1991 | |
2 | ||
3 | Copyright (C) 1989, 1991 Free Software Foundation, Inc., | |
4 | 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA | |
5 | Everyone is permitted to copy and distribute verbatim copies | |
6 | of this license document, but changing it is not allowed. | |
7 | ||
8 | Preamble | |
9 | ||
10 | The licenses for most software are designed to take away your | |
11 | freedom to share and change it. By contrast, the GNU General Public | |
12 | License is intended to guarantee your freedom to share and change free | |
13 | software--to make sure the software is free for all its users. This | |
14 | General Public License applies to most of the Free Software | |
15 | Foundation's software and to any other program whose authors commit to | |
16 | using it. (Some other Free Software Foundation software is covered by | |
17 | the GNU Lesser General Public License instead.) You can apply it to | |
18 | your programs, too. | |
19 | ||
20 | When we speak of free software, we are referring to freedom, not | |
21 | price. Our General Public Licenses are designed to make sure that you | |
22 | have the freedom to distribute copies of free software (and charge for | |
23 | this service if you wish), that you receive source code or can get it | |
24 | if you want it, that you can change the software or use pieces of it | |
25 | in new free programs; and that you know you can do these things. | |
26 | ||
27 | To protect your rights, we need to make restrictions that forbid | |
28 | anyone to deny you these rights or to ask you to surrender the rights. | |
29 | These restrictions translate to certain responsibilities for you if you | |
30 | distribute copies of the software, or if you modify it. | |
31 | ||
32 | For example, if you distribute copies of such a program, whether | |
33 | gratis or for a fee, you must give the recipients all the rights that | |
34 | you have. You must make sure that they, too, receive or can get the | |
35 | source code. And you must show them these terms so they know their | |
36 | rights. | |
37 | ||
38 | We protect your rights with two steps: (1) copyright the software, and | |
39 | (2) offer you this license which gives you legal permission to copy, | |
40 | distribute and/or modify the software. | |
41 | ||
42 | Also, for each author's protection and ours, we want to make certain | |
43 | that everyone understands that there is no warranty for this free | |
44 | software. If the software is modified by someone else and passed on, we | |
45 | want its recipients to know that what they have is not the original, so | |
46 | that any problems introduced by others will not reflect on the original | |
47 | authors' reputations. | |
48 | ||
49 | Finally, any free program is threatened constantly by software | |
50 | patents. We wish to avoid the danger that redistributors of a free | |
51 | program will individually obtain patent licenses, in effect making the | |
52 | program proprietary. To prevent this, we have made it clear that any | |
53 | patent must be licensed for everyone's free use or not licensed at all. | |
54 | ||
55 | The precise terms and conditions for copying, distribution and | |
56 | modification follow. | |
57 | ||
58 | GNU GENERAL PUBLIC LICENSE | |
59 | TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION | |
60 | ||
61 | 0. This License applies to any program or other work which contains | |
62 | a notice placed by the copyright holder saying it may be distributed | |
63 | under the terms of this General Public License. The "Program", below, | |
64 | refers to any such program or work, and a "work based on the Program" | |
65 | means either the Program or any derivative work under copyright law: | |
66 | that is to say, a work containing the Program or a portion of it, | |
67 | either verbatim or with modifications and/or translated into another | |
68 | language. (Hereinafter, translation is included without limitation in | |
69 | the term "modification".) Each licensee is addressed as "you". | |
70 | ||
71 | Activities other than copying, distribution and modification are not | |
72 | covered by this License; they are outside its scope. The act of | |
73 | running the Program is not restricted, and the output from the Program | |
74 | is covered only if its contents constitute a work based on the | |
75 | Program (independent of having been made by running the Program). | |
76 | Whether that is true depends on what the Program does. | |
77 | ||
78 | 1. You may copy and distribute verbatim copies of the Program's | |
79 | source code as you receive it, in any medium, provided that you | |
80 | conspicuously and appropriately publish on each copy an appropriate | |
81 | copyright notice and disclaimer of warranty; keep intact all the | |
82 | notices that refer to this License and to the absence of any warranty; | |
83 | and give any other recipients of the Program a copy of this License | |
84 | along with the Program. | |
85 | ||
86 | You may charge a fee for the physical act of transferring a copy, and | |
87 | you may at your option offer warranty protection in exchange for a fee. | |
88 | ||
89 | 2. You may modify your copy or copies of the Program or any portion | |
90 | of it, thus forming a work based on the Program, and copy and | |
91 | distribute such modifications or work under the terms of Section 1 | |
92 | above, provided that you also meet all of these conditions: | |
93 | ||
94 | a) You must cause the modified files to carry prominent notices | |
95 | stating that you changed the files and the date of any change. | |
96 | ||
97 | b) You must cause any work that you distribute or publish, that in | |
98 | whole or in part contains or is derived from the Program or any | |
99 | part thereof, to be licensed as a whole at no charge to all third | |
100 | parties under the terms of this License. | |
101 | ||
102 | c) If the modified program normally reads commands interactively | |
103 | when run, you must cause it, when started running for such | |
104 | interactive use in the most ordinary way, to print or display an | |
105 | announcement including an appropriate copyright notice and a | |
106 | notice that there is no warranty (or else, saying that you provide | |
107 | a warranty) and that users may redistribute the program under | |
108 | these conditions, and telling the user how to view a copy of this | |
109 | License. (Exception: if the Program itself is interactive but | |
110 | does not normally print such an announcement, your work based on | |
111 | the Program is not required to print an announcement.) | |
112 | ||
113 | These requirements apply to the modified work as a whole. If | |
114 | identifiable sections of that work are not derived from the Program, | |
115 | and can be reasonably considered independent and separate works in | |
116 | themselves, then this License, and its terms, do not apply to those | |
117 | sections when you distribute them as separate works. But when you | |
118 | distribute the same sections as part of a whole which is a work based | |
119 | on the Program, the distribution of the whole must be on the terms of | |
120 | this License, whose permissions for other licensees extend to the | |
121 | entire whole, and thus to each and every part regardless of who wrote it. | |
122 | ||
123 | Thus, it is not the intent of this section to claim rights or contest | |
124 | your rights to work written entirely by you; rather, the intent is to | |
125 | exercise the right to control the distribution of derivative or | |
126 | collective works based on the Program. | |
127 | ||
128 | In addition, mere aggregation of another work not based on the Program | |
129 | with the Program (or with a work based on the Program) on a volume of | |
130 | a storage or distribution medium does not bring the other work under | |
131 | the scope of this License. | |
132 | ||
133 | 3. You may copy and distribute the Program (or a work based on it, | |
134 | under Section 2) in object code or executable form under the terms of | |
135 | Sections 1 and 2 above provided that you also do one of the following: | |
136 | ||
137 | a) Accompany it with the complete corresponding machine-readable | |
138 | source code, which must be distributed under the terms of Sections | |
139 | 1 and 2 above on a medium customarily used for software interchange; or, | |
140 | ||
141 | b) Accompany it with a written offer, valid for at least three | |
142 | years, to give any third party, for a charge no more than your | |
143 | cost of physically performing source distribution, a complete | |
144 | machine-readable copy of the corresponding source code, to be | |
145 | distributed under the terms of Sections 1 and 2 above on a medium | |
146 | customarily used for software interchange; or, | |
147 | ||
148 | c) Accompany it with the information you received as to the offer | |
149 | to distribute corresponding source code. (This alternative is | |
150 | allowed only for noncommercial distribution and only if you | |
151 | received the program in object code or executable form with such | |
152 | an offer, in accord with Subsection b above.) | |
153 | ||
154 | The source code for a work means the preferred form of the work for | |
155 | making modifications to it. For an executable work, complete source | |
156 | code means all the source code for all modules it contains, plus any | |
157 | associated interface definition files, plus the scripts used to | |
158 | control compilation and installation of the executable. However, as a | |
159 | special exception, the source code distributed need not include | |
160 | anything that is normally distributed (in either source or binary | |
161 | form) with the major components (compiler, kernel, and so on) of the | |
162 | operating system on which the executable runs, unless that component | |
163 | itself accompanies the executable. | |
164 | ||
165 | If distribution of executable or object code is made by offering | |
166 | access to copy from a designated place, then offering equivalent | |
167 | access to copy the source code from the same place counts as | |
168 | distribution of the source code, even though third parties are not | |
169 | compelled to copy the source along with the object code. | |
170 | ||
171 | 4. You may not copy, modify, sublicense, or distribute the Program | |
172 | except as expressly provided under this License. Any attempt | |
173 | otherwise to copy, modify, sublicense or distribute the Program is | |
174 | void, and will automatically terminate your rights under this License. | |
175 | However, parties who have received copies, or rights, from you under | |
176 | this License will not have their licenses terminated so long as such | |
177 | parties remain in full compliance. | |
178 | ||
179 | 5. You are not required to accept this License, since you have not | |
180 | signed it. However, nothing else grants you permission to modify or | |
181 | distribute the Program or its derivative works. These actions are | |
182 | prohibited by law if you do not accept this License. Therefore, by | |
183 | modifying or distributing the Program (or any work based on the | |
184 | Program), you indicate your acceptance of this License to do so, and | |
185 | all its terms and conditions for copying, distributing or modifying | |
186 | the Program or works based on it. | |
187 | ||
188 | 6. Each time you redistribute the Program (or any work based on the | |
189 | Program), the recipient automatically receives a license from the | |
190 | original licensor to copy, distribute or modify the Program subject to | |
191 | these terms and conditions. You may not impose any further | |
192 | restrictions on the recipients' exercise of the rights granted herein. | |
193 | You are not responsible for enforcing compliance by third parties to | |
194 | this License. | |
195 | ||
196 | 7. If, as a consequence of a court judgment or allegation of patent | |
197 | infringement or for any other reason (not limited to patent issues), | |
198 | conditions are imposed on you (whether by court order, agreement or | |
199 | otherwise) that contradict the conditions of this License, they do not | |
200 | excuse you from the conditions of this License. If you cannot | |
201 | distribute so as to satisfy simultaneously your obligations under this | |
202 | License and any other pertinent obligations, then as a consequence you | |
203 | may not distribute the Program at all. For example, if a patent | |
204 | license would not permit royalty-free redistribution of the Program by | |
205 | all those who receive copies directly or indirectly through you, then | |
206 | the only way you could satisfy both it and this License would be to | |
207 | refrain entirely from distribution of the Program. | |
208 | ||
209 | If any portion of this section is held invalid or unenforceable under | |
210 | any particular circumstance, the balance of the section is intended to | |
211 | apply and the section as a whole is intended to apply in other | |
212 | circumstances. | |
213 | ||
214 | It is not the purpose of this section to induce you to infringe any | |
215 | patents or other property right claims or to contest validity of any | |
216 | such claims; this section has the sole purpose of protecting the | |
217 | integrity of the free software distribution system, which is | |
218 | implemented by public license practices. Many people have made | |
219 | generous contributions to the wide range of software distributed | |
220 | through that system in reliance on consistent application of that | |
221 | system; it is up to the author/donor to decide if he or she is willing | |
222 | to distribute software through any other system and a licensee cannot | |
223 | impose that choice. | |
224 | ||
225 | This section is intended to make thoroughly clear what is believed to | |
226 | be a consequence of the rest of this License. | |
227 | ||
228 | 8. If the distribution and/or use of the Program is restricted in | |
229 | certain countries either by patents or by copyrighted interfaces, the | |
230 | original copyright holder who places the Program under this License | |
231 | may add an explicit geographical distribution limitation excluding | |
232 | those countries, so that distribution is permitted only in or among | |
233 | countries not thus excluded. In such case, this License incorporates | |
234 | the limitation as if written in the body of this License. | |
235 | ||
236 | 9. The Free Software Foundation may publish revised and/or new versions | |
237 | of the General Public License from time to time. Such new versions will | |
238 | be similar in spirit to the present version, but may differ in detail to | |
239 | address new problems or concerns. | |
240 | ||
241 | Each version is given a distinguishing version number. If the Program | |
242 | specifies a version number of this License which applies to it and "any | |
243 | later version", you have the option of following the terms and conditions | |
244 | either of that version or of any later version published by the Free | |
245 | Software Foundation. If the Program does not specify a version number of | |
246 | this License, you may choose any version ever published by the Free Software | |
247 | Foundation. | |
248 | ||
249 | 10. If you wish to incorporate parts of the Program into other free | |
250 | programs whose distribution conditions are different, write to the author | |
251 | to ask for permission. For software which is copyrighted by the Free | |
252 | Software Foundation, write to the Free Software Foundation; we sometimes | |
253 | make exceptions for this. Our decision will be guided by the two goals | |
254 | of preserving the free status of all derivatives of our free software and | |
255 | of promoting the sharing and reuse of software generally. | |
256 | ||
257 | NO WARRANTY | |
258 | ||
259 | 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY | |
260 | FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN | |
261 | OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES | |
262 | PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED | |
263 | OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF | |
264 | MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS | |
265 | TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE | |
266 | PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, | |
267 | REPAIR OR CORRECTION. | |
268 | ||
269 | 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING | |
270 | WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR | |
271 | REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, | |
272 | INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING | |
273 | OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED | |
274 | TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY | |
275 | YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER | |
276 | PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE | |
277 | POSSIBILITY OF SUCH DAMAGES. | |
278 | ||
279 | END OF TERMS AND CONDITIONS | |
280 | ||
281 | How to Apply These Terms to Your New Programs | |
282 | ||
283 | If you develop a new program, and you want it to be of the greatest | |
284 | possible use to the public, the best way to achieve this is to make it | |
285 | free software which everyone can redistribute and change under these terms. | |
286 | ||
287 | To do so, attach the following notices to the program. It is safest | |
288 | to attach them to the start of each source file to most effectively | |
289 | convey the exclusion of warranty; and each file should have at least | |
290 | the "copyright" line and a pointer to where the full notice is found. | |
291 | ||
292 | <one line to give the program's name and a brief idea of what it does.> | |
293 | Copyright (C) <year> <name of author> | |
294 | ||
295 | This program is free software; you can redistribute it and/or modify | |
296 | it under the terms of the GNU General Public License as published by | |
297 | the Free Software Foundation; either version 2 of the License, or | |
298 | (at your option) any later version. | |
299 | ||
300 | This program is distributed in the hope that it will be useful, | |
301 | but WITHOUT ANY WARRANTY; without even the implied warranty of | |
302 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | |
303 | GNU General Public License for more details. | |
304 | ||
305 | You should have received a copy of the GNU General Public License along | |
306 | with this program; if not, write to the Free Software Foundation, Inc., | |
307 | 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. | |
308 | ||
309 | Also add information on how to contact you by electronic and paper mail. | |
310 | ||
311 | If the program is interactive, make it output a short notice like this | |
312 | when it starts in an interactive mode: | |
313 | ||
314 | Gnomovision version 69, Copyright (C) year name of author | |
315 | Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. | |
316 | This is free software, and you are welcome to redistribute it | |
317 | under certain conditions; type `show c' for details. | |
318 | ||
319 | The hypothetical commands `show w' and `show c' should show the appropriate | |
320 | parts of the General Public License. Of course, the commands you use may | |
321 | be called something other than `show w' and `show c'; they could even be | |
322 | mouse-clicks or menu items--whatever suits your program. | |
323 | ||
324 | You should also get your employer (if you work as a programmer) or your | |
325 | school, if any, to sign a "copyright disclaimer" for the program, if | |
326 | necessary. Here is a sample; alter the names: | |
327 | ||
328 | Yoyodyne, Inc., hereby disclaims all copyright interest in the program | |
329 | `Gnomovision' (which makes passes at compilers) written by James Hacker. | |
330 | ||
331 | <signature of Ty Coon>, 1 April 1989 | |
332 | Ty Coon, President of Vice | |
333 | ||
334 | This General Public License does not permit incorporating your program into | |
335 | proprietary programs. If your program is a subroutine library, you may | |
336 | consider it more useful to permit linking proprietary applications with the | |
337 | library. If this is what you want to do, use the GNU Lesser General | |
338 | Public License instead of this License. |
0 | VERSION := 1.5.0 | |
1 | SHELL := /bin/bash | |
2 | SPEC := s3cmd.spec | |
3 | COMMIT := $(shell git rev-parse HEAD) | |
4 | SHORTCOMMIT := $(shell git rev-parse --short=8 HEAD) | |
5 | TARBALL = s3cmd-$(VERSION)-$(SHORTCOMMIT).tar.gz | |
6 | ||
7 | release: | |
8 | python setup.py register sdist upload | |
9 | ||
10 | clean: | |
11 | -rm -rf s3cmd-*.tar.gz *.rpm *~ $(SPEC) | |
12 | -find . -name \*.pyc -exec rm \{\} \; | |
13 | -find . -name \*.pyo -exec rm \{\} \; | |
14 | ||
15 | $(SPEC): $(SPEC).in | |
16 | sed -e 's/##VERSION##/$(VERSION)/' \ | |
17 | -e 's/##COMMIT##/$(COMMIT)/' \ | |
18 | -e 's/##SHORTCOMMIT##/$(SHORTCOMMIT)/' \ | |
19 | $(SPEC).in > $(SPEC) | |
20 | ||
21 | tarball: | |
22 | git archive --format tar --prefix s3cmd-$(COMMIT)/ HEAD | gzip -c > $(TARBALL) | |
23 | ||
24 | # Use older digest algorithms for local rpmbuilds, as EPEL5 and | |
25 | # earlier releases need this. When building using mock for a | |
26 | # particular target, it will use the proper (newer) digests if that | |
27 | # target supports it. | |
28 | rpm: clean tarball $(SPEC) | |
29 | tmp_dir=`mktemp -d` ; \ | |
30 | mkdir -p $${tmp_dir}/{BUILD,RPMS,SRPMS,SPECS,SOURCES} ; \ | |
31 | cp $(TARBALL) $${tmp_dir}/SOURCES ; \ | |
32 | cp $(SPEC) $${tmp_dir}/SPECS ; \ | |
33 | cd $${tmp_dir} > /dev/null 2>&1; \ | |
34 | rpmbuild -ba --define "_topdir $${tmp_dir}" \ | |
35 | --define "_source_filedigest_algorithm 0" \ | |
36 | --define "_binary_filedigest_algorithm 0" \ | |
37 | --define "dist %{nil}" \ | |
38 | SPECS/$(SPEC) ; \ | |
39 | cd - > /dev/null 2>&1; \ | |
40 | cp $${tmp_dir}/RPMS/noarch/* $${tmp_dir}/SRPMS/* . ; \ | |
41 | rm -rf $${tmp_dir} ; \ | |
42 | rpmlint *.rpm *.spec |
0 | s3cmd 1.5.0-rc1 - 2014-06-29 | |
1 | =============== | |
2 | [TODO - extract from: git log --no-merges v1.5.0-beta1..] | |
3 | ||
0 | 4 | s3cmd 1.5.0-beta1 - 2013-12-02 |
1 | 5 | ================= |
2 | 6 | * Brought to you by Matt Domsch and contributors, thanks guys! :)
0 | Metadata-Version: 1.1 | |
1 | Name: s3cmd | |
2 | Version: 1.5.0-rc1 | |
3 | Summary: Command line tool for managing Amazon S3 and CloudFront services | |
4 | Home-page: http://s3tools.org | |
5 | Author: Michal Ludvig | |
6 | Author-email: michal@logix.cz | |
7 | License: GPL version 2 | |
8 | Description: | |
9 | ||
10 | S3cmd lets you copy files from/to Amazon S3 | |
11 | (Simple Storage Service) using an easy-to-use | |
12 | command line client. Supports rsync-like backup, | |
13 | GPG encryption, and more. Also supports management | |
14 | of Amazon's CloudFront content delivery network. | |
15 | ||
16 | ||
17 | Authors: | |
18 | -------- | |
19 | Michal Ludvig <michal@logix.cz> | |
20 | ||
21 | Platform: UNKNOWN | |
22 | Requires: dateutil |
2 | 2 | |
3 | 3 | Author: |
4 | 4 | Michal Ludvig <michal@logix.cz> |
5 | Copyright (c) TGRMN Software - http://www.tgrmn.com - and contributors | |
5 | 6 | |
6 | 7 | S3tools / S3cmd project homepage: |
7 | 8 | http://s3tools.org |
8 | 9 | |
9 | 10 | S3tools / S3cmd mailing lists: |
11 | ||
10 | 12 | * Announcements of new releases: |
11 | 13 | s3tools-announce@lists.sourceforge.net |
12 | 14 | |
15 | 17 | |
16 | 18 | * Bug reports |
17 | 19 | s3tools-bugs@lists.sourceforge.net |
18 | ||
19 | Amazon S3 homepage: | |
20 | http://aws.amazon.com/s3 | |
21 | 20 | |
22 | 21 | !!! |
23 | 22 | !!! Please consult INSTALL file for installation instructions! |
24 | 23 | !!! |
24 | ||
25 | What is S3cmd | |
26 | -------------- | |
27 | S3cmd is a free command line tool and client for uploading, | |
28 | retrieving and managing data in Amazon S3 and other cloud | |
29 | storage service providers that use the S3 protocol, such as | |
30 | Google Cloud Storage or DreamHost DreamObjects. It is best | |
31 | suited for power users who are familiar with command line | |
32 | programs. It is also ideal for batch scripts and automated | |
33 | backup to S3, triggered from cron, etc. | |
34 | ||
35 | S3cmd is written in Python. It's an open source project | |
36 | available under the GNU General Public License v2 (GPLv2) and is free | |
37 | for both commercial and private use. You will only have | |
38 | to pay Amazon for using their storage. | |
39 | ||
40 | Many features and options have been added to S3cmd | |
41 | since its first release in 2008. We recently counted | |
42 | more than 60 command line options, including multipart | |
43 | uploads, encryption, incremental backup, s3 sync, ACL and | |
44 | metadata management, S3 bucket size, bucket policies, and | |
45 | more! | |
25 | 46 | |
26 | 47 | What is Amazon S3 |
27 | 48 | ----------------- |
28 | 49 | Amazon S3 provides a managed internet-accessible storage |
29 | 50 | service where anyone can store any amount of data and |
30 | retrieve it later again. Maximum amount of data in one | |
31 | "object" is 5GB, maximum number of objects is not limited. | |
32 | ||
33 | S3 is a paid service operated by the well known Amazon.com | |
34 | internet book shop. Before storing anything into S3 you | |
35 | must sign up for an "AWS" account (where AWS = Amazon Web | |
36 | Services) to obtain a pair of identifiers: Access Key and | |
37 | Secret Key. You will need to give these keys to S3cmd. | |
51 | retrieve it again later. | |
52 | ||
53 | S3 is a paid service operated by Amazon. Before storing | |
54 | anything into S3 you must sign up for an "AWS" account | |
55 | (where AWS = Amazon Web Services) to obtain a pair of | |
56 | identifiers: Access Key and Secret Key. You will need to | |
57 | give these keys to S3cmd. | |
38 | 58 | Think of them as if they were a username and password for |
39 | 59 | your S3 account. |
40 | 60 | |
41 | Pricing explained | |
42 | ----------------- | |
61 | Amazon S3 pricing explained | |
62 | --------------------------- | |
43 | 63 | At the time of this writing the costs of using S3 are (in USD): |
44 | 64 | |
45 | 65 | $0.15 per GB per month of storage space used |
334 | 354 | |
335 | 355 | For more information refer to: |
336 | 356 | * S3cmd / S3tools homepage at http://s3tools.org |
337 | * Amazon S3 homepage at http://aws.amazon.com/s3 | |
338 | ||
339 | Enjoy! | |
340 | ||
341 | Michal Ludvig | |
342 | * michal@logix.cz | |
343 | * http://www.logix.cz/michal | |
344 | ||
357 | ||
358 | =========================================================================== | |
359 | Copyright (C) 2014 TGRMN Software - http://www.tgrmn.com - and contributors | |
360 | ||
361 | This program is free software; you can redistribute it and/or modify | |
362 | it under the terms of the GNU General Public License as published by | |
363 | the Free Software Foundation; either version 2 of the License, or | |
364 | (at your option) any later version. | |
365 | ||
366 | This program is distributed in the hope that it will be useful, | |
367 | but WITHOUT ANY WARRANTY; without even the implied warranty of | |
368 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | |
369 | GNU General Public License for more details. | |
1 | 1 | ## Author: Michal Ludvig <michal@logix.cz> |
2 | 2 | ## http://www.logix.cz/michal |
3 | 3 | ## License: GPL Version 2 |
4 | ## Copyright: TGRMN Software and contributors | |
4 | 5 | |
5 | 6 | from Utils import getTreeFromXml |
6 | 7 | |
158 | 159 | grantee.permission = permission |
159 | 160 | |
160 | 161 | if name.find('@') > -1: |
161 | grantee.name = grantee.name.lower | |
162 | grantee.name = grantee.name.lower() | |
162 | 163 | grantee.xsi_type = "AmazonCustomerByEmail" |
163 | 164 | grantee.tag = "EmailAddress" |
164 | 165 | elif name.find('http://acs.amazonaws.com/groups/') > -1: |
165 | 166 | grantee.xsi_type = "Group" |
166 | 167 | grantee.tag = "URI" |
167 | 168 | else: |
168 | grantee.name = grantee.name.lower | |
169 | grantee.name = grantee.name.lower() | |
169 | 170 | grantee.xsi_type = "CanonicalUser" |
170 | 171 | grantee.tag = "ID" |
171 | 172 |
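The grantee handling in the hunk above distinguishes three cases (email address, AWS group URI, canonical user ID) and fixes a bug where `.lower` without parentheses stored the bound method instead of the lowercased string. A standalone Python sketch of that classification; the returned tuple layout is illustrative, not the real `Grantee` object:

```python
def classify_grantee(name):
    # Returns (xsi_type, tag, normalized_name), mirroring the three branches
    # in S3/ACL.py. Note the fix in the diff: calling .lower() rather than
    # storing the .lower bound method.
    if "@" in name:
        return ("AmazonCustomerByEmail", "EmailAddress", name.lower())
    if "http://acs.amazonaws.com/groups/" in name:
        return ("Group", "URI", name)
    return ("CanonicalUser", "ID", name.lower())
```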
1 | 1 | ## Author: Michal Ludvig <michal@logix.cz> |
2 | 2 | ## http://www.logix.cz/michal |
3 | 3 | ## License: GPL Version 2 |
4 | ## Copyright: TGRMN Software and contributors | |
4 | 5 | |
5 | 6 | import S3Uri |
6 | 7 | from Exceptions import ParameterError |
1 | 1 | ## Author: Michal Ludvig <michal@logix.cz> |
2 | 2 | ## http://www.logix.cz/michal |
3 | 3 | ## License: GPL Version 2 |
4 | ## Copyright: TGRMN Software and contributors | |
4 | 5 | |
5 | 6 | class BidirMap(object): |
6 | 7 | def __init__(self, **map): |
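Only the `BidirMap` constructor signature is visible here; presumably it keeps forward and reverse dictionaries so lookups work in both directions. A minimal hypothetical sketch (the attribute names `k2v`/`v2k` are assumptions, not taken from the source):

```python
class BidirMap(object):
    # Hypothetical sketch: store the mapping in both directions so that
    # keys and values can each be resolved in O(1).
    def __init__(self, **mapping):
        self.k2v = dict(mapping)
        self.v2k = {v: k for k, v in mapping.items()}

    def __getitem__(self, key):
        return self.k2v[key]
```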
1 | 1 | ## Author: Michal Ludvig <michal@logix.cz> |
2 | 2 | ## http://www.logix.cz/michal |
3 | 3 | ## License: GPL Version 2 |
4 | ## Copyright: TGRMN Software and contributors | |
4 | 5 | |
5 | 6 | import sys |
6 | 7 | import time |
14 | 15 | except ImportError: |
15 | 16 | import elementtree.ElementTree as ET |
16 | 17 | |
18 | from S3 import S3 | |
17 | 19 | from Config import Config |
18 | 20 | from Exceptions import * |
19 | 21 | from Utils import getTreeFromXml, appendXmlTextNode, getDictFromTree, dateS3toPython, sign_string, getBucketFromHostname, getHostnameFromBucket |
279 | 281 | |
280 | 282 | def __str__(self): |
281 | 283 | tree = ET.Element("InvalidationBatch") |
284 | s3 = S3(Config()) | |
282 | 285 | |
283 | 286 | for path in self.paths: |
284 | 287 | if len(path) < 1 or path[0] != "/": |
285 | 288 | path = "/" + path |
286 | appendXmlTextNode("Path", path, tree) | |
289 | appendXmlTextNode("Path", s3.urlencode_string(path), tree) | |
287 | 290 | appendXmlTextNode("CallerReference", self.reference, tree) |
288 | 291 | return ET.tostring(tree) |
289 | 292 | |
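The `__str__` change above makes each `<Path>` pass through `S3.urlencode_string()` before serialization. A self-contained sketch of the InvalidationBatch document being built, without the s3cmd-specific encoding step:

```python
import xml.etree.ElementTree as ET

def invalidation_batch_xml(paths, reference):
    # Builds the InvalidationBatch body as in S3/CloudFront.py; the real code
    # additionally runs each path through S3.urlencode_string().
    tree = ET.Element("InvalidationBatch")
    for path in paths:
        if not path.startswith("/"):
            path = "/" + path
        ET.SubElement(tree, "Path").text = path
    ET.SubElement(tree, "CallerReference").text = reference
    return ET.tostring(tree)
```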
431 | 434 | new_paths = [] |
432 | 435 | default_index_suffix = '/' + default_index_file |
433 | 436 | for path in paths: |
434 | if path.endswith(default_index_suffix) or path == default_index_file: | |
437 | if path.endswith(default_index_suffix) or path == default_index_file: | |
435 | 438 | if invalidate_default_index_on_cf: |
436 | 439 | new_paths.append(path) |
437 | 440 | if invalidate_default_index_root_on_cf: |
1 | 1 | ## Author: Michal Ludvig <michal@logix.cz> |
2 | 2 | ## http://www.logix.cz/michal |
3 | 3 | ## License: GPL Version 2 |
4 | ## Copyright: TGRMN Software and contributors | |
4 | 5 | |
5 | 6 | import logging |
6 | 7 | from logging import debug, info, warning, error |
91 | 92 | debug_exclude = {} |
92 | 93 | debug_include = {} |
93 | 94 | encoding = "utf-8" |
94 | add_content_encoding = True | |
95 | 95 | urlencoding_mode = "normal" |
96 | 96 | log_target_prefix = "" |
97 | 97 | reduced_redundancy = False |
109 | 109 | cache_file = "" |
110 | 110 | add_headers = "" |
111 | 111 | ignore_failed_copy = False |
112 | expiry_days = "" | |
113 | expiry_date = "" | |
114 | expiry_prefix = "" | |
112 | 115 | |
113 | 116 | ## Creating a singleton |
114 | def __new__(self, configfile = None): | |
117 | def __new__(self, configfile = None, access_key=None, secret_key=None): | |
115 | 118 | if self._instance is None: |
116 | 119 | self._instance = object.__new__(self) |
117 | 120 | return self._instance |
118 | 121 | |
119 | def __init__(self, configfile = None): | |
122 | def __init__(self, configfile = None, access_key=None, secret_key=None): | |
120 | 123 | if configfile: |
121 | 124 | try: |
122 | 125 | self.read_config_file(configfile) |
123 | 126 | except IOError, e: |
124 | 127 | if 'AWS_CREDENTIAL_FILE' in os.environ: |
125 | 128 | self.env_config() |
129 | ||
130 | # override these if passed on the command-line | |
131 | if access_key and secret_key: | |
132 | self.access_key = access_key | |
133 | self.secret_key = secret_key | |
134 | ||
126 | 135 | if len(self.access_key)==0: |
127 | self.role_config() | |
136 | env_access_key = os.environ.get("AWS_ACCESS_KEY", None) or os.environ.get("AWS_ACCESS_KEY_ID", None) | |
137 | env_secret_key = os.environ.get("AWS_SECRET_KEY", None) or os.environ.get("AWS_SECRET_ACCESS_KEY", None) | |
138 | if env_access_key: | |
139 | self.access_key = env_access_key | |
140 | self.secret_key = env_secret_key | |
141 | else: | |
142 | self.role_config() | |
128 | 143 | |
129 | 144 | def role_config(self): |
130 | 145 | if sys.version_info[0] * 10 + sys.version_info[1] < 26: |
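The new hunk establishes a precedence order for credentials: explicit command-line keys win, then the `AWS_ACCESS_KEY`/`AWS_ACCESS_KEY_ID` environment variables, and only then the IAM role lookup. A sketch of that resolution (the `env` parameter is added here purely for testability):

```python
import os

def resolve_credentials(access_key=None, secret_key=None, env=None):
    # Precedence as in Config.__init__: command-line keys first, then the
    # environment, otherwise fall back to role_config() in the real code.
    env = os.environ if env is None else env
    if access_key and secret_key:
        return access_key, secret_key
    env_access = env.get("AWS_ACCESS_KEY") or env.get("AWS_ACCESS_KEY_ID")
    env_secret = env.get("AWS_SECRET_KEY") or env.get("AWS_SECRET_ACCESS_KEY")
    if env_access:
        return env_access, env_secret
    return None, None  # signal: try the IAM role next
```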
222 | 237 | def update_option(self, option, value): |
223 | 238 | if value is None: |
224 | 239 | return |
240 | ||
225 | 241 | #### Handle environment reference |
226 | 242 | if str(value).startswith("$"): |
227 | 243 | return self.update_option(option, os.getenv(str(value)[1:])) |
244 | ||
228 | 245 | #### Special treatment of some options |
229 | 246 | ## verbosity must be known to "logging" module |
230 | 247 | if option == "verbosity": |
248 | # support integer verbosities | |
231 | 249 | try: |
232 | setattr(Config, "verbosity", logging._levelNames[value]) | |
233 | except KeyError: | |
234 | error("Config: verbosity level '%s' is not valid" % value) | |
250 | value = int(value) | |
251 | except ValueError, e: | |
252 | try: | |
253 | # otherwise it must be a key known to the logging module | |
254 | value = logging._levelNames[value] | |
255 | except KeyError: | |
256 | error("Config: verbosity level '%s' is not valid" % value) | |
257 | return | |
258 | ||
235 | 259 | ## allow yes/no, true/false, on/off and 1/0 for boolean options |
236 | 260 | elif type(getattr(Config, option)) is type(True): # bool |
237 | 261 | if str(value).lower() in ("true", "yes", "on", "1"): |
238 | setattr(Config, option, True) | |
262 | value = True | |
239 | 263 | elif str(value).lower() in ("false", "no", "off", "0"): |
240 | setattr(Config, option, False) | |
264 | value = False | |
241 | 265 | else: |
242 | 266 | error("Config: value of option '%s' must be Yes or No, not '%s'" % (option, value)) |
267 | return | |
268 | ||
243 | 269 | elif type(getattr(Config, option)) is type(42): # int |
244 | 270 | try: |
245 | setattr(Config, option, int(value)) | |
271 | value = int(value) | |
246 | 272 | except ValueError, e: |
247 | 273 | error("Config: value of option '%s' must be an integer, not '%s'" % (option, value)) |
248 | else: # string | |
249 | setattr(Config, option, value) | |
274 | return | |
275 | ||
276 | setattr(Config, option, value) | |
250 | 277 | |
251 | 278 | class ConfigParser(object): |
252 | 279 | def __init__(self, file, sections = []): |
304 | 331 | def dump(self, section, config): |
305 | 332 | self.stream.write("[%s]\n" % section) |
306 | 333 | for option in config.option_list(): |
307 | self.stream.write("%s = %s\n" % (option, getattr(config, option))) | |
334 | value = getattr(config, option) | |
335 | if option == "verbosity": | |
336 | # we turn level numbers back into strings if possible | |
337 | if isinstance(value, int) and value in logging._levelNames: | |
338 | value = logging._levelNames[value] | |
339 | ||
340 | self.stream.write("%s = %s\n" % (option, value)) | |
308 | 341 | |
309 | 342 | # vim:et:ts=4:sts=4:ai |
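The two verbosity hunks are symmetric: `update_option()` now accepts either an integer or a level name, and `dump()` converts well-known integers back into names when writing the config file. A Python 3 sketch of both directions using `logging.getLevelName` (the patched code relies on the Python 2-only `logging._levelNames` dict):

```python
import logging

def parse_verbosity(value):
    # Accept plain integers ("15") or names known to logging ("DEBUG" -> 10).
    try:
        return int(value)
    except ValueError:
        level = logging.getLevelName(str(value))  # reverse lookup for names
        if isinstance(level, int):
            return level
        raise ValueError("verbosity level %r is not valid" % (value,))

def dump_verbosity(level):
    # Turn well-known numeric levels back into names for the config file;
    # unknown integers are kept as-is.
    name = logging.getLevelName(level)
    return level if name.startswith("Level ") else name
```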
0 | ## Amazon S3 manager | |
1 | ## Author: Michal Ludvig <michal@logix.cz> | |
2 | ## http://www.logix.cz/michal | |
3 | ## License: GPL Version 2 | |
4 | ## Copyright: TGRMN Software and contributors | |
5 | ||
0 | 6 | import httplib |
1 | 7 | from urlparse import urlparse |
2 | 8 | from threading import Semaphore |
33 | 39 | conn = None |
34 | 40 | if cfg.proxy_host != "": |
35 | 41 | if ssl: |
36 | raise ParameterError("use_ssl=True can't be used with proxy") | |
42 | raise ParameterError("use_https=True can't be used with proxy") | |
37 | 43 | conn_id = "proxy://%s:%s" % (cfg.proxy_host, cfg.proxy_port) |
38 | 44 | else: |
39 | 45 | conn_id = "http%s://%s" % (ssl and "s" or "", hostname) |
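The connection-pool keys above can be sketched as a small helper: all proxied traffic shares a single pool entry, otherwise the key is scheme plus hostname, and (as the renamed error message says) HTTPS through a proxy is rejected. The `proxy_port` default here is an illustrative assumption:

```python
def connection_id(hostname, use_https=False, proxy_host="", proxy_port=3128):
    # Mirrors the conn_id computation in S3/ConnMan.py.
    if proxy_host:
        if use_https:
            raise ValueError("use_https=True can't be used with proxy")
        return "proxy://%s:%s" % (proxy_host, proxy_port)
    return "http%s://%s" % ("s" if use_https else "", hostname)
```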
1 | 1 | ## Author: Michal Ludvig <michal@logix.cz> |
2 | 2 | ## http://www.logix.cz/michal |
3 | 3 | ## License: GPL Version 2 |
4 | ## Copyright: TGRMN Software and contributors | |
4 | 5 | |
5 | 6 | from Utils import getTreeFromXml, unicodise, deunicodise |
6 | 7 | from logging import debug, info, warning, error |
44 | 45 | for header in response["headers"]: |
45 | 46 | debug("HttpHeader: %s: %s" % (header, response["headers"][header])) |
46 | 47 | if response.has_key("data") and response["data"]: |
47 | tree = getTreeFromXml(response["data"]) | |
48 | error_node = tree | |
49 | if not error_node.tag == "Error": | |
50 | error_node = tree.find(".//Error") | |
51 | for child in error_node.getchildren(): | |
52 | if child.text != "": | |
53 | debug("ErrorXML: " + child.tag + ": " + repr(child.text)) | |
54 | self.info[child.tag] = child.text | |
48 | try: | |
49 | tree = getTreeFromXml(response["data"]) | |
50 | except ET.ParseError: | |
51 | debug("Not an XML response") | |
52 | else: | |
53 | self.info.update(self.parse_error_xml(tree)) | |
54 | ||
55 | 55 | self.code = self.info["Code"] |
56 | 56 | self.message = self.info["Message"] |
57 | 57 | self.resource = self.info["Resource"] |
62 | 62 | if self.info.has_key("Message"): |
63 | 63 | retval += (u": %s" % self.info["Message"]) |
64 | 64 | return retval |
65 | ||
66 | @staticmethod | |
67 | def parse_error_xml(tree): | |
68 | info = {} | |
69 | error_node = tree | |
70 | if not error_node.tag == "Error": | |
71 | error_node = tree.find(".//Error") | |
72 | for child in error_node.getchildren(): | |
73 | if child.text != "": | |
74 | debug("ErrorXML: " + child.tag + ": " + repr(child.text)) | |
75 | info[child.tag] = child.text | |
76 | ||
77 | return info | |
78 | ||
65 | 79 | |
66 | 80 | class CloudFrontError(S3Error): |
67 | 81 | pass |
0 | # patterned on /usr/include/sysexits.h | |
1 | ||
2 | EX_OK = 0 | |
3 | EX_GENERAL = 1 | |
4 | EX_SOMEFAILED = 2 # some parts of the command succeeded, while others failed | |
5 | EX_USAGE = 64 # The command was used incorrectly (e.g. bad command line syntax) | |
6 | EX_SOFTWARE = 70 # internal software error (e.g. S3 error of unknown specificity) | |
7 | EX_OSERR = 71 # system error (e.g. out of memory) | |
8 | EX_OSFILE = 72 # OS error (e.g. invalid Python version) | |
9 | EX_IOERR = 74 # An error occurred while doing I/O on some file. | |
10 | EX_TEMPFAIL = 75 # temporary failure (S3DownloadError or similar, retry later) | |
11 | EX_NOPERM = 77 # Insufficient permissions to perform the operation on S3 | |
12 | EX_CONFIG = 78 # Configuration file error | |
13 | _EX_SIGNAL = 128 | |
14 | _EX_SIGINT = 2 | |
15 | EX_BREAK = _EX_SIGNAL + _EX_SIGINT # Control-C (KeyboardInterrupt raised) |
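The new ExitCodes module above follows the `sysexits.h` convention, plus the shell convention that a process killed by a signal exits with `128 + signum`. A minimal standalone sketch of the derived `EX_BREAK` value (constants copied from the hunk above, comments mine):

```python
# Sketch of the sysexits.h-style exit codes introduced above.
EX_OK = 0
EX_USAGE = 64        # command used incorrectly (bad command line syntax)
EX_TEMPFAIL = 75     # temporary failure, retry later

# Shells report "killed by signal N" as exit status 128 + N, so a
# KeyboardInterrupt (SIGINT, signal 2) maps to exit status 130.
_EX_SIGNAL = 128
_EX_SIGINT = 2
EX_BREAK = _EX_SIGNAL + _EX_SIGINT
```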
1 | 1 | ## Author: Michal Ludvig <michal@logix.cz> |
2 | 2 | ## http://www.logix.cz/michal |
3 | 3 | ## License: GPL Version 2 |
4 | ## Copyright: TGRMN Software and contributors | |
4 | 5 | |
6 | import logging | |
5 | 7 | from SortedDict import SortedDict |
6 | 8 | import Utils |
9 | import Config | |
10 | ||
11 | zero_length_md5 = "d41d8cd98f00b204e9800998ecf8427e" | |
12 | cfg = Config.Config() | |
7 | 13 | |
8 | 14 | class FileDict(SortedDict): |
9 | 15 | def __init__(self, mapping = {}, ignore_case = True, **kwargs): |
12 | 18 | self.by_md5 = dict() # {md5: set(relative_files)} |
13 | 19 | |
14 | 20 | def record_md5(self, relative_file, md5): |
21 | if md5 is None: return | |
22 | if md5 == zero_length_md5: return | |
15 | 23 | if md5 not in self.by_md5: |
16 | 24 | self.by_md5[md5] = set() |
17 | 25 | self.by_md5[md5].add(relative_file) |
18 | 26 | |
19 | 27 | def find_md5_one(self, md5): |
28 | if md5 is None: return None | |
20 | 29 | try: |
21 | 30 | return list(self.by_md5.get(md5, set()))[0] |
22 | 31 | except: |
28 | 37 | if 'md5' in self[relative_file]: |
29 | 38 | return self[relative_file]['md5'] |
30 | 39 | md5 = self.get_hardlink_md5(relative_file) |
31 | if md5 is None: | |
40 | if md5 is None and 'md5' in cfg.sync_checks: | |
41 | logging.debug(u"doing file I/O to read md5 of %s" % relative_file) | |
32 | 42 | md5 = Utils.hash_file_md5(self[relative_file]['full_name']) |
33 | 43 | self.record_md5(relative_file, md5) |
34 | 44 | self[relative_file]['md5'] = md5 |
35 | 45 | return md5 |
36 | 46 | |
37 | def record_hardlink(self, relative_file, dev, inode, md5): | |
47 | def record_hardlink(self, relative_file, dev, inode, md5, size): | |
48 | if md5 is None: return | |
49 | if size == 0: return # don't record 0-length files | |
38 | 50 | if dev == 0 or inode == 0: return # Windows |
39 | 51 | if dev not in self.hardlinks: |
40 | 52 | self.hardlinks[dev] = dict() |
44 | 56 | |
45 | 57 | def get_hardlink_md5(self, relative_file): |
46 | 58 | md5 = None |
47 | dev = self[relative_file]['dev'] | |
48 | inode = self[relative_file]['inode'] | |
49 | 59 | try: |
60 | dev = self[relative_file]['dev'] | |
61 | inode = self[relative_file]['inode'] | |
50 | 62 | md5 = self.hardlinks[dev][inode]['md5'] |
51 | except: | |
63 | except KeyError: | |
52 | 64 | pass |
53 | 65 | return md5 |
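The hardlink bookkeeping added to FileDict above reuses one computed MD5 for every path sharing a `(device, inode)` pair. A simplified sketch of that lookup, with the class plumbing stripped out (names and values here are illustrative, not the real FileDict API):

```python
# Files with the same (dev, inode) are hardlinks to the same data, so
# an MD5 computed for one of them is valid for all of them.
hardlinks = {}  # {dev: {inode: {'md5': ...}}}

def record_hardlink(dev, inode, md5, size):
    if md5 is None or size == 0:
        return  # zero-length files share one well-known MD5; don't record
    if dev == 0 or inode == 0:
        return  # Windows reports 0 for both; no usable identity
    hardlinks.setdefault(dev, {})[inode] = {'md5': md5}

def get_hardlink_md5(dev, inode):
    try:
        return hardlinks[dev][inode]['md5']
    except KeyError:
        return None  # not seen before: caller falls back to file I/O

record_hardlink(2049, 131072, "0123456789abcdef0123456789abcdef", 42)
```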
1 | 1 | ## Author: Michal Ludvig <michal@logix.cz> |
2 | 2 | ## http://www.logix.cz/michal |
3 | 3 | ## License: GPL Version 2 |
4 | ## Copyright: TGRMN Software and contributors | |
4 | 5 | |
5 | 6 | from S3 import S3 |
6 | 7 | from Config import Config |
19 | 20 | import re |
20 | 21 | import errno |
21 | 22 | |
22 | __all__ = ["fetch_local_list", "fetch_remote_list", "compare_filelists", "filter_exclude_include"] | |
23 | __all__ = ["fetch_local_list", "fetch_remote_list", "compare_filelists"] | |
23 | 24 | |
24 | 25 | def _fswalk_follow_symlinks(path): |
25 | 26 | ''' |
58 | 59 | yield (dirpath, dirnames, filenames) |
59 | 60 | |
60 | 61 | def filter_exclude_include(src_list): |
61 | info(u"Applying --exclude/--include") | |
62 | debug(u"Applying --exclude/--include") | |
62 | 63 | cfg = Config() |
63 | 64 | exclude_list = FileDict(ignore_case = False) |
64 | 65 | for file in src_list.keys(): |
89 | 90 | def handle_exclude_include_walk(root, dirs, files): |
90 | 91 | cfg = Config() |
91 | 92 | copydirs = copy.copy(dirs) |
92 | copyfiles = copy.copy(files) | |
93 | ||
94 | 93 | # exclude dir matches in the current directory |
95 | 94 | # this prevents us from recursing down trees we know we want to ignore |
96 | 95 | for x in copydirs: |
98 | 97 | debug(u"CHECK: %r" % d) |
99 | 98 | excluded = False |
100 | 99 | for r in cfg.exclude: |
100 | if not r.pattern.endswith(u'/'): continue # we only check for directories here | |
101 | 101 | if r.search(d): |
102 | 102 | excluded = True |
103 | 103 | debug(u"EXCL-MATCH: '%s'" % (cfg.debug_exclude[r])) |
105 | 105 | if excluded: |
106 | 106 | ## No need to check for --include if not excluded |
107 | 107 | for r in cfg.include: |
108 | if not r.pattern.endswith(u'/'): continue # we only check for directories here | |
109 | debug(u"INCL-TEST: %s ~ %s" % (d, r.pattern)) | |
108 | 110 | if r.search(d): |
109 | 111 | excluded = False |
110 | 112 | debug(u"INCL-MATCH: '%s'" % (cfg.debug_include[r])) |
117 | 119 | else: |
118 | 120 | debug(u"PASS: %r" % (d)) |
119 | 121 | |
120 | # exclude file matches in the current directory | |
121 | for x in copyfiles: | |
122 | file = os.path.join(root, x) | |
123 | debug(u"CHECK: %r" % file) | |
124 | excluded = False | |
125 | for r in cfg.exclude: | |
126 | if r.search(file): | |
127 | excluded = True | |
128 | debug(u"EXCL-MATCH: '%s'" % (cfg.debug_exclude[r])) | |
129 | break | |
130 | if excluded: | |
131 | ## No need to check for --include if not excluded | |
132 | for r in cfg.include: | |
133 | if r.search(file): | |
134 | excluded = False | |
135 | debug(u"INCL-MATCH: '%s'" % (cfg.debug_include[r])) | |
136 | break | |
137 | if excluded: | |
138 | ## Still excluded - ok, action it | |
139 | debug(u"EXCLUDE: %s" % file) | |
140 | files.remove(x) | |
141 | continue | |
142 | else: | |
143 | debug(u"PASS: %r" % (file)) | |
144 | ||
145 | 122 | |
146 | 123 | def _get_filelist_from_file(cfg, local_path): |
147 | 124 | def _append(d, key, value): |
181 | 158 | return result |
182 | 159 | |
183 | 160 | def fetch_local_list(args, is_src = False, recursive = None): |
161 | ||
162 | def _fetch_local_list_info(loc_list): | |
163 | len_loc_list = len(loc_list) | |
164 | info(u"Running stat() and reading/calculating MD5 values on %d files; this may take some time..." % len_loc_list) | 
165 | counter = 0 | |
166 | for relative_file in loc_list: | |
167 | counter += 1 | |
168 | if counter % 1000 == 0: | |
169 | info(u"[%d/%d]" % (counter, len_loc_list)) | |
170 | ||
171 | if relative_file == '-': continue | |
172 | ||
173 | full_name = loc_list[relative_file]['full_name'] | |
174 | try: | |
175 | sr = os.stat_result(os.stat(full_name)) | |
176 | except OSError, e: | |
177 | if e.errno == errno.ENOENT: | |
178 | # file was removed async to us getting the list | |
179 | continue | |
180 | else: | |
181 | raise | |
182 | loc_list[relative_file].update({ | |
183 | 'size' : sr.st_size, | |
184 | 'mtime' : sr.st_mtime, | |
185 | 'dev' : sr.st_dev, | |
186 | 'inode' : sr.st_ino, | |
187 | 'uid' : sr.st_uid, | |
188 | 'gid' : sr.st_gid, | |
189 | 'sr': sr # save it all, may need it in preserve_attrs_list | |
190 | ## TODO: Possibly more to save here... | |
191 | }) | |
192 | if 'md5' in cfg.sync_checks: | |
193 | md5 = cache.md5(sr.st_dev, sr.st_ino, sr.st_mtime, sr.st_size) | |
194 | if md5 is None: | |
195 | try: | |
196 | md5 = loc_list.get_md5(relative_file) # this does the file I/O | |
197 | except IOError: | |
198 | continue | |
199 | cache.add(sr.st_dev, sr.st_ino, sr.st_mtime, sr.st_size, md5) | |
200 | loc_list.record_hardlink(relative_file, sr.st_dev, sr.st_ino, md5, sr.st_size) | |
201 | ||
202 | ||
184 | 203 | def _get_filelist_local(loc_list, local_uri, cache): |
185 | 204 | info(u"Compiling list of local files...") |
186 | 205 | |
236 | 255 | relative_file = replace_nonprintables(relative_file) |
237 | 256 | if relative_file.startswith('./'): |
238 | 257 | relative_file = relative_file[2:] |
239 | try: | |
240 | sr = os.stat_result(os.stat(full_name)) | |
241 | except OSError, e: | |
242 | if e.errno == errno.ENOENT: | |
243 | # file was removed async to us getting the list | |
244 | continue | |
245 | else: | |
246 | raise | |
247 | 258 | loc_list[relative_file] = { |
248 | 259 | 'full_name_unicode' : unicodise(full_name), |
249 | 260 | 'full_name' : full_name, |
250 | 'size' : sr.st_size, | |
251 | 'mtime' : sr.st_mtime, | |
252 | 'dev' : sr.st_dev, | |
253 | 'inode' : sr.st_ino, | |
254 | 'uid' : sr.st_uid, | |
255 | 'gid' : sr.st_gid, | |
256 | 'sr': sr # save it all, may need it in preserve_attrs_list | |
257 | ## TODO: Possibly more to save here... | |
258 | 261 | } |
259 | if 'md5' in cfg.sync_checks: | |
260 | md5 = cache.md5(sr.st_dev, sr.st_ino, sr.st_mtime, sr.st_size) | |
261 | if md5 is None: | |
262 | try: | |
263 | md5 = loc_list.get_md5(relative_file) # this does the file I/O | |
264 | except IOError: | |
265 | continue | |
266 | cache.add(sr.st_dev, sr.st_ino, sr.st_mtime, sr.st_size, md5) | |
267 | loc_list.record_hardlink(relative_file, sr.st_dev, sr.st_ino, md5) | |
262 | ||
268 | 263 | return loc_list, single_file |
269 | 264 | |
270 | 265 | def _maintain_cache(cache, local_list): |
317 | 312 | if len(local_list) > 1: |
318 | 313 | single_file = False |
319 | 314 | |
315 | local_list, exclude_list = filter_exclude_include(local_list) | |
316 | _fetch_local_list_info(local_list) | |
320 | 317 | _maintain_cache(cache, local_list) |
321 | ||
322 | return local_list, single_file | |
323 | ||
324 | def fetch_remote_list(args, require_attribs = False, recursive = None): | |
318 | return local_list, single_file, exclude_list | |
319 | ||
320 | def fetch_remote_list(args, require_attribs = False, recursive = None, uri_params = {}): | |
325 | 321 | def _get_remote_attribs(uri, remote_item): |
326 | 322 | response = S3(cfg).object_info(uri) |
327 | 323 | remote_item.update({ |
353 | 349 | ## { 'xyz/blah.txt' : {} } |
354 | 350 | |
355 | 351 | info(u"Retrieving list of remote files for %s ..." % remote_uri) |
352 | empty_fname_re = re.compile(r'\A\s*\Z') | |
356 | 353 | |
357 | 354 | s3 = S3(Config()) |
358 | response = s3.bucket_list(remote_uri.bucket(), prefix = remote_uri.object(), recursive = recursive) | |
355 | response = s3.bucket_list(remote_uri.bucket(), prefix = remote_uri.object(), | |
356 | recursive = recursive, uri_params = uri_params) | |
359 | 357 | |
360 | 358 | rem_base_original = rem_base = remote_uri.object() |
361 | 359 | remote_uri_original = remote_uri |
375 | 373 | else: |
376 | 374 | key = object['Key'][rem_base_len:] ## Beware - this may be '' if object['Key']==rem_base !! |
377 | 375 | object_uri_str = remote_uri.uri() + key |
376 | if empty_fname_re.match(key): | |
377 | # Objects may exist on S3 with empty names (''), which don't map so well to common filesystems. | |
378 | warning(u"Empty object name on S3 found, ignoring.") | |
379 | continue | |
378 | 380 | rem_list[key] = { |
379 | 381 | 'size' : int(object['Size']), |
380 | 382 | 'timestamp' : dateS3toUnix(object['LastModified']), ## Sadly it's upload time, not our lastmod time :-( |
411 | 413 | |
412 | 414 | if recursive: |
413 | 415 | for uri in remote_uris: |
414 | objectlist = _get_filelist_remote(uri) | |
416 | objectlist = _get_filelist_remote(uri, recursive = True) | |
415 | 417 | for key in objectlist: |
416 | 418 | remote_list[key] = objectlist[key] |
417 | 419 | remote_list.record_md5(key, objectlist.get_md5(key)) |
418 | 420 | else: |
419 | 421 | for uri in remote_uris: |
420 | uri_str = str(uri) | |
422 | uri_str = unicode(uri) | |
421 | 423 | ## Wildcards used in remote URI? |
422 | 424 | ## If yes we'll need a bucket listing... |
423 | 425 | wildcard_split_result = re.split("\*|\?", uri_str, maxsplit=1) |
448 | 450 | md5 = remote_item.get('md5') |
449 | 451 | if md5: |
450 | 452 | remote_list.record_md5(key, md5) |
451 | return remote_list | |
453 | ||
454 | remote_list, exclude_list = filter_exclude_include(remote_list) | |
455 | return remote_list, exclude_list | |
452 | 456 | |
453 | 457 | |
454 | 458 | def compare_filelists(src_list, dst_list, src_remote, dst_remote, delay_updates = False): |
7 | 7 | from logging import debug, info, warning, error |
8 | 8 | from Utils import getTextFromXml, getTreeFromXml, formatSize, unicodise, calculateChecksum, parseNodes |
9 | 9 | from Exceptions import S3UploadError |
10 | from collections import defaultdict | |
11 | 10 | |
12 | 11 | class MultiPartUpload(object): |
13 | 12 | |
27 | 26 | multipart_response = self.s3.list_multipart(uri, upload_id) |
28 | 27 | tree = getTreeFromXml(multipart_response['data']) |
29 | 28 | |
30 | parts = defaultdict(lambda: None) | |
29 | parts = dict() | |
31 | 30 | for elem in parseNodes(tree): |
32 | 31 | try: |
33 | 32 | parts[int(elem['PartNumber'])] = {'checksum': elem['ETag'], 'size': elem['Size']} |
92 | 91 | else: |
93 | 92 | debug("MultiPart: Uploading from %s" % (self.file.name)) |
94 | 93 | |
95 | remote_statuses = defaultdict(lambda: None) | |
94 | remote_statuses = dict() | |
96 | 95 | if self.s3.config.put_continue: |
97 | 96 | remote_statuses = self.get_parts_information(self.uri, self.upload_id) |
98 | 97 | |
108 | 107 | 'extra' : "[part %d of %d, %s]" % (seq, nr_parts, "%d%sB" % formatSize(current_chunk_size, human_readable = True)) |
109 | 108 | } |
110 | 109 | try: |
111 | self.upload_part(seq, offset, current_chunk_size, labels, remote_status = remote_statuses[seq]) | |
110 | self.upload_part(seq, offset, current_chunk_size, labels, remote_status = remote_statuses.get(seq)) | |
112 | 111 | except: |
113 | 112 | error(u"\nUpload of '%s' part %d failed. Use\n %s abortmp %s %s\nto abort the upload, or\n %s --upload-id %s put ...\nto continue the upload." |
114 | 113 | % (self.file.name, seq, sys.argv[0], self.uri, self.upload_id, sys.argv[0], self.upload_id)) |
127 | 126 | if len(buffer) == 0: # EOF |
128 | 127 | break |
129 | 128 | try: |
130 | self.upload_part(seq, offset, current_chunk_size, labels, buffer, remote_status = remote_statuses[seq]) | |
129 | self.upload_part(seq, offset, current_chunk_size, labels, buffer, remote_status = remote_statuses.get(seq)) | |
131 | 130 | except: |
132 | 131 | error(u"\nUpload of '%s' part %d failed. Use\n %s abortmp %s %s\nto abort, or\n %s --upload-id %s put ...\nto continue the upload." |
133 | 132 | % (self.file.name, seq, self.uri, sys.argv[0], self.upload_id, sys.argv[0], self.upload_id)) |
0 | ## Amazon S3 manager | |
1 | ## Author: Michal Ludvig <michal@logix.cz> | |
2 | ## http://www.logix.cz/michal | |
3 | ## License: GPL Version 2 | |
4 | ## Copyright: TGRMN Software and contributors | |
5 | ||
0 | 6 | package = "s3cmd" |
1 | version = "1.5.0-beta1" | |
7 | version = "1.5.0-rc1" | |
2 | 8 | url = "http://s3tools.org" |
3 | 9 | license = "GPL version 2" |
4 | 10 | short_description = "Command line tool for managing Amazon S3 and CloudFront services" |
1 | 1 | ## Author: Michal Ludvig <michal@logix.cz> |
2 | 2 | ## http://www.logix.cz/michal |
3 | 3 | ## License: GPL Version 2 |
4 | ## Copyright: TGRMN Software and contributors | |
4 | 5 | |
5 | 6 | import sys |
6 | 7 | import datetime |
1 | 1 | ## Author: Michal Ludvig <michal@logix.cz> |
2 | 2 | ## http://www.logix.cz/michal |
3 | 3 | ## License: GPL Version 2 |
4 | ## Copyright: TGRMN Software and contributors | |
4 | 5 | |
5 | 6 | import sys |
6 | 7 | import os, os.path |
7 | 8 | import time |
8 | 9 | import errno |
10 | import base64 | |
9 | 11 | import httplib |
10 | 12 | import logging |
11 | 13 | import mimetypes |
12 | 14 | import re |
15 | from xml.sax import saxutils | |
16 | import base64 | |
13 | 17 | from logging import debug, info, warning, error |
14 | 18 | from stat import ST_SIZE |
15 | 19 | |
30 | 34 | from ConnMan import ConnMan |
31 | 35 | |
32 | 36 | try: |
33 | import magic, gzip | |
37 | import magic | |
34 | 38 | try: |
35 | 39 | ## https://github.com/ahupp/python-magic |
36 | 40 | magic_ = magic.Magic(mime=True) |
37 | 41 | def mime_magic_file(file): |
38 | 42 | return magic_.from_file(file) |
39 | def mime_magic_buffer(buffer): | |
40 | return magic_.from_buffer(buffer) | |
41 | 43 | except TypeError: |
42 | 44 | ## http://pypi.python.org/pypi/filemagic |
43 | 45 | try: |
44 | 46 | magic_ = magic.Magic(flags=magic.MAGIC_MIME) |
45 | 47 | def mime_magic_file(file): |
46 | 48 | return magic_.id_filename(file) |
47 | def mime_magic_buffer(buffer): | |
48 | return magic_.id_buffer(buffer) | |
49 | 49 | except TypeError: |
50 | 50 | ## file-5.11 built-in python bindings |
51 | 51 | magic_ = magic.open(magic.MAGIC_MIME) |
52 | 52 | magic_.load() |
53 | 53 | def mime_magic_file(file): |
54 | 54 | return magic_.file(file) |
55 | def mime_magic_buffer(buffer): | |
56 | return magic_.buffer(buffer) | |
57 | ||
58 | 55 | except AttributeError: |
59 | 56 | ## Older python-magic versions |
60 | 57 | magic_ = magic.open(magic.MAGIC_MIME) |
61 | 58 | magic_.load() |
62 | 59 | def mime_magic_file(file): |
63 | 60 | return magic_.file(file) |
64 | def mime_magic_buffer(buffer): | |
65 | return magic_.buffer(buffer) | |
66 | ||
67 | def mime_magic(file): | |
68 | type = mime_magic_file(file) | |
69 | if type != "application/x-gzip; charset=binary": | |
70 | return (type, None) | |
71 | else: | |
72 | return (mime_magic_buffer(gzip.open(file).read(8192)), 'gzip') | |
73 | 61 | |
74 | 62 | except ImportError, e: |
75 | 63 | if str(e).find("magic") >= 0: |
78 | 66 | magic_message = "Module python-magic can't be used (%s)." % e.message |
79 | 67 | magic_message += " Guessing MIME types based on file extensions." |
80 | 68 | magic_warned = False |
81 | def mime_magic(file): | |
69 | def mime_magic_file(file): | |
82 | 70 | global magic_warned |
83 | 71 | if (not magic_warned): |
84 | 72 | warning(magic_message) |
85 | 73 | magic_warned = True |
86 | return mimetypes.guess_type(file) | |
74 | return mimetypes.guess_type(file)[0] | |
75 | ||
76 | def mime_magic(file): | |
77 | # we can't tell if a given copy of the magic library will take a | |
78 | # filesystem-encoded string or a unicode value, so try first | |
79 | # with the encoded string, then unicode. | |
80 | def _mime_magic(file): | |
81 | magictype = None | |
82 | try: | |
83 | magictype = mime_magic_file(file) | |
84 | except UnicodeDecodeError: | |
85 | magictype = mime_magic_file(unicodise(file)) | |
86 | return magictype | |
87 | ||
88 | result = _mime_magic(file) | |
89 | if result is not None: | |
90 | if isinstance(result, str): | |
91 | if ';' in result: | |
92 | mimetype, charset = result.split(';', 1) | 
93 | charset = charset.strip()[len('charset='):] | 
94 | result = (mimetype, charset) | |
95 | else: | |
96 | result = (result, None) | |
97 | if result is None: | |
98 | result = (None, None) | |
99 | return result | |
87 | 100 | |
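The normalization inside `mime_magic()` above turns a libmagic-style string into a `(mimetype, charset)` tuple. A standalone sketch of that split (function name is mine; the example input strings are the formats libmagic typically emits):

```python
def split_mime(result):
    # Normalize e.g. "text/plain; charset=us-ascii" -> ("text/plain", "us-ascii").
    if result is None:
        return (None, None)
    if ';' in result:
        mimetype, charset = result.split(';', 1)
        charset = charset.strip()
        if charset.startswith('charset='):
            charset = charset[len('charset='):]
        return (mimetype, charset)
    # No parameters: MIME type only, charset unknown
    return (result, None)
```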
88 | 101 | __all__ = [] |
89 | 102 | class S3Request(object): |
160 | 173 | SERVICE = 0x0100, |
161 | 174 | BUCKET = 0x0200, |
162 | 175 | OBJECT = 0x0400, |
176 | BATCH = 0x0800, | |
163 | 177 | MASK = 0x0700, |
164 | 178 | ) |
165 | 179 | |
174 | 188 | OBJECT_HEAD = targets["OBJECT"] | http_methods["HEAD"], |
175 | 189 | OBJECT_DELETE = targets["OBJECT"] | http_methods["DELETE"], |
176 | 190 | OBJECT_POST = targets["OBJECT"] | http_methods["POST"], |
191 | BATCH_DELETE = targets["BATCH"] | http_methods["POST"], | |
177 | 192 | ) |
178 | 193 | |
179 | 194 | codes = { |
222 | 237 | response["list"] = getListFromXml(response["data"], "Bucket") |
223 | 238 | return response |
224 | 239 | |
225 | def bucket_list(self, bucket, prefix = None, recursive = None): | |
240 | def bucket_list(self, bucket, prefix = None, recursive = None, uri_params = {}): | |
226 | 241 | def _list_truncated(data): |
227 | 242 | ## <IsTruncated> can either be "true" or "false" or be missing completely |
228 | 243 | is_truncated = getTextFromXml(data, ".//IsTruncated") or "false" |
234 | 249 | def _get_common_prefixes(data): |
235 | 250 | return getListFromXml(data, "CommonPrefixes") |
236 | 251 | |
237 | uri_params = {} | |
252 | uri_params = uri_params.copy() | |
238 | 253 | truncated = True |
239 | 254 | list = [] |
240 | 255 | prefixes = [] |
366 | 381 | |
367 | 382 | return response |
368 | 383 | |
384 | def expiration_info(self, uri, bucket_location = None): | |
385 | headers = SortedDict(ignore_case = True) | |
386 | bucket = uri.bucket() | |
387 | body = "" | |
388 | ||
389 | request = self.create_request("BUCKET_LIST", bucket = bucket, extra="?lifecycle") | |
390 | try: | |
391 | response = self.send_request(request, body) | |
392 | response['prefix'] = getTextFromXml(response['data'], ".//Rule//Prefix") | |
393 | response['date'] = getTextFromXml(response['data'], ".//Rule//Expiration//Date") | |
394 | response['days'] = getTextFromXml(response['data'], ".//Rule//Expiration//Days") | |
395 | return response | |
396 | except S3Error, e: | |
397 | if e.status == 404: | |
398 | debug("Could not get /?lifecycle - lifecycle probably not configured for this bucket") | |
399 | return None | |
400 | raise | |
401 | ||
402 | def expiration_set(self, uri, bucket_location = None): | |
403 | if self.config.expiry_date and self.config.expiry_days: | |
404 | raise ParameterError("Expect either --expiry-days or --expiry-date, not both") | 
405 | if not (self.config.expiry_date or self.config.expiry_days): | |
406 | if self.config.expiry_prefix: | |
407 | raise ParameterError("Expect either --expiry-days or --expiry-date") | 
408 | debug("del bucket lifecycle") | |
409 | bucket = uri.bucket() | |
410 | body = "" | |
411 | request = self.create_request("BUCKET_DELETE", bucket = bucket, extra="?lifecycle") | |
412 | else: | |
413 | request, body = self._expiration_set(uri) | |
414 | debug("About to send request '%s' with body '%s'" % (request, body)) | |
415 | response = self.send_request(request, body) | |
416 | debug("Received response '%s'" % (response)) | |
417 | return response | |
418 | ||
419 | def _expiration_set(self, uri): | |
420 | debug("put bucket lifecycle") | |
421 | body = '<LifecycleConfiguration>' | |
422 | body += ' <Rule>' | |
423 | body += (' <Prefix>%s</Prefix>' % self.config.expiry_prefix) | |
424 | body += (' <Status>Enabled</Status>') | |
425 | body += (' <Expiration>') | |
426 | if self.config.expiry_date: | |
427 | body += (' <Date>%s</Date>' % self.config.expiry_date) | |
428 | elif self.config.expiry_days: | |
429 | body += (' <Days>%s</Days>' % self.config.expiry_days) | |
430 | body += (' </Expiration>') | |
431 | body += ' </Rule>' | |
432 | body += '</LifecycleConfiguration>' | |
433 | ||
434 | headers = SortedDict(ignore_case = True) | |
435 | headers['content-md5'] = compute_content_md5(body) | |
436 | bucket = uri.bucket() | |
437 | request = self.create_request("BUCKET_CREATE", bucket = bucket, headers = headers, extra="?lifecycle") | |
438 | return (request, body) | |
439 | ||
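`_expiration_set()` above assembles the lifecycle rule as a flat XML string. A sketch of that body builder, extracted as a free function (the prefix/days/date values below are illustrative):

```python
# Build a LifecycleConfiguration body like the one PUT to ?lifecycle
# above. Exactly one of date/days should be given, matching the
# --expiry-date / --expiry-days options.
def lifecycle_body(prefix, days=None, date=None):
    body = '<LifecycleConfiguration><Rule>'
    body += '<Prefix>%s</Prefix>' % prefix
    body += '<Status>Enabled</Status>'
    body += '<Expiration>'
    if date:
        body += '<Date>%s</Date>' % date
    elif days:
        body += '<Days>%s</Days>' % days
    body += '</Expiration>'
    body += '</Rule></LifecycleConfiguration>'
    return body
```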
369 | 440 | def add_encoding(self, filename, content_type): |
370 | 441 | if content_type.find("charset=") != -1: |
371 | 442 | return False |
409 | 480 | |
410 | 481 | ## MIME-type handling |
411 | 482 | content_type = self.config.mime_type |
412 | content_encoding = None | |
483 | content_charset = None | |
413 | 484 | if filename != "-" and not content_type and self.config.guess_mime_type: |
414 | 485 | if self.config.use_mime_magic: |
415 | (content_type, content_encoding) = mime_magic(filename) | |
416 | else: | |
417 | (content_type, content_encoding) = mimetypes.guess_type(filename) | |
486 | (content_type, content_charset) = mime_magic(filename) | |
487 | else: | |
488 | (content_type, content_charset) = mimetypes.guess_type(filename) | |
418 | 489 | if not content_type: |
419 | 490 | content_type = self.config.default_mime_type |
491 | if not content_charset: | |
492 | content_charset = self.config.encoding.upper() | |
420 | 493 | |
421 | 494 | ## add charset to content type |
422 | if self.add_encoding(filename, content_type): | |
423 | content_type = content_type + "; charset=" + self.config.encoding.upper() | |
495 | if self.add_encoding(filename, content_type) and content_charset is not None: | |
496 | content_type = content_type + "; charset=" + content_charset | |
424 | 497 | |
425 | 498 | headers["content-type"] = content_type |
426 | if content_encoding is not None and self.config.add_content_encoding: | |
427 | headers["content-encoding"] = content_encoding | |
428 | 499 | |
429 | 500 | ## Other Amazon S3 attributes |
430 | 501 | if self.config.acl_public: |
483 | 554 | response = self.recv_file(request, stream, labels, start_position) |
484 | 555 | return response |
485 | 556 | |
557 | def object_batch_delete(self, remote_list): | |
558 | def compose_batch_del_xml(bucket, key_list): | |
559 | body = u"<?xml version=\"1.0\" encoding=\"UTF-8\"?><Delete>" | |
560 | for key in key_list: | |
561 | uri = S3Uri(key) | |
562 | if uri.type != "s3": | |
563 | raise ValueError("Expected URI type 's3', got '%s'" % uri.type) | 
564 | if not uri.has_object(): | |
565 | raise ValueError("URI '%s' has no object" % key) | |
566 | if uri.bucket() != bucket: | |
567 | raise ValueError("The batch should contain keys from the same bucket") | |
568 | object = saxutils.escape(uri.object()) | |
569 | body += u"<Object><Key>%s</Key></Object>" % object | |
570 | body += u"</Delete>" | |
571 | body = body.encode('utf-8') | |
572 | return body | |
573 | ||
574 | batch = [remote_list[item]['object_uri_str'] for item in remote_list] | |
575 | if len(batch) == 0: | |
576 | raise ValueError("Key list is empty") | |
577 | bucket = S3Uri(batch[0]).bucket() | |
578 | request_body = compose_batch_del_xml(bucket, batch) | |
579 | md5_hash = md5() | |
580 | md5_hash.update(request_body) | |
581 | headers = {'content-md5': base64.b64encode(md5_hash.digest())} | |
582 | request = self.create_request("BATCH_DELETE", bucket = bucket, extra = '?delete', headers = headers) | |
583 | response = self.send_request(request, request_body) | |
584 | return response | |
585 | ||
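The `compose_batch_del_xml()` helper above builds the multi-object Delete payload, XML-escaping each key. A self-contained sketch of that composition with the bucket check removed and illustrative keys:

```python
from xml.sax import saxutils

# Compose the body of a multi-object Delete request: one <Object><Key>
# element per key, with XML metacharacters in key names escaped.
def compose_batch_del_xml(keys):
    body = u"<?xml version=\"1.0\" encoding=\"UTF-8\"?><Delete>"
    for key in keys:
        body += u"<Object><Key>%s</Key></Object>" % saxutils.escape(key)
    body += u"</Delete>"
    return body.encode('utf-8')

payload = compose_batch_del_xml([u"a.txt", u"dir/b&c.txt"])
```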
486 | 586 | def object_delete(self, uri): |
487 | 587 | if uri.type != "s3": |
488 | 588 | raise ValueError("Expected URI type 's3', got '%s'" % uri.type) |
489 | 589 | request = self.create_request("OBJECT_DELETE", uri = uri) |
490 | 590 | response = self.send_request(request) |
491 | 591 | return response |
492 | ||
592 | ||
493 | 593 | def object_restore(self, uri): |
494 | 594 | if uri.type != "s3": |
495 | 595 | raise ValueError("Expected URI type 's3', got '%s'" % uri.type) |
515 | 615 | headers["x-amz-acl"] = "public-read" |
516 | 616 | if self.config.reduced_redundancy: |
517 | 617 | headers["x-amz-storage-class"] = "REDUCED_REDUNDANCY" |
518 | # if extra_headers: | |
519 | # headers.update(extra_headers) | |
520 | 618 | |
521 | 619 | ## Set server side encryption |
522 | 620 | if self.config.server_side_encryption: |
523 | 621 | headers["x-amz-server-side-encryption"] = "AES256" |
524 | 622 | |
623 | if extra_headers: | |
624 | headers['x-amz-metadata-directive'] = "REPLACE" | |
625 | headers.update(extra_headers) | |
525 | 626 | request = self.create_request("OBJECT_PUT", uri = dst_uri, headers = headers) |
526 | 627 | response = self.send_request(request) |
527 | 628 | return response |
580 | 681 | def delete_policy(self, uri): |
581 | 682 | request = self.create_request("BUCKET_DELETE", uri = uri, extra = "?policy") |
582 | 683 | debug(u"delete_policy(%s)" % uri) |
684 | response = self.send_request(request) | |
685 | return response | |
686 | ||
687 | def set_lifecycle_policy(self, uri, policy): | |
688 | headers = SortedDict(ignore_case = True) | |
689 | headers['content-md5'] = compute_content_md5(policy) | |
690 | request = self.create_request("BUCKET_CREATE", uri = uri, | |
691 | extra = "?lifecycle", headers=headers) | |
692 | body = policy | |
693 | debug(u"set_lifecycle_policy(%s): policy-xml: %s" % (uri, body)) | |
694 | request.sign() | |
695 | response = self.send_request(request, body=body) | |
696 | return response | |
697 | ||
698 | def delete_lifecycle_policy(self, uri): | |
699 | request = self.create_request("BUCKET_DELETE", uri = uri, extra = "?lifecycle") | |
700 | debug(u"delete_lifecycle_policy(%s)" % uri) | |
583 | 701 | response = self.send_request(request) |
584 | 702 | return response |
585 | 703 | |
741 | 859 | ConnMan.put(conn) |
742 | 860 | except ParameterError, e: |
743 | 861 | raise |
862 | except (IOError, OSError), e: | |
863 | raise | |
744 | 864 | except Exception, e: |
745 | 865 | if retries: |
746 | 866 | warning("Retrying failed request: %s (%s)" % (resource['uri'], e)) |
948 | 1068 | debug("Response: %s" % response) |
949 | 1069 | except ParameterError, e: |
950 | 1070 | raise |
1071 | except (IOError, OSError), e: | |
1072 | raise | |
951 | 1073 | except Exception, e: |
952 | 1074 | if self.config.progress_meter: |
953 | 1075 | progress.done("failed") |
1000 | 1122 | if self.config.progress_meter: |
1001 | 1123 | progress.update(delta_position = len(data)) |
1002 | 1124 | ConnMan.put(conn) |
1125 | except (IOError, OSError), e: | |
1126 | raise | |
1003 | 1127 | except Exception, e: |
1004 | 1128 | if self.config.progress_meter: |
1005 | 1129 | progress.done("failed") |
1036 | 1160 | response["md5"] = response["headers"]["etag"] |
1037 | 1161 | |
1038 | 1162 | md5_hash = response["headers"]["etag"] |
1039 | try: | |
1040 | md5_hash = response["s3cmd-attrs"]["md5"] | |
1041 | except KeyError: | |
1042 | pass | |
1163 | if not 'x-amz-meta-s3tools-gpgenc' in response["headers"]: | |
1164 | # we can't trust our stored md5 because we | |
1165 | # encrypted the file after calculating it but before | |
1166 | # uploading it. | |
1167 | try: | |
1168 | md5_hash = response["s3cmd-attrs"]["md5"] | |
1169 | except KeyError: | |
1170 | pass | |
1043 | 1171 | |
1044 | 1172 | response["md5match"] = md5_hash.find(response["md5"]) >= 0 |
1045 | 1173 | response["elapsed"] = timestamp_end - timestamp_start |
1061 | 1189 | key, val = attr.split(":") |
1062 | 1190 | attrs[key] = val |
1063 | 1191 | return attrs |
1192 | ||
1193 | def compute_content_md5(body): | |
1194 | m = md5(body) | |
1195 | base64md5 = base64.encodestring(m.digest()) | |
1196 | if base64md5[-1] == '\n': | |
1197 | base64md5 = base64md5[0:-1] | |
1198 | return base64md5 | |
1064 | 1199 | # vim:et:ts=4:sts=4:ai |
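The new `compute_content_md5()` above base64-encodes the raw MD5 digest and trims the trailing newline that `base64.encodestring` appends. A sketch of the same computation using `base64.b64encode`, which produces no trailing newline in the first place (the bytes/str handling is mine, added so the sketch runs on modern Pythons):

```python
import base64
from hashlib import md5

# Compute the Content-MD5 header value for a request body:
# base64 of the 16-byte raw MD5 digest (not the hex digest).
def compute_content_md5(body):
    if isinstance(body, str):
        body = body.encode('utf-8')
    return base64.b64encode(md5(body).digest()).decode('ascii')
```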
1 | 1 | ## Author: Michal Ludvig <michal@logix.cz> |
2 | 2 | ## http://www.logix.cz/michal |
3 | 3 | ## License: GPL Version 2 |
4 | ## Copyright: TGRMN Software and contributors | |
4 | 5 | |
5 | 6 | import os |
6 | 7 | import re |
73 | 74 | return bool(self._object) |
74 | 75 | |
75 | 76 | def uri(self): |
76 | return "/".join(["s3:/", self._bucket, self._object]) | |
77 | return u"/".join([u"s3:/", self._bucket, self._object]) | |
77 | 78 | |
78 | 79 | def is_dns_compatible(self): |
79 | 80 | return check_bucket_name_dns_conformity(self._bucket) |
1 | 1 | ## Author: Michal Ludvig <michal@logix.cz> |
2 | 2 | ## http://www.logix.cz/michal |
3 | 3 | ## License: GPL Version 2 |
4 | ## Copyright: TGRMN Software and contributors | |
4 | 5 | |
5 | 6 | from BidirMap import BidirMap |
6 | 7 | import Utils |
45 | 46 | def __iter__(self): |
46 | 47 | return SortedDictIterator(self, self.keys()) |
47 | 48 | |
49 | def __getslice__(self, i=0, j=-1): | |
50 | keys = self.keys()[i:j] | |
51 | r = SortedDict(ignore_case = self.ignore_case) | |
52 | for k in keys: | |
53 | r[k] = self[k] | |
54 | return r | |
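The __getslice__ hook added above exists only on Python 2; on Python 3 a slice arrives through __getitem__ instead. A standalone sketch of the same slicing behaviour (the class below is illustrative, not the real S3/SortedDict.py implementation):

```python
class SortedDict(dict):
    """Dict whose keys() are returned sorted case-insensitively and
    which supports slicing into a new SortedDict, mirroring the
    __getslice__ addition in the patch above."""

    def keys(self):
        return sorted(dict.keys(self), key=str.lower)

    def __getitem__(self, key):
        if isinstance(key, slice):
            # Build a new SortedDict from the sliced, sorted key list.
            r = SortedDict()
            for k in self.keys()[key]:
                r[k] = dict.__getitem__(self, k)
            return r
        return dict.__getitem__(self, key)

d = SortedDict(b=2, A=1, c=3)
print(d[:2].keys())  # ['A', 'b']
```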
48 | 55 | |
49 | 56 | |
50 | 57 | if __name__ == "__main__": |
1 | 1 | ## Author: Michal Ludvig <michal@logix.cz> |
2 | 2 | ## http://www.logix.cz/michal |
3 | 3 | ## License: GPL Version 2 |
4 | ## Copyright: TGRMN Software and contributors | |
4 | 5 | |
5 | 6 | import datetime |
6 | 7 | import os |
14 | 15 | import base64 |
15 | 16 | import errno |
16 | 17 | import urllib |
17 | ||
18 | from calendar import timegm | |
18 | 19 | from logging import debug, info, warning, error |
19 | ||
20 | from ExitCodes import EX_OSFILE | |
21 | try: | |
22 | import dateutil.parser | |
23 | except ImportError: | |
24 | sys.stderr.write(u""" | |
25 | !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! | |
26 | ImportError trying to import dateutil.parser. | |
27 | Please install the python dateutil module: | |
28 | $ sudo apt-get install python-dateutil | |
29 | or | |
30 | $ sudo yum install python-dateutil | |
31 | or | |
32 | $ pip install python-dateutil | |
33 | !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! | |
34 | """) | |
35 | sys.stderr.flush() | |
36 | sys.exit(EX_OSFILE) | |
20 | 37 | |
21 | 38 | import Config |
22 | 39 | import Exceptions |
75 | 92 | except ExpatError, e: |
76 | 93 | error(e) |
77 | 94 | raise Exceptions.ParameterError("Bucket contains invalid filenames. Please run: s3cmd fixbucket s3://your-bucket/") |
95 | except Exception, e: | |
96 | error(e) | |
97 | error(xml) | |
98 | raise | |
99 | ||
78 | 100 | __all__.append("getTreeFromXml") |
79 | 101 | |
80 | 102 | def getListFromXml(xml, node): |
132 | 154 | __all__.append("appendXmlTextNode") |
133 | 155 | |
134 | 156 | def dateS3toPython(date): |
135 | date = re.compile("(\.\d*)?Z").sub(".000Z", date) | |
136 | return time.strptime(date, "%Y-%m-%dT%H:%M:%S.000Z") | |
157 | # Reset milliseconds to 000 | |
158 | date = re.compile('\.[0-9]*(?:[Z\\-\\+]*?)').sub(".000", date) | |
159 | return dateutil.parser.parse(date, fuzzy=True) | |
137 | 160 | __all__.append("dateS3toPython") |
138 | 161 | |
139 | 162 | def dateS3toUnix(date): |
140 | ## FIXME: This should be timezone-aware. | |
141 | ## Currently the argument to strptime() is GMT but mktime() | |
142 | ## treats it as "localtime". Anyway... | |
143 | return time.mktime(dateS3toPython(date)) | |
163 | ## NOTE: This is timezone-aware and returns the timestamp relative to GMT | 
164 | return timegm(dateS3toPython(date).utctimetuple()) | |
144 | 165 | __all__.append("dateS3toUnix") |
145 | 166 | |
146 | 167 | def dateRFC822toPython(date): |
147 | return rfc822.parsedate(date) | |
168 | return dateutil.parser.parse(date, fuzzy=True) | |
148 | 169 | __all__.append("dateRFC822toPython") |
149 | 170 | |
150 | 171 | def dateRFC822toUnix(date): |
151 | return time.mktime(dateRFC822toPython(date)) | |
172 | return timegm(dateRFC822toPython(date).utctimetuple()) | |
152 | 173 | __all__.append("dateRFC822toUnix") |
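The switch from time.mktime() to calendar.timegm() above matters because S3 timestamps are GMT: mktime() interprets a struct_time in the local timezone, while timegm() interprets it as UTC. A small standalone illustration:

```python
import calendar
import time

# S3 returns timestamps in GMT; converting with time.mktime() would
# shift the result by the local UTC offset. calendar.timegm() treats
# the struct_time as UTC, which is what the patched dateS3toUnix()
# relies on.
ts = time.strptime("2011-04-11T12:00:00.000Z", "%Y-%m-%dT%H:%M:%S.000Z")
epoch = calendar.timegm(ts)
print(epoch)  # 1302523200, regardless of the local timezone
```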
153 | 174 | |
154 | 175 | def formatSize(size, human_readable = False, floating_point = False): |
165 | 186 | __all__.append("formatSize") |
166 | 187 | |
167 | 188 | def formatDateTime(s3timestamp): |
168 | try: | |
169 | import pytz | |
170 | timezone = pytz.timezone(os.environ.get('TZ', 'UTC')) | |
171 | tz = pytz.timezone('UTC') | |
172 | ## Can't unpack args and follow that with kwargs in python 2.5 | |
173 | ## So we pass them all as kwargs | |
174 | params = zip(('year', 'month', 'day', 'hour', 'minute', 'second', 'tzinfo'), | |
175 | dateS3toPython(s3timestamp)[0:6] + (tz,)) | |
176 | params = dict(params) | |
177 | utc_dt = datetime.datetime(**params) | |
178 | dt_object = utc_dt.astimezone(timezone) | |
179 | except ImportError: | |
180 | dt_object = datetime.datetime(*dateS3toPython(s3timestamp)[0:6]) | |
181 | return dt_object.strftime("%Y-%m-%d %H:%M") | |
189 | date_obj = dateutil.parser.parse(s3timestamp, fuzzy=True) | |
190 | return date_obj.strftime("%Y-%m-%d %H:%M") | |
182 | 191 | __all__.append("formatDateTime") |
183 | 192 | |
184 | 193 | def convertTupleListToDict(list): |
363 | 372 | """Shared implementation of sign_url methods. Takes a hash of 'bucket', 'object' and 'expiry' as args.""" |
364 | 373 | parms['expiry']=time_to_epoch(parms['expiry']) |
365 | 374 | parms['access_key']=Config.Config().access_key |
375 | parms['host_base']=Config.Config().host_base | |
366 | 376 | debug("Expiry interpreted as epoch time %s", parms['expiry']) |
367 | 377 | signtext = 'GET\n\n\n%(expiry)d\n/%(bucket)s/%(object)s' % parms |
368 | 378 | debug("Signing plaintext: %r", signtext) |
369 | 379 | parms['sig'] = urllib.quote_plus(sign_string(signtext)) |
370 | 380 | debug("Urlencoded signature: %s", parms['sig']) |
371 | return "http://%(bucket)s.s3.amazonaws.com/%(object)s?AWSAccessKeyId=%(access_key)s&Expires=%(expiry)d&Signature=%(sig)s" % parms | |
381 | return "http://%(bucket)s.%(host_base)s/%(object)s?AWSAccessKeyId=%(access_key)s&Expires=%(expiry)d&Signature=%(sig)s" % parms | |
372 | 382 | |
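The sign_url_base() hunk above builds a legacy AWS signature-version-2 query-string-authenticated URL: an HMAC-SHA1 over a canonical string, base64-encoded and URL-quoted. A self-contained Python 3 sketch of that scheme (the function name and credentials below are illustrative, not the actual s3cmd API):

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_url_v2(bucket, obj, expiry_epoch, access_key, secret_key,
                host_base="s3.amazonaws.com"):
    """Sketch of AWS signature-v2 query-string authentication.

    The canonical string has the same shape as the signtext in the
    patch: method, two blank header slots, expiry, and the resource.
    """
    signtext = "GET\n\n\n%d\n/%s/%s" % (expiry_epoch, bucket, obj)
    sig = base64.b64encode(
        hmac.new(secret_key.encode(), signtext.encode(), hashlib.sha1).digest()
    ).decode("ascii")
    return "http://%s.%s/%s?AWSAccessKeyId=%s&Expires=%d&Signature=%s" % (
        bucket, host_base, obj, access_key, expiry_epoch,
        urllib.parse.quote_plus(sig))
```

Note how the patch parametrizes host_base instead of hard-coding s3.amazonaws.com, so signed URLs also work against non-AWS endpoints.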
373 | 383 | def time_to_epoch(t): |
374 | 384 | """Convert time specified in a variety of forms into UNIX epoch time. |
477 | 487 | __all__.append("calculateChecksum") |
478 | 488 | |
479 | 489 | |
480 | # Deal with the fact that pwd and grp modules don't exist for Windos | |
490 | # Deal with the fact that pwd and grp modules don't exist for Windows | |
481 | 491 | try: |
482 | 492 | import pwd |
483 | 493 | def getpwuid_username(uid): |
484 | 494 | """returns a username from the password database for the given uid""" |
485 | 495 | return pwd.getpwuid(uid).pw_name |
486 | 496 | except ImportError: |
497 | import getpass | |
487 | 498 | def getpwuid_username(uid): |
488 | 499 | return getpass.getuser() |
489 | 500 | __all__.append("getpwuid_username") |
0 | TODO list for s3cmd project | |
1 | =========================== | |
2 | ||
3 | - Before 1.0.0 (or asap after 1.0.0) | |
4 | - Make 'sync s3://bkt/some-filename local/other-filename' work | |
5 | (at the moment it'll always download). | |
6 | - Enable --exclude for [ls]. | |
7 | - Allow changing /tmp to somewhere else | 
8 | - With --guess-mime use 'magic' module if available. | |
9 | - Support --preserve for [put] and [get]. Update manpage. | |
10 | - Don't let --continue fail if the file is already fully downloaded. | |
11 | - Option --mime-type should set mime type with 'cp' and 'mv'. | |
12 | If possible --guess-mime-type should do as well. | |
13 | - Make upload throttling configurable. | |
14 | - Allow removing 'DefaultRootObject' from CloudFront distributions. | |
15 | - Get s3://bucket/non-existent creates empty local file 'non-existent' | |
16 | - Add 'geturl' command, both Unicode and urlencoded output. | |
17 | - Add a command for generating "Query String Authentication" URLs. | |
18 | - Support --acl-grant (together with --acl-public/private) for [put] and [sync] | |
19 | - Filter 's3cmd ls' output by --bucket-location= | |
20 | ||
21 | - After 1.0.0 | |
22 | - Sync must backup non-files as well. At least directories, | |
23 | symlinks and device nodes. | |
24 | - Speed up upload / download with multiple threads. | |
25 | (see http://blog.50projects.com/p/s3cmd-modifications.html) | |
26 | - Sync should be able to update metadata (UID, timestamps, etc) | 
27 | if only these change (i.e. same content, different metainfo). | |
28 | - If GPG fails error() and exit. If un-GPG fails save the | |
29 | file with .gpg extension. | |
30 | - Keep backup files remotely on put/sync-to if requested | |
31 | (move the old 'object' to e.g. 'object~' and only then upload | |
32 | the new one). Could be more advanced to keep, say, last 5 | |
33 | copies, etc. | |
34 | - Memory consumption on very large upload sets is terribly high. | |
35 | - Implement per-bucket (or per-regexp?) default settings. For | |
36 | example regarding ACLs, encryption, etc. | |
37 | ||
38 | - Implement GPG for sync | |
39 | (it's not that easy since it won't be easy to compare | |
40 | the encrypted-remote-object size with local file. | |
41 | either we can store the metadata in a dedicated file | |
42 | where we face a risk of inconsistencies, or we'll store | |
43 | the metadata encrypted in each object header where we'll | |
44 | have to do a large number of object/HEAD requests. Tough | 
45 | call). | |
46 | Or we can only compare local timestamps with remote object | |
47 | timestamps. If the local one is older we'll *assume* it | |
48 | hasn't been changed. But what to do about remote2local sync? | |
49 | ||
50 | - Keep man page up to date and write some more documentation | |
51 | - Yeah, right ;-) |
0 | s3cmd (1.5.0~rc1-1) experimental; urgency=low | |
1 | ||
2 | * Upstream release candidate. | |
3 | ||
4 | -- Mikhail Gusarov <dottedmag@debian.org> Fri, 04 Jul 2014 12:17:16 +0200 | |
5 | ||
0 | 6 | s3cmd (1.5.0~20140213-1) experimental; urgency=low |
1 | 7 | |
2 | 8 | * New upstream snapshot (targeting experimental): |
1 | 1 | # 'Cache-Control' expansion as macro |
2 | 2 | --- a/s3cmd.1 |
3 | 3 | +++ b/s3cmd.1 |
4 | @@ -79,7 +79,7 @@ | |
4 | @@ -88,7 +88,7 @@ | |
5 | 5 | s3cmd \fBaccesslog\fR \fIs3://BUCKET\fR |
6 | 6 | Enable/disable bucket access logging |
7 | 7 | .TP |
10 | 10 | Sign arbitrary string using the secret key |
11 | 11 | .TP |
12 | 12 | s3cmd \fBsignurl\fR \fIs3://BUCKET/OBJECT expiry_epoch\fR |
13 | @@ -138,7 +138,7 @@ | |
13 | @@ -156,7 +156,7 @@ | |
14 | 14 | .TP |
15 | 15 | \fB\-\-configure\fR |
16 | 16 | Invoke interactive (re)configuration tool. Optionally |
19 | 19 | to a specific bucket instead of attempting to list |
20 | 20 | them all. |
21 | 21 | .TP |
22 | @@ -258,29 +258,29 @@ | |
22 | @@ -280,29 +280,29 @@ | |
23 | 23 | from sync |
24 | 24 | .TP |
25 | 25 | \fB\-\-exclude\-from\fR=FILE |
55 | 55 | .TP |
56 | 56 | \fB\-\-ignore\-failed\-copy\fR |
57 | 57 | Don't exit unsuccessfully because of missing keys |
58 | @@ -311,9 +311,9 @@ | |
58 | @@ -333,9 +333,9 @@ | |
59 | 59 | default is binary/octet-stream. |
60 | 60 | .TP |
61 | 61 | \fB\-M\fR, \fB\-\-guess\-mime\-type\fR |
68 | 68 | .TP |
69 | 69 | \fB\-\-no\-guess\-mime\-type\fR |
70 | 70 | Don't guess MIME-type and use the default type |
71 | @@ -323,13 +323,13 @@ | |
71 | @@ -345,13 +345,13 @@ | |
72 | 72 | Don't use mime magic when guessing MIME-type. |
73 | 73 | .TP |
74 | 74 | \fB\-m\fR MIME/TYPE, \fB\-\-mime\-type\fR=MIME/TYPE |
75 | 75 | -Force MIME-type. Override both \fB--default-mime-type\fR and |
76 | 76 | -\fB--guess-mime-type\fR. |
77 | +Force MIME\-type. Override both \fB\-\-default-mime-type\fR and | |
78 | +\fB\-\-guess-mime-type\fR. | |
77 | +Force MIME\-type. Override both \fB\-\-default\-mime\-type\fR | |
78 | +and \fB\-\-guess\-mime\-type\fR. | |
79 | 79 | .TP |
80 | 80 | \fB\-\-add\-header\fR=NAME:VALUE |
81 | 81 | Add a given HTTP header to the upload request. Can be |
82 | 82 | used multiple times. For instance set 'Expires' or |
83 | -'Cache-Control' headers (or both) using this options | |
84 | +\&'Cache-Control' headers (or both) using this options | |
85 | if you like. | |
83 | -'Cache-Control' headers (or both) using this option. | |
84 | +\&'Cache-Control' headers (or both) using this option. | |
86 | 85 | .TP |
87 | 86 | \fB\-\-server\-side\-encryption\fR |
87 | Specifies that server-side encryption will be used |
0 | 0 | version=3 |
1 | opts=uversionmangle=s/-(beta|pre|rc)/~$1/;s/-(speedup)/~$1/ http://sf.net/s3tools/s3cmd-(.*)\.tar\.gz | |
1 | opts=uversionmangle=s/-(alpha|beta|pre|rc)/~$1/;s/-(speedup)/~$1/ http://sf.net/s3tools/s3cmd-(.*)\.tar\.gz |
0 | #!/usr/bin/perl | |
1 | ||
2 | # Format s3cmd.1 manpage | |
3 | # Usage: | |
4 | # s3cmd --help | format-manpage.pl > s3cmd.1 | |
5 | ||
6 | use strict; | |
7 | ||
8 | my $commands = ""; | |
9 | my $cfcommands = ""; | |
10 | my $wscommands = ""; | |
11 | my $options = ""; | |
12 | ||
13 | while (<>) { | |
14 | if (/^Commands:/) { | |
15 | while (<>) { | |
16 | last if (/^\s*$/); | |
17 | my ($desc, $cmd, $cmdline); | |
18 | ($desc = $_) =~ s/^\s*(.*?)\s*$/$1/; | |
19 | ($cmdline = <>) =~ s/^\s*s3cmd (.*?) (.*?)\s*$/s3cmd \\fB$1\\fR \\fI$2\\fR/; | |
20 | $cmd = $1; | |
21 | if ($cmd =~ /^cf/) { | |
22 | $cfcommands .= ".TP\n$cmdline\n$desc\n"; | |
23 | } elsif ($cmd =~ /^ws/) { | |
24 | $wscommands .= ".TP\n$cmdline\n$desc\n"; | |
25 | } else { | |
26 | $commands .= ".TP\n$cmdline\n$desc\n"; | |
27 | } | |
28 | } | |
29 | } | |
30 | if (/^Options:/) { | |
31 | my ($opt, $desc); | |
32 | while (<>) { | |
33 | last if (/^\s*$/); | |
34 | $_ =~ s/(.*?)\s*$/$1/; | |
35 | $desc = ""; | |
36 | $opt = ""; | |
37 | if (/^ (-.*)/) { | |
38 | $opt = $1; | |
39 | if ($opt =~ / /) { | |
40 | ($opt, $desc) = split(/\s\s+/, $opt, 2); | |
41 | } | |
42 | $opt =~ s/(-[^ ,=\.]+)/\\fB$1\\fR/g; | |
43 | $opt =~ s/-/\\-/g; | |
44 | $options .= ".TP\n$opt\n"; | |
45 | } else { | |
46 | $_ =~ s/\s*(.*?)\s*$/$1/; | |
47 | $_ =~ s/(--[^ ,=\.]+)/\\fB$1\\fR/g; | |
48 | $desc .= $_; | |
49 | } | |
50 | if ($desc) { | |
51 | $options .= "$desc\n"; | |
52 | } | |
53 | } | |
54 | } | |
55 | } | |
56 | print " | |
57 | .\\\" !!! IMPORTANT: This file is generated from s3cmd --help output using format-manpage.pl | |
58 | .\\\" !!! Do your changes either in s3cmd file or in 'format-manpage.pl' otherwise | |
59 | .\\\" !!! they will be overwritten! | |
60 | ||
61 | .TH s3cmd 1 | |
62 | .SH NAME | |
63 | s3cmd \\- tool for managing Amazon S3 storage space and Amazon CloudFront content delivery network | |
64 | .SH SYNOPSIS | |
65 | .B s3cmd | |
66 | [\\fIOPTIONS\\fR] \\fICOMMAND\\fR [\\fIPARAMETERS\\fR] | |
67 | .SH DESCRIPTION | |
68 | .PP | |
69 | .B s3cmd | |
70 | is a command line client for copying files to/from | |
71 | Amazon S3 (Simple Storage Service) and performing other | |
72 | related tasks, for instance creating and removing buckets, | |
73 | listing objects, etc. | |
74 | ||
75 | .SH COMMANDS | |
76 | .PP | |
77 | .B s3cmd | |
78 | can do several \\fIactions\\fR specified by the following \\fIcommands\\fR. | |
79 | $commands | |
80 | ||
81 | .PP | |
82 | Commands for static WebSites configuration | |
83 | $wscommands | |
84 | ||
85 | .PP | |
86 | Commands for CloudFront management | |
87 | $cfcommands | |
88 | ||
89 | .SH OPTIONS | |
90 | .PP | |
91 | Some of the options listed below can have their default | 
92 | values set in | |
93 | .B s3cmd | |
94 | config file (by default \$HOME/.s3cmd). As it's a simple text file | |
95 | feel free to open it with your favorite text editor and do any | |
96 | changes you like. | |
97 | $options | |
98 | ||
99 | .SH EXAMPLES | |
100 | One of the most powerful commands of \\fIs3cmd\\fR is \\fBs3cmd sync\\fR used for | |
101 | synchronising complete directory trees to or from remote S3 storage. To some extent | |
102 | \\fBs3cmd put\\fR and \\fBs3cmd get\\fR share a similar behaviour with \\fBsync\\fR. | |
103 | .PP | |
104 | Basic usage common in backup scenarios is as simple as: | |
105 | .nf | |
106 | s3cmd sync /local/path/ s3://test-bucket/backup/ | |
107 | .fi | |
108 | .PP | |
109 | This command will find all files under /local/path directory and copy them | |
110 | to corresponding paths under s3://test-bucket/backup on the remote side. | |
111 | For example: | |
112 | .nf | |
113 | /local/path/\\fBfile1.ext\\fR \\-> s3://bucket/backup/\\fBfile1.ext\\fR | |
114 | /local/path/\\fBdir123/file2.bin\\fR \\-> s3://bucket/backup/\\fBdir123/file2.bin\\fR | |
115 | .fi | |
116 | .PP | |
117 | However if the local path doesn't end with a slash the last directory's name | |
118 | is used on the remote side as well. Compare these with the previous example: | |
119 | .nf | |
120 | s3cmd sync /local/path s3://test-bucket/backup/ | |
121 | .fi | |
122 | will sync: | |
123 | .nf | |
124 | /local/\\fBpath/file1.ext\\fR \\-> s3://bucket/backup/\\fBpath/file1.ext\\fR | |
125 | /local/\\fBpath/dir123/file2.bin\\fR \\-> s3://bucket/backup/\\fBpath/dir123/file2.bin\\fR | |
126 | .fi | |
127 | .PP | |
128 | To retrieve the files back from S3 use inverted syntax: | |
129 | .nf | |
130 | s3cmd sync s3://test-bucket/backup/ /tmp/restore/ | |
131 | .fi | |
132 | that will download files: | |
133 | .nf | |
134 | s3://bucket/backup/\\fBfile1.ext\\fR \\-> /tmp/restore/\\fBfile1.ext\\fR | |
135 | s3://bucket/backup/\\fBdir123/file2.bin\\fR \\-> /tmp/restore/\\fBdir123/file2.bin\\fR | |
136 | .fi | |
137 | .PP | |
138 | Without the trailing slash on source the behaviour is similar to | |
139 | what has been demonstrated with upload: | |
140 | .nf | |
141 | s3cmd sync s3://test-bucket/backup /tmp/restore/ | |
142 | .fi | |
143 | will download the files as: | |
144 | .nf | |
145 | s3://bucket/\\fBbackup/file1.ext\\fR \\-> /tmp/restore/\\fBbackup/file1.ext\\fR | |
146 | s3://bucket/\\fBbackup/dir123/file2.bin\\fR \\-> /tmp/restore/\\fBbackup/dir123/file2.bin\\fR | |
147 | .fi | |
148 | .PP | |
149 | All source file names, the bold ones above, are matched against \\fBexclude\\fR | |
150 | rules and those that match are then re\\-checked against \\fBinclude\\fR rules to see | |
151 | whether they should be excluded or kept in the source list. | |
152 | .PP | |
153 | For the purpose of \\fB\\-\\-exclude\\fR and \\fB\\-\\-include\\fR matching only the | |
154 | bold file names above are used. For instance only \\fBpath/file1.ext\\fR is tested | |
155 | against the patterns, not \\fI/local/\\fBpath/file1.ext\\fR | |
156 | .PP | |
157 | Both \\fB\\-\\-exclude\\fR and \\fB\\-\\-include\\fR work with shell-style wildcards (a.k.a. GLOB). | |
158 | For greater flexibility s3cmd provides regular-expression versions of the two exclude options | 
159 | named \\fB\\-\\-rexclude\\fR and \\fB\\-\\-rinclude\\fR. | |
160 | The options with ...\\fB\\-from\\fR suffix (eg \\-\\-rinclude\\-from) expect a filename as | |
161 | an argument. Each line of such a file is treated as one pattern. | |
162 | .PP | |
163 | There is only one set of patterns built from all \\fB\\-\\-(r)exclude(\\-from)\\fR options | |
164 | and similarly for include variant. Any file excluded with eg \\-\\-exclude can | |
165 | be put back with a pattern found in \\-\\-rinclude\\-from list. | |
166 | .PP | |
167 | Run s3cmd with \\fB\\-\\-dry\\-run\\fR to verify that your rules work as expected. | |
168 | Use together with \\fB\\-\\-debug\\fR to get detailed information | 
169 | about matching file names against exclude and include rules. | |
170 | .PP | |
171 | For example, to exclude all files with the \".jpg\" extension except those beginning with a number, use: | 
172 | .PP | |
173 | \\-\\-exclude '*.jpg' \\-\\-rinclude '[0-9].*\\.jpg' | |
174 | .SH SEE ALSO | |
175 | For the most up to date list of options run | |
176 | .B s3cmd \\-\\-help | |
177 | .br | |
178 | For more info about usage, examples and other related info visit the project homepage at | 
179 | .br | |
180 | .B http://s3tools.org | |
181 | .SH DONATIONS | |
182 | Please consider a donation if you have found s3cmd useful: | |
183 | .br | |
184 | .B http://s3tools.org/donate | |
185 | .SH AUTHOR | |
186 | Written by Michal Ludvig <mludvig\@logix.net.nz> and 15+ contributors | |
187 | .SH CONTACT, SUPPORT | |
188 | Preferred way to get support is our mailing list: | |
189 | .I s3tools\\-general\@lists.sourceforge.net | |
190 | .SH REPORTING BUGS | |
191 | Report bugs to | |
192 | .I s3tools\\-bugs\@lists.sourceforge.net | |
193 | .SH COPYRIGHT | |
194 | Copyright \\(co 2007,2008,2009,2010,2011,2012 Michal Ludvig <http://www.logix.cz/michal> | |
195 | .br | |
196 | This is free software. You may redistribute copies of it under the terms of | |
197 | the GNU General Public License version 2 <http://www.gnu.org/licenses/gpl.html>. | |
198 | There is NO WARRANTY, to the extent permitted by law. | |
199 | "; |
0 | # Additional magic for common web file types | |
1 | ||
2 | 0 string/b {\ " JSON data | |
3 | !:mime application/json | |
4 | 0 string/b {\ } JSON data | |
5 | !:mime application/json | |
6 | 0 string/b [ JSON data | |
7 | !:mime application/json | |
8 | ||
9 | 0 search/4000 function | |
10 | >&0 search/32/b )\ { JavaScript program | |
11 | !:mime application/javascript | |
12 | ||
13 | 0 search/4000 @media CSS stylesheet | |
14 | !:mime text/css | |
15 | 0 search/4000 @import CSS stylesheet | |
16 | !:mime text/css | |
17 | 0 search/4000 @namespace CSS stylesheet | |
18 | !:mime text/css | |
19 | 0 search/4000/b {\ background CSS stylesheet | |
20 | !:mime text/css | |
21 | 0 search/4000/b {\ border CSS stylesheet | |
22 | !:mime text/css | |
23 | 0 search/4000/b {\ bottom CSS stylesheet | |
24 | !:mime text/css | |
25 | 0 search/4000/b {\ color CSS stylesheet | |
26 | !:mime text/css | |
27 | 0 search/4000/b {\ cursor CSS stylesheet | |
28 | !:mime text/css | |
29 | 0 search/4000/b {\ direction CSS stylesheet | |
30 | !:mime text/css | |
31 | 0 search/4000/b {\ display CSS stylesheet | |
32 | !:mime text/css | |
33 | 0 search/4000/b {\ float CSS stylesheet | |
34 | !:mime text/css | |
35 | 0 search/4000/b {\ font CSS stylesheet | |
36 | !:mime text/css | |
37 | 0 search/4000/b {\ height CSS stylesheet | |
38 | !:mime text/css | |
39 | 0 search/4000/b {\ left CSS stylesheet | |
40 | !:mime text/css | |
41 | 0 search/4000/b {\ line- CSS stylesheet | |
42 | !:mime text/css | |
43 | 0 search/4000/b {\ margin CSS stylesheet | |
44 | !:mime text/css | |
45 | 0 search/4000/b {\ padding CSS stylesheet | |
46 | !:mime text/css | |
47 | 0 search/4000/b {\ position CSS stylesheet | |
48 | !:mime text/css | |
49 | 0 search/4000/b {\ right CSS stylesheet | |
50 | !:mime text/css | |
51 | 0 search/4000/b {\ text- CSS stylesheet | |
52 | !:mime text/css | |
53 | 0 search/4000/b {\ top CSS stylesheet | |
54 | !:mime text/css | |
55 | 0 search/4000/b {\ width CSS stylesheet | |
56 | !:mime text/css | |
57 | 0 search/4000/b {\ visibility CSS stylesheet | |
58 | !:mime text/css | |
59 | 0 search/4000/b {\ -moz- CSS stylesheet | |
60 | !:mime text/css | |
61 | 0 search/4000/b {\ -webkit- CSS stylesheet | |
62 | !:mime text/css |
0 | #!/usr/bin/env python | |
1 | # -*- coding=utf-8 -*- | |
2 | ||
3 | ## Amazon S3cmd - testsuite | |
4 | ## Author: Michal Ludvig <michal@logix.cz> | |
5 | ## http://www.logix.cz/michal | |
6 | ## License: GPL Version 2 | |
7 | ||
8 | import sys | |
9 | import os | |
10 | import re | |
11 | from subprocess import Popen, PIPE, STDOUT | |
12 | import locale | |
13 | import getpass | |
14 | ||
15 | count_pass = 0 | |
16 | count_fail = 0 | |
17 | count_skip = 0 | |
18 | ||
19 | test_counter = 0 | |
20 | run_tests = [] | |
21 | exclude_tests = [] | |
22 | ||
23 | verbose = False | |
24 | ||
25 | if os.name == "posix": | |
26 | have_wget = True | |
27 | elif os.name == "nt": | |
28 | have_wget = False | |
29 | else: | |
30 | print "Unknown platform: %s" % os.name | |
31 | sys.exit(1) | |
32 | ||
33 | ## Unpack testsuite/ directory | |
34 | if not os.path.isdir('testsuite') and os.path.isfile('testsuite.tar.gz'): | |
35 | os.system("tar -xz -f testsuite.tar.gz") | |
36 | if not os.path.isdir('testsuite'): | |
37 | print "Something went wrong while unpacking testsuite.tar.gz" | |
38 | sys.exit(1) | |
39 | ||
40 | os.system("tar -xf testsuite/checksum.tar -C testsuite") | |
41 | if not os.path.isfile('testsuite/checksum/cksum33.txt'): | |
42 | print "Something went wrong while unpacking testsuite/checksum.tar" | 
43 | sys.exit(1) | |
44 | ||
45 | ## Fix up permissions for permission-denied tests | |
46 | os.chmod("testsuite/permission-tests/permission-denied-dir", 0444) | |
47 | os.chmod("testsuite/permission-tests/permission-denied.txt", 0000) | |
48 | ||
49 | ## Patterns for Unicode tests | |
50 | patterns = {} | |
51 | patterns['UTF-8'] = u"ŪņЇЌœđЗ/☺ unicode € rocks ™" | |
52 | patterns['GBK'] = u"12月31日/1-特色條目" | |
53 | ||
54 | encoding = locale.getpreferredencoding() | |
55 | if not encoding: | |
56 | print "Guessing current system encoding failed. Consider setting the $LANG variable." | 
57 | sys.exit(1) | |
58 | else: | |
59 | print "System encoding: " + encoding | |
60 | ||
61 | have_encoding = os.path.isdir('testsuite/encodings/' + encoding) | |
62 | if not have_encoding and os.path.isfile('testsuite/encodings/%s.tar.gz' % encoding): | |
63 | os.system("tar xvz -C testsuite/encodings -f testsuite/encodings/%s.tar.gz" % encoding) | |
64 | have_encoding = os.path.isdir('testsuite/encodings/' + encoding) | |
65 | ||
66 | if have_encoding: | |
67 | #enc_base_remote = "%s/xyz/%s/" % (pbucket(1), encoding) | |
68 | enc_pattern = patterns[encoding] | |
69 | else: | |
70 | print encoding + " specific files not found." | |
71 | ||
72 | if not os.path.isdir('testsuite/crappy-file-name'): | |
73 | os.system("tar xvz -C testsuite -f testsuite/crappy-file-name.tar.gz") | |
74 | # TODO: also unpack if the tarball is newer than the directory timestamp | |
75 | # for instance when a new version was pulled from SVN. | |
76 | ||
77 | def test(label, cmd_args = [], retcode = 0, must_find = [], must_not_find = [], must_find_re = [], must_not_find_re = []): | |
78 | def command_output(): | |
79 | print "----" | |
80 | print " ".join([arg.find(" ")>=0 and "'%s'" % arg or arg for arg in cmd_args]) | |
81 | print "----" | |
82 | print stdout | |
83 | print "----" | |
84 | ||
85 | def failure(message = ""): | |
86 | global count_fail | |
87 | if message: | |
88 | message = " (%r)" % message | |
89 | print "\x1b[31;1mFAIL%s\x1b[0m" % (message) | |
90 | count_fail += 1 | |
91 | command_output() | |
92 | #return 1 | |
93 | sys.exit(1) | |
94 | def success(message = ""): | |
95 | global count_pass | |
96 | if message: | |
97 | message = " (%r)" % message | |
98 | print "\x1b[32;1mOK\x1b[0m%s" % (message) | |
99 | count_pass += 1 | |
100 | if verbose: | |
101 | command_output() | |
102 | return 0 | |
103 | def skip(message = ""): | |
104 | global count_skip | |
105 | if message: | |
106 | message = " (%r)" % message | |
107 | print "\x1b[33;1mSKIP\x1b[0m%s" % (message) | |
108 | count_skip += 1 | |
109 | return 0 | |
110 | def compile_list(_list, regexps = False): | |
111 | if regexps == False: | |
112 | _list = [re.escape(item.encode(encoding, "replace")) for item in _list] | |
113 | ||
114 | return [re.compile(item, re.MULTILINE) for item in _list] | |
115 | ||
116 | global test_counter | |
117 | test_counter += 1 | |
118 | print ("%3d %s " % (test_counter, label)).ljust(30, "."), | |
119 | sys.stdout.flush() | |
120 | ||
121 | if run_tests.count(test_counter) == 0 or exclude_tests.count(test_counter) > 0: | |
122 | return skip() | |
123 | ||
124 | if not cmd_args: | |
125 | return skip() | |
126 | ||
127 | p = Popen(cmd_args, stdout = PIPE, stderr = STDOUT, universal_newlines = True) | |
128 | stdout, stderr = p.communicate() | |
129 | if retcode != p.returncode: | |
130 | return failure("retcode: %d, expected: %d" % (p.returncode, retcode)) | |
131 | ||
132 | if type(must_find) not in [ list, tuple ]: must_find = [must_find] | |
133 | if type(must_find_re) not in [ list, tuple ]: must_find_re = [must_find_re] | |
134 | if type(must_not_find) not in [ list, tuple ]: must_not_find = [must_not_find] | |
135 | if type(must_not_find_re) not in [ list, tuple ]: must_not_find_re = [must_not_find_re] | |
136 | ||
137 | find_list = [] | |
138 | find_list.extend(compile_list(must_find)) | |
139 | find_list.extend(compile_list(must_find_re, regexps = True)) | |
140 | find_list_patterns = [] | |
141 | find_list_patterns.extend(must_find) | |
142 | find_list_patterns.extend(must_find_re) | |
143 | ||
144 | not_find_list = [] | |
145 | not_find_list.extend(compile_list(must_not_find)) | |
146 | not_find_list.extend(compile_list(must_not_find_re, regexps = True)) | |
147 | not_find_list_patterns = [] | |
148 | not_find_list_patterns.extend(must_not_find) | |
149 | not_find_list_patterns.extend(must_not_find_re) | |
150 | ||
151 | for index in range(len(find_list)): | |
152 | match = find_list[index].search(stdout) | |
153 | if not match: | |
154 | return failure("pattern not found: %s" % find_list_patterns[index]) | |
155 | for index in range(len(not_find_list)): | |
156 | match = not_find_list[index].search(stdout) | |
157 | if match: | |
158 | return failure("pattern found: %s (match: %s)" % (not_find_list_patterns[index], match.group(0))) | |
159 | ||
160 | return success() | |
161 | ||
162 | def test_s3cmd(label, cmd_args = [], **kwargs): | |
163 | if not cmd_args[0].endswith("s3cmd"): | |
164 | cmd_args.insert(0, "python") | |
165 | cmd_args.insert(1, "s3cmd") | |
166 | ||
167 | return test(label, cmd_args, **kwargs) | |
168 | ||
169 | def test_mkdir(label, dir_name): | |
170 | if os.name in ("posix", "nt"): | |
171 | cmd = ['mkdir', '-p'] | |
172 | else: | |
173 | print "Unknown platform: %s" % os.name | |
174 | sys.exit(1) | |
175 | cmd.append(dir_name) | |
176 | return test(label, cmd) | |
177 | ||
178 | def test_rmdir(label, dir_name): | |
179 | if os.path.isdir(dir_name): | |
180 | if os.name == "posix": | |
181 | cmd = ['rm', '-rf'] | |
182 | elif os.name == "nt": | |
183 | cmd = ['rmdir', '/s/q'] | |
184 | else: | |
185 | print "Unknown platform: %s" % os.name | |
186 | sys.exit(1) | |
187 | cmd.append(dir_name) | |
188 | return test(label, cmd) | |
189 | else: | |
190 | return test(label, []) | |
191 | ||
192 | def test_flushdir(label, dir_name): | |
193 | test_rmdir(label + "(rm)", dir_name) | |
194 | return test_mkdir(label + "(mk)", dir_name) | |
195 | ||
196 | def test_copy(label, src_file, dst_file): | |
197 | if os.name == "posix": | |
198 | cmd = ['cp', '-f'] | |
199 | elif os.name == "nt": | |
200 | cmd = ['copy'] | |
201 | else: | |
202 | print "Unknown platform: %s" % os.name | |
203 | sys.exit(1) | |
204 | cmd.append(src_file) | |
205 | cmd.append(dst_file) | |
206 | return test(label, cmd) | |
207 | ||
208 | bucket_prefix = u"%s-" % getpass.getuser() | |
209 | print "Using bucket prefix: '%s'" % bucket_prefix | |
210 | ||
211 | argv = sys.argv[1:] | |
212 | while argv: | |
213 | arg = argv.pop(0) | |
214 | if arg.startswith('--bucket-prefix='): | |
215 | print "Usage: '--bucket-prefix PREFIX', not '--bucket-prefix=PREFIX'" | |
216 | sys.exit(0) | |
217 | if arg in ("-h", "--help"): | |
218 | print "%s A B K..O -N" % sys.argv[0] | |
219 | print "Run tests number A, B and K through to O, except for N" | |
220 | sys.exit(0) | |
221 | if arg in ("-l", "--list"): | |
222 | exclude_tests = range(0, 999) | |
223 | break | |
224 | if arg in ("-v", "--verbose"): | |
225 | verbose = True | |
226 | continue | |
227 | if arg in ("-p", "--bucket-prefix"): | |
228 | try: | |
229 | bucket_prefix = argv.pop(0) | |
230 | except IndexError: | |
231 | print "Bucket prefix option must explicitly supply a bucket name prefix" | |
232 | sys.exit(0) | |
233 | continue | |
234 | if arg.find("..") >= 0: | |
235 | range_idx = arg.find("..") | |
236 | range_start = arg[:range_idx] or 0 | |
237 | range_end = arg[range_idx+2:] or 999 | |
238 | run_tests.extend(range(int(range_start), int(range_end) + 1)) | |
239 | elif arg.startswith("-"): | |
240 | exclude_tests.append(int(arg[1:])) | |
241 | else: | |
242 | run_tests.append(int(arg)) | |
243 | ||
244 | if not run_tests: | |
245 | run_tests = range(0, 999) | |
246 | ||
247 | # helper functions for generating bucket names | |
248 | def bucket(tail): | |
249 | '''Test bucket name''' | |
250 | label = 'autotest' | |
251 | if str(tail) == '3': | |
252 | label = 'Autotest' | |
253 | return '%ss3cmd-%s-%s' % (bucket_prefix, label, tail) | |
254 | ||
255 | def pbucket(tail): | |
256 | '''Like bucket(), but prepends "s3://" for you''' | |
257 | return 's3://' + bucket(tail) | |
258 | ||
259 | ## ====== Remove test buckets | |
260 | test_s3cmd("Remove test buckets", ['rb', '-r', pbucket(1), pbucket(2), pbucket(3)], | |
261 | must_find = [ "Bucket '%s/' removed" % pbucket(1), | |
262 | "Bucket '%s/' removed" % pbucket(2), | |
263 | "Bucket '%s/' removed" % pbucket(3) ]) | |
264 | ||
265 | ||
266 | ## ====== Create one bucket (EU) | |
267 | test_s3cmd("Create one bucket (EU)", ['mb', '--bucket-location=EU', pbucket(1)], | |
268 | must_find = "Bucket '%s/' created" % pbucket(1)) | |
269 | ||
270 | ||
271 | ||
272 | ## ====== Create multiple buckets | |
273 | test_s3cmd("Create multiple buckets", ['mb', pbucket(2), pbucket(3)], | |
274 | must_find = [ "Bucket '%s/' created" % pbucket(2), "Bucket '%s/' created" % pbucket(3)]) | |
275 | ||
276 | ||
277 | ## ====== Invalid bucket name | |
278 | test_s3cmd("Invalid bucket name", ["mb", "--bucket-location=EU", pbucket('EU')], | |
279 | retcode = 1, | |
280 | must_find = "ERROR: Parameter problem: Bucket name '%s' contains disallowed character" % bucket('EU'), | |
281 | must_not_find_re = "Bucket.*created") | |
282 | ||
283 | ||
284 | ## ====== Buckets list | |
285 | test_s3cmd("Buckets list", ["ls"], | |
286 | must_find = [ "autotest-1", "autotest-2", "Autotest-3" ], must_not_find_re = "autotest-EU") | |
287 | ||
288 | ||
289 | ## ====== Sync to S3 | |
290 | test_s3cmd("Sync to S3", ['sync', 'testsuite/', pbucket(1) + '/xyz/', '--exclude', 'demo/*', '--exclude', '*.png', '--no-encrypt', '--exclude-from', 'testsuite/exclude.encodings' ], | |
291 | must_find = [ "WARNING: 32 non-printable characters replaced in: crappy-file-name/non-printables ^A^B^C^D^E^F^G^H^I^J^K^L^M^N^O^P^Q^R^S^T^U^V^W^X^Y^Z^[^\^]^^^_^? +-[\]^<>%%\"'#{}`&?.end", | |
292 | "WARNING: File can not be uploaded: testsuite/permission-tests/permission-denied.txt: Permission denied", | |
293 | "stored as '%s/xyz/crappy-file-name/non-printables ^A^B^C^D^E^F^G^H^I^J^K^L^M^N^O^P^Q^R^S^T^U^V^W^X^Y^Z^[^\^]^^^_^? +-[\\]^<>%%%%\"'#{}`&?.end'" % pbucket(1) ], | |
294 | must_not_find_re = [ "demo/", "\.png$", "permission-denied-dir" ]) | |
295 | ||
296 | if have_encoding: | |
297 | ## ====== Sync UTF-8 / GBK / ... to S3 | |
298 | test_s3cmd("Sync %s to S3" % encoding, ['sync', 'testsuite/encodings/' + encoding, '%s/xyz/encodings/' % pbucket(1), '--exclude', 'demo/*', '--no-encrypt' ], | |
299 | must_find = [ u"File 'testsuite/encodings/%(encoding)s/%(pattern)s' stored as '%(pbucket)s/xyz/encodings/%(encoding)s/%(pattern)s'" % { 'encoding' : encoding, 'pattern' : enc_pattern , 'pbucket' : pbucket(1)} ]) | |
300 | ||
301 | ||
302 | ## ====== List bucket content | |
303 | test_s3cmd("List bucket content", ['ls', '%s/xyz/' % pbucket(1) ], | |
304 | must_find_re = [ u"DIR %s/xyz/binary/$" % pbucket(1) , u"DIR %s/xyz/etc/$" % pbucket(1) ], | |
305 | must_not_find = [ u"random-crap.md5", u"/demo" ]) | |
306 | ||
307 | ||
308 | ## ====== List bucket recursive | |
309 | must_find = [ u"%s/xyz/binary/random-crap.md5" % pbucket(1) ] | |
310 | if have_encoding: | |
311 | must_find.append(u"%(pbucket)s/xyz/encodings/%(encoding)s/%(pattern)s" % { 'encoding' : encoding, 'pattern' : enc_pattern, 'pbucket' : pbucket(1) }) | |
312 | ||
313 | test_s3cmd("List bucket recursive", ['ls', '--recursive', pbucket(1)], | |
314 | must_find = must_find, | |
315 | must_not_find = [ "logo.png" ]) | |
316 | ||
317 | ## ====== FIXME | |
318 | # test_s3cmd("Recursive put", ['put', '--recursive', 'testsuite/etc', '%s/xyz/' % pbucket(1) ]) | |
319 | ||
320 | ||
321 | ## ====== Clean up local destination dir | |
322 | test_flushdir("Clean testsuite-out/", "testsuite-out") | |
323 | ||
324 | ||
325 | ## ====== Sync from S3 | |
326 | must_find = [ "File '%s/xyz/binary/random-crap.md5' stored as 'testsuite-out/xyz/binary/random-crap.md5'" % pbucket(1) ] | |
327 | if have_encoding: | |
328 | must_find.append(u"File '%(pbucket)s/xyz/encodings/%(encoding)s/%(pattern)s' stored as 'testsuite-out/xyz/encodings/%(encoding)s/%(pattern)s' " % { 'encoding' : encoding, 'pattern' : enc_pattern, 'pbucket' : pbucket(1) }) | |
329 | test_s3cmd("Sync from S3", ['sync', '%s/xyz' % pbucket(1), 'testsuite-out'], | |
330 | must_find = must_find) | |
331 | ||
332 | ||
333 | ## ====== Remove 'dir-test' directory | |
334 | test_rmdir("Remove 'dir-test/'", "testsuite-out/xyz/dir-test/") | |
335 | ||
336 | ||
337 | ## ====== Create dir with name of a file | |
338 | test_mkdir("Create file-dir dir", "testsuite-out/xyz/dir-test/file-dir") | |
339 | ||
340 | ||
341 | ## ====== Skip dst dirs | |
342 | test_s3cmd("Skip over dir", ['sync', '%s/xyz' % pbucket(1), 'testsuite-out'], | |
343 | must_find = "WARNING: testsuite-out/xyz/dir-test/file-dir is a directory - skipping over") | |
344 | ||
345 | ||
346 | ## ====== Clean up local destination dir | |
347 | test_flushdir("Clean testsuite-out/", "testsuite-out") | |
348 | ||
349 | ||
350 | ## ====== Put public, guess MIME | |
351 | test_s3cmd("Put public, guess MIME", ['put', '--guess-mime-type', '--acl-public', 'testsuite/etc/logo.png', '%s/xyz/etc/logo.png' % pbucket(1)], | |
352 | must_find = [ "stored as '%s/xyz/etc/logo.png'" % pbucket(1) ]) | |
353 | ||
354 | ||
355 | ## ====== Retrieve from URL | |
356 | if have_wget: | |
357 | test("Retrieve from URL", ['wget', '-O', 'testsuite-out/logo.png', 'http://%s.s3.amazonaws.com/xyz/etc/logo.png' % bucket(1)], | |
358 | must_find_re = [ 'logo.png.*saved \[22059/22059\]' ]) | |
359 | ||
360 | ||
361 | ## ====== Change ACL to Private | |
362 | test_s3cmd("Change ACL to Private", ['setacl', '--acl-private', '%s/xyz/etc/l*.png' % pbucket(1)], | |
363 | must_find = [ "logo.png: ACL set to Private" ]) | |
364 | ||
365 | ||
366 | ## ====== Verify Private ACL | |
367 | if have_wget: | |
368 | test("Verify Private ACL", ['wget', '-O', 'testsuite-out/logo.png', 'http://%s.s3.amazonaws.com/xyz/etc/logo.png' % bucket(1)], | |
369 | retcode = 8, | |
370 | must_find_re = [ 'ERROR 403: Forbidden' ]) | |
371 | ||
372 | ||
373 | ## ====== Change ACL to Public | |
374 | test_s3cmd("Change ACL to Public", ['setacl', '--acl-public', '--recursive', '%s/xyz/etc/' % pbucket(1) , '-v'], | |
375 | must_find = [ "logo.png: ACL set to Public" ]) | |
376 | ||
377 | ||
378 | ## ====== Verify Public ACL | |
379 | if have_wget: | |
380 | test("Verify Public ACL", ['wget', '-O', 'testsuite-out/logo.png', 'http://%s.s3.amazonaws.com/xyz/etc/logo.png' % bucket(1)], | |
381 | must_find_re = [ 'logo.png.*saved \[22059/22059\]' ]) | |
382 | ||
383 | ||
384 | ## ====== Sync more to S3 | |
385 | test_s3cmd("Sync more to S3", ['sync', 'testsuite/', 's3://%s/xyz/' % bucket(1), '--no-encrypt' ], | |
386 | must_find = [ "File 'testsuite/demo/some-file.xml' stored as '%s/xyz/demo/some-file.xml' " % pbucket(1) ], | |
387 | must_not_find = [ "File 'testsuite/etc/linked.png' stored as '%s/xyz/etc/linked.png" % pbucket(1) ]) | |
388 | ||
389 | ||
390 | ## ====== Don't check MD5 sum on Sync | |
391 | test_copy("Change file cksum1.txt", "testsuite/checksum/cksum2.txt", "testsuite/checksum/cksum1.txt") | |
392 | test_copy("Change file cksum33.txt", "testsuite/checksum/cksum2.txt", "testsuite/checksum/cksum33.txt") | |
393 | test_s3cmd("Don't check MD5", ['sync', 'testsuite/', 's3://%s/xyz/' % bucket(1), '--no-encrypt', '--no-check-md5'], | |
394 | must_find = [ "cksum33.txt" ], | |
395 | must_not_find = [ "cksum1.txt" ]) | |
396 | ||
397 | ||
398 | ## ====== Check MD5 sum on Sync | |
399 | test_s3cmd("Check MD5", ['sync', 'testsuite/', 's3://%s/xyz/' % bucket(1), '--no-encrypt', '--check-md5'], | |
400 | must_find = [ "cksum1.txt" ]) | |
401 | ||
402 | ||
403 | ## ====== Rename within S3 | |
404 | test_s3cmd("Rename within S3", ['mv', '%s/xyz/etc/logo.png' % pbucket(1), '%s/xyz/etc2/Logo.PNG' % pbucket(1)], | |
405 | must_find = [ 'File %s/xyz/etc/logo.png moved to %s/xyz/etc2/Logo.PNG' % (pbucket(1), pbucket(1))]) | |
406 | ||
407 | ||
408 | ## ====== Rename (NoSuchKey) | |
409 | test_s3cmd("Rename (NoSuchKey)", ['mv', '%s/xyz/etc/logo.png' % pbucket(1), '%s/xyz/etc2/Logo.PNG' % pbucket(1)], | |
410 | retcode = 1, | |
411 | must_find_re = [ 'ERROR:.*NoSuchKey' ], | |
412 | must_not_find = [ 'File %s/xyz/etc/logo.png moved to %s/xyz/etc2/Logo.PNG' % (pbucket(1), pbucket(1)) ]) | |
413 | ||
414 | ## ====== Sync more from S3 (invalid src) | |
415 | test_s3cmd("Sync more from S3 (invalid src)", ['sync', '--delete-removed', '%s/xyz/DOESNOTEXIST' % pbucket(1), 'testsuite-out'], | |
416 | must_not_find = [ "deleted: testsuite-out/logo.png" ]) | |
417 | ||
418 | ## ====== Sync more from S3 | |
419 | test_s3cmd("Sync more from S3", ['sync', '--delete-removed', '%s/xyz' % pbucket(1), 'testsuite-out'], | |
420 | must_find = [ "deleted: testsuite-out/logo.png", | |
421 | "File '%s/xyz/etc2/Logo.PNG' stored as 'testsuite-out/xyz/etc2/Logo.PNG' (22059 bytes" % pbucket(1), | |
422 | "File '%s/xyz/demo/some-file.xml' stored as 'testsuite-out/xyz/demo/some-file.xml' " % pbucket(1) ], | |
423 | must_not_find_re = [ "not-deleted.*etc/logo.png" ]) | |
424 | ||
425 | ||
426 | ## ====== Make dst dir for get | |
427 | test_rmdir("Remove dst dir for get", "testsuite-out") | |
428 | ||
429 | ||
430 | ## ====== Get multiple files | |
431 | test_s3cmd("Get multiple files", ['get', '%s/xyz/etc2/Logo.PNG' % pbucket(1), '%s/xyz/etc/AtomicClockRadio.ttf' % pbucket(1), 'testsuite-out'], | |
432 | retcode = 1, | |
433 | must_find = [ 'Destination must be a directory or stdout when downloading multiple sources.' ]) | |
434 | ||
435 | ||
436 | ## ====== Make dst dir for get | |
437 | test_mkdir("Make dst dir for get", "testsuite-out") | |
438 | ||
439 | ||
440 | ## ====== Get multiple files | |
441 | test_s3cmd("Get multiple files", ['get', '%s/xyz/etc2/Logo.PNG' % pbucket(1), '%s/xyz/etc/AtomicClockRadio.ttf' % pbucket(1), 'testsuite-out'], | |
442 | must_find = [ u"saved as 'testsuite-out/Logo.PNG'", u"saved as 'testsuite-out/AtomicClockRadio.ttf'" ]) | |
443 | ||
444 | ## ====== Upload files differing in capitalisation | |
445 | test_s3cmd("blah.txt / Blah.txt", ['put', '-r', 'testsuite/blahBlah', pbucket(1)], | |
446 | must_find = [ '%s/blahBlah/Blah.txt' % pbucket(1), '%s/blahBlah/blah.txt' % pbucket(1)]) | |
447 | ||
448 | ## ====== Copy between buckets | |
449 | test_s3cmd("Copy between buckets", ['cp', '%s/xyz/etc2/Logo.PNG' % pbucket(1), '%s/xyz/etc2/logo.png' % pbucket(3)], | |
450 | must_find = [ "File %s/xyz/etc2/Logo.PNG copied to %s/xyz/etc2/logo.png" % (pbucket(1), pbucket(3)) ]) | |
451 | ||
452 | ## ====== Recursive copy | |
453 | test_s3cmd("Recursive copy, set ACL", ['cp', '-r', '--acl-public', '%s/xyz/' % pbucket(1), '%s/copy' % pbucket(2), '--exclude', 'demo/dir?/*.txt', '--exclude', 'non-printables*'], | |
454 | must_find = [ "File %s/xyz/etc2/Logo.PNG copied to %s/copy/etc2/Logo.PNG" % (pbucket(1), pbucket(2)), | |
455 | "File %s/xyz/blahBlah/Blah.txt copied to %s/copy/blahBlah/Blah.txt" % (pbucket(1), pbucket(2)), | |
456 | "File %s/xyz/blahBlah/blah.txt copied to %s/copy/blahBlah/blah.txt" % (pbucket(1), pbucket(2)) ], | |
457 | must_not_find = [ "demo/dir1/file1-1.txt" ]) | |
458 | ||
459 | ## ====== Verify ACL and MIME type | |
460 | test_s3cmd("Verify ACL and MIME type", ['info', '%s/copy/etc2/Logo.PNG' % pbucket(2) ], | |
461 | must_find_re = [ "MIME type:.*image/png", | |
462 | "ACL:.*\*anon\*: READ", | |
463 | "URL:.*http://%s.s3.amazonaws.com/copy/etc2/Logo.PNG" % bucket(2) ]) | |
464 | ||
465 | ## ====== Rename within S3 | |
466 | test_s3cmd("Rename within S3", ['mv', '%s/copy/etc2/Logo.PNG' % pbucket(2), '%s/copy/etc/logo.png' % pbucket(2)], | |
467 | must_find = [ 'File %s/copy/etc2/Logo.PNG moved to %s/copy/etc/logo.png' % (pbucket(2), pbucket(2))]) | |
468 | ||
469 | ## ====== Sync between buckets | |
470 | test_s3cmd("Sync remote2remote", ['sync', '%s/xyz/' % pbucket(1), '%s/copy/' % pbucket(2), '--delete-removed', '--exclude', 'non-printables*'], | |
471 | must_find = [ "File %s/xyz/demo/dir1/file1-1.txt copied to %s/copy/demo/dir1/file1-1.txt" % (pbucket(1), pbucket(2)), | |
472 | "remote copy: etc/logo.png -> etc2/Logo.PNG", | |
473 | "deleted: '%s/copy/etc/logo.png'" % pbucket(2) ], | |
474 | must_not_find = [ "blah.txt" ]) | |
475 | ||
476 | ## ====== Don't Put symbolic link | |
477 | test_s3cmd("Don't put symbolic links", ['put', 'testsuite/etc/linked1.png', 's3://%s/xyz/' % bucket(1),], | |
478 | must_not_find_re = [ "linked1.png"]) | |
479 | ||
480 | ## ====== Put symbolic link | |
481 | test_s3cmd("Put symbolic links", ['put', 'testsuite/etc/linked1.png', 's3://%s/xyz/' % bucket(1),'--follow-symlinks' ], | |
482 | must_find = [ "File 'testsuite/etc/linked1.png' stored as '%s/xyz/linked1.png'" % pbucket(1)]) | |
483 | ||
484 | ## ====== Sync symbolic links | |
485 | test_s3cmd("Sync symbolic links", ['sync', 'testsuite/', 's3://%s/xyz/' % bucket(1), '--no-encrypt', '--follow-symlinks' ], | |
486 | must_find = ["remote copy: etc2/Logo.PNG -> etc/linked.png"], | |
487 | # Don't want to recursively copy linked directories! | |
488 | must_not_find_re = ["etc/more/linked-dir/more/give-me-more.txt", | |
489 | "etc/brokenlink.png"], | |
490 | ) | |
491 | ||
492 | ## ====== Multi source move | |
493 | test_s3cmd("Multi-source move", ['mv', '-r', '%s/copy/blahBlah/Blah.txt' % pbucket(2), '%s/copy/etc/' % pbucket(2), '%s/moved/' % pbucket(2)], | |
494 | must_find = [ "File %s/copy/blahBlah/Blah.txt moved to %s/moved/Blah.txt" % (pbucket(2), pbucket(2)), | |
495 | "File %s/copy/etc/AtomicClockRadio.ttf moved to %s/moved/AtomicClockRadio.ttf" % (pbucket(2), pbucket(2)), | |
496 | "File %s/copy/etc/TypeRa.ttf moved to %s/moved/TypeRa.ttf" % (pbucket(2), pbucket(2)) ], | |
497 | must_not_find = [ "blah.txt" ]) | |
498 | ||
499 | ## ====== Verify move | |
500 | test_s3cmd("Verify move", ['ls', '-r', pbucket(2)], | |
501 | must_find = [ "%s/moved/Blah.txt" % pbucket(2), | |
502 | "%s/moved/AtomicClockRadio.ttf" % pbucket(2), | |
503 | "%s/moved/TypeRa.ttf" % pbucket(2), | |
504 | "%s/copy/blahBlah/blah.txt" % pbucket(2) ], | |
505 | must_not_find = [ "%s/copy/blahBlah/Blah.txt" % pbucket(2), | |
506 | "%s/copy/etc/AtomicClockRadio.ttf" % pbucket(2), | |
507 | "%s/copy/etc/TypeRa.ttf" % pbucket(2) ]) | |
508 | ||
509 | ## ====== Simple delete | |
510 | test_s3cmd("Simple delete", ['del', '%s/xyz/etc2/Logo.PNG' % pbucket(1)], | |
511 | must_find = [ "File %s/xyz/etc2/Logo.PNG deleted" % pbucket(1) ]) | |
512 | ||
513 | ||
514 | ## ====== Recursive delete maximum exceeded | |
515 | test_s3cmd("Recursive delete maximum exceeded", ['del', '--recursive', '--max-delete=1', '--exclude', 'Atomic*', '%s/xyz/etc' % pbucket(1)], | |
516 | must_not_find = [ "File %s/xyz/etc/TypeRa.ttf deleted" % pbucket(1) ]) | |
517 | ||
518 | ## ====== Recursive delete | |
519 | test_s3cmd("Recursive delete", ['del', '--recursive', '--exclude', 'Atomic*', '%s/xyz/etc' % pbucket(1)], | |
520 | must_find = [ "File %s/xyz/etc/TypeRa.ttf deleted" % pbucket(1) ], | |
521 | must_find_re = [ "File .*/etc/logo.png deleted" ], | |
522 | must_not_find = [ "AtomicClockRadio.ttf" ]) | |
523 | ||
524 | ## ====== Recursive delete all | |
525 | test_s3cmd("Recursive delete all", ['del', '--recursive', '--force', pbucket(1)], | |
526 | must_find_re = [ "File .*binary/random-crap deleted" ]) | |
527 | ||
528 | ||
529 | ## ====== Remove empty bucket | |
530 | test_s3cmd("Remove empty bucket", ['rb', pbucket(1)], | |
531 | must_find = [ "Bucket '%s/' removed" % pbucket(1) ]) | |
532 | ||
533 | ||
534 | ## ====== Remove remaining buckets | |
535 | test_s3cmd("Remove remaining buckets", ['rb', '--recursive', pbucket(2), pbucket(3)], | |
536 | must_find = [ "Bucket '%s/' removed" % pbucket(2), | |
537 | "Bucket '%s/' removed" % pbucket(3) ]) | |
538 | ||
539 | # vim:et:ts=4:sts=4:ai |
0 | 0 | #!/usr/bin/env python |
1 | 1 | |
2 | ## Amazon S3 manager | |
3 | ## Author: Michal Ludvig <michal@logix.cz> | |
4 | ## http://www.logix.cz/michal | |
5 | ## License: GPL Version 2 | |
2 | ## -------------------------------------------------------------------- | |
3 | ## s3cmd - S3 client | |
4 | ## | |
5 | ## Authors : Michal Ludvig and contributors | |
6 | ## Copyright : TGRMN Software - http://www.tgrmn.com - and contributors | |
7 | ## Website : http://s3tools.org | |
8 | ## License : GPL Version 2 | |
9 | ## -------------------------------------------------------------------- | |
10 | ## This program is free software; you can redistribute it and/or modify | |
11 | ## it under the terms of the GNU General Public License as published by | |
12 | ## the Free Software Foundation; either version 2 of the License, or | |
13 | ## (at your option) any later version. | |
14 | ## This program is distributed in the hope that it will be useful, | |
15 | ## but WITHOUT ANY WARRANTY; without even the implied warranty of | |
16 | ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | |
17 | ## GNU General Public License for more details. | |
18 | ## -------------------------------------------------------------------- | |
6 | 19 | |
7 | 20 | import sys |
8 | 21 | |
9 | 22 | if float("%d.%d" %(sys.version_info[0], sys.version_info[1])) < 2.4: |
10 | sys.stderr.write("ERROR: Python 2.4 or higher required, sorry.\n") | |
11 | sys.exit(1) | |
23 | sys.stderr.write(u"ERROR: Python 2.4 or higher required, sorry.\n") | |
24 | sys.exit(EX_OSFILE) | |
12 | 25 | |
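A note on the version check in the hunk above: comparing `float("%d.%d" % ...)` misorders two-digit minor versions (a hypothetical "2.10" stringifies and compares as 2.1). Tuple comparison on `sys.version_info` is the usual idiom; a sketch under that assumption:

```python
import sys

# Tuples compare element-wise, so (2, 10) > (2, 4) comes out right,
# whereas float("2.10") == 2.1 would compare wrongly.
if sys.version_info < (2, 4):
    sys.stderr.write("ERROR: Python 2.4 or higher required, sorry.\n")
    sys.exit(1)
```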
13 | 26 | import logging |
14 | 27 | import time |
45 | 58 | uri = S3Uri(args[0]) |
46 | 59 | if uri.type == "s3" and uri.has_bucket(): |
47 | 60 | subcmd_bucket_usage(s3, uri) |
48 | return | |
61 | return EX_OK | |
49 | 62 | subcmd_bucket_usage_all(s3) |
63 | return EX_OK | |
50 | 64 | |
51 | 65 | def subcmd_bucket_usage_all(s3): |
66 | """ | |
67 | Returns: sum of bucket sizes as integer | |
68 | Raises: S3Error | |
69 | """ | |
52 | 70 | response = s3.list_all_buckets() |
53 | 71 | |
54 | 72 | buckets_size = 0 |
60 | 78 | total_size_str = str(total_size) + size_coeff |
61 | 79 | output(u"".rjust(8, "-")) |
62 | 80 | output(u"%s Total" % (total_size_str.ljust(8))) |
81 | return buckets_size | |
63 | 82 | |
64 | 83 | def subcmd_bucket_usage(s3, uri): |
84 | """ | |
85 | Returns: bucket size as integer | |
86 | Raises: S3Error | |
87 | """ | |
88 | ||
65 | 89 | bucket = uri.bucket() |
66 | 90 | object = uri.object() |
67 | 91 | |
77 | 101 | except S3Error, e: |
78 | 102 | if S3.codes.has_key(e.info["Code"]): |
79 | 103 | error(S3.codes[e.info["Code"]] % bucket) |
80 | return | |
81 | else: | |
82 | raise | |
104 | raise | |
83 | 105 | |
84 | 106 | # objects in the current scope: |
85 | 107 | for obj in response["list"]: |
100 | 122 | uri = S3Uri(args[0]) |
101 | 123 | if uri.type == "s3" and uri.has_bucket(): |
102 | 124 | subcmd_bucket_list(s3, uri) |
103 | return | |
125 | return EX_OK | |
104 | 126 | subcmd_buckets_list_all(s3) |
127 | return EX_OK | |
105 | 128 | |
106 | 129 | def cmd_buckets_list_all_all(args): |
107 | 130 | s3 = S3(Config()) |
111 | 134 | for bucket in response["list"]: |
112 | 135 | subcmd_bucket_list(s3, S3Uri("s3://" + bucket["Name"])) |
113 | 136 | output(u"") |
114 | ||
137 | return EX_OK | |
115 | 138 | |
116 | 139 | def subcmd_buckets_list_all(s3): |
117 | 140 | response = s3.list_all_buckets() |
133 | 156 | except S3Error, e: |
134 | 157 | if S3.codes.has_key(e.info["Code"]): |
135 | 158 | error(S3.codes[e.info["Code"]] % bucket) |
136 | return | |
137 | else: | |
138 | raise | |
159 | raise | |
139 | 160 | |
140 | 161 | if cfg.list_md5: |
141 | 162 | format_string = u"%(timestamp)16s %(size)9s%(coeff)1s %(md5)32s %(uri)s" |
182 | 203 | except S3Error, e: |
183 | 204 | if S3.codes.has_key(e.info["Code"]): |
184 | 205 | error(S3.codes[e.info["Code"]] % uri.bucket()) |
185 | return | |
186 | else: | |
187 | raise | |
206 | raise | |
207 | return EX_OK | |
188 | 208 | |
189 | 209 | def cmd_website_info(args): |
190 | 210 | s3 = S3(Config()) |
204 | 224 | except S3Error, e: |
205 | 225 | if S3.codes.has_key(e.info["Code"]): |
206 | 226 | error(S3.codes[e.info["Code"]] % uri.bucket()) |
207 | return | |
208 | else: | |
209 | raise | |
227 | raise | |
228 | return EX_OK | |
210 | 229 | |
211 | 230 | def cmd_website_create(args): |
212 | 231 | s3 = S3(Config()) |
220 | 239 | except S3Error, e: |
221 | 240 | if S3.codes.has_key(e.info["Code"]): |
222 | 241 | error(S3.codes[e.info["Code"]] % uri.bucket()) |
223 | return | |
224 | else: | |
225 | raise | |
242 | raise | |
243 | return EX_OK | |
226 | 244 | |
227 | 245 | def cmd_website_delete(args): |
228 | 246 | s3 = S3(Config()) |
236 | 254 | except S3Error, e: |
237 | 255 | if S3.codes.has_key(e.info["Code"]): |
238 | 256 | error(S3.codes[e.info["Code"]] % uri.bucket()) |
239 | return | |
240 | else: | |
241 | raise | |
242 | ||
243 | def cmd_bucket_delete(args): | |
244 | def _bucket_delete_one(uri): | |
245 | try: | |
246 | response = s3.bucket_delete(uri.bucket()) | |
247 | except S3Error, e: | |
248 | if e.info['Code'] == 'BucketNotEmpty' and (cfg.force or cfg.recursive): | |
249 | warning(u"Bucket is not empty. Removing all the objects from it first. This may take some time...") | |
250 | subcmd_object_del_uri(uri.uri(), recursive = True) | |
251 | return _bucket_delete_one(uri) | |
252 | elif S3.codes.has_key(e.info["Code"]): | |
253 | error(S3.codes[e.info["Code"]] % uri.bucket()) | |
254 | return | |
255 | else: | |
256 | raise | |
257 | ||
257 | raise | |
258 | return EX_OK | |
259 | ||
260 | def cmd_expiration_set(args): | |
258 | 261 | s3 = S3(Config()) |
259 | 262 | for arg in args: |
260 | 263 | uri = S3Uri(arg) |
261 | 264 | if not uri.type == "s3" or not uri.has_bucket() or uri.has_object(): |
262 | 265 | raise ParameterError("Expecting S3 URI with just the bucket name set instead of '%s'" % arg) |
263 | _bucket_delete_one(uri) | |
264 | output(u"Bucket '%s' removed" % uri.uri()) | |
266 | try: | |
267 | response = s3.expiration_set(uri, cfg.bucket_location) | |
268 | if response["status"] == 200: | |
269 | output(u"Bucket '%s': expiration configuration is set." % (uri.uri())) | |
270 | elif response["status"] == 204: | |
271 | output(u"Bucket '%s': expiration configuration is deleted." % (uri.uri())) | |
272 | except S3Error, e: | |
273 | if S3.codes.has_key(e.info["Code"]): | |
274 | error(S3.codes[e.info["Code"]] % uri.bucket()) | |
275 | raise | |
276 | return EX_OK | |
277 | ||
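A note on the status comparison in the block above: `is` tests object identity, not numeric equality, and only appears to work because CPython caches small integers. `==` is the correct operator for status codes; a quick demonstration (`int("...")` is used to defeat compile-time constant sharing):

```python
# `is` compares object identity; `==` compares values.
a = int("1000")
b = int("1000")
assert a == b            # equal values
assert a is not b        # but two distinct objects in CPython

# HTTP status codes should therefore always be compared with `==`
status = int("200")
assert status == 200
```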
278 | def cmd_bucket_delete(args): | |
279 | def _bucket_delete_one(uri): | |
280 | try: | |
281 | response = s3.bucket_delete(uri.bucket()) | |
282 | output(u"Bucket '%s' removed" % uri.uri()) | |
283 | except S3Error, e: | |
284 | if e.info['Code'] == 'NoSuchBucket': | |
285 | if cfg.force: | |
286 | return EX_OK | |
287 | else: | |
288 | return EX_USAGE | |
289 | if e.info['Code'] == 'BucketNotEmpty' and (cfg.force or cfg.recursive): | |
290 | warning(u"Bucket is not empty. Removing all the objects from it first. This may take some time...") | |
291 | rc = subcmd_batch_del(uri_str = uri.uri()) | |
292 | if rc == EX_OK: | |
293 | return _bucket_delete_one(uri) | |
294 | else: | |
295 | output(u"Bucket was not removed") | |
296 | elif S3.codes.has_key(e.info["Code"]): | |
297 | error(S3.codes[e.info["Code"]] % uri.bucket()) | |
298 | raise | |
299 | return EX_OK | |
300 | ||
301 | s3 = S3(Config()) | |
302 | for arg in args: | |
303 | uri = S3Uri(arg) | |
304 | if not uri.type == "s3" or not uri.has_bucket() or uri.has_object(): | |
305 | raise ParameterError("Expecting S3 URI with just the bucket name set instead of '%s'" % arg) | |
306 | rc = _bucket_delete_one(uri) | |
307 | if rc != EX_OK: | |
308 | return rc | |
309 | return EX_OK | |
265 | 310 | |
266 | 311 | def cmd_object_put(args): |
267 | 312 | cfg = Config() |
274 | 319 | destination_base_uri = S3Uri(args.pop()) |
275 | 320 | if destination_base_uri.type != 's3': |
276 | 321 | raise ParameterError("Destination must be S3Uri. Got: %s" % destination_base_uri) |
277 | destination_base = str(destination_base_uri) | |
322 | destination_base = unicode(destination_base_uri) | |
278 | 323 | |
279 | 324 | if len(args) == 0: |
280 | 325 | raise ParameterError("Nothing to upload. Expecting a local file or directory.") |
281 | 326 | |
282 | local_list, single_file_local = fetch_local_list(args, is_src = True) | |
283 | ||
284 | local_list, exclude_list = filter_exclude_include(local_list) | |
327 | local_list, single_file_local, exclude_list = fetch_local_list(args, is_src = True) | |
285 | 328 | |
286 | 329 | local_count = len(local_list) |
287 | 330 | |
288 | 331 | info(u"Summary: %d local files to upload" % local_count) |
332 | ||
333 | if local_count == 0: | |
334 | raise ParameterError("Nothing to upload.") | |
289 | 335 | |
290 | 336 | if local_count > 0: |
291 | 337 | if not single_file_local and '-' in local_list.keys(): |
311 | 357 | output(u"upload: %s -> %s" % (nicekey, local_list[key]['remote_uri'])) |
312 | 358 | |
313 | 359 | warning(u"Exiting now because of --dry-run") |
314 | return | |
360 | return EX_OK | |
315 | 361 | |
316 | 362 | seq = 0 |
317 | 363 | for key in local_list: |
324 | 370 | full_name = full_name_orig |
325 | 371 | seq_label = "[%d of %d]" % (seq, local_count) |
326 | 372 | if Config().encrypt: |
327 | exitcode, full_name, extra_headers["x-amz-meta-s3tools-gpgenc"] = gpg_encrypt(full_name_orig) | |
373 | gpg_exitcode, full_name, extra_headers["x-amz-meta-s3tools-gpgenc"] = gpg_encrypt(full_name_orig) | |
328 | 374 | if cfg.preserve_attrs or local_list[key]['size'] > (cfg.multipart_chunk_size_mb * 1024 * 1024): |
329 | 375 | attr_header = _build_attr_header(local_list, key) |
330 | 376 | debug(u"attr_header: %s" % attr_header) |
349 | 395 | if Config().encrypt and full_name != full_name_orig: |
350 | 396 | debug(u"Removing temporary encrypted file: %s" % unicodise(full_name)) |
351 | 397 | os.remove(full_name) |
398 | return EX_OK | |
352 | 399 | |
353 | 400 | def cmd_object_get(args): |
354 | 401 | cfg = Config() |
397 | 444 | if len(args) == 0: |
398 | 445 | raise ParameterError("Nothing to download. Expecting S3 URI.") |
399 | 446 | |
400 | remote_list = fetch_remote_list(args, require_attribs = False) | |
401 | remote_list, exclude_list = filter_exclude_include(remote_list) | |
447 | remote_list, exclude_list = fetch_remote_list(args, require_attribs = False) | |
402 | 448 | |
403 | 449 | remote_count = len(remote_list) |
404 | 450 | |
429 | 475 | output(u"download: %s -> %s" % (remote_list[key]['object_uri_str'], remote_list[key]['local_filename'])) |
430 | 476 | |
431 | 477 | warning(u"Exiting now because of --dry-run") |
432 | return | |
478 | return EX_OK | |
433 | 479 | |
434 | 480 | seq = 0 |
435 | 481 | for key in remote_list: |
478 | 524 | continue |
479 | 525 | try: |
480 | 526 | response = s3.object_get(uri, dst_stream, start_position = start_position, extra_label = seq_label) |
527 | except S3DownloadError, e: | |
528 | error(u"%s: Skipping that file. This is usually a transient error, please try again later." % e) | |
529 | if not file_exists: # Delete, only if file didn't exist before! | |
530 | debug(u"object_get failed for '%s', deleting..." % (destination,)) | |
531 | os.unlink(destination) | |
532 | continue | |
481 | 533 | except S3Error, e: |
482 | 534 | if not file_exists: # Delete, only if file didn't exist before! |
483 | 535 | debug(u"object_get failed for '%s', deleting..." % (destination,)) |
487 | 539 | if response["headers"].has_key("x-amz-meta-s3tools-gpgenc"): |
488 | 540 | gpg_decrypt(destination, response["headers"]["x-amz-meta-s3tools-gpgenc"]) |
489 | 541 | response["size"] = os.stat(destination)[6] |
542 | if response["headers"].has_key("last-modified") and destination != "-": | |
543 | last_modified = time.mktime(time.strptime(response["headers"]["last-modified"], "%a, %d %b %Y %H:%M:%S GMT")) | |
544 | os.utime(destination, (last_modified, last_modified)) | |
545 | debug("set mtime to %s" % last_modified) | |
490 | 546 | if not Config().progress_meter and destination != "-": |
491 | 547 | speed_fmt = formatSize(response["speed"], human_readable = True, floating_point = True) |
492 | 548 | output(u"File %s saved as '%s' (%d bytes in %0.1f seconds, %0.2f %sB/s)" % |
494 | 550 | if Config().delete_after_fetch: |
495 | 551 | s3.object_delete(uri) |
496 | 552 | output(u"File %s removed after fetch" % (uri)) |
553 | return EX_OK | |
497 | 554 | |
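The `last-modified` handling added above parses the fixed-format HTTP date and stamps it onto the downloaded file. A self-contained sketch of the same steps (the temp-file usage is illustrative only):

```python
import os
import tempfile
import time

HTTP_DATE_FMT = "%a, %d %b %Y %H:%M:%S GMT"

def restore_mtime(path, last_modified_header):
    """Set path's atime/mtime from an HTTP Last-Modified header value."""
    # NOTE: like the original, this feeds a GMT timestamp to mktime(),
    # which interprets it as local time -- adequate for a test fixture.
    ts = time.mktime(time.strptime(last_modified_header, HTTP_DATE_FMT))
    os.utime(path, (ts, ts))
    return ts

fd, path = tempfile.mkstemp()
os.close(fd)
ts = restore_mtime(path, "Thu, 01 Jan 2015 00:00:00 GMT")
assert abs(os.stat(path).st_mtime - ts) < 1
os.unlink(path)
```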
498 | 555 | def cmd_object_del(args): |
556 | recursive = Config().recursive | |
499 | 557 | for uri_str in args: |
500 | 558 | uri = S3Uri(uri_str) |
501 | 559 | if uri.type != "s3": |
502 | 560 | raise ParameterError("Expecting S3 URI instead of '%s'" % uri_str) |
503 | 561 | if not uri.has_object(): |
504 | if Config().recursive and not Config().force: | |
562 | if recursive and not Config().force: | |
505 | 563 | raise ParameterError("Please use --force to delete ALL contents of %s" % uri_str) |
506 | elif not Config().recursive: | |
564 | elif not recursive: | |
507 | 565 | raise ParameterError("File name required, not only the bucket name. Alternatively use --recursive") |
508 | subcmd_object_del_uri(uri_str) | |
566 | ||
567 | if not recursive: | |
568 | rc = subcmd_object_del_uri(uri_str) | |
569 | else: | |
570 | rc = (subcmd_batch_del(uri_str = uri_str) == EX_OK) | |
571 | if not rc: | |
572 | return EX_USAGE | |
573 | return EX_OK | |
574 | ||
575 | def subcmd_batch_del(uri_str = None, bucket = None, remote_list = None): | |
576 | """ | |
577 | Returns: EX_OK (or False when called with an empty remote_list) | |
578 | Raises: ValueError | |
579 | """ | |
580 | ||
581 | def _batch_del(remote_list): | |
582 | s3 = S3(cfg) | |
583 | to_delete = remote_list[:1000] | |
584 | remote_list = remote_list[1000:] | |
585 | while len(to_delete): | |
586 | debug(u"Batch delete %d, remaining %d" % (len(to_delete), len(remote_list))) | |
587 | if not cfg.dry_run: | |
588 | response = s3.object_batch_delete(to_delete) | |
589 | output('\n'.join((u"File %s deleted" % to_delete[p]['object_uri_str']) for p in to_delete)) | |
590 | to_delete = remote_list[:1000] | |
591 | remote_list = remote_list[1000:] | |
592 | ||
593 | if remote_list is not None and len(remote_list) == 0: | |
594 | return False | |
595 | ||
596 | if len([item for item in [uri_str, bucket, remote_list] if item]) != 1: | |
597 | raise ValueError("One and only one of 'uri_str', 'bucket', 'remote_list' can be specified.") | |
598 | ||
599 | if bucket: # bucket specified | |
600 | uri_str = "s3://%s" % bucket | |
601 | if remote_list is None: # uri_str specified | |
602 | remote_list, exclude_list = fetch_remote_list(uri_str, require_attribs = False) | |
603 | ||
604 | if len(remote_list) == 0: | |
605 | warning(u"Remote list is empty.") | |
606 | return EX_OK | |
607 | ||
608 | if cfg.max_delete > 0 and len(remote_list) > cfg.max_delete: | |
609 | warning(u"delete: maximum requested number of deletes would be exceeded, none performed.") | |
610 | return EX_OK | |
611 | ||
612 | _batch_del(remote_list) | |
613 | ||
614 | if cfg.dry_run: | |
615 | warning(u"Exiting now because of --dry-run") | |
616 | return EX_OK | |
617 | return EX_OK | |
509 | 618 | |
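`_batch_del` above works around S3's multi-object-delete limit by peeling off at most 1000 keys per request. The chunking pattern in isolation, with a stubbed delete callback standing in for `s3.object_batch_delete`:

```python
def batch_delete(keys, delete_batch, limit=1000):
    """Delete keys in chunks of at most `limit` (S3's multi-delete maximum)."""
    deleted = 0
    to_delete, keys = keys[:limit], keys[limit:]
    while to_delete:
        delete_batch(to_delete)        # one DeleteObjects-style request
        deleted += len(to_delete)
        to_delete, keys = keys[:limit], keys[limit:]
    return deleted

calls = []
n = batch_delete(["key-%d" % i for i in range(2500)], calls.append, limit=1000)
print(n, [len(c) for c in calls])  # 2500 [1000, 1000, 500]
```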
510 | 619 | def subcmd_object_del_uri(uri_str, recursive = None): |
620 | """ | |
621 | Returns: True if deletion went through (or under --dry-run), False if blocked by --max-delete | |
622 | Raises: ValueError | |
623 | """ | |
511 | 624 | s3 = S3(cfg) |
512 | 625 | |
513 | 626 | if recursive is None: |
514 | 627 | recursive = cfg.recursive |
515 | 628 | |
516 | remote_list = fetch_remote_list(uri_str, require_attribs = False, recursive = recursive) | |
517 | remote_list, exclude_list = filter_exclude_include(remote_list) | |
629 | remote_list, exclude_list = fetch_remote_list(uri_str, require_attribs = False, recursive = recursive) | |
518 | 630 | |
519 | 631 | remote_count = len(remote_list) |
520 | 632 | |
521 | 633 | info(u"Summary: %d remote files to delete" % remote_count) |
522 | 634 | if cfg.max_delete > 0 and remote_count > cfg.max_delete: |
523 | 635 | warning(u"delete: maximum requested number of deletes would be exceeded, none performed.") |
524 | return | |
636 | return False | |
525 | 637 | |
526 | 638 | if cfg.dry_run: |
527 | 639 | for key in exclude_list: |
530 | 642 | output(u"delete: %s" % remote_list[key]['object_uri_str']) |
531 | 643 | |
532 | 644 | warning(u"Exiting now because of --dry-run") |
533 | return | |
645 | return True | |
534 | 646 | |
535 | 647 | for key in remote_list: |
536 | 648 | item = remote_list[key] |
537 | 649 | response = s3.object_delete(S3Uri(item['object_uri_str'])) |
538 | 650 | output(u"File %s deleted" % item['object_uri_str']) |
539 | ||
651 | return True | |
652 | ||
540 | 653 | def cmd_object_restore(args): |
541 | 654 | s3 = S3(cfg) |
542 | ||
655 | ||
543 | 656 | if cfg.restore_days < 1: |
544 | 657 | raise ParameterError("You must restore a file for 1 or more days") |
545 | 658 | |
546 | remote_list = fetch_remote_list(args, require_attribs = False, recursive = cfg.recursive) | |
547 | remote_list, exclude_list = filter_exclude_include(remote_list) | |
659 | remote_list, exclude_list = fetch_remote_list(args, require_attribs = False, recursive = cfg.recursive) | |
548 | 660 | |
549 | 661 | remote_count = len(remote_list) |
550 | 662 | |
557 | 669 | output(u"restore: %s" % remote_list[key]['object_uri_str']) |
558 | 670 | |
559 | 671 | warning(u"Exiting now because of --dry-run") |
560 | return | |
672 | return EX_OK | |
561 | 673 | |
562 | 674 | for key in remote_list: |
563 | 675 | item = remote_list[key] |
564 | ||
676 | ||
565 | 677 | uri = S3Uri(item['object_uri_str']) |
566 | 678 | if not item['object_uri_str'].endswith("/"): |
567 | 679 | response = s3.object_restore(S3Uri(item['object_uri_str'])) |
568 | 680 | output(u"File %s restoration started" % item['object_uri_str']) |
569 | 681 | else: |
570 | 682 | debug(u"Skipping directory since only files may be restored") |
571 | ||
683 | return EX_OK | |
684 | ||
572 | 685 | |
573 | 686 | def subcmd_cp_mv(args, process_fce, action_str, message): |
574 | if len(args) < 2: | |
687 | if action_str != 'modify' and len(args) < 2: | |
575 | 688 | raise ParameterError("Expecting two or more S3 URIs for " + action_str) |
576 | dst_base_uri = S3Uri(args.pop()) | |
689 | if action_str == 'modify' and len(args) < 1: | |
690 | raise ParameterError("Expecting one or more S3 URIs for " + action_str) | |
691 | if action_str != 'modify': | |
692 | dst_base_uri = S3Uri(args.pop()) | |
693 | else: | |
694 | dst_base_uri = S3Uri(args[-1]) | |
695 | ||
577 | 696 | if dst_base_uri.type != "s3": |
578 | 697 | raise ParameterError("Destination must be S3 URI. To download a file use 'get' or 'sync'.") |
579 | 698 | destination_base = dst_base_uri.uri() |
580 | 699 | |
581 | remote_list = fetch_remote_list(args, require_attribs = False) | |
582 | remote_list, exclude_list = filter_exclude_include(remote_list) | |
700 | remote_list, exclude_list = fetch_remote_list(args, require_attribs = False) | |
583 | 701 | |
584 | 702 | remote_count = len(remote_list) |
585 | 703 | |
604 | 722 | output(u"%s: %s -> %s" % (action_str, remote_list[key]['object_uri_str'], remote_list[key]['dest_name'])) |
605 | 723 | |
606 | 724 | warning(u"Exiting now because of --dry-run") |
607 | return | |
725 | return EX_OK | |
608 | 726 | |
609 | 727 | seq = 0 |
610 | 728 | for key in remote_list: |
626 | 744 | warning(u"Key not found %s" % item['object_uri_str']) |
627 | 745 | else: |
628 | 746 | raise |
747 | return EX_OK | |
629 | 748 | |
630 | 749 | def cmd_cp(args): |
631 | 750 | s3 = S3(Config()) |
632 | subcmd_cp_mv(args, s3.object_copy, "copy", "File %(src)s copied to %(dst)s") | |
751 | return subcmd_cp_mv(args, s3.object_copy, "copy", u"File %(src)s copied to %(dst)s") | |
752 | ||
753 | def cmd_modify(args): | |
754 | s3 = S3(Config()) | |
755 | return subcmd_cp_mv(args, s3.object_copy, "modify", u"File %(src)s modified") | |
633 | 756 | |
634 | 757 | def cmd_mv(args): |
635 | 758 | s3 = S3(Config()) |
636 | subcmd_cp_mv(args, s3.object_move, "move", "File %(src)s moved to %(dst)s") | |
759 | return subcmd_cp_mv(args, s3.object_move, "move", u"File %(src)s moved to %(dst)s") | |
637 | 760 | |
638 | 761 | def cmd_info(args): |
639 | 762 | s3 = S3(Config()) |
666 | 789 | info = s3.bucket_info(uri) |
667 | 790 | output(u"%s (bucket):" % uri.uri()) |
668 | 791 | output(u" Location: %s" % info['bucket-location']) |
792 | try: | |
793 | expiration = s3.expiration_info(uri, cfg.bucket_location) | |
794 | expiration_desc = "Expiration Rule: " | |
795 | if expiration['prefix'] == "": | |
796 | expiration_desc += "all objects in this bucket " | |
797 | else: | |
798 | expiration_desc += "objects with key prefix '" + expiration['prefix'] + "' " | |
799 | expiration_desc += "will expire in '" | |
800 | if expiration['days']: | |
801 | expiration_desc += expiration['days'] + "' day(s) after creation" | |
802 | elif expiration['date']: | |
803 | expiration_desc += expiration['date'] + "' " | |
804 | output(u" %s" % expiration_desc) | |
805 | except: | |
806 | output(u" Expiration Rule: none") | |
669 | 807 | acl = s3.get_acl(uri) |
670 | 808 | acl_grant_list = acl.getGrantList() |
671 | ||
672 | 809 | try: |
673 | 810 | policy = s3.get_policy(uri) |
674 | 811 | output(u" policy: %s" % policy) |
683 | 820 | except S3Error, e: |
684 | 821 | if S3.codes.has_key(e.info["Code"]): |
685 | 822 | error(S3.codes[e.info["Code"]] % uri.bucket()) |
686 | return | |
687 | else: | |
688 | raise | |
823 | raise | |
824 | return EX_OK | |
689 | 825 | |
690 | 826 | def filedicts_to_keys(*args): |
691 | 827 | keys = set() |
696 | 832 | return keys |
697 | 833 | |
698 | 834 | def cmd_sync_remote2remote(args): |
699 | def _do_deletes(s3, dst_list): | |
700 | if cfg.max_delete > 0 and len(dst_list) > cfg.max_delete: | |
701 | warning(u"delete: maximum requested number of deletes would be exceeded, none performed.") | |
702 | return | |
703 | # Delete items in destination that are not in source | |
704 | if cfg.dry_run: | |
705 | for key in dst_list: | |
706 | output(u"delete: %s" % dst_list[key]['object_uri_str']) | |
707 | else: | |
708 | for key in dst_list: | |
709 | uri = S3Uri(dst_list[key]['object_uri_str']) | |
710 | s3.object_delete(uri) | |
711 | output(u"deleted: '%s'" % uri) | |
712 | ||
713 | 835 | s3 = S3(Config()) |
714 | 836 | |
715 | 837 | # Normalise s3://uri (e.g. assert trailing slash) |
716 | 838 | destination_base = unicode(S3Uri(args[-1])) |
717 | 839 | |
718 | src_list = fetch_remote_list(args[:-1], recursive = True, require_attribs = True) | |
719 | dst_list = fetch_remote_list(destination_base, recursive = True, require_attribs = True) | |
840 | src_list, src_exclude_list = fetch_remote_list(args[:-1], recursive = True, require_attribs = True) | |
841 | dst_list, dst_exclude_list = fetch_remote_list(destination_base, recursive = True, require_attribs = True) | |
720 | 842 | |
721 | 843 | src_count = len(src_list) |
722 | 844 | orig_src_count = src_count |
723 | 845 | dst_count = len(dst_list) |
724 | 846 | |
725 | 847 | info(u"Found %d source files, %d destination files" % (src_count, dst_count)) |
726 | ||
727 | src_list, src_exclude_list = filter_exclude_include(src_list) | |
728 | dst_list, dst_exclude_list = filter_exclude_include(dst_list) | |
729 | 848 | |
730 | 849 | src_list, dst_list, update_list, copy_pairs = compare_filelists(src_list, dst_list, src_remote = True, dst_remote = True, delay_updates = cfg.delay_updates) |
731 | 850 | |
751 | 870 | for key in src_list: |
752 | 871 | output(u"Sync: %s -> %s" % (src_list[key]['object_uri_str'], src_list[key]['target_uri'])) |
753 | 872 | warning(u"Exiting now because of --dry-run") |
754 | return | |
873 | return EX_OK | |
755 | 874 | |
756 | 875 | # if there are copy pairs, we can't do delete_before, on the chance |
757 | 876 | # we need one of the to-be-deleted files as a copy source. |
764 | 883 | |
765 | 884 | # Delete items in destination that are not in source |
766 | 885 | if cfg.delete_removed and not cfg.delete_after: |
767 | _do_deletes(s3, dst_list) | |
886 | subcmd_batch_del(remote_list = dst_list) | |
768 | 887 | |
769 | 888 | def _upload(src_list, seq, src_count): |
770 | 889 | file_list = src_list.keys() |
808 | 927 | |
809 | 928 | # Delete items in destination that are not in source |
810 | 929 | if cfg.delete_removed and cfg.delete_after: |
811 | _do_deletes(s3, dst_list) | |
930 | subcmd_batch_del(remote_list = dst_list) | |
931 | return EX_OK | |
812 | 932 | |
813 | 933 | def cmd_sync_remote2local(args): |
814 | 934 | def _do_deletes(local_list): |
822 | 942 | s3 = S3(Config()) |
823 | 943 | |
824 | 944 | destination_base = args[-1] |
825 | local_list, single_file_local = fetch_local_list(destination_base, is_src = False, recursive = True) | |
826 | remote_list = fetch_remote_list(args[:-1], recursive = True, require_attribs = True) | |
945 | local_list, single_file_local, dst_exclude_list = fetch_local_list(destination_base, is_src = False, recursive = True) | |
946 | remote_list, src_exclude_list = fetch_remote_list(args[:-1], recursive = True, require_attribs = True) | |
827 | 947 | |
828 | 948 | local_count = len(local_list) |
829 | 949 | remote_count = len(remote_list) |
830 | 950 | orig_remote_count = remote_count |
831 | 951 | |
832 | 952 | info(u"Found %d remote files, %d local files" % (remote_count, local_count)) |
833 | ||
834 | remote_list, src_exclude_list = filter_exclude_include(remote_list) | |
835 | local_list, dst_exclude_list = filter_exclude_include(local_list) | |
836 | 953 | |
837 | 954 | remote_list, local_list, update_list, copy_pairs = compare_filelists(remote_list, local_list, src_remote = True, dst_remote = False, delay_updates = cfg.delay_updates) |
838 | 955 | |
843 | 960 | |
844 | 961 | info(u"Summary: %d remote files to download, %d local files to delete, %d local files to hardlink" % (remote_count + update_count, local_count, copy_pairs_count)) |
845 | 962 | |
846 | empty_fname_re = re.compile(r'\A\s*\Z') | |
847 | 963 | def _set_local_filename(remote_list, destination_base): |
848 | 964 | if len(remote_list) == 0: |
849 | 965 | return |
856 | 972 | if destination_base[-1] != os.path.sep: |
857 | 973 | destination_base += os.path.sep |
858 | 974 | for key in remote_list: |
859 | local_basename = key | |
860 | if empty_fname_re.match(key): | |
861 | # Objects may exist on S3 with empty names (''), which don't map so well to common filesystems. | |
862 | local_basename = '__AWS-EMPTY-OBJECT-NAME__' | |
863 | warning(u"Empty object name on S3 found, saving locally as %s" % (local_basename)) | |
864 | local_filename = destination_base + local_basename | |
975 | local_filename = destination_base + key | |
865 | 976 | if os.path.sep != "/": |
866 | 977 | local_filename = os.path.sep.join(local_filename.split("/")) |
867 | 978 | remote_list[key]['local_filename'] = deunicodise(local_filename) |
882 | 993 | output(u"download: %s -> %s" % (update_list[key]['object_uri_str'], update_list[key]['local_filename'])) |
883 | 994 | |
884 | 995 | warning(u"Exiting now because of --dry-run") |
885 | return | |
996 | return EX_OK | |
886 | 997 | |
887 | 998 | # if there are copy pairs, we can't do delete_before, on the chance |
888 | 999 | # we need one of the to-be-deleted files as a copy source. |
897 | 1008 | _do_deletes(local_list) |
898 | 1009 | |
899 | 1010 | def _download(remote_list, seq, total, total_size, dir_cache): |
1011 | original_umask = os.umask(0); | |
1012 | os.umask(original_umask); | |
900 | 1013 | file_list = remote_list.keys() |
901 | 1014 | file_list.sort() |
902 | 1015 | for file in file_list: |
904 | 1017 | item = remote_list[file] |
905 | 1018 | uri = S3Uri(item['object_uri_str']) |
906 | 1019 | dst_file = item['local_filename'] |
1020 | is_empty_directory = dst_file.endswith('/') | |
907 | 1021 | seq_label = "[%d of %d]" % (seq, total) |
908 | 1022 | try: |
909 | 1023 | dst_dir = os.path.dirname(dst_file) |
910 | 1024 | if not dir_cache.has_key(dst_dir): |
911 | 1025 | dir_cache[dst_dir] = Utils.mkdir_with_parents(dst_dir) |
912 | 1026 | if dir_cache[dst_dir] == False: |
913 | warning(u"%s: destination directory not writable: %s" % (file, dst_dir)) | |
1027 | warning(u"%s: destination directory not writable: %s" % (unicodise(file), unicodise(dst_dir))) | |
914 | 1028 | continue |
1029 | ||
915 | 1030 | try: |
916 | debug(u"dst_file=%s" % unicodise(dst_file)) | |
917 | # create temporary files (of type .s3cmd.XXXX.tmp) in the same directory | |
918 | # for downloading and then rename once downloaded | |
919 | chkptfd, chkptfname = tempfile.mkstemp(".tmp",".s3cmd.",os.path.dirname(dst_file)) | |
920 | debug(u"created chkptfname=%s" % unicodise(chkptfname)) | |
921 | dst_stream = os.fdopen(chkptfd, "wb") | |
922 | response = s3.object_get(uri, dst_stream, extra_label = seq_label) | |
923 | dst_stream.close() | |
924 | # download completed, rename the file to destination | |
925 | os.rename(chkptfname, dst_file) | |
926 | ||
1031 | if not is_empty_directory: # ignore empty directory at S3: | |
1032 | debug(u"dst_file=%s" % unicodise(dst_file)) | |
1033 | # create temporary files (of type .s3cmd.XXXX.tmp) in the same directory | |
1034 | # for downloading and then rename once downloaded | |
1035 | chkptfd, chkptfname = tempfile.mkstemp(".tmp",".s3cmd.",os.path.dirname(dst_file)) | |
1036 | debug(u"created chkptfname=%s" % unicodise(chkptfname)) | |
1037 | dst_stream = os.fdopen(chkptfd, "wb") | |
1038 | response = s3.object_get(uri, dst_stream, extra_label = seq_label) | |
1039 | dst_stream.close() | |
1040 | # download completed, rename the file to destination | |
1041 | os.rename(chkptfname, dst_file) | |
1042 | debug(u"renamed chkptfname=%s to dst_file=%s" % (unicodise(chkptfname), unicodise(dst_file))) | |
1043 | except OSError, e: | |
1044 | if e.errno == errno.EISDIR: | |
1045 | warning(u"%s is a directory - skipping over" % unicodise(dst_file)) | |
1046 | continue | |
1047 | else: | |
1048 | raise | |
1049 | except S3DownloadError, e: | |
1050 | error(u"%s: Skipping that file. This is usually a transient error, please try again later." % e) | |
1051 | os.unlink(chkptfname) | |
1052 | continue | |
1053 | except S3Error, e: | |
1054 | warning(u"Remote file %s S3Error: %s" % (e.resource, e)) | |
1055 | continue | |
1056 | ||
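The download path above relies on the classic temp-file-then-rename trick: stream into a `.s3cmd.XXXX.tmp` file created by `tempfile.mkstemp()` in the destination directory, and only `os.rename()` it into place once the transfer completes, so a reader never observes a half-downloaded file. A minimal Python 3 sketch of the same pattern (illustrative helper name, not part of s3cmd):

```python
import os
import tempfile

def atomic_write(dst_file, data):
    """Write data to a temp file in the destination directory, then
    rename it into place; rename() within one filesystem is atomic."""
    dst_dir = os.path.dirname(dst_file) or "."
    fd, tmp_name = tempfile.mkstemp(suffix=".tmp", prefix=".s3cmd.", dir=dst_dir)
    try:
        with os.fdopen(fd, "wb") as stream:
            stream.write(data)
        os.rename(tmp_name, dst_file)  # atomic replace of the destination
    except BaseException:
        os.unlink(tmp_name)  # clean up the partial file on any failure
        raise
```

Creating the temp file in the same directory (not in /tmp) is what makes the final rename a same-filesystem, atomic operation.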
1057 | try: | |
927 | 1058 | # set permissions on destination file |
928 | original_umask = os.umask(0); | |
929 | os.umask(original_umask); | |
930 | mode = 0777 - original_umask; | |
1059 | if not is_empty_directory: # a normal file | |
1060 | mode = 0777 - original_umask; | |
1061 | else: # an empty directory, make them readable/executable | |
1062 | mode = 0775 | |
931 | 1063 | debug(u"mode=%s" % oct(mode)) |
932 | 1064 | os.chmod(dst_file, mode); |
933 | debug(u"renamed chkptfname=%s to dst_file=%s" % (unicodise(chkptfname), unicodise(dst_file))) | |
1065 | except: | |
1066 | raise | |
1067 | ||
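The umask dance above (set it to 0, which returns the previous value, then immediately restore it) is the only portable way for a process to read its own umask; the resulting file mode is all permission bits minus the masked ones. A small sketch of that computation (Python 3 octal literals; the original's `0777 - umask` is equivalent because the umask is always a subset of `0777`):

```python
import os

def default_file_mode():
    # os.umask() sets a new mask and returns the old one, so setting 0
    # and restoring is how the current mask is read without changing it.
    original_umask = os.umask(0)
    os.umask(original_umask)
    return 0o777 & ~original_umask
```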
1068 | # because we don't upload empty directories, | |
1069 | # we can continue the loop here, we won't be setting stat info. | |
1070 | # if we do start to upload empty directories, we'll have to reconsider this. | |
1071 | if is_empty_directory: | |
1072 | continue | |
1073 | ||
1074 | try: | |
934 | 1075 | if response.has_key('s3cmd-attrs') and cfg.preserve_attrs: |
935 | 1076 | attrs = response['s3cmd-attrs'] |
936 | 1077 | if attrs.has_key('mode'): |
943 | 1084 | uid = int(attrs['uid']) |
944 | 1085 | gid = int(attrs['gid']) |
945 | 1086 | os.lchown(dst_file,uid,gid) |
1087 | elif response["headers"].has_key("last-modified"): | |
1088 | last_modified = time.mktime(time.strptime(response["headers"]["last-modified"], "%a, %d %b %Y %H:%M:%S GMT")) | |
1089 | os.utime(dst_file, (last_modified, last_modified)) | |
1090 | debug("set mtime to %s" % last_modified) | |
946 | 1091 | except OSError, e: |
947 | 1092 | try: |
948 | 1093 | dst_stream.close() |
949 | 1094 | os.remove(chkptfname) |
950 | 1095 | except: pass |
951 | 1096 | if e.errno == errno.EEXIST: |
952 | warning(u"%s exists - not overwriting" % (dst_file)) | |
1097 | warning(u"%s exists - not overwriting" % unicodise(dst_file)) | |
953 | 1098 | continue |
954 | 1099 | if e.errno in (errno.EPERM, errno.EACCES): |
955 | warning(u"%s not writable: %s" % (dst_file, e.strerror)) | |
956 | continue | |
957 | if e.errno == errno.EISDIR: | |
958 | warning(u"%s is a directory - skipping over" % dst_file) | |
1100 | warning(u"%s not writable: %s" % (unicodise(dst_file), e.strerror)) | |
959 | 1101 | continue |
960 | 1102 | raise e |
961 | 1103 | except KeyboardInterrupt: |
980 | 1122 | os.remove(chkptfname) |
981 | 1123 | except: pass |
982 | 1124 | except S3DownloadError, e: |
983 | error(u"%s: download failed too many times. Skipping that file." % file) | |
1125 | error(u"%s: download failed too many times. Skipping that file. This is usually a transient error, please try again later." % file) | |
984 | 1126 | continue |
985 | 1127 | speed_fmt = formatSize(response["speed"], human_readable = True, floating_point = True) |
986 | 1128 | if not Config().progress_meter: |
1018 | 1160 | |
1019 | 1161 | if cfg.delete_removed and cfg.delete_after: |
1020 | 1162 | _do_deletes(local_list) |
1163 | return EX_OK | |
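When no `s3cmd-attrs` metadata is stored on the object, the sync above falls back to the HTTP `Last-Modified` header to set the local mtime. A standalone sketch of that parsing step; note that `time.mktime()` (used above) interprets the tuple in the local timezone, whereas `calendar.timegm()` is the UTC-correct variant for a header that is always expressed in GMT:

```python
import calendar
import time

def mtime_from_last_modified(header_value):
    # RFC 1123 date, always GMT, e.g. "Thu, 01 Jan 1970 00:00:00 GMT"
    parsed = time.strptime(header_value, "%a, %d %b %Y %H:%M:%S GMT")
    # timegm() treats the struct_time as UTC, yielding a proper Unix
    # timestamp suitable for os.utime(path, (ts, ts)).
    return calendar.timegm(parsed)
```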
1021 | 1164 | |
1022 | 1165 | def local_copy(copy_pairs, destination_base): |
1023 | 1166 | # Do NOT hardlink local files by default, that'd be silly |
1064 | 1207 | if attr == 'uname': |
1065 | 1208 | try: |
1066 | 1209 | val = Utils.getpwuid_username(local_list[src]['uid']) |
1067 | except KeyError: | |
1210 | except (KeyError, TypeError): | |
1068 | 1211 | attr = "uid" |
1069 | 1212 | val = local_list[src].get('uid') |
1070 | warning(u"%s: Owner username not known. Storing UID=%d instead." % (src, val)) | |
1213 | if val: | |
1214 | warning(u"%s: Owner username not known. Storing UID=%d instead." % (src, val)) | |
1071 | 1215 | elif attr == 'gname': |
1072 | 1216 | try: |
1073 | 1217 | val = Utils.getgrgid_grpname(local_list[src].get('gid')) |
1074 | except KeyError: | |
1218 | except (KeyError, TypeError): | |
1075 | 1219 | attr = "gid" |
1076 | 1220 | val = local_list[src].get('gid') |
1077 | warning(u"%s: Owner groupname not known. Storing GID=%d instead." % (src, val)) | |
1221 | if val: | |
1222 | warning(u"%s: Owner groupname not known. Storing GID=%d instead." % (src, val)) | |
1078 | 1223 | elif attr == 'md5': |
1079 | 1224 | try: |
1080 | 1225 | val = local_list.get_md5(src) |
1097 | 1242 | |
1098 | 1243 | |
1099 | 1244 | def cmd_sync_local2remote(args): |
1100 | ||
1101 | def _do_deletes(s3, remote_list): | |
1102 | if cfg.max_delete > 0 and len(remote_list) > cfg.max_delete: | |
1103 | warning(u"delete: maximum requested number of deletes would be exceeded, none performed.") | |
1104 | return | |
1105 | for key in remote_list: | |
1106 | uri = S3Uri(remote_list[key]['object_uri_str']) | |
1107 | s3.object_delete(uri) | |
1108 | output(u"deleted: '%s'" % uri) | |
1109 | ||
1110 | 1245 | def _single_process(local_list): |
1246 | any_child_failed = False | |
1111 | 1247 | for dest in destinations: |
1112 | 1248 | ## Normalize URI to convert s3://bkt to s3://bkt/ (trailing slash) |
1113 | 1249 | destination_base_uri = S3Uri(dest) |
1114 | 1250 | if destination_base_uri.type != 's3': |
1115 | 1251 | raise ParameterError("Destination must be S3Uri. Got: %s" % destination_base_uri) |
1116 | 1252 | destination_base = str(destination_base_uri) |
1117 | _child(destination_base, local_list) | |
1118 | return destination_base_uri | |
1253 | rc = _child(destination_base, local_list) | |
1254 | if rc: | |
1255 | any_child_failed = True | |
1256 | return any_child_failed | |
1119 | 1257 | |
1120 | 1258 | def _parent(): |
1121 | 1259 | # Now that we've done all the disk I/O to look at the local file system and |
1122 | 1260 | # calculate the md5 for each file, fork for each destination to upload to them separately |
1123 | 1261 | # and in parallel |
1124 | 1262 | child_pids = [] |
1263 | any_child_failed = False | |
1125 | 1264 | |
1126 | 1265 | for dest in destinations: |
1127 | 1266 | ## Normalize URI to convert s3://bkt to s3://bkt/ (trailing slash) |
1139 | 1278 | while len(child_pids): |
1140 | 1279 | (pid, status) = os.wait() |
1141 | 1280 | child_pids.remove(pid) |
1142 | ||
1143 | return | |
1281 | if status: | |
1282 | any_child_failed = True | |
1283 | ||
1284 | return any_child_failed | |
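`_parent()` above forks one child per destination and then reaps them, treating any non-zero wait status as a failure. The skeleton of that POSIX-only pattern (hypothetical helper; it uses `os.waitpid()` per child rather than a bare `os.wait()` loop, so unrelated child processes are left alone):

```python
import os

def run_in_children(tasks):
    """Fork one child per task; return True if any child failed."""
    pids = []
    for task in tasks:
        pid = os.fork()
        if pid == 0:
            # Child: run the task, then leave via os._exit(), which
            # skips the parent's atexit/finally machinery.
            try:
                task()
                os._exit(0)
            except Exception:
                os._exit(1)
        pids.append(pid)
    any_failed = False
    for pid in pids:
        _, status = os.waitpid(pid, 0)
        if status:  # non-zero encodes an exit code or a signal
            any_failed = True
    return any_failed
```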
1144 | 1285 | |
1145 | 1286 | def _child(destination_base, local_list): |
1146 | 1287 | def _set_remote_uri(local_list, destination_base, single_file_local): |
1185 | 1326 | uploaded_objects_list.append(uri.object()) |
1186 | 1327 | return seq, total_size |
1187 | 1328 | |
1188 | remote_list = fetch_remote_list(destination_base, recursive = True, require_attribs = True) | |
1329 | remote_list, dst_exclude_list = fetch_remote_list(destination_base, recursive = True, require_attribs = True) | |
1189 | 1330 | |
1190 | 1331 | local_count = len(local_list) |
1191 | 1332 | orig_local_count = local_count |
1192 | 1333 | remote_count = len(remote_list) |
1193 | 1334 | |
1194 | 1335 | info(u"Found %d local files, %d remote files" % (local_count, remote_count)) |
1195 | ||
1196 | local_list, src_exclude_list = filter_exclude_include(local_list) | |
1197 | remote_list, dst_exclude_list = filter_exclude_include(remote_list) | |
1198 | 1336 | |
1199 | 1337 | if single_file_local and len(local_list) == 1 and len(remote_list) == 1: |
1200 | 1338 | ## Make remote_key same as local_key for comparison if we're dealing with only one file |
1209 | 1347 | update_count = len(update_list) |
1210 | 1348 | copy_count = len(copy_pairs) |
1211 | 1349 | remote_count = len(remote_list) |
1212 | ||
1213 | info(u"Summary: %d local files to upload, %d files to remote copy, %d remote files to delete" % (local_count + update_count, copy_count, remote_count)) | |
1350 | upload_count = local_count + update_count | |
1351 | ||
1352 | info(u"Summary: %d local files to upload, %d files to remote copy, %d remote files to delete" % (upload_count, copy_count, remote_count)) | |
1214 | 1353 | |
1215 | 1354 | _set_remote_uri(local_list, destination_base, single_file_local) |
1216 | 1355 | _set_remote_uri(update_list, destination_base, single_file_local) |
1230 | 1369 | output(u"delete: %s" % remote_list[key]['object_uri_str']) |
1231 | 1370 | |
1232 | 1371 | warning(u"Exiting now because of --dry-run") |
1233 | return | |
1372 | return EX_OK | |
1234 | 1373 | |
1235 | 1374 | # if there are copy pairs, we can't do delete_before, on the chance |
1236 | 1375 | # we need one of the to-be-deleted files as a copy source. |
1241 | 1380 | warning(u"delete: cowardly refusing to delete because no source files were found. Use --force to override.") |
1242 | 1381 | cfg.delete_removed = False |
1243 | 1382 | |
1244 | if cfg.delete_removed and not cfg.delete_after: | |
1245 | _do_deletes(s3, remote_list) | |
1383 | if cfg.delete_removed and not cfg.delete_after and remote_list: | |
1384 | subcmd_batch_del(remote_list = remote_list) | |
1246 | 1385 | |
1247 | 1386 | total_size = 0 |
1248 | 1387 | total_elapsed = 0.0 |
1249 | 1388 | timestamp_start = time.time() |
1250 | n, total_size = _upload(local_list, 0, local_count, total_size) | |
1251 | n, total_size = _upload(update_list, n, local_count, total_size) | |
1389 | n, total_size = _upload(local_list, 0, upload_count, total_size) | |
1390 | n, total_size = _upload(update_list, n, upload_count, total_size) | |
1252 | 1391 | n_copies, saved_bytes, failed_copy_files = remote_copy(s3, copy_pairs, destination_base) |
1253 | 1392 | |
1254 | 1393 | # upload files that could not be copied
1257 | 1396 | _set_remote_uri(failed_copy_files, destination_base, single_file_local) |
1258 | 1397 | n, total_size = _upload(failed_copy_files, n, failed_copy_count, total_size) |
1259 | 1398 | |
1260 | if cfg.delete_removed and cfg.delete_after: | |
1261 | _do_deletes(s3, remote_list) | |
1399 | if cfg.delete_removed and cfg.delete_after and remote_list: | |
1400 | subcmd_batch_del(remote_list = remote_list) | |
1262 | 1401 | total_elapsed = time.time() - timestamp_start |
1263 | 1402 | total_speed = total_elapsed and total_size/total_elapsed or 0.0 |
1264 | 1403 | speed_fmt = formatSize(total_speed, human_readable = True, floating_point = True) |
1271 | 1410 | else: |
1272 | 1411 | info(outstr) |
1273 | 1412 | |
1274 | return | |
1413 | return EX_OK | |
1275 | 1414 | |
1276 | 1415 | def _invalidate_on_cf(destination_base_uri): |
1277 | 1416 | cf = CloudFront(cfg) |
1297 | 1436 | error(u"S3cmd 'sync' doesn't yet support GPG encryption, sorry.") |
1298 | 1437 | error(u"Either use unconditional 's3cmd put --recursive'") |
1299 | 1438 | error(u"or disable encryption with --no-encrypt parameter.") |
1300 | sys.exit(1) | |
1301 | ||
1302 | local_list, single_file_local = fetch_local_list(args[:-1], is_src = True, recursive = True) | |
1439 | sys.exit(EX_USAGE) | |
1440 | ||
1441 | local_list, single_file_local, src_exclude_list = fetch_local_list(args[:-1], is_src = True, recursive = True) | |
1303 | 1442 | |
1304 | 1443 | destinations = [args[-1]] |
1305 | 1444 | if cfg.additional_destinations: |
1306 | 1445 | destinations = destinations + cfg.additional_destinations |
1307 | 1446 | |
1308 | 1447 | if 'fork' not in os.__all__ or len(destinations) < 2: |
1309 | destination_base_uri = _single_process(local_list) | |
1448 | any_child_failed = _single_process(local_list) | |
1449 | destination_base_uri = S3Uri(destinations[-1]) | |
1310 | 1450 | if cfg.invalidate_on_cf: |
1311 | 1451 | if len(uploaded_objects_list) == 0: |
1312 | 1452 | info("Nothing to invalidate in CloudFront") |
1313 | 1453 | else: |
1314 | 1454 | _invalidate_on_cf(destination_base_uri) |
1315 | 1455 | else: |
1316 | _parent() | |
1456 | any_child_failed = _parent() | |
1317 | 1457 | if cfg.invalidate_on_cf: |
1318 | 1458 | error(u"You cannot use both --cf-invalidate and --add-destination.") |
1459 | return(EX_USAGE) | |
1460 | ||
1461 | if any_child_failed: | |
1462 | return EX_SOFTWARE | |
1463 | else: | |
1464 | return EX_OK | |
1319 | 1465 | |
1320 | 1466 | def cmd_sync(args): |
1321 | 1467 | if (len(args) < 2): |
1349 | 1495 | else: |
1350 | 1496 | args.append(arg) |
1351 | 1497 | |
1352 | remote_list = fetch_remote_list(args) | |
1353 | remote_list, exclude_list = filter_exclude_include(remote_list) | |
1498 | remote_list, exclude_list = fetch_remote_list(args) | |
1354 | 1499 | |
1355 | 1500 | remote_count = len(remote_list) |
1356 | 1501 | |
1363 | 1508 | output(u"setacl: %s" % remote_list[key]['object_uri_str']) |
1364 | 1509 | |
1365 | 1510 | warning(u"Exiting now because of --dry-run") |
1366 | return | |
1511 | return EX_OK | |
1367 | 1512 | |
1368 | 1513 | seq = 0 |
1369 | 1514 | for key in remote_list: |
1371 | 1516 | seq_label = "[%d of %d]" % (seq, remote_count) |
1372 | 1517 | uri = S3Uri(remote_list[key]['object_uri_str']) |
1373 | 1518 | update_acl(s3, uri, seq_label) |
1519 | return EX_OK | |
1374 | 1520 | |
1375 | 1521 | def cmd_setpolicy(args): |
1376 | 1522 | s3 = S3(cfg) |
1378 | 1524 | policy_file = args[0] |
1379 | 1525 | policy = open(policy_file, 'r').read() |
1380 | 1526 | |
1381 | if cfg.dry_run: return | |
1527 | if cfg.dry_run: return EX_OK | |
1382 | 1528 | |
1383 | 1529 | response = s3.set_policy(uri, policy) |
1384 | 1530 | |
1386 | 1532 | debug(u"response - %s" % response['status']) |
1387 | 1533 | if response['status'] == 204: |
1388 | 1534 | output(u"%s: Policy updated" % uri) |
1535 | return EX_OK | |
1389 | 1536 | |
1390 | 1537 | def cmd_delpolicy(args): |
1391 | 1538 | s3 = S3(cfg) |
1392 | 1539 | uri = S3Uri(args[0]) |
1393 | if cfg.dry_run: return | |
1540 | if cfg.dry_run: return EX_OK | |
1394 | 1541 | |
1395 | 1542 | response = s3.delete_policy(uri) |
1396 | 1543 | |
1397 | 1544 | #if response['status'] == 200:
1398 | 1545 | debug(u"response - %s" % response['status']) |
1399 | 1546 | output(u"%s: Policy deleted" % uri) |
1400 | ||
1547 | return EX_OK | |
1548 | ||
1549 | def cmd_setlifecycle(args): | |
1550 | s3 = S3(cfg) | |
1551 | uri = S3Uri(args[1]) | |
1552 | lifecycle_policy_file = args[0] | |
1553 | lifecycle_policy = open(lifecycle_policy_file, 'r').read() | |
1554 | ||
1555 | if cfg.dry_run: return EX_OK | |
1556 | ||
1557 | response = s3.set_lifecycle_policy(uri, lifecycle_policy) | |
1558 | ||
1559 | debug(u"response - %s" % response['status']) | |
1560 | if response['status'] == 204: | |
1561 | output(u"%s: Lifecycle Policy updated" % uri) | |
1562 | return EX_OK | |
1563 | ||
1564 | def cmd_dellifecycle(args): | |
1565 | s3 = S3(cfg) | |
1566 | uri = S3Uri(args[0]) | |
1567 | if cfg.dry_run: return EX_OK | |
1568 | ||
1569 | response = s3.delete_lifecycle_policy(uri) | |
1570 | ||
1571 | debug(u"response - %s" % response['status']) | |
1572 | output(u"%s: Lifecycle Policy deleted" % uri) | |
1573 | return EX_OK | |
1401 | 1574 | |
1402 | 1575 | def cmd_multipart(args): |
1403 | 1576 | s3 = S3(cfg) |
1417 | 1590 | output("%s\t%s\t%s" % (mpupload['Initiated'], "s3://" + uri.bucket() + "/" + mpupload['Key'], mpupload['UploadId'])) |
1418 | 1591 | except KeyError: |
1419 | 1592 | pass |
1593 | return EX_OK | |
1420 | 1594 | |
1421 | 1595 | def cmd_abort_multipart(args): |
1422 | 1596 | '''{"cmd":"abortmp", "label":"abort a multipart upload", "param":"s3://BUCKET Id", "func":cmd_abort_multipart, "argc":2},''' |
1426 | 1600 | response = s3.abort_multipart(uri, id) |
1427 | 1601 | debug(u"response - %s" % response['status']) |
1428 | 1602 | output(u"%s" % uri) |
1603 | return EX_OK | |
1429 | 1604 | |
1430 | 1605 | def cmd_list_multipart(args): |
1431 | 1606 | '''{"cmd":"listmp", "label":"list a multipart upload", "param":"s3://BUCKET Id", "func":cmd_list_multipart, "argc":2},'''
1442 | 1617 | output("%s\t%s\t%s\t%s" % (mpupload['LastModified'], mpupload['PartNumber'], mpupload['ETag'], mpupload['Size'])) |
1443 | 1618 | except: |
1444 | 1619 | pass |
1620 | return EX_OK | |
1445 | 1621 | |
1446 | 1622 | def cmd_accesslog(args): |
1447 | 1623 | s3 = S3(cfg) |
1463 | 1639 | if accesslog.isLoggingEnabled(): |
1464 | 1640 | output(u" Target prefix: %s" % accesslog.targetPrefix().uri()) |
1465 | 1641 | #output(u" Public Access: %s" % accesslog.isAclPublic()) |
1642 | return EX_OK | |
1466 | 1643 | |
1467 | 1644 | def cmd_sign(args): |
1468 | 1645 | string_to_sign = args.pop() |
1469 | 1646 | debug("string-to-sign: %r" % string_to_sign) |
1470 | 1647 | signature = Utils.sign_string(string_to_sign) |
1471 | 1648 | output("Signature: %s" % signature) |
1649 | return EX_OK | |
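`Utils.sign_string()` is not shown in this hunk; for the AWS Signature Version 2 scheme s3cmd speaks here, signing amounts to a base64-encoded HMAC-SHA1 of the string under the secret key. A sketch (illustrative function; the key is passed explicitly rather than read from the config):

```python
import base64
import hashlib
import hmac

def sign_string(secret_key, string_to_sign):
    """base64(HMAC-SHA1(secret_key, string_to_sign)), AWS sig v2 style."""
    digest = hmac.new(secret_key.encode("utf-8"),
                      string_to_sign.encode("utf-8"),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode("ascii")
```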
1472 | 1650 | |
1473 | 1651 | def cmd_signurl(args): |
1474 | 1652 | expiry = args.pop() |
1478 | 1656 | debug("url to sign: %r" % url_to_sign) |
1479 | 1657 | signed_url = Utils.sign_url(url_to_sign, expiry) |
1480 | 1658 | output(signed_url) |
1659 | return EX_OK | |
1481 | 1660 | |
1482 | 1661 | def cmd_fixbucket(args): |
1483 | 1662 | def _unescape(text): |
1545 | 1724 | warning("Fixed %d files' names. Their ACLs were reset to Private." % count)
1546 | 1725 | warning("Use 's3cmd setacl --acl-public s3://...' to make") |
1547 | 1726 | warning("them publicly readable if required.") |
1727 | return EX_OK | |
1548 | 1728 | |
1549 | 1729 | def resolve_list(lst, args): |
1550 | 1730 | retval = [] |
1570 | 1750 | "input_file" : filename, |
1571 | 1751 | "output_file" : tmp_filename, |
1572 | 1752 | } |
1573 | info(u"Encrypting file %(input_file)s to %(output_file)s..." % args) | |
1753 | info(u"Encrypting file %s to %s..." % (unicodise(filename), tmp_filename)) | |
1574 | 1754 | command = resolve_list(cfg.gpg_encrypt.split(" "), args) |
1575 | 1755 | code = gpg_command(command, cfg.gpg_passphrase) |
1576 | 1756 | return (code, tmp_filename, "gpg") |
1583 | 1763 | "input_file" : filename, |
1584 | 1764 | "output_file" : tmp_filename, |
1585 | 1765 | } |
1586 | info(u"Decrypting file %(input_file)s to %(output_file)s..." % args) | |
1766 | info(u"Decrypting file %s to %s..." % (unicodise(filename), tmp_filename)) | |
1587 | 1767 | command = resolve_list(cfg.gpg_decrypt.split(" "), args) |
1588 | 1768 | code = gpg_command(command, cfg.gpg_passphrase) |
1589 | 1769 | if code == 0 and in_place: |
1590 | debug(u"Renaming %s to %s" % (tmp_filename, filename)) | |
1770 | debug(u"Renaming %s to %s" % (tmp_filename, unicodise(filename))) | |
1591 | 1771 | os.unlink(filename) |
1592 | 1772 | os.rename(tmp_filename, filename) |
1593 | 1773 | tmp_filename = filename |
1596 | 1776 | def run_configure(config_file, args): |
1597 | 1777 | cfg = Config() |
1598 | 1778 | options = [ |
1599 | ("access_key", "Access Key", "Access key and Secret key are your identifiers for Amazon S3"), | |
1779 | ("access_key", "Access Key", "Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables."), | |
1600 | 1780 | ("secret_key", "Secret Key"), |
1601 | 1781 | ("gpg_passphrase", "Encryption password", "Encryption password is used to protect your files from reading\nby unauthorized persons while in transfer to S3"), |
1602 | 1782 | ("gpg_command", "Path to GPG program"), |
1697 | 1877 | else: |
1698 | 1878 | raise Exception("Encryption verification error.") |
1699 | 1879 | |
1880 | except S3Error, e: | |
1881 | error(u"Test failed: %s" % (e)) | |
1882 | if e.code == "AccessDenied": | |
1883 | error(u"Are you sure your keys have ListAllMyBuckets permissions?") | |
1884 | val = raw_input("\nRetry configuration? [Y/n] ") | |
1885 | if val.lower().startswith("y") or val == "": | |
1886 | continue | |
1700 | 1887 | except Exception, e: |
1701 | 1888 | error(u"Test failed: %s" % (e)) |
1702 | if e.find('403') != -1: | |
1703 | error(u"Are you sure your keys have ListAllMyBuckets permissions?") | |
1704 | 1889 | val = raw_input("\nRetry configuration? [Y/n] ") |
1705 | 1890 | if val.lower().startswith("y") or val == "": |
1706 | 1891 | continue |
1732 | 1917 | |
1733 | 1918 | except IOError, e: |
1734 | 1919 | error(u"Writing config file failed: %s: %s" % (config_file, e.strerror)) |
1735 | sys.exit(1) | |
1920 | sys.exit(EX_IOERR) | |
1736 | 1921 | |
1737 | 1922 | def process_patterns_from_file(fname, patterns_list): |
1738 | 1923 | try: |
1739 | 1924 | fn = open(fname, "rt") |
1740 | 1925 | except IOError, e: |
1741 | 1926 | error(e) |
1742 | sys.exit(1) | |
1927 | sys.exit(EX_IOERR) | |
1743 | 1928 | for pattern in fn: |
1744 | 1929 | pattern = pattern.strip() |
1745 | 1930 | if re.match("^#", pattern) or re.match("^\s*$", pattern): |
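`process_patterns_from_file()` skips comment and blank lines before collecting the remaining patterns. A self-contained sketch of that filter (the collecting step is truncated above, so the append is assumed):

```python
import re

def read_patterns(lines):
    # Keep only real patterns: drop '#' comment lines and blank lines,
    # matching the filter in process_patterns_from_file() above.
    patterns = []
    for pattern in lines:
        pattern = pattern.strip()
        if re.match("^#", pattern) or re.match(r"^\s*$", pattern):
            continue
        patterns.append(pattern)
    return patterns
```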
1788 | 1973 | {"cmd":"put", "label":"Put file into bucket", "param":"FILE [FILE...] s3://BUCKET[/PREFIX]", "func":cmd_object_put, "argc":2}, |
1789 | 1974 | {"cmd":"get", "label":"Get file from bucket", "param":"s3://BUCKET/OBJECT LOCAL_FILE", "func":cmd_object_get, "argc":1}, |
1790 | 1975 | {"cmd":"del", "label":"Delete file from bucket", "param":"s3://BUCKET/OBJECT", "func":cmd_object_del, "argc":1}, |
1976 | {"cmd":"rm", "label":"Delete file from bucket (alias for del)", "param":"s3://BUCKET/OBJECT", "func":cmd_object_del, "argc":1}, | |
1791 | 1977 | #{"cmd":"mkdir", "label":"Make a virtual S3 directory", "param":"s3://BUCKET/path/to/dir", "func":cmd_mkdir, "argc":1}, |
1792 | 1978 | {"cmd":"restore", "label":"Restore file from Glacier storage", "param":"s3://BUCKET/OBJECT", "func":cmd_object_restore, "argc":1}, |
1793 | 1979 | {"cmd":"sync", "label":"Synchronize a directory tree to S3", "param":"LOCAL_DIR s3://BUCKET[/PREFIX] or s3://BUCKET[/PREFIX] LOCAL_DIR", "func":cmd_sync, "argc":2}, |
1794 | 1980 | {"cmd":"du", "label":"Disk usage by buckets", "param":"[s3://BUCKET[/PREFIX]]", "func":cmd_du, "argc":0}, |
1795 | 1981 | {"cmd":"info", "label":"Get various information about Buckets or Files", "param":"s3://BUCKET[/OBJECT]", "func":cmd_info, "argc":1}, |
1796 | 1982 | {"cmd":"cp", "label":"Copy object", "param":"s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]", "func":cmd_cp, "argc":2}, |
1983 | {"cmd":"modify", "label":"Modify object metadata", "param":"s3://BUCKET1/OBJECT", "func":cmd_modify, "argc":1}, | |
1797 | 1984 | {"cmd":"mv", "label":"Move object", "param":"s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]", "func":cmd_mv, "argc":2}, |
1798 | 1985 | {"cmd":"setacl", "label":"Modify Access control list for Bucket or Files", "param":"s3://BUCKET[/OBJECT]", "func":cmd_setacl, "argc":1}, |
1799 | 1986 | |
1814 | 2001 | {"cmd":"ws-create", "label":"Create Website from bucket", "param":"s3://BUCKET", "func":cmd_website_create, "argc":1}, |
1815 | 2002 | {"cmd":"ws-delete", "label":"Delete Website", "param":"s3://BUCKET", "func":cmd_website_delete, "argc":1}, |
1816 | 2003 | {"cmd":"ws-info", "label":"Info about Website", "param":"s3://BUCKET", "func":cmd_website_info, "argc":1}, |
2004 | ||
2005 | ## Lifecycle commands | |
2006 | {"cmd":"expire", "label":"Set or delete expiration rule for the bucket", "param":"s3://BUCKET", "func":cmd_expiration_set, "argc":1}, | |
2007 | {"cmd":"setlifecycle", "label":"Upload a lifecycle policy for the bucket", "param":"s3://BUCKET", "func":cmd_setlifecycle, "argc":1}, | |
2008 | {"cmd":"dellifecycle", "label":"Remove a lifecycle policy for the bucket", "param":"s3://BUCKET", "func":cmd_dellifecycle, "argc":1}, | |
1817 | 2009 | |
1818 | 2010 | ## CloudFront commands |
1819 | 2011 | {"cmd":"cflist", "label":"List CloudFront distribution points", "param":"", "func":CfCmd.info, "argc":0}, |
1917 | 2109 | if cmd.has_key("cmd"): |
1918 | 2110 | commands[cmd["cmd"]] = cmd |
1919 | 2111 | |
1920 | default_verbosity = Config().verbosity | |
1921 | 2112 | optparser = OptionParser(option_class=OptionAll, formatter=MyHelpFormatter()) |
1922 | 2113 | #optparser.disable_interspersed_args() |
1923 | 2114 | |
1924 | 2115 | config_file = None |
1925 | if os.getenv("HOME"): | |
1926 | config_file = os.path.join(os.getenv("HOME"), ".s3cfg") | |
2116 | if os.getenv("S3CMD_CONFIG"): | |
2117 | config_file = os.getenv("S3CMD_CONFIG") | |
1927 | 2118 | elif os.name == "nt" and os.getenv("USERPROFILE"): |
1928 | config_file = os.path.join(os.getenv("USERPROFILE").decode('mbcs'), "Application Data", "s3cmd.ini") | |
2119 | config_file = os.path.join(os.getenv("USERPROFILE").decode('mbcs'), (os.getenv("APPDATA") or 'Application Data').decode('mbcs'), "s3cmd.ini") | 
2120 | else: | |
2121 | from os.path import expanduser | |
2122 | config_file = os.path.join(expanduser("~"), ".s3cfg") | |
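The hunk above changes the config-file lookup to prefer `$S3CMD_CONFIG`, then the Windows per-user location, then `~/.s3cfg`. The order can be sketched as a pure function (the `environ` and `is_windows` parameters are added here for testability and are not in the original):

```python
import os
from os.path import expanduser, join

def default_config_file(environ, is_windows=False):
    # Lookup order from the hunk above: explicit env var first,
    # then the Windows per-user location, then ~/.s3cfg.
    if environ.get("S3CMD_CONFIG"):
        return environ["S3CMD_CONFIG"]
    if is_windows and environ.get("USERPROFILE"):
        return join(environ["USERPROFILE"],
                    environ.get("APPDATA") or "Application Data",
                    "s3cmd.ini")
    return join(expanduser("~"), ".s3cfg")
```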
1929 | 2123 | |
1930 | 2124 | preferred_encoding = locale.getpreferredencoding() or "UTF-8" |
1931 | 2125 | |
1932 | 2126 | optparser.set_defaults(encoding = preferred_encoding) |
1933 | 2127 | optparser.set_defaults(config = config_file) |
1934 | optparser.set_defaults(verbosity = default_verbosity) | |
1935 | 2128 | |
1936 | 2129 | optparser.add_option( "--configure", dest="run_configure", action="store_true", help="Invoke interactive (re)configuration tool. Optionally use as '--configure s3://some-bucket' to test access to a specific bucket instead of attempting to list them all.") |
1937 | 2130 | optparser.add_option("-c", "--config", dest="config", metavar="FILE", help="Config file name. Defaults to %default") |
1990 | 2183 | optparser.add_option( "--no-mime-magic", dest="use_mime_magic", action="store_false", help="Don't use mime magic when guessing MIME-type.") |
1991 | 2184 | optparser.add_option("-m", "--mime-type", dest="mime_type", type="mimetype", metavar="MIME/TYPE", help="Force MIME-type. Override both --default-mime-type and --guess-mime-type.") |
1992 | 2185 | |
1993 | optparser.add_option( "--add-header", dest="add_header", action="append", metavar="NAME:VALUE", help="Add a given HTTP header to the upload request. Can be used multiple times. For instance set 'Expires' or 'Cache-Control' headers (or both) using this options if you like.") | |
2186 | optparser.add_option( "--add-header", dest="add_header", action="append", metavar="NAME:VALUE", help="Add a given HTTP header to the upload request. Can be used multiple times. For instance set 'Expires' or 'Cache-Control' headers (or both) using this option.") | |
1994 | 2187 | |
1995 | 2188 | optparser.add_option( "--server-side-encryption", dest="server_side_encryption", action="store_true", help="Specifies that server-side encryption will be used when putting objects.") |
1996 | 2189 | |
1997 | 2190 | optparser.add_option( "--encoding", dest="encoding", metavar="ENCODING", help="Override autodetected terminal and filesystem encoding (character set). Autodetected: %s" % preferred_encoding) |
1998 | optparser.add_option( "--disable-content-encoding", dest="add_content_encoding", action="store_false", help="Don't include a Content-encoding header to the the uploaded objects") | |
1999 | 2191 | optparser.add_option( "--add-encoding-exts", dest="add_encoding_exts", metavar="EXTENSIONs", help="Add encoding to these comma-delimited extensions, e.g. (css,js,html), when uploading to S3")
2000 | 2192 | optparser.add_option( "--verbatim", dest="urlencoding_mode", action="store_const", const="verbatim", help="Use the S3 name as given on the command line. No pre-processing, encoding, etc. Use with caution!") |
2001 | 2193 | |
2002 | 2194 | optparser.add_option( "--disable-multipart", dest="enable_multipart", action="store_false", help="Disable multipart upload on files bigger than --multipart-chunk-size-mb") |
2003 | optparser.add_option( "--multipart-chunk-size-mb", dest="multipart_chunk_size_mb", type="int", action="store", metavar="SIZE", help="Size of each chunk of a multipart upload. Files bigger than SIZE are automatically uploaded as multithreaded-multipart, smaller files are uploaded using the traditional method. SIZE is in Mega-Bytes, default chunk size is %defaultMB, minimum allowed chunk size is 5MB, maximum is 5GB.") | |
2195 | optparser.add_option( "--multipart-chunk-size-mb", dest="multipart_chunk_size_mb", type="int", action="store", metavar="SIZE", help="Size of each chunk of a multipart upload. Files bigger than SIZE are automatically uploaded as multithreaded-multipart, smaller files are uploaded using the traditional method. SIZE is in Mega-Bytes, default chunk size is 15MB, minimum allowed chunk size is 5MB, maximum is 5GB.") | |
2004 | 2196 | |
2005 | 2197 | optparser.add_option( "--list-md5", dest="list_md5", action="store_true", help="Include MD5 sums in bucket listings (only for 'ls' command).") |
2006 | 2198 | optparser.add_option("-H", "--human-readable-sizes", dest="human_readable_sizes", action="store_true", help="Print sizes in human readable form (e.g. 1kB instead of 1234).")
2007 | 2199 | |
2008 | 2200 | optparser.add_option( "--ws-index", dest="website_index", action="store", help="Name of index-document (only for [ws-create] command)") |
2009 | 2201 | optparser.add_option( "--ws-error", dest="website_error", action="store", help="Name of error-document (only for [ws-create] command)") |
2202 | ||
2203 | optparser.add_option( "--expiry-date", dest="expiry_date", action="store", help="Indicates when the expiration rule takes effect. (only for [expire] command)") | |
2204 | optparser.add_option( "--expiry-days", dest="expiry_days", action="store", help="Indicates the number of days after object creation the expiration rule takes effect. (only for [expire] command)") | |
2205 | optparser.add_option( "--expiry-prefix", dest="expiry_prefix", action="store", help="Identifies one or more objects by the prefix to which the expiration rule applies. (only for [expire] command)") | 
2010 | 2206 | |
2011 | 2207 | optparser.add_option( "--progress", dest="progress_meter", action="store_true", help="Display progress meter (default on TTY).") |
2012 | 2208 | optparser.add_option( "--no-progress", dest="progress_meter", action="store_false", help="Don't display progress meter (default on non-TTY).") |
2034 | 2230 | '"buckets" and uploading, downloading and removing '+ |
2035 | 2231 | '"objects" from these buckets.') |
2036 | 2232 | optparser.epilog = format_commands(optparser.get_prog_name(), commands_list) |
2037 | optparser.epilog += ("\nFor more information see the project homepage:\n%s\n" % PkgInfo.url) | |
2233 | optparser.epilog += ("\nFor more information, updates and news, visit the s3cmd website:\n%s\n" % PkgInfo.url) | |
2038 | 2234 | optparser.epilog += ("\nConsider a donation if you have found s3cmd useful:\n%s/donate\n" % PkgInfo.url) |
2039 | 2235 | |
2040 | 2236 | (options, args) = optparser.parse_args() |
2041 | 2237 | |
2042 | 2238 | ## Some mucking with logging levels to enable |
2043 | 2239 | ## debugging/verbose output for config file parser on request |
2044 | logging.basicConfig(level=options.verbosity, | |
2240 | logging.basicConfig(level=options.verbosity or Config().verbosity, | |
2045 | 2241 | format='%(levelname)s: %(message)s', |
2046 | 2242 | stream = sys.stderr) |
2047 | 2243 | |
2048 | 2244 | if options.show_version: |
2049 | 2245 | output(u"s3cmd version %s" % PkgInfo.version) |
2050 | sys.exit(0) | |
2246 | sys.exit(EX_OK) | |
2051 | 2247 | |
2052 | 2248 | if options.quiet: |
2053 | 2249 | try: |
2060 | 2256 | ## Now finally parse the config file |
2061 | 2257 | if not options.config: |
2062 | 2258 | error(u"Can't find a config file. Please use --config option.") |
2063 | sys.exit(1) | |
2259 | sys.exit(EX_CONFIG) | |
2064 | 2260 | |
2065 | 2261 | try: |
2066 | cfg = Config(options.config) | |
2262 | cfg = Config(options.config, options.access_key, options.secret_key) | |
2067 | 2263 | except IOError, e: |
2068 | 2264 | if options.run_configure: |
2069 | 2265 | cfg = Config() |
2071 | 2267 | error(u"%s: %s" % (options.config, e.strerror)) |
2072 | 2268 | error(u"Configuration file not available.") |
2073 | 2269 | error(u"Consider using --configure parameter to create one.") |
2074 | sys.exit(1) | |
2075 | ||
2076 | ## And again some logging level adjustments | |
2077 | ## according to configfile and command line parameters | |
2078 | if options.verbosity != default_verbosity: | |
2270 | sys.exit(EX_CONFIG) | |
2271 | ||
2272 | # allow commandline verbosity config to override config file | |
2273 | if options.verbosity is not None: | |
2079 | 2274 | cfg.verbosity = options.verbosity |
2080 | 2275 | logging.root.setLevel(cfg.verbosity) |
2081 | 2276 | |
2208 | 2403 | if cfg.encrypt and cfg.gpg_passphrase == "": |
2209 | 2404 | error(u"Encryption requested but no passphrase set in config file.") |
2210 | 2405 | error(u"Please re-run 's3cmd --configure' and supply it.") |
2211 | sys.exit(1) | |
2406 | sys.exit(EX_CONFIG) | |
2212 | 2407 | |
2213 | 2408 | if options.dump_config: |
2214 | 2409 | cfg.dump_config(sys.stdout) |
2215 | sys.exit(0) | |
2410 | sys.exit(EX_OK) | |
2216 | 2411 | |
2217 | 2412 | if options.run_configure: |
2218 | 2413 | # 'args' may contain the test-bucket URI |
2219 | 2414 | run_configure(options.config, args) |
2220 | sys.exit(0) | |
2415 | sys.exit(EX_OK) | |
2221 | 2416 | |
2222 | 2417 | if len(args) < 1: |
2223 | error(u"Missing command. Please run with --help for more information.") | |
2224 | sys.exit(1) | |
2418 | optparser.print_help() | |
2419 | sys.exit(EX_USAGE) | |
2225 | 2420 | |
2226 | 2421 | ## Unicodise all remaining arguments: |
2227 | 2422 | args = [unicodise(arg) for arg in args] |
2235 | 2430 | cmd_func = commands[command]["func"] |
2236 | 2431 | except KeyError, e: |
2237 | 2432 | error(u"Invalid command: %s" % e) |
2238 | sys.exit(1) | |
2433 | sys.exit(EX_USAGE) | |
2239 | 2434 | |
2240 | 2435 | if len(args) < commands[command]["argc"]: |
2241 | 2436 | error(u"Not enough parameters for command '%s'" % command) |
2242 | sys.exit(1) | |
2437 | sys.exit(EX_USAGE) | |
2243 | 2438 | |
2244 | 2439 | try: |
2245 | cmd_func(args) | |
2440 | rc = cmd_func(args) | |
2441 | if rc is None: # if we missed any cmd_*() returns | |
2442 | rc = EX_GENERAL | |
2443 | return rc | |
2246 | 2444 | except S3Error, e: |
2247 | 2445 | error(u"S3 error: %s" % e) |
2248 | sys.exit(1) | |
2446 | sys.exit(EX_SOFTWARE) | |
2249 | 2447 | |
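With this patch, `main()` propagates a return code from each `cmd_*()` function and defaults to `EX_GENERAL` for any command that has not yet been converted to return one. The dispatch-and-default pattern, sketched with assumed sysexits-style values:

```python
EX_OK, EX_GENERAL, EX_USAGE = 0, 1, 64  # values assumed (sysexits-style)

def dispatch(commands, name, args):
    # Look up the command, validate the argument count, run it, and
    # default a missing return code to EX_GENERAL, as the hunk above
    # does for not-yet-converted cmd_*() functions.
    cmd = commands.get(name)
    if cmd is None or len(args) < cmd["argc"]:
        return EX_USAGE
    rc = cmd["func"](args)
    return EX_GENERAL if rc is None else rc
```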
2250 | 2448 | def report_exception(e, msg=''): |
2251 | sys.stderr.write(""" | |
2449 | sys.stderr.write(u""" | |
2252 | 2450 | !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! |
2253 | 2451 | An unexpected error has occurred. |
2254 | 2452 | Please try reproducing the error using |
2263 | 2461 | !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! |
2264 | 2462 | |
2265 | 2463 | """ % msg) |
2266 | s = ' '.join(sys.argv) | |
2267 | sys.stderr.write("""Invoked as: %s""" % s) | |
2464 | s = u' '.join([unicodise(a) for a in sys.argv]) | |
2465 | sys.stderr.write(u"Invoked as: %s\n" % s) | |
2268 | 2466 | |
2269 | 2467 | tb = traceback.format_exc(sys.exc_info()) |
2270 | 2468 | e_class = str(e.__class__) |
2271 | 2469 | e_class = e_class[e_class.rfind(".")+1 : -2] |
2272 | 2470 | sys.stderr.write(u"Problem: %s: %s\n" % (e_class, e)) |
2273 | 2471 | try: |
2274 | sys.stderr.write("S3cmd: %s\n" % PkgInfo.version) | |
2472 | sys.stderr.write(u"S3cmd: %s\n" % PkgInfo.version) | |
2275 | 2473 | except NameError: |
2276 | sys.stderr.write("S3cmd: unknown version. Module import problem?\n") | |
2277 | sys.stderr.write("python: %s\n" % sys.version) | |
2278 | sys.stderr.write("environment LANG=%s\n" % os.getenv("LANG")) | |
2279 | sys.stderr.write("\n") | |
2474 | sys.stderr.write(u"S3cmd: unknown version. Module import problem?\n") | |
2475 | sys.stderr.write(u"python: %s\n" % sys.version) | |
2476 | sys.stderr.write(u"environment LANG=%s\n" % os.getenv("LANG")) | |
2477 | sys.stderr.write(u"\n") | |
2280 | 2478 | sys.stderr.write(unicode(tb, errors="replace")) |
2281 | 2479 | |
2282 | 2480 | if type(e) == ImportError: |
2305 | 2503 | ## Our modules |
2306 | 2504 | ## Keep them in try/except block to |
2307 | 2505 | ## detect any syntax errors in there |
2506 | from S3.ExitCodes import * | |
2308 | 2507 | from S3.Exceptions import * |
2309 | 2508 | from S3 import PkgInfo |
2310 | 2509 | from S3.S3 import S3 |
2320 | 2519 | from S3.FileLists import * |
2321 | 2520 | from S3.MultiPart import MultiPartUpload |
2322 | 2521 | |
2323 | main() | |
2324 | sys.exit(0) | |
2522 | rc = main() | |
2523 | sys.exit(rc) | |
2325 | 2524 | |
2326 | 2525 | except ImportError, e: |
2327 | 2526 | report_exception(e) |
2328 | sys.exit(1) | |
2329 | ||
2330 | except ParameterError, e: | |
2527 | sys.exit(EX_GENERAL) | |
2528 | ||
2529 | except (ParameterError, InvalidFileError), e: | |
2331 | 2530 | error(u"Parameter problem: %s" % e) |
2332 | sys.exit(1) | |
2531 | sys.exit(EX_USAGE) | |
2532 | ||
2533 | except (S3DownloadError, S3UploadError, S3RequestError), e: | |
2534 | error(u"S3 Temporary Error: %s. Please try again later." % e) | |
2535 | sys.exit(EX_TEMPFAIL) | |
2536 | ||
2537 | except (S3Error, S3Exception, S3ResponseError, CloudFrontError), e: | |
2538 | report_exception(e) | |
2539 | sys.exit(EX_SOFTWARE) | |
2333 | 2540 | |
2334 | 2541 | except SystemExit, e: |
2335 | 2542 | sys.exit(e.code) |
2336 | 2543 | |
2337 | 2544 | except KeyboardInterrupt: |
2338 | 2545 | sys.stderr.write("See ya!\n") |
2339 | sys.exit(1) | |
2546 | sys.exit(EX_BREAK) | |
2547 | ||
2548 | except IOError, e: | |
2549 | error(e) | |
2550 | sys.exit(EX_IOERR) | |
2551 | ||
2552 | except OSError, e: | |
2553 | error(e) | |
2554 | sys.exit(EX_OSERR) | |
2340 | 2555 | |
2341 | 2556 | except MemoryError: |
2342 | 2557 | msg = """ |
2347 | 2562 | 2) use a 64-bit python on a 64-bit OS with >8GB RAM |
2348 | 2563 | """ |
2349 | 2564 | sys.stderr.write(msg) |
2350 | sys.exit(1) | |
2565 | sys.exit(EX_OSERR) | |
2351 | 2566 | |
2352 | 2567 | except UnicodeEncodeError, e: |
2353 | 2568 | lang = os.getenv("LANG") |
2358 | 2573 | invoking s3cmd. |
2359 | 2574 | """ % lang |
2360 | 2575 | report_exception(e, msg) |
2361 | sys.exit(1) | |
2576 | sys.exit(EX_GENERAL) | |
2362 | 2577 | |
2363 | 2578 | except Exception, e: |
2364 | 2579 | report_exception(e) |
2365 | sys.exit(1) | |
2580 | sys.exit(EX_GENERAL) | |
2366 | 2581 | |
2367 | 2582 | # vim:et:ts=4:sts=4:ai |
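The top-level handlers above map each exception family to a distinct sysexits-style code instead of a blanket `sys.exit(1)`. A condensed sketch of that mapping (the EX_* values are assumed to follow BSD sysexits.h; s3cmd's own S3/ExitCodes may differ):

```python
EX_GENERAL, EX_OSERR, EX_IOERR = 1, 71, 74  # assumed values

def exit_code_for(exc):
    # Mirror the except-ladder above: most specific families first,
    # everything else falls through to EX_GENERAL.
    if isinstance(exc, MemoryError):
        return EX_OSERR
    if isinstance(exc, IOError):
        return EX_IOERR
    if isinstance(exc, OSError):
        return EX_OSERR
    return EX_GENERAL
```

Ordering matters: `MemoryError` is checked before the broader OS-error families, just as the original ladder places specific excepts before `Exception`.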
42 | 42 | s3cmd \fBdel\fR \fIs3://BUCKET/OBJECT\fR |
43 | 43 | Delete file from bucket |
44 | 44 | .TP |
45 | s3cmd \fBrm\fR \fIs3://BUCKET/OBJECT\fR | |
46 | Delete file from bucket (alias for del) | |
47 | .TP | |
48 | s3cmd \fBrestore\fR \fIs3://BUCKET/OBJECT\fR | |
49 | Restore file from Glacier storage | |
50 | .TP | |
45 | 51 | s3cmd \fBsync\fR \fILOCAL_DIR s3://BUCKET[/PREFIX] or s3://BUCKET[/PREFIX] LOCAL_DIR\fR |
46 | 52 | Synchronize a directory tree to S3 |
47 | 53 | .TP |
54 | 60 | s3cmd \fBcp\fR \fIs3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]\fR |
55 | 61 | Copy object |
56 | 62 | .TP |
63 | s3cmd \fBmodify\fR \fIs3://BUCKET1/OBJECT\fR | |
64 | Modify object metadata | |
65 | .TP | |
57 | 66 | s3cmd \fBmv\fR \fIs3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]\fR |
58 | 67 | Move object |
59 | 68 | .TP |
86 | 95 | .TP |
87 | 96 | s3cmd \fBfixbucket\fR \fIs3://BUCKET[/PREFIX]\fR |
88 | 97 | Fix invalid file names in a bucket |
98 | .TP | |
99 | s3cmd \fBexpire\fR \fIs3://BUCKET\fR | |
100 | Set or delete expiration rule for the bucket | |
101 | .TP | |
102 | s3cmd \fBsetlifecycle\fR \fIs3://BUCKET\fR | |
103 | Upload a lifecycle policy for the bucket | |
104 | .TP | |
105 | s3cmd \fBdellifecycle\fR \fIs3://BUCKET\fR | |
106 | Remove a lifecycle policy for the bucket | |
89 | 107 | |
90 | 108 | |
91 | 109 | .PP |
221 | 239 | Permission is one of: read, write, read_acp, wr |
222 | 240 | ite_acp, full_control, all |
223 | 241 | .TP |
242 | \fB\-D\fR NUM, \fB\-\-restore\-days\fR=NUM | |
243 | Number of days to keep restored file available (only | |
244 | for 'restore' command). | |
245 | .TP | |
224 | 246 | \fB\-\-delete\-removed\fR |
225 | 247 | Delete remote objects with no corresponding local file |
226 | 248 | [sync] |
328 | 350 | \fB\-\-add\-header\fR=NAME:VALUE |
329 | 351 | Add a given HTTP header to the upload request. Can be |
330 | 352 | used multiple times. For instance set 'Expires' or |
331 | 'Cache-Control' headers (or both) using this options | |
332 | if you like. | |
353 | 'Cache-Control' headers (or both) using this option. | |
333 | 354 | .TP |
334 | 355 | \fB\-\-server\-side\-encryption\fR |
335 | 356 | Specifies that server-side encryption will be used |
338 | 359 | \fB\-\-encoding\fR=ENCODING |
339 | 360 | Override autodetected terminal and filesystem encoding |
340 | 361 | (character set). Autodetected: UTF-8 |
341 | .TP | |
342 | \fB\-\-disable\-content\-encoding\fR | |
343 | Don't include a Content-encoding header to the the | |
344 | uploaded objects | |
345 | 362 | .TP |
346 | 363 | \fB\-\-add\-encoding\-exts\fR=EXTENSIONs |
347 | 364 | Add encoding to these comma-delimited extensions, e.g.
360 | 377 | than SIZE are automatically uploaded as multithreaded- |
361 | 378 | multipart, smaller files are uploaded using the |
362 | 379 | traditional method. SIZE is in Mega-Bytes, default |
363 | chunk size is noneMB, minimum allowed chunk size is | |
364 | 5MB, maximum is 5GB. | |
380 | chunk size is 15MB, minimum allowed chunk size is 5MB, | |
381 | maximum is 5GB. | |
365 | 382 | .TP |
366 | 383 | \fB\-\-list\-md5\fR |
367 | 384 | Include MD5 sums in bucket listings (only for 'ls' |
376 | 393 | .TP |
377 | 394 | \fB\-\-ws\-error\fR=WEBSITE_ERROR |
378 | 395 | Name of error-document (only for [ws-create] command) |
396 | .TP | |
397 | \fB\-\-expiry\-date\fR=EXPIRY_DATE | |
398 | Indicates when the expiration rule takes effect. (only | |
399 | for [expire] command) | |
400 | .TP | |
401 | \fB\-\-expiry\-days\fR=EXPIRY_DAYS | |
402 | Indicates the number of days after object creation the | |
403 | expiration rule takes effect. (only for [expire] | |
404 | command) | |
405 | .TP | |
406 | \fB\-\-expiry\-prefix\fR=EXPIRY_PREFIX | |
407 | Identifies one or more objects by the prefix to | 
408 | which the expiration rule applies. (only for [expire] | 
409 | command) | 
379 | 410 | .TP |
380 | 411 | \fB\-\-progress\fR |
381 | 412 | Display progress meter (default on TTY). |
429 | 460 | Enable debug output. |
430 | 461 | .TP |
431 | 462 | \fB\-\-version\fR |
432 | Show s3cmd version (1.5.0-beta1) and exit. | |
463 | Show s3cmd version (1.5.0-rc1) and exit. | |
433 | 464 | .TP |
434 | 465 | \fB\-F\fR, \fB\-\-follow\-symlinks\fR |
435 | 466 | Follow symbolic links as if they are regular files |
516 | 547 | For example, to exclude all files with the ".jpg" extension except those beginning with a number, use:
517 | 548 | .PP |
518 | 549 | \-\-exclude '*.jpg' \-\-rinclude '[0-9].*\.jpg' |
550 | .PP | |
551 | To exclude all files except those with the ".jpg" extension, use: | 
552 | .PP | |
553 | \-\-exclude '*' \-\-include '*.jpg' | |
554 | .PP | |
555 | To exclude the local directory 'somedir', be sure to use a trailing forward slash, as follows: | 
556 | .PP | |
557 | \-\-exclude 'somedir/' | |
558 | .PP | |
559 | ||
519 | 560 | .SH SEE ALSO |
520 | For the most up to date list of options run | |
561 | For the most up to date list of options run: | |
521 | 562 | .B s3cmd \-\-help |
522 | 563 | .br |
523 | For more info about usage, examples and other related info visit project homepage at | |
524 | .br | |
564 | For more info about usage, examples and other related info visit project homepage at: | |
525 | 565 | .B http://s3tools.org |
526 | 566 | .SH DONATIONS |
527 | 567 | Please consider a donation if you have found s3cmd useful: |
528 | 568 | .br |
529 | 569 | .B http://s3tools.org/donate |
530 | 570 | .SH AUTHOR |
531 | Written by Michal Ludvig <mludvig@logix.net.nz> and 15+ contributors | |
571 | Written by Michal Ludvig and contributors | |
532 | 572 | .SH CONTACT, SUPPORT |
533 | 573 | Preferred way to get support is our mailing list: |
574 | .br | |
534 | 575 | .I s3tools\-general@lists.sourceforge.net |
576 | .br | |
577 | or visit the project homepage: | |
578 | .br | |
579 | .B http://s3tools.org | |
535 | 580 | .SH REPORTING BUGS |
536 | 581 | Report bugs to |
537 | 582 | .I s3tools\-bugs@lists.sourceforge.net |
538 | 583 | .SH COPYRIGHT |
539 | Copyright \(co 2007,2008,2009,2010,2011,2012 Michal Ludvig <http://www.logix.cz/michal> | |
540 | .br | |
541 | This is free software. You may redistribute copies of it under the terms of | |
542 | the GNU General Public License version 2 <http://www.gnu.org/licenses/gpl.html>. | |
543 | There is NO WARRANTY, to the extent permitted by law. | |
584 | Copyright \(co 2007-2014 TGRMN Software - http://www.tgrmn.com - and contributors | |
585 | .br | |
586 | .SH LICENSE | |
587 | This program is free software; you can redistribute it and/or modify | |
588 | it under the terms of the GNU General Public License as published by | |
589 | the Free Software Foundation; either version 2 of the License, or | |
590 | (at your option) any later version. | |
591 | This program is distributed in the hope that it will be useful, | |
592 | but WITHOUT ANY WARRANTY; without even the implied warranty of | |
593 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | |
594 | GNU General Public License for more details. | |
595 | .br |
0 | %{!?python_sitelib: %define python_sitelib %(%{__python} -c "from distutils.sysconfig import get_python_lib; print get_python_lib()")} | |
1 | ||
2 | %global commit ##COMMIT## | |
3 | %global shortcommit ##SHORTCOMMIT## | |
4 | ||
5 | Name: s3cmd | |
6 | Version: ##VERSION## | |
7 | Release: 0.3.git%{shortcommit}%{?dist} | |
8 | Summary: Tool for accessing Amazon Simple Storage Service | |
9 | ||
10 | Group: Applications/Internet | |
11 | License: GPLv2 | |
12 | URL: http://s3tools.logix.cz/s3cmd | |
13 | # git clone git@github.com:mdomsch/s3cmd.git | |
14 | # git checkout -b origin/merge | |
15 | #git archive --format tar --prefix s3cmd-1.1.0-beta3-2dfe4a65/ HEAD | gzip -c > s3cmd-1.1.0-beta1-2dfe4a65.tar.gz | |
16 | ||
17 | Source0: https://github.com/s3tools/s3cmd/archive/%{commit}/%{name}-%{version}-%{shortcommit}.tar.gz | |
18 | BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n) | |
19 | BuildArch: noarch | |
20 | ||
21 | %if %{!?fedora:16}%{?fedora} < 16 || %{!?rhel:7}%{?rhel} < 7 | |
22 | BuildRequires: python-devel | |
23 | %else | |
24 | BuildRequires: python2-devel | |
25 | %endif | |
26 | %if %{!?fedora:8}%{?fedora} < 8 || %{!?rhel:6}%{?rhel} < 6 | |
27 | # This is in standard library since 2.5 | |
28 | Requires: python-elementtree | |
29 | %endif | |
30 | ||
31 | %description | |
32 | S3cmd lets you copy files from/to Amazon S3 | |
33 | (Simple Storage Service) using a simple-to-use | 
34 | command-line client. | 
35 | ||
36 | ||
37 | %prep | |
38 | %setup -q -n s3cmd-%{commit} | |
39 | ||
40 | %build | |
41 | ||
42 | ||
43 | %install | |
44 | rm -rf $RPM_BUILD_ROOT | |
45 | S3CMD_PACKAGING=Yes python setup.py install --prefix=%{_prefix} --root=$RPM_BUILD_ROOT | |
46 | install -d $RPM_BUILD_ROOT%{_mandir}/man1 | |
47 | install -m 644 s3cmd.1 $RPM_BUILD_ROOT%{_mandir}/man1 | |
48 | ||
49 | ||
50 | %clean | |
51 | rm -rf $RPM_BUILD_ROOT | |
52 | ||
53 | ||
54 | %files | |
55 | %defattr(-,root,root,-) | |
56 | %{_bindir}/s3cmd | |
57 | %{_mandir}/man1/s3cmd.1* | |
58 | %{python_sitelib}/S3 | |
59 | %if 0%{?fedora} >= 9 || 0%{?rhel} >= 6 | |
60 | %{python_sitelib}/s3cmd*.egg-info | |
61 | %endif | |
62 | %doc NEWS README | |
63 | ||
64 | ||
65 | %changelog | |
66 | * Sun Feb 02 2014 Matt Domsch <mdomsch@fedoraproject.org> - 1.5.0-0.3.git | |
67 | - upstream 1.5.0-beta1 plus newer upstream fixes | |
68 | ||
69 | * Wed May 29 2013 Matt Domsch <mdomsch@fedoraproject.org> - 1.5.0-0.2.gita122d97 | |
70 | - more upstream bugfixes | |
71 | - drop pyxattr dep, that codepath got dropped in this release | |
72 | ||
73 | * Mon May 20 2013 Matt Domsch <mdomsch@fedoraproject.org> - 1.5.0-0.1.gitb1ae0fbe | |
74 | - upstream 1.5.0-alpha3 plus fixes | |
75 | - add dep on pyxattr for the --xattr option | |
76 | ||
77 | * Tue Jun 19 2012 Matt Domsch <mdomsch@fedoraproject.org> - 1.1.0-0.4.git11e5755e | |
78 | - add local MD5 cache | |
79 | ||
80 | * Mon Jun 18 2012 Matt Domsch <mdomsch@fedoraproject.org> - 1.1.0-0.3.git7de0789d | |
81 | - parallelize local->remote syncs | |
82 | ||
83 | * Mon Jun 18 2012 Matt Domsch <mdomsch@fedoraproject.org> - 1.1.0-0.2.gitf881b162 | |
84 | - add hardlink / duplicate file detection support | |
85 | ||
86 | * Fri Mar 9 2012 Matt Domsch <mdomsch@fedoraproject.org> - 1.1.0-0.1.git2dfe4a65 | |
87 | - build from git for mdomsch patches to s3cmd sync | |
88 | ||
89 | * Thu Feb 23 2012 Dennis Gilmore <dennis@ausil.us> - 1.0.1-1 | |
90 | - update to 1.0.1 release | |
91 | ||
92 | * Sat Jan 14 2012 Fedora Release Engineering <rel-eng@lists.fedoraproject.org> - 1.0.0-4 | |
93 | - Rebuilt for https://fedoraproject.org/wiki/Fedora_17_Mass_Rebuild | |
94 | ||
95 | * Thu May 05 2011 Lubomir Rintel (GoodData) <lubo.rintel@gooddata.com> - 1.0.0-3 | |
96 | - No hashlib hackery | |
97 | ||
98 | * Wed Feb 09 2011 Fedora Release Engineering <rel-eng@lists.fedoraproject.org> - 1.0.0-2 | |
99 | - Rebuilt for https://fedoraproject.org/wiki/Fedora_15_Mass_Rebuild | |
100 | ||
101 | * Tue Jan 11 2011 Lubomir Rintel (GoodData) <lubo.rintel@gooddata.com> - 1.0.0-1 | |
102 | - New upstream release | |
103 | ||
104 | * Mon Nov 29 2010 Lubomir Rintel (GoodData) <lubo.rintel@gooddata.com> - 0.9.9.91-3 | |
105 | - Patch for broken f14 httplib | |
106 | ||
107 | * Thu Jul 22 2010 David Malcolm <dmalcolm@redhat.com> - 0.9.9.91-2.1 | |
108 | - Rebuilt for https://fedoraproject.org/wiki/Features/Python_2.7/MassRebuild | |
109 | ||
110 | * Wed Apr 28 2010 Lubomir Rintel (GoodData) <lubo.rintel@gooddata.com> - 0.9.9.91-1.1 | |
111 | - Do not use sha1 from hashlib | |
112 | ||
113 | * Sun Feb 21 2010 Lubomir Rintel (Good Data) <lubo.rintel@gooddata.com> - 0.9.9.91-1 | |
114 | - New upstream release | |
115 | ||
116 | * Sun Jul 26 2009 Fedora Release Engineering <rel-eng@lists.fedoraproject.org> - 0.9.9-2 | |
117 | - Rebuilt for https://fedoraproject.org/wiki/Fedora_12_Mass_Rebuild | |
118 | ||
119 | * Tue Feb 24 2009 Lubomir Rintel (Good Data) <lubo.rintel@gooddata.com> - 0.9.9-1 | |
120 | - New upstream release | |
121 | ||
122 | * Sat Nov 29 2008 Ignacio Vazquez-Abrams <ivazqueznet+rpm@gmail.com> - 0.9.8.4-2 | |
123 | - Rebuild for Python 2.6 | |
124 | ||
125 | * Tue Nov 11 2008 Lubomir Rintel (Good Data) <lubo.rintel@gooddata.com> - 0.9.8.4-1 | |
126 | - New upstream release, URI encoding patch upstreamed | |
127 | ||
128 | * Fri Sep 26 2008 Lubomir Rintel (Good Data) <lubo.rintel@gooddata.com> - 0.9.8.3-4 | |
129 | - Try 3/65536 | |
130 | ||
131 | * Fri Sep 26 2008 Lubomir Rintel (Good Data) <lubo.rintel@gooddata.com> - 0.9.8.3-3 | |
132 | - Whoops, forgot to actually apply the patch. | |
133 | ||
134 | * Fri Sep 26 2008 Lubomir Rintel (Good Data) <lubo.rintel@gooddata.com> - 0.9.8.3-2 | |
135 | - Fix listing of directories with special characters in names | |
136 | ||
137 | * Thu Jul 31 2008 Lubomir Rintel (Good Data) <lubo.rintel@gooddata.com> - 0.9.8.3-1 | |
138 | - New upstream release: Avoid running out-of-memory in MD5'ing large files. | |
139 | ||
140 | * Fri Jul 25 2008 Lubomir Rintel (Good Data) <lubo.rintel@gooddata.com> - 0.9.8.2-1.1 | |
141 | - Fix a typo | |
142 | ||
143 | * Tue Jul 15 2008 Lubomir Rintel (Good Data) <lubo.rintel@gooddata.com> - 0.9.8.2-1 | |
144 | - New upstream | |
145 | ||
146 | * Fri Jul 04 2008 Lubomir Rintel (Good Data) <lubo.rintel@gooddata.com> - 0.9.8.1-3 | |
147 | - Be satisfied with ET provided by 2.5 python | |
148 | ||
149 | * Fri Jul 04 2008 Lubomir Rintel (Good Data) <lubo.rintel@gooddata.com> - 0.9.8.1-2 | |
150 | - Added missing python-devel BR, thanks to Marek Mahut | |
151 | - Packaged the Python egg file | |
152 | ||
153 | * Wed Jul 02 2008 Lubomir Rintel (Good Data) <lubo.rintel@gooddata.com> - 0.9.8.1-1 | |
154 | - Initial packaging attempt |
73 | 73 | Authors: |
74 | 74 | -------- |
75 | 75 | Michal Ludvig <michal@logix.cz> |
76 | """ % (S3.PkgInfo.long_description) | |
76 | """ % (S3.PkgInfo.long_description), | |
77 | requires=["dateutil"] | |
77 | 78 | ) |
78 | 79 | |
79 | 80 | # vim:et:ts=4:sts=4:ai |
0 | #!/bin/sh | |
1 | ||
2 | VERSION=$(./s3cmd --version | awk '{print $NF}') | |
3 | printf 'Uploading \033[32ms3cmd \033[31m%s\033[0m ...\n' "${VERSION}" | 
4 | #rsync -avP dist/s3cmd-${VERSION}.* ludvigm@frs.sourceforge.net:uploads/ | |
5 | ln -f NEWS README.txt | |
6 | rsync -avP dist/s3cmd-${VERSION}.* README.txt ludvigm,s3tools@frs.sourceforge.net:/home/frs/project/s/s3/s3tools/s3cmd/${VERSION}/ |