s3cmd / fb77a9c
Imported Upstream version 1.5.2 (Gianfranco Costamagna)
28 changed files with 1332 additions and 656 deletions.
5959 OpenSuse Python 2.5 package) or it can be installed using your
6060 package manager, e.g. in Debian use
6161
62 apt-get install python2.4-setuptools
62 apt-get install python-setuptools
6363
6464 Again, consult your distribution documentation on how to
6565 find out the actual package name and how to install it then.
0 include INSTALL README.md NEWS
1 include s3cmd.1
0 s3cmd-1.5.2 - 2015-02-08
1 ===============
2 * Handle unvalidated SSL certificate. Necessary on Ubuntu 14.04 for
3 SSL to function at all.
4 * packaging fixes (require python-magic, drop ez_setup)
5
6 s3cmd-1.5.1.2 - 2015-02-04
7 ===============
8 * fix PyPi install
9
10 s3cmd-1.5.1 - 2015-02-04
11 ===============
12
13 * Sort s3cmd ls output by bucket name (Andrew Gaul)
14 * Support relative expiry times in signurl. (Chris Lamb)
15 * Fixed issue with mixed path separators with s3cmd get --recursive on
16 Windows. (Luke Winslow)
17 * fix S3 wildcard certificate checking
18 * Handle headers with spaces in their values properly (#460)
19 * Fix lack of SSL certificate checking libraries on older python
20 * set content-type header for stdin from command line or Config()
21 * fix uploads from stdin (#464)
22 * Fix directory exclusions (#467)
23 * fix signurl
24 * Don't retry in response to HTTP 405 error (#422)
25 * Don't crash when a proxy returns an invalid XML error document
26
27 s3cmd-1.5.0 - 2015-01-12
28 ===============
29 * add support for newer regions such as Frankfurt that
30 require newer authorization signature v4 support
31 (Vasileios Mitrousis, Michal Ludvig, Matt Domsch)
32 * drop support for python 2.4 due to signature v4 code.
33 python 2.6 is now the minimum, and python 3 is still not supported.
34 * handle redirects to the "right" region for a bucket.
35 * add --ca-cert=FILE for self-signed certs (Matt Domsch)
36 * allow proxied SSL connections with python >= 2.7 (Damian Gerow)
37 * add --remove-headers for [modify] command (Matt Domsch)
38 * add -s/--ssl and --no-ssl options (Viktor Szakáts)
39 * add --signature-v2 for backwards compatibility with S3 clones.
40 * bugfixes by 17 contributors
41
042 s3cmd 1.5.0-rc1 - 2014-06-29
143 ===============
2 [TODO - extract from: git log --no-merges v1.5.0-beta1..]
44 * add environment variable S3CMD_CONFIG (Devon Jones),
45	    access key and secret keys (Vasileios Mitrousis)
46 * added modify command (Francois Gaudin)
47 * better debug messages (Matt Domsch)
48 * faster batch deletes (Matt Domsch)
49 * Added support for restoring files from Glacier storage (Robert Palmer)
50 * Add and remove full lifecycle policies (Sam Rudge)
51 * Add support for object expiration (hrchu)
52 * bugfixes by 26 contributors
53
354
455 s3cmd 1.5.0-beta1 - 2013-12-02
556 =================
00 Metadata-Version: 1.1
11 Name: s3cmd
2 Version: 1.5.0-rc1
2 Version: 1.5.2
33 Summary: Command line tool for managing Amazon S3 and CloudFront services
44 Home-page: http://s3tools.org
5 Author: Michal Ludvig
6 Author-email: michal@logix.cz
7 License: GPL version 2
5 Author: github.com/mdomsch, github.com/matteobar
6 Author-email: s3tools-bugs@lists.sourceforge.net
7 License: GNU GPL v2+
88 Description:
99
1010 S3cmd lets you copy files from/to Amazon S3
1919 Michal Ludvig <michal@logix.cz>
2020
2121 Platform: UNKNOWN
22 Requires: dateutil
22 Classifier: Development Status :: 5 - Production/Stable
23 Classifier: Environment :: Console
24 Classifier: Environment :: MacOS X
25 Classifier: Environment :: Win32 (MS Windows)
26 Classifier: Intended Audience :: End Users/Desktop
27 Classifier: Intended Audience :: System Administrators
28 Classifier: License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+)
29 Classifier: Natural Language :: English
30 Classifier: Operating System :: MacOS :: MacOS X
31 Classifier: Operating System :: Microsoft :: Windows
32 Classifier: Operating System :: POSIX
33 Classifier: Operating System :: Unix
34 Classifier: Programming Language :: Python :: 2.6
35 Classifier: Programming Language :: Python :: 2.7
36 Classifier: Programming Language :: Python :: 2 :: Only
37 Classifier: Topic :: System :: Archiving
38 Classifier: Topic :: Utilities
README (file removed: 370 lines deleted; replaced by README.md)
0 S3cmd tool for Amazon Simple Storage Service (S3)
1 =================================================
2
3 Author:
4 Michal Ludvig <michal@logix.cz>
5 Copyright (c) TGRMN Software - http://www.tgrmn.com - and contributors
6
7 S3tools / S3cmd project homepage:
8 http://s3tools.org
9
10 S3tools / S3cmd mailing lists:
11
12 * Announcements of new releases:
13 s3tools-announce@lists.sourceforge.net
14
15 * General questions and discussion about usage
16 s3tools-general@lists.sourceforge.net
17
18 * Bug reports
19 s3tools-bugs@lists.sourceforge.net
20
21 !!!
22 !!! Please consult INSTALL file for installation instructions!
23 !!!
24
25 What is S3cmd
26 --------------
27 S3cmd is a free command line tool and client for uploading,
28 retrieving and managing data in Amazon S3 and other cloud
29 storage service providers that use the S3 protocol, such as
30 Google Cloud Storage or DreamHost DreamObjects. It is best
31 suited for power users who are familiar with command line
32 programs. It is also ideal for batch scripts and automated
33 backup to S3, triggered from cron, etc.
34
35 S3cmd is written in Python. It's an open source project
36 available under GNU Public License v2 (GPLv2) and is free
37 for both commercial and private use. You will only have
38 to pay Amazon for using their storage.
39
40 Lots of features and options have been added to S3cmd,
41 since its very first release in 2008.... we recently counted
42 more than 60 command line options, including multipart
43 uploads, encryption, incremental backup, s3 sync, ACL and
44 Metadata management, S3 bucket size, bucket policies, and
45 more!
46
47 What is Amazon S3
48 -----------------
49 Amazon S3 provides a managed internet-accessible storage
50 service where anyone can store any amount of data and
51 retrieve it later again.
52
53 S3 is a paid service operated by Amazon. Before storing
54 anything into S3 you must sign up for an "AWS" account
55 (where AWS = Amazon Web Services) to obtain a pair of
56 identifiers: Access Key and Secret Key. You will need to
57 give these keys to S3cmd.
58 Think of them as if they were a username and password for
59 your S3 account.
60
61 Amazon S3 pricing explained
62 ---------------------------
63 At the time of this writing the costs of using S3 are (in USD):
64
65 $0.15 per GB per month of storage space used
66
67 plus
68
69 $0.10 per GB - all data uploaded
70
71 plus
72
73 $0.18 per GB - first 10 TB / month data downloaded
74 $0.16 per GB - next 40 TB / month data downloaded
75 $0.13 per GB - data downloaded / month over 50 TB
76
77 plus
78
79 $0.01 per 1,000 PUT or LIST requests
80 $0.01 per 10,000 GET and all other requests
81
82 If for instance on 1st of January you upload 2GB of
83 photos in JPEG from your holiday in New Zealand, at the
84 end of January you will be charged $0.30 for using 2GB of
85 storage space for a month, $0.20 for uploading 2GB
86 of data, and a few cents for requests.
87 That comes to slightly over $0.50 for a complete backup
88 of your precious holiday pictures.
89
90 In February you don't touch it. Your data are still on S3
91 servers so you pay $0.30 for those two gigabytes, but not
92 a single cent will be charged for any transfer. That comes
93 to $0.30 as an ongoing cost of your backup. Not too bad.
94
95 In March you allow anonymous read access to some of your
96 pictures and your friends download, say, 500MB of them.
97 As the files are owned by you, you are responsible for the
98 costs incurred. That means at the end of March you'll be
99 charged $0.30 for storage plus $0.09 for the download traffic
100 generated by your friends.
101
102 There is no minimum monthly contract or a setup fee. What
103 you use is what you pay for. At the beginning my bill used
104 to be like US$0.03 or even nil.
105
106 That's the pricing model of Amazon S3 in a nutshell. Check
107 Amazon S3 homepage at http://aws.amazon.com/s3 for more
108 details.
109
110	Needless to say, all this money is charged by Amazon
111	itself; there is obviously no charge for using S3cmd :-)
112
113 Amazon S3 basics
114 ----------------
115 Files stored in S3 are called "objects" and their names are
116 officially called "keys". Since this is sometimes confusing
117 for the users we often refer to the objects as "files" or
118 "remote files". Each object belongs to exactly one "bucket".
119
120 To describe objects in S3 storage we invented a URI-like
121 schema in the following form:
122
123 s3://BUCKET
124 or
125 s3://BUCKET/OBJECT
126
127 Buckets
128 -------
129 Buckets are sort of like directories or folders with some
130 restrictions:
131 1) each user can only have 100 buckets at the most,
132 2) bucket names must be unique amongst all users of S3,
133 3) buckets can not be nested into a deeper hierarchy and
134 4) a name of a bucket can only consist of basic alphanumeric
135 characters plus dot (.) and dash (-). No spaces, no accented
136 or UTF-8 letters, etc.
137
138 It is a good idea to use DNS-compatible bucket names. That
139 for instance means you should not use upper case characters.
140 While DNS compliance is not strictly required some features
141 described below are not available for DNS-incompatible named
142 buckets. One more step further is using a fully qualified
143 domain name (FQDN) for a bucket - that has even more benefits.
144
145 * For example "s3://--My-Bucket--" is not DNS compatible.
146 * On the other hand "s3://my-bucket" is DNS compatible but
147 is not FQDN.
148 * Finally "s3://my-bucket.s3tools.org" is DNS compatible
149 and FQDN provided you own the s3tools.org domain and can
150 create the domain record for "my-bucket.s3tools.org".
151
152 Look for "Virtual Hosts" later in this text for more details
153 regarding FQDN named buckets.
154
155 Objects (files stored in Amazon S3)
156 -----------------------------------
157 Unlike for buckets there are almost no restrictions on object
158 names. These can be any UTF-8 strings of up to 1024 bytes long.
159 Interestingly enough the object name can contain forward
160 slash character (/) thus a "my/funny/picture.jpg" is a valid
161 object name. Note that there are not directories nor
162 buckets called "my" and "funny" - it is really a single object
163 name called "my/funny/picture.jpg" and S3 does not care at
164 all that it _looks_ like a directory structure.
165
166 The full URI of such an image could be, for example:
167
168 s3://my-bucket/my/funny/picture.jpg
169
170 Public vs Private files
171 -----------------------
172 The files stored in S3 can be either Private or Public. The
173 Private ones are readable only by the user who uploaded them
174 while the Public ones can be read by anyone. Additionally the
175 Public files can be accessed using HTTP protocol, not only
176 using s3cmd or a similar tool.
177
178 The ACL (Access Control List) of a file can be set at the
179 time of upload using --acl-public or --acl-private options
180 with 's3cmd put' or 's3cmd sync' commands (see below).
181
182 Alternatively the ACL can be altered for existing remote files
183 with 's3cmd setacl --acl-public' (or --acl-private) command.
184
185 Simple s3cmd HowTo
186 ------------------
187 1) Register for Amazon AWS / S3
188 Go to http://aws.amazon.com/s3, click the "Sign up
189 for web service" button in the right column and work
190 through the registration. You will have to supply
191	   your Credit Card details in order to allow Amazon to
192	   charge you for S3 usage.
193 At the end you should have your Access and Secret Keys
194
195 2) Run "s3cmd --configure"
196 You will be asked for the two keys - copy and paste
197 them from your confirmation email or from your Amazon
198 account page. Be careful when copying them! They are
199 case sensitive and must be entered accurately or you'll
200 keep getting errors about invalid signatures or similar.
201
202 Remember to add ListAllMyBuckets permissions to the keys
203 or you will get an AccessDenied error while testing access.
204
205 3) Run "s3cmd ls" to list all your buckets.
206 As you just started using S3 there are no buckets owned by
207 you as of now. So the output will be empty.
208
209 4) Make a bucket with "s3cmd mb s3://my-new-bucket-name"
210 As mentioned above the bucket names must be unique amongst
211 _all_ users of S3. That means the simple names like "test"
212 or "asdf" are already taken and you must make up something
213 more original. To demonstrate as many features as possible
214 let's create a FQDN-named bucket s3://public.s3tools.org:
215
216 ~$ s3cmd mb s3://public.s3tools.org
217 Bucket 's3://public.s3tools.org' created
218
219 5) List your buckets again with "s3cmd ls"
220 Now you should see your freshly created bucket
221
222 ~$ s3cmd ls
223 2009-01-28 12:34 s3://public.s3tools.org
224
225 6) List the contents of the bucket
226
227 ~$ s3cmd ls s3://public.s3tools.org
228 ~$
229
230 It's empty, indeed.
231
232 7) Upload a single file into the bucket:
233
234 ~$ s3cmd put some-file.xml s3://public.s3tools.org/somefile.xml
235 some-file.xml -> s3://public.s3tools.org/somefile.xml [1 of 1]
236 123456 of 123456 100% in 2s 51.75 kB/s done
237
238	  Upload a two-directory tree into the bucket's virtual 'directory':
239
240 ~$ s3cmd put --recursive dir1 dir2 s3://public.s3tools.org/somewhere/
241 File 'dir1/file1-1.txt' stored as 's3://public.s3tools.org/somewhere/dir1/file1-1.txt' [1 of 5]
242 File 'dir1/file1-2.txt' stored as 's3://public.s3tools.org/somewhere/dir1/file1-2.txt' [2 of 5]
243 File 'dir1/file1-3.log' stored as 's3://public.s3tools.org/somewhere/dir1/file1-3.log' [3 of 5]
244 File 'dir2/file2-1.bin' stored as 's3://public.s3tools.org/somewhere/dir2/file2-1.bin' [4 of 5]
245 File 'dir2/file2-2.txt' stored as 's3://public.s3tools.org/somewhere/dir2/file2-2.txt' [5 of 5]
246
247 As you can see we didn't have to create the /somewhere
248 'directory'. In fact it's only a filename prefix, not
249 a real directory and it doesn't have to be created in
250 any way beforehand.
251
252 8) Now list the bucket contents again:
253
254 ~$ s3cmd ls s3://public.s3tools.org
255 DIR s3://public.s3tools.org/somewhere/
256 2009-02-10 05:10 123456 s3://public.s3tools.org/somefile.xml
257
258 Use --recursive (or -r) to list all the remote files:
259
260 ~$ s3cmd ls --recursive s3://public.s3tools.org
261 2009-02-10 05:10 123456 s3://public.s3tools.org/somefile.xml
262 2009-02-10 05:13 18 s3://public.s3tools.org/somewhere/dir1/file1-1.txt
263 2009-02-10 05:13 8 s3://public.s3tools.org/somewhere/dir1/file1-2.txt
264 2009-02-10 05:13 16 s3://public.s3tools.org/somewhere/dir1/file1-3.log
265 2009-02-10 05:13 11 s3://public.s3tools.org/somewhere/dir2/file2-1.bin
266 2009-02-10 05:13 8 s3://public.s3tools.org/somewhere/dir2/file2-2.txt
267
268 9) Retrieve one of the files back and verify that it hasn't been
269 corrupted:
270
271 ~$ s3cmd get s3://public.s3tools.org/somefile.xml some-file-2.xml
272 s3://public.s3tools.org/somefile.xml -> some-file-2.xml [1 of 1]
273 123456 of 123456 100% in 3s 35.75 kB/s done
274
275 ~$ md5sum some-file.xml some-file-2.xml
276 39bcb6992e461b269b95b3bda303addf some-file.xml
277 39bcb6992e461b269b95b3bda303addf some-file-2.xml
278
279	  The checksum of the original file matches that of the
280	  retrieved one. Looks like it worked :-)
281
282 To retrieve a whole 'directory tree' from S3 use recursive get:
283
284 ~$ s3cmd get --recursive s3://public.s3tools.org/somewhere
285 File s3://public.s3tools.org/somewhere/dir1/file1-1.txt saved as './somewhere/dir1/file1-1.txt'
286 File s3://public.s3tools.org/somewhere/dir1/file1-2.txt saved as './somewhere/dir1/file1-2.txt'
287 File s3://public.s3tools.org/somewhere/dir1/file1-3.log saved as './somewhere/dir1/file1-3.log'
288 File s3://public.s3tools.org/somewhere/dir2/file2-1.bin saved as './somewhere/dir2/file2-1.bin'
289 File s3://public.s3tools.org/somewhere/dir2/file2-2.txt saved as './somewhere/dir2/file2-2.txt'
290
291 Since the destination directory wasn't specified s3cmd
292	  saved the directory structure in the current working
293 directory ('.').
294
295 There is an important difference between:
296 get s3://public.s3tools.org/somewhere
297 and
298 get s3://public.s3tools.org/somewhere/
299 (note the trailing slash)
300 S3cmd always uses the last path part, ie the word
301 after the last slash, for naming files.
302
303 In the case of s3://.../somewhere the last path part
304 is 'somewhere' and therefore the recursive get names
305 the local files as somewhere/dir1, somewhere/dir2, etc.
306
307 On the other hand in s3://.../somewhere/ the last path
308 part is empty and s3cmd will only create 'dir1' and 'dir2'
309 without the 'somewhere/' prefix:
310
311 ~$ s3cmd get --recursive s3://public.s3tools.org/somewhere /tmp
312 File s3://public.s3tools.org/somewhere/dir1/file1-1.txt saved as '/tmp/dir1/file1-1.txt'
313 File s3://public.s3tools.org/somewhere/dir1/file1-2.txt saved as '/tmp/dir1/file1-2.txt'
314 File s3://public.s3tools.org/somewhere/dir1/file1-3.log saved as '/tmp/dir1/file1-3.log'
315 File s3://public.s3tools.org/somewhere/dir2/file2-1.bin saved as '/tmp/dir2/file2-1.bin'
316
317 See? It's /tmp/dir1 and not /tmp/somewhere/dir1 as it
318 was in the previous example.
319
320 10) Clean up - delete the remote files and remove the bucket:
321
322 Remove everything under s3://public.s3tools.org/somewhere/
323
324 ~$ s3cmd del --recursive s3://public.s3tools.org/somewhere/
325 File s3://public.s3tools.org/somewhere/dir1/file1-1.txt deleted
326 File s3://public.s3tools.org/somewhere/dir1/file1-2.txt deleted
327 ...
328
329 Now try to remove the bucket:
330
331 ~$ s3cmd rb s3://public.s3tools.org
332 ERROR: S3 error: 409 (BucketNotEmpty): The bucket you tried to delete is not empty
333
334 Ouch, we forgot about s3://public.s3tools.org/somefile.xml
335 We can force the bucket removal anyway:
336
337 ~$ s3cmd rb --force s3://public.s3tools.org/
338 WARNING: Bucket is not empty. Removing all the objects from it first. This may take some time...
339 File s3://public.s3tools.org/somefile.xml deleted
340 Bucket 's3://public.s3tools.org/' removed
341
342 Hints
343 -----
344 The basic usage is as simple as described in the previous
345 section.
346
347 You can increase the level of verbosity with -v option and
348 if you're really keen to know what the program does under
349	its bonnet run it with -d to see all 'debugging' output.
350
351 After configuring it with --configure all available options
352	are written to your ~/.s3cfg file. It's a text file ready
353 to be modified in your favourite text editor.
354
355 For more information refer to:
356 * S3cmd / S3tools homepage at http://s3tools.org
357
358 ===========================================================================
359 Copyright (C) 2014 TGRMN Software - http://www.tgrmn.com - and contributors
360
361 This program is free software; you can redistribute it and/or modify
362 it under the terms of the GNU General Public License as published by
363 the Free Software Foundation; either version 2 of the License, or
364 (at your option) any later version.
365
366 This program is distributed in the hope that it will be useful,
367 but WITHOUT ANY WARRANTY; without even the implied warranty of
368 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
369 GNU General Public License for more details.
0 ## S3cmd tool for Amazon Simple Storage Service (S3)
1
2
3 * Author: Michal Ludvig, michal@logix.cz
4 * [Project homepage](http://s3tools.org)
5 * (c) [TGRMN Software](http://www.tgrmn.com) and contributors
6
7
8 S3tools / S3cmd mailing lists:
9
10 * Announcements of new releases: s3tools-announce@lists.sourceforge.net
11 * General questions and discussion: s3tools-general@lists.sourceforge.net
12 * Bug reports: s3tools-bugs@lists.sourceforge.net
13
14 ### What is S3cmd
15
16 S3cmd (`s3cmd`) is a free command line tool and client for uploading, retrieving and managing data in Amazon S3 and other cloud storage service providers that use the S3 protocol, such as Google Cloud Storage or DreamHost DreamObjects. It is best suited for power users who are familiar with command line programs. It is also ideal for batch scripts and automated backup to S3, triggered from cron, etc.
17
18	S3cmd is written in Python. It's an open source project available under the GNU General Public License v2 (GPLv2) and is free for both commercial and private use. You will only have to pay Amazon for using their storage.
19
20	Lots of features and options have been added to S3cmd since its very first release in 2008. We recently counted more than 60 command line options, including multipart uploads, encryption, incremental backup, s3 sync, ACL and metadata management, S3 bucket size, bucket policies, and more!
21
22 ### What is Amazon S3
23
24 Amazon S3 provides a managed internet-accessible storage service where anyone can store any amount of data and retrieve it later again.
25
26 S3 is a paid service operated by Amazon. Before storing anything into S3 you must sign up for an "AWS" account (where AWS = Amazon Web Services) to obtain a pair of identifiers: Access Key and Secret Key. You will need to
27 give these keys to S3cmd. Think of them as if they were a username and password for your S3 account.
28
29 ### Amazon S3 pricing explained
30
31 At the time of this writing the costs of using S3 are (in USD):
32
33 $0.03 per GB per month of storage space used
34
35 plus
36
37 $0.00 per GB - all data uploaded
38
39 plus
40
41 $0.000 per GB - first 1GB / month data downloaded
42 $0.090 per GB - up to 10 TB / month data downloaded
43 $0.085 per GB - next 40 TB / month data downloaded
44 $0.070 per GB - data downloaded / month over 50 TB
45
46 plus
47
48 $0.005 per 1,000 PUT or COPY or LIST requests
49 $0.004 per 10,000 GET and all other requests
50
51 If for instance on 1st of January you upload 2GB of photos in JPEG from your holiday in New Zealand, at the end of January you will be charged $0.06 for using 2GB of storage space for a month, $0.0 for uploading 2GB of data, and a few cents for requests. That comes to slightly over $0.06 for a complete backup of your precious holiday pictures.
52
53 In February you don't touch it. Your data are still on S3 servers so you pay $0.06 for those two gigabytes, but not a single cent will be charged for any transfer. That comes to $0.06 as an ongoing cost of your backup. Not too bad.
54
55 In March you allow anonymous read access to some of your pictures and your friends download, say, 1500MB of them. As the files are owned by you, you are responsible for the costs incurred. That means at the end of March you'll be charged $0.06 for storage plus $0.045 for the download traffic generated by your friends.
56
57 There is no minimum monthly contract or a setup fee. What you use is what you pay for. At the beginning my bill used to be like US$0.03 or even nil.
58
59 That's the pricing model of Amazon S3 in a nutshell. Check the [Amazon S3 homepage](http://aws.amazon.com/s3/pricing/) for more details.
60
61	Needless to say, all this money is charged by Amazon itself; there is obviously no charge for using S3cmd :-)
62
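For clarity, the arithmetic behind the example above can be written out as a short Python script. It only restates the (dated) prices quoted in this README; current AWS pricing differs, so treat the numbers as illustration only:

```
# Sketch of the example bill above, using the prices quoted in this README.
storage_gb = 2.0          # holiday photos kept in S3
storage_price = 0.03      # USD per GB-month
download_gb = 1.5         # what the friends fetched in March
free_download_gb = 1.0    # first GB of transfer out is free
download_price = 0.09     # USD per GB in the "up to 10 TB" tier

march_storage = storage_gb * storage_price
march_download = max(0.0, download_gb - free_download_gb) * download_price
print('March bill: $%.3f storage + $%.3f transfer = $%.3f'
      % (march_storage, march_download, march_storage + march_download))
# -> March bill: $0.060 storage + $0.045 transfer = $0.105
```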
63 ### Amazon S3 basics
64
65 Files stored in S3 are called "objects" and their names are officially called "keys". Since this is sometimes confusing for the users we often refer to the objects as "files" or "remote files". Each object belongs to exactly one "bucket".
66
67 To describe objects in S3 storage we invented a URI-like schema in the following form:
68
69 ```
70 s3://BUCKET
71 ```
72 or
73
74 ```
75 s3://BUCKET/OBJECT
76 ```
77
78 ### Buckets
79
80 Buckets are sort of like directories or folders with some restrictions:
81
82 1. each user can only have 100 buckets at the most,
83 2. bucket names must be unique amongst all users of S3,
84 3. buckets can not be nested into a deeper hierarchy and
85 4. a name of a bucket can only consist of basic alphanumeric
86 characters plus dot (.) and dash (-). No spaces, no accented
87 or UTF-8 letters, etc.
88
89 It is a good idea to use DNS-compatible bucket names. That for instance means you should not use upper case characters. While DNS compliance is not strictly required some features described below are not available for DNS-incompatible named buckets. One more step further is using a fully qualified domain name (FQDN) for a bucket - that has even more benefits.
90
91 * For example "s3://--My-Bucket--" is not DNS compatible.
92 * On the other hand "s3://my-bucket" is DNS compatible but
93 is not FQDN.
94 * Finally "s3://my-bucket.s3tools.org" is DNS compatible
95 and FQDN provided you own the s3tools.org domain and can
96 create the domain record for "my-bucket.s3tools.org".
97
98 Look for "Virtual Hosts" later in this text for more details regarding FQDN named buckets.
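The examples above can also be checked mechanically. Below is a rough, illustrative Python test for DNS-compatible names; it simplifies Amazon's actual rules and is not the validation s3cmd itself performs:

```
# Rough check for DNS-compatible bucket names (illustration only).
import re

def looks_dns_compatible(bucket):
    if not 3 <= len(bucket) <= 63:
        return False
    # lowercase letters, digits, dots and dashes; start/end alphanumeric
    return re.match(r'^[a-z0-9]([a-z0-9.-]*[a-z0-9])?$', bucket) is not None

print(looks_dns_compatible('--My-Bucket--'))          # False
print(looks_dns_compatible('my-bucket'))              # True
print(looks_dns_compatible('my-bucket.s3tools.org'))  # True
```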
99
100 ### Objects (files stored in Amazon S3)
101
102	Unlike for buckets, there are almost no restrictions on object names. These can be any UTF-8 strings up to 1024 bytes long. Interestingly enough, the object name can contain the forward slash character (/), thus `my/funny/picture.jpg` is a valid object name. Note that there are no directories or buckets called `my` and `funny` - it is really a single object named `my/funny/picture.jpg` and S3 does not care at all that it _looks_ like a directory structure.
103
104 The full URI of such an image could be, for example:
105
106 ```
107 s3://my-bucket/my/funny/picture.jpg
108 ```
109
110 ### Public vs Private files
111
112 The files stored in S3 can be either Private or Public. The Private ones are readable only by the user who uploaded them while the Public ones can be read by anyone. Additionally the Public files can be accessed using HTTP protocol, not only using `s3cmd` or a similar tool.
113
114 The ACL (Access Control List) of a file can be set at the time of upload using `--acl-public` or `--acl-private` options with `s3cmd put` or `s3cmd sync` commands (see below).
115
116 Alternatively the ACL can be altered for existing remote files with `s3cmd setacl --acl-public` (or `--acl-private`) command.
117
118 ### Simple s3cmd HowTo
119
120 1) Register for Amazon AWS / S3
121
122	Go to http://aws.amazon.com/s3, click the "Sign up for web service" button in the right column and work through the registration. You will have to supply your Credit Card details in order to allow Amazon to charge you for S3 usage. At the end you should have your Access and Secret Keys.
123
124 2) Run `s3cmd --configure`
125
126 You will be asked for the two keys - copy and paste them from your confirmation email or from your Amazon account page. Be careful when copying them! They are case sensitive and must be entered accurately or you'll keep getting errors about invalid signatures or similar.
127
128 Remember to add ListAllMyBuckets permissions to the keys or you will get an AccessDenied error while testing access.
129
130 3) Run `s3cmd ls` to list all your buckets.
131
132 As you just started using S3 there are no buckets owned by you as of now. So the output will be empty.
133
134 4) Make a bucket with `s3cmd mb s3://my-new-bucket-name`
135
136 As mentioned above the bucket names must be unique amongst _all_ users of S3. That means the simple names like "test" or "asdf" are already taken and you must make up something more original. To demonstrate as many features as possible let's create a FQDN-named bucket `s3://public.s3tools.org`:
137
138 ```
139 $ s3cmd mb s3://public.s3tools.org
140
141 Bucket 's3://public.s3tools.org' created
142 ```
143
144 5) List your buckets again with `s3cmd ls`
145
146 Now you should see your freshly created bucket:
147
148 ```
149 $ s3cmd ls
150
151 2009-01-28 12:34 s3://public.s3tools.org
152 ```
153
154 6) List the contents of the bucket:
155
156 ```
157 $ s3cmd ls s3://public.s3tools.org
158 $
159 ```
160
161 It's empty, indeed.
162
163 7) Upload a single file into the bucket:
164
165 ```
166 $ s3cmd put some-file.xml s3://public.s3tools.org/somefile.xml
167
168 some-file.xml -> s3://public.s3tools.org/somefile.xml [1 of 1]
169 123456 of 123456 100% in 2s 51.75 kB/s done
170 ```
171
172 Upload a two-directory tree into the bucket's virtual 'directory':
173
174 ```
175 $ s3cmd put --recursive dir1 dir2 s3://public.s3tools.org/somewhere/
176
177 File 'dir1/file1-1.txt' stored as 's3://public.s3tools.org/somewhere/dir1/file1-1.txt' [1 of 5]
178 File 'dir1/file1-2.txt' stored as 's3://public.s3tools.org/somewhere/dir1/file1-2.txt' [2 of 5]
179 File 'dir1/file1-3.log' stored as 's3://public.s3tools.org/somewhere/dir1/file1-3.log' [3 of 5]
180 File 'dir2/file2-1.bin' stored as 's3://public.s3tools.org/somewhere/dir2/file2-1.bin' [4 of 5]
181 File 'dir2/file2-2.txt' stored as 's3://public.s3tools.org/somewhere/dir2/file2-2.txt' [5 of 5]
182 ```
183
184 As you can see we didn't have to create the `/somewhere` 'directory'. In fact it's only a filename prefix, not a real directory and it doesn't have to be created in any way beforehand.
185
186 8) Now list the bucket's contents again:
187
188 ```
189 $ s3cmd ls s3://public.s3tools.org
190
191 DIR s3://public.s3tools.org/somewhere/
192 2009-02-10 05:10 123456 s3://public.s3tools.org/somefile.xml
193 ```
194
195 Use --recursive (or -r) to list all the remote files:
196
197 ```
198 $ s3cmd ls --recursive s3://public.s3tools.org
199
200 2009-02-10 05:10 123456 s3://public.s3tools.org/somefile.xml
201 2009-02-10 05:13 18 s3://public.s3tools.org/somewhere/dir1/file1-1.txt
202 2009-02-10 05:13 8 s3://public.s3tools.org/somewhere/dir1/file1-2.txt
203 2009-02-10 05:13 16 s3://public.s3tools.org/somewhere/dir1/file1-3.log
204 2009-02-10 05:13 11 s3://public.s3tools.org/somewhere/dir2/file2-1.bin
205 2009-02-10 05:13 8 s3://public.s3tools.org/somewhere/dir2/file2-2.txt
206 ```
207
208 9) Retrieve one of the files back and verify that it hasn't been
209 corrupted:
210
211 ```
212 $ s3cmd get s3://public.s3tools.org/somefile.xml some-file-2.xml
213
214 s3://public.s3tools.org/somefile.xml -> some-file-2.xml [1 of 1]
215 123456 of 123456 100% in 3s 35.75 kB/s done
216 ```
217
218 ```
219 $ md5sum some-file.xml some-file-2.xml
220
221 39bcb6992e461b269b95b3bda303addf some-file.xml
222 39bcb6992e461b269b95b3bda303addf some-file-2.xml
223 ```
224
225	The checksum of the original file matches that of the retrieved one. Looks like it worked :-)
226
227 To retrieve a whole 'directory tree' from S3 use recursive get:
228
229 ```
230 $ s3cmd get --recursive s3://public.s3tools.org/somewhere
231
232 File s3://public.s3tools.org/somewhere/dir1/file1-1.txt saved as './somewhere/dir1/file1-1.txt'
233 File s3://public.s3tools.org/somewhere/dir1/file1-2.txt saved as './somewhere/dir1/file1-2.txt'
234 File s3://public.s3tools.org/somewhere/dir1/file1-3.log saved as './somewhere/dir1/file1-3.log'
235 File s3://public.s3tools.org/somewhere/dir2/file2-1.bin saved as './somewhere/dir2/file2-1.bin'
236 File s3://public.s3tools.org/somewhere/dir2/file2-2.txt saved as './somewhere/dir2/file2-2.txt'
237 ```
238
239	Since the destination directory wasn't specified, `s3cmd` saved the directory structure in the current working directory ('.').
240
241 There is an important difference between:
242
243 ```
244 get s3://public.s3tools.org/somewhere
245 ```
246
247 and
248
249 ```
250 get s3://public.s3tools.org/somewhere/
251 ```
252
253 (note the trailing slash)
254
255 `s3cmd` always uses the last path part, ie the word after the last slash, for naming files.
256
257 In the case of `s3://.../somewhere` the last path part is 'somewhere' and therefore the recursive get names the local files as somewhere/dir1, somewhere/dir2, etc.
258
259 On the other hand in `s3://.../somewhere/` the last path
260 part is empty and s3cmd will only create 'dir1' and 'dir2'
261 without the 'somewhere/' prefix:
262
263 ```
264 $ s3cmd get --recursive s3://public.s3tools.org/somewhere /tmp
265
266 File s3://public.s3tools.org/somewhere/dir1/file1-1.txt saved as '/tmp/dir1/file1-1.txt'
267 File s3://public.s3tools.org/somewhere/dir1/file1-2.txt saved as '/tmp/dir1/file1-2.txt'
268 File s3://public.s3tools.org/somewhere/dir1/file1-3.log saved as '/tmp/dir1/file1-3.log'
269 File s3://public.s3tools.org/somewhere/dir2/file2-1.bin saved as '/tmp/dir2/file2-1.bin'
270 ```
271
272 See? It's `/tmp/dir1` and not `/tmp/somewhere/dir1` as it was in the previous example.
273
274 10) Clean up - delete the remote files and remove the bucket:
275
276 Remove everything under s3://public.s3tools.org/somewhere/
277
278 ```
279 $ s3cmd del --recursive s3://public.s3tools.org/somewhere/
280
281 File s3://public.s3tools.org/somewhere/dir1/file1-1.txt deleted
282 File s3://public.s3tools.org/somewhere/dir1/file1-2.txt deleted
283 ...
284 ```
285
286 Now try to remove the bucket:
287
288 ```
289 $ s3cmd rb s3://public.s3tools.org
290
291 ERROR: S3 error: 409 (BucketNotEmpty): The bucket you tried to delete is not empty
292 ```
293
294 Ouch, we forgot about `s3://public.s3tools.org/somefile.xml`. We can force the bucket removal anyway:
295
296 ```
297 $ s3cmd rb --force s3://public.s3tools.org/
298
299 WARNING: Bucket is not empty. Removing all the objects from it first. This may take some time...
300 File s3://public.s3tools.org/somefile.xml deleted
301 Bucket 's3://public.s3tools.org/' removed
302 ```
303
304 ### Hints
305
306 The basic usage is as simple as described in the previous section.
307
308	You can increase the level of verbosity with the `-v` option, and if you're really keen to know what the program does under its bonnet, run it with `-d` to see all 'debugging' output.
309
310	After configuring it with `--configure`, all available options are written to your `~/.s3cfg` file. It's a text file ready to be modified in your favourite text editor.
311
312 For more information refer to the [S3cmd / S3tools homepage](http://s3tools.org).
313
314 ### License
315
316 Copyright (C) 2014 TGRMN Software - http://www.tgrmn.com - and contributors
317
318 This program is free software; you can redistribute it and/or modify
319 it under the terms of the GNU General Public License as published by
320 the Free Software Foundation; either version 2 of the License, or
321 (at your option) any later version.
322
323 This program is distributed in the hope that it will be useful,
324 but WITHOUT ANY WARRANTY; without even the implied warranty of
325 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
326 GNU General Public License for more details.
327
158158 grantee.name = name
159159 grantee.permission = permission
160160
161 if name.find('@') > -1:
161 if '@' in name:
162162 grantee.name = grantee.name.lower()
163163 grantee.xsi_type = "AmazonCustomerByEmail"
164164 grantee.tag = "EmailAddress"
165 elif name.find('http://acs.amazonaws.com/groups/') > -1:
165 elif 'http://acs.amazonaws.com/groups/' in name:
166166 grantee.xsi_type = "Group"
167167 grantee.tag = "URI"
168168 else:
1818 from S3 import S3
1919 from Config import Config
2020 from Exceptions import *
21 from Utils import getTreeFromXml, appendXmlTextNode, getDictFromTree, dateS3toPython, sign_string, getBucketFromHostname, getHostnameFromBucket
21 from Utils import getTreeFromXml, appendXmlTextNode, getDictFromTree, dateS3toPython, getBucketFromHostname, getHostnameFromBucket
22 from Crypto import sign_string_v2
2223 from S3Uri import S3Uri, S3UriS3
2324 from FileLists import fetch_remote_list
25 from ConnMan import ConnMan
2426
2527 cloudfront_api_version = "2010-11-01"
2628 cloudfront_resource = "/%(api_ver)s/distribution" % { 'api_ver' : cloudfront_api_version }
494496 request = self.create_request(operation, dist_id, request_id, headers)
495497 conn = self.get_connection()
496498 debug("send_request(): %s %s" % (request['method'], request['resource']))
497 conn.request(request['method'], request['resource'], body, request['headers'])
498 http_response = conn.getresponse()
499 conn.c.request(request['method'], request['resource'], body, request['headers'])
500 http_response = conn.c.getresponse()
499501 response = {}
500502 response["status"] = http_response.status
501503 response["reason"] = http_response.reason
502504 response["headers"] = dict(http_response.getheaders())
503505 response["data"] = http_response.read()
504 conn.close()
506 ConnMan.put(conn)
505507
506508 debug("CloudFront: response: %r" % response)
507509
552554
553555 def sign_request(self, headers):
554556 string_to_sign = headers['x-amz-date']
555 signature = sign_string(string_to_sign)
557 signature = sign_string_v2(string_to_sign)
556558 debug(u"CloudFront.sign_request('%s') = %s" % (string_to_sign, signature))
557559 return signature
558560
559561 def get_connection(self):
560 if self.config.proxy_host != "":
561 raise ParameterError("CloudFront commands don't work from behind a HTTP proxy")
562 return httplib.HTTPSConnection(self.config.cloudfront_host)
562 conn = ConnMan.get(self.config.cloudfront_host, ssl = True)
563 return conn
563564
564565 def _fail_wait(self, retries):
565566 # Wait a few seconds. The more it fails the more we wait.
1111 import Progress
1212 from SortedDict import SortedDict
1313 import httplib
14 import locale
1415 try:
1516 import json
1617 except ImportError, e:
7677 gpg_encrypt = "%(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s"
7778 gpg_decrypt = "%(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s"
7879 use_https = False
80 ca_certs_file = ""
81 check_ssl_certificate = True
7982 bucket_location = "US"
8083 default_mime_type = "binary/octet-stream"
8184 guess_mime_type = True
9194 # Dict mapping compiled REGEXPs back to their textual form
9295 debug_exclude = {}
9396 debug_include = {}
94 encoding = "utf-8"
97 encoding = locale.getpreferredencoding() or "UTF-8"
9598 urlencoding_mode = "normal"
9699 log_target_prefix = ""
97100 reduced_redundancy = False
108111 files_from = []
109112 cache_file = ""
110113 add_headers = ""
114 remove_headers = []
111115 ignore_failed_copy = False
112116 expiry_days = ""
113117 expiry_date = ""
114118 expiry_prefix = ""
119 signature_v2 = False
115120
116121 ## Creating a singleton
117122 def __new__(self, configfile = None, access_key=None, secret_key=None):
222227 def read_config_file(self, configfile):
223228 cp = ConfigParser(configfile)
224229 for option in self.option_list():
225 self.update_option(option, cp.get(option))
230 _option = cp.get(option)
231 if _option is not None:
232 _option = _option.strip()
233 self.update_option(option, _option)
226234
227235 if cp.get('add_headers'):
228236 for option in cp.get('add_headers').split(","):
33 ## License: GPL Version 2
44 ## Copyright: TGRMN Software and contributors
55
6 import sys
67 import httplib
8 import ssl
79 from urlparse import urlparse
810 from threading import Semaphore
911 from logging import debug, info, warning, error
1113 from Config import Config
1214 from Exceptions import ParameterError
1315
16	if 'CertificateError' not in ssl.__dict__:
17 class CertificateError(Exception):
18 pass
19 ssl.CertificateError = CertificateError
20
1421 __all__ = [ "ConnMan" ]
1522
1623 class http_connection(object):
24 context = None
25 context_set = False
26
27 @staticmethod
28 def _ssl_verified_context(cafile):
29 context = None
30 try:
31 context = ssl.create_default_context(cafile=cafile)
32 except AttributeError: # no ssl.create_default_context
33 pass
34 return context
35
36 @staticmethod
37 def _ssl_context():
38 if http_connection.context_set:
39 return http_connection.context
40
41 cfg = Config()
42 cafile = cfg.ca_certs_file
43 if cafile == "":
44 cafile = None
45 debug(u"Using ca_certs_file %s" % cafile)
46
47 context = http_connection._ssl_verified_context(cafile)
48
49 if context and not cfg.check_ssl_certificate:
50 context.check_hostname = False
51 debug(u'Disabling hostname checking')
52
53 http_connection.context = context
54 http_connection.context_set = True
55 return context
56
57 def match_hostname_aws(self, cert, e):
58 """
59 Wildcard matching for *.s3.amazonaws.com and similar per region.
60
61 Per http://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html:
62 "We recommend that all bucket names comply with DNS naming conventions."
63
64 Per http://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html:
65 "When using virtual hosted-style buckets with SSL, the SSL
66 wild card certificate only matches buckets that do not contain
67 periods. To work around this, use HTTP or write your own
68 certificate verification logic."
69
70 Therefore, we need a custom validation routine that allows
71 mybucket.example.com.s3.amazonaws.com to be considered a valid
72 hostname for the *.s3.amazonaws.com wildcard cert, and for the
73 region-specific *.s3-[region].amazonaws.com wildcard cert.
74 """
75 san = cert.get('subjectAltName', ())
76 for key, value in san:
77 if key == 'DNS':
78 if value.startswith('*.s3') and value.endswith('.amazonaws.com') and self.c.host.endswith('.amazonaws.com'):
79 return
80 raise e
81
82 def match_hostname(self):
83 cert = self.c.sock.getpeercert()
84 try:
85 ssl.match_hostname(cert, self.c.host)
86 except AttributeError: # old ssl module doesn't have this function
87 return
88 except ValueError: # empty SSL cert means underlying SSL library didn't validate it, we don't either.
89 return
90 except ssl.CertificateError, e:
91 self.match_hostname_aws(cert, e)
92
93 @staticmethod
94 def _https_connection(hostname, port=None):
95 try:
96 context = http_connection._ssl_context()
97	            # S3's wildcard certificate doesn't work with DNS-style named buckets.
98 if hostname.endswith('.amazonaws.com') and context:
99 # this merely delays running the hostname check until
100 # after the connection is made and we get control
101 # back. We then run the same check, relaxed for S3's
102 # wildcard certificates.
103 context.check_hostname = False
104 conn = httplib.HTTPSConnection(hostname, port, context=context)
105 except TypeError:
106 conn = httplib.HTTPSConnection(hostname, port)
107 return conn
108
17109 def __init__(self, id, hostname, ssl, cfg):
18 self.hostname = hostname
19110 self.ssl = ssl
20111 self.id = id
21112 self.counter = 0
22 if cfg.proxy_host != "":
23 self.c = httplib.HTTPConnection(cfg.proxy_host, cfg.proxy_port)
24 elif not ssl:
25 self.c = httplib.HTTPConnection(hostname)
113
114 if not ssl:
115 if cfg.proxy_host != "":
116 self.c = httplib.HTTPConnection(cfg.proxy_host, cfg.proxy_port)
117 debug(u'proxied HTTPConnection(%s, %s)' % (cfg.proxy_host, cfg.proxy_port))
118 else:
119 self.c = httplib.HTTPConnection(hostname)
120 debug(u'non-proxied HTTPConnection(%s)' % hostname)
26121 else:
27 self.c = httplib.HTTPSConnection(hostname)
122 if cfg.proxy_host != "":
123 self.c = http_connection._https_connection(cfg.proxy_host, cfg.proxy_port)
124 self.c.set_tunnel(hostname)
125 debug(u'proxied HTTPSConnection(%s, %s)' % (cfg.proxy_host, cfg.proxy_port))
126 debug(u'tunnel to %s' % hostname)
127 else:
128 self.c = http_connection._https_connection(hostname)
129 debug(u'non-proxied HTTPSConnection(%s)' % hostname)
130
28131
29132 class ConnMan(object):
30133 conn_pool_sem = Semaphore()
38141 ssl = cfg.use_https
39142 conn = None
40143 if cfg.proxy_host != "":
41 if ssl:
42 raise ParameterError("use_https=True can't be used with proxy")
144 if ssl and sys.hexversion < 0x02070000:
145 raise ParameterError("use_https=True can't be used with proxy on Python <2.7")
43146 conn_id = "proxy://%s:%s" % (cfg.proxy_host, cfg.proxy_port)
44147 else:
45148 conn_id = "http%s://%s" % (ssl and "s" or "", hostname)
54157 debug("ConnMan.get(): creating new connection: %s" % conn_id)
55158 conn = http_connection(conn_id, hostname, ssl, cfg)
56159 conn.c.connect()
160 if conn.ssl:
161 conn.match_hostname()
57162 conn.counter += 1
58163 return conn
59164
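A standalone illustration of the certificate problem that match_hostname_aws() above works around: the stdlib hostname check rejects dotted (FQDN-style) bucket names against S3's *.s3.amazonaws.com wildcard certificate, even though S3 serves them fine. This sketch assumes a Python where ssl.match_hostname and ssl.CertificateError are available (roughly 2.7.9 or newer):

```
# The stdlib wildcard rule matches only a single label, so a dotted bucket
# name fails against *.s3.amazonaws.com.
import ssl

fake_cert = {'subjectAltName': (('DNS', '*.s3.amazonaws.com'),)}
ssl.match_hostname(fake_cert, 'mybucket.s3.amazonaws.com')        # passes
try:
    ssl.match_hostname(fake_cert, 'my.bucket.s3.amazonaws.com')   # rejected
except ssl.CertificateError, e:
    print('stdlib check says: %s' % e)
```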
0 ## Amazon S3 manager
1 ## Author: Michal Ludvig <michal@logix.cz>
2 ## http://www.logix.cz/michal
3 ## License: GPL Version 2
4 ## Copyright: TGRMN Software and contributors
5
6 import sys
7 import hmac
8 import base64
9
10 import Config
11 from logging import debug
12 import Utils
13
14 import os
15 import datetime
16 import urllib
17
18 # hashlib backported to python 2.4 / 2.5 is not compatible with hmac!
19 if sys.version_info[0] == 2 and sys.version_info[1] < 6:
20 from md5 import md5
21 import sha as sha1
22 from Crypto.Hash import SHA256 as sha256
23 else:
24 from hashlib import md5, sha1, sha256
25
26 __all__ = []
27
28 ### AWS Version 2 signing
29 def sign_string_v2(string_to_sign):
30 """Sign a string with the secret key, returning base64 encoded results.
31 By default the configured secret key is used, but may be overridden as
32 an argument.
33
34 Useful for REST authentication. See http://s3.amazonaws.com/doc/s3-developer-guide/RESTAuthentication.html
35 """
36 signature = base64.encodestring(hmac.new(Config.Config().secret_key, string_to_sign, sha1).digest()).strip()
37 return signature
38 __all__.append("sign_string_v2")
39
40 def sign_url_v2(url_to_sign, expiry):
41 """Sign a URL in s3://bucket/object form with the given expiry
42 time. The object will be accessible via the signed URL until the
43 AWS key and secret are revoked or the expiry time is reached, even
44 if the object is otherwise private.
45
46 See: http://s3.amazonaws.com/doc/s3-developer-guide/RESTAuthentication.html
47 """
48 return sign_url_base_v2(
49 bucket = url_to_sign.bucket(),
50 object = url_to_sign.object(),
51 expiry = expiry
52 )
53 __all__.append("sign_url_v2")
54
55 def sign_url_base_v2(**parms):
56 """Shared implementation of sign_url methods. Takes a hash of 'bucket', 'object' and 'expiry' as args."""
57 parms['expiry']=Utils.time_to_epoch(parms['expiry'])
58 parms['access_key']=Config.Config().access_key
59 parms['host_base']=Config.Config().host_base
60 debug("Expiry interpreted as epoch time %s", parms['expiry'])
61 signtext = 'GET\n\n\n%(expiry)d\n/%(bucket)s/%(object)s' % parms
62 debug("Signing plaintext: %r", signtext)
63 parms['sig'] = urllib.quote_plus(sign_string_v2(signtext))
64 debug("Urlencoded signature: %s", parms['sig'])
65 return "http://%(bucket)s.%(host_base)s/%(object)s?AWSAccessKeyId=%(access_key)s&Expires=%(expiry)d&Signature=%(sig)s" % parms
66
67 def sign(key, msg):
68 return hmac.new(key, msg.encode('utf-8'), sha256).digest()
69
70 def getSignatureKey(key, dateStamp, regionName, serviceName):
71 kDate = sign(('AWS4' + key).encode('utf-8'), dateStamp)
72 kRegion = sign(kDate, regionName)
73 kService = sign(kRegion, serviceName)
74 kSigning = sign(kService, 'aws4_request')
75 return kSigning
76
77 def sign_string_v4(method='GET', host='', canonical_uri='/', params={}, region='us-east-1', cur_headers={}, body=''):
78 service = 's3'
79
80 cfg = Config.Config()
81 access_key = cfg.access_key
82 secret_key = cfg.secret_key
83
84 t = datetime.datetime.utcnow()
85 amzdate = t.strftime('%Y%m%dT%H%M%SZ')
86 datestamp = t.strftime('%Y%m%d')
87
88 canonical_querystring = '&'.join(['%s=%s' % (urllib.quote_plus(p), quote_param(params[p])) for p in sorted(params.keys())])
89
90 splits = canonical_uri.split('?')
91
92 canonical_uri = quote_param(splits[0], quote_backslashes=False)
93 canonical_querystring += '&'.join([('%s' if '=' in qs else '%s=') % qs for qs in splits[1:]])
94
95 if type(body) == type(sha256('')):
96 payload_hash = body.hexdigest()
97 else:
98 payload_hash = sha256(body).hexdigest()
99
100 canonical_headers = {'host' : host,
101 'x-amz-content-sha256': payload_hash,
102 'x-amz-date' : amzdate
103 }
104 signed_headers = 'host;x-amz-content-sha256;x-amz-date'
105
106 for header in cur_headers.keys():
107 # avoid duplicate headers and previous Authorization
108 if header == 'Authorization' or header in signed_headers.split(';'):
109 continue
110 canonical_headers[header.strip()] = str(cur_headers[header]).strip()
111 signed_headers += ';' + header.strip()
112
113 # sort headers into a string
114 canonical_headers_str = ''
115 for k, v in sorted(canonical_headers.items()):
116 canonical_headers_str += k + ":" + v + "\n"
117
118 canonical_headers = canonical_headers_str
119 debug(u"canonical_headers = %s" % canonical_headers)
120 signed_headers = ';'.join(sorted(signed_headers.split(';')))
121
122 canonical_request = method + '\n' + canonical_uri + '\n' + canonical_querystring + '\n' + canonical_headers + '\n' + signed_headers + '\n' + payload_hash
123 debug('Canonical Request:\n%s\n----------------------' % canonical_request)
124
125 algorithm = 'AWS4-HMAC-SHA256'
126 credential_scope = datestamp + '/' + region + '/' + service + '/' + 'aws4_request'
127 string_to_sign = algorithm + '\n' + amzdate + '\n' + credential_scope + '\n' + sha256(canonical_request).hexdigest()
128 signing_key = getSignatureKey(secret_key, datestamp, region, service)
129 signature = hmac.new(signing_key, (string_to_sign).encode('utf-8'), sha256).hexdigest()
130 authorization_header = algorithm + ' ' + 'Credential=' + access_key + '/' + credential_scope + ',' + 'SignedHeaders=' + signed_headers + ',' + 'Signature=' + signature
131 headers = dict(cur_headers.items() + {'x-amz-date':amzdate, 'Authorization':authorization_header, 'x-amz-content-sha256': payload_hash}.items())
132 debug("signature-v4 headers: %s" % headers)
133 return headers
134
135 def quote_param(param, quote_backslashes=True):
136 # As stated by Amazon the '/' in the filename should stay unquoted and %20 should be used for space instead of '+'
137 quoted = urllib.quote_plus(urllib.unquote_plus(param), safe='~').replace('+', '%20')
138 if not quote_backslashes:
139 quoted = quoted.replace('%2F', '/')
140 return quoted
141
142 def checksum_sha256_file(filename, offset=0, size=None):
143 try:
144 hash = sha256()
145 except:
146 # fallback to Crypto SHA256 module
147 hash = sha256.new()
148 with open(filename,'rb') as f:
149 if size is None:
150 for chunk in iter(lambda: f.read(8192), b''):
151 hash.update(chunk)
152 else:
153 f.seek(offset)
154 chunk = f.read(size)
155 hash.update(chunk)
156 return hash
157
158 def checksum_sha256_buffer(buffer, offset=0, size=None):
159 try:
160 hash = sha256()
161 except:
162 # fallback to Crypto SHA256 module
163 hash = sha256.new()
164 if size is None:
165 hash.update(buffer)
166 else:
167 hash.update(buffer[offset:offset+size])
168 return hash
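As an aside, the v2 query-string signing implemented by sign_string_v2() and sign_url_base_v2() above can be reproduced standalone. A minimal sketch with made-up credentials, assuming the default host_base of s3.amazonaws.com (none of the values below are real):

```
# Minimal sketch of the v2 presigned-URL construction shown above.
import base64, hmac, urllib
from hashlib import sha1

secret_key = 'EXAMPLE-SECRET-KEY'     # hypothetical credential
access_key = 'EXAMPLE-ACCESS-KEY'     # hypothetical credential
bucket, obj = 'my-bucket', 'my/funny/picture.jpg'
expiry = 1500000000                   # absolute epoch time

signtext = 'GET\n\n\n%d\n/%s/%s' % (expiry, bucket, obj)
signature = base64.encodestring(
    hmac.new(secret_key, signtext, sha1).digest()).strip()
print('http://%s.s3.amazonaws.com/%s?AWSAccessKeyId=%s&Expires=%d&Signature=%s'
      % (bucket, obj, access_key, expiry, urllib.quote_plus(signature)))
```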
55
66 from Utils import getTreeFromXml, unicodise, deunicodise
77 from logging import debug, info, warning, error
8 import ExitCodes
89
910 try:
1011 import xml.etree.ElementTree as ET
1112 except ImportError:
13 # xml.etree.ElementTree was only added in python 2.5
1214 import elementtree.ElementTree as ET
15
16 try:
17 from xml.etree.ElementTree import ParseError as XmlParseError
18 except ImportError:
19 # ParseError was only added in python2.7, before ET was raising ExpatError
20 from xml.parsers.expat import ExpatError as XmlParseError
1321
1422 class S3Exception(Exception):
1523 def __init__(self, message = ""):
4755 if response.has_key("data") and response["data"]:
4856 try:
4957 tree = getTreeFromXml(response["data"])
50 except ET.ParseError:
58 except XmlParseError:
5159 debug("Not an XML response")
5260 else:
53 self.info.update(self.parse_error_xml(tree))
61 try:
62 self.info.update(self.parse_error_xml(tree))
63 except Exception, e:
64 error("Error parsing xml: %s. ErrorXML: %s" % (e, response["data"]))
5465
5566 self.code = self.info["Code"]
5667 self.message = self.info["Message"]
6374 retval += (u": %s" % self.info["Message"])
6475 return retval
6576
77 def get_error_code(self):
78 if self.status in [301, 307]:
79 return ExitCodes.EX_SERVERMOVED
80 elif self.status in [400, 405, 411, 416, 501]:
81 return ExitCodes.EX_SERVERERROR
82 elif self.status == 403:
83 return ExitCodes.EX_ACCESSDENIED
84 elif self.status == 404:
85 return ExitCodes.EX_NOTFOUND
86 elif self.status == 409:
87 return ExitCodes.EX_CONFLICT
88 elif self.status == 412:
89 return ExitCodes.EX_PRECONDITION
90 elif self.status == 500:
91 return ExitCodes.EX_SOFTWARE
92 elif self.status == 503:
93 return ExitCodes.EX_SERVICE
94 else:
95 return ExitCodes.EX_SOFTWARE
96
6697 @staticmethod
6798 def parse_error_xml(tree):
6899 info = {}
69100 error_node = tree
70101 if not error_node.tag == "Error":
71102 error_node = tree.find(".//Error")
72 for child in error_node.getchildren():
73 if child.text != "":
74 debug("ErrorXML: " + child.tag + ": " + repr(child.text))
75 info[child.tag] = child.text
76
103 if error_node is not None:
104 for child in error_node.getchildren():
105 if child.text != "":
106 debug("ErrorXML: " + child.tag + ": " + repr(child.text))
107 info[child.tag] = child.text
108 else:
109 raise S3ResponseError("Malformed error XML returned from remote server.")
77110 return info
78111
79112
00 # patterned on /usr/include/sysexits.h
11
2 EX_OK = 0
3 EX_GENERAL = 1
4 EX_SOMEFAILED = 2 # some parts of the command succeeded, while others failed
5 EX_USAGE = 64 # The command was used incorrectly (e.g. bad command line syntax)
6 EX_SOFTWARE = 70 # internal software error (e.g. S3 error of unknown specificity)
7 EX_OSERR = 71 # system error (e.g. out of memory)
8 EX_OSFILE = 72 # OS error (e.g. invalid Python version)
9 EX_IOERR = 74 # An error occurred while doing I/O on some file.
10 EX_TEMPFAIL = 75 # temporary failure (S3DownloadError or similar, retry later)
11 EX_NOPERM = 77 # Insufficient permissions to perform the operation on S3
12 EX_CONFIG = 78 # Configuration file error
13 _EX_SIGNAL = 128
14 _EX_SIGINT = 2
15 EX_BREAK = _EX_SIGNAL + _EX_SIGINT # Control-C (KeyboardInterrupt raised)
2 EX_OK = 0
3 EX_GENERAL = 1
4 EX_PARTIAL = 2 # some parts of the command succeeded, while others failed
5	EX_SERVERMOVED   = 10    # 301: Moved permanently & 307: Moved temporarily
6 EX_SERVERERROR = 11 # 400, 405, 411, 416, 501: Bad request
7 EX_NOTFOUND = 12 # 404: Not found
8 EX_CONFLICT = 13 # 409: Conflict (ex: bucket error)
9 EX_PRECONDITION = 14 # 412: Precondition failed
10 EX_SERVICE = 15 # 503: Service not available or slow down
11 EX_USAGE = 64 # The command was used incorrectly (e.g. bad command line syntax)
12 EX_SOFTWARE = 70 # internal software error (e.g. S3 error of unknown specificity)
13 EX_OSERR = 71 # system error (e.g. out of memory)
14 EX_OSFILE = 72 # OS error (e.g. invalid Python version)
15 EX_IOERR = 74 # An error occurred while doing I/O on some file.
16 EX_TEMPFAIL = 75 # temporary failure (S3DownloadError or similar, retry later)
17 EX_ACCESSDENIED = 77 # Insufficient permissions to perform the operation on S3
18 EX_CONFIG = 78 # Configuration file error
19 _EX_SIGNAL = 128
20 _EX_SIGINT = 2
21 EX_BREAK = _EX_SIGNAL + _EX_SIGINT # Control-C (KeyboardInterrupt raised)
9797 debug(u"CHECK: %r" % d)
9898 excluded = False
9999 for r in cfg.exclude:
100 if not r.pattern.endswith(u'/'): continue # we only check for directories here
100 # python versions end their patterns (from globs) differently, test for both styles.
101 if not (r.pattern.endswith(u'\\/$') or r.pattern.endswith(u'\\/\\Z(?ms)')): continue # we only check for directory patterns here
101102 if r.search(d):
102103 excluded = True
103104 debug(u"EXCL-MATCH: '%s'" % (cfg.debug_exclude[r]))
105106 if excluded:
106107 ## No need to check for --include if not excluded
107108 for r in cfg.include:
108 if not r.pattern.endswith(u'/'): continue # we only check for directories here
109 # python versions end their patterns (from globs) differently, test for both styles.
110 if not (r.pattern.endswith(u'\\/$') or r.pattern.endswith(u'\\/\\Z(?ms)')): continue # we only check for directory patterns here
109111 debug(u"INCL-TEST: %s ~ %s" % (d, r.pattern))
110112 if r.search(d):
111113 excluded = False
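For context on the two suffixes tested above: if the exclude/include patterns are fnmatch-translated globs (as the comment suggests), the translated regex is terminated differently depending on the Python version, which is easy to confirm directly:

```
# Print the regex a glob translates to on the running interpreter; the
# directory check above keys off its suffix ('$' on older Pythons,
# '\Z(?ms)' on newer ones).
import fnmatch
print(repr(fnmatch.translate('some/dir/')))
```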
380382 rem_list[key] = {
381383 'size' : int(object['Size']),
382384 'timestamp' : dateS3toUnix(object['LastModified']), ## Sadly it's upload time, not our lastmod time :-(
383 'md5' : object['ETag'][1:-1],
385 'md5' : object['ETag'].strip('"\''),
384386 'object_key' : object['Key'],
385387 'object_uri_str' : object_uri_str,
386388 'base_uri' : remote_uri,
387389 'dev' : None,
388390 'inode' : None,
389391 }
390 if rem_list[key]['md5'].find("-") > 0: # always get it for multipart uploads
392 if '-' in rem_list[key]['md5']: # always get it for multipart uploads
391393 _get_remote_attribs(S3Uri(object_uri_str), rem_list[key])
392394 md5 = rem_list[key]['md5']
393395 rem_list.record_md5(key, md5)
467469 return False
468470
469471 ## check size first
470 if 'size' in cfg.sync_checks and dst_list[file]['size'] != src_list[file]['size']:
471 debug(u"xfer: %s (size mismatch: src=%s dst=%s)" % (file, src_list[file]['size'], dst_list[file]['size']))
472 attribs_match = False
472 if 'size' in cfg.sync_checks:
473 if 'size' in dst_list[file] and 'size' in src_list[file]:
474 if dst_list[file]['size'] != src_list[file]['size']:
475 debug(u"xfer: %s (size mismatch: src=%s dst=%s)" % (file, src_list[file]['size'], dst_list[file]['size']))
476 attribs_match = False
473477
474478 ## check md5
475479 compare_md5 = 'md5' in cfg.sync_checks
476480 # Multipart-uploaded files don't have a valid md5 sum - it ends with "...-nn"
477481 if compare_md5:
478 if (src_remote == True and src_list[file]['md5'].find("-") >= 0) or (dst_remote == True and dst_list[file]['md5'].find("-") >= 0):
482 if (src_remote == True and '-' in src_list[file]['md5']) or (dst_remote == True and '-' in dst_list[file]['md5']):
479483 compare_md5 = False
480484 info(u"disabled md5 check for %s" % file)
481485 if attribs_match and compare_md5:
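
Multipart uploads carry an ETag of the form '<md5-of-the-part-md5s>-<number-of-parts>' rather than a plain MD5, so the hunk above disables the md5 comparison whenever either side's checksum contains a dash. A tiny illustrative helper expressing the same test (not s3cmd code):

    def is_multipart_etag(etag):
        # e.g. "d41d8cd98f00b204e9800998ecf8427e-12" for a 12-part upload
        return '-' in etag.strip('"\'')
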
116116 else:
117117 while True:
118118 buffer = self.file.read(self.chunk_size)
119 offset = self.chunk_size * (seq - 1)
119 offset = 0 # send from start of the buffer
120120 current_chunk_size = len(buffer)
121121 labels = {
122122 'source' : unicodise(self.file.name),
129129 self.upload_part(seq, offset, current_chunk_size, labels, buffer, remote_status = remote_statuses.get(seq))
130130 except:
131131 error(u"\nUpload of '%s' part %d failed. Use\n %s abortmp %s %s\nto abort, or\n %s --upload-id %s put ...\nto continue the upload."
132 % (self.file.name, seq, self.uri, sys.argv[0], self.upload_id, sys.argv[0], self.upload_id))
132 % (self.file.name, seq, sys.argv[0], self.uri, self.upload_id, sys.argv[0], self.upload_id))
133133 raise
134134 seq += 1
135135
146146 if remote_status is not None:
147147 if int(remote_status['size']) == chunk_size:
148148 checksum = calculateChecksum(buffer, self.file, offset, chunk_size, self.s3.config.send_chunk)
149 remote_checksum = remote_status['checksum'].strip('"')
149 remote_checksum = remote_status['checksum'].strip('"\'')
150150 if remote_checksum == checksum:
151151 warning("MultiPart: size and md5sum match for %s part %d, skipping." % (self.uri, seq))
152152 self.parts[seq] = remote_status['checksum']
179179 body = "<CompleteMultipartUpload>%s</CompleteMultipartUpload>" % ("".join(parts_xml))
180180
181181 headers = { "content-length": len(body) }
182 request = self.s3.create_request("OBJECT_POST", uri = self.uri, headers = headers, extra = "?uploadId=%s" % (self.upload_id))
183 response = self.s3.send_request(request, body = body)
182 request = self.s3.create_request("OBJECT_POST", uri = self.uri, headers = headers, extra = "?uploadId=%s" % (self.upload_id), body = body)
183 response = self.s3.send_request(request)
184184
185185 return response
186186
44 ## Copyright: TGRMN Software and contributors
55
66 package = "s3cmd"
7 version = "1.5.0-rc1"
7 version = "1.5.2"
88 url = "http://s3tools.org"
9 license = "GPL version 2"
9 license = "GNU GPL v2+"
1010 short_description = "Command line tool for managing Amazon S3 and CloudFront services"
1111 long_description = """
1212 S3cmd lets you copy files from/to Amazon S3
3232 from MultiPart import MultiPartUpload
3333 from S3Uri import S3Uri
3434 from ConnMan import ConnMan
35 from Crypto import sign_string_v2, sign_string_v4, checksum_sha256_file, checksum_sha256_buffer
36 from ExitCodes import *
3537
3638 try:
3739 import magic
6062 return magic_.file(file)
6163
6264 except ImportError, e:
63 if str(e).find("magic") >= 0:
65 if 'magic' in str(e):
6466 magic_message = "Module python-magic is not available."
6567 else:
6668 magic_message = "Module python-magic can't be used (%s)." % e.message
100102
101103 __all__ = []
102104 class S3Request(object):
103 def __init__(self, s3, method_string, resource, headers, params = {}):
105 region_map = {}
106
107 def __init__(self, s3, method_string, resource, headers, body, params = {}):
104108 self.s3 = s3
105109 self.headers = SortedDict(headers or {}, ignore_case = True)
106 # Add in any extra headers from s3 config object
107 if self.s3.config.extra_headers:
108 self.headers.update(self.s3.config.extra_headers)
109110 if len(self.s3.config.access_token)>0:
110111 self.s3.config.role_refresh()
111112 self.headers['x-amz-security-token']=self.s3.config.access_token
112113 self.resource = resource
113114 self.method_string = method_string
114115 self.params = params
115
116 self.update_timestamp()
117 self.sign()
116 self.body = body
118117
119118 def update_timestamp(self):
120119 if self.headers.has_key("date"):
136135 param_str += "&%s" % param
137136 return param_str and "?" + param_str[1:]
138137
138 def use_signature_v2(self):
139 if self.s3.endpoint_requires_signature_v4:
140 return False
141 # in case of bad DNS name due to bucket name v2 will be used
142 # this way we can still use capital letters in bucket names for the older regions
143
144 if self.resource['bucket'] is None or not check_bucket_name_dns_conformity(self.resource['bucket']) or self.s3.config.signature_v2 or self.s3.fallback_to_signature_v2:
145 return True
146 return False
147
139148 def sign(self):
140149 h = self.method_string + "\n"
141150 h += self.headers.get("content-md5", "")+"\n"
142151 h += self.headers.get("content-type", "")+"\n"
143152 h += self.headers.get("date", "")+"\n"
144 for header in self.headers.keys():
153 for header in sorted(self.headers.keys()):
145154 if header.startswith("x-amz-"):
146155 h += header+":"+str(self.headers[header])+"\n"
147156 if self.resource['bucket']:
148157 h += "/" + self.resource['bucket']
149158 h += self.resource['uri']
150 debug("SignHeaders: " + repr(h))
151 signature = sign_string(h)
152
153 self.headers["Authorization"] = "AWS "+self.s3.config.access_key+":"+signature
159
160 if self.use_signature_v2():
161 debug("Using signature v2")
162 debug("SignHeaders: " + repr(h))
163 signature = sign_string_v2(h)
164 self.headers["Authorization"] = "AWS "+self.s3.config.access_key+":"+signature
165 else:
166 debug("Using signature v4")
167 self.headers = sign_string_v4(self.method_string,
168 self.s3.get_hostname(self.resource['bucket']),
169 self.resource['uri'],
170 self.params,
171 S3Request.region_map.get(self.resource['bucket'], Config().bucket_location),
172 self.headers,
173 self.body)
154174
155175 def get_triplet(self):
156176 self.update_timestamp()
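
S3Request.sign() now chooses between the legacy v2 string-to-sign and AWS Signature v4 via use_signature_v2(), falling back to v2 for non-DNS bucket names, --signature-v2, or S3 clones. A hedged sketch of the v2 step it falls back to, mirroring the sign_string helper removed from Utils.py further below (the secret_key argument is a placeholder, not s3cmd's exact API):

    import base64, hmac
    from hashlib import sha1

    def sign_string_v2_sketch(secret_key, string_to_sign):
        # classic AWS signature v2: base64(HMAC-SHA1(secret, string-to-sign))
        return base64.b64encode(hmac.new(secret_key, string_to_sign, sha1).digest())
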
205225
206226 def __init__(self, config):
207227 self.config = config
228 self.fallback_to_signature_v2 = False
229 self.endpoint_requires_signature_v4 = False
208230
209231 def get_hostname(self, bucket):
210 if bucket and check_bucket_name_dns_conformity(bucket):
232 if bucket and check_bucket_name_dns_support(self.config.host_bucket, bucket):
211233 if self.redir_map.has_key(bucket):
212234 host = self.redir_map[bucket]
213235 else:
221243 self.redir_map[bucket] = redir_hostname
222244
223245 def format_uri(self, resource):
224 if resource['bucket'] and not check_bucket_name_dns_conformity(resource['bucket']):
246 if resource['bucket'] and not check_bucket_name_dns_support(self.config.host_bucket, resource['bucket']):
225247 uri = "/%s%s" % (resource['bucket'], resource['uri'])
226248 else:
227249 uri = resource['uri']
248270
249271 def _get_common_prefixes(data):
250272 return getListFromXml(data, "CommonPrefixes")
273
251274
252275 uri_params = uri_params.copy()
253276 truncated = True
301324 check_bucket_name(bucket, dns_strict = False)
302325 if self.config.acl_public:
303326 headers["x-amz-acl"] = "public-read"
304 request = self.create_request("BUCKET_CREATE", bucket = bucket, headers = headers)
305 response = self.send_request(request, body)
327
328 request = self.create_request("BUCKET_CREATE", bucket = bucket, headers = headers, body = body)
329 response = self.send_request(request)
306330 return response
307331
308332 def bucket_delete(self, bucket):
329353 def website_info(self, uri, bucket_location = None):
330354 headers = SortedDict(ignore_case = True)
331355 bucket = uri.bucket()
332 body = ""
333356
334357 request = self.create_request("BUCKET_LIST", bucket = bucket, extra="?website")
335358 try:
336 response = self.send_request(request, body)
359 response = self.send_request(request)
337360 response['index_document'] = getTextFromXml(response['data'], ".//IndexDocument//Suffix")
338361 response['error_document'] = getTextFromXml(response['data'], ".//ErrorDocument//Key")
339362 response['website_endpoint'] = self.config.website_endpoint % {
359382 body += ' </ErrorDocument>'
360383 body += '</WebsiteConfiguration>'
361384
362 request = self.create_request("BUCKET_CREATE", bucket = bucket, extra="?website")
363 debug("About to send request '%s' with body '%s'" % (request, body))
364 response = self.send_request(request, body)
385 request = self.create_request("BUCKET_CREATE", bucket = bucket, extra="?website", body = body)
386 response = self.send_request(request)
365387 debug("Received response '%s'" % (response))
366388
367389 return response
369391 def website_delete(self, uri, bucket_location = None):
370392 headers = SortedDict(ignore_case = True)
371393 bucket = uri.bucket()
372 body = ""
373394
374395 request = self.create_request("BUCKET_DELETE", bucket = bucket, extra="?website")
375 debug("About to send request '%s' with body '%s'" % (request, body))
376 response = self.send_request(request, body)
396 response = self.send_request(request)
377397 debug("Received response '%s'" % (response))
378398
379399 if response['status'] != 204:
384404 def expiration_info(self, uri, bucket_location = None):
385405 headers = SortedDict(ignore_case = True)
386406 bucket = uri.bucket()
387 body = ""
388407
389408 request = self.create_request("BUCKET_LIST", bucket = bucket, extra="?lifecycle")
390409 try:
391 response = self.send_request(request, body)
410 response = self.send_request(request)
392411 response['prefix'] = getTextFromXml(response['data'], ".//Rule//Prefix")
393412 response['date'] = getTextFromXml(response['data'], ".//Rule//Expiration//Date")
394413 response['days'] = getTextFromXml(response['data'], ".//Rule//Expiration//Days")
407426 raise ParameterError("Expect either --expiry-day or --expiry-date")
408427 debug("del bucket lifecycle")
409428 bucket = uri.bucket()
410 body = ""
411429 request = self.create_request("BUCKET_DELETE", bucket = bucket, extra="?lifecycle")
412430 else:
413 request, body = self._expiration_set(uri)
414 debug("About to send request '%s' with body '%s'" % (request, body))
415 response = self.send_request(request, body)
431 request = self._expiration_set(uri)
432 response = self.send_request(request)
416433 debug("Received response '%s'" % (response))
417434 return response
418435
434451 headers = SortedDict(ignore_case = True)
435452 headers['content-md5'] = compute_content_md5(body)
436453 bucket = uri.bucket()
437 request = self.create_request("BUCKET_CREATE", bucket = bucket, headers = headers, extra="?lifecycle")
438 return (request, body)
454 request = self.create_request("BUCKET_CREATE", bucket = bucket, headers = headers, extra="?lifecycle", body = body)
455 return (request)
456
457 def _guess_content_type(self, filename):
458 content_type = self.config.default_mime_type
459 content_charset = None
460
461 if filename == "-" and not self.config.default_mime_type:
462 raise ParameterError("You must specify --mime-type or --default-mime-type for files uploaded from stdin.")
463
464 if self.config.guess_mime_type:
465 if self.config.use_mime_magic:
466 (content_type, content_charset) = mime_magic(filename)
467 else:
468 (content_type, content_charset) = mimetypes.guess_type(filename)
469 if not content_type:
470 content_type = self.config.default_mime_type
471 return (content_type, content_charset)
472
473 def stdin_content_type(self):
474 content_type = self.config.mime_type
475 if content_type == '':
476 content_type = self.config.default_mime_type
477
478 content_type += "; charset=" + self.config.encoding.upper()
479 return content_type
480
481 def content_type(self, filename=None):
482 # explicit command line argument always wins
483 content_type = self.config.mime_type
484 content_charset = None
485
486 if filename == u'-':
487 return self.stdin_content_type()
488 if not content_type:
489 (content_type, content_charset) = self._guess_content_type(filename)
490
491 ## add charset to content type
492 if not content_charset:
493 content_charset = self.config.encoding.upper()
494 if self.add_encoding(filename, content_type) and content_charset is not None:
495 content_type = content_type + "; charset=" + content_charset
496
497 return content_type
439498
440499 def add_encoding(self, filename, content_type):
441 if content_type.find("charset=") != -1:
500 if 'charset=' in content_type:
442501 return False
443502 exts = self.config.add_encoding_exts.split(',')
444503 if exts[0]=='':
479538 headers["x-amz-server-side-encryption"] = "AES256"
480539
481540 ## MIME-type handling
482 content_type = self.config.mime_type
483 content_charset = None
484 if filename != "-" and not content_type and self.config.guess_mime_type:
485 if self.config.use_mime_magic:
486 (content_type, content_charset) = mime_magic(filename)
487 else:
488 (content_type, content_charset) = mimetypes.guess_type(filename)
489 if not content_type:
490 content_type = self.config.default_mime_type
491 if not content_charset:
492 content_charset = self.config.encoding.upper()
493
494 ## add charset to content type
495 if self.add_encoding(filename, content_type) and content_charset is not None:
496 content_type = content_type + "; charset=" + content_charset
497
498 headers["content-type"] = content_type
541 headers["content-type"] = self.content_type(filename=filename)
499542
500543 ## Other Amazon S3 attributes
501544 if self.config.acl_public:
527570
528571 if info is not None:
529572 remote_size = int(info['headers']['content-length'])
530 remote_checksum = info['headers']['etag'].strip('"')
573 remote_checksum = info['headers']['etag'].strip('"\'')
531574 if size == remote_size:
532575 checksum = calculateChecksum('', file, 0, size, self.config.send_chunk)
533576 if remote_checksum == checksum:
560603 for key in key_list:
561604 uri = S3Uri(key)
562605 if uri.type != "s3":
563 raise ValueError("Excpected URI type 's3', got '%s'" % uri.type)
606 raise ValueError("Expected URI type 's3', got '%s'" % uri.type)
564607 if not uri.has_object():
565608 raise ValueError("URI '%s' has no object" % key)
566609 if uri.bucket() != bucket:
578621 request_body = compose_batch_del_xml(bucket, batch)
579622 md5_hash = md5()
580623 md5_hash.update(request_body)
581 headers = {'content-md5': base64.b64encode(md5_hash.digest())}
582 request = self.create_request("BATCH_DELETE", bucket = bucket, extra = '?delete', headers = headers)
583 response = self.send_request(request, request_body)
624 headers = {'content-md5': base64.b64encode(md5_hash.digest()),
625 'content-type': 'application/xml'}
626 request = self.create_request("BATCH_DELETE", bucket = bucket, extra = '?delete', headers = headers, body = request_body)
627 response = self.send_request(request)
584628 return response
585629
586630 def object_delete(self, uri):
596640 body = '<RestoreRequest xmlns="http://s3.amazonaws.com/doc/2006-3-01">'
597641 body += (' <Days>%s</Days>' % self.config.restore_days)
598642 body += '</RestoreRequest>'
599 request = self.create_request("OBJECT_POST", uri = uri, extra = "?restore")
600 debug("About to send request '%s' with body '%s'" % (request, body))
601 response = self.send_request(request, body)
643 request = self.create_request("OBJECT_POST", uri = uri, extra = "?restore", body = body)
644 response = self.send_request(request)
602645 debug("Received response '%s'" % (response))
603646 return response
647
648 def _sanitize_headers(self, headers):
649 to_remove = [
650 # from http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html
651 'date',
652 'content-length',
653 'last-modified',
654 'content-md5',
655 'x-amz-version-id',
656 'x-amz-delete-marker',
657 # other headers returned from object_info() we don't want to send
658 'accept-ranges',
659 'etag',
660 'server',
661 'x-amz-id-2',
662 'x-amz-request-id',
663 ]
664
665 for h in to_remove + self.config.remove_headers:
666 if h.lower() in headers:
667 del headers[h.lower()]
668 return headers
604669
605670 def object_copy(self, src_uri, dst_uri, extra_headers = None):
606671 if src_uri.type != "s3":
609674 raise ValueError("Expected URI type 's3', got '%s'" % dst_uri.type)
610675 headers = SortedDict(ignore_case = True)
611676 headers['x-amz-copy-source'] = "/%s/%s" % (src_uri.bucket(), self.urlencode_string(src_uri.object()))
612 ## TODO: For now COPY, later maybe add a switch?
613677 headers['x-amz-metadata-directive'] = "COPY"
614678 if self.config.acl_public:
615679 headers["x-amz-acl"] = "public-read"
616680 if self.config.reduced_redundancy:
617681 headers["x-amz-storage-class"] = "REDUCED_REDUNDANCY"
682 else:
683 headers["x-amz-storage-class"] = "STANDARD"
618684
619685 ## Set server side encryption
620686 if self.config.server_side_encryption:
621687 headers["x-amz-server-side-encryption"] = "AES256"
622688
623689 if extra_headers:
624 headers['x-amz-metadata-directive'] = "REPLACE"
625690 headers.update(extra_headers)
691
626692 request = self.create_request("OBJECT_PUT", uri = dst_uri, headers = headers)
627693 response = self.send_request(request)
694 return response
695
696 def object_modify(self, src_uri, dst_uri, extra_headers = None):
697 if src_uri.type != "s3":
698 raise ValueError("Expected URI type 's3', got '%s'" % src_uri.type)
699 if dst_uri.type != "s3":
700 raise ValueError("Expected URI type 's3', got '%s'" % dst_uri.type)
701
702 info_response = self.object_info(src_uri)
703 headers = info_response['headers']
704 headers = self._sanitize_headers(headers)
705 acl = self.get_acl(src_uri)
706
707 headers['x-amz-copy-source'] = "/%s/%s" % (src_uri.bucket(), self.urlencode_string(src_uri.object()))
708 headers['x-amz-metadata-directive'] = "REPLACE"
709
710 # cannot change between standard and reduced redundancy with a REPLACE.
711
712 ## Set server side encryption
713 if self.config.server_side_encryption:
714 headers["x-amz-server-side-encryption"] = "AES256"
715
716 if extra_headers:
717 headers.update(extra_headers)
718
719 if self.config.mime_type:
720 headers["content-type"] = self.config.mime_type
721
722 request = self.create_request("OBJECT_PUT", uri = src_uri, headers = headers)
723 response = self.send_request(request)
724
725 acl_response = self.set_acl(src_uri, acl)
726
628727 return response
629728
630729 def object_move(self, src_uri, dst_uri, extra_headers = None):
651750 return acl
652751
653752 def set_acl(self, uri, acl):
654 if uri.has_object():
655 request = self.create_request("OBJECT_PUT", uri = uri, extra = "?acl")
656 else:
657 request = self.create_request("BUCKET_CREATE", bucket = uri.bucket(), extra = "?acl")
753 # dreamhost doesn't support set_acl properly
754 if 'objects.dreamhost.com' in self.config.host_base:
755 return { 'status' : 501 } # not implemented
658756
659757 body = str(acl)
660758 debug(u"set_acl(%s): acl-xml: %s" % (uri, body))
661 response = self.send_request(request, body)
759
760 headers = {'content-type': 'application/xml'}
761 if uri.has_object():
762 request = self.create_request("OBJECT_PUT", uri = uri, extra = "?acl", body = body)
763 else:
764 request = self.create_request("BUCKET_CREATE", bucket = uri.bucket(), extra = "?acl", body = body)
765
766 response = self.send_request(request)
662767 return response
663768
664769 def get_policy(self, uri):
671776 # TODO check policy is proper json string
672777 headers['content-type'] = 'application/json'
673778 request = self.create_request("BUCKET_CREATE", uri = uri,
674 extra = "?policy", headers=headers)
675 body = policy
676 debug(u"set_policy(%s): policy-json: %s" % (uri, body))
677 request.sign()
678 response = self.send_request(request, body=body)
779 extra = "?policy", headers=headers, body = policy)
780 response = self.send_request(request)
679781 return response
680782
681783 def delete_policy(self, uri):
688790 headers = SortedDict(ignore_case = True)
689791 headers['content-md5'] = compute_content_md5(policy)
690792 request = self.create_request("BUCKET_CREATE", uri = uri,
691 extra = "?lifecycle", headers=headers)
692 body = policy
693 debug(u"set_lifecycle_policy(%s): policy-xml: %s" % (uri, body))
694 request.sign()
695 response = self.send_request(request, body=body)
793 extra = "?lifecycle", headers=headers, body = policy)
794 debug(u"set_lifecycle_policy(%s): policy-xml: %s" % (uri))
795 response = self.send_request(request)
696796 return response
697797
698798 def delete_lifecycle_policy(self, uri):
733833 self.set_acl(uri, acl)
734834
735835 def set_accesslog(self, uri, enable, log_target_prefix_uri = None, acl_public = False):
736 request = self.create_request("BUCKET_CREATE", bucket = uri.bucket(), extra = "?logging")
737836 accesslog = AccessLog()
738837 if enable:
739838 accesslog.enableLogging(log_target_prefix_uri)
740839 accesslog.setAclPublic(acl_public)
741840 else:
742841 accesslog.disableLogging()
842
743843 body = str(accesslog)
744844 debug(u"set_accesslog(%s): accesslog-xml: %s" % (uri, body))
845
846 request = self.create_request("BUCKET_CREATE", bucket = uri.bucket(), extra = "?logging", body = body)
745847 try:
746 response = self.send_request(request, body)
848 response = self.send_request(request)
747849 except S3Error, e:
748850 if e.info['Code'] == "InvalidTargetBucketForLogging":
749851 info("Setting up log-delivery ACL for target bucket.")
750852 self.set_accesslog_acl(S3Uri("s3://%s" % log_target_prefix_uri.bucket()))
751 response = self.send_request(request, body)
853 response = self.send_request(request)
752854 else:
753855 raise
754856 return accesslog, response
805907 debug("String '%s' encoded to '%s'" % (string, encoded))
806908 return encoded
807909
808 def create_request(self, operation, uri = None, bucket = None, object = None, headers = None, extra = None, **params):
910 def create_request(self, operation, uri = None, bucket = None, object = None, headers = None, extra = None, body = "", **params):
809911 resource = { 'bucket' : None, 'uri' : "/" }
810912
811913 if uri and (bucket or object):
824926
825927 method_string = S3.http_methods.getkey(S3.operations[operation] & S3.http_methods["MASK"])
826928
827 request = S3Request(self, method_string, resource, headers, params)
929 request = S3Request(self, method_string, resource, headers, body, params)
828930
829931 debug("CreateRequest: resource[uri]=" + resource['uri'])
830932 return request
833935 # Wait a few seconds. The more it fails the more we wait.
834936 return (self._max_retries - retries + 1) * 3
835937
836 def send_request(self, request, body = None, retries = _max_retries):
938 def _http_400_handler(self, request, response, fn, *args, **kwargs):
939 # AWS response AuthorizationHeaderMalformed means we sent the request to the wrong region
940 # get the right region out of the response and send it there.
941 message = 'Unknown error'
942 if 'data' in response and len(response['data']) > 0:
943 failureCode = getTextFromXml(response['data'], 'Code')
944 message = getTextFromXml(response['data'], 'Message')
945 if failureCode == 'AuthorizationHeaderMalformed': # we sent the request to the wrong region
946 region = getTextFromXml(response['data'], 'Region')
947 if region is not None:
948 S3Request.region_map[request.resource['bucket']] = region
949 info('Forwarding request to %s' % region)
950 return fn(*args, **kwargs)
951 else:
952 message = u'Could not determine bucket location. Please consider using --region parameter.'
953
954 elif failureCode == 'InvalidRequest':
955 if message == 'The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.':
956 debug(u'Endpoint requires signature v4')
957 self.endpoint_requires_signature_v4 = True
958 return fn(*args, **kwargs)
959
960 elif failureCode == 'InvalidArgument': # returned by DreamObjects on send_request and send_file,
961 # which doesn't support signature v4. Retry with signature v2
962 if not request.use_signature_v2() and not self.fallback_to_signature_v2: # have not tried with v2 yet
963 debug(u'Falling back to signature v2')
964 self.fallback_to_signature_v2 = True
965 return fn(*args, **kwargs)
966
967 else: # returned by DreamObjects on recv_file, which doesn't support signature v4. Retry with signature v2
968 if not request.use_signature_v2() and not self.fallback_to_signature_v2: # have not tried with v2 yet
969 debug(u'Falling back to signature v2')
970 self.fallback_to_signature_v2 = True
971 return fn(*args, **kwargs)
972
973 error(u"S3 error: %s" % message)
974 sys.exit(ExitCodes.EX_GENERAL)
975
976 def _http_403_handler(self, request, response, fn, *args, **kwargs):
977 message = 'Unknown error'
978 if 'data' in response and len(response['data']) > 0:
979 failureCode = getTextFromXml(response['data'], 'Code')
980 message = getTextFromXml(response['data'], 'Message')
981 if failureCode == 'AccessDenied': # traditional HTTP 403
982 if message == 'AWS authentication requires a valid Date or x-amz-date header': # message from an Eucalyptus walrus server
983 if not request.use_signature_v2() and not self.fallback_to_signature_v2: # have not tried with v2 yet
984 debug(u'Falling back to signature v2')
985 self.fallback_to_signature_v2 = True
986 return fn(*args, **kwargs)
987
988 error(u"S3 error: %s" % message)
989 sys.exit(ExitCodes.EX_GENERAL)
990
991 def send_request(self, request, retries = _max_retries):
837992 method_string, resource, headers = request.get_triplet()
993
838994 debug("Processing request, please wait...")
839 if not headers.has_key('content-length'):
840 headers['content-length'] = body and len(body) or 0
841995 try:
842 # "Stringify" all headers
843 for header in headers.keys():
844 headers[header] = str(headers[header])
845996 conn = ConnMan.get(self.get_hostname(resource['bucket']))
846997 uri = self.format_uri(resource)
847 debug("Sending request method_string=%r, uri=%r, headers=%r, body=(%i bytes)" % (method_string, uri, headers, len(body or "")))
848 conn.c.request(method_string, uri, body, headers)
998 debug("Sending request method_string=%r, uri=%r, headers=%r, body=(%i bytes)" % (method_string, uri, headers, len(request.body or "")))
999 conn.c.request(method_string, uri, request.body, headers)
8491000 response = {}
8501001 http_response = conn.c.getresponse()
8511002 response["status"] = http_response.status
8661017 warning("Retrying failed request: %s (%s)" % (resource['uri'], e))
8671018 warning("Waiting %d sec..." % self._fail_wait(retries))
8681019 time.sleep(self._fail_wait(retries))
869 return self.send_request(request, body, retries - 1)
1020 return self.send_request(request, retries - 1)
8701021 else:
8711022 raise S3RequestError("Request failed for: %s" % resource['uri'])
1023
1024 if response["status"] == 400:
1025 return self._http_400_handler(request, response, self.send_request, request)
1026 if response["status"] == 403:
1027 return self._http_403_handler(request, response, self.send_request, request)
1028 if response["status"] == 405: # Method Not Allowed. Don't retry.
1029 raise S3Error(response)
8721030
8731031 if response["status"] == 307:
8741032 ## RedirectPermanent
8751033 redir_bucket = getTextFromXml(response['data'], ".//Bucket")
8761034 redir_hostname = getTextFromXml(response['data'], ".//Endpoint")
8771035 self.set_hostname(redir_bucket, redir_hostname)
878 warning("Redirected to: %s" % (redir_hostname))
879 return self.send_request(request, body)
1036 info("Redirected to: %s" % (redir_hostname))
1037 return self.send_request(request)
8801038
8811039 if response["status"] >= 500:
8821040 e = S3Error(response)
8851043 warning(unicode(e))
8861044 warning("Waiting %d sec..." % self._fail_wait(retries))
8871045 time.sleep(self._fail_wait(retries))
888 return self.send_request(request, body, retries - 1)
1046 return self.send_request(request, retries - 1)
8891047 else:
8901048 raise e
8911049
8961054
8971055 def send_file(self, request, file, labels, buffer = '', throttle = 0, retries = _max_retries, offset = 0, chunk_size = -1):
8981056 method_string, resource, headers = request.get_triplet()
1057 if S3Request.region_map.get(request.resource['bucket'], None) is None:
1058 s3_uri = S3Uri('s3://' + request.resource['bucket'])
1059 region = self.get_bucket_location(s3_uri)
1060 if region is not None:
1061 S3Request.region_map[request.resource['bucket']] = region
1062
8991063 size_left = size_total = headers.get("content-length")
9001064 if self.config.progress_meter:
9011065 progress = self.config.progress_class(labels, size_total)
9021066 else:
9031067 info("Sending file '%s', please wait..." % file.name)
9041068 timestamp_start = time.time()
1069
1070 if buffer:
1071 sha256_hash = checksum_sha256_buffer(buffer, offset, size_total)
1072 else:
1073 sha256_hash = checksum_sha256_file(file.name, offset, size_total)
1074 request.body = sha256_hash
1075 method_string, resource, headers = request.get_triplet()
9051076 try:
9061077 conn = ConnMan.get(self.get_hostname(resource['bucket']))
9071078 conn.c.putrequest(method_string, self.format_uri(resource))
9851156 redir_bucket = getTextFromXml(response['data'], ".//Bucket")
9861157 redir_hostname = getTextFromXml(response['data'], ".//Endpoint")
9871158 self.set_hostname(redir_bucket, redir_hostname)
988 warning("Redirected to: %s" % (redir_hostname))
1159 info("Redirected to: %s" % (redir_hostname))
9891160 return self.send_file(request, file, labels, buffer, offset = offset, chunk_size = chunk_size)
1161
1162 if response["status"] == 400:
1163 return self._http_400_handler(request, response, self.send_file, request, file, labels, buffer, offset = offset, chunk_size = chunk_size)
1164 if response["status"] == 403:
1165 return self._http_403_handler(request, response, self.send_file, request, file, labels, buffer, offset = offset, chunk_size = chunk_size)
9901166
9911167 # S3 from time to time doesn't send ETag back in a response :-(
9921168 # Force re-upload here.
10881264 redir_bucket = getTextFromXml(response['data'], ".//Bucket")
10891265 redir_hostname = getTextFromXml(response['data'], ".//Endpoint")
10901266 self.set_hostname(redir_bucket, redir_hostname)
1091 warning("Redirected to: %s" % (redir_hostname))
1267 info("Redirected to: %s" % (redir_hostname))
10921268 return self.recv_file(request, stream, labels)
1269
1270 if response["status"] == 400:
1271 return self._http_400_handler(request, response, self.recv_file, request, stream, labels)
1272 if response["status"] == 403:
1273 return self._http_403_handler(request, response, self.recv_file, request, stream, labels)
1274 if response["status"] == 405: # Method Not Allowed. Don't retry.
1275 raise S3Error(response)
10931276
10941277 if response["status"] < 200 or response["status"] > 299:
10951278 raise S3Error(response)
11691352 except KeyError:
11701353 pass
11711354
1172 response["md5match"] = md5_hash.find(response["md5"]) >= 0
1355 response["md5match"] = response["md5"] in md5_hash
11731356 response["elapsed"] = timestamp_end - timestamp_start
11741357 response["size"] = current_position
11751358 response["speed"] = response["elapsed"] and float(response["size"]) / response["elapsed"] or float(-1)
99 from BidirMap import BidirMap
1010 from logging import debug
1111 import S3
12 from Utils import unicodise, check_bucket_name_dns_conformity
12 from Utils import unicodise, check_bucket_name_dns_conformity, check_bucket_name_dns_support
1313 import Config
1414
1515 class S3Uri(object):
5252
5353 class S3UriS3(S3Uri):
5454 type = "s3"
55 _re = re.compile("^s3://([^/]+)/?(.*)", re.IGNORECASE)
55 _re = re.compile("^s3://([^/]*)/?(.*)", re.IGNORECASE)
5656 def __init__(self, string):
5757 match = self._re.match(string)
5858 if not match:
7777 return u"/".join([u"s3:/", self._bucket, self._object])
7878
7979 def is_dns_compatible(self):
80 return check_bucket_name_dns_conformity(self._bucket)
80 return check_bucket_name_dns_support(Config.Config().host_bucket, self._bucket)
8181
8282 def public_url(self):
8383 if self.is_dns_compatible():
1111 import string
1212 import random
1313 import rfc822
14 import hmac
15 import base64
1614 import errno
1715 import urllib
1816 from calendar import timegm
4846 try:
4947 import xml.etree.ElementTree as ET
5048 except ImportError:
49 # xml.etree.ElementTree was only added in python 2.5
5150 import elementtree.ElementTree as ET
52 from xml.parsers.expat import ExpatError
5351
5452 __all__ = []
5553 def parseNodes(nodes):
7371 """
7472 removeNameSpace(xml) -- remove top-level AWS namespace
7573 """
76 r = re.compile('^(<?[^>]+?>\s?)(<\w+) xmlns=[\'"](http://[^\'"]+)[\'"](.*)', re.MULTILINE)
74 r = re.compile('^(<?[^>]+?>\s*)(<\w+) xmlns=[\'"](http://[^\'"]+)[\'"](.*)', re.MULTILINE)
7775 if r.match(xml):
7876 xmlns = r.match(xml).groups()[2]
7977 xml = r.sub("\\1\\2\\4", xml)
8987 if xmlns:
9088 tree.attrib['xmlns'] = xmlns
9189 return tree
92 except ExpatError, e:
93 error(e)
94 raise Exceptions.ParameterError("Bucket contains invalid filenames. Please run: s3cmd fixbucket s3://your-bucket/")
9590 except Exception, e:
96 error(e)
91 error("Error parsing xml: %s", e)
9792 error(xml)
9893 raise
9994
342337 return new_string
343338 __all__.append("replace_nonprintables")
344339
345 def sign_string(string_to_sign):
346 """Sign a string with the secret key, returning base64 encoded results.
347 By default the configured secret key is used, but may be overridden as
348 an argument.
349
350 Useful for REST authentication. See http://s3.amazonaws.com/doc/s3-developer-guide/RESTAuthentication.html
351 """
352 signature = base64.encodestring(hmac.new(Config.Config().secret_key, string_to_sign, sha1).digest()).strip()
353 return signature
354 __all__.append("sign_string")
355
356 def sign_url(url_to_sign, expiry):
357 """Sign a URL in s3://bucket/object form with the given expiry
358 time. The object will be accessible via the signed URL until the
359 AWS key and secret are revoked or the expiry time is reached, even
360 if the object is otherwise private.
361
362 See: http://s3.amazonaws.com/doc/s3-developer-guide/RESTAuthentication.html
363 """
364 return sign_url_base(
365 bucket = url_to_sign.bucket(),
366 object = url_to_sign.object(),
367 expiry = expiry
368 )
369 __all__.append("sign_url")
370
371 def sign_url_base(**parms):
372 """Shared implementation of sign_url methods. Takes a hash of 'bucket', 'object' and 'expiry' as args."""
373 parms['expiry']=time_to_epoch(parms['expiry'])
374 parms['access_key']=Config.Config().access_key
375 parms['host_base']=Config.Config().host_base
376 debug("Expiry interpreted as epoch time %s", parms['expiry'])
377 signtext = 'GET\n\n\n%(expiry)d\n/%(bucket)s/%(object)s' % parms
378 debug("Signing plaintext: %r", signtext)
379 parms['sig'] = urllib.quote_plus(sign_string(signtext))
380 debug("Urlencoded signature: %s", parms['sig'])
381 return "http://%(bucket)s.%(host_base)s/%(object)s?AWSAccessKeyId=%(access_key)s&Expires=%(expiry)d&Signature=%(sig)s" % parms
382
383340 def time_to_epoch(t):
384341 """Convert time specified in a variety of forms into UNIX epoch time.
385342 Accepts datetime.datetime, int, anything that has a strftime() method, and standard time 9-tuples
399356 elif isinstance(t, str) or isinstance(t, unicode):
400357 # See if it's a string representation of an epoch
401358 try:
359 # Support relative times (eg. "+60")
360 if t.startswith('+'):
361 return time.time() + int(t[1:])
402362 return int(t)
403363 except ValueError:
404364 # Try to parse it as a timestamp string
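
time_to_epoch() above now also accepts a relative offset written as '+N' seconds, which is what the signurl command's new '+expiry_offset' form relies on. The same parsing rule in isolation (illustrative only, independent of s3cmd):

    import time

    def expiry_to_epoch(t):
        # "+3600" means one hour from now; anything else is treated as an absolute epoch
        if t.startswith('+'):
            return time.time() + int(t[1:])
        return int(t)
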
446406 return False
447407 __all__.append("check_bucket_name_dns_conformity")
448408
409 def check_bucket_name_dns_support(bucket_host, bucket_name):
410 """
411 Check whether host_bucket supports dns-style bucket addressing and
412 the bucket name itself is dns compatible
413 """
414 if "%(bucket)s" not in bucket_host:
415 return False
416
417 try:
418 return check_bucket_name(bucket_name, dns_strict = True)
419 except Exceptions.ParameterError:
420 return False
421 __all__.append("check_bucket_name_dns_support")
422
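
check_bucket_name_dns_support() above only allows virtual-host style addressing when host_bucket actually contains the %(bucket)s placeholder and the bucket name passes the strict DNS check. Hedged example calls (bucket names invented for illustration):

    check_bucket_name_dns_support("%(bucket)s.s3.amazonaws.com", "my-bucket")   # True
    check_bucket_name_dns_support("s3.example.com", "my-bucket")                # False: no %(bucket)s placeholder
    check_bucket_name_dns_support("%(bucket)s.s3.amazonaws.com", "My_Bucket")   # False: fails the strict DNS check
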
449423 def getBucketFromHostname(hostname):
450424 """
451425 bucket, success = getBucketFromHostname(hostname)
+90
-61
s3cmd
0 #!/usr/bin/env python
0 #!/usr/bin/env python2
11
22 ## --------------------------------------------------------------------
33 ## s3cmd - S3 client
1919
2020 import sys
2121
22 if float("%d.%d" %(sys.version_info[0], sys.version_info[1])) < 2.4:
23 sys.stderr.write(u"ERROR: Python 2.4 or higher required, sorry.\n")
22 if float("%d.%d" %(sys.version_info[0], sys.version_info[1])) < 2.6:
23 sys.stderr.write(u"ERROR: Python 2.6 or higher required, sorry.\n")
2424 sys.exit(EX_OSFILE)
2525
2626 import logging
123123 if uri.type == "s3" and uri.has_bucket():
124124 subcmd_bucket_list(s3, uri)
125125 return EX_OK
126 subcmd_buckets_list_all(s3)
127 return EX_OK
128
129 def cmd_buckets_list_all_all(args):
126
127 # If not an s3 type uri or no bucket was provided, list all the buckets
128 subcmd_all_buckets_list(s3)
129 return EX_OK
130
131 def subcmd_all_buckets_list(s3):
132
133 response = s3.list_all_buckets()
134
135 for bucket in sorted(response["list"], key=lambda b:b["Name"]):
136 output(u"%s s3://%s" % (formatDateTime(bucket["CreationDate"]),
137 bucket["Name"]))
138
139 def cmd_all_buckets_list_all_content(args):
130140 s3 = S3(Config())
131141
132142 response = s3.list_all_buckets()
135145 subcmd_bucket_list(s3, S3Uri("s3://" + bucket["Name"]))
136146 output(u"")
137147 return EX_OK
138
139 def subcmd_buckets_list_all(s3):
140 response = s3.list_all_buckets()
141 for bucket in response["list"]:
142 output(u"%s s3://%s" % (
143 formatDateTime(bucket["CreationDate"]),
144 bucket["Name"],
145 ))
146148
147149 def subcmd_bucket_list(s3, uri):
148150 bucket = uri.bucket()
172174 "uri": uri.compose_uri(bucket, prefix["Prefix"])})
173175
174176 for object in response["list"]:
175 md5 = object['ETag'].strip('"')
177 md5 = object['ETag'].strip('"\'')
176178 if cfg.list_md5:
177 if md5.find('-') >= 0: # need to get md5 from the object
179 if '-' in md5: # need to get md5 from the object
178180 object_uri = uri.compose_uri(bucket, object["Key"])
179181 info_response = s3.object_info(S3Uri(object_uri))
180182 try:
464466 if destination_base[-1] != os.path.sep:
465467 destination_base += os.path.sep
466468 for key in remote_list:
467 remote_list[key]['local_filename'] = destination_base + key
469 local_filename = destination_base + key
470 if os.path.sep != "/":
471 local_filename = os.path.sep.join(local_filename.split("/"))
472 remote_list[key]['local_filename'] = deunicodise(local_filename)
468473 else:
469474 raise InternalError("WTF? Is it a dir or not? -- %s" % destination_base)
470475
752757
753758 def cmd_modify(args):
754759 s3 = S3(Config())
755 return subcmd_cp_mv(args, s3.object_copy, "modify", u"File %(src)s modified")
760 return subcmd_cp_mv(args, s3.object_modify, "modify", u"File %(src)s modified")
756761
757762 def cmd_mv(args):
758763 s3 = S3(Config())
774779 output(u" File size: %s" % info['headers']['content-length'])
775780 output(u" Last mod: %s" % info['headers']['last-modified'])
776781 output(u" MIME type: %s" % info['headers']['content-type'])
777 md5 = info['headers']['etag'].strip('"')
782 md5 = info['headers']['etag'].strip('"\'')
778783 try:
779784 md5 = info['s3cmd-attrs']['md5']
780785 except KeyError:
916921 failed_copy_files[key]['target_uri'] = destination_base + key
917922 seq = _upload(failed_copy_files, seq, failed_copy_count)
918923
919 total_elapsed = time.time() - timestamp_start
920 if total_elapsed == 0.0:
921 total_elapsed = 1.0
924 total_elapsed = max(1.0, time.time() - timestamp_start)
922925 outstr = "Done. Copied %d files in %0.1f seconds, %0.2f files/s" % (seq, total_elapsed, seq/total_elapsed)
923926 if seq > 0:
924927 output(outstr)
10431046 except OSError, e:
10441047 if e.errno == errno.EISDIR:
10451048 warning(u"%s is a directory - skipping over" % unicodise(dst_file))
1049 continue
1050 elif e.errno == errno.ETXTBSY:
1051 warning(u"%s is currently open for execute, cannot be overwritten. Skipping over." % unicodise(dst_file))
1052 os.unlink(chkptfname)
10461053 continue
10471054 else:
10481055 raise
11471154 _set_local_filename(failed_copy_list, destination_base)
11481155 seq, total_size = _download(failed_copy_list, seq, len(failed_copy_list) + remote_count + update_count, total_size, dir_cache)
11491156
1150 total_elapsed = time.time() - timestamp_start
1157 total_elapsed = max(1.0, time.time() - timestamp_start)
11511158 speed_fmt = formatSize(total_size/total_elapsed, human_readable = True, floating_point = True)
11521159
11531160 # Only print out the result if any work has been done or
13981405
13991406 if cfg.delete_removed and cfg.delete_after and remote_list:
14001407 subcmd_batch_del(remote_list = remote_list)
1401 total_elapsed = time.time() - timestamp_start
1408 total_elapsed = max(1.0, time.time() - timestamp_start)
14021409 total_speed = total_elapsed and total_size/total_elapsed or 0.0
14031410 speed_fmt = formatSize(total_speed, human_readable = True, floating_point = True)
14041411
14051412 # Only print out the result if any work has been done or
14061413 # if the user asked for verbose output
1407 outstr = "Done. Uploaded %d bytes in %0.1f seconds, %0.2f %sB/s. Copied %d files saving %d bytes transfer." % (total_size, total_elapsed, speed_fmt[0], speed_fmt[1], n_copies, saved_bytes)
1414 outstr = "Done. Uploaded %d bytes in %0.1f seconds, %0.2f %sB/s. Copied %d files saving %d bytes transfer." % (total_size, total_elapsed, speed_fmt[0], speed_fmt[1], n_copies, saved_bytes)
14081415 if total_size + saved_bytes > 0:
14091416 output(outstr)
14101417 else:
16441651 def cmd_sign(args):
16451652 string_to_sign = args.pop()
16461653 debug("string-to-sign: %r" % string_to_sign)
1647 signature = Utils.sign_string(string_to_sign)
1654 signature = Crypto.sign_string_v2(string_to_sign)
16481655 output("Signature: %s" % signature)
16491656 return EX_OK
16501657
16541661 if url_to_sign.type != 's3':
16551662 raise ParameterError("Must be S3Uri. Got: %s" % url_to_sign)
16561663 debug("url to sign: %r" % url_to_sign)
1657 signed_url = Utils.sign_url(url_to_sign, expiry)
1664 signed_url = Crypto.sign_url_v2(url_to_sign, expiry)
16581665 output(signed_url)
16591666 return EX_OK
16601667
17341741
17351742 def gpg_command(command, passphrase = ""):
17361743 debug("GPG command: " + " ".join(command))
1737 p = subprocess.Popen(command, stdin = subprocess.PIPE, stdout = subprocess.PIPE, stderr = subprocess.STDOUT)
1744 p = subprocess.Popen(command, stdin = subprocess.PIPE, stdout = subprocess.PIPE, stderr = subprocess.STDOUT,
1745 close_fds = True)
17381746 p_stdout, p_stderr = p.communicate(passphrase + "\n")
17391747 debug("GPG output:")
17401748 for line in p_stdout.split("\n"):
17781786 options = [
17791787 ("access_key", "Access Key", "Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables."),
17801788 ("secret_key", "Secret Key"),
1789 ("bucket_location", "Default Region"),
17811790 ("gpg_passphrase", "Encryption password", "Encryption password is used to protect your files from reading\nby unauthorized persons while in transfer to S3"),
17821791 ("gpg_command", "Path to GPG program"),
1783 ("use_https", "Use HTTPS protocol", "When using secure HTTPS protocol all communication with Amazon S3\nservers is protected from 3rd party eavesdropping. This method is\nslower than plain HTTP and can't be used if you're behind a proxy"),
1792 ("use_https", "Use HTTPS protocol", "When using secure HTTPS protocol all communication with Amazon S3\nservers is protected from 3rd party eavesdropping. This method is\nslower than plain HTTP, and can only be proxied with Python 2.7 or newer"),
17841793 ("proxy_host", "HTTP Proxy server name", "On some networks all internet access must go through a HTTP proxy.\nTry setting it here if you can't connect to S3 directly"),
17851794 ("proxy_port", "HTTP Proxy server port"),
17861795 ]
18011810 for option in options:
18021811 prompt = option[1]
18031812 ## Option-specific handling
1804 if option[0] == 'proxy_host' and getattr(cfg, 'use_https') == True:
1813 if option[0] == 'proxy_host' and getattr(cfg, 'use_https') == True and sys.hexversion < 0x02070000:
18051814 setattr(cfg, option[0], "")
18061815 continue
18071816 if option[0] == 'proxy_port' and getattr(cfg, 'proxy_host') == "":
19691978 {"cmd":"mb", "label":"Make bucket", "param":"s3://BUCKET", "func":cmd_bucket_create, "argc":1},
19701979 {"cmd":"rb", "label":"Remove bucket", "param":"s3://BUCKET", "func":cmd_bucket_delete, "argc":1},
19711980 {"cmd":"ls", "label":"List objects or buckets", "param":"[s3://BUCKET[/PREFIX]]", "func":cmd_ls, "argc":0},
1972 {"cmd":"la", "label":"List all object in all buckets", "param":"", "func":cmd_buckets_list_all_all, "argc":0},
1981 {"cmd":"la", "label":"List all object in all buckets", "param":"", "func":cmd_all_buckets_list_all_content, "argc":0},
19731982 {"cmd":"put", "label":"Put file into bucket", "param":"FILE [FILE...] s3://BUCKET[/PREFIX]", "func":cmd_object_put, "argc":2},
19741983 {"cmd":"get", "label":"Get file from bucket", "param":"s3://BUCKET/OBJECT LOCAL_FILE", "func":cmd_object_get, "argc":1},
19751984 {"cmd":"del", "label":"Delete file from bucket", "param":"s3://BUCKET/OBJECT", "func":cmd_object_del, "argc":1},
19761985 {"cmd":"rm", "label":"Delete file from bucket (alias for del)", "param":"s3://BUCKET/OBJECT", "func":cmd_object_del, "argc":1},
19771986 #{"cmd":"mkdir", "label":"Make a virtual S3 directory", "param":"s3://BUCKET/path/to/dir", "func":cmd_mkdir, "argc":1},
19781987 {"cmd":"restore", "label":"Restore file from Glacier storage", "param":"s3://BUCKET/OBJECT", "func":cmd_object_restore, "argc":1},
1979 {"cmd":"sync", "label":"Synchronize a directory tree to S3", "param":"LOCAL_DIR s3://BUCKET[/PREFIX] or s3://BUCKET[/PREFIX] LOCAL_DIR", "func":cmd_sync, "argc":2},
1988 {"cmd":"sync", "label":"Synchronize a directory tree to S3 (checks files freshness using size and md5 checksum, unless overriden by options, see below)", "param":"LOCAL_DIR s3://BUCKET[/PREFIX] or s3://BUCKET[/PREFIX] LOCAL_DIR", "func":cmd_sync, "argc":2},
19801989 {"cmd":"du", "label":"Disk usage by buckets", "param":"[s3://BUCKET[/PREFIX]]", "func":cmd_du, "argc":0},
19811990 {"cmd":"info", "label":"Get various information about Buckets or Files", "param":"s3://BUCKET[/OBJECT]", "func":cmd_info, "argc":1},
19821991 {"cmd":"cp", "label":"Copy object", "param":"s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]", "func":cmd_cp, "argc":2},
19871996 {"cmd":"setpolicy", "label":"Modify Bucket Policy", "param":"FILE s3://BUCKET", "func":cmd_setpolicy, "argc":2},
19881997 {"cmd":"delpolicy", "label":"Delete Bucket Policy", "param":"s3://BUCKET", "func":cmd_delpolicy, "argc":1},
19891998
1990 {"cmd":"multipart", "label":"show multipart uploads", "param":"s3://BUCKET [Id]", "func":cmd_multipart, "argc":1},
1991 {"cmd":"abortmp", "label":"abort a multipart upload", "param":"s3://BUCKET/OBJECT Id", "func":cmd_abort_multipart, "argc":2},
1992
1993 {"cmd":"listmp", "label":"list parts of a multipart upload", "param":"s3://BUCKET/OBJECT Id", "func":cmd_list_multipart, "argc":2},
1999 {"cmd":"multipart", "label":"Show multipart uploads", "param":"s3://BUCKET [Id]", "func":cmd_multipart, "argc":1},
2000 {"cmd":"abortmp", "label":"Abort a multipart upload", "param":"s3://BUCKET/OBJECT Id", "func":cmd_abort_multipart, "argc":2},
2001
2002 {"cmd":"listmp", "label":"List parts of a multipart upload", "param":"s3://BUCKET/OBJECT Id", "func":cmd_list_multipart, "argc":2},
19942003
19952004 {"cmd":"accesslog", "label":"Enable/disable bucket access logging", "param":"s3://BUCKET", "func":cmd_accesslog, "argc":1},
19962005 {"cmd":"sign", "label":"Sign arbitrary string using the secret key", "param":"STRING-TO-SIGN", "func":cmd_sign, "argc":1},
1997 {"cmd":"signurl", "label":"Sign an S3 URL to provide limited public access with expiry", "param":"s3://BUCKET/OBJECT expiry_epoch", "func":cmd_signurl, "argc":2},
2006 {"cmd":"signurl", "label":"Sign an S3 URL to provide limited public access with expiry", "param":"s3://BUCKET/OBJECT <expiry_epoch|+expiry_offset>", "func":cmd_signurl, "argc":2},
19982007 {"cmd":"fixbucket", "label":"Fix invalid file names in a bucket", "param":"s3://BUCKET[/PREFIX]", "func":cmd_fixbucket, "argc":1},
19992008
20002009 ## Website commands
21212130 from os.path import expanduser
21222131 config_file = os.path.join(expanduser("~"), ".s3cfg")
21232132
2124 preferred_encoding = locale.getpreferredencoding() or "UTF-8"
2125
2126 optparser.set_defaults(encoding = preferred_encoding)
2133 autodetected_encoding = locale.getpreferredencoding() or "UTF-8"
2134
21272135 optparser.set_defaults(config = config_file)
21282136
21292137 optparser.add_option( "--configure", dest="run_configure", action="store_true", help="Invoke interactive (re)configuration tool. Optionally use as '--configure s3://some-bucket' to test access to a specific bucket instead of attempting to list them all.")
2130 optparser.add_option("-c", "--config", dest="config", metavar="FILE", help="Config file name. Defaults to %default")
2138 optparser.add_option("-c", "--config", dest="config", metavar="FILE", help="Config file name. Defaults to $HOME/.s3cfg")
21312139 optparser.add_option( "--dump-config", dest="dump_config", action="store_true", help="Dump current configuration after parsing config files and command line options and exit.")
21322140 optparser.add_option( "--access_key", dest="access_key", help="AWS Access Key")
21332141 optparser.add_option( "--secret_key", dest="secret_key", help="AWS Secret Key")
21342142
21352143 optparser.add_option("-n", "--dry-run", dest="dry_run", action="store_true", help="Only show what should be uploaded or downloaded but don't actually do it. May still perform S3 requests to get bucket listings and other information though (only for file transfer commands)")
21362144
2145 optparser.add_option("-s", "--ssl", dest="use_https", action="store_true", help="Use HTTPS connection when communicating with S3.")
2146 optparser.add_option( "--no-ssl", dest="use_https", action="store_false", help="Don't use HTTPS. (default)")
21372147 optparser.add_option("-e", "--encrypt", dest="encrypt", action="store_true", help="Encrypt files before uploading to S3.")
21382148 optparser.add_option( "--no-encrypt", dest="encrypt", action="store_false", help="Don't encrypt files.")
21392149 optparser.add_option("-f", "--force", dest="force", action="store_true", help="Force overwrite and other dangerous operations.")
21712181 optparser.add_option( "--ignore-failed-copy", dest="ignore_failed_copy", action="store_true", help="Don't exit unsuccessfully because of missing keys")
21722182
21732183 optparser.add_option( "--files-from", dest="files_from", action="append", metavar="FILE", help="Read list of source-file names from FILE. Use - to read from stdin.")
2174 optparser.add_option( "--bucket-location", dest="bucket_location", help="Datacentre to create bucket in. As of now the datacenters are: US (default), EU, ap-northeast-1, ap-southeast-1, sa-east-1, us-west-1 and us-west-2")
2184 optparser.add_option( "--region", "--bucket-location", metavar="REGION", dest="bucket_location", help="Region to create bucket in. As of now the regions are: us-east-1, us-west-1, us-west-2, eu-west-1, eu-central-1, ap-northeast-1, ap-southeast-1, ap-southeast-2, sa-east-1")
21752185 optparser.add_option( "--reduced-redundancy", "--rr", dest="reduced_redundancy", action="store_true", help="Store object with 'Reduced redundancy'. Lower per-GB price. [put, cp, mv]")
2186 optparser.add_option( "--no-reduced-redundancy", "--no-rr", dest="reduced_redundancy", action="store_false", help="Store object without 'Reduced redundancy'. Higher per-GB price. [put, cp, mv]")
21762187
21772188 optparser.add_option( "--access-logging-target-prefix", dest="log_target_prefix", help="Target prefix for access logs (S3 URI) (for [cfmodify] and [accesslog] commands)")
21782189 optparser.add_option( "--no-access-logging", dest="log_target_prefix", action="store_false", help="Disable access logging (for [cfmodify] and [accesslog] commands)")
21842195 optparser.add_option("-m", "--mime-type", dest="mime_type", type="mimetype", metavar="MIME/TYPE", help="Force MIME-type. Override both --default-mime-type and --guess-mime-type.")
21852196
21862197 optparser.add_option( "--add-header", dest="add_header", action="append", metavar="NAME:VALUE", help="Add a given HTTP header to the upload request. Can be used multiple times. For instance set 'Expires' or 'Cache-Control' headers (or both) using this option.")
2187
2188 optparser.add_option( "--server-side-encryption", dest="server_side_encryption", action="store_true", help="Specifies that server-side encryption will be used when putting objects.")
2189
2190 optparser.add_option( "--encoding", dest="encoding", metavar="ENCODING", help="Override autodetected terminal and filesystem encoding (character set). Autodetected: %s" % preferred_encoding)
2198 optparser.add_option( "--remove-header", dest="remove_headers", action="append", metavar="NAME", help="Remove a given HTTP header. Can be used multiple times. For instance, remove 'Expires' or 'Cache-Control' headers (or both) using this option. [modify]")
2199
2200 optparser.add_option( "--server-side-encryption", dest="server_side_encryption", action="store_true", help="Specifies that server-side encryption will be used when putting objects. [put, sync, cp, modify]")
2201
2202 optparser.add_option( "--encoding", dest="encoding", metavar="ENCODING", help="Override autodetected terminal and filesystem encoding (character set). Autodetected: %s" % autodetected_encoding)
21912203 optparser.add_option( "--add-encoding-exts", dest="add_encoding_exts", metavar="EXTENSIONs", help="Add encoding to these comma delimited extensions i.e. (css,js,html) when uploading to S3 )")
21922204 optparser.add_option( "--verbatim", dest="urlencoding_mode", action="store_const", const="verbatim", help="Use the S3 name as given on the command line. No pre-processing, encoding, etc. Use with caution!")
21932205
22232235 optparser.add_option("-F", "--follow-symlinks", dest="follow_symlinks", action="store_true", default=False, help="Follow symbolic links as if they are regular files")
22242236 optparser.add_option( "--cache-file", dest="cache_file", action="store", default="", metavar="FILE", help="Cache FILE containing local source MD5 values")
22252237 optparser.add_option("-q", "--quiet", dest="quiet", action="store_true", default=False, help="Silence output on stdout")
2238 optparser.add_option("--ca-certs", dest="ca_certs_file", action="store", default=None, help="Path to SSL CA certificate FILE (instead of system default)")
2239 optparser.add_option("--check-certificate", dest="check_ssl_certificate", action="store_true", help="Check SSL certificate validity")
2240 optparser.add_option("--no-check-certificate", dest="check_ssl_certificate", action="store_false", help="Check SSL certificate validity")
2241 optparser.add_option("--signature-v2", dest="signature_v2", action="store_true", help="Use AWS Signature version 2 instead of newer signature methods. Helpful for S3-like systems that don't have AWS Signature v4 yet.")
22262242
22272243 optparser.set_usage(optparser.usage + " COMMAND [parameters]")
22282244 optparser.set_description('S3cmd is a tool for managing objects in '+
22992315 key_inval = key_inval.replace(" ", "<space>")
23002316 key_inval = key_inval.replace("\t", "<tab>")
23012317 raise ParameterError("Invalid character(s) in header name '%s': \"%s\"" % (key, key_inval))
2302 debug(u"Updating Config.Config extra_headers[%s] -> %s" % (key.strip(), val.strip()))
2303 cfg.extra_headers[key.strip()] = val.strip()
2318 debug(u"Updating Config.Config extra_headers[%s] -> %s" % (key.strip().lower(), val.strip()))
2319 cfg.extra_headers[key.strip().lower()] = val.strip()
2320
2321 # Process --remove-header
2322 if options.remove_headers:
2323 cfg.remove_headers = options.remove_headers
23042324
23052325 ## --acl-grant/--acl-revoke arguments are pre-parsed by OptionS3ACL()
23062326 if options.acl_grants:
24362456 error(u"Not enough parameters for command '%s'" % command)
24372457 sys.exit(EX_USAGE)
24382458
2439 try:
2440 rc = cmd_func(args)
2441 if rc is None: # if we missed any cmd_*() returns
2442 rc = EX_GENERAL
2443 return rc
2444 except S3Error, e:
2445 error(u"S3 error: %s" % e)
2446 sys.exit(EX_SOFTWARE)
2459 rc = cmd_func(args)
2460 if rc is None: # if we missed any cmd_*() returns
2461 rc = EX_GENERAL
2462 return rc
24472463
24482464 def report_exception(e, msg=''):
24492465 sys.stderr.write(u"""
24612477 !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
24622478
24632479 """ % msg)
2464 s = u' '.join([unicodise(a) for a in sys.argv])
2480 tb = traceback.format_exc(sys.exc_info())
2481 try:
2482 s = u' '.join([unicodise(a) for a in sys.argv])
2483 except NameError:
2484 s = u' '.join([(a) for a in sys.argv])
24652485 sys.stderr.write(u"Invoked as: %s\n" % s)
24662486
2467 tb = traceback.format_exc(sys.exc_info())
24682487 e_class = str(e.__class__)
24692488 e_class = e_class[e_class.rfind(".")+1 : -2]
24702489 sys.stderr.write(u"Problem: %s: %s\n" % (e_class, e))
25122531 from S3.FileDict import FileDict
25132532 from S3.S3Uri import S3Uri
25142533 from S3 import Utils
2534 from S3 import Crypto
25152535 from S3.Utils import *
25162536 from S3.Progress import Progress
25172537 from S3.CloudFront import Cmd as CfCmd
25182538 from S3.CloudFront import CloudFront
25192539 from S3.FileLists import *
25202540 from S3.MultiPart import MultiPartUpload
2521
2541 except Exception as e:
2542 report_exception(e, "Error loading some components of s3cmd (Import Error)")
2543 # 1 = EX_GENERAL, hardcoded here since ExitCodes may not have been imported
2544 sys.exit(1)
2545
2546 try:
25222547 rc = main()
25232548 sys.exit(rc)
25242549
25342559 error(u"S3 Temporary Error: %s. Please try again later." % e)
25352560 sys.exit(EX_TEMPFAIL)
25362561
2537 except (S3Error, S3Exception, S3ResponseError, CloudFrontError), e:
2562 except S3Error, e:
2563 error(u"S3 error: %s" % e)
2564 sys.exit(e.get_error_code())
2565
2566 except (S3Exception, S3ResponseError, CloudFrontError), e:
25382567 report_exception(e)
25392568 sys.exit(EX_SOFTWARE)
25402569
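Note: with the handlers above, a caught S3Error now terminates the process with the status returned by get_error_code() instead of the generic EX_SOFTWARE. A minimal, illustrative sketch of a caller that branches on that status (the bucket name and the subprocess wrapper are examples, not part of s3cmd):

    # Illustrative wrapper: run s3cmd and inspect its exit status,
    # which after this change reflects the S3 error rather than a
    # one-size-fits-all failure code.
    import subprocess

    rc = subprocess.call(["s3cmd", "ls", "s3://example-bucket"])
    if rc != 0:
        print("s3cmd exited with status %d" % rc)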
4949 Restore file from Glacier storage
5050 .TP
5151 s3cmd \fBsync\fR \fILOCAL_DIR s3://BUCKET[/PREFIX] or s3://BUCKET[/PREFIX] LOCAL_DIR\fR
52 Synchronize a directory tree to S3
52 Synchronize a directory tree to S3 (checks file freshness using size and MD5 checksum, unless overridden by options; see below)
5353 .TP
5454 s3cmd \fBdu\fR \fI[s3://BUCKET[/PREFIX]]\fR
5555 Disk usage by buckets
7676 Delete Bucket Policy
7777 .TP
7878 s3cmd \fBmultipart\fR \fIs3://BUCKET [Id]\fR
79 show multipart uploads
79 Show multipart uploads
8080 .TP
8181 s3cmd \fBabortmp\fR \fIs3://BUCKET/OBJECT Id\fR
82 abort a multipart upload
82 Abort a multipart upload
8383 .TP
8484 s3cmd \fBlistmp\fR \fIs3://BUCKET/OBJECT Id\fR
85 list parts of a multipart upload
85 List parts of a multipart upload
8686 .TP
8787 s3cmd \fBaccesslog\fR \fIs3://BUCKET\fR
8888 Enable/disable bucket access logging
9090 s3cmd \fBsign\fR \fISTRING-TO-SIGN\fR
9191 Sign arbitrary string using the secret key
9292 .TP
93 s3cmd \fBsignurl\fR \fIs3://BUCKET/OBJECT expiry_epoch\fR
93 s3cmd \fBsignurl\fR \fIs3://BUCKET/OBJECT <expiry_epoch|+expiry_offset>\fR
9494 Sign an S3 URL to provide limited public access with expiry
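For example, assuming the relative form is an offset in seconds from now (the bucket and object names are illustrative), a URL valid for one hour could be generated with:

    s3cmd signurl s3://example-bucket/backup.tar.gz +3600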
9595 .TP
9696 s3cmd \fBfixbucket\fR \fIs3://BUCKET[/PREFIX]\fR
160160 them all.
161161 .TP
162162 \fB\-c\fR FILE, \fB\-\-config\fR=FILE
163 Config file name. Defaults to /home/mludvig/.s3cfg
163 Config file name. Defaults to $HOME/.s3cfg
164164 .TP
165165 \fB\-\-dump\-config\fR
166166 Dump current configuration after parsing config files
177177 don't actually do it. May still perform S3 requests to
178178 get bucket listings and other information though (only
179179 for file transfer commands)
180 .TP
181 \fB\-s\fR, \fB\-\-ssl\fR
182 Use HTTPS connection when communicating with S3.
183 .TP
184 \fB\-\-no\-ssl\fR
185 Don't use HTTPS. (default)
180186 .TP
181187 \fB\-e\fR, \fB\-\-encrypt\fR
182188 Encrypt files before uploading to S3.
310316 Read list of source-file names from FILE. Use - to
311317 read from stdin.
312318 .TP
313 \fB\-\-bucket\-location\fR=BUCKET_LOCATION
314 Datacentre to create bucket in. As of now the
315 datacenters are: US (default), EU, ap-northeast-1, ap-
316 southeast-1, sa-east-1, us-west-1 and us-west-2
319 \fB\-\-region\fR=REGION, \fB\-\-bucket\-location\fR=REGION
320 Region to create bucket in. As of now the regions are:
321 us-east-1, us-west-1, us-west-2, eu-west-1, eu-
322 central-1, ap-northeast-1, ap-southeast-1, ap-
323 southeast-2, sa-east-1
317324 .TP
318325 \fB\-\-reduced\-redundancy\fR, \fB\-\-rr\fR
319326 Store object with 'Reduced redundancy'. Lower per-GB
320327 price. [put, cp, mv]
328 .TP
329 \fB\-\-no\-reduced\-redundancy\fR, \fB\-\-no\-rr\fR
330 Store object without 'Reduced redundancy'. Higher per-
331 GB price. [put, cp, mv]
321332 .TP
322333 \fB\-\-access\-logging\-target\-prefix\fR=LOG_TARGET_PREFIX
323334 Target prefix for access logs (S3 URI) (for [cfmodify]
352363 used multiple times. For instance set 'Expires' or
353364 'Cache-Control' headers (or both) using this option.
354365 .TP
366 \fB\-\-remove\-header\fR=NAME
367 Remove a given HTTP header. Can be used multiple
368 times. For instance, remove 'Expires' or 'Cache-
369 Control' headers (or both) using this option. [modify]
370 .TP
355371 \fB\-\-server\-side\-encryption\fR
356372 Specifies that server-side encryption will be used
357 when putting objects.
373 when putting objects. [put, sync, cp, modify]
358374 .TP
359375 \fB\-\-encoding\fR=ENCODING
360376 Override autodetected terminal and filesystem encoding
460476 Enable debug output.
461477 .TP
462478 \fB\-\-version\fR
463 Show s3cmd version (1.5.0-rc1) and exit.
479 Show s3cmd version (1.5.2) and exit.
464480 .TP
465481 \fB\-F\fR, \fB\-\-follow\-symlinks\fR
466482 Follow symbolic links as if they are regular files
470486 .TP
471487 \fB\-q\fR, \fB\-\-quiet\fR
472488 Silence output on stdout
489 .TP
490 \fB\-\-ca\-certs\fR=CA_CERTS_FILE
491 Path to SSL CA certificate FILE (instead of system
492 default)
493 .TP
494 \fB\-\-check\-certificate\fR
495 Check SSL certificate validity
496 .TP
497 \fB\-\-no\-check\-certificate\fR
498 Do not check SSL certificate validity
499 .TP
500 \fB\-\-signature\-v2\fR
501 Use AWS Signature version 2 instead of newer signature
502 methods. Helpful for S3-like systems that don't have
503 AWS Signature v4 yet.
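As an illustrative combination of the certificate-related options above (the CA bundle path and bucket name are placeholders):

    s3cmd --ssl --ca-certs=/path/to/ca-bundle.crt --check-certificate ls s3://example-bucket

--no-check-certificate can be passed instead when an S3-compatible endpoint uses a self-signed certificate that should not be validated.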
473504
474505
475506 .SH EXAMPLES
0 Metadata-Version: 1.1
1 Name: s3cmd
2 Version: 1.5.2
3 Summary: Command line tool for managing Amazon S3 and CloudFront services
4 Home-page: http://s3tools.org
5 Author: github.com/mdomsch, github.com/matteobar
6 Author-email: s3tools-bugs@lists.sourceforge.net
7 License: GNU GPL v2+
8 Description:
9
10 S3cmd lets you copy files from/to Amazon S3
11 (Simple Storage Service) using a simple to use
12 command line client. Supports rsync-like backup,
13 GPG encryption, and more. Also supports management
14 of Amazon's CloudFront content delivery network.
15
16
17 Authors:
18 --------
19 Michal Ludvig <michal@logix.cz>
20
21 Platform: UNKNOWN
22 Classifier: Development Status :: 5 - Production/Stable
23 Classifier: Environment :: Console
24 Classifier: Environment :: MacOS X
25 Classifier: Environment :: Win32 (MS Windows)
26 Classifier: Intended Audience :: End Users/Desktop
27 Classifier: Intended Audience :: System Administrators
28 Classifier: License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+)
29 Classifier: Natural Language :: English
30 Classifier: Operating System :: MacOS :: MacOS X
31 Classifier: Operating System :: Microsoft :: Windows
32 Classifier: Operating System :: POSIX
33 Classifier: Operating System :: Unix
34 Classifier: Programming Language :: Python :: 2.6
35 Classifier: Programming Language :: Python :: 2.7
36 Classifier: Programming Language :: Python :: 2 :: Only
37 Classifier: Topic :: System :: Archiving
38 Classifier: Topic :: Utilities
0 INSTALL
1 MANIFEST.in
2 NEWS
3 README.md
4 s3cmd
5 s3cmd.1
6 setup.cfg
7 setup.py
8 S3/ACL.py
9 S3/AccessLog.py
10 S3/BidirMap.py
11 S3/CloudFront.py
12 S3/Config.py
13 S3/ConnMan.py
14 S3/Crypto.py
15 S3/Exceptions.py
16 S3/ExitCodes.py
17 S3/FileDict.py
18 S3/FileLists.py
19 S3/HashCache.py
20 S3/MultiPart.py
21 S3/PkgInfo.py
22 S3/Progress.py
23 S3/S3.py
24 S3/S3Uri.py
25 S3/SortedDict.py
26 S3/Utils.py
27 S3/__init__.py
28 s3cmd.egg-info/PKG-INFO
29 s3cmd.egg-info/SOURCES.txt
30 s3cmd.egg-info/dependency_links.txt
31 s3cmd.egg-info/requires.txt
32 s3cmd.egg-info/top_level.txt
0 python-dateutil
1 python-magic
00 [sdist]
11 formats = gztar,zip
2
3 [egg_info]
4 tag_build =
5 tag_date = 0
6 tag_svn_revision = 0
7
0 from distutils.core import setup
10 import sys
21 import os
32
3 from setuptools import setup, find_packages
4
45 import S3.PkgInfo
56
6 if float("%d.%d" % sys.version_info[:2]) < 2.4:
7 if float("%d.%d" % sys.version_info[:2]) < 2.6:
78 sys.stderr.write("Your Python version %d.%d.%d is not supported.\n" % sys.version_info[:3])
8 sys.stderr.write("S3cmd requires Python 2.4 or newer.\n")
9 sys.stderr.write("S3cmd requires Python 2.6 or newer.\n")
910 sys.exit(1)
1011
1112 try:
4647 man_path = os.getenv("S3CMD_INSTPATH_MAN") or "share/man"
4748 doc_path = os.getenv("S3CMD_INSTPATH_DOC") or "share/doc/packages"
4849 data_files = [
49 (doc_path+"/s3cmd", [ "README", "INSTALL", "NEWS" ]),
50 (doc_path+"/s3cmd", [ "README.md", "INSTALL", "NEWS" ]),
5051 (man_path+"/man1", [ "s3cmd.1" ] ),
5152 ]
5253 else:
6465 ## Packaging details
6566 author = "Michal Ludvig",
6667 author_email = "michal@logix.cz",
68 maintainer = "github.com/mdomsch, github.com/matteobar",
69 maintainer_email = "s3tools-bugs@lists.sourceforge.net",
6770 url = S3.PkgInfo.url,
6871 license = S3.PkgInfo.license,
6972 description = S3.PkgInfo.short_description,
7477 --------
7578 Michal Ludvig <michal@logix.cz>
7679 """ % (S3.PkgInfo.long_description),
77 requires=["dateutil"]
80
81 classifiers = [
82 'Development Status :: 5 - Production/Stable',
83 'Environment :: Console',
84 'Environment :: MacOS X',
85 'Environment :: Win32 (MS Windows)',
86 'Intended Audience :: End Users/Desktop',
87 'Intended Audience :: System Administrators',
88 'License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+)',
89 'Natural Language :: English',
90 'Operating System :: MacOS :: MacOS X',
91 'Operating System :: Microsoft :: Windows',
92 'Operating System :: POSIX',
93 'Operating System :: Unix',
94 'Programming Language :: Python :: 2.6',
95 'Programming Language :: Python :: 2.7',
96 'Programming Language :: Python :: 2 :: Only',
97 'Topic :: System :: Archiving',
98 'Topic :: Utilities',
99 ],
100
101 install_requires = ["python-dateutil", "python-magic"]
78102 )
79103
80104 # vim:et:ts=4:sts=4:ai
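With setup.py now based on setuptools and declaring install_requires, a typical installation (the exact commands depend on the environment; pip and network access are assumed) pulls in python-dateutil and python-magic automatically:

    pip install s3cmd
    # or, from an unpacked source tree:
    python setup.py install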