Update upstream source from tag 'upstream/2.1.0'
Update to upstream version '2.1.0'
with Debian dir bddc557a223bb8ac9cf06f0ead0a801384e05971
Gianfranco Costamagna
3 years ago
0 | Installation of s3cmd package | |
1 | ============================= | |
2 | ||
3 | Copyright: | |
4 | TGRMN Software and contributors | |
5 | ||
6 | S3tools / S3cmd project homepage: | |
7 | http://s3tools.org | |
8 | ||
9 | !!! | |
10 | !!! Please consult README file for setup, usage and examples! | |
11 | !!! | |
12 | ||
13 | Package formats | |
14 | --------------- | |
15 | S3cmd is distributed in two formats: | |
16 | ||
17 | 1) Prebuilt RPM file - should work on most RPM-based | |
18 | distributions | |
19 | ||
20 | 2) Source .tar.gz package | |
21 | ||
22 | ||
23 | Installation of RPM package | |
24 | --------------------------- | |
25 | As user "root" run: | |
26 | ||
27 | rpm -ivh s3cmd-X.Y.Z.noarch.rpm | |
28 | ||
29 | where X.Y.Z is the most recent s3cmd release version. | |
30 | ||
31 | You may be informed about missing dependencies | |
32 | on Python or some libraries. Please consult your | |
33 | distribution documentation on ways to solve the problem. | |
34 | ||
35 | Installation from PyPI (Python Package Index) | |
36 | --------------------------------------------- | |
37 | S3cmd can be installed from PyPI using pip (the recommended tool for installing Python packages). | |
38 | ||
39 | 1) Confirm you have pip installed. The pip home page is: https://pypi.python.org/pypi/pip | |
40 | Example install on a RHEL-based machine using yum: | |
41 | sudo yum install python-pip | |
42 | 2) Install with pip | |
43 | sudo pip install s3cmd | |
44 | ||
45 | Installation from zip file | |
46 | -------------------------- | |
47 | There are three options to run s3cmd from source tarball: | |
48 | ||
49 | 1) The S3cmd program, as distributed in s3cmd-X.Y.Z.tar.gz | |
50 | on SourceForge or in master.zip on GitHub, can be run directly | |
51 | from where you unzipped the package. | |
52 | ||
53 | 2) Or you may want to move "s3cmd" file and "S3" subdirectory | |
54 | to some other path. Make sure that "S3" subdirectory ends up | |
55 | in the same place where you move the "s3cmd" file. | |
56 | ||
57 | For instance, if you decide to move s3cmd to your $HOME/bin | |
58 | you will have the $HOME/bin/s3cmd file and $HOME/bin/S3 directory | |
59 | with a number of support files. | |
60 | ||
61 | 3) The cleanest and most recommended approach is to unzip the | |
62 | package and then just run: | |
63 | ||
64 | python setup.py install | |
65 | ||
66 | You will, however, need the Python "distutils" module for | |
67 | this to work. It is often part of the core Python package | |
68 | (e.g. in the OpenSuse Python 2.5 package), or it can be | |
69 | installed with your package manager, e.g. on Debian use | |
70 | ||
71 | apt-get install python-setuptools | |
72 | ||
73 | Again, consult your distribution documentation on how to | |
74 | find out the actual package name and how to install it. | |
75 | ||
76 | Note that on Linux, if you are not "root" already, you may | |
77 | need to run: | |
78 | ||
79 | sudo python setup.py install | |
80 | ||
81 | instead. | |
82 | ||
83 | ||
84 | Note to distribution package maintainers | |
85 | ---------------------------------------- | |
86 | Define shell environment variable S3CMD_PACKAGING=yes if you | |
87 | don't want setup.py to install manpages and doc files. You'll | |
88 | have to install them manually in your .spec or similar package | |
89 | build scripts. | |
90 | ||
91 | On the other hand, if you want setup.py to install manpages | |
92 | and docs to a path other than the default, define the env | |
93 | variables $S3CMD_INSTPATH_MAN and $S3CMD_INSTPATH_DOC. Check | |
94 | out setup.py for details and default values. | |
95 | ||
96 | ||
97 | Where to get help | |
98 | ----------------- | |
99 | If in doubt, or if something doesn't work as expected, | |
100 | get back to us via the mailing list: | |
101 | ||
102 | s3tools-general@lists.sourceforge.net | |
103 | ||
104 | or visit the S3cmd / S3tools homepage at: | |
105 | ||
106 | http://s3tools.org |
0 | Installation of s3cmd package | |
1 | ============================= | |
2 | ||
3 | Copyright: | |
4 | TGRMN Software and contributors | |
5 | ||
6 | S3tools / S3cmd project homepage: | |
7 | http://s3tools.org | |
8 | ||
9 | !!! | |
10 | !!! Please consult README file for setup, usage and examples! | |
11 | !!! | |
12 | ||
13 | Package formats | |
14 | --------------- | |
15 | S3cmd is distributed in two formats: | |
16 | ||
17 | 1) Prebuilt RPM file - should work on most RPM-based | |
18 | distributions | |
19 | ||
20 | 2) Source .tar.gz package | |
21 | ||
22 | Installation of Homebrew package | |
23 | -------------------------------- | |
24 | ``` | |
25 | brew install s3cmd | |
26 | ``` | |
27 | ||
28 | Installation of RPM package | |
29 | --------------------------- | |
30 | As user "root" run: | |
31 | ``` | |
32 | rpm -ivh s3cmd-X.Y.Z.noarch.rpm | |
33 | ``` | |
34 | where X.Y.Z is the most recent s3cmd release version. | |
35 | ||
36 | You may be informed about missing dependencies | |
37 | on Python or some libraries. Please consult your | |
38 | distribution documentation on ways to solve the problem. | |
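
A missing-dependency error can usually be resolved through the package manager itself. As an illustrative sketch for yum-based systems (the file name is a placeholder for the actual release):

```shell
# "yum localinstall" installs a local RPM file and pulls its missing
# dependencies from the configured repositories.
sudo yum localinstall s3cmd-X.Y.Z.noarch.rpm
```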
39 | ||
40 | Installation from PyPI (Python Package Index) | |
41 | --------------------------------------------- | |
42 | S3cmd can be installed from PyPI using pip (the recommended tool for installing Python packages). | |
43 | ||
44 | 1) Confirm you have pip installed. The pip home page is https://pypi.python.org/pypi/pip. Example install on a RHEL-based machine using yum: | |
45 | ``` | |
46 | sudo yum install python-pip | |
47 | ``` | |
48 | 2) Install with pip | |
49 | ``` | |
50 | sudo pip install s3cmd | |
51 | ``` | |
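
Either way, the result can be checked from the shell; this assumes pip placed the s3cmd script on your PATH:

```shell
# Show the installed package metadata and confirm the CLI runs.
pip show s3cmd
s3cmd --version
```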
52 | ||
53 | Installation from zip file | |
54 | -------------------------- | |
55 | There are three options to run s3cmd from source tarball: | |
56 | ||
57 | 1) The S3cmd program, as distributed in s3cmd-X.Y.Z.tar.gz | |
58 | on SourceForge or in master.zip on GitHub, can be run directly | |
59 | from where you unzipped the package. | |
60 | ||
61 | 2) Or you may want to move "s3cmd" file and "S3" subdirectory | |
62 | to some other path. Make sure that "S3" subdirectory ends up | |
63 | in the same place where you move the "s3cmd" file. | |
64 | ||
65 | For instance, if you decide to move s3cmd to your $HOME/bin | |
66 | you will have the $HOME/bin/s3cmd file and $HOME/bin/S3 directory | |
67 | with a number of support files. | |
68 | ||
69 | 3) The cleanest and most recommended approach is to unzip the | |
70 | package and then just run: | |
71 | ||
72 | `python setup.py install` | |
73 | ||
74 | You will, however, need the Python "distutils" module for | |
75 | this to work. It is often part of the core Python package | |
76 | (e.g. in the OpenSuse Python 2.5 package), or it can be | |
77 | installed with your package manager, e.g. on Debian use | |
78 | ||
79 | `apt-get install python-setuptools` | |
80 | ||
81 | Again, consult your distribution documentation on how to | |
82 | find out the actual package name and how to install it. | |
83 | ||
84 | Note that on Linux, if you are not "root" already, you may | |
85 | need to run: | |
86 | ||
87 | `sudo python setup.py install` | |
88 | ||
89 | instead. | |
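
Putting option 3 together, a typical from-source installation looks like the following sketch; the tarball name is a placeholder for the actual release:

```shell
# Unpack the release tarball and install system-wide.
tar xzf s3cmd-X.Y.Z.tar.gz
cd s3cmd-X.Y.Z
sudo python setup.py install
```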
90 | ||
91 | ||
92 | Note to distribution package maintainers | |
93 | ---------------------------------------- | |
94 | Define shell environment variable S3CMD_PACKAGING=yes if you | |
95 | don't want setup.py to install manpages and doc files. You'll | |
96 | have to install them manually in your .spec or similar package | |
97 | build scripts. | |
98 | ||
99 | On the other hand, if you want setup.py to install manpages | |
100 | and docs to a path other than the default, define the env | |
101 | variables $S3CMD_INSTPATH_MAN and $S3CMD_INSTPATH_DOC. Check | |
102 | out setup.py for details and default values. | |
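
For illustration, a package build script might use these variables as follows; the paths and the $DESTDIR staging directory are hypothetical examples, not defaults taken from setup.py:

```shell
# Skip manpage/doc installation and package those files by hand
# in the .spec (or equivalent) build script:
S3CMD_PACKAGING=yes python setup.py install --root="$DESTDIR"

# Or let setup.py install them, but into distribution-specific paths:
S3CMD_INSTPATH_MAN=/usr/share/man \
S3CMD_INSTPATH_DOC=/usr/share/doc/s3cmd \
python setup.py install --root="$DESTDIR"
```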
103 | ||
104 | ||
105 | Where to get help | |
106 | ----------------- | |
107 | If in doubt, or if something doesn't work as expected, | |
108 | get back to us via the mailing list: | |
109 | ``` | |
110 | s3tools-general@lists.sourceforge.net | |
111 | ``` | |
112 | ||
113 | or visit the S3cmd / S3tools homepage at: [http://s3tools.org](http://s3tools.org) |
0 | include INSTALL README.md LICENSE NEWS | |
0 | include INSTALL.md README.md LICENSE NEWS | |
1 | 1 | include s3cmd.1 |
0 | s3cmd-2.1.0 - 2020-04-07 | |
1 | =============== | |
2 | * Changed size reporting using k instead of K as it is a multiple of 1024 (#956) | |
3 | * Added "public_url_use_https" config to generate public url using https (#551, #666) (Jukka Nousiainen) | |
4 | * Added option to make connection pooling configurable and improvements (Arto Jantunen) | |
5 | * Added support for path-style bucket access to signurl (Zac Medico) | |
6 | * Added docker configuration and help to run test cases with multiple Python versions (Doug Crozier) | |
7 | * Relaxed limitation on special chars for --add-header key names (#1054) | |
8 | * Fixed all regions that were automatically converted to lower case (Harshavardhana) | |
9 | * Fixed size and alignment of DU and LS output reporting (#956) | |
10 | * Fixes for SignatureDoesNotMatch error when host port 80 or 443 is specified, due to stupid servers (#1059) | |
11 | * Fixed the useless retries of requests that fail because of ssl cert checks | |
12 | * Fixed a possible crash when a file disappears during cache generation (#377) | |
13 | * Fixed unicode issues with IAM (#987) | |
14 | * Fixed unicode errors with bucket Policy/CORS requests (#847) (Alex Offshore) | |
15 | * Fixed unicode issues when loading aws_credential_file (#989) | |
16 | * Fixed an issue with the tenant feature of CephRGW. Url encode bucket_name for path-style requests (#1080) | |
17 | * Fixed signature v2 always used when bucket_name had special chars (#1081) | |
18 | * Allow to use signature v4 only, even for commands without buckets specified (#1082) | |
19 | * Fixed small open file descriptor leaks. | |
20 | * Py3: Fixed hash-bang in headers to not force using python2 when setup/s3cmd/run-test scripts are executed directly. | |
21 | * Py3: Fixed unicode issues with Cloudfront (#1006) | |
22 | * Py3: Fixed http.client.RemoteDisconnected errors (#1014) (Ryan Huddleston) | |
23 | * Py3: Fixed 'dictionary changed size during iteration' error when using a cache-file (#945) (Doug Crozier) | |
24 | * Py3: Fixed the display of file sizes (Vlad Presnyak) | |
25 | * Py3: Python 3.8 compatibility fixes (Konstantin Shalygin) | |
26 | * Py2: Fixed unicode errors sometimes crashing remote2remote sync (#847) | |
27 | * Added s3cmd.egg-info to .gitignore (Philip Dubé) | |
28 | * Improved run-test script to not use hard-coded bucket names (#1066) (Doug Crozier) | |
29 | * Renamed INSTALL to INSTALL.md and improvements (Nitro, Prabhakar Gupta) | |
30 | * Improved the restore command help (Hrchu) | |
31 | * Updated the storage-class command help with the recent aws s3 classes (#1020) | |
32 | * Fixed typo in the --continue-put help message (Pengyu Chen) | |
33 | * Fixed typo (#1062) (Tim Gates) | |
34 | * Improvements for setup and build configurations | |
35 | * Many other bug fixes | |
36 | ||
37 | ||
0 | 38 | s3cmd-2.0.2 - 2018-07-15 |
1 | 39 | =============== |
2 | 40 | * Fixed unexpected timeouts encountered during requests or transfers due to AWS strange connection short timeouts (#941) |
11 | 49 | * Fixed setting full_control on objects with public read access (Matthew Vernon) |
12 | 50 | * Fixed a bug when only one path is supplied with Cloudfront. (Mikael Svensson) |
13 | 51 | * Fixed signature errors with 'modify' requests (Radek Simko) |
14 | * Fixes #936 - Fix setacl command exception (Robert Moucha) | |
15 | * Fixes error reporting if deleting a source object failed after a move (#929) | |
52 | * Fixed #936 - Fix setacl command exception (Robert Moucha) | |
53 | * Fixed error reporting if deleting a source object failed after a move (#929) | |
16 | 54 | * Many other bug fixes (#525, #933, #940, #947, #957, #958, #960, #967) |
17 | 55 | |
18 | 56 | Important info: AWS S3 no longer allows uppercase letters or underscores in bucket names since March 1, 2018
0 | Metadata-Version: 1.1 | |
0 | Metadata-Version: 1.2 | |
1 | 1 | Name: s3cmd |
2 | Version: 2.0.2 | |
2 | Version: 2.1.0 | |
3 | 3 | Summary: Command line tool for managing Amazon S3 and CloudFront services |
4 | 4 | Home-page: http://s3tools.org |
5 | Author: github.com/mdomsch, github.com/matteobar, github.com/fviard | |
6 | Author-email: s3tools-bugs@lists.sourceforge.net | |
5 | Author: Michal Ludvig | |
6 | Author-email: michal@logix.cz | |
7 | Maintainer: github.com/fviard, github.com/matteobar | |
8 | Maintainer-email: s3tools-bugs@lists.sourceforge.net | |
7 | 9 | License: GNU GPL v2+ |
8 | Description-Content-Type: UNKNOWN | |
9 | 10 | Description: |
10 | 11 | |
11 | 12 | S3cmd lets you copy files from/to Amazon S3 |
17 | 18 | |
18 | 19 | Authors: |
19 | 20 | -------- |
21 | Florent Viard <florent@sodria.com> | |
20 | 22 | Michal Ludvig <michal@logix.cz> |
23 | Matt Domsch (github.com/mdomsch) | |
21 | 24 | |
22 | 25 | Platform: UNKNOWN |
23 | 26 | Classifier: Development Status :: 5 - Production/Stable |
40 | 43 | Classifier: Programming Language :: Python :: 3.4 |
41 | 44 | Classifier: Programming Language :: Python :: 3.5 |
42 | 45 | Classifier: Programming Language :: Python :: 3.6 |
46 | Classifier: Programming Language :: Python :: 3.7 | |
47 | Classifier: Programming Language :: Python :: 3.8 | |
43 | 48 | Classifier: Topic :: System :: Archiving |
44 | 49 | Classifier: Topic :: Utilities |
12 | 12 | * General questions and discussion: s3tools-general@lists.sourceforge.net |
13 | 13 | * Bug reports: s3tools-bugs@lists.sourceforge.net |
14 | 14 | |
15 | S3cmd requires Python 2.6 or newer. | |
15 | S3cmd requires Python 2.6 or newer. | |
16 | 16 | Python 3+ is also supported starting with S3cmd version 2. |
17 | 17 | |
18 | 18 | |
334 | 334 | |
335 | 335 | ### License |
336 | 336 | |
337 | Copyright (C) 2007-2017 TGRMN Software - http://www.tgrmn.com - and contributors | |
337 | Copyright (C) 2007-2020 TGRMN Software - http://www.tgrmn.com - and contributors | |
338 | 338 | |
339 | 339 | This program is free software; you can redistribute it and/or modify |
340 | 340 | it under the terms of the GNU General Public License as published by |
15 | 15 | except ImportError: |
16 | 16 | import elementtree.ElementTree as ET |
17 | 17 | |
18 | PY3 = (sys.version_info >= (3,0)) | |
18 | PY3 = (sys.version_info >= (3, 0)) | |
19 | 19 | |
20 | 20 | class Grantee(object): |
21 | 21 | ALL_USERS_URI = "http://acs.amazonaws.com/groups/global/AllUsers" |
21 | 21 | from .S3 import S3 |
22 | 22 | from .Config import Config |
23 | 23 | from .Exceptions import * |
24 | from .Utils import getTreeFromXml, appendXmlTextNode, getDictFromTree, dateS3toPython, getBucketFromHostname, getHostnameFromBucket, deunicodise, urlencode_string, convertHeaderTupleListToDict | |
24 | from .Utils import (getTreeFromXml, appendXmlTextNode, getDictFromTree, | |
25 | dateS3toPython, getBucketFromHostname, | |
26 | getHostnameFromBucket, deunicodise, urlencode_string, | |
27 | convertHeaderTupleListToDict, encode_to_s3, decode_from_s3) | |
25 | 28 | from .Crypto import sign_string_v2 |
26 | 29 | from .S3Uri import S3Uri, S3UriS3 |
27 | 30 | from .ConnMan import ConnMan |
28 | 31 | from .SortedDict import SortedDict |
32 | ||
33 | PY3 = (sys.version_info >= (3, 0)) | |
29 | 34 | |
30 | 35 | cloudfront_api_version = "2010-11-01" |
31 | 36 | cloudfront_resource = "/%(api_ver)s/distribution" % { 'api_ver' : cloudfront_api_version } |
175 | 180 | else: |
176 | 181 | self.info['Logging'] = None |
177 | 182 | |
178 | def __str__(self): | |
183 | def get_printable_tree(self): | |
179 | 184 | tree = ET.Element("DistributionConfig") |
180 | 185 | tree.attrib['xmlns'] = DistributionConfig.xmlns |
181 | 186 | |
196 | 201 | appendXmlTextNode("Bucket", getHostnameFromBucket(self.info['Logging'].bucket()), logging_el) |
197 | 202 | appendXmlTextNode("Prefix", self.info['Logging'].object(), logging_el) |
198 | 203 | tree.append(logging_el) |
199 | return ET.tostring(tree) | |
204 | return tree | |
205 | ||
206 | def __unicode__(self): | |
207 | return decode_from_s3(ET.tostring(self.get_printable_tree())) | |
208 | ||
209 | def __str__(self): | |
210 | if PY3: | |
211 | # Return unicode | |
212 | return ET.tostring(self.get_printable_tree(), encoding="unicode") | |
213 | else: | |
214 | # Return bytes | |
215 | return ET.tostring(self.get_printable_tree()) | |
200 | 216 | |
201 | 217 | class Invalidation(object): |
202 | 218 | ## Example: |
284 | 300 | def get_reference(self): |
285 | 301 | return self.reference |
286 | 302 | |
287 | def __str__(self): | |
303 | def get_printable_tree(self): | |
288 | 304 | tree = ET.Element("InvalidationBatch") |
289 | ||
290 | 305 | for path in self.paths: |
291 | 306 | if len(path) < 1 or path[0] != "/": |
292 | 307 | path = "/" + path |
293 | 308 | appendXmlTextNode("Path", urlencode_string(path), tree) |
294 | 309 | appendXmlTextNode("CallerReference", self.reference, tree) |
295 | return ET.tostring(tree) | |
310 | return tree | |
311 | ||
312 | def __unicode__(self): | |
313 | return decode_from_s3(ET.tostring(self.get_printable_tree())) | |
314 | ||
315 | def __str__(self): | |
316 | if PY3: | |
317 | # Return unicode | |
318 | return ET.tostring(self.get_printable_tree(), encoding="unicode") | |
319 | else: | |
320 | # Return bytes | |
321 | return ET.tostring(self.get_printable_tree()) | |
296 | 322 | |
297 | 323 | class CloudFront(object): |
298 | 324 | operations = { |
563 | 589 | |
564 | 590 | def sign_request(self, headers): |
565 | 591 | string_to_sign = headers['x-amz-date'] |
566 | signature = sign_string_v2(string_to_sign) | |
592 | signature = decode_from_s3(sign_string_v2(encode_to_s3(string_to_sign))) | |
567 | 593 | debug(u"CloudFront.sign_request('%s') = %s" % (string_to_sign, signature)) |
568 | 594 | return signature |
569 | 595 | |
602 | 628 | continue |
603 | 629 | |
604 | 630 | if CloudFront.dist_list.get(distListIndex, None) is None: |
605 | CloudFront.dist_list[distListIndex] = set() | |
631 | CloudFront.dist_list[distListIndex] = set() | |
606 | 632 | |
607 | 633 | CloudFront.dist_list[distListIndex].add(d.uri()) |
608 | 634 |
23 | 23 | import http.client as httplib |
24 | 24 | import locale |
25 | 25 | |
26 | try: | |
27 | from configparser import NoOptionError, NoSectionError, MissingSectionHeaderError, ConfigParser as PyConfigParser | |
26 | try: | |
27 | from configparser import (NoOptionError, NoSectionError, | |
28 | MissingSectionHeaderError, ParsingError, | |
29 | ConfigParser as PyConfigParser) | |
28 | 30 | except ImportError: |
29 | 31 | # Python2 fallback code |
30 | from ConfigParser import NoOptionError, NoSectionError, MissingSectionHeaderError, ConfigParser as PyConfigParser | |
32 | from ConfigParser import (NoOptionError, NoSectionError, | |
33 | MissingSectionHeaderError, ParsingError, | |
34 | ConfigParser as PyConfigParser) | |
31 | 35 | |
32 | 36 | try: |
33 | 37 | unicode |
204 | 208 | # Maximum sleep duration for throttle / limitrate. | |
205 | 209 | # s3 will timeout if a request/transfer is stuck for more than a short time |
206 | 210 | throttle_max = 100 |
211 | public_url_use_https = False | |
212 | connection_pooling = True | |
207 | 213 | |
208 | 214 | ## Creating a singleton |
209 | 215 | def __new__(self, configfile = None, access_key=None, secret_key=None, access_token=None): |
259 | 265 | resp = conn.getresponse() |
260 | 266 | files = resp.read() |
261 | 267 | if resp.status == 200 and len(files)>1: |
262 | conn.request('GET', "/latest/meta-data/iam/security-credentials/%s"%files.decode('UTF-8')) | |
268 | conn.request('GET', "/latest/meta-data/iam/security-credentials/%s" % files.decode('utf-8')) | |
263 | 269 | resp=conn.getresponse() |
264 | 270 | if resp.status == 200: |
265 | creds=json.load(resp) | |
266 | Config().update_option('access_key', creds['AccessKeyId'].encode('ascii')) | |
267 | Config().update_option('secret_key', creds['SecretAccessKey'].encode('ascii')) | |
268 | Config().update_option('access_token', creds['Token'].encode('ascii')) | |
271 | resp_content = config_unicodise(resp.read()) | |
272 | creds=json.loads(resp_content) | |
273 | Config().update_option('access_key', config_unicodise(creds['AccessKeyId'])) | |
274 | Config().update_option('secret_key', config_unicodise(creds['SecretAccessKey'])) | |
275 | Config().update_option('access_token', config_unicodise(creds['Token'])) | |
269 | 276 | else: |
270 | 277 | raise IOError |
271 | 278 | else: |
282 | 289 | |
283 | 290 | def aws_credential_file(self): |
284 | 291 | try: |
285 | aws_credential_file = os.path.expanduser('~/.aws/credentials') | |
286 | if 'AWS_CREDENTIAL_FILE' in os.environ and os.path.isfile(os.environ['AWS_CREDENTIAL_FILE']): | |
287 | aws_credential_file = config_unicodise(os.environ['AWS_CREDENTIAL_FILE']) | |
292 | aws_credential_file = os.path.expanduser('~/.aws/credentials') | |
293 | credential_file_from_env = os.environ.get('AWS_CREDENTIAL_FILE') | |
294 | if credential_file_from_env and \ | |
295 | os.path.isfile(credential_file_from_env): | |
296 | aws_credential_file = config_unicodise(credential_file_from_env) | |
297 | elif not os.path.isfile(aws_credential_file): | |
298 | return | |
299 | ||
288 | 301 | |
289 | 302 | config = PyConfigParser() |
290 | 303 | |
291 | 304 | debug("Reading AWS credentials from %s" % (aws_credential_file)) |
305 | with io.open(aws_credential_file, "r", | |
306 | encoding=getattr(self, 'encoding', 'UTF-8')) as fp: | |
307 | config_string = fp.read() | |
292 | 308 | try: |
293 | config.read(aws_credential_file) | |
294 | except MissingSectionHeaderError: | |
295 | # if header is missing, this could be deprecated credentials file format | |
296 | # as described here: https://blog.csanchez.org/2011/05/ | |
297 | # then do the hacky-hack and add default header | |
298 | # to be able to read the file with PyConfigParser() | |
299 | config_string = None | |
300 | with open(aws_credential_file, 'r') as f: | |
301 | config_string = '[default]\n' + f.read() | |
302 | config.read_string(config_string.decode('utf-8')) | |
303 | ||
309 | try: | |
310 | # readfp is replaced by read_file in python3, | |
311 | # but so far readfp it is still available. | |
312 | config.readfp(io.StringIO(config_string)) | |
313 | except MissingSectionHeaderError: | |
314 | # if header is missing, this could be deprecated credentials file format | |
315 | # as described here: https://blog.csanchez.org/2011/05/ | |
316 | # then do the hacky-hack and add default header | |
317 | # to be able to read the file with PyConfigParser() | |
318 | config_string = u'[default]\n' + config_string | |
319 | config.readfp(io.StringIO(config_string)) | |
320 | except ParsingError as exc: | |
321 | raise ValueError( | |
322 | "Error reading aws_credential_file " | |
323 | "(%s): %s" % (aws_credential_file, str(exc))) | |
304 | 324 | |
305 | 325 | profile = config_unicodise(os.environ.get('AWS_PROFILE', "default")) |
306 | 326 | debug("Using AWS profile '%s'" % (profile)) |
307 | 327 | |
308 | 328 | # get_key - helper function to read the aws profile credentials |
309 | # including the legacy ones as described here: https://blog.csanchez.org/2011/05/ | |
329 | # including the legacy ones as described here: https://blog.csanchez.org/2011/05/ | |
310 | 330 | def get_key(profile, key, legacy_key, print_warning=True): |
311 | 331 | result = None |
312 | 332 | |
321 | 341 | profile = "default" |
322 | 342 | result = config.get(profile, key) |
323 | 343 | warning( |
324 | "Legacy configuration key '%s' used, " % (key) + | |
344 | "Legacy configuration key '%s' used, " % (key) + | |
325 | 345 | "please use the standardized config format as described here: " + |
326 | 346 | "https://aws.amazon.com/blogs/security/a-new-and-standardized-way-to-manage-credentials-in-the-aws-sdks/" |
327 | 347 | ) |
329 | 349 | pass |
330 | 350 | |
331 | 351 | if result: |
332 | debug("Found the configuration option '%s' for the AWS Profile '%s' in the credentials file %s" % (key, profile, aws_credential_file)) | |
352 | debug("Found the configuration option '%s' for the AWS Profile '%s' in the credentials file %s" % (key, profile, aws_credential_file)) | |
333 | 353 | return result |
334 | 354 | |
335 | profile_access_key = get_key(profile, "aws_access_key_id", "AWSAccessKeyId") | |
355 | profile_access_key = get_key(profile, "aws_access_key_id", "AWSAccessKeyId") | |
336 | 356 | if profile_access_key: |
337 | 357 | Config().update_option('access_key', config_unicodise(profile_access_key)) |
338 | 358 | |
339 | profile_secret_key = get_key(profile, "aws_secret_access_key", "AWSSecretKey") | |
359 | profile_secret_key = get_key(profile, "aws_secret_access_key", "AWSSecretKey") | |
340 | 360 | if profile_secret_key: |
341 | 361 | Config().update_option('secret_key', config_unicodise(profile_secret_key)) |
342 | 362 | |
343 | profile_access_token = get_key(profile, "aws_session_token", None, False) | |
363 | profile_access_token = get_key(profile, "aws_session_token", None, False) | |
344 | 364 | if profile_access_token: |
345 | 365 | Config().update_option('access_token', config_unicodise(profile_access_token)) |
346 | 366 | |
347 | 367 | except IOError as e: |
348 | warning("%d accessing credentials file %s" % (e.errno, aws_credential_file)) | |
368 | warning("Errno %d accessing credentials file %s" % (e.errno, aws_credential_file)) | |
349 | 369 | except NoSectionError as e: |
350 | 370 | warning("Couldn't find AWS Profile '%s' in the credentials file '%s'" % (profile, aws_credential_file)) |
351 | 371 | |
379 | 399 | if cp.get('add_headers'): |
380 | 400 | for option in cp.get('add_headers').split(","): |
381 | 401 | (key, value) = option.split(':') |
382 | self.extra_headers[key.replace('_', '-').strip()] = value.strip() | |
402 | self.extra_headers[key.strip()] = value.strip() | |
383 | 403 | |
384 | 404 | self._parsed_files.append(configfile) |
385 | 405 |
8 | 8 | from __future__ import absolute_import |
9 | 9 | |
10 | 10 | import sys |
11 | if sys.version_info >= (3,0): | |
11 | if sys.version_info >= (3, 0): | |
12 | 12 | from .Custom_httplib3x import httplib |
13 | 13 | else: |
14 | 14 | from .Custom_httplib27 import httplib |
22 | 22 | from urllib.parse import urlparse |
23 | 23 | |
24 | 24 | from .Config import Config |
25 | from .Exceptions import ParameterError | |
25 | from .Exceptions import ParameterError, S3SSLCertificateError | |
26 | 26 | from .Utils import getBucketFromHostname |
27 | 27 | |
28 | if not 'CertificateError' in ssl.__dict__: | |
29 | class CertificateError(Exception): | |
30 | pass | |
31 | else: | |
32 | CertificateError = ssl.CertificateError | |
33 | ||
34 | __all__ = [ "ConnMan" ] | |
28 | ||
29 | ||
30 | __all__ = ["ConnMan"] | |
35 | 31 | |
36 | 32 | |
37 | 33 | class http_connection(object): |
127 | 123 | cert = self.c.sock.getpeercert() |
128 | 124 | try: |
129 | 125 | ssl.match_hostname(cert, self.hostname) |
130 | except AttributeError: # old ssl module doesn't have this function | |
131 | return | |
132 | except ValueError: # empty SSL cert means underlying SSL library didn't validate it, we don't either. | |
133 | return | |
134 | except CertificateError as e: | |
126 | except AttributeError: | |
127 | # old ssl module doesn't have this function | |
128 | return | |
129 | except ValueError: | |
130 | # empty SSL cert means underlying SSL library didn't validate it, we don't either. | |
131 | return | |
132 | except S3SSLCertificateError as e: | |
135 | 133 | if not self.forgive_wildcard_cert(cert, self.hostname): |
136 | 134 | raise e |
137 | 135 | |
258 | 256 | @staticmethod |
259 | 257 | def put(conn): |
260 | 258 | if conn.id.startswith("proxy://"): |
261 | conn.c.close() | |
259 | ConnMan.close(conn) | |
262 | 260 | debug("ConnMan.put(): closing proxy connection (keep-alive not yet supported)") |
263 | 261 | return |
264 | 262 | |
265 | 263 | if conn.counter >= ConnMan.conn_max_counter: |
266 | conn.c.close() | |
264 | ConnMan.close(conn) | |
267 | 265 | debug("ConnMan.put(): closing over-used connection") |
266 | return | |
267 | ||
268 | cfg = Config() | |
269 | if not cfg.connection_pooling: | |
270 | ConnMan.close(conn) | |
271 | debug("ConnMan.put(): closing connection (connection pooling disabled)") | |
268 | 272 | return |
269 | 273 | |
270 | 274 | ConnMan.conn_pool_sem.acquire() |
271 | 275 | ConnMan.conn_pool[conn.id].append(conn) |
272 | 276 | ConnMan.conn_pool_sem.release() |
273 | 277 | debug("ConnMan.put(): connection put back to pool (%s#%d)" % (conn.id, conn.counter)) |
278 | ||
279 | @staticmethod | |
280 | def close(conn): | |
281 | if conn: | |
282 | conn.c.close() |
13 | 13 | |
14 | 14 | from . import Config |
15 | 15 | from logging import debug |
16 | from .Utils import encode_to_s3, time_to_epoch, deunicodise, decode_from_s3 | |
16 | from .Utils import encode_to_s3, time_to_epoch, deunicodise, decode_from_s3, check_bucket_name_dns_support | |
17 | 17 | from .SortedDict import SortedDict |
18 | 18 | |
19 | 19 | import datetime |
62 | 62 | |
63 | 63 | Useful for REST authentication. See http://s3.amazonaws.com/doc/s3-developer-guide/RESTAuthentication.html |
64 | 64 | string_to_sign should be utf-8 "bytes". |
65 | and the returned signature will be utf-8 encoded "bytes". | |
65 | 66 | """ |
66 | 67 | secret_key = Config.Config().secret_key |
67 | 68 | signature = base64.encodestring(hmac.new(encode_to_s3(secret_key), string_to_sign, sha1).digest()).strip() |
157 | 158 | debug("Signing plaintext: %r", signtext) |
158 | 159 | parms['sig'] = s3_quote(sign_string_v2(encode_to_s3(signtext)), unicode_output=True) |
159 | 160 | debug("Urlencoded signature: %s", parms['sig']) |
160 | url = "%(proto)s://%(bucket)s.%(host_base)s/%(object)s?AWSAccessKeyId=%(access_key)s&Expires=%(expiry)d&Signature=%(sig)s" % parms | |
161 | if check_bucket_name_dns_support(Config.Config().host_bucket, parms['bucket']): | |
162 | url = "%(proto)s://%(bucket)s.%(host_base)s/%(object)s" | |
163 | else: | |
164 | url = "%(proto)s://%(host_base)s/%(bucket)s/%(object)s" | |
165 | url += "?AWSAccessKeyId=%(access_key)s&Expires=%(expiry)d&Signature=%(sig)s" | |
166 | url = url % parms | |
161 | 167 | if content_disposition: |
162 | 168 | url += "&response-content-disposition=" + s3_quote(content_disposition, unicode_output=True) |
163 | 169 | if content_type: |
146 | 146 | self.putheader(encode_to_s3(hdr), encode_to_s3(value)) |
147 | 147 | |
148 | 148 | # If an Expect: 100-continue was sent, we need to check for a 417 |
149 | # Expectation Failed to avoid unecessarily sending the body | |
149 | # Expectation Failed to avoid unnecessarily sending the body | |
150 | 150 | # See RFC 2616 8.2.3 |
151 | 151 | if not expect_continue: |
152 | 152 | self.endheaders(body) |
192 | 192 | body = _encode(body, 'body') |
193 | 193 | |
194 | 194 | # If an Expect: 100-continue was sent, we need to check for a 417 |
195 | # Expectation Failed to avoid unecessarily sending the body | |
195 | # Expectation Failed to avoid unnecessarily sending the body | |
196 | 196 | # See RFC 2616 8.2.3 |
197 | 197 | if not expect_continue: |
198 | 198 | self.endheaders(body, encode_chunked=encode_chunked) |
12 | 12 | import S3.Utils |
13 | 13 | from . import ExitCodes |
14 | 14 | |
15 | if sys.version_info >= (3,0): | |
15 | if sys.version_info >= (3, 0): | |
16 | 16 | PY3 = True |
17 | 17 | # In python 3, unicode -> str, and str -> bytes |
18 | 18 | unicode = str |
19 | 19 | else: |
20 | 20 | PY3 = False |
21 | ||
22 | ## External exceptions | |
23 | ||
24 | from ssl import SSLError as S3SSLError | |
25 | ||
26 | try: | |
27 | from ssl import CertificateError as S3SSLCertificateError | |
28 | except ImportError: | |
29 | class S3SSLCertificateError(Exception): | |
30 | pass | |
21 | 31 | |
22 | 32 | |
23 | 33 | try: |
25 | 35 | except ImportError: |
26 | 36 | # ParseError was only added in python2.7; before that, ET raised ExpatError
27 | 37 | from xml.parsers.expat import ExpatError as XmlParseError |
38 | ||
39 | ||
40 | ## s3cmd exceptions | |
28 | 41 | |
29 | 42 | class S3Exception(Exception): |
30 | 43 | def __init__(self, message = ""): |
23 | 23 | import re |
24 | 24 | import errno |
25 | 25 | import io |
26 | ||
27 | PY3 = (sys.version_info >= (3, 0)) | |
26 | 28 | |
27 | 29 | __all__ = ["fetch_local_list", "fetch_remote_list", "compare_filelists"] |
28 | 30 | |
201 | 203 | if counter % 1000 == 0: |
202 | 204 | info(u"[%d/%d]" % (counter, len_loc_list)) |
203 | 205 | |
204 | if relative_file == '-': continue | |
206 | if relative_file == '-': | |
207 | continue | |
205 | 208 | |
206 | 209 | full_name = loc_list[relative_file]['full_name'] |
207 | 210 | try: |
305 | 308 | # not. Leave it to a non-files_from run to purge. |
306 | 309 | if cfg.cache_file and len(cfg.files_from) == 0: |
307 | 310 | cache.mark_all_for_purge() |
308 | for i in local_list.keys(): | |
309 | cache.unmark_for_purge(local_list[i]['dev'], local_list[i]['inode'], local_list[i]['mtime'], local_list[i]['size']) | |
311 | if PY3: | |
312 | local_list_val_iter = local_list.values() | |
313 | else: | |
314 | local_list_val_iter = local_list.itervalues() | |
315 | for f_info in local_list_val_iter: | |
316 | inode = f_info.get('inode', 0) | |
317 | if not inode: | |
318 | continue | |
319 | cache.unmark_for_purge(f_info['dev'], inode, f_info['mtime'], | |
320 | f_info['size']) | |
310 | 321 | cache.purge() |
311 | 322 | cache.save(cfg.cache_file) |
312 | 323 |
30 | 30 | return d['md5'] |
31 | 31 | |
32 | 32 | def mark_all_for_purge(self): |
33 | for d in self.inodes.keys(): | |
34 | for i in self.inodes[d].keys(): | |
35 | for c in self.inodes[d][i].keys(): | |
33 | for d in tuple(self.inodes): | |
34 | for i in tuple(self.inodes[d]): | |
35 | for c in tuple(self.inodes[d][i]): | |
36 | 36 | self.inodes[d][i][c]['purge'] = True |
37 | 37 | |
38 | 38 | def unmark_for_purge(self, dev, inode, mtime, size): |
44 | 44 | del self.inodes[dev][inode][mtime]['purge'] |
45 | 45 | |
46 | 46 | def purge(self): |
47 | for d in self.inodes.keys(): | |
48 | for i in self.inodes[d].keys(): | |
49 | for m in self.inodes[d][i].keys(): | |
47 | for d in tuple(self.inodes): | |
48 | for i in tuple(self.inodes[d]): | |
49 | for m in tuple(self.inodes[d][i]): | |
50 | 50 | if 'purge' in self.inodes[d][i][m]: |
51 | 51 | del self.inodes[d][i] |
52 | 52 | break |
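The `tuple(...)` wrappers added in the purge loops above are a Python 3 safety fix: deleting entries while iterating a live dict view raises `RuntimeError`, so the keys are snapshotted first. A minimal illustration:

```python
# Snapshot the keys with tuple() before mutating the dict; iterating
# the live view while deleting would raise RuntimeError on Python 3.
cache = {'a': 1, 'b': 2, 'c': 3}
for key in tuple(cache):  # safe: iterates a frozen copy of the keys
    if cache[key] % 2 == 0:
        del cache[key]
# cache now holds only the odd-valued entries
```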
6 | 6 | ## Copyright: TGRMN Software and contributors |
7 | 7 | |
8 | 8 | package = "s3cmd" |
9 | version = "2.0.2" | |
9 | version = "2.1.0" | |
10 | 10 | url = "http://s3tools.org" |
11 | 11 | license = "GNU GPL v2+" |
12 | 12 | short_description = "Command line tool for managing Amazon S3 and CloudFront services" |
43 | 43 | from .S3Uri import S3Uri |
44 | 44 | from .ConnMan import ConnMan |
45 | 45 | from .Crypto import (sign_request_v2, sign_request_v4, checksum_sha256_file, |
46 | checksum_sha256_buffer, s3_quote, format_param_str) | |
46 | checksum_sha256_buffer, s3_quote, format_param_str) | |
47 | 47 | |
48 | 48 | try: |
49 | 49 | from ctypes import ArgumentError |
155 | 155 | def use_signature_v2(self): |
156 | 156 | if self.s3.endpoint_requires_signature_v4: |
157 | 157 | return False |
158 | # in case of bad DNS name due to bucket name v2 will be used | |
159 | # this way we can still use capital letters in bucket names for the older regions | |
160 | ||
161 | if self.resource['bucket'] is None or not check_bucket_name_dns_conformity(self.resource['bucket']) or self.s3.config.signature_v2 or self.s3.fallback_to_signature_v2: | |
158 | ||
159 | if self.s3.config.signature_v2 or self.s3.fallback_to_signature_v2: | |
162 | 160 | return True |
161 | ||
163 | 162 | return False |
164 | 163 | |
165 | 164 | def sign(self): |
271 | 270 | host = getHostnameFromBucket(bucket) |
272 | 271 | else: |
273 | 272 | host = self.config.host_base.lower() |
273 | # The following hack is needed because it looks like some servers | |
274 | # do not respect the HTTP spec and so fail the signature check | |
275 | # if the port is specified in the "Host" header for default ports. | |
276 | # STUPIDEST THING EVER FOR A SERVER... | |
277 | # See: https://github.com/minio/minio/issues/9169 | |
278 | if self.config.use_https: | |
279 | if host.endswith(':443'): | |
280 | host = host[:-4] | |
281 | elif host.endswith(':80'): | |
282 | host = host[:-3] | |
283 | ||
274 | 284 | debug('get_hostname(%s): %s' % (bucket, host)) |
275 | 285 | return host |
276 | 286 | |
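The port-stripping hack added above can be summarized as: drop the default port for the scheme in use so the `Host` header matches what strict servers expect when verifying signatures. A simplified, hypothetical sketch (free function instead of the method's `self.config` access):

```python
def strip_default_port(host, use_https):
    # Simplified sketch of the hunk above: some servers fail the
    # signature check when the scheme's default port appears in the
    # Host header, so strip ':443' (https) or ':80' (http).
    if use_https and host.endswith(':443'):
        return host[:-4]
    if not use_https and host.endswith(':80'):
        return host[:-3]
    return host
```

Non-default ports (e.g. a MinIO instance on `:9000`) are left untouched.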
285 | 295 | or (bucket_name not in S3Request.redir_map |
286 | 296 | and not check_bucket_name_dns_support(self.config.host_bucket, bucket_name)) |
287 | 297 | ): |
288 | uri = "/%s%s" % (bucket_name, resource['uri']) | |
298 | uri = "/%s%s" % (s3_quote(bucket_name, quote_backslashes=False, | |
299 | unicode_output=True), | |
300 | resource['uri']) | |
289 | 301 | else: |
290 | 302 | uri = resource['uri'] |
291 | 303 | if base_path: |
381 | 393 | bucket_location = bucket_location.strip() |
382 | 394 | if bucket_location.upper() == "EU": |
383 | 395 | bucket_location = bucket_location.upper() |
384 | else: | |
385 | bucket_location = bucket_location.lower() | |
386 | 396 | body = "<CreateBucketConfiguration><LocationConstraint>" |
387 | 397 | body += bucket_location |
388 | 398 | body += "</LocationConstraint></CreateBucketConfiguration>" |
967 | 977 | request = self.create_request("BUCKET_LIST", bucket = uri.bucket(), |
968 | 978 | uri_params = {'policy': None}) |
969 | 979 | response = self.send_request(request) |
970 | return response['data'] | |
980 | return decode_from_s3(response['data']) | |
971 | 981 | |
972 | 982 | def set_policy(self, uri, policy): |
973 | 983 | headers = SortedDict(ignore_case = True) |
990 | 1000 | request = self.create_request("BUCKET_LIST", bucket = uri.bucket(), |
991 | 1001 | uri_params = {'cors': None}) |
992 | 1002 | response = self.send_request(request) |
993 | return response['data'] | |
1003 | return decode_from_s3(response['data']) | |
994 | 1004 | |
995 | 1005 | def set_cors(self, uri, cors): |
996 | 1006 | headers = SortedDict(ignore_case = True) |
1226 | 1236 | |
1227 | 1237 | raise S3Error(response) |
1228 | 1238 | |
1229 | def send_request(self, request, retries = _max_retries): | |
1230 | if request.resource.get('bucket') \ | |
1231 | and not request.use_signature_v2() \ | |
1232 | and S3Request.region_map.get(request.resource['bucket'], | |
1233 | Config().bucket_location) == "US": | |
1234 | debug("===== Send_request inner request to determine the bucket region =====") | |
1239 | def update_region_inner_request(self, request): | |
1240 | """Get and update region for the request if needed. | |
1241 | ||
1242 | Signature v4 needs the region of the bucket or the request will fail | |
1243 | with the indication of the correct region. | |
1244 | We are trying to avoid this failure by pre-emptively getting the | |
1245 | correct region to use, if not provided by the user. | |
1246 | """ | |
1247 | if request.resource.get('bucket') and not request.use_signature_v2() \ | |
1248 | and S3Request.region_map.get( | |
1249 | request.resource['bucket'], Config().bucket_location | |
1250 | ) == "US": | |
1251 | debug("===== SEND Inner request to determine the bucket region " | |
1252 | "=====") | |
1235 | 1253 | try: |
1236 | 1254 | s3_uri = S3Uri(u's3://' + request.resource['bucket']) |
1237 | 1255 | # "force_us_default" should prevent infinite recursion because
1239 | 1257 | region = self.get_bucket_location(s3_uri, force_us_default=True) |
1240 | 1258 | if region is not None: |
1241 | 1259 | S3Request.region_map[request.resource['bucket']] = region |
1242 | debug("===== END send_request inner request to determine the bucket region (%r) =====", | |
1243 | region) | |
1260 | debug("===== SUCCESS Inner request to determine the bucket " | |
1261 | "region (%r) =====", region) | |
1244 | 1262 | except Exception as exc: |
1245 | 1263 | # Ignore errors, it is just an optimisation, so nothing critical |
1246 | debug("Error getlocation inner request: %s", exc) | |
1264 | debug("getlocation inner request failure reason: %s", exc) | |
1265 | debug("===== FAILED Inner request to determine the bucket " | |
1266 | "region =====") | |
1267 | ||
1268 | def send_request(self, request, retries = _max_retries): | |
1269 | self.update_region_inner_request(request) | |
1247 | 1270 | |
1248 | 1271 | request.body = encode_to_s3(request.body) |
1249 | 1272 | headers = request.headers |
1269 | 1292 | attrs = parse_attrs_header(response["headers"]["x-amz-meta-s3cmd-attrs"]) |
1270 | 1293 | response["s3cmd-attrs"] = attrs |
1271 | 1294 | ConnMan.put(conn) |
1295 | except (S3SSLError, S3SSLCertificateError): | |
1296 | # In case of failure to validate the certificate for an SSL | |
1297 | # connection, no need to retry, abort immediately | |
1298 | raise | |
1272 | 1299 | except (IOError, Exception) as e: |
1273 | 1300 | debug("Response:\n" + pprint.pformat(response)) |
1274 | 1301 | if ((hasattr(e, 'errno') and e.errno |
1280 | 1307 | # When the connection is broken, BadStatusLine is raised with py2 |
1281 | 1308 | # and RemoteDisconnected is raised by py3 with a trap: |
1282 | 1309 | # RemoteDisconnected has an errno field with a None value. |
1283 | if conn: | |
1284 | # close the connection and re-establish | |
1285 | conn.counter = ConnMan.conn_max_counter | |
1286 | ConnMan.put(conn) | |
1310 | ||
1311 | # close the connection and re-establish | |
1312 | ConnMan.close(conn) | |
1287 | 1313 | if retries: |
1288 | 1314 | warning("Retrying failed request: %s (%s)" % (resource['uri'], e)) |
1289 | 1315 | warning("Waiting %d sec..." % self._fail_wait(retries)) |
1344 | 1370 | def send_file(self, request, stream, labels, buffer = '', throttle = 0, |
1345 | 1371 | retries = _max_retries, offset = 0, chunk_size = -1, |
1346 | 1372 | use_expect_continue = None): |
1347 | if request.resource.get('bucket') \ | |
1348 | and not request.use_signature_v2() \ | |
1349 | and S3Request.region_map.get(request.resource['bucket'], | |
1350 | Config().bucket_location) == "US": | |
1351 | debug("===== Send_file inner request to determine the bucket region =====") | |
1352 | try: | |
1353 | s3_uri = S3Uri(u's3://' + request.resource['bucket']) | |
1354 | # "force_us_default" should prevent infinite recursivity because | |
1355 | # it will set the region_map dict. | |
1356 | region = self.get_bucket_location(s3_uri, force_us_default=True) | |
1357 | if region is not None: | |
1358 | S3Request.region_map[request.resource['bucket']] = region | |
1359 | debug("===== END Send_file inner request to determine the bucket region (%r) =====", | |
1360 | region) | |
1361 | except Exception as exc: | |
1362 | # Ignore errors, it is just an optimisation, so nothing critical | |
1363 | debug("Error getlocation inner request: %s", exc) | |
1373 | self.update_region_inner_request(request) | |
1364 | 1374 | |
1365 | 1375 | if use_expect_continue is None: |
1366 | 1376 | use_expect_continue = self.config.use_http_expect |
1613 | 1623 | return response |
1614 | 1624 | |
1615 | 1625 | def recv_file(self, request, stream, labels, start_position = 0, retries = _max_retries): |
1616 | if request.resource.get('bucket') \ | |
1617 | and not request.use_signature_v2() \ | |
1618 | and S3Request.region_map.get(request.resource['bucket'], | |
1619 | Config().bucket_location) == "US": | |
1620 | debug("===== Recv_file inner request to determine the bucket region =====") | |
1621 | try: | |
1622 | s3_uri = S3Uri(u's3://' + request.resource['bucket']) | |
1623 | # "force_us_default" should prevent infinite recursivity because | |
1624 | # it will set the region_map dict. | |
1625 | region = self.get_bucket_location(s3_uri, force_us_default=True) | |
1626 | if region is not None: | |
1627 | S3Request.region_map[request.resource['bucket']] = region | |
1628 | debug("===== END recv_file Inner request to determine the bucket region (%r) =====", | |
1629 | region) | |
1630 | except Exception as exc: | |
1631 | # Ignore errors, it is just an optimisation, so nothing critical | |
1632 | debug("Error getlocation inner request: %s", exc) | |
1626 | self.update_region_inner_request(request) | |
1633 | 1627 | |
1634 | 1628 | method_string, resource, headers = request.get_triplet() |
1635 | 1629 | filename = stream.stream_name |
1661 | 1655 | debug("Response:\n" + pprint.pformat(response)) |
1662 | 1656 | except ParameterError as e: |
1663 | 1657 | raise |
1664 | except OSError as e: | |
1665 | raise | |
1666 | 1658 | except (IOError, Exception) as e: |
1667 | 1659 | if self.config.progress_meter: |
1668 | 1660 | progress.done("failed") |
1671 | 1663 | or "[Errno 104]" in str(e) or "[Errno 32]" in str(e) |
1672 | 1664 | ) and not isinstance(e, SocketTimeoutException): |
1673 | 1665 | raise |
1674 | if conn: | |
1675 | # close the connection and re-establish | |
1676 | conn.counter = ConnMan.conn_max_counter | |
1677 | ConnMan.put(conn) | |
1666 | ||
1667 | # close the connection and re-establish | |
1668 | ConnMan.close(conn) | |
1678 | 1669 | |
1679 | 1670 | if retries: |
1680 | 1671 | warning("Retrying failed request: %s (%s)" % (resource['uri'], e)) |
1768 | 1759 | ) and not isinstance(e, SocketTimeoutException): |
1769 | 1760 | raise |
1770 | 1761 | # close the connection and re-establish |
1771 | conn.counter = ConnMan.conn_max_counter | |
1772 | ConnMan.put(conn) | |
1762 | ConnMan.close(conn) | |
1773 | 1763 | |
1774 | 1764 | if retries: |
1775 | 1765 | warning("Retrying failed request: %s (%s)" % (resource['uri'], e)) |
13 | 13 | from .Utils import unicodise, deunicodise, check_bucket_name_dns_support |
14 | 14 | from . import Config |
15 | 15 | |
16 | if sys.version_info >= (3,0): | |
17 | PY3 = True | |
18 | else: | |
19 | PY3 = False | |
16 | ||
17 | PY3 = (sys.version_info >= (3, 0)) | |
18 | ||
20 | 19 | |
21 | 20 | class S3Uri(object): |
22 | 21 | type = None |
89 | 88 | return check_bucket_name_dns_support(Config.Config().host_bucket, self._bucket) |
90 | 89 | |
91 | 90 | def public_url(self): |
91 | public_url_protocol = "http" | |
92 | if Config.Config().public_url_use_https: | |
93 | public_url_protocol = "https" | |
92 | 94 | if self.is_dns_compatible(): |
93 | return "http://%s.%s/%s" % (self._bucket, Config.Config().host_base, self._object) | |
94 | else: | |
95 | return "http://%s/%s/%s" % (Config.Config().host_base, self._bucket, self._object) | |
95 | return "%s://%s.%s/%s" % (public_url_protocol, self._bucket, Config.Config().host_base, self._object) | |
96 | else: | |
97 | return "%s://%s/%s/%s" % (public_url_protocol, Config.Config().host_base, self._bucket, self._object) | |
96 | 98 | |
97 | 99 | def host_name(self): |
98 | 100 | if self.is_dns_compatible(): |
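The `public_url()` change above stops hardcoding `http` and instead derives the scheme from a `public_url_use_https`-style flag. A hypothetical free-function version of the same logic:

```python
def public_url(bucket, host_base, obj, dns_compatible, use_https):
    # Hypothetical standalone version of public_url() above: scheme is
    # selected by configuration, URL shape by DNS compatibility.
    proto = "https" if use_https else "http"
    if dns_compatible:
        return "%s://%s.%s/%s" % (proto, bucket, host_base, obj)
    return "%s://%s/%s/%s" % (proto, host_base, bucket, obj)
```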
100 | 100 | __all__.append("stripNameSpace") |
101 | 101 | |
102 | 102 | def getTreeFromXml(xml): |
103 | xml, xmlns = stripNameSpace(xml) | |
103 | xml, xmlns = stripNameSpace(encode_to_s3(xml)) | |
104 | 104 | try: |
105 | 105 | tree = ET.fromstring(xml) |
106 | 106 | if xmlns: |
193 | 193 | def formatSize(size, human_readable = False, floating_point = False): |
194 | 194 | size = floating_point and float(size) or int(size) |
195 | 195 | if human_readable: |
196 | coeffs = ['k', 'M', 'G', 'T'] | |
196 | coeffs = ['K', 'M', 'G', 'T'] | |
197 | 197 | coeff = "" |
198 | 198 | while size > 2048: |
199 | 199 | size /= 1024 |
200 | 200 | coeff = coeffs.pop(0) |
201 | return (size, coeff) | |
201 | return (floating_point and float(size) or int(size), coeff) | |
202 | 202 | else: |
203 | 203 | return (size, "") |
204 | 204 | __all__.append("formatSize") |
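The `formatSize` hunk above makes two changes: the first coefficient becomes an uppercase `'K'`, and the result is re-truncated after the division loop so non-floating-point callers get an integer back (each `size /= 1024` produces a float on Python 3). A hypothetical standalone sketch:

```python
def format_size(size, human_readable=False, floating_point=False):
    # Hypothetical standalone version of formatSize() above; the final
    # int()/float() mirrors the fix that keeps integer output integral.
    size = float(size) if floating_point else int(size)
    if not human_readable:
        return (size, "")
    coeffs = ['K', 'M', 'G', 'T']
    coeff = ""
    while size > 2048:
        size /= 1024
        coeff = coeffs.pop(0)
    return (float(size) if floating_point else int(size), coeff)
```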
0 | #!/usr/bin/env python2 | |
0 | #!/usr/bin/env python | |
1 | 1 | # -*- coding: utf-8 -*- |
2 | 2 | |
3 | 3 | ## -------------------------------------------------------------------- |
22 | 22 | |
23 | 23 | import sys |
24 | 24 | |
25 | if float("%d.%d" %(sys.version_info[0], sys.version_info[1])) < 2.6: | |
25 | if sys.version_info < (2, 6): | |
26 | 26 | sys.stderr.write(u"ERROR: Python 2.6 or higher required, sorry.\n") |
27 | 27 | sys.exit(EX_OSFILE) |
28 | 28 | |
29 | PY3 = (sys.version_info >= (3, 0)) | |
30 | ||
31 | import codecs | |
32 | import errno | |
33 | import glob | |
34 | import io | |
35 | import locale | |
29 | 36 | import logging |
30 | import time | |
31 | 37 | import os |
32 | 38 | import re |
33 | import errno | |
34 | import glob | |
39 | import shutil | |
40 | import socket | |
41 | import subprocess | |
42 | import tempfile | |
43 | import time | |
35 | 44 | import traceback |
36 | import codecs | |
37 | import locale | |
38 | import subprocess | |
45 | ||
46 | from copy import copy | |
47 | from optparse import OptionParser, Option, OptionValueError, IndentedHelpFormatter | |
48 | from logging import debug, info, warning, error | |
49 | ||
50 | ||
39 | 51 | try: |
40 | 52 | import htmlentitydefs |
41 | 53 | except: |
49 | 61 | # In python 3, unicode -> str, and str -> bytes |
50 | 62 | unicode = str |
51 | 63 | |
52 | import socket | |
53 | import shutil | |
54 | import tempfile | |
55 | ||
56 | from copy import copy | |
57 | from optparse import OptionParser, Option, OptionValueError, IndentedHelpFormatter | |
58 | from logging import debug, info, warning, error | |
59 | ||
60 | 64 | try: |
61 | 65 | from shutil import which |
62 | 66 | except ImportError: |
63 | 67 | # python2 fallback code |
64 | 68 | from distutils.spawn import find_executable as which |
65 | 69 | |
66 | from ssl import SSLError | |
67 | import io | |
68 | 70 | |
69 | 71 | def output(message): |
70 | 72 | sys.stdout.write(message + "\n") |
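The version-check rewrite in the hunk above is worth spelling out: comparing `sys.version_info` against a tuple is lexicographic and handles double-digit minor versions, while the removed `float("%d.%d" ...)` trick would conflate e.g. a hypothetical 2.10 with 2.1:

```python
import sys

# Tuple comparison is lexicographic, so minor versions >= 10 compare
# correctly; the removed float("%d.%d") formatting would not.
assert (2, 6) <= (2, 10)
assert float("%d.%d" % (2, 10)) == 2.1  # the pitfall the diff removes
ok = sys.version_info >= (2, 6)
```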
101 | 103 | buckets_size += size |
102 | 104 | total_size, size_coeff = formatSize(buckets_size, cfg.human_readable_sizes) |
103 | 105 | total_size_str = str(total_size) + size_coeff |
104 | output(u"".rjust(8, "-")) | |
105 | output(u"%s Total" % (total_size_str.ljust(8))) | |
106 | output(u"".rjust(12, "-")) | |
107 | output(u"%s Total" % (total_size_str.ljust(12))) | |
106 | 108 | return size |
107 | 109 | |
108 | 110 | def subcmd_bucket_usage(s3, uri): |
130 | 132 | except KeyboardInterrupt as e: |
131 | 133 | extra_info = u' [interrupted]' |
132 | 134 | |
133 | total_size, size_coeff = formatSize(bucket_size, Config().human_readable_sizes) | |
134 | total_size_str = str(total_size) + size_coeff | |
135 | output(u"%s %s objects %s%s" % (total_size_str.ljust(8), object_count, uri, extra_info)) | |
135 | total_size_str = u"%d%s" % formatSize(bucket_size, | |
136 | Config().human_readable_sizes) | |
137 | if Config().human_readable_sizes: | |
138 | total_size_str = total_size_str.rjust(5) | |
139 | else: | |
140 | total_size_str = total_size_str.rjust(12) | |
141 | output(u"%s %7s objects %s%s" % (total_size_str, object_count, uri, | |
142 | extra_info)) | |
136 | 143 | return bucket_size |
137 | 144 | |
138 | 145 | def cmd_ls(args): |
183 | 190 | error(S3.codes[e.info["Code"]] % bucket) |
184 | 191 | raise |
185 | 192 | |
193 | # md5 sums are 32 chars long, but for multipart there could be a suffix | |
194 | if Config().human_readable_sizes: | |
195 | # %(size)5s%(coeff)1s | |
196 | format_size = u"%5d%1s" | |
197 | dir_str = u"DIR".rjust(6) | |
198 | else: | |
199 | format_size = u"%12d%s" | |
200 | dir_str = u"DIR".rjust(12) | |
186 | 201 | if cfg.long_listing: |
187 | format_string = u"%(timestamp)16s %(size)9s%(coeff)1s %(md5)32s %(storageclass)s %(uri)s" | |
202 | format_string = u"%(timestamp)16s %(size)s %(md5)-35s %(storageclass)-11s %(uri)s" | |
188 | 203 | elif cfg.list_md5: |
189 | format_string = u"%(timestamp)16s %(size)9s%(coeff)1s %(md5)32s %(uri)s" | |
204 | format_string = u"%(timestamp)16s %(size)s %(md5)-35s %(uri)s" | |
190 | 205 | else: |
191 | format_string = u"%(timestamp)16s %(size)9s%(coeff)1s %(uri)s" | |
206 | format_string = u"%(timestamp)16s %(size)s %(uri)s" | |
192 | 207 | |
193 | 208 | for prefix in response['common_prefixes']: |
194 | 209 | output(format_string % { |
195 | 210 | "timestamp": "", |
196 | "size": "DIR", | |
197 | "coeff": "", | |
211 | "size": dir_str, | |
198 | 212 | "md5": "", |
199 | 213 | "storageclass": "", |
200 | 214 | "uri": uri.compose_uri(bucket, prefix["Prefix"])}) |
212 | 226 | except KeyError: |
213 | 227 | pass |
214 | 228 | |
215 | size, size_coeff = formatSize(object["Size"], Config().human_readable_sizes) | |
229 | size_and_coeff = formatSize(object["Size"], | |
230 | Config().human_readable_sizes) | |
216 | 231 | output(format_string % { |
217 | 232 | "timestamp": formatDateTime(object["LastModified"]), |
218 | "size" : str(size), | |
219 | "coeff": size_coeff, | |
233 | "size" : format_size % size_and_coeff, | |
220 | 234 | "md5" : md5, |
221 | 235 | "storageclass" : storageclass, |
222 | 236 | "uri": uri.compose_uri(bucket, object["Key"]), |
304 | 318 | raise ParameterError("Expecting S3 URI with just the bucket name set instead of '%s'" % arg) |
305 | 319 | try: |
306 | 320 | response = s3.expiration_set(uri, cfg.bucket_location) |
307 | if response["status"] is 200: | |
321 | if response["status"] == 200: | |
308 | 322 | output(u"Bucket '%s': expiration configuration is set." % (uri.uri())) |
309 | elif response["status"] is 204: | |
323 | elif response["status"] == 204: | |
310 | 324 | output(u"Bucket '%s': expiration configuration is deleted." % (uri.uri())) |
311 | 325 | except S3Error as e: |
312 | 326 | if e.info["Code"] in S3.codes: |
670 | 684 | response = s3.object_batch_delete_uri_strs([uri.compose_uri(bucket, item['Key']) for item in to_delete]) |
671 | 685 | deleted_bytes += sum(int(item["Size"]) for item in to_delete) |
672 | 686 | deleted_count += len(to_delete) |
673 | output('\n'.join(u"delete: '%s'" % uri.compose_uri(bucket, p['Key']) for p in to_delete)) | |
687 | output(u'\n'.join(u"delete: '%s'" % uri.compose_uri(bucket, p['Key']) for p in to_delete)) | |
674 | 688 | |
675 | 689 | if deleted_count: |
676 | 690 | # display summary data of deleted files |
702 | 716 | debug(u"Batch delete %d, remaining %d" % (len(to_delete), len(remote_list))) |
703 | 717 | if not cfg.dry_run: |
704 | 718 | response = s3.object_batch_delete(to_delete) |
705 | output('\n'.join((u"delete: '%s'" % to_delete[p]['object_uri_str']) for p in to_delete)) | |
719 | output(u'\n'.join((u"delete: '%s'" % to_delete[p]['object_uri_str']) for p in to_delete)) | |
706 | 720 | to_delete = remote_list[:1000] |
707 | 721 | remote_list = remote_list[1000:] |
708 | 722 | |
992 | 1006 | |
993 | 1007 | if uri.has_object(): |
994 | 1008 | # Temporary hack for performance + python3 compatibility |
995 | try: | |
996 | # Check python 2 first | |
1009 | if PY3: | |
1010 | info_headers_iter = info['headers'].items() | |
1011 | else: | |
997 | 1012 | info_headers_iter = info['headers'].iteritems() |
998 | except: | |
999 | info_headers_iter = info['headers'].items() | |
1000 | 1013 | for header, value in info_headers_iter: |
1001 | 1014 | if header.startswith('x-amz-meta-'): |
1002 | 1015 | output(u" %s: %s" % (header, value)) |
1099 | 1112 | extra_headers = copy(cfg.extra_headers) |
1100 | 1113 | try: |
1101 | 1114 | response = s3.object_copy(src_uri, dst_uri, extra_headers) |
1102 | output("remote copy: '%(src)s' -> '%(dst)s'" % { "src" : src_uri, "dst" : dst_uri }) | |
1115 | output(u"remote copy: '%s' -> '%s'" % (src_uri, dst_uri)) | |
1103 | 1116 | total_nb_files += 1 |
1104 | 1117 | total_size += item.get(u'size', 0) |
1105 | except S3Error as e: | |
1118 | except S3Error as exc: | |
1106 | 1119 | ret = EX_PARTIAL |
1107 | error("File '%(src)s' could not be copied: %(e)s" % { "src" : src_uri, "e" : e }) | |
1120 | error(u"File '%s' could not be copied: %s", src_uri, exc) | |
1108 | 1121 | if cfg.stop_on_error: |
1109 | 1122 | raise |
1110 | 1123 | return ret, seq, total_nb_files, total_size |
1937 | 1950 | s3 = S3(cfg) |
1938 | 1951 | uri = S3Uri(args[1]) |
1939 | 1952 | policy_file = args[0] |
1940 | policy = open(deunicodise(policy_file), 'r').read() | |
1941 | ||
1942 | if cfg.dry_run: return EX_OK | |
1953 | ||
1954 | with open(deunicodise(policy_file), 'r') as fp: | |
1955 | policy = fp.read() | |
1956 | ||
1957 | if cfg.dry_run: | |
1958 | return EX_OK | |
1943 | 1959 | |
1944 | 1960 | response = s3.set_policy(uri, policy) |
1945 | 1961 | |
1967 | 1983 | s3 = S3(cfg) |
1968 | 1984 | uri = S3Uri(args[1]) |
1969 | 1985 | cors_file = args[0] |
1970 | cors = open(deunicodise(cors_file), 'r').read() | |
1971 | ||
1972 | if cfg.dry_run: return EX_OK | |
1986 | ||
1987 | with open(deunicodise(cors_file), 'r') as fp: | |
1988 | cors = fp.read() | |
1989 | ||
1990 | if cfg.dry_run: | |
1991 | return EX_OK | |
1973 | 1992 | |
1974 | 1993 | response = s3.set_cors(uri, cors) |
1975 | 1994 | |
2012 | 2031 | s3 = S3(cfg) |
2013 | 2032 | uri = S3Uri(args[1]) |
2014 | 2033 | lifecycle_policy_file = args[0] |
2015 | lifecycle_policy = open(deunicodise(lifecycle_policy_file), 'r').read() | |
2016 | ||
2017 | if cfg.dry_run: return EX_OK | |
2034 | ||
2035 | with open(deunicodise(lifecycle_policy_file), 'r') as fp: | |
2036 | lifecycle_policy = fp.read() | |
2037 | ||
2038 | if cfg.dry_run: | |
2039 | return EX_OK | |
2018 | 2040 | |
2019 | 2041 | response = s3.set_lifecycle_policy(uri, lifecycle_policy) |
2020 | 2042 | |
2061 | 2083 | output(u"Initiated\tPath\tId") |
2062 | 2084 | for mpupload in parseNodes(tree): |
2063 | 2085 | try: |
2064 | output("%s\t%s\t%s" % (mpupload['Initiated'], "s3://" + uri.bucket() + "/" + mpupload['Key'], mpupload['UploadId'])) | |
2086 | output(u"%s\t%s\t%s" % (mpupload['Initiated'], "s3://" + uri.bucket() + "/" + mpupload['Key'], mpupload['UploadId'])) | |
2065 | 2087 | except KeyError: |
2066 | 2088 | pass |
2067 | 2089 | return EX_OK |
2090 | 2112 | output(u"LastModified\t\t\tPartNumber\tETag\tSize") |
2091 | 2113 | for mpupload in parseNodes(tree): |
2092 | 2114 | try: |
2093 | output("%s\t%s\t%s\t%s" % (mpupload['LastModified'], mpupload['PartNumber'], mpupload['ETag'], mpupload['Size'])) | |
2115 | output(u"%s\t%s\t%s\t%s" % (mpupload['LastModified'], mpupload['PartNumber'], mpupload['ETag'], mpupload['Size'])) | |
2094 | 2116 | except: |
2095 | 2117 | pass |
2096 | 2118 | return EX_OK |
2120 | 2142 | |
2121 | 2143 | def cmd_sign(args): |
2122 | 2144 | string_to_sign = args.pop() |
2123 | debug("string-to-sign: %r" % string_to_sign) | |
2145 | debug(u"string-to-sign: %r" % string_to_sign) | |
2124 | 2146 | signature = Crypto.sign_string_v2(encode_to_s3(string_to_sign)) |
2125 | output("Signature: %s" % decode_from_s3(signature)) | |
2147 | output(u"Signature: %s" % decode_from_s3(signature)) | |
2126 | 2148 | return EX_OK |
2127 | 2149 | |
2128 | 2150 | def cmd_signurl(args): |
2191 | 2213 | src = S3Uri("s3://%s/%s" % (culprit.bucket(), key_bin)) |
2192 | 2214 | dst = S3Uri("s3://%s/%s" % (culprit.bucket(), key_new)) |
2193 | 2215 | if cfg.dry_run: |
2194 | output("[--dry-run] File %r would be renamed to %s" % (key_bin, key_new)) | |
2216 | output(u"[--dry-run] File %r would be renamed to %s" % (key_bin, key_new)) | |
2195 | 2217 | continue |
2196 | 2218 | try: |
2197 | 2219 | resp_move = s3.object_move(src, dst) |
2198 | 2220 | if resp_move['status'] == 200: |
2199 | output("File '%r' renamed to '%s'" % (key_bin, key_new)) | |
2221 | output(u"File '%r' renamed to '%s'" % (key_bin, key_new)) | |
2200 | 2222 | count += 1 |
2201 | 2223 | else: |
2202 | error("Something went wrong for: %r" % key) | |
2203 | error("Please report the problem to s3tools-bugs@lists.sourceforge.net") | |
2224 | error(u"Something went wrong for: %r" % key) | |
2225 | error(u"Please report the problem to s3tools-bugs@lists.sourceforge.net") | |
2204 | 2226 | except S3Error: |
2205 | error("Something went wrong for: %r" % key) | |
2206 | error("Please report the problem to s3tools-bugs@lists.sourceforge.net") | |
2227 | error(u"Something went wrong for: %r" % key) | |
2228 | error(u"Please report the problem to s3tools-bugs@lists.sourceforge.net") | |
2207 | 2229 | |
2208 | 2230 | if count > 0: |
2209 | warning("Fixed %d files' names. Their ACL were reset to Private." % count) | |
2210 | warning("Use 's3cmd setacl --acl-public s3://...' to make") | |
2211 | warning("them publicly readable if required.") | |
2231 | warning(u"Fixed %d files' names. Their ACLs were reset to Private." % count) | |
2232 | warning(u"Use 's3cmd setacl --acl-public s3://...' to make") | |
2233 | warning(u"them publicly readable if required.") | |
2212 | 2234 | return EX_OK |
2213 | 2235 | |
2214 | 2236 | def resolve_list(lst, args): |
2218 | 2240 | return retval |
2219 | 2241 | |
2220 | 2242 | def gpg_command(command, passphrase = ""): |
2221 | debug("GPG command: " + " ".join(command)) | |
2243 | debug(u"GPG command: " + " ".join(command)) | |
2222 | 2244 | command = [deunicodise(cmd_entry) for cmd_entry in command] |
2223 | 2245 | p = subprocess.Popen(command, stdin = subprocess.PIPE, stdout = subprocess.PIPE, stderr = subprocess.STDOUT, |
2224 | 2246 | close_fds = True) |
2225 | 2247 | p_stdout, p_stderr = p.communicate(deunicodise(passphrase + "\n")) |
2226 | debug("GPG output:") | |
2248 | debug(u"GPG output:") | |
2227 | 2249 | for line in unicodise(p_stdout).split("\n"): |
2228 | debug("GPG: " + line) | |
2250 | debug(u"GPG: " + line) | |
2229 | 2251 | p_exitcode = p.wait() |
2230 | 2252 | return p_exitcode |
2231 | 2253 | |
2374 | 2396 | os.unlink(deunicodise(ret_enc[1])) |
2375 | 2397 | os.unlink(deunicodise(ret_dec[1])) |
2376 | 2398 | if hash[0] == hash[2] and hash[0] != hash[1]: |
2377 | output ("Success. Encryption and decryption worked fine :-)") | |
2399 | output(u"Success. Encryption and decryption worked fine :-)") | |
2378 | 2400 | else: |
2379 | 2401 | raise Exception("Encryption verification error.") |
2380 | 2402 | |
2423 | 2445 | |
2424 | 2446 | def process_patterns_from_file(fname, patterns_list): |
2425 | 2447 | try: |
2426 | fn = open(deunicodise(fname), "rt") | |
2448 | with open(deunicodise(fname), "rt") as fn: | |
2449 | for pattern in fn: | |
2450 | pattern = unicodise(pattern).strip() | |
2451 | if re.match("^#", pattern) or re.match("^\s*$", pattern): | |
2452 | continue | |
2453 | debug(u"%s: adding rule: %s" % (fname, pattern)) | |
2454 | patterns_list.append(pattern) | |
2427 | 2455 | except IOError as e: |
2428 | 2456 | error(e) |
2429 | 2457 | sys.exit(EX_IOERR) |
2430 | for pattern in fn: | |
2431 | pattern = unicodise(pattern).strip() | |
2432 | if re.match("^#", pattern) or re.match("^\s*$", pattern): | |
2433 | continue | |
2434 | debug(u"%s: adding rule: %s" % (fname, pattern)) | |
2435 | patterns_list.append(pattern) | |
2436 | 2458 | |
2437 | 2459 | return patterns_list |
2438 | 2460 | |
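The `with`-block refactor above also fixes a leak (the pattern file is now closed) while keeping the same filtering: comment lines (leading `#`) and blank lines are skipped, everything else becomes a rule. A hypothetical sketch of that filter over pre-read lines:

```python
import re

def read_patterns(lines):
    # Sketch of the filter inside the with-block above: skip comments
    # and blank lines, keep the remaining stripped patterns as rules.
    patterns = []
    for raw in lines:
        pattern = raw.strip()
        if re.match(r"^#", pattern) or re.match(r"^\s*$", pattern):
            continue
        patterns.append(pattern)
    return patterns
```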
2442 | 2464 | Process --exclude / --include GLOB and REGEXP patterns. |
2443 | 2465 | 'option_txt' is 'exclude' / 'include' / 'rexclude' / 'rinclude' |
2444 | 2466 | Returns: patterns_compiled, patterns_text |
2467 | Note: process_patterns_from_file will ignore lines starting with # as these | |
2468 | are comments. To escape the initial #, so it can be used in a file name, one | |
2469 | can use: "[#]" (for exclude) or "\#" (for rexclude). | |
2445 | 2470 | """ |
2446 | 2471 | |
2447 | 2472 | patterns_compiled = [] |
2649 | 2674 | optparser.add_option( "--no-encrypt", dest="encrypt", action="store_false", help="Don't encrypt files.") |
2650 | 2675 | optparser.add_option("-f", "--force", dest="force", action="store_true", help="Force overwrite and other dangerous operations.") |
2651 | 2676 | optparser.add_option( "--continue", dest="get_continue", action="store_true", help="Continue getting a partially downloaded file (only for [get] command).") |
2652 | optparser.add_option( "--continue-put", dest="put_continue", action="store_true", help="Continue uploading partially uploaded files or multipart upload parts. Restarts/parts files that don't have matching size and md5. Skips files/parts that do. Note: md5sum checks are not always sufficient to check (part) file equality. Enable this at your own risk.") | |
2677 | optparser.add_option( "--continue-put", dest="put_continue", action="store_true", help="Continue uploading partially uploaded files or multipart upload parts. Restarts parts/files that don't have matching size and md5. Skips files/parts that do. Note: md5sum checks are not always sufficient to check (part) file equality. Enable this at your own risk.") | |
2653 | 2678 | optparser.add_option( "--upload-id", dest="upload_id", help="UploadId for Multipart Upload, in case you want to continue an existing upload (equivalent to --continue-put) and there are multiple partial uploads. Use s3cmd multipart [URI] to see what UploadIds are associated with the given URI.")
2654 | 2679 | optparser.add_option( "--skip-existing", dest="skip_existing", action="store_true", help="Skip over files that exist at the destination (only for [get] and [sync] commands).") |
2655 | 2680 | optparser.add_option("-r", "--recursive", dest="recursive", action="store_true", help="Recursive upload, download or removal.") |
2660 | 2685 | optparser.add_option( "--acl-grant", dest="acl_grants", type="s3acl", action="append", metavar="PERMISSION:EMAIL or USER_CANONICAL_ID", help="Grant stated permission to a given amazon user. Permission is one of: read, write, read_acp, write_acp, full_control, all") |
2661 | 2686 | optparser.add_option( "--acl-revoke", dest="acl_revokes", type="s3acl", action="append", metavar="PERMISSION:USER_CANONICAL_ID", help="Revoke stated permission for a given amazon user. Permission is one of: read, write, read_acp, write_acp, full_control, all") |
2662 | 2687 | |
2663 | optparser.add_option("-D", "--restore-days", dest="restore_days", action="store", help="Number of days to keep restored file available (only for 'restore' command).", metavar="NUM") | |
2688 | optparser.add_option("-D", "--restore-days", dest="restore_days", action="store", help="Number of days to keep restored file available (only for 'restore' command). Default is 1 day.", metavar="NUM") | |
2664 | 2689 | optparser.add_option( "--restore-priority", dest="restore_priority", action="store", choices=['standard', 'expedited', 'bulk'], help="Priority for restoring files from S3 Glacier (only for 'restore' command). Choices available: bulk, standard, expedited") |
2665 | 2690 | |
2666 | 2691 | optparser.add_option( "--delete-removed", dest="delete_removed", action="store_true", help="Delete destination objects with no corresponding source file [sync]") |
2688 | 2713 | optparser.add_option( "--host-bucket", dest="host_bucket", help="DNS-style bucket+hostname:port template for accessing a bucket (default: %s)" % (cfg.host_bucket)) |
2689 | 2714 | optparser.add_option( "--reduced-redundancy", "--rr", dest="reduced_redundancy", action="store_true", help="Store object with 'Reduced redundancy'. Lower per-GB price. [put, cp, mv]") |
2690 | 2715 | optparser.add_option( "--no-reduced-redundancy", "--no-rr", dest="reduced_redundancy", action="store_false", help="Store object without 'Reduced redundancy'. Higher per-GB price. [put, cp, mv]") |
2691 | optparser.add_option( "--storage-class", dest="storage_class", action="store", metavar="CLASS", help="Store object with specified CLASS (STANDARD, STANDARD_IA, or REDUCED_REDUNDANCY). Lower per-GB price. [put, cp, mv]") | |
2716 | optparser.add_option( "--storage-class", dest="storage_class", action="store", metavar="CLASS", help="Store object with specified CLASS (STANDARD, STANDARD_IA, ONEZONE_IA, INTELLIGENT_TIERING, GLACIER or DEEP_ARCHIVE). [put, cp, mv]") | |
2692 | 2717 | optparser.add_option( "--access-logging-target-prefix", dest="log_target_prefix", help="Target prefix for access logs (S3 URI) (for [cfmodify] and [accesslog] commands)") |
2693 | 2718 | optparser.add_option( "--no-access-logging", dest="log_target_prefix", action="store_false", help="Disable access logging (for [cfmodify] and [accesslog] commands)") |
2694 | 2719 | |
2748 | 2773 | optparser.add_option( "--no-check-hostname", dest="check_ssl_hostname", action="store_false", help="Do not check SSL certificate hostname validity") |
2749 | 2774 | optparser.add_option( "--signature-v2", dest="signature_v2", action="store_true", help="Use AWS Signature version 2 instead of newer signature methods. Helpful for S3-like systems that don't have AWS Signature v4 yet.") |
2750 | 2775 | optparser.add_option( "--limit-rate", dest="limitrate", action="store", type="string", help="Limit the upload or download speed to amount bytes per second. Amount may be expressed in bytes, kilobytes with the k suffix, or megabytes with the m suffix") |
2776 | optparser.add_option( "--no-connection-pooling", dest="connection_pooling", action="store_false", help="Disable connection re-use") | |
2751 | 2777 | optparser.add_option( "--requester-pays", dest="requester_pays", action="store_true", help="Set the REQUESTER PAYS flag for operations") |
2752 | 2778 | optparser.add_option("-l", "--long-listing", dest="long_listing", action="store_true", help="Produce long listing [ls]") |
2753 | 2779 | optparser.add_option( "--stop-on-error", dest="stop_on_error", action="store_true", help="stop if error in transfer") |
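The `--limit-rate` help text above describes byte amounts with optional `k`/`m` suffixes. A minimal parser for that convention might look as follows; `parse_rate` is a hypothetical helper (it is not s3cmd's own code), and the help text does not say whether the suffixes mean powers of 1024 or 1000, so the sketch assumes 1024:

```python
def parse_rate(amount):
    """Parse a --limit-rate style value: plain bytes, 'k' kilobytes, 'm' megabytes."""
    amount = amount.strip().lower()
    multipliers = {"k": 1024, "m": 1024 * 1024}
    if amount and amount[-1] in multipliers:
        return int(amount[:-1]) * multipliers[amount[-1]]
    return int(amount)

# "500" -> 500 bytes/s, "64k" -> 65536 bytes/s, "1m" -> 1048576 bytes/s
limits = [parse_rate(v) for v in ("500", "64k", "1m")]
```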
2820 | 2846 | key, val = unicodise_s(hdr).split(":", 1) |
2821 | 2847 | except ValueError: |
2822 | 2848 | raise ParameterError("Invalid header format: %s" % unicodise_s(hdr)) |
2823 | key_inval = re.sub("[a-zA-Z0-9-.]", "", key) | |
2849 | # character restrictions from the HTTP header field name specification | |
2850 | key_inval = re.sub(r"[a-zA-Z0-9\-.!#$%&*+^_|]", "", key) | |
2824 | 2851 | if key_inval: |
2825 | 2852 | key_inval = key_inval.replace(" ", "<space>") |
2826 | 2853 | key_inval = key_inval.replace("\t", "<tab>") |
2827 | raise ParameterError("Invalid character(s) in header name '%s': \"%s\"" % (key, key_inval)) | |
2828 | debug(u"Updating Config.Config extra_headers[%s] -> %s" % (key.replace('_', '-').strip().lower(), val.strip())) | |
2829 | cfg.extra_headers[key.replace('_', '-').strip().lower()] = val.strip() | |
2854 | raise ParameterError("Invalid character(s) in header name '%s'" | |
2855 | ": \"%s\"" % (key, key_inval)) | |
2856 | debug(u"Updating Config.Config extra_headers[%s] -> %s" % | |
2857 | (key.strip().lower(), val.strip())) | |
2858 | cfg.extra_headers[key.strip().lower()] = val.strip() | |
2830 | 2859 | |
2831 | 2860 | # Process --remove-header |
2832 | 2861 | if options.remove_headers: |
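The widened character class in the hunk above strips every character that is legal in an HTTP header field name and reports whatever is left over as invalid. The check can be exercised in isolation; `invalid_header_chars` is a hypothetical wrapper around the same regex, not an s3cmd function:

```python
import re

# Character class from the updated check: letters, digits, and -.!#$%&*+^_|
# (the HTTP token characters accepted for header names in this diff).
VALID_HEADER_CHARS = r"[a-zA-Z0-9\-.!#$%&*+^_|]"

def invalid_header_chars(key):
    """Return the characters of `key` that the header-name check would reject."""
    leftover = re.sub(VALID_HEADER_CHARS, "", key)
    # Make whitespace visible, mirroring the error-message handling above.
    return leftover.replace(" ", "<space>").replace("\t", "<tab>")
```

An empty result means the name passes; anything else would trigger the `ParameterError` shown in the diff.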
3118 | 3147 | sys.stderr.write("See ya!\n") |
3119 | 3148 | sys.exit(EX_BREAK) |
3120 | 3149 | |
3121 | except SSLError as e: | |
3150 | except (S3SSLError, S3SSLCertificateError) as e: | |
3122 | 3151 | # SSLError is a subtype of IOError |
3123 | 3152 | error("SSL certificate verification failure: %s" % e) |
3124 | 3153 | sys.exit(EX_ACCESSDENIED) |
215 | 215 | .TP |
216 | 216 | \fB\-\-continue\-put\fR |
217 | 217 | Continue uploading partially uploaded files or |
218 | multipart upload parts. Restarts/parts files that | |
218 | multipart upload parts. Restarts parts/files that | |
219 | 219 | don't have matching size and md5. Skips files/parts |
220 | 220 | that do. Note: md5sum checks are not always |
221 | 221 | sufficient to check (part) file equality. Enable this |
263 | 263 | .TP |
264 | 264 | \fB\-D\fR NUM, \fB\-\-restore\-days\fR=NUM |
265 | 265 | Number of days to keep restored file available (only |
266 | for 'restore' command). | |
266 | for 'restore' command). Default is 1 day. | |
267 | 267 | .TP |
268 | 268 | \fB\-\-restore\-priority\fR=RESTORE_PRIORITY |
269 | 269 | Priority for restoring files from S3 Glacier (only for |
366 | 366 | .TP |
367 | 367 | \fB\-\-storage\-class\fR=CLASS |
368 | 368 | Store object with specified CLASS (STANDARD, |
369 | STANDARD_IA, or REDUCED_REDUNDANCY). Lower per\-GB | |
370 | price. [put, cp, mv] | |
369 | STANDARD_IA, ONEZONE_IA, INTELLIGENT_TIERING, GLACIER | |
370 | or DEEP_ARCHIVE). [put, cp, mv] | |
371 | 371 | .TP |
372 | 372 | \fB\-\-access\-logging\-target\-prefix\fR=LOG_TARGET_PREFIX |
373 | 373 | Target prefix for access logs (S3 URI) (for [cfmodify] |
523 | 523 | Enable debug output. |
524 | 524 | .TP |
525 | 525 | \fB\-\-version\fR |
526 | Show s3cmd version (2.0.2) and exit. | |
526 | Show s3cmd version (2.1.0) and exit. | |
527 | 527 | .TP |
528 | 528 | \fB\-F\fR, \fB\-\-follow\-symlinks\fR |
529 | 529 | Follow symbolic links as if they are regular files |
559 | 559 | Limit the upload or download speed to amount bytes per |
560 | 560 | second. Amount may be expressed in bytes, kilobytes |
561 | 561 | with the k suffix, or megabytes with the m suffix |
562 | .TP | |
563 | \fB\-\-no\-connection\-pooling\fR | |
564 | Disable connection re\-use | |
562 | 565 | .TP |
563 | 566 | \fB\-\-requester\-pays\fR |
564 | 567 | Set the REQUESTER PAYS flag for operations |
0 | Metadata-Version: 1.1 | |
0 | Metadata-Version: 1.2 | |
1 | 1 | Name: s3cmd |
2 | Version: 2.0.2 | |
2 | Version: 2.1.0 | |
3 | 3 | Summary: Command line tool for managing Amazon S3 and CloudFront services |
4 | 4 | Home-page: http://s3tools.org |
5 | Author: github.com/mdomsch, github.com/matteobar, github.com/fviard | |
6 | Author-email: s3tools-bugs@lists.sourceforge.net | |
5 | Author: Michal Ludvig | |
6 | Author-email: michal@logix.cz | |
7 | Maintainer: github.com/fviard, github.com/matteobar | |
8 | Maintainer-email: s3tools-bugs@lists.sourceforge.net | |
7 | 9 | License: GNU GPL v2+ |
8 | Description-Content-Type: UNKNOWN | |
9 | 10 | Description: |
10 | 11 | |
11 | 12 | S3cmd lets you copy files from/to Amazon S3 |
17 | 18 | |
18 | 19 | Authors: |
19 | 20 | -------- |
21 | Florent Viard <florent@sodria.com> | |
20 | 22 | Michal Ludvig <michal@logix.cz> |
23 | Matt Domsch (github.com/mdomsch) | |
21 | 24 | |
22 | 25 | Platform: UNKNOWN |
23 | 26 | Classifier: Development Status :: 5 - Production/Stable |
40 | 43 | Classifier: Programming Language :: Python :: 3.4 |
41 | 44 | Classifier: Programming Language :: Python :: 3.5 |
42 | 45 | Classifier: Programming Language :: Python :: 3.6 |
46 | Classifier: Programming Language :: Python :: 3.7 | |
47 | Classifier: Programming Language :: Python :: 3.8 | |
43 | 48 | Classifier: Topic :: System :: Archiving |
44 | 49 | Classifier: Topic :: Utilities |
0 | #!/usr/bin/env python2 | |
1 | # -*- coding=utf-8 -*- | |
0 | #!/usr/bin/env python | |
1 | # -*- coding: utf-8 -*- | |
2 | 2 | |
3 | 3 | from __future__ import print_function |
4 | 4 | |
41 | 41 | ## Re-create the manpage |
42 | 42 | ## (Beware! Perl script on the loose!!) |
43 | 43 | if len(sys.argv) > 1 and sys.argv[1] == "sdist": |
44 | if os.stat_result(os.stat("s3cmd.1")).st_mtime < os.stat_result(os.stat("s3cmd")).st_mtime: | |
44 | if os.stat_result(os.stat("s3cmd.1")).st_mtime \ | |
45 | < os.stat_result(os.stat("s3cmd")).st_mtime: | |
45 | 46 | sys.stderr.write("Re-create man page first!\n") |
46 | 47 | sys.stderr.write("Run: ./s3cmd --help | ./format-manpage.pl > s3cmd.1\n") |
47 | 48 | sys.exit(1) |
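The sdist guard reformatted above compares modification times to detect a stale man page (note that the `os.stat_result(os.stat(...))` wrapper is redundant, since `os.stat` already returns an `os.stat_result`). The same check in plain form, with the file names taken from the diff:

```python
import os

def man_page_stale(man="s3cmd.1", script="s3cmd"):
    """True if the generated man page is older than the script it documents."""
    return os.stat(man).st_mtime < os.stat(script).st_mtime
```

When this returns True during `sdist`, the build aborts and asks the packager to regenerate `s3cmd.1` from `./s3cmd --help`.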
52 | 53 | man_path = os.getenv("S3CMD_INSTPATH_MAN") or "share/man" |
53 | 54 | doc_path = os.getenv("S3CMD_INSTPATH_DOC") or "share/doc/packages" |
54 | 55 | data_files = [ |
55 | (doc_path+"/s3cmd", [ "README.md", "INSTALL", "LICENSE", "NEWS" ]), | |
56 | (man_path+"/man1", [ "s3cmd.1" ] ), | |
56 | (doc_path+"/s3cmd", ["README.md", "INSTALL.md", "LICENSE", "NEWS"]), | |
57 | (man_path+"/man1", ["s3cmd.1"]), | |
57 | 58 | ] |
58 | 59 | else: |
59 | 60 | data_files = None |
61 | 62 | ## Main distutils info |
62 | 63 | setup( |
63 | 64 | ## Content description |
64 | name = S3.PkgInfo.package, | |
65 | version = S3.PkgInfo.version, | |
66 | packages = [ 'S3' ], | |
67 | scripts = ['s3cmd'], | |
68 | data_files = data_files, | |
65 | name=S3.PkgInfo.package, | |
66 | version=S3.PkgInfo.version, | |
67 | packages=['S3'], | |
68 | scripts=['s3cmd'], | |
69 | data_files=data_files, | |
69 | 70 | |
70 | 71 | ## Packaging details |
71 | author = "Michal Ludvig", | |
72 | author_email = "michal@logix.cz", | |
73 | maintainer = "github.com/mdomsch, github.com/matteobar, github.com/fviard", | |
74 | maintainer_email = "s3tools-bugs@lists.sourceforge.net", | |
75 | url = S3.PkgInfo.url, | |
76 | license = S3.PkgInfo.license, | |
77 | description = S3.PkgInfo.short_description, | |
78 | long_description = """ | |
72 | author="Michal Ludvig", | |
73 | author_email="michal@logix.cz", | |
74 | maintainer="github.com/fviard, github.com/matteobar", | |
75 | maintainer_email="s3tools-bugs@lists.sourceforge.net", | |
76 | url=S3.PkgInfo.url, | |
77 | license=S3.PkgInfo.license, | |
78 | description=S3.PkgInfo.short_description, | |
79 | long_description=""" | |
79 | 80 | %s |
80 | 81 | |
81 | 82 | Authors: |
82 | 83 | -------- |
84 | Florent Viard <florent@sodria.com> | |
83 | 85 | Michal Ludvig <michal@logix.cz> |
86 | Matt Domsch (github.com/mdomsch) | |
84 | 87 | """ % (S3.PkgInfo.long_description), |
85 | 88 | |
86 | classifiers = [ | |
89 | classifiers=[ | |
87 | 90 | 'Development Status :: 5 - Production/Stable', |
88 | 91 | 'Environment :: Console', |
89 | 92 | 'Environment :: MacOS X', |
104 | 107 | 'Programming Language :: Python :: 3.4', |
105 | 108 | 'Programming Language :: Python :: 3.5', |
106 | 109 | 'Programming Language :: Python :: 3.6', |
110 | 'Programming Language :: Python :: 3.7', | |
111 | 'Programming Language :: Python :: 3.8', | |
107 | 112 | 'Topic :: System :: Archiving', |
108 | 113 | 'Topic :: Utilities', |
109 | 114 | ], |
110 | 115 | |
111 | install_requires = ["python-dateutil", "python-magic"] | |
116 | install_requires=["python-dateutil", "python-magic"] | |
112 | 117 | ) |
113 | 118 | |
114 | 119 | # vim:et:ts=4:sts=4:ai |