s3cmd / c443e7d
Update upstream source from tag 'upstream/2.1.0'
Update to upstream version '2.1.0' with Debian dir bddc557a223bb8ac9cf06f0ead0a801384e05971
Gianfranco Costamagna · 3 years ago
26 changed files with 548 additions and 376 deletions.
+0
-107
INSTALL
0 Installation of s3cmd package
1 =============================
2
3 Copyright:
4 TGRMN Software and contributors
5
6 S3tools / S3cmd project homepage:
7 http://s3tools.org
8
9 !!!
10 !!! Please consult README file for setup, usage and examples!
11 !!!
12
13 Package formats
14 ---------------
15 S3cmd is distributed in two formats:
16
17 1) Prebuilt RPM file - should work on most RPM-based
18 distributions
19
20 2) Source .tar.gz package
21
22
23 Installation of RPM package
24 ---------------------------
25 As user "root" run:
26
27 rpm -ivh s3cmd-X.Y.Z.noarch.rpm
28
29 where X.Y.Z is the most recent s3cmd release version.
30
31 You may be informed about missing dependencies
32 on Python or some libraries. Please consult your
33 distribution documentation on ways to solve the problem.
34
35 Installation from PyPI (the Python Package Index)
36 --------------------------------------------------
37 S3cmd can be installed from PyPI using pip (the recommended tool for installing Python packages).
38
39 1) Confirm you have pip installed. The pip home page is: https://pypi.python.org/pypi/pip
40 Example install on a RHEL yum-based machine:
41 sudo yum install python-pip
42 2) Install with pip
43 sudo pip install s3cmd
44
45 Installation from zip file
46 --------------------------
47 There are three options to run s3cmd from the source tarball:
48
49 1) The S3cmd program, as distributed in s3cmd-X.Y.Z.tar.gz
50 on SourceForge or in master.zip on GitHub, can be run directly
51 from where you unzipped the package.
52
53 2) Or you may want to move "s3cmd" file and "S3" subdirectory
54 to some other path. Make sure that "S3" subdirectory ends up
55 in the same place where you move the "s3cmd" file.
56
57 For instance, if you decide to move s3cmd to your $HOME/bin,
58 you will have a $HOME/bin/s3cmd file and a $HOME/bin/S3 directory
59 with a number of support files.
60
61 3) The cleanest and most recommended approach is to unzip the
62 package and then just run:
63
64 python setup.py install
65
66 You will, however, need the Python "distutils" module for this
67 to work. It is often part of the core Python package (e.g. in
68 the OpenSUSE Python 2.5 package) or it can be installed using
69 your package manager, e.g. on Debian use
70
71 apt-get install python-setuptools
72
73 Again, consult your distribution documentation on how to
74 find out the actual package name and how to install it.
75
76 Note that on Linux, if you are not "root" already, you may
77 need to run:
78
79 sudo python setup.py install
80
81 instead.
82
83
84 Note to distribution package maintainers
85 ----------------------------------------
86 Define shell environment variable S3CMD_PACKAGING=yes if you
87 don't want setup.py to install manpages and doc files. You'll
88 have to install them manually in your .spec or similar package
89 build scripts.
90
91 On the other hand, if you want setup.py to install manpages
92 and docs, but to a path other than the default, define the env
93 variables $S3CMD_INSTPATH_MAN and $S3CMD_INSTPATH_DOC. Check
94 out setup.py for details and default values.
95
96
97 Where to get help
98 -----------------
99 If in doubt, or if something doesn't work as expected,
100 get back to us via the mailing list:
101
102 s3tools-general@lists.sourceforge.net
103
104 or visit the S3cmd / S3tools homepage at:
105
106 http://s3tools.org
0 Installation of s3cmd package
1 =============================
2
3 Copyright:
4 TGRMN Software and contributors
5
6 S3tools / S3cmd project homepage:
7 http://s3tools.org
8
9 !!!
10 !!! Please consult README file for setup, usage and examples!
11 !!!
12
13 Package formats
14 ---------------
15 S3cmd is distributed in two formats:
16
17 1) Prebuilt RPM file - should work on most RPM-based
18 distributions
19
20 2) Source .tar.gz package
21
22 Installation of Brew package
23 ---------------------------
24 ```
25 brew install s3cmd
26 ```
27
28 Installation of RPM package
29 ---------------------------
30 As user "root" run:
31 ```
32 rpm -ivh s3cmd-X.Y.Z.noarch.rpm
33 ```
34 where X.Y.Z is the most recent s3cmd release version.
35
36 You may be informed about missing dependencies
37 on Python or some libraries. Please consult your
38 distribution documentation on ways to solve the problem.
39
40 Installation from PyPI (the Python Package Index)
41 --------------------------------------------------
42 S3cmd can be installed from PyPI using pip (the recommended tool for installing Python packages).
43
44 1) Confirm you have pip installed. The pip home page is: https://pypi.python.org/pypi/pip. Example install on a RHEL yum-based machine:
45 ```
46 sudo yum install python-pip
47 ```
48 2) Install with pip
49 ```
50 sudo pip install s3cmd
51 ```
52
53 Installation from zip file
54 --------------------------
55 There are three options to run s3cmd from the source tarball:
56
57 1) The S3cmd program, as distributed in s3cmd-X.Y.Z.tar.gz
58 on SourceForge or in master.zip on GitHub, can be run directly
59 from where you unzipped the package.
60
61 2) Or you may want to move "s3cmd" file and "S3" subdirectory
62 to some other path. Make sure that "S3" subdirectory ends up
63 in the same place where you move the "s3cmd" file.
64
65 For instance, if you decide to move s3cmd to your $HOME/bin,
66 you will have a $HOME/bin/s3cmd file and a $HOME/bin/S3 directory
67 with a number of support files.
68
69 3) The cleanest and most recommended approach is to unzip the
70 package and then just run:
71
72 `python setup.py install`
73
74 You will, however, need the Python "distutils" module for this
75 to work. It is often part of the core Python package (e.g. in
76 the OpenSUSE Python 2.5 package) or it can be installed using
77 your package manager, e.g. on Debian use
78
79 `apt-get install python-setuptools`
80
81 Again, consult your distribution documentation on how to
82 find out the actual package name and how to install it.
83
84 Note that on Linux, if you are not "root" already, you may
85 need to run:
86
87 `sudo python setup.py install`
88
89 instead.
90
91
92 Note to distribution package maintainers
93 ----------------------------------------
94 Define shell environment variable S3CMD_PACKAGING=yes if you
95 don't want setup.py to install manpages and doc files. You'll
96 have to install them manually in your .spec or similar package
97 build scripts.
98
99 On the other hand, if you want setup.py to install manpages
100 and docs, but to a path other than the default, define the env
101 variables $S3CMD_INSTPATH_MAN and $S3CMD_INSTPATH_DOC. Check
102 out setup.py for details and default values.
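For reference, a hedged sketch of how a setup script might consume these variables. The env var names come from this file; the defaults and the data_files layout below are assumptions (check setup.py for the real values):

```python
import os

# Assumed sketch only, not the actual setup.py logic.
if os.environ.get("S3CMD_PACKAGING"):
    # Packager installs manpages/docs itself (.spec or similar build script).
    data_files = []
else:
    # Defaults here are assumptions; setup.py holds the real ones.
    man_prefix = os.environ.get("S3CMD_INSTPATH_MAN", "share/man")
    doc_prefix = os.environ.get("S3CMD_INSTPATH_DOC", "share/doc/packages")
    data_files = [
        (man_prefix + "/man1", ["s3cmd.1"]),
        (doc_prefix + "/s3cmd", ["README.md", "INSTALL.md", "LICENSE", "NEWS"]),
    ]
```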
103
104
105 Where to get help
106 -----------------
107 If in doubt, or if something doesn't work as expected,
108 get back to us via the mailing list:
109 ```
110 s3tools-general@lists.sourceforge.net
111 ```
112
113 or visit the S3cmd / S3tools homepage at: [http://s3tools.org](http://s3tools.org)
0 include INSTALL README.md LICENSE NEWS
0 include INSTALL.md README.md LICENSE NEWS
11 include s3cmd.1
0 s3cmd-2.1.0 - 2020-04-07
1 ===============
2 * Changed size reporting to use K instead of k, as it is a multiple of 1024 (#956)
3 * Added "public_url_use_https" config to generate public url using https (#551, #666) (Jukka Nousiainen)
4 * Added option to make connection pooling configurable and improvements (Arto Jantunen)
5 * Added support for path-style bucket access to signurl (Zac Medico)
6 * Added docker configuration and help to run test cases with multiple Python versions (Doug Crozier)
7 * Relaxed limitation on special chars for --add-header key names (#1054)
8 * Fixed all regions that were automatically converted to lower case (Harshavardhana)
9 * Fixed size and alignment of DU and LS output reporting (#956)
10 * Fixes for SignatureDoesNotMatch error when host port 80 or 443 is specified, due to stupid servers (#1059)
11 * Fixed the useless retries of requests that fail because of ssl cert checks
12 * Fixed a possible crash when a file disappears during cache generation (#377)
13 * Fixed unicode issues with IAM (#987)
14 * Fixed unicode errors with bucket Policy/CORS requests (#847) (Alex Offshore)
15 * Fixed unicode issues when loading aws_credential_file (#989)
16 * Fixed an issue with the tenant feature of CephRGW. Url encode bucket_name for path-style requests (#1080)
17 * Fixed signature v2 always used when bucket_name had special chars (#1081)
18 * Allow to use signature v4 only, even for commands without buckets specified (#1082)
19 * Fixed small open file descriptor leaks.
20 * Py3: Fixed hash-bang in headers to not force using python2 when setup/s3cmd/run-test scripts are executed directly.
21 * Py3: Fixed unicode issues with Cloudfront (#1006)
22 * Py3: Fixed http.client.RemoteDisconnected errors (#1014) (Ryan Huddleston)
23 * Py3: Fixed 'dictionary changed size during iteration' error when using a cache-file (#945) (Doug Crozier)
24 * Py3: Fixed the display of file sizes (Vlad Presnyak)
25 * Py3: Python 3.8 compatibility fixes (Konstantin Shalygin)
26 * Py2: Fixed unicode errors sometimes crashing remote2remote sync (#847)
27 * Added s3cmd.egg-info to .gitignore (Philip Dubé)
28 * Improved run-test script to not use hard-coded bucket names (#1066) (Doug Crozier)
29 * Renamed INSTALL to INSTALL.md and improvements (Nitro, Prabhakar Gupta)
30 * Improved the restore command help (Hrchu)
31 * Updated the storage-class command help with the recent aws s3 classes (#1020)
32 * Fixed typo in the --continue-put help message (Pengyu Chen)
33 * Fixed typo (#1062) (Tim Gates)
34 * Improvements for setup and build configurations
35 * Many other bug fixes
36
37
038 s3cmd-2.0.2 - 2018-07-15
139 ===============
240 * Fixed unexpected timeouts encountered during requests or transfers due to AWS strange connection short timeouts (#941)
1149 * Fixed setting full_control on objects with public read access (Matthew Vernon)
1250 * Fixed a bug when only one path is supplied with Cloudfront. (Mikael Svensson)
1351 * Fixed signature errors with 'modify' requests (Radek Simko)
14 * Fixes #936 - Fix setacl command exception (Robert Moucha)
15 * Fixes error reporting if deleting a source object failed after a move (#929)
52 * Fixed #936 - Fix setacl command exception (Robert Moucha)
53 * Fixed error reporting if deleting a source object failed after a move (#929)
1654 * Many other bug fixes (#525, #933, #940, #947, #957, #958, #960, #967)
1755
1856 Important info: since March 1, 2018, AWS S3 no longer allows uppercase letters or underscores in bucket names
0 Metadata-Version: 1.1
0 Metadata-Version: 1.2
11 Name: s3cmd
2 Version: 2.0.2
2 Version: 2.1.0
33 Summary: Command line tool for managing Amazon S3 and CloudFront services
44 Home-page: http://s3tools.org
5 Author: github.com/mdomsch, github.com/matteobar, github.com/fviard
6 Author-email: s3tools-bugs@lists.sourceforge.net
5 Author: Michal Ludvig
6 Author-email: michal@logix.cz
7 Maintainer: github.com/fviard, github.com/matteobar
8 Maintainer-email: s3tools-bugs@lists.sourceforge.net
79 License: GNU GPL v2+
8 Description-Content-Type: UNKNOWN
910 Description:
1011
1112 S3cmd lets you copy files from/to Amazon S3
1718
1819 Authors:
1920 --------
21 Florent Viard <florent@sodria.com>
2022 Michal Ludvig <michal@logix.cz>
23 Matt Domsch (github.com/mdomsch)
2124
2225 Platform: UNKNOWN
2326 Classifier: Development Status :: 5 - Production/Stable
4043 Classifier: Programming Language :: Python :: 3.4
4144 Classifier: Programming Language :: Python :: 3.5
4245 Classifier: Programming Language :: Python :: 3.6
46 Classifier: Programming Language :: Python :: 3.7
47 Classifier: Programming Language :: Python :: 3.8
4348 Classifier: Topic :: System :: Archiving
4449 Classifier: Topic :: Utilities
1212 * General questions and discussion: s3tools-general@lists.sourceforge.net
1313 * Bug reports: s3tools-bugs@lists.sourceforge.net
1414
15 S3cmd requires Python 2.6 or newer.
15 S3cmd requires Python 2.6 or newer.
1616 Python 3+ is also supported starting with S3cmd version 2.
1717
1818
334334
335335 ### License
336336
337 Copyright (C) 2007-2017 TGRMN Software - http://www.tgrmn.com - and contributors
337 Copyright (C) 2007-2020 TGRMN Software - http://www.tgrmn.com - and contributors
338338
339339 This program is free software; you can redistribute it and/or modify
340340 it under the terms of the GNU General Public License as published by
1515 except ImportError:
1616 import elementtree.ElementTree as ET
1717
18 PY3 = (sys.version_info >= (3,0))
18 PY3 = (sys.version_info >= (3, 0))
1919
2020 class Grantee(object):
2121 ALL_USERS_URI = "http://acs.amazonaws.com/groups/global/AllUsers"
2121 from .S3 import S3
2222 from .Config import Config
2323 from .Exceptions import *
24 from .Utils import getTreeFromXml, appendXmlTextNode, getDictFromTree, dateS3toPython, getBucketFromHostname, getHostnameFromBucket, deunicodise, urlencode_string, convertHeaderTupleListToDict
24 from .Utils import (getTreeFromXml, appendXmlTextNode, getDictFromTree,
25 dateS3toPython, getBucketFromHostname,
26 getHostnameFromBucket, deunicodise, urlencode_string,
27 convertHeaderTupleListToDict, encode_to_s3, decode_from_s3)
2528 from .Crypto import sign_string_v2
2629 from .S3Uri import S3Uri, S3UriS3
2730 from .ConnMan import ConnMan
2831 from .SortedDict import SortedDict
32
33 PY3 = (sys.version_info >= (3, 0))
2934
3035 cloudfront_api_version = "2010-11-01"
3136 cloudfront_resource = "/%(api_ver)s/distribution" % { 'api_ver' : cloudfront_api_version }
175180 else:
176181 self.info['Logging'] = None
177182
178 def __str__(self):
183 def get_printable_tree(self):
179184 tree = ET.Element("DistributionConfig")
180185 tree.attrib['xmlns'] = DistributionConfig.xmlns
181186
196201 appendXmlTextNode("Bucket", getHostnameFromBucket(self.info['Logging'].bucket()), logging_el)
197202 appendXmlTextNode("Prefix", self.info['Logging'].object(), logging_el)
198203 tree.append(logging_el)
199 return ET.tostring(tree)
204 return tree
205
206 def __unicode__(self):
207 return decode_from_s3(ET.tostring(self.get_printable_tree()))
208
209 def __str__(self):
210 if PY3:
211 # Return unicode
212 return ET.tostring(self.get_printable_tree(), encoding="unicode")
213 else:
214 # Return bytes
215 return ET.tostring(self.get_printable_tree())
200216
201217 class Invalidation(object):
202218 ## Example:
284300 def get_reference(self):
285301 return self.reference
286302
287 def __str__(self):
303 def get_printable_tree(self):
288304 tree = ET.Element("InvalidationBatch")
289
290305 for path in self.paths:
291306 if len(path) < 1 or path[0] != "/":
292307 path = "/" + path
293308 appendXmlTextNode("Path", urlencode_string(path), tree)
294309 appendXmlTextNode("CallerReference", self.reference, tree)
295 return ET.tostring(tree)
310 return tree
311
312 def __unicode__(self):
313 return decode_from_s3(ET.tostring(self.get_printable_tree()))
314
315 def __str__(self):
316 if PY3:
317 # Return unicode
318 return ET.tostring(self.get_printable_tree(), encoding="unicode")
319 else:
320 # Return bytes
321 return ET.tostring(self.get_printable_tree())
296322
297323 class CloudFront(object):
298324 operations = {
563589
564590 def sign_request(self, headers):
565591 string_to_sign = headers['x-amz-date']
566 signature = sign_string_v2(string_to_sign)
592 signature = decode_from_s3(sign_string_v2(encode_to_s3(string_to_sign)))
567593 debug(u"CloudFront.sign_request('%s') = %s" % (string_to_sign, signature))
568594 return signature
569595
602628 continue
603629
604630 if CloudFront.dist_list.get(distListIndex, None) is None:
605 CloudFront.dist_list[distListIndex] = set()
631 CloudFront.dist_list[distListIndex] = set()
606632
607633 CloudFront.dist_list[distListIndex].add(d.uri())
608634
2323 import http.client as httplib
2424 import locale
2525
26 try:
27 from configparser import NoOptionError, NoSectionError, MissingSectionHeaderError, ConfigParser as PyConfigParser
26 try:
27 from configparser import (NoOptionError, NoSectionError,
28 MissingSectionHeaderError, ParsingError,
29 ConfigParser as PyConfigParser)
2830 except ImportError:
2931 # Python2 fallback code
30 from ConfigParser import NoOptionError, NoSectionError, MissingSectionHeaderError, ConfigParser as PyConfigParser
32 from ConfigParser import (NoOptionError, NoSectionError,
33 MissingSectionHeaderError, ParsingError,
34 ConfigParser as PyConfigParser)
3135
3236 try:
3337 unicode
204208 # Maximum sleep duration for throtte / limitrate.
205209 # s3 will timeout if a request/transfer is stuck for more than a short time
206210 throttle_max = 100
211 public_url_use_https = False
212 connection_pooling = True
207213
208214 ## Creating a singleton
209215 def __new__(self, configfile = None, access_key=None, secret_key=None, access_token=None):
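As context for the two options added above, a minimal usage sketch (assumed, not from the diff: that Config() is constructible with defaults here; update_option is the setter used elsewhere in this file):

```python
from S3.Config import Config

cfg = Config()  # Config is a singleton, see __new__ above
# Both options were added in this change set, with the defaults shown above.
cfg.update_option('public_url_use_https', True)   # public URLs become https://
cfg.update_option('connection_pooling', False)    # like --no-connection-pooling
```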
259265 resp = conn.getresponse()
260266 files = resp.read()
261267 if resp.status == 200 and len(files)>1:
262 conn.request('GET', "/latest/meta-data/iam/security-credentials/%s"%files.decode('UTF-8'))
268 conn.request('GET', "/latest/meta-data/iam/security-credentials/%s" % files.decode('utf-8'))
263269 resp=conn.getresponse()
264270 if resp.status == 200:
265 creds=json.load(resp)
266 Config().update_option('access_key', creds['AccessKeyId'].encode('ascii'))
267 Config().update_option('secret_key', creds['SecretAccessKey'].encode('ascii'))
268 Config().update_option('access_token', creds['Token'].encode('ascii'))
271 resp_content = config_unicodise(resp.read())
272 creds=json.loads(resp_content)
273 Config().update_option('access_key', config_unicodise(creds['AccessKeyId']))
274 Config().update_option('secret_key', config_unicodise(creds['SecretAccessKey']))
275 Config().update_option('access_token', config_unicodise(creds['Token']))
269276 else:
270277 raise IOError
271278 else:
282289
283290 def aws_credential_file(self):
284291 try:
285 aws_credential_file = os.path.expanduser('~/.aws/credentials')
286 if 'AWS_CREDENTIAL_FILE' in os.environ and os.path.isfile(os.environ['AWS_CREDENTIAL_FILE']):
287 aws_credential_file = config_unicodise(os.environ['AWS_CREDENTIAL_FILE'])
292 aws_credential_file = os.path.expanduser('~/.aws/credentials')
293 credential_file_from_env = os.environ.get('AWS_CREDENTIAL_FILE')
294 if credential_file_from_env and \
295 os.path.isfile(credential_file_from_env):
296 aws_credential_file = config_unicodise(credential_file_from_env)
297 elif not os.path.isfile(aws_credential_file):
298 return
299
300 warning("Errno %d accessing credentials file %s" % (e.errno, aws_credential_file))
288301
289302 config = PyConfigParser()
290303
291304 debug("Reading AWS credentials from %s" % (aws_credential_file))
305 with io.open(aws_credential_file, "r",
306 encoding=getattr(self, 'encoding', 'UTF-8')) as fp:
307 config_string = fp.read()
292308 try:
293 config.read(aws_credential_file)
294 except MissingSectionHeaderError:
295 # if header is missing, this could be deprecated credentials file format
296 # as described here: https://blog.csanchez.org/2011/05/
297 # then do the hacky-hack and add default header
298 # to be able to read the file with PyConfigParser()
299 config_string = None
300 with open(aws_credential_file, 'r') as f:
301 config_string = '[default]\n' + f.read()
302 config.read_string(config_string.decode('utf-8'))
303
309 try:
310 # readfp is replaced by read_file in python3,
311 # but so far readfp it is still available.
312 config.readfp(io.StringIO(config_string))
313 except MissingSectionHeaderError:
314 # if header is missing, this could be deprecated credentials file format
315 # as described here: https://blog.csanchez.org/2011/05/
316 # then do the hacky-hack and add default header
317 # to be able to read the file with PyConfigParser()
318 config_string = u'[default]\n' + config_string
319 config.readfp(io.StringIO(config_string))
320 except ParsingError as exc:
321 raise ValueError(
322 "Error reading aws_credential_file "
323 "(%s): %s" % (aws_credential_file, str(exc)))
304324
305325 profile = config_unicodise(os.environ.get('AWS_PROFILE', "default"))
306326 debug("Using AWS profile '%s'" % (profile))
307327
308328 # get_key - helper function to read the aws profile credentials
309 # including the legacy ones as described here: https://blog.csanchez.org/2011/05/
329 # including the legacy ones as described here: https://blog.csanchez.org/2011/05/
310330 def get_key(profile, key, legacy_key, print_warning=True):
311331 result = None
312332
321341 profile = "default"
322342 result = config.get(profile, key)
323343 warning(
324 "Legacy configuratin key '%s' used, " % (key) +
344 "Legacy configuratin key '%s' used, " % (key) +
325345 "please use the standardized config format as described here: " +
326346 "https://aws.amazon.com/blogs/security/a-new-and-standardized-way-to-manage-credentials-in-the-aws-sdks/"
327347 )
329349 pass
330350
331351 if result:
332 debug("Found the configuration option '%s' for the AWS Profile '%s' in the credentials file %s" % (key, profile, aws_credential_file))
352 debug("Found the configuration option '%s' for the AWS Profile '%s' in the credentials file %s" % (key, profile, aws_credential_file))
333353 return result
334354
335 profile_access_key = get_key(profile, "aws_access_key_id", "AWSAccessKeyId")
355 profile_access_key = get_key(profile, "aws_access_key_id", "AWSAccessKeyId")
336356 if profile_access_key:
337357 Config().update_option('access_key', config_unicodise(profile_access_key))
338358
339 profile_secret_key = get_key(profile, "aws_secret_access_key", "AWSSecretKey")
359 profile_secret_key = get_key(profile, "aws_secret_access_key", "AWSSecretKey")
340360 if profile_secret_key:
341361 Config().update_option('secret_key', config_unicodise(profile_secret_key))
342362
343 profile_access_token = get_key(profile, "aws_session_token", None, False)
363 profile_access_token = get_key(profile, "aws_session_token", None, False)
344364 if profile_access_token:
345365 Config().update_option('access_token', config_unicodise(profile_access_token))
346366
347367 except IOError as e:
348 warning("%d accessing credentials file %s" % (e.errno, aws_credential_file))
368 warning("Errno %d accessing credentials file %s" % (e.errno, aws_credential_file))
349369 except NoSectionError as e:
350370 warning("Couldn't find AWS Profile '%s' in the credentials file '%s'" % (profile, aws_credential_file))
351371
379399 if cp.get('add_headers'):
380400 for option in cp.get('add_headers').split(","):
381401 (key, value) = option.split(':')
382 self.extra_headers[key.replace('_', '-').strip()] = value.strip()
402 self.extra_headers[key.strip()] = value.strip()
383403
384404 self._parsed_files.append(configfile)
385405
88 from __future__ import absolute_import
99
1010 import sys
11 if sys.version_info >= (3,0):
11 if sys.version_info >= (3, 0):
1212 from .Custom_httplib3x import httplib
1313 else:
1414 from .Custom_httplib27 import httplib
2222 from urllib.parse import urlparse
2323
2424 from .Config import Config
25 from .Exceptions import ParameterError
25 from .Exceptions import ParameterError, S3SSLCertificateError
2626 from .Utils import getBucketFromHostname
2727
28 if not 'CertificateError' in ssl.__dict__:
29 class CertificateError(Exception):
30 pass
31 else:
32 CertificateError = ssl.CertificateError
33
34 __all__ = [ "ConnMan" ]
28
29
30 __all__ = ["ConnMan"]
3531
3632
3733 class http_connection(object):
127123 cert = self.c.sock.getpeercert()
128124 try:
129125 ssl.match_hostname(cert, self.hostname)
130 except AttributeError: # old ssl module doesn't have this function
131 return
132 except ValueError: # empty SSL cert means underlying SSL library didn't validate it, we don't either.
133 return
134 except CertificateError as e:
126 except AttributeError:
127 # old ssl module doesn't have this function
128 return
129 except ValueError:
130 # empty SSL cert means underlying SSL library didn't validate it, we don't either.
131 return
132 except S3SSLCertificateError as e:
135133 if not self.forgive_wildcard_cert(cert, self.hostname):
136134 raise e
137135
258256 @staticmethod
259257 def put(conn):
260258 if conn.id.startswith("proxy://"):
261 conn.c.close()
259 ConnMan.close(conn)
262260 debug("ConnMan.put(): closing proxy connection (keep-alive not yet supported)")
263261 return
264262
265263 if conn.counter >= ConnMan.conn_max_counter:
266 conn.c.close()
264 ConnMan.close(conn)
267265 debug("ConnMan.put(): closing over-used connection")
266 return
267
268 cfg = Config()
269 if not cfg.connection_pooling:
270 ConnMan.close(conn)
271 debug("ConnMan.put(): closing connection (connection pooling disabled)")
268272 return
269273
270274 ConnMan.conn_pool_sem.acquire()
271275 ConnMan.conn_pool[conn.id].append(conn)
272276 ConnMan.conn_pool_sem.release()
273277 debug("ConnMan.put(): connection put back to pool (%s#%d)" % (conn.id, conn.counter))
278
279 @staticmethod
280 def close(conn):
281 if conn:
282 conn.c.close()
1313
1414 from . import Config
1515 from logging import debug
16 from .Utils import encode_to_s3, time_to_epoch, deunicodise, decode_from_s3
16 from .Utils import encode_to_s3, time_to_epoch, deunicodise, decode_from_s3, check_bucket_name_dns_support
1717 from .SortedDict import SortedDict
1818
1919 import datetime
6262
6363 Useful for REST authentication. See http://s3.amazonaws.com/doc/s3-developer-guide/RESTAuthentication.html
6464 string_to_sign should be utf-8 "bytes".
65 and the returned signature will be utf-8 encoded "bytes".
6566 """
6667 secret_key = Config.Config().secret_key
6768 signature = base64.encodestring(hmac.new(encode_to_s3(secret_key), string_to_sign, sha1).digest()).strip()
157158 debug("Signing plaintext: %r", signtext)
158159 parms['sig'] = s3_quote(sign_string_v2(encode_to_s3(signtext)), unicode_output=True)
159160 debug("Urlencoded signature: %s", parms['sig'])
160 url = "%(proto)s://%(bucket)s.%(host_base)s/%(object)s?AWSAccessKeyId=%(access_key)s&Expires=%(expiry)d&Signature=%(sig)s" % parms
161 if check_bucket_name_dns_support(Config.Config().host_bucket, parms['bucket']):
162 url = "%(proto)s://%(bucket)s.%(host_base)s/%(object)s"
163 else:
164 url = "%(proto)s://%(host_base)s/%(bucket)s/%(object)s"
165 url += "?AWSAccessKeyId=%(access_key)s&Expires=%(expiry)d&Signature=%(sig)s"
166 url = url % parms
161167 if content_disposition:
162168 url += "&response-content-disposition=" + s3_quote(content_disposition, unicode_output=True)
163169 if content_type:
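The branch above is what makes signurl honor path-style bucket access. An illustrative sketch of the helper it relies on; the template and bucket names are invented, and the boolean semantics are inferred from how the function is called here:

```python
from S3.Utils import check_bucket_name_dns_support

template = "%(bucket)s.s3.amazonaws.com"  # a typical host_bucket value
check_bucket_name_dns_support(template, "my-bucket")
# True  -> virtual-host style: https://my-bucket.s3.amazonaws.com/<object>?...
check_bucket_name_dns_support(template, "My_Bucket")
# False -> path-style fallback: https://s3.amazonaws.com/My_Bucket/<object>?...
```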
146146 self.putheader(encode_to_s3(hdr), encode_to_s3(value))
147147
148148 # If an Expect: 100-continue was sent, we need to check for a 417
149 # Expectation Failed to avoid unecessarily sending the body
149 # Expectation Failed to avoid unnecessarily sending the body
150150 # See RFC 2616 8.2.3
151151 if not expect_continue:
152152 self.endheaders(body)
192192 body = _encode(body, 'body')
193193
194194 # If an Expect: 100-continue was sent, we need to check for a 417
195 # Expectation Failed to avoid unecessarily sending the body
195 # Expectation Failed to avoid unnecessarily sending the body
196196 # See RFC 2616 8.2.3
197197 if not expect_continue:
198198 self.endheaders(body, encode_chunked=encode_chunked)
1212 import S3.Utils
1313 from . import ExitCodes
1414
15 if sys.version_info >= (3,0):
15 if sys.version_info >= (3, 0):
1616 PY3 = True
1717 # In python 3, unicode -> str, and str -> bytes
1818 unicode = str
1919 else:
2020 PY3 = False
21
22 ## External exceptions
23
24 from ssl import SSLError as S3SSLError
25
26 try:
27 from ssl import CertificateError as S3SSLCertificateError
28 except ImportError:
29 class S3SSLCertificateError(Exception):
30 pass
2131
2232
2333 try:
2535 except ImportError:
2636 # ParseError was only added in python2.7, before ET was raising ExpatError
2737 from xml.parsers.expat import ExpatError as XmlParseError
38
39
40 ## s3cmd exceptions
2841
2942 class S3Exception(Exception):
3043 def __init__(self, message = ""):
2323 import re
2424 import errno
2525 import io
26
27 PY3 = (sys.version_info >= (3, 0))
2628
2729 __all__ = ["fetch_local_list", "fetch_remote_list", "compare_filelists"]
2830
201203 if counter % 1000 == 0:
202204 info(u"[%d/%d]" % (counter, len_loc_list))
203205
204 if relative_file == '-': continue
206 if relative_file == '-':
207 continue
205208
206209 full_name = loc_list[relative_file]['full_name']
207210 try:
305308 # not. Leave it to a non-files_from run to purge.
306309 if cfg.cache_file and len(cfg.files_from) == 0:
307310 cache.mark_all_for_purge()
308 for i in local_list.keys():
309 cache.unmark_for_purge(local_list[i]['dev'], local_list[i]['inode'], local_list[i]['mtime'], local_list[i]['size'])
311 if PY3:
312 local_list_val_iter = local_list.values()
313 else:
314 local_list_val_iter = local_list.itervalues()
315 for f_info in local_list_val_iter:
316 inode = f_info.get('inode', 0)
317 if not inode:
318 continue
319 cache.unmark_for_purge(f_info['dev'], inode, f_info['mtime'],
320 f_info['size'])
310321 cache.purge()
311322 cache.save(cfg.cache_file)
312323
3030 return d['md5']
3131
3232 def mark_all_for_purge(self):
33 for d in self.inodes.keys():
34 for i in self.inodes[d].keys():
35 for c in self.inodes[d][i].keys():
33 for d in tuple(self.inodes):
34 for i in tuple(self.inodes[d]):
35 for c in tuple(self.inodes[d][i]):
3636 self.inodes[d][i][c]['purge'] = True
3737
3838 def unmark_for_purge(self, dev, inode, mtime, size):
4444 del self.inodes[dev][inode][mtime]['purge']
4545
4646 def purge(self):
47 for d in self.inodes.keys():
48 for i in self.inodes[d].keys():
49 for m in self.inodes[d][i].keys():
47 for d in tuple(self.inodes):
48 for i in tuple(self.inodes[d]):
49 for m in tuple(self.inodes[d][i]):
5050 if 'purge' in self.inodes[d][i][m]:
5151 del self.inodes[d][i]
5252 break
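The tuple(...) snapshots above are the fix for the Py3 "dictionary changed size during iteration" crash (#945). A standalone illustration of the underlying pitfall:

```python
d = {'dev1': {}, 'dev2': {}}
# On Python 3, deleting entries while iterating the live dict view fails:
#   for k in d: del d[k]  ->  RuntimeError: dictionary changed size during iteration
for k in tuple(d):         # snapshot the keys first, as the fix above does
    del d[k]
```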
66 ## Copyright: TGRMN Software and contributors
77
88 package = "s3cmd"
9 version = "2.0.2"
9 version = "2.1.0"
1010 url = "http://s3tools.org"
1111 license = "GNU GPL v2+"
1212 short_description = "Command line tool for managing Amazon S3 and CloudFront services"
4343 from .S3Uri import S3Uri
4444 from .ConnMan import ConnMan
4545 from .Crypto import (sign_request_v2, sign_request_v4, checksum_sha256_file,
46 checksum_sha256_buffer, s3_quote, format_param_str)
46 checksum_sha256_buffer, s3_quote, format_param_str)
4747
4848 try:
4949 from ctypes import ArgumentError
155155 def use_signature_v2(self):
156156 if self.s3.endpoint_requires_signature_v4:
157157 return False
158 # in case of bad DNS name due to bucket name v2 will be used
159 # this way we can still use capital letters in bucket names for the older regions
160
161 if self.resource['bucket'] is None or not check_bucket_name_dns_conformity(self.resource['bucket']) or self.s3.config.signature_v2 or self.s3.fallback_to_signature_v2:
158
159 if self.s3.config.signature_v2 or self.s3.fallback_to_signature_v2:
162160 return True
161
163162 return False
164163
165164 def sign(self):
271270 host = getHostnameFromBucket(bucket)
272271 else:
273272 host = self.config.host_base.lower()
273 # The following hack is needed because it looks like that some servers
274 # are not respecting the HTTP spec and so will fail the signature check
275 # if the port is specified in the "Host" header for default ports.
276 # STUPIDIEST THING EVER FOR A SERVER...
277 # See: https://github.com/minio/minio/issues/9169
278 if self.config.use_https:
279 if host.endswith(':443'):
280 host = host[:-4]
281 elif host.endswith(':80'):
282 host = host[:-3]
283
274284 debug('get_hostname(%s): %s' % (bucket, host))
275285 return host
276286
285295 or (bucket_name not in S3Request.redir_map
286296 and not check_bucket_name_dns_support(self.config.host_bucket, bucket_name))
287297 ):
288 uri = "/%s%s" % (bucket_name, resource['uri'])
298 uri = "/%s%s" % (s3_quote(bucket_name, quote_backslashes=False,
299 unicode_output=True),
300 resource['uri'])
289301 else:
290302 uri = resource['uri']
291303 if base_path:
381393 bucket_location = bucket_location.strip()
382394 if bucket_location.upper() == "EU":
383395 bucket_location = bucket_location.upper()
384 else:
385 bucket_location = bucket_location.lower()
386396 body = "<CreateBucketConfiguration><LocationConstraint>"
387397 body += bucket_location
388398 body += "</LocationConstraint></CreateBucketConfiguration>"
967977 request = self.create_request("BUCKET_LIST", bucket = uri.bucket(),
968978 uri_params = {'policy': None})
969979 response = self.send_request(request)
970 return response['data']
980 return decode_from_s3(response['data'])
971981
972982 def set_policy(self, uri, policy):
973983 headers = SortedDict(ignore_case = True)
9901000 request = self.create_request("BUCKET_LIST", bucket = uri.bucket(),
9911001 uri_params = {'cors': None})
9921002 response = self.send_request(request)
993 return response['data']
1003 return decode_from_s3(response['data'])
9941004
9951005 def set_cors(self, uri, cors):
9961006 headers = SortedDict(ignore_case = True)
12261236
12271237 raise S3Error(response)
12281238
1229 def send_request(self, request, retries = _max_retries):
1230 if request.resource.get('bucket') \
1231 and not request.use_signature_v2() \
1232 and S3Request.region_map.get(request.resource['bucket'],
1233 Config().bucket_location) == "US":
1234 debug("===== Send_request inner request to determine the bucket region =====")
1239 def update_region_inner_request(self, request):
1240 """Get and update region for the request if needed.
1241
1242 Signature v4 needs the region of the bucket or the request will fail
1243 with the indication of the correct region.
1244 We are trying to avoid this failure by pre-emptively getting the
1245 correct region to use, if not provided by the user.
1246 """
1247 if request.resource.get('bucket') and not request.use_signature_v2() \
1248 and S3Request.region_map.get(
1249 request.resource['bucket'], Config().bucket_location
1250 ) == "US":
1251 debug("===== SEND Inner request to determine the bucket region "
1252 "=====")
12351253 try:
12361254 s3_uri = S3Uri(u's3://' + request.resource['bucket'])
12371255 # "force_us_default" should prevent infinite recursivity because
12391257 region = self.get_bucket_location(s3_uri, force_us_default=True)
12401258 if region is not None:
12411259 S3Request.region_map[request.resource['bucket']] = region
1242 debug("===== END send_request inner request to determine the bucket region (%r) =====",
1243 region)
1260 debug("===== SUCCESS Inner request to determine the bucket "
1261 "region (%r) =====", region)
12441262 except Exception as exc:
12451263 # Ignore errors, it is just an optimisation, so nothing critical
1246 debug("Error getlocation inner request: %s", exc)
1264 debug("getlocation inner request failure reason: %s", exc)
1265 debug("===== FAILED Inner request to determine the bucket "
1266 "region =====")
1267
1268 def send_request(self, request, retries = _max_retries):
1269 self.update_region_inner_request(request)
12471270
12481271 request.body = encode_to_s3(request.body)
12491272 headers = request.headers
12691292 attrs = parse_attrs_header(response["headers"]["x-amz-meta-s3cmd-attrs"])
12701293 response["s3cmd-attrs"] = attrs
12711294 ConnMan.put(conn)
1295 except (S3SSLError, S3SSLCertificateError):
1296 # In case of failure to validate the certificate for a ssl
1297 # connection,no need to retry, abort immediately
1298 raise
12721299 except (IOError, Exception) as e:
12731300 debug("Response:\n" + pprint.pformat(response))
12741301 if ((hasattr(e, 'errno') and e.errno
12801307 # When the connection is broken, BadStatusLine is raised with py2
12811308 # and RemoteDisconnected is raised by py3 with a trap:
12821309 # RemoteDisconnected has an errno field with a None value.
1283 if conn:
1284 # close the connection and re-establish
1285 conn.counter = ConnMan.conn_max_counter
1286 ConnMan.put(conn)
1310
1311 # close the connection and re-establish
1312 ConnMan.close(conn)
12871313 if retries:
12881314 warning("Retrying failed request: %s (%s)" % (resource['uri'], e))
12891315 warning("Waiting %d sec..." % self._fail_wait(retries))
13441370 def send_file(self, request, stream, labels, buffer = '', throttle = 0,
13451371 retries = _max_retries, offset = 0, chunk_size = -1,
13461372 use_expect_continue = None):
1347 if request.resource.get('bucket') \
1348 and not request.use_signature_v2() \
1349 and S3Request.region_map.get(request.resource['bucket'],
1350 Config().bucket_location) == "US":
1351 debug("===== Send_file inner request to determine the bucket region =====")
1352 try:
1353 s3_uri = S3Uri(u's3://' + request.resource['bucket'])
1354 # "force_us_default" should prevent infinite recursivity because
1355 # it will set the region_map dict.
1356 region = self.get_bucket_location(s3_uri, force_us_default=True)
1357 if region is not None:
1358 S3Request.region_map[request.resource['bucket']] = region
1359 debug("===== END Send_file inner request to determine the bucket region (%r) =====",
1360 region)
1361 except Exception as exc:
1362 # Ignore errors, it is just an optimisation, so nothing critical
1363 debug("Error getlocation inner request: %s", exc)
1373 self.update_region_inner_request(request)
13641374
13651375 if use_expect_continue is None:
13661376 use_expect_continue = self.config.use_http_expect
16131623 return response
16141624
16151625 def recv_file(self, request, stream, labels, start_position = 0, retries = _max_retries):
1616 if request.resource.get('bucket') \
1617 and not request.use_signature_v2() \
1618 and S3Request.region_map.get(request.resource['bucket'],
1619 Config().bucket_location) == "US":
1620 debug("===== Recv_file inner request to determine the bucket region =====")
1621 try:
1622 s3_uri = S3Uri(u's3://' + request.resource['bucket'])
1623 # "force_us_default" should prevent infinite recursivity because
1624 # it will set the region_map dict.
1625 region = self.get_bucket_location(s3_uri, force_us_default=True)
1626 if region is not None:
1627 S3Request.region_map[request.resource['bucket']] = region
1628 debug("===== END recv_file Inner request to determine the bucket region (%r) =====",
1629 region)
1630 except Exception as exc:
1631 # Ignore errors, it is just an optimisation, so nothing critical
1632 debug("Error getlocation inner request: %s", exc)
1626 self.update_region_inner_request(request)
16331627
16341628 method_string, resource, headers = request.get_triplet()
16351629 filename = stream.stream_name
16611655 debug("Response:\n" + pprint.pformat(response))
16621656 except ParameterError as e:
16631657 raise
1664 except OSError as e:
1665 raise
16661658 except (IOError, Exception) as e:
16671659 if self.config.progress_meter:
16681660 progress.done("failed")
16711663 or "[Errno 104]" in str(e) or "[Errno 32]" in str(e)
16721664 ) and not isinstance(e, SocketTimeoutException):
16731665 raise
1674 if conn:
1675 # close the connection and re-establish
1676 conn.counter = ConnMan.conn_max_counter
1677 ConnMan.put(conn)
1666
1667 # close the connection and re-establish
1668 ConnMan.close(conn)
16781669
16791670 if retries:
16801671 warning("Retrying failed request: %s (%s)" % (resource['uri'], e))
17681759 ) and not isinstance(e, SocketTimeoutException):
17691760 raise
17701761 # close the connection and re-establish
1771 conn.counter = ConnMan.conn_max_counter
1772 ConnMan.put(conn)
1762 ConnMan.close(conn)
17731763
17741764 if retries:
17751765 warning("Retrying failed request: %s (%s)" % (resource['uri'], e))
1313 from .Utils import unicodise, deunicodise, check_bucket_name_dns_support
1414 from . import Config
1515
16 if sys.version_info >= (3,0):
17 PY3 = True
18 else:
19 PY3 = False
16
17 PY3 = (sys.version_info >= (3, 0))
18
2019
2120 class S3Uri(object):
2221 type = None
8988 return check_bucket_name_dns_support(Config.Config().host_bucket, self._bucket)
9089
9190 def public_url(self):
91 public_url_protocol = "http"
92 if Config.Config().public_url_use_https:
93 public_url_protocol = "https"
9294 if self.is_dns_compatible():
93 return "http://%s.%s/%s" % (self._bucket, Config.Config().host_base, self._object)
94 else:
95 return "http://%s/%s/%s" % (Config.Config().host_base, self._bucket, self._object)
95 return "%s://%s.%s/%s" % (public_url_protocol, self._bucket, Config.Config().host_base, self._object)
96 else:
97 return "%s://%s/%s/%s" % (public_url_protocol, Config.Config().host_base, self._bucket, self._object)
9698
9799 def host_name(self):
98100 if self.is_dns_compatible():
100100 __all__.append("stripNameSpace")
101101
102102 def getTreeFromXml(xml):
103 xml, xmlns = stripNameSpace(xml)
103 xml, xmlns = stripNameSpace(encode_to_s3(xml))
104104 try:
105105 tree = ET.fromstring(xml)
106106 if xmlns:
193193 def formatSize(size, human_readable = False, floating_point = False):
194194 size = floating_point and float(size) or int(size)
195195 if human_readable:
196 coeffs = ['k', 'M', 'G', 'T']
196 coeffs = ['K', 'M', 'G', 'T']
197197 coeff = ""
198198 while size > 2048:
199199 size /= 1024
200200 coeff = coeffs.pop(0)
201 return (size, coeff)
201 return (floating_point and float(size) or int(size), coeff)
202202 else:
203203 return (size, "")
204204 __all__.append("formatSize")
+112
-83
s3cmd
0 #!/usr/bin/env python2
0 #!/usr/bin/env python
11 # -*- coding: utf-8 -*-
22
33 ## --------------------------------------------------------------------
2222
2323 import sys
2424
25 if float("%d.%d" %(sys.version_info[0], sys.version_info[1])) < 2.6:
25 if sys.version_info < (2, 6):
2626 sys.stderr.write(u"ERROR: Python 2.6 or higher required, sorry.\n")
2727 sys.exit(EX_OSFILE)
2828
29 PY3 = (sys.version_info >= (3, 0))
30
31 import codecs
32 import errno
33 import glob
34 import io
35 import locale
2936 import logging
30 import time
3137 import os
3238 import re
33 import errno
34 import glob
39 import shutil
40 import socket
41 import subprocess
42 import tempfile
43 import time
3544 import traceback
36 import codecs
37 import locale
38 import subprocess
45
46 from copy import copy
47 from optparse import OptionParser, Option, OptionValueError, IndentedHelpFormatter
48 from logging import debug, info, warning, error
49
50
3951 try:
4052 import htmlentitydefs
4153 except:
4961 # In python 3, unicode -> str, and str -> bytes
5062 unicode = str
5163
52 import socket
53 import shutil
54 import tempfile
55
56 from copy import copy
57 from optparse import OptionParser, Option, OptionValueError, IndentedHelpFormatter
58 from logging import debug, info, warning, error
59
6064 try:
6165 from shutil import which
6266 except ImportError:
6367 # python2 fallback code
6468 from distutils.spawn import find_executable as which
6569
66 from ssl import SSLError
67 import io
6870
6971 def output(message):
7072 sys.stdout.write(message + "\n")
101103 buckets_size += size
102104 total_size, size_coeff = formatSize(buckets_size, cfg.human_readable_sizes)
103105 total_size_str = str(total_size) + size_coeff
104 output(u"".rjust(8, "-"))
105 output(u"%s Total" % (total_size_str.ljust(8)))
106 output(u"".rjust(12, "-"))
107 output(u"%s Total" % (total_size_str.ljust(12)))
106108 return size
107109
108110 def subcmd_bucket_usage(s3, uri):
130132 except KeyboardInterrupt as e:
131133 extra_info = u' [interrupted]'
132134
133 total_size, size_coeff = formatSize(bucket_size, Config().human_readable_sizes)
134 total_size_str = str(total_size) + size_coeff
135 output(u"%s %s objects %s%s" % (total_size_str.ljust(8), object_count, uri, extra_info))
135 total_size_str = u"%d%s" % formatSize(bucket_size,
136 Config().human_readable_sizes)
137 if Config().human_readable_sizes:
138 total_size_str = total_size_str.rjust(5)
139 else:
140 total_size_str = total_size_str.rjust(12)
141 output(u"%s %7s objects %s%s" % (total_size_str, object_count, uri,
142 extra_info))
136143 return bucket_size
137144
138145 def cmd_ls(args):
183190 error(S3.codes[e.info["Code"]] % bucket)
184191 raise
185192
193 # md5 are 32 char long, but for multipart there could be a suffix
194 if Config().human_readable_sizes:
195 # %(size)5s%(coeff)1s
196 format_size = u"%5d%1s"
197 dir_str = u"DIR".rjust(6)
198 else:
199 format_size = u"%12d%s"
200 dir_str = u"DIR".rjust(12)
186201 if cfg.long_listing:
187 format_string = u"%(timestamp)16s %(size)9s%(coeff)1s %(md5)32s %(storageclass)s %(uri)s"
202 format_string = u"%(timestamp)16s %(size)s %(md5)-35s %(storageclass)-11s %(uri)s"
188203 elif cfg.list_md5:
189 format_string = u"%(timestamp)16s %(size)9s%(coeff)1s %(md5)32s %(uri)s"
204 format_string = u"%(timestamp)16s %(size)s %(md5)-35s %(uri)s"
190205 else:
191 format_string = u"%(timestamp)16s %(size)9s%(coeff)1s %(uri)s"
206 format_string = u"%(timestamp)16s %(size)s %(uri)s"
192207
193208 for prefix in response['common_prefixes']:
194209 output(format_string % {
195210 "timestamp": "",
196 "size": "DIR",
197 "coeff": "",
211 "size": dir_str,
198212 "md5": "",
199213 "storageclass": "",
200214 "uri": uri.compose_uri(bucket, prefix["Prefix"])})
212226 except KeyError:
213227 pass
214228
215 size, size_coeff = formatSize(object["Size"], Config().human_readable_sizes)
229 size_and_coeff = formatSize(object["Size"],
230 Config().human_readable_sizes)
216231 output(format_string % {
217232 "timestamp": formatDateTime(object["LastModified"]),
218 "size" : str(size),
219 "coeff": size_coeff,
233 "size" : format_size % size_and_coeff,
220234 "md5" : md5,
221235 "storageclass" : storageclass,
222236 "uri": uri.compose_uri(bucket, object["Key"]),
304318 raise ParameterError("Expecting S3 URI with just the bucket name set instead of '%s'" % arg)
305319 try:
306320 response = s3.expiration_set(uri, cfg.bucket_location)
307 if response["status"] is 200:
321 if response["status"] == 200:
308322 output(u"Bucket '%s': expiration configuration is set." % (uri.uri()))
309 elif response["status"] is 204:
323 elif response["status"] == 204:
310324 output(u"Bucket '%s': expiration configuration is deleted." % (uri.uri()))
311325 except S3Error as e:
312326 if e.info["Code"] in S3.codes:
670684 response = s3.object_batch_delete_uri_strs([uri.compose_uri(bucket, item['Key']) for item in to_delete])
671685 deleted_bytes += sum(int(item["Size"]) for item in to_delete)
672686 deleted_count += len(to_delete)
673 output('\n'.join(u"delete: '%s'" % uri.compose_uri(bucket, p['Key']) for p in to_delete))
687 output(u'\n'.join(u"delete: '%s'" % uri.compose_uri(bucket, p['Key']) for p in to_delete))
674688
675689 if deleted_count:
676690 # display summary data of deleted files
702716 debug(u"Batch delete %d, remaining %d" % (len(to_delete), len(remote_list)))
703717 if not cfg.dry_run:
704718 response = s3.object_batch_delete(to_delete)
705 output('\n'.join((u"delete: '%s'" % to_delete[p]['object_uri_str']) for p in to_delete))
719 output(u'\n'.join((u"delete: '%s'" % to_delete[p]['object_uri_str']) for p in to_delete))
706720 to_delete = remote_list[:1000]
707721 remote_list = remote_list[1000:]
708722
9921006
9931007 if uri.has_object():
9941008 # Temporary hack for performance + python3 compatibility
995 try:
996 # Check python 2 first
1009 if PY3:
1010 info_headers_iter = info['headers'].items()
1011 else:
9971012 info_headers_iter = info['headers'].iteritems()
998 except:
999 info_headers_iter = info['headers'].items()
10001013 for header, value in info_headers_iter:
10011014 if header.startswith('x-amz-meta-'):
10021015 output(u" %s: %s" % (header, value))
10991112 extra_headers = copy(cfg.extra_headers)
11001113 try:
11011114 response = s3.object_copy(src_uri, dst_uri, extra_headers)
1102 output("remote copy: '%(src)s' -> '%(dst)s'" % { "src" : src_uri, "dst" : dst_uri })
1115 output(u"remote copy: '%s' -> '%s'" % (src_uri, dst_uri))
11031116 total_nb_files += 1
11041117 total_size += item.get(u'size', 0)
1105 except S3Error as e:
1118 except S3Error as exc:
11061119 ret = EX_PARTIAL
1107 error("File '%(src)s' could not be copied: %(e)s" % { "src" : src_uri, "e" : e })
1120 error(u"File '%s' could not be copied: %s", src_uri, exc)
11081121 if cfg.stop_on_error:
11091122 raise
11101123 return ret, seq, total_nb_files, total_size
19371950 s3 = S3(cfg)
19381951 uri = S3Uri(args[1])
19391952 policy_file = args[0]
1940 policy = open(deunicodise(policy_file), 'r').read()
1941
1942 if cfg.dry_run: return EX_OK
1953
1954 with open(deunicodise(policy_file), 'r') as fp:
1955 policy = fp.read()
1956
1957 if cfg.dry_run:
1958 return EX_OK
19431959
19441960 response = s3.set_policy(uri, policy)
19451961
19671983 s3 = S3(cfg)
19681984 uri = S3Uri(args[1])
19691985 cors_file = args[0]
1970 cors = open(deunicodise(cors_file), 'r').read()
1971
1972 if cfg.dry_run: return EX_OK
1986
1987 with open(deunicodise(cors_file), 'r') as fp:
1988 cors = fp.read()
1989
1990 if cfg.dry_run:
1991 return EX_OK
19731992
19741993 response = s3.set_cors(uri, cors)
19751994
20122031 s3 = S3(cfg)
20132032 uri = S3Uri(args[1])
20142033 lifecycle_policy_file = args[0]
2015 lifecycle_policy = open(deunicodise(lifecycle_policy_file), 'r').read()
2016
2017 if cfg.dry_run: return EX_OK
2034
2035 with open(deunicodise(lifecycle_policy_file), 'r') as fp:
2036 lifecycle_policy = fp.read()
2037
2038 if cfg.dry_run:
2039 return EX_OK
20182040
20192041 response = s3.set_lifecycle_policy(uri, lifecycle_policy)
20202042
20612083 output(u"Initiated\tPath\tId")
20622084 for mpupload in parseNodes(tree):
20632085 try:
2064 output("%s\t%s\t%s" % (mpupload['Initiated'], "s3://" + uri.bucket() + "/" + mpupload['Key'], mpupload['UploadId']))
2086 output(u"%s\t%s\t%s" % (mpupload['Initiated'], "s3://" + uri.bucket() + "/" + mpupload['Key'], mpupload['UploadId']))
20652087 except KeyError:
20662088 pass
20672089 return EX_OK
20902112 output(u"LastModified\t\t\tPartNumber\tETag\tSize")
20912113 for mpupload in parseNodes(tree):
20922114 try:
2093 output("%s\t%s\t%s\t%s" % (mpupload['LastModified'], mpupload['PartNumber'], mpupload['ETag'], mpupload['Size']))
2115 output(u"%s\t%s\t%s\t%s" % (mpupload['LastModified'], mpupload['PartNumber'], mpupload['ETag'], mpupload['Size']))
20942116 except:
20952117 pass
20962118 return EX_OK
21202142
21212143 def cmd_sign(args):
21222144 string_to_sign = args.pop()
2123 debug("string-to-sign: %r" % string_to_sign)
2145 debug(u"string-to-sign: %r" % string_to_sign)
21242146 signature = Crypto.sign_string_v2(encode_to_s3(string_to_sign))
2125 output("Signature: %s" % decode_from_s3(signature))
2147 output(u"Signature: %s" % decode_from_s3(signature))
21262148 return EX_OK
21272149
21282150 def cmd_signurl(args):
21912213 src = S3Uri("s3://%s/%s" % (culprit.bucket(), key_bin))
21922214 dst = S3Uri("s3://%s/%s" % (culprit.bucket(), key_new))
21932215 if cfg.dry_run:
2194 output("[--dry-run] File %r would be renamed to %s" % (key_bin, key_new))
2216 output(u"[--dry-run] File %r would be renamed to %s" % (key_bin, key_new))
21952217 continue
21962218 try:
21972219 resp_move = s3.object_move(src, dst)
21982220 if resp_move['status'] == 200:
2199 output("File '%r' renamed to '%s'" % (key_bin, key_new))
2221 output(u"File '%r' renamed to '%s'" % (key_bin, key_new))
22002222 count += 1
22012223 else:
2202 error("Something went wrong for: %r" % key)
2203 error("Please report the problem to s3tools-bugs@lists.sourceforge.net")
2224 error(u"Something went wrong for: %r" % key)
2225 error(u"Please report the problem to s3tools-bugs@lists.sourceforge.net")
22042226 except S3Error:
2205 error("Something went wrong for: %r" % key)
2206 error("Please report the problem to s3tools-bugs@lists.sourceforge.net")
2227 error(u"Something went wrong for: %r" % key)
2228 error(u"Please report the problem to s3tools-bugs@lists.sourceforge.net")
22072229
22082230 if count > 0:
2209 warning("Fixed %d files' names. Their ACL were reset to Private." % count)
2210 warning("Use 's3cmd setacl --acl-public s3://...' to make")
2211 warning("them publicly readable if required.")
2231 warning(u"Fixed %d files' names. Their ACL were reset to Private." % count)
2232 warning(u"Use 's3cmd setacl --acl-public s3://...' to make")
2233 warning(u"them publicly readable if required.")
22122234 return EX_OK
22132235
22142236 def resolve_list(lst, args):
22182240 return retval
22192241
22202242 def gpg_command(command, passphrase = ""):
2221 debug("GPG command: " + " ".join(command))
2243 debug(u"GPG command: " + " ".join(command))
22222244 command = [deunicodise(cmd_entry) for cmd_entry in command]
22232245 p = subprocess.Popen(command, stdin = subprocess.PIPE, stdout = subprocess.PIPE, stderr = subprocess.STDOUT,
22242246 close_fds = True)
22252247 p_stdout, p_stderr = p.communicate(deunicodise(passphrase + "\n"))
2226 debug("GPG output:")
2248 debug(u"GPG output:")
22272249 for line in unicodise(p_stdout).split("\n"):
2228 debug("GPG: " + line)
2250 debug(u"GPG: " + line)
22292251 p_exitcode = p.wait()
22302252 return p_exitcode
22312253
23742396 os.unlink(deunicodise(ret_enc[1]))
23752397 os.unlink(deunicodise(ret_dec[1]))
23762398 if hash[0] == hash[2] and hash[0] != hash[1]:
2377 output ("Success. Encryption and decryption worked fine :-)")
2399 output(u"Success. Encryption and decryption worked fine :-)")
23782400 else:
23792401 raise Exception("Encryption verification error.")
23802402
24232445
24242446 def process_patterns_from_file(fname, patterns_list):
24252447 try:
2426 fn = open(deunicodise(fname), "rt")
2448 with open(deunicodise(fname), "rt") as fn:
2449 for pattern in fn:
2450 pattern = unicodise(pattern).strip()
2451 if re.match("^#", pattern) or re.match("^\s*$", pattern):
2452 continue
2453 debug(u"%s: adding rule: %s" % (fname, pattern))
2454 patterns_list.append(pattern)
24272455 except IOError as e:
24282456 error(e)
24292457 sys.exit(EX_IOERR)
2430 for pattern in fn:
2431 pattern = unicodise(pattern).strip()
2432 if re.match("^#", pattern) or re.match("^\s*$", pattern):
2433 continue
2434 debug(u"%s: adding rule: %s" % (fname, pattern))
2435 patterns_list.append(pattern)
24362458
24372459 return patterns_list
24382460
24422464 Process --exclude / --include GLOB and REGEXP patterns.
24432465 'option_txt' is 'exclude' / 'include' / 'rexclude' / 'rinclude'
24442466 Returns: patterns_compiled, patterns_text
2467 Note: process_patterns_from_file will ignore lines starting with # as these
2468 are comments. To match a literal leading # (e.g. in a file name), escape
2469 it: "[#]" (for exclude) or "\#" (for rexclude).
24452470 """
24462471
24472472 patterns_compiled = []
26492674 optparser.add_option( "--no-encrypt", dest="encrypt", action="store_false", help="Don't encrypt files.")
26502675 optparser.add_option("-f", "--force", dest="force", action="store_true", help="Force overwrite and other dangerous operations.")
26512676 optparser.add_option( "--continue", dest="get_continue", action="store_true", help="Continue getting a partially downloaded file (only for [get] command).")
2652 optparser.add_option( "--continue-put", dest="put_continue", action="store_true", help="Continue uploading partially uploaded files or multipart upload parts. Restarts/parts files that don't have matching size and md5. Skips files/parts that do. Note: md5sum checks are not always sufficient to check (part) file equality. Enable this at your own risk.")
2677 optparser.add_option( "--continue-put", dest="put_continue", action="store_true", help="Continue uploading partially uploaded files or multipart upload parts. Restarts parts/files that don't have matching size and md5. Skips files/parts that do. Note: md5sum checks are not always sufficient to check (part) file equality. Enable this at your own risk.")
26532678 optparser.add_option( "--upload-id", dest="upload_id", help="UploadId for Multipart Upload, in case you want continue an existing upload (equivalent to --continue-put) and there are multiple partial uploads. Use s3cmd multipart [URI] to see what UploadIds are associated with the given URI.")
26542679 optparser.add_option( "--skip-existing", dest="skip_existing", action="store_true", help="Skip over files that exist at the destination (only for [get] and [sync] commands).")
26552680 optparser.add_option("-r", "--recursive", dest="recursive", action="store_true", help="Recursive upload, download or removal.")
26602685 optparser.add_option( "--acl-grant", dest="acl_grants", type="s3acl", action="append", metavar="PERMISSION:EMAIL or USER_CANONICAL_ID", help="Grant stated permission to a given amazon user. Permission is one of: read, write, read_acp, write_acp, full_control, all")
26612686 optparser.add_option( "--acl-revoke", dest="acl_revokes", type="s3acl", action="append", metavar="PERMISSION:USER_CANONICAL_ID", help="Revoke stated permission for a given amazon user. Permission is one of: read, write, read_acp, write_acp, full_control, all")
26622687
2663 optparser.add_option("-D", "--restore-days", dest="restore_days", action="store", help="Number of days to keep restored file available (only for 'restore' command).", metavar="NUM")
2688 optparser.add_option("-D", "--restore-days", dest="restore_days", action="store", help="Number of days to keep restored file available (only for 'restore' command). Default is 1 day.", metavar="NUM")
26642689 optparser.add_option( "--restore-priority", dest="restore_priority", action="store", choices=['standard', 'expedited', 'bulk'], help="Priority for restoring files from S3 Glacier (only for 'restore' command). Choices available: bulk, standard, expedited")
26652690
26662691 optparser.add_option( "--delete-removed", dest="delete_removed", action="store_true", help="Delete destination objects with no corresponding source file [sync]")
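Both --restore-days and --restore-priority apply only to the restore command. An example with an illustrative bucket name, keeping a Glacier object readable for a week via the cheaper bulk tier:

    s3cmd restore --restore-days=7 --restore-priority=bulk s3://mybucket/archive/2019.tar.gz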
26882713 optparser.add_option( "--host-bucket", dest="host_bucket", help="DNS-style bucket+hostname:port template for accessing a bucket (default: %s)" % (cfg.host_bucket))
26892714 optparser.add_option( "--reduced-redundancy", "--rr", dest="reduced_redundancy", action="store_true", help="Store object with 'Reduced redundancy'. Lower per-GB price. [put, cp, mv]")
26902715 optparser.add_option( "--no-reduced-redundancy", "--no-rr", dest="reduced_redundancy", action="store_false", help="Store object without 'Reduced redundancy'. Higher per-GB price. [put, cp, mv]")
2691 optparser.add_option( "--storage-class", dest="storage_class", action="store", metavar="CLASS", help="Store object with specified CLASS (STANDARD, STANDARD_IA, or REDUCED_REDUNDANCY). Lower per-GB price. [put, cp, mv]")
2716 optparser.add_option( "--storage-class", dest="storage_class", action="store", metavar="CLASS", help="Store object with specified CLASS (STANDARD, STANDARD_IA, ONEZONE_IA, INTELLIGENT_TIERING, GLACIER or DEEP_ARCHIVE). [put, cp, mv]")
26922717 optparser.add_option( "--access-logging-target-prefix", dest="log_target_prefix", help="Target prefix for access logs (S3 URI) (for [cfmodify] and [accesslog] commands)")
26932718 optparser.add_option( "--no-access-logging", dest="log_target_prefix", action="store_false", help="Disable access logging (for [cfmodify] and [accesslog] commands)")
26942719
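The widened --storage-class help text reflects the classes accepted upstream; for example (bucket name illustrative):

    s3cmd put --storage-class=DEEP_ARCHIVE backup.tar s3://mybucket/backups/backup.tar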
27482773 optparser.add_option( "--no-check-hostname", dest="check_ssl_hostname", action="store_false", help="Do not check SSL certificate hostname validity")
27492774 optparser.add_option( "--signature-v2", dest="signature_v2", action="store_true", help="Use AWS Signature version 2 instead of newer signature methods. Helpful for S3-like systems that don't have AWS Signature v4 yet.")
27502775 optparser.add_option( "--limit-rate", dest="limitrate", action="store", type="string", help="Limit the upload or download speed to amount bytes per second. Amount may be expressed in bytes, kilobytes with the k suffix, or megabytes with the m suffix")
2776 optparser.add_option( "--no-connection-pooling", dest="connection_pooling", action="store_false", help="Disable connection re-use")
27512777 optparser.add_option( "--requester-pays", dest="requester_pays", action="store_true", help="Set the REQUESTER PAYS flag for operations")
27522778 optparser.add_option("-l", "--long-listing", dest="long_listing", action="store_true", help="Produce long listing [ls]")
27532779 optparser.add_option( "--stop-on-error", dest="stop_on_error", action="store_true", help="stop if error in transfer")
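--limit-rate accepts k/m suffixes, and the new --no-connection-pooling flag forces a fresh connection per request. An illustrative combination:

    s3cmd put --limit-rate=2m --no-connection-pooling big.iso s3://mybucket/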
28202846 key, val = unicodise_s(hdr).split(":", 1)
28212847 except ValueError:
28222848 raise ParameterError("Invalid header format: %s" % unicodise_s(hdr))
2823 key_inval = re.sub("[a-zA-Z0-9-.]", "", key)
2849 # key char restrictions of the http headers name specification
2850 key_inval = re.sub(r"[a-zA-Z0-9\-.!#$%&*+^_|]", "", key)
28242851 if key_inval:
28252852 key_inval = key_inval.replace(" ", "<space>")
28262853 key_inval = key_inval.replace("\t", "<tab>")
2827 raise ParameterError("Invalid character(s) in header name '%s': \"%s\"" % (key, key_inval))
2828 debug(u"Updating Config.Config extra_headers[%s] -> %s" % (key.replace('_', '-').strip().lower(), val.strip()))
2829 cfg.extra_headers[key.replace('_', '-').strip().lower()] = val.strip()
2854 raise ParameterError("Invalid character(s) in header name '%s'"
2855 ": \"%s\"" % (key, key_inval))
2856 debug(u"Updating Config.Config extra_headers[%s] -> %s" %
2857 (key.strip().lower(), val.strip()))
2858 cfg.extra_headers[key.strip().lower()] = val.strip()
28302859
28312860 # Process --remove-header
28322861 if options.remove_headers:
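The new character class brings header-name validation closer to the token characters permitted in HTTP field names (a subset of RFC 7230's token set). A standalone sketch of the same check, with an illustrative function name:

    import re

    def invalid_header_chars(name):
        # Remove every character the header-name rule accepts;
        # anything left over is invalid.
        return re.sub(r"[a-zA-Z0-9\-.!#$%&*+^_|]", "", name)

    assert invalid_header_chars("x-amz-meta-owner") == ""
    assert invalid_header_chars("bad header") == " "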
31183147 sys.stderr.write("See ya!\n")
31193148 sys.exit(EX_BREAK)
31203149
3121 except SSLError as e:
3150 except (S3SSLError, S3SSLCertificateError) as e:
31223151 # SSLError is a subtype of IOError
31233152 error("SSL certificate verification failure: %s" % e)
31243153 sys.exit(EX_ACCESSDENIED)
215215 .TP
216216 \fB\-\-continue\-put\fR
217217 Continue uploading partially uploaded files or
218 multipart upload parts. Restarts/parts files that
218 multipart upload parts. Restarts parts/files that
219219 don't have matching size and md5. Skips files/parts
220220 that do. Note: md5sum checks are not always
221221 sufficient to check (part) file equality. Enable this
263263 .TP
264264 \fB\-D\fR NUM, \fB\-\-restore\-days\fR=NUM
265265 Number of days to keep restored file available (only
266 for 'restore' command).
266 for 'restore' command). Default is 1 day.
267267 .TP
268268 \fB\-\-restore\-priority\fR=RESTORE_PRIORITY
269269 Priority for restoring files from S3 Glacier (only for
366366 .TP
367367 \fB\-\-storage\-class\fR=CLASS
368368 Store object with specified CLASS (STANDARD,
369 STANDARD_IA, or REDUCED_REDUNDANCY). Lower per\-GB
370 price. [put, cp, mv]
369 STANDARD_IA, ONEZONE_IA, INTELLIGENT_TIERING, GLACIER
370 or DEEP_ARCHIVE). [put, cp, mv]
371371 .TP
372372 \fB\-\-access\-logging\-target\-prefix\fR=LOG_TARGET_PREFIX
373373 Target prefix for access logs (S3 URI) (for [cfmodify]
523523 Enable debug output.
524524 .TP
525525 \fB\-\-version\fR
526 Show s3cmd version (2.0.2) and exit.
526 Show s3cmd version (2.1.0) and exit.
527527 .TP
528528 \fB\-F\fR, \fB\-\-follow\-symlinks\fR
529529 Follow symbolic links as if they are regular files
559559 Limit the upload or download speed to amount bytes per
560560 second. Amount may be expressed in bytes, kilobytes
561561 with the k suffix, or megabytes with the m suffix
562 .TP
563 \fB\-\-no\-connection\-pooling\fR
564 Disable connection re\-use
562565 .TP
563566 \fB\-\-requester\-pays\fR
564567 Set the REQUESTER PAYS flag for operations
0 Metadata-Version: 1.1
0 Metadata-Version: 1.2
11 Name: s3cmd
2 Version: 2.0.2
2 Version: 2.1.0
33 Summary: Command line tool for managing Amazon S3 and CloudFront services
44 Home-page: http://s3tools.org
5 Author: github.com/mdomsch, github.com/matteobar, github.com/fviard
6 Author-email: s3tools-bugs@lists.sourceforge.net
5 Author: Michal Ludvig
6 Author-email: michal@logix.cz
7 Maintainer: github.com/fviard, github.com/matteobar
8 Maintainer-email: s3tools-bugs@lists.sourceforge.net
79 License: GNU GPL v2+
8 Description-Content-Type: UNKNOWN
910 Description:
1011
1112 S3cmd lets you copy files from/to Amazon S3
1718
1819 Authors:
1920 --------
21 Florent Viard <florent@sodria.com>
2022 Michal Ludvig <michal@logix.cz>
23 Matt Domsch (github.com/mdomsch)
2124
2225 Platform: UNKNOWN
2326 Classifier: Development Status :: 5 - Production/Stable
4043 Classifier: Programming Language :: Python :: 3.4
4144 Classifier: Programming Language :: Python :: 3.5
4245 Classifier: Programming Language :: Python :: 3.6
46 Classifier: Programming Language :: Python :: 3.7
47 Classifier: Programming Language :: Python :: 3.8
4348 Classifier: Topic :: System :: Archiving
4449 Classifier: Topic :: Utilities
0 INSTALL
0 INSTALL.md
11 LICENSE
22 MANIFEST.in
33 NEWS
00 [sdist]
11 formats = gztar,zip
2
3 [bdist_wheel]
4 universal = 1
25
36 [egg_info]
47 tag_build =
0 #!/usr/bin/env python2
1 # -*- coding=utf-8 -*-
0 #!/usr/bin/env python
1 # -*- coding: utf-8 -*-
22
33 from __future__ import print_function
44
4141 ## Re-create the manpage
4242 ## (Beware! Perl script on the loose!!)
4343 if len(sys.argv) > 1 and sys.argv[1] == "sdist":
44 if os.stat_result(os.stat("s3cmd.1")).st_mtime < os.stat_result(os.stat("s3cmd")).st_mtime:
44 if os.stat_result(os.stat("s3cmd.1")).st_mtime \
45 < os.stat_result(os.stat("s3cmd")).st_mtime:
4546 sys.stderr.write("Re-create man page first!\n")
4647 sys.stderr.write("Run: ./s3cmd --help | ./format-manpage.pl > s3cmd.1\n")
4748 sys.exit(1)
5253 man_path = os.getenv("S3CMD_INSTPATH_MAN") or "share/man"
5354 doc_path = os.getenv("S3CMD_INSTPATH_DOC") or "share/doc/packages"
5455 data_files = [
55 (doc_path+"/s3cmd", [ "README.md", "INSTALL", "LICENSE", "NEWS" ]),
56 (man_path+"/man1", [ "s3cmd.1" ] ),
56 (doc_path+"/s3cmd", ["README.md", "INSTALL.md", "LICENSE", "NEWS"]),
57 (man_path+"/man1", ["s3cmd.1"]),
5758 ]
5859 else:
5960 data_files = None
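The data_files block above honours two environment variables for the install prefixes, falling back to share/man and share/doc/packages otherwise. A hypothetical override (paths are made up):

    S3CMD_INSTPATH_MAN=/usr/local/share/man \
    S3CMD_INSTPATH_DOC=/usr/local/share/doc \
    python setup.py install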
6162 ## Main distutils info
6263 setup(
6364 ## Content description
64 name = S3.PkgInfo.package,
65 version = S3.PkgInfo.version,
66 packages = [ 'S3' ],
67 scripts = ['s3cmd'],
68 data_files = data_files,
65 name=S3.PkgInfo.package,
66 version=S3.PkgInfo.version,
67 packages=['S3'],
68 scripts=['s3cmd'],
69 data_files=data_files,
6970
7071 ## Packaging details
71 author = "Michal Ludvig",
72 author_email = "michal@logix.cz",
73 maintainer = "github.com/mdomsch, github.com/matteobar, github.com/fviard",
74 maintainer_email = "s3tools-bugs@lists.sourceforge.net",
75 url = S3.PkgInfo.url,
76 license = S3.PkgInfo.license,
77 description = S3.PkgInfo.short_description,
78 long_description = """
72 author="Michal Ludvig",
73 author_email="michal@logix.cz",
74 maintainer="github.com/fviard, github.com/matteobar",
75 maintainer_email="s3tools-bugs@lists.sourceforge.net",
76 url=S3.PkgInfo.url,
77 license=S3.PkgInfo.license,
78 description=S3.PkgInfo.short_description,
79 long_description="""
7980 %s
8081
8182 Authors:
8283 --------
84 Florent Viard <florent@sodria.com>
8385 Michal Ludvig <michal@logix.cz>
86 Matt Domsch (github.com/mdomsch)
8487 """ % (S3.PkgInfo.long_description),
8588
86 classifiers = [
89 classifiers=[
8790 'Development Status :: 5 - Production/Stable',
8891 'Environment :: Console',
8992 'Environment :: MacOS X',
104107 'Programming Language :: Python :: 3.4',
105108 'Programming Language :: Python :: 3.5',
106109 'Programming Language :: Python :: 3.6',
110 'Programming Language :: Python :: 3.7',
111 'Programming Language :: Python :: 3.8',
107112 'Topic :: System :: Archiving',
108113 'Topic :: Utilities',
109114 ],
110115
111 install_requires = ["python-dateutil", "python-magic"]
116 install_requires=["python-dateutil", "python-magic"]
112117 )
113118
114119 # vim:et:ts=4:sts=4:ai