Codebase list s3cmd / 7790088
Imported Upstream version 0.9.3 - Gianfranco Costamagna
16 changed files with 1588 additions and 0 deletions.
0 Installation of s3cmd package
1 =============================
2
3 Author:
4 Michal Ludvig <michal@logix.cz>
5
6 S3tools / S3cmd project homepage:
7 http://s3tools.sourceforge.net
8
9 Amazon S3 homepage:
10 http://aws.amazon.com/s3
11
12 !!!
13 !!! Please consult README file for setup, usage and examples!
14 !!!
15
16 Package formats
17 ---------------
18 S3cmd is distributed in two formats:
19 1) Prebuilt RPM file - should work on most RPM-based
20 distributions
21 2) Source .tar.gz package
22
23
24
25 Installation of RPM package
26 ---------------------------
27 As user "root" run:
28
29 rpm -ivh s3cmd-X.Y.Z.noarch.rpm
30
31 where X.Y.Z is the most recent s3cmd release version.
32
33 You may be informed about missing dependencies
34 on Python or some libraries. Please consult your
35 distribution documentation on ways to solve the problem.
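
If you want to check the package's dependencies up front, rpm can
list them for you (an illustrative session only; X.Y.Z stands in for
the actual release version):

   rpm -qpR s3cmd-X.Y.Z.noarch.rpm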
36
37
38 Installation of source .tar.gz package
39 --------------------------------------
40 There are three options to run s3cmd from the source tarball:
41
42 1) S3cmd program as distributed in s3cmd-X.Y.Z.tar.gz
43 can be run directly from where you untar'ed the package.
44
45 2) Or you may want to move the "s3cmd" file and the "S3" subdirectory
46 to some other path. Make sure that the "S3" subdirectory ends up
47 in the same place as the "s3cmd" file.
48
49 For instance, if you decide to move s3cmd to your $HOME/bin
50 you will have the file $HOME/bin/s3cmd and the directory $HOME/bin/S3
51 with a number of support files.
52
53 3) The cleanest and most recommended approach is to run
54
55 python setup.py install
56
57 You will however need the Python "distutils" module for this
58 to work. It is often part of the core Python package (e.g. in
59 the OpenSUSE Python 2.5 package) or it can be installed using
60 your package manager, e.g. in Debian use
61
62 apt-get install python2.4-setuptools
63
64 Again, consult your distribution documentation to find out the
65 actual package name and how to install it. See the example
66 session below.
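
For illustration only, a typical source-install session might look
like this (X.Y.Z stands in for the actual release version you
downloaded; run the last command as root or via sudo if installing
into system-wide directories):

   tar xzf s3cmd-X.Y.Z.tar.gz
   cd s3cmd-X.Y.Z
   python setup.py install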
66
67
68 Where to get help
69 -----------------
70 If in doubt, or if something doesn't work as expected,
71 get back to us via the mailing list:
72
73 s3tools-general@lists.sourceforge.net
74
75 For more information refer to:
76 * S3cmd / S3tools homepage at http://s3tools.sourceforge.net
77
78 Enjoy!
79
80 Michal Ludvig
81 * michal@logix.cz
82 * http://www.logix.cz/michal
83
0 s3cmd 0.9.3 - 2007-05-26
1 ===========
2 * New command "du" for displaying size of your data in S3.
3 (Basil Shubin)
4
5 s3cmd 0.9.2 - 2007-04-09
6 ===========
7 * Lots of new documentation
8 * Allow "get" to stdout (use "-" in place of destination file
9 to get the file contents on stdout)
10 * Better compatibility with Python 2.4
11 * Output public HTTP URL for objects stored with Public ACL
12 * Various bugfixes and improvements
13
14 s3cmd 0.9.1 - 2007-02-06
15 ===========
16 * All commands now use S3-URIs
17 * Removed hard dependency on Python 2.5
18 * Experimental support for Python 2.4
19 (requires external ElementTree module)
20
21
22 s3cmd 0.9.0 - 2007-01-18
23 ===========
24 * First public release brings support for all basic Amazon S3
25 operations: Creation and Removal of buckets, Upload (put),
26 Download (get) and Removal (del) of files/objects.
27
0 Metadata-Version: 1.0
1 Name: s3cmd
2 Version: 0.9.3
3 Summary: S3cmd is a tool for managing Amazon S3 storage space.
4 Home-page: http://s3tools.logix.cz
5 Author: Michal Ludvig
6 Author-email: michal@logix.cz
7 License: GPL version 2
8 Description:
9
10 S3cmd lets you copy files from/to Amazon S3
11 (Simple Storage Service) using a simple-to-use
12 command-line client.
13
14
15 Authors:
16 --------
17 Michal Ludvig <michal@logix.cz>
18
19 Platform: UNKNOWN
0 S3cmd tool for Amazon Simple Storage Service (S3)
1 =================================================
2
3 Author:
4 Michal Ludvig <michal@logix.cz>
5
6 S3tools / S3cmd project homepage:
7 http://s3tools.sourceforge.net
8
9 S3tools / S3cmd mailing list:
10 s3tools-general@lists.sourceforge.net
11
12 Amazon S3 homepage:
13 http://aws.amazon.com/s3
14
15 !!!
16 !!! Please consult INSTALL file for installation instructions!
17 !!!
18
19 What is Amazon S3
20 -----------------
21 Amazon S3 provides a managed, internet-accessible storage
22 service where anyone can store any amount of data and
23 retrieve it again later. The maximum size of one "object"
24 is 5GB; the number of objects is not limited.
25
26 S3 is a paid service operated by the well-known Amazon.com
27 internet book shop. Before storing anything in S3 you
28 must sign up for an "AWS" account (where AWS = Amazon Web
29 Services) to obtain a pair of identifiers: Access Key and
30 Secret Key. You will need to give these keys to S3cmd.
31 Think of them as if they were a username and password for
32 your S3 account.
33
34 Pricing explained
35 -----------------
36 At the time of this writing the costs of using S3 are:
37 1) US$0.15 per GB-Month of storage used.
38 2) US$0.20 per GB of data transferred.
39
40 If, for instance, on the 1st of January you upload 2GB of
41 JPEG photos from your holiday in New Zealand, at the
42 end of January you will be charged $0.30 for using 2GB of
43 storage space for a month and $0.40 for transferring 2GB
44 of data. That comes to $0.70 for a complete backup of your
45 precious holiday pictures.
46
47 In February you don't touch it. Your data are still on S3
48 servers so you pay $0.30 for those two gigabytes, but not
49 a single cent will be charged for any transfer. That comes
50 to $0.30 as an ongoing cost of your backup. Not too bad.
51
52 In March you allow anonymous read access to some of your
53 pictures and your friends download, say, 500MB of them.
54 As the files are owned by you, you are responsible for the
55 costs incurred. That means at the end of March you'll be
56 charged $0.30 for storage plus $0.10 for the traffic
57 generated by your friends (500MB at $0.20 per GB).
58
59 There is no minimum monthly contract or setup fee. What
60 you use is what you pay for. In the beginning my bill used
61 to be something like US$0.03, or even nil.
62
63 That's the pricing model of Amazon S3 in a nutshell. Check
64 Amazon S3 homepage at http://aws.amazon.com/s3 for more
65 details.
66
67 Needless to say, all this money is charged by Amazon
68 itself; there is obviously no charge for using S3cmd :-)
69
70 Amazon S3 basics
71 ----------------
72 Files stored in S3 are called "objects" and their names are
73 officially called "keys". Each object belongs to exactly one
74 "bucket". Buckets are kind of directories or folders with
75 some restrictions: 1) each user can only have 100 buckets at
76 the most, 2) bucket names must be unique amongst all users
77 of S3, 3) buckets can not be nested into a deeper
78 hierarchy and 4) a name of a bucket can only consist of basic
79 alphanumeric characters plus dot (.) and dash (-). No spaces,
80 no accented or UTF-8 letters, etc.
81
82 On the other hand there are almost no restrictions on object
83 names ("keys"). These can be any UTF-8 strings of up to 1024
84 bytes long. Interestingly enough the object name can contain
85 forward slash character (/) thus a "my/funny/picture.jpg" is
86 a valid object name. Note that there are not directories nor
87 buckets called "my" and "funny" - it is really a single object
88 name called "my/funny/picture.jpg" and S3 does not care at
89 all that it _looks_ like a directory structure.
90
91 To describe objects in S3 storage we invented a URI-like
92 schema in the following form:
93
94 s3://BUCKET/OBJECT
95
96 See the HowTo later in this document for example usages of
97 this S3-URI schema.
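
For example, the object "my/funny/picture.jpg" mentioned above,
stored in a bucket called, say, "my-bucket", would be addressed as:

   s3://my-bucket/my/funny/picture.jpg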
98
99 Simple S3cmd HowTo
100 ------------------
101 1) Register for Amazon AWS / S3
102 Go to http://aws.amazon.com/s3, click the "Sign up
103 for web service" button in the right column and work
104 through the registration. You will have to supply
105 your Credit Card details in order to allow Amazon to
106 charge you for S3 usage.
107 At the end you should possess your Access and Secret Keys.
108
109 2) Run "s3cmd --configure"
110 You will be asked for the two keys - copy and paste
111 them from your confirmation email or from your Amazon
112 account page. Be careful when copying them! They are
113 case sensitive and must be entered accurately or you'll
114 keep getting errors about invalid signatures or similar.
115
116 3) Run "s3cmd ls" to list all your buckets.
117 As you have just started using S3, there are no buckets
118 owned by you yet, so the output will be empty.
119
120 4) Make a bucket with "s3cmd mb s3://my-new-bucket-name"
121 As mentioned above, bucket names must be unique amongst
122 _all_ users of S3. That means simple names like "test"
123 or "asdf" are already taken and you must make up something
124 more original. I sometimes prefix my bucket names with
125 my e-mail domain name (logix.cz) leading to a bucket name,
126 for instance, 'logix.cz-test':
127
128 ~$ s3cmd mb s3://logix.cz-test
129 Bucket 'logix.cz-test' created
130
131 5) List your buckets again with "s3cmd ls"
132 Now you should see your freshly created bucket
133
134 ~$ s3cmd ls
135 2007-01-19 01:41 s3://logix.cz-test
136
137 6) List the contents of the bucket
138
139 ~$ s3cmd ls s3://logix.cz-test
140 Bucket 'logix.cz-test':
141 ~$
142
143 It's empty, indeed.
144
145 7) Upload a file into the bucket
146
147 ~$ s3cmd put addressbook.xml s3://logix.cz-test/addrbook.xml
148 File 'addressbook.xml' stored as s3://logix.cz-test/addrbook.xml (123456 bytes)
149
150 8) Now we can list the bucket contents again
151
152 ~$ s3cmd ls s3://logix.cz-test
153 Bucket 'logix.cz-test':
154 2007-01-19 01:46 120k s3://logix.cz-test/addrbook.xml
155
156 9) Retrieve the file back and verify that it hasn't been
157 corrupted.
158
159 ~$ s3cmd get s3://logix.cz-test/addrbook.xml addressbook-2.xml
160 Object s3://logix.cz-test/addrbook.xml saved as 'addressbook-2.xml' (123456 bytes)
161
162 ~$ md5sum addressbook.xml addressbook-2.xml
163 39bcb6992e461b269b95b3bda303addf addressbook.xml
164 39bcb6992e461b269b95b3bda303addf addressbook-2.xml
165
166 The checksum of the original file matches that of the
167 retrieved one. Looks like it worked :-)
168
169 10) Clean up: delete the object and remove the bucket
170
171 ~$ s3cmd rb s3://logix.cz-test
172 ERROR: S3 error: 409 (Conflict): BucketNotEmpty
173
174 Ouch, we can only remove empty buckets!
175
176 ~$ s3cmd del s3://logix.cz-test/addrbook.xml
177 Object s3://logix.cz-test/addrbook.xml deleted
178
179 ~$ s3cmd rb s3://logix.cz-test
180 Bucket 'logix.cz-test' removed
181
182 Hints
183 -----
184 The basic usage is as simple as described in the previous
185 section.
186
187 You can increase the level of verbosity with the -v option,
188 and if you're really keen to know what the program does under
189 its bonnet, run it with -d to see all 'debugging' output.
190
191 After configuring it with --configure, all available options
192 are written to your ~/.s3cfg file. It's a text file ready
193 to be modified in your favourite text editor.
194
195 Multiple local files may be specified for "s3cmd put"
196 operation. In that case the S3 URI should only include
197 the bucket name, not the object part:
198
199 ~$ s3cmd put file-* s3://logix.cz-test/
200 File 'file-one.txt' stored as s3://logix.cz-test/file-one.txt (4 bytes)
201 File 'file-two.txt' stored as s3://logix.cz-test/file-two.txt (4 bytes)
202
203 Alternatively, if you specify the object part as well, it
204 will be treated as a prefix, and all filenames given on the
205 command line will be appended to the prefix to make up
206 the object name. However, the --force option is required in
207 this case:
208
209 ~$ s3cmd put --force file-* s3://logix.cz-test/prefixed:
210 File 'file-one.txt' stored as s3://logix.cz-test/prefixed:file-one.txt (4 bytes)
211 File 'file-two.txt' stored as s3://logix.cz-test/prefixed:file-two.txt (4 bytes)
212
213 This prefixing mode works with "s3cmd ls" as well:
214
215 ~$ s3cmd ls s3://logix.cz-test
216 Bucket 'logix.cz-test':
217 2007-01-19 02:12 4 s3://logix.cz-test/file-one.txt
218 2007-01-19 02:12 4 s3://logix.cz-test/file-two.txt
219 2007-01-19 02:12 4 s3://logix.cz-test/prefixed:file-one.txt
220 2007-01-19 02:12 4 s3://logix.cz-test/prefixed:file-two.txt
221
222 Now with a prefix to list only names beginning with "file-":
223
224 ~$ s3cmd ls s3://logix.cz-test/file-*
225 Bucket 'logix.cz-test':
226 2007-01-19 02:12 4 s3://logix.cz-test/file-one.txt
227 2007-01-19 02:12 4 s3://logix.cz-test/file-two.txt
228
229 For more information refer to:
230 * S3cmd / S3tools homepage at http://s3tools.sourceforge.net
231 * Amazon S3 homepage at http://aws.amazon.com/s3
232
233 Enjoy!
234
235 Michal Ludvig
236 * michal@logix.cz
237 * http://www.logix.cz/michal
238
0 ## Amazon S3 manager
1 ## Author: Michal Ludvig <michal@logix.cz>
2 ## http://www.logix.cz/michal
3 ## License: GPL Version 2
4
5 class BidirMap(object):
6 def __init__(self, **map):
7 self.k2v = {}
8 self.v2k = {}
9 for key in map:
10 self.__setitem__(key, map[key])
11
12 def __setitem__(self, key, value):
13 if self.v2k.has_key(value):
14 if self.v2k[value] != key:
15 raise KeyError("Value '"+str(value)+"' already in use with key '"+str(self.v2k[value])+"'")
16 try:
17 del(self.v2k[self.k2v[key]])
18 except KeyError:
19 pass
20 self.k2v[key] = value
21 self.v2k[value] = key
22
23 def __getitem__(self, key):
24 return self.k2v[key]
25
26 def __str__(self):
27 return self.v2k.__str__()
28
29 def getkey(self, value):
30 return self.v2k[value]
31
32 def getvalue(self, key):
33 return self.k2v[key]
34
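## Editorial usage sketch (not part of the original module): BidirMap
## keeps a two-way mapping, so lookups work in both directions and a
## value may only be bound to one key at a time.
if __name__ == "__main__":
    m = BidirMap(GET = 0x01, PUT = 0x02)
    print "m['GET'] =", m["GET"]                ## -> 1
    print "m.getkey(0x02) =", m.getkey(0x02)    ## -> 'PUT'
    try:
        m["HEAD"] = 0x01                        ## value already bound to 'GET'
    except KeyError, e:
        print "KeyError:", e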
35
0 ## Amazon S3 manager
1 ## Author: Michal Ludvig <michal@logix.cz>
2 ## http://www.logix.cz/michal
3 ## License: GPL Version 2
4
5 import logging
6 from logging import debug, info, warning, error
7 import re
8
9 class Config(object):
10 _instance = None
11 _parsed_files = []
12 access_key = ""
13 secret_key = ""
14 host = "s3.amazonaws.com"
15 verbosity = logging.WARNING
16 send_chunk = 4096
17 recv_chunk = 4096
18 human_readable_sizes = False
19 force = False
20 acl_public = False
21
22 ## Creating a singleton
23 def __new__(self, configfile = None):
24 if self._instance is None:
25 self._instance = object.__new__(self)
26 return self._instance
27
28 def __init__(self, configfile = None):
29 if configfile:
30 self.read_config_file(configfile)
31
32 def option_list(self):
33 retval = []
34 for option in dir(self):
35 ## Skip attributes that start with underscore or are not string, int or bool
36 option_type = type(getattr(Config, option))
37 if option.startswith("_") or \
38 not (option_type in (
39 type("string"), # str
40 type(42), # int
41 type(True))): # bool
42 continue
43 retval.append(option)
44 return retval
45
46 def read_config_file(self, configfile):
47 cp = ConfigParser(configfile)
48 for option in self.option_list():
49 self.update_option(option, cp.get(option))
50 self._parsed_files.append(configfile)
51
52 def dump_config(self, stream):
53 ConfigDumper(stream).dump("default", self)
54
55 def update_option(self, option, value):
56 if value is None:
57 return
58 #### Special treatment of some options
59 ## verbosity must be known to "logging" module
60 if option == "verbosity":
61 try:
62 setattr(Config, "verbosity", logging._levelNames[value])
63 except KeyError:
64 error("Config: verbosity level '%s' is not valid" % value)
65 ## allow yes/no, true/false, on/off and 1/0 for boolean options
66 elif type(getattr(Config, option)) is type(True): # bool
67 if str(value).lower() in ("true", "yes", "on", "1"):
68 setattr(Config, option, True)
69 elif str(value).lower() in ("false", "no", "off", "0"):
70 setattr(Config, option, False)
71 else:
72 error("Config: value of option '%s' must be Yes or No, not '%s'" % (option, value))
73 elif type(getattr(Config, option)) is type(42): # int
74 try:
75 setattr(Config, option, int(value))
76 except ValueError, e:
77 error("Config: value of option '%s' must be an integer, not '%s'" % (option, value))
78 else: # string
79 setattr(Config, option, value)
80
81 class ConfigParser(object):
82 def __init__(self, file, sections = []):
83 self.cfg = {}
84 self.parse_file(file, sections)
85
86 def parse_file(self, file, sections = []):
87 info("ConfigParser: Reading file '%s'" % file)
88 if type(sections) != type([]):
89 sections = [sections]
90 in_our_section = True
91 f = open(file, "r")
92 r_comment = re.compile("^\s*#.*")
93 r_empty = re.compile("^\s*$")
94 r_section = re.compile("^\[([^\]]+)\]")
95 r_data = re.compile("^\s*(?P<key>\w+)\s*=\s*(?P<value>.*)")
96 r_quotes = re.compile("^\"(.*)\"\s*$")
97 for line in f:
98 if r_comment.match(line) or r_empty.match(line):
99 continue
100 is_section = r_section.match(line)
101 if is_section:
102 section = is_section.groups()[0]
103 in_our_section = (section in sections) or (len(sections) == 0)
104 continue
105 is_data = r_data.match(line)
106 if is_data and in_our_section:
107 data = is_data.groupdict()
108 if r_quotes.match(data["value"]):
109 data["value"] = data["value"][1:-1]
110 self.__setitem__(data["key"], data["value"])
111 if data["key"] in ("access_key", "secret_key"):
112 print_value = (data["value"][:3]+"...%d_chars..."+data["value"][-2:]) % (len(data["value"]) - 5)
113 else:
114 print_value = data["value"]
115 debug("ConfigParser: %s->%s" % (data["key"], print_value))
116 continue
117 warning("Ignoring invalid line in '%s': %s" % (file, line))
118
119 def __getitem__(self, name):
120 return self.cfg[name]
121
122 def __setitem__(self, name, value):
123 self.cfg[name] = value
124
125 def get(self, name, default = None):
126 if self.cfg.has_key(name):
127 return self.cfg[name]
128 return default
129
130 class ConfigDumper(object):
131 def __init__(self, stream):
132 self.stream = stream
133
134 def dump(self, section, config):
135 self.stream.write("[%s]\n" % section)
136 for option in config.option_list():
137 self.stream.write("%s = %s\n" % (option, getattr(config, option)))
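
## Editorial usage sketch (an illustration, not part of the original
## module): Config is a singleton, so every Config() call returns the
## same instance and options set via update_option() are visible
## program-wide:
##
##    cfg = Config()
##    cfg.update_option("force", "yes")   ## boolean options accept yes/no/on/off/1/0
##    assert Config().force is True       ## same instance everywhere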
138
0 package = "s3cmd"
1 version = "0.9.3"
2 url = "http://s3tools.logix.cz"
3 license = "GPL version 2"
4 short_description = "S3cmd is a tool for managing Amazon S3 storage space."
5 long_description = """
6 S3cmd lets you copy files from/to Amazon S3
7 (Simple Storage Service) using a simple-to-use
8 command-line client.
9 """
10
0 ## Amazon S3 manager
1 ## Author: Michal Ludvig <michal@logix.cz>
2 ## http://www.logix.cz/michal
3 ## License: GPL Version 2
4
5 import sys
6 import os, os.path
7 import base64
8 import md5
9 import sha
10 import hmac
11 import httplib
12 import logging
13 from logging import debug, info, warning, error
14 from stat import ST_SIZE
import time, re    ## used below; previously reached only via "from Utils import *"
15
16 from Utils import *
17 from SortedDict import SortedDict
18 from BidirMap import BidirMap
19 from Config import Config
20
21 class S3Error (Exception):
22 def __init__(self, response):
23 self.status = response["status"]
24 self.reason = response["reason"]
25 self.info = {}
26 debug("S3Error: %s (%s)" % (self.status, self.reason))
27 if response.has_key("headers"):
28 for header in response["headers"]:
29 debug("HttpHeader: %s: %s" % (header, response["headers"][header]))
30 if response.has_key("data"):
31 tree = ET.fromstring(response["data"])
32 for child in tree.getchildren():
33 if child.text != "":
34 debug("ErrorXML: " + child.tag + ": " + repr(child.text))
35 self.info[child.tag] = child.text
36
37 def __str__(self):
38 retval = "%d (%s)" % (self.status, self.reason)
39 try:
40 retval += (": %s" % self.info["Code"])
41 except KeyError:
42 pass
43 return retval
44
45 class ParameterError(Exception):
46 pass
47
48 class S3(object):
49 http_methods = BidirMap(
50 GET = 0x01,
51 PUT = 0x02,
52 HEAD = 0x04,
53 DELETE = 0x08,
54 MASK = 0x0F,
55 )
56
57 targets = BidirMap(
58 SERVICE = 0x0100,
59 BUCKET = 0x0200,
60 OBJECT = 0x0400,
61 MASK = 0x0700,
62 )
63
64 operations = BidirMap(
65 UNDEFINED = 0x0000,
66 LIST_ALL_BUCKETS = targets["SERVICE"] | http_methods["GET"],
67 BUCKET_CREATE = targets["BUCKET"] | http_methods["PUT"],
68 BUCKET_LIST = targets["BUCKET"] | http_methods["GET"],
69 BUCKET_DELETE = targets["BUCKET"] | http_methods["DELETE"],
70 OBJECT_PUT = targets["OBJECT"] | http_methods["PUT"],
71 OBJECT_GET = targets["OBJECT"] | http_methods["GET"],
72 OBJECT_HEAD = targets["OBJECT"] | http_methods["HEAD"],
73 OBJECT_DELETE = targets["OBJECT"] | http_methods["DELETE"],
74 )
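    ## Editorial note: each operation code combines a target bit with an
    ## HTTP method bit, e.g. OBJECT_PUT == 0x0400 | 0x02 == 0x0402.
    ## create_request() later recovers the verb with
    ## S3.operations[operation] & S3.http_methods["MASK"].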
75
76 codes = {
77 "NoSuchBucket" : "Bucket '%s' does not exist",
78 "AccessDenied" : "Access to bucket '%s' was denied",
79 "BucketAlreadyExists" : "Bucket '%s' already exists",
80 }
81
82 def __init__(self, config):
83 self.config = config
84
85 ## Commands / Actions
86 def list_all_buckets(self):
87 request = self.create_request("LIST_ALL_BUCKETS")
88 response = self.send_request(request)
89 response["list"] = getListFromXml(response["data"], "Bucket")
90 return response
91
92 def bucket_list(self, bucket, prefix = None):
93 ## TODO: use prefix if supplied
94 request = self.create_request("BUCKET_LIST", bucket = bucket, prefix = prefix)
95 response = self.send_request(request)
96 debug(response)
97 response["list"] = getListFromXml(response["data"], "Contents")
98 return response
99
100 def bucket_create(self, bucket):
101 self.check_bucket_name(bucket)
102 request = self.create_request("BUCKET_CREATE", bucket = bucket)
103 response = self.send_request(request)
104 return response
105
106 def bucket_delete(self, bucket):
107 request = self.create_request("BUCKET_DELETE", bucket = bucket)
108 response = self.send_request(request)
109 return response
110
111 def object_put(self, filename, bucket, object):
112 if not os.path.isfile(filename):
113 raise ParameterError("%s is not a regular file" % filename)
114 try:
115 file = open(filename, "rb")
116 size = os.stat(filename)[ST_SIZE]
117 except IOError, e:
118 raise ParameterError("%s: %s" % (filename, e.strerror))
119 headers = SortedDict()
120 headers["content-length"] = size
121 if self.config.acl_public:
122 headers["x-amz-acl"] = "public-read"
123 request = self.create_request("OBJECT_PUT", bucket = bucket, object = object, headers = headers)
124 response = self.send_file(request, file)
125 response["size"] = size
126 return response
127
128 def object_get_file(self, bucket, object, filename):
129 try:
130 stream = open(filename, "wb")
131 except IOError, e:
132 raise ParameterError("%s: %s" % (filename, e.strerror))
133 return self.object_get_stream(bucket, object, stream)
134
135 def object_get_stream(self, bucket, object, stream):
136 request = self.create_request("OBJECT_GET", bucket = bucket, object = object)
137 response = self.recv_file(request, stream)
138 return response
139
140 def object_delete(self, bucket, object):
141 request = self.create_request("OBJECT_DELETE", bucket = bucket, object = object)
142 response = self.send_request(request)
143 return response
144
145 def object_put_uri(self, filename, uri):
146 if uri.type != "s3":
147 raise ValueError("Expected URI type 's3', got '%s'" % uri.type)
148 return self.object_put(filename, uri.bucket(), uri.object())
149
150 def object_get_uri(self, uri, filename):
151 if uri.type != "s3":
152 raise ValueError("Expected URI type 's3', got '%s'" % uri.type)
153 if filename == "-":
154 return self.object_get_stream(uri.bucket(), uri.object(), sys.stdout)
155 else:
156 return self.object_get_file(uri.bucket(), uri.object(), filename)
157
158 def object_delete_uri(self, uri):
159 if uri.type != "s3":
160 raise ValueError("Expected URI type 's3', got '%s'" % uri.type)
161 return self.object_delete(uri.bucket(), uri.object())
162
163 ## Low level methods
164 def create_request(self, operation, bucket = None, object = None, headers = None, **params):
165 resource = "/"
166 if bucket:
167 resource += str(bucket)
168 if object:
169 resource += "/"+str(object)
170
171 if not headers:
172 headers = SortedDict()
173
174 if headers.has_key("date"):
175 if not headers.has_key("x-amz-date"):
176 headers["x-amz-date"] = headers["date"]
177 del(headers["date"])
178
179 if not headers.has_key("x-amz-date"):
180 headers["x-amz-date"] = time.strftime("%a, %d %b %Y %H:%M:%S %z", time.gmtime(time.time()))
181
182 method_string = S3.http_methods.getkey(S3.operations[operation] & S3.http_methods["MASK"])
183 signature = self.sign_headers(method_string, resource, headers)
184 headers["Authorization"] = "AWS "+self.config.access_key+":"+signature
185 param_str = ""
186 for param in params:
187 if params[param] not in (None, ""):
188 param_str += "&%s=%s" % (param, params[param])
189 if param_str != "":
190 resource += "?" + param_str[1:]
191 debug("CreateRequest: resource=" + resource)
192 return (method_string, resource, headers)
193
194 def send_request(self, request):
195 method_string, resource, headers = request
196 info("Processing request, please wait...")
197 conn = httplib.HTTPConnection(self.config.host)
198 conn.request(method_string, resource, {}, headers)
199 response = {}
200 http_response = conn.getresponse()
201 response["status"] = http_response.status
202 response["reason"] = http_response.reason
203 response["headers"] = convertTupleListToDict(http_response.getheaders())
204 response["data"] = http_response.read()
205 conn.close()
206 if response["status"] < 200 or response["status"] > 299:
207 raise S3Error(response)
208 return response
209
210 def send_file(self, request, file):
211 method_string, resource, headers = request
212 info("Sending file '%s', please wait..." % file.name)
213 conn = httplib.HTTPConnection(self.config.host)
214 conn.connect()
215 conn.putrequest(method_string, resource)
216 for header in headers.keys():
217 conn.putheader(header, str(headers[header]))
218 conn.endheaders()
219 size_left = size_total = headers.get("content-length")
220 while (size_left > 0):
221 debug("SendFile: Reading up to %d bytes from '%s'" % (self.config.send_chunk, file.name))
222 data = file.read(self.config.send_chunk)
223 debug("SendFile: Sending %d bytes to the server" % len(data))
224 conn.send(data)
225 size_left -= len(data)
226 info("Sent %d bytes (%d %% of %d)" % (
227 (size_total - size_left),
228 (size_total - size_left) * 100 / size_total,
229 size_total))
230 response = {}
231 http_response = conn.getresponse()
232 response["status"] = http_response.status
233 response["reason"] = http_response.reason
234 response["headers"] = convertTupleListToDict(http_response.getheaders())
235 response["data"] = http_response.read()
236 conn.close()
237 if response["status"] < 200 or response["status"] > 299:
238 raise S3Error(response)
239 return response
240
241 def recv_file(self, request, stream):
242 method_string, resource, headers = request
243 info("Receiving file '%s', please wait..." % stream.name)
244 conn = httplib.HTTPConnection(self.config.host)
245 conn.connect()
246 conn.putrequest(method_string, resource)
247 for header in headers.keys():
248 conn.putheader(header, str(headers[header]))
249 conn.endheaders()
250 response = {}
251 http_response = conn.getresponse()
252 response["status"] = http_response.status
253 response["reason"] = http_response.reason
254 response["headers"] = convertTupleListToDict(http_response.getheaders())
255 if response["status"] < 200 or response["status"] > 299:
256 raise S3Error(response)
257
258 md5_hash = md5.new()
259 size_left = size_total = int(response["headers"]["content-length"])
260 size_recvd = 0
261 while (size_recvd < size_total):
262 this_chunk = size_left > self.config.recv_chunk and self.config.recv_chunk or size_left
263 debug("ReceiveFile: Receiving up to %d bytes from the server" % this_chunk)
264 data = http_response.read(this_chunk)
265 debug("ReceiveFile: Writing %d bytes to file '%s'" % (len(data), stream.name))
266 stream.write(data)
267 md5_hash.update(data)
268 size_recvd += len(data)
269 info("Received %d bytes (%d %% of %d)" % (
270 size_recvd,
271 size_recvd * 100 / size_total,
272 size_total))
273 conn.close()
274 response["md5"] = md5_hash.hexdigest()
275 response["md5match"] = response["headers"]["etag"].find(response["md5"]) >= 0
276 response["size"] = size_recvd
277 if response["size"] != long(response["headers"]["content-length"]):
278 warning("Reported size (%s) does not match received size (%s)" % (
279 response["headers"]["content-length"], response["size"]))
280 debug("ReceiveFile: Computed MD5 = %s" % response["md5"])
281 if not response["md5match"]:
282 warning("MD5 signatures do not match: computed=%s, received=%s" % (
283 response["md5"], response["headers"]["etag"]))
284 return response
285
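    ## Editorial note: sign_headers() builds the canonical string-to-sign
    ## of the AWS REST authentication scheme - HTTP verb, Content-MD5,
    ## Content-Type, Date, the canonicalized x-amz-* headers and the
    ## resource path - and signs it with HMAC-SHA1 under the Secret Key.
    ## SortedDict conveniently yields the x-amz-* headers in sorted
    ## order, as the scheme requires.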
286 def sign_headers(self, method, resource, headers):
287 h = method+"\n"
288 h += headers.get("content-md5", "")+"\n"
289 h += headers.get("content-type", "")+"\n"
290 h += headers.get("date", "")+"\n"
291 for header in headers.keys():
292 if header.startswith("x-amz-"):
293 h += header+":"+str(headers[header])+"\n"
294 h += resource
295 debug("SignHeaders: " + repr(h))
296 return base64.encodestring(hmac.new(self.config.secret_key, h, sha).digest()).strip()
297
298 def check_bucket_name(self, bucket):
299 if re.compile("[^A-Za-z0-9\._-]").search(bucket):
300 raise ParameterError("Bucket name '%s' contains unallowed characters" % bucket)
301 if len(bucket) < 3:
302 raise ParameterError("Bucket name '%s' is too short (min 3 characters)" % bucket)
303 if len(bucket) > 255:
304 raise ParameterError("Bucket name '%s' is too long (max 255 characters)" % bucket)
305 return True
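
## Editorial usage sketch (an illustration only; bucket and file names
## are made up, and S3Uri comes from the companion S3Uri module):
##
##    from S3Uri import S3Uri
##    s3 = S3(Config())
##    s3.object_put_uri("photo.jpg", S3Uri("s3://my-bucket/photo.jpg"))
##    s3.object_get_uri(S3Uri("s3://my-bucket/photo.jpg"), "photo-copy.jpg")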
306
0 ## Amazon S3 manager
1 ## Author: Michal Ludvig <michal@logix.cz>
2 ## http://www.logix.cz/michal
3 ## License: GPL Version 2
4
5 import re
6 import sys
7 from BidirMap import BidirMap
8
9 class S3Uri(object):
10 type = None
11 _subclasses = None
12
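    ## Editorial note: __new__ dispatches on the URI scheme. It collects
    ## all S3Uri subclasses defined in this module and returns the first
    ## instance whose __init__ accepts the given string, so "s3://..."
    ## yields an S3UriS3, "s3fs://..." an S3UriS3FS, plain paths an
    ## S3UriFile, and anything unparseable raises ValueError.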
13 def __new__(self, string):
14 if not self._subclasses:
15 ## Generate a list of all subclasses of S3Uri
16 self._subclasses = []
17 dict = sys.modules[__name__].__dict__
18 for something in dict:
19 if type(dict[something]) is not type(self):
20 continue
21 if issubclass(dict[something], self) and dict[something] != self:
22 self._subclasses.append(dict[something])
23 for subclass in self._subclasses:
24 try:
25 instance = object.__new__(subclass)
26 instance.__init__(string)
27 return instance
28 except ValueError, e:
29 continue
30 raise ValueError("%s: not a recognized URI" % string)
31
32 def __str__(self):
33 return self.uri()
34
35 def public_url(self):
36 raise ValueError("This S3 URI does not have Anonymous URL representation")
37
38 class S3UriS3(S3Uri):
39 type = "s3"
40 _re = re.compile("^s3://([^/]+)/?(.*)", re.IGNORECASE)
41 def __init__(self, string):
42 match = self._re.match(string)
43 if not match:
44 raise ValueError("%s: not a S3 URI" % string)
45 groups = match.groups()
46 self._bucket = groups[0]
47 self._object = groups[1]
48
49 def bucket(self):
50 return self._bucket
51
52 def object(self):
53 return self._object
54
55 def has_bucket(self):
56 return bool(self._bucket)
57
58 def has_object(self):
59 return bool(self._object)
60
61 def uri(self):
62 return "/".join(["s3:/", self._bucket, self._object])
63
64 def public_url(self):
65 return "http://s3.amazonaws.com/%s/%s" % (self._bucket, self._object)
66
67 @staticmethod
68 def compose_uri(bucket, object = ""):
69 return "s3://%s/%s" % (bucket, object)
70
71 class S3UriS3FS(S3Uri):
72 type = "s3fs"
73 _re = re.compile("^s3fs://([^/]*)/?(.*)", re.IGNORECASE)
74 def __init__(self, string):
75 match = self._re.match(string)
76 if not match:
77 raise ValueError("%s: not a S3fs URI" % string)
78 groups = match.groups()
79 self._fsname = groups[0]
80 self._path = groups[1].split("/")
81
82 def fsname(self):
83 return self._fsname
84
85 def path(self):
86 return "/".join(self._path)
87
88 def uri(self):
89 return "/".join(["s3fs:/", self._fsname, self.path()])
90
91 class S3UriFile(S3Uri):
92 type = "file"
93 _re = re.compile("^(\w+://)?(.*)")
94 def __init__(self, string):
95 match = self._re.match(string)
96 groups = match.groups()
97 if groups[0] not in (None, "file://"):
98 raise ValueError("%s: not a file:// URI" % string)
99 self._path = groups[1].split("/")
100
101 def path(self):
102 return "/".join(self._path)
103
104 def uri(self):
105 return "/".join(["file:/", self.path()])
106
107 if __name__ == "__main__":
108 uri = S3Uri("s3://bucket/object")
109 print "type() =", type(uri)
110 print "uri =", uri
111 print "uri.type=", uri.type
112 print "bucket =", uri.bucket()
113 print "object =", uri.object()
114 print
115
116 uri = S3Uri("s3://bucket")
117 print "type() =", type(uri)
118 print "uri =", uri
119 print "uri.type=", uri.type
120 print "bucket =", uri.bucket()
121 print
122
123 uri = S3Uri("s3fs://filesystem1/path/to/remote/file.txt")
124 print "type() =", type(uri)
125 print "uri =", uri
126 print "uri.type=", uri.type
127 print "path =", uri.path()
128 print
129
130 uri = S3Uri("/path/to/local/file.txt")
131 print "type() =", type(uri)
132 print "uri =", uri
133 print "uri.type=", uri.type
134 print "path =", uri.path()
135 print
0 ## Amazon S3 manager
1 ## Author: Michal Ludvig <michal@logix.cz>
2 ## http://www.logix.cz/michal
3 ## License: GPL Version 2
4
5 class SortedDictIterator(object):
6 def __init__(self, dict):
7 self.dict = dict
8 self.keys = dict.keys()
9 self.index = 0
10 self.length = len(self.keys)
11
12 def next(self):
13 if self.length <= self.index:
14 raise StopIteration
15
16 retval = self.keys[self.index]
17 self.index += 1
18 return retval
19
20
21 class SortedDict(dict):
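    ## Editorial note: keys are lower-cased on insert, values are
    ## strip()ed when possible, and keys() returns the keys sorted.
    ## The HTTP-header handling in S3.py relies on this, e.g. for the
    ## sorted x-amz-* headers needed when signing requests.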
22 def __setitem__(self, name, value):
23 try:
24 value = value.strip()
25 except AttributeError:   ## non-string values have no strip()
26 pass
27 dict.__setitem__(self, name.lower(), value)
28
29 def __iter__(self):
30 return SortedDictIterator(self)
31
32
33 def keys(self):
34 keys = dict.keys(self)
35 keys.sort()
36 return keys
37
38 def popitem(self):
39 keys = self.keys()
40 if len(keys) < 1:
41 raise KeyError("popitem(): dictionary is empty")
42 retval = (keys[0], dict.__getitem__(self, keys[0]))
43 dict.__delitem__(self, keys[0])
44 return retval
45
46
0 ## Amazon S3 manager
1 ## Author: Michal Ludvig <michal@logix.cz>
2 ## http://www.logix.cz/michal
3 ## License: GPL Version 2
4
5 import time
6 import re
7 import elementtree.ElementTree as ET
8
9 def parseNodes(nodes, xmlns = ""):
10     retval = []
11     if xmlns != "":
12         ## Take regexp compilation out of the node loop
13         r = re.compile(xmlns)
14         fixup = lambda string : r.sub("", string)
15     else:
16         ## Do-nothing function
17         fixup = lambda string : string
18     for node in nodes:
19         retval_item = {}
20         for child in node.getchildren():
21             name = fixup(child.tag)
22             retval_item[name] = node.findtext(".//%s" % child.tag)
23         retval.append(retval_item)
24     return retval
27
28 def getNameSpace(element):
29 if not element.tag.startswith("{"):
30 return ""
31 return re.compile("^(\{[^}]+\})").match(element.tag).groups()[0]
32
33 def getListFromXml(xml, node):
34 tree = ET.fromstring(xml)
35 xmlns = getNameSpace(tree)
36 nodes = tree.findall('.//%s%s' % (xmlns, node))
37 return parseNodes(nodes, xmlns)
38
39 def dateS3toPython(date):
40 date = re.compile("\.\d\d\dZ").sub(".000Z", date)
41 return time.strptime(date, "%Y-%m-%dT%H:%M:%S.000Z")
42
43 def dateS3toUnix(date):
44 ## FIXME: This should be timezone-aware.
45 ## Currently the argument to strptime() is GMT but mktime()
46 ## treats it as "localtime". Anyway...
47 return time.mktime(dateS3toPython(date))
48
49 def formatSize(size, human_readable = False):
50 size = int(size)
51 if human_readable:
52 coeffs = ['k', 'M', 'G', 'T']
53 coeff = ""
54 while size > 2048:
55 size /= 1024
56 coeff = coeffs.pop(0)
57 return (size, coeff)
58 else:
59 return (size, "")
60
61 def formatDateTime(s3timestamp):
62 return time.strftime("%Y-%m-%d %H:%M", dateS3toPython(s3timestamp))
63
64 def convertTupleListToDict(list):
65 retval = {}
66 for tuple in list:
67 retval[tuple[0]] = tuple[1]
68 return retval
69
(New empty file)
s3cmd
0 #!/usr/bin/env python
1
2 ## Amazon S3 manager
3 ## Author: Michal Ludvig <michal@logix.cz>
4 ## http://www.logix.cz/michal
5 ## License: GPL Version 2
6
7 import sys
8 import logging
9 import time
import os, re    ## used below; previously reached only via the star imports
10
11 from copy import copy
12 from optparse import OptionParser, Option, OptionValueError, IndentedHelpFormatter
13 from logging import debug, info, warning, error
14 import elementtree.ElementTree as ET
15
16 ## Our modules
17 from S3 import PkgInfo
18 from S3.S3 import *
19 from S3.Config import Config
20 from S3.S3Uri import *
21
22
23 def output(message):
24 print message
25
26 def cmd_du(args):
27 s3 = S3(Config())
28 if len(args) > 0:
29 uri = S3Uri(args[0])
30 if uri.type == "s3" and uri.has_bucket():
31 subcmd_bucket_usage(s3, uri)
32 return
33 subcmd_bucket_usage_all(s3)
34
35 def subcmd_bucket_usage_all(s3):
36 response = s3.list_all_buckets()
37
38 buckets_size = 0
39 for bucket in response["list"]:
40 size = subcmd_bucket_usage(s3, S3Uri("s3://" + bucket["Name"]))
41 if size != None:
42 buckets_size += size
43 total_size, size_coeff = formatSize(buckets_size, Config().human_readable_sizes)
44 total_size_str = str(total_size) + size_coeff
45 output("".rjust(8, "-"))
46 output("%s Total" % (total_size_str.ljust(8)))
47
48 def subcmd_bucket_usage(s3, uri):
49 bucket = uri.bucket()
50 object = uri.object()
51
52 if object.endswith('*'):
53 object = object[:-1]
54 try:
55 response = s3.bucket_list(bucket, prefix = object)
56 except S3Error, e:
57 if S3.codes.has_key(e.info["Code"]):
58 error(S3.codes[e.info["Code"]] % bucket)
59 return
60 else:
61 raise
62 bucket_size = 0
63 for object in response["list"]:
64 size, size_coeff = formatSize(object["Size"], False)
65 bucket_size += size
66 total_size, size_coeff = formatSize(bucket_size, Config().human_readable_sizes)
67 total_size_str = str(total_size) + size_coeff
68 output("%s %s" % (total_size_str.ljust(8), uri))
69 return bucket_size
70
71 def cmd_ls(args):
72 s3 = S3(Config())
73 if len(args) > 0:
74 uri = S3Uri(args[0])
75 if uri.type == "s3" and uri.has_bucket():
76 subcmd_bucket_list(s3, uri)
77 return
78 subcmd_buckets_list_all(s3)
79
80 def cmd_buckets_list_all_all(args):
81 s3 = S3(Config())
82
83 response = s3.list_all_buckets()
84
85 for bucket in response["list"]:
86 subcmd_bucket_list(s3, S3Uri("s3://" + bucket["Name"]))
87 output("")
88
89
90 def subcmd_buckets_list_all(s3):
91 response = s3.list_all_buckets()
92 for bucket in response["list"]:
93 output("%s s3://%s" % (
94 formatDateTime(bucket["CreationDate"]),
95 bucket["Name"],
96 ))
97
98 def subcmd_bucket_list(s3, uri):
99 bucket = uri.bucket()
100 object = uri.object()
101
102 output("Bucket '%s':" % bucket)
103 if object.endswith('*'):
104 object = object[:-1]
105 try:
106 response = s3.bucket_list(bucket, prefix = object)
107 except S3Error, e:
108 if S3.codes.has_key(e.info["Code"]):
109 error(S3.codes[e.info["Code"]] % bucket)
110 return
111 else:
112 raise
113 for object in response["list"]:
114 size, size_coeff = formatSize(object["Size"], Config().human_readable_sizes)
115 output("%s %s%s %s" % (
116 formatDateTime(object["LastModified"]),
117 str(size).rjust(8), size_coeff.ljust(1),
118 uri.compose_uri(bucket, object["Key"]),
119 ))
120
121 def cmd_bucket_create(args):
122 uri = S3Uri(args[0])
123 if not uri.type == "s3" or not uri.has_bucket() or uri.has_object():
124 raise ParameterError("Expecting S3 URI with just the bucket name set instead of '%s'" % args[0])
125
126 try:
127 s3 = S3(Config())
128 response = s3.bucket_create(uri.bucket())
129 except S3Error, e:
130 if S3.codes.has_key(e.info["Code"]):
131 error(S3.codes[e.info["Code"]] % uri.bucket())
132 return
133 else:
134 raise
135 output("Bucket '%s' created" % uri.bucket())
136
137 def cmd_bucket_delete(args):
138 uri = S3Uri(args[0])
139 if not uri.type == "s3" or not uri.has_bucket() or uri.has_object():
140 raise ParameterError("Expecting S3 URI with just the bucket name set instead of '%s'" % args[0])
141 try:
142 s3 = S3(Config())
143 response = s3.bucket_delete(uri.bucket())
144 except S3Error, e:
145 if S3.codes.has_key(e.info["Code"]):
146 error(S3.codes[e.info["Code"]] % uri.bucket())
147 return
148 else:
149 raise
150 output("Bucket '%s' removed" % uri.bucket())
151
152 def cmd_object_put(args):
153 s3 = S3(Config())
154
155 uri_arg = args.pop()
156 files = args[:]
157
158 uri = S3Uri(uri_arg)
159 if uri.type != "s3":
160 raise ParameterError("Expecting S3 URI instead of '%s'" % uri_arg)
161
162 if len(files) > 1 and uri.object() != "" and not Config().force:
163 error("When uploading multiple files the last argument must")
164 error("be a S3 URI specifying just the bucket name")
165 error("WITHOUT object name!")
166 error("Alternatively use --force argument and the specified")
167 error("object name will be prefixed to all stored filenames.")
168 sys.exit(1)
169
170 for file in files:
171 uri_arg_final = str(uri)
172 if len(files) > 1 or uri.object() == "":
173 uri_arg_final += os.path.basename(file)
174
175 uri_final = S3Uri(uri_arg_final)
176 response = s3.object_put_uri(file, uri_final)
177 output("File '%s' stored as %s (%d bytes)" %
178 (file, uri_final, response["size"]))
179 if Config().acl_public:
180 output("Public URL of the object is: %s" %
181 (uri_final.public_url()))
182
183 def cmd_object_get(args):
184 s3 = S3(Config())
185
186 uri_arg = args.pop(0)
187 uri = S3Uri(uri_arg)
188 if uri.type != "s3" or not uri.has_object():
189 raise ParameterError("Expecting S3 URI instead of '%s'" % uri_arg)
190
191 destination = len(args) > 0 and args.pop(0) or uri.object()
192 if os.path.isdir(destination):
193 destination += ("/" + uri.object())
194 if not Config().force and os.path.exists(destination):
195 raise ParameterError("File %s already exists. Use --force to overwrite it" % destination)
196 response = s3.object_get_uri(uri, destination)
197 if destination != "-":
198 output("Object %s saved as '%s' (%d bytes)" %
199 (uri, destination, response["size"]))
200
201 def cmd_object_del(args):
202 s3 = S3(Config())
203
204 uri_arg = args.pop(0)
205 uri = S3Uri(uri_arg)
206 if uri.type != "s3" or not uri.has_object():
207 raise ParameterError("Expecting S3 URI instead of '%s'" % uri_arg)
208
209 response = s3.object_delete_uri(uri)
210 output("Object %s deleted" % uri)
211
212 def run_configure(config_file):
213 cfg = Config()
214 options = [
215 ("access_key", "Access Key", "Access key and Secret key are your identifiers for Amazon S3"),
216 ("secret_key", "Secret Key"),
217 ]
218 try:
219 while 1:
220 output("\nEnter new values or accept defaults in brackets with Enter.")
221 output("Refer to user manual for detailed description of all options.\n")
222 for option in options:
223 prompt = option[1]
224 try:
225 val = getattr(cfg, option[0])
226 if val not in (None, ""):
227 prompt += " [%s]" % val
228 except AttributeError:
229 pass
230
231 if len(option) >= 3:
232 output("%s" % option[2])
233
234 val = raw_input(prompt + ": ")
235 if val != "":
236 setattr(cfg, option[0], val)
237 output("\nNew settings:")
238 for option in options:
239 output(" %s: %s" % (option[1], getattr(cfg, option[0])))
240 val = raw_input("\nTest access with supplied credentials? [Y/n] ")
241 if val.lower().startswith("y") or val == "":
242 try:
243 output("Please wait...")
244 S3(Config()).bucket_list("", "")
245 output("\nSuccess. Your access key and secret key worked fine :-)")
246 except S3Error, e:
247 error("Test failed: %s" % (e))
248 val = raw_input("\nRetry configuration? [Y/n] ")
249 if val.lower().startswith("y") or val == "":
250 continue
251
252 val = raw_input("\nSave settings? [y/N] ")
253 if val.lower().startswith("y"):
254 break
255 val = raw_input("Retry configuration? [Y/n] ")
256 if val.lower().startswith("n"):
257 raise EOFError()
258 f = open(config_file, "w")
259 cfg.dump_config(f)
260 f.close()
261 output("Configuration saved to '%s'" % config_file)
262
263 except (EOFError, KeyboardInterrupt):
264 output("\nConfiguration aborted. Changes were NOT saved.")
265 return
266
267 except IOError, e:
268 error("Writing config file failed: %s: %s" % (config_file, e.strerror))
269 sys.exit(1)
270
271 commands = {}
272 commands_list = [
273 {"cmd":"mb", "label":"Make bucket", "param":"s3://BUCKET", "func":cmd_bucket_create, "argc":1},
274 {"cmd":"rb", "label":"Remove bucket", "param":"s3://BUCKET", "func":cmd_bucket_delete, "argc":1},
275 {"cmd":"ls", "label":"List objects or buckets", "param":"[s3://BUCKET[/PREFIX]]", "func":cmd_ls, "argc":0},
276 {"cmd":"du", "label":"Disk usage by buckets", "param":"[s3://BUCKET[/PREFIX]]", "func":cmd_du, "argc":0},
277 {"cmd":"la", "label":"List all object in all buckets", "param":"", "func":cmd_buckets_list_all_all, "argc":0},
278 {"cmd":"put", "label":"Put file into bucket", "param":"FILE [FILE...] s3://BUCKET[/PREFIX]", "func":cmd_object_put, "argc":2},
279 {"cmd":"get", "label":"Get file from bucket", "param":"s3://BUCKET/OBJECT LOCAL_FILE", "func":cmd_object_get, "argc":1},
280 {"cmd":"del", "label":"Delete file from bucket", "param":"s3://BUCKET/OBJECT", "func":cmd_object_del, "argc":1},
281 ]
282
283 def format_commands(progname):
284 help = "Commands:\n"
285 for cmd in commands_list:
286 help += " %s\n %s %s %s\n" % (cmd["label"], progname, cmd["cmd"], cmd["param"])
287 return help
288
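## Editorial note: OptionMimeType extends optparse with a custom
## "mimetype" option type, so that values passed to --mime-type are
## validated against a type/subtype pattern (e.g. "text/plain") at
## parse time instead of failing later.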
289 class OptionMimeType(Option):
290 def check_mimetype(option, opt, value):
291 if re.compile("^[a-z0-9]+/[a-z0-9+\.-]+$", re.IGNORECASE).match(value):
292 return value
293 raise OptionValueError("option %s: invalid MIME-Type format: %r" % (opt, value))
294
295 TYPES = Option.TYPES + ("mimetype",)
296 TYPE_CHECKER = copy(Option.TYPE_CHECKER)
297 TYPE_CHECKER["mimetype"] = check_mimetype
298
299 class MyHelpFormatter(IndentedHelpFormatter):
300 def format_epilog(self, epilog):
301 if epilog:
302 return "\n" + epilog + "\n"
303 else:
304 return ""
305
306 if __name__ == '__main__':
307 if float("%d.%d" %(sys.version_info[0], sys.version_info[1])) < 2.4:
308 sys.stderr.write("ERROR: Python 2.4 or higher required, sorry.\n")
309 sys.exit(1)
310
311 ## Populate "commands" from "commands_list"
312 for cmd in commands_list:
313 if cmd.has_key("cmd"):
314 commands[cmd["cmd"]] = cmd
315
316 default_verbosity = Config().verbosity
317 optparser = OptionParser(option_class=OptionMimeType, formatter=MyHelpFormatter())
318 #optparser.disable_interspersed_args()
319
320 optparser.set_defaults(config=os.getenv("HOME")+"/.s3cfg")
321 optparser.set_defaults(verbosity = default_verbosity)
322
323 optparser.add_option( "--configure", dest="run_configure", action="store_true", help="Invoke interactive (re)configuration tool.")
324 optparser.add_option("-c", "--config", dest="config", metavar="FILE", help="Config file name. Defaults to %default")
325 optparser.add_option( "--dump-config", dest="dump_config", action="store_true", help="Dump current configuration after parsing config files and command line options and exit.")
326
327 optparser.add_option("-f", "--force", dest="force", action="store_true", help="Force overwrite and other dangerous operations.")
328 optparser.add_option("-P", "--acl-public", dest="acl_public", action="store_true", help="Store objects with ACL allowing read by anyone.")
329
330 optparser.add_option("-m", "--mime-type", dest="default_mime_type", type="mimetype", metavar="MIME/TYPE", help="Default MIME-type to be set for objects stored.")
331 optparser.add_option("-M", "--guess-mime-type", dest="guess_mime_type", action="store_true", help="Guess MIME-type of files by their extension. Falls back to default MIME-Type as specified by --mime-type option")
332
333 optparser.add_option("-H", "--human-readable-sizes", dest="human_readable_sizes", action="store_true", help="Print sizes in human readable form.")
334 optparser.add_option("-v", "--verbose", dest="verbosity", action="store_const", const=logging.INFO, help="Enable verbose output.")
335 optparser.add_option("-d", "--debug", dest="verbosity", action="store_const", const=logging.DEBUG, help="Enable debug output.")
336 optparser.add_option( "--version", dest="show_version", action="store_true", help="Show s3cmd version (%s) and exit." % (PkgInfo.version))
337
338 optparser.set_usage(optparser.usage + " COMMAND [parameters]")
339 optparser.set_description('S3cmd is a tool for managing objects in '+
340 'Amazon S3 storage. It allows for making and removing '+
341 '"buckets" and uploading, downloading and removing '+
342 '"objects" from these buckets.')
343 optparser.epilog = format_commands(optparser.get_prog_name())
344 optparser.epilog += ("\nSee program homepage for more information at\n%s\n" % PkgInfo.url)
345
346 (options, args) = optparser.parse_args()
347
348 ## Some mucking with logging levels to enable
349 ## debugging/verbose output for config file parser on request
350 logging.basicConfig(level=options.verbosity, format='%(levelname)s: %(message)s')
351
352 if options.show_version:
353 output("s3cmd version %s" % PkgInfo.version)
354 sys.exit(0)
355
356 ## Now finally parse the config file
357 try:
358 cfg = Config(options.config)
359 except IOError, e:
360 if options.run_configure:
361 cfg = Config()
362 else:
363 error("%s: %s" % (options.config, e.strerror))
364 error("Configuration file not available.")
365 error("Consider using --configure parameter to create one.")
366 sys.exit(1)
367
368 ## And again some logging level adjustments
369 ## according to configfile and command line parameters
370 if options.verbosity != default_verbosity:
371 cfg.verbosity = options.verbosity
372 logging.root.setLevel(cfg.verbosity)
373
374 ## Update Config with other parameters
375 for option in cfg.option_list():
376 try:
377 if getattr(options, option) != None:
378 debug("Updating %s -> %s" % (option, getattr(options, option)))
379 cfg.update_option(option, getattr(options, option))
380 except AttributeError:
381 ## Some Config() options are not settable from command line
382 pass
383
384 if options.dump_config:
385 cfg.dump_config(sys.stdout)
386 sys.exit(0)
387
388 if options.run_configure:
389 run_configure(options.config)
390 sys.exit(0)
391
392 if len(args) < 1:
393 error("Missing command. Please run with --help for more information.")
394 sys.exit(1)
395
396 command = args.pop(0)
397 try:
398 debug("Command: " + commands[command]["cmd"])
399 ## We must do this lookup in extra step to
400 ## avoid catching all KeyError exceptions
401 ## from inner functions.
402 cmd_func = commands[command]["func"]
403 except KeyError, e:
404 error("Invalid command: %s" % e)
405 sys.exit(1)
406
407 if len(args) < commands[command]["argc"]:
408 error("Not enough paramters for command '%s'" % command)
409 sys.exit(1)
410
411 try:
412 cmd_func(args)
413 except S3Error, e:
414 error("S3 error: " + str(e))
415 except ParameterError, e:
416 error("Parameter problem: " + str(e))
417
418
Binary diff not shown
0 [sdist]
1 formats = gztar,zip
2
3 [install]
4 prefix = /usr
5
6 [bdist_rpm]
7 requires = python = 2.5
8 group = Productivity/Archiving
9 doc-files = README, INSTALL, NEWS
0 from distutils.core import setup
1 import os
2
3 import S3.PkgInfo
4
5 ## Remove 'MANIFEST' file to force
6 ## distutils to recreate it
7 try:
8 os.unlink("MANIFEST")
9 except OSError:
10 pass
11
12 ## Compress manpage. It behaves weird
13 ## with bdist_rpm when not compressed.
14 os.system("gzip -c s3cmd.1 > s3cmd.1.gz")
15
16 ## Main distutils info
17 setup(
18 ## Content description
19 name = S3.PkgInfo.package,
20 version = S3.PkgInfo.version,
21 packages = [ 'S3' ],
22 scripts = ['s3cmd'],
23 data_files = [
24 ("share/doc/packages/s3cmd", [ "README", "INSTALL", "NEWS" ]),
25 ("share/man/man1", [ "s3cmd.1.gz" ] ),
26 ],
27
28 ## Packaging details
29 author = "Michal Ludvig",
30 author_email = "michal@logix.cz",
31 url = S3.PkgInfo.url,
32 license = S3.PkgInfo.license,
33 description = S3.PkgInfo.short_description,
34 long_description = """
35 %s
36
37 Authors:
38 --------
39 Michal Ludvig <michal@logix.cz>
40 """ % (S3.PkgInfo.long_description)
41 )
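
## Editorial note: with this setup.py one would typically build release
## artifacts via, for example:
##    python setup.py sdist       ## -> gztar and zip, per setup.cfg [sdist]
##    python setup.py bdist_rpm   ## -> RPM, per setup.cfg [bdist_rpm]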