Codebase python-irodsclient / e0396db
New upstream release. Debian Janitor
78 changed file(s) with 8327 addition(s) and 575 deletion(s).
3030 .settings
3131 .project
3232 .pydevproject
33 .idea
3334
3435 # Vim
3536 *.s[a-w][a-z]
00 Changelog
11 =========
2
3 v1.1.5 (2022-09-21)
4 -------------------
5 - [#383] correct logical path normalization
6 - [#369] remove dynamic generation of message classes
7 - [#386][#389] only load timestamps when requested
8 - [#386] initial change to add create and modify times for metadata
9
10 v1.1.4 (2022-06-29)
11 -------------------
12 - [#372] eliminate SyntaxWarning ("is" operator being used with a literal) [Daniel Moore]
13 - [#358] eliminate fcntl import [Daniel Moore]
14 - [#368] ensure connection is finalized properly [Daniel Moore]
15 - [#362] escape special characters in PAM passwords [Daniel Moore]
16 - [#364] allow ADMIN_KW in all metadata operations [Daniel Moore]
17 - [#365] allow set() method via iRODSMetaCollection [Daniel Moore]
18 - [#3] update tests for 4.3.0 [Daniel Moore]
19 - [irods/irods#844] fix access_test [Daniel Moore]
20 - [#3][irods/irods#6124] adapt for ADMIN_KW in post-4.2.11 ModAVUMetadata api [Daniel Moore]
21 - [#3][irods/irods#5927] test_repave_replica now passes in iRODS >= 4.2.12 [Daniel Moore]
22 - [#3][irods/irods#6340] test_replica_number passes on 4.3.0 [Daniel Moore]
23
24 v1.1.3 (2022-04-07)
25 -------------------
26 - [#356] Removing call to partially unsupported getpeername() [Kaivan Kamali]
27
28 v1.1.2 (2022-03-15)
29 -------------------
30 - [#3][#345] Allow tests to pass and accommodate older Python [Daniel Moore]
31 - [#352] Fix the infinite loop issue when sock.recv() returns an empty buffer [Kaivan Kamali]
32 - [#345] Fix connection destructor issue [Kaivan Kamali]
33 - [#351] replace 704 api constant with AUTH_RESPONSE_AN [Daniel Moore]
34 - [#350] password input to AUTH_RESPONSE_AN should be string [Daniel Moore]
35 - [#315] skip cleanup() if session.pool is None [Daniel Moore]
36 - [#290] only anonymous user can log in without password [Daniel Moore]
37 - [#43][#328] reasonable indentation [Daniel Moore]
38 - [#328] allow user to change own password [Daniel Moore]
39 - [#343][#21] document testing and S3 setup [Daniel Moore]
40 - [#343] allow parallel (multi-1247) data transfer to/from S3 [Daniel Moore]
41 - [#332] capitalize -C,-R object type abbreviations [Daniel Moore]
42 - [#349] normalize() argument not necessarily absolute [Daniel Moore]
43 - [#323] remove trailing slashes in collection names [Daniel Moore]
44
45 v1.1.1 (2022-01-31)
46 -------------------
47 - [#338] clarify Python RE Plugin limitations [Daniel Moore]
48 - [#339] correction to README regarding RULE_ENGINE_ERROR [Daniel Moore]
49 - [#336] rule files can now be submitted from a memory file object [Daniel Moore]
50
51 v1.1.0 (2022-01-20)
52 -------------------
53 - [#334] add SECURE_XML to parser selection [Daniel Moore]
54 - [#279] allow long tokens via PamAuthRequest [Daniel Moore]
55 - [#190] session_cleanup is optional after rule execution. [Daniel Moore]
56 - [#288] Rule execute method can target an instance by name [Daniel Moore]
57 - [#314] allow null parameter on INPUT line of a rule file [Daniel Moore]
58 - [#318] correction for unicode name queries in Python 2 [Daniel Moore]
59 - [#170] fixes for Python2 / ElementTree compatibility [Daniel Moore]
60 - [#170] Fix exception handling QuasiXML parser [Sietse Snel]
61 - [#170] Parse current iRODS XML protocol [Chris Smeele]
62 - [#306] test setting/resetting inheritance [Daniel Moore]
63 - [#297] deal with CHECK_VERIFICATION_RESULTS for checksums [Daniel Moore]
64 - [irods/irods#5933] PRC ticket API now working with ADMIN_KW [Daniel Moore]
65 - [#292] Correct tickets section in README [Daniel Moore]
66 - [#290] allow skipping of password file in anonymous user case [Daniel Moore]
67 - [irods/irods#5954] interpret timestamps as UTC instead of local time [Daniel Moore]
68 - [#294] allow data object get() to work with tickets enabled [Daniel Moore]
69 - [#303] Expose additional iRODS collection information in the Collection object. [Ruben Garcia]
70 - [#143] Use unittest-xml-reporting package, move to extra [Michael R. Crusoe]
71 - [#299] Added GenQuery support for tickets. [Kory Draughn]
72 - [#285] adds tests for irods/irods#5548 and irods/irods#5848 [Daniel Moore]
73 - [#281] honor the irods_ssl_verify_server setting. [Daniel Moore]
74 - [#287] allow passing RError stack through CHKSUM library call [Daniel Moore]
75 - [#282] add NO_COMPUTE keyword [Daniel Moore]
76
77 v1.0.0 (2021-06-03)
78 -------------------
79 - [#274] calculate common vault dir for unicode query tests [Daniel Moore]
80 - [#269] better session cleanup [Daniel Moore]
81
82 v0.9.0 (2021-05-14)
83 -------------------
84 - [#269] cleanup() is now automatic with session destruct [Daniel Moore]
85 - [#235] multithreaded parallel transfer for PUT and GET [Daniel Moore]
86 - [#232] do not arbitrarily pick first replica for DEST RESC [Daniel Moore]
87 - [#233] add null handler for irods package root [Daniel Moore]
88 - [#246] implementation of checksum for data object manager [Daniel Moore]
89 - [#270] speed up tests [Daniel Moore]
90 - [#260] [irods/irods#5520] XML protocol will use BinBytesBuf in 4.2.9 [Daniel Moore]
91 - [#221] prepare test suite for CI [Daniel Moore]
92 - [#267] add RuleExec model for genquery [Daniel Moore]
93 - [#263] update documentation for connection_timeout [Terrell Russell]
94 - [#261] add temporary password support [Paul van Schayck]
95 - [#257] better SSL examples [Terrell Russell]
96 - [#255] make results of atomic metadata operations visible [Daniel Moore]
97 - [#250] add exception for SYS_INVALID_INPUT_PARAM [Daniel Moore]
98
99 v0.8.6 (2021-01-22)
100 -------------------
101 - [#244] added capability to add/remove atomic metadata [Daniel Moore]
102 - [#226] Document creation of users [Ruben Garcia]
103 - [#230] Add force option to data_object_manager create [Ruben Garcia]
104 - [#239] to keep the tests passing [Daniel Moore]
105 - [#239] add iRODSUser.info attribute [Pierre Gay]
106 - [#239] add iRODSUser.comment attribute [Pierre Gay]
107 - [#241] [irods/irods_capability_automated_ingest#136] fix redundant disconnect [Daniel Moore]
108 - [#227] [#228] enable ICAT entries for zones and foreign-zone users [Daniel Moore]
109
110 v0.8.5 (2020-11-10)
111 -------------------
112 - [#220] Use connection create time to determine stale connections [Kaivan Kamali]
113
114 v0.8.4 (2020-10-19)
115 -------------------
116 - [#221] fix tests which were failing in Py3.4 and 3.7 [Daniel Moore]
117 - [#220] Replace stale connections pulled from idle pools [Kaivan Kamali]
118 - [#3] tests failing on Python3 unicode defaults [Daniel Moore]
119 - [#214] store/load rules as utf-8 in files [Daniel Moore]
120 - [#211] set and report application name to server [Daniel Moore]
121 - [#156] skip ssh/pam login tests if user doesn't exist [Daniel Moore]
122 - [#209] pam/ssl/env auth tests imported from test harness [Daniel Moore]
123 - [#209] store hashed PAM pw [Daniel Moore]
124 - [#205] Disallow PAM plaintext passwords as strong default [Daniel Moore]
125 - [#156] fix the PAM authentication with env json file. [Patrice Linel]
126 - [#207] add raw-acl permissions getter [Daniel Moore]
127
128 v0.8.3 (2020-06-05)
129 -------------------
130 - [#3] remove order sensitivity in test_user_dn [Daniel Moore]
131 - [#5] clarify unlink specific replica example [Terrell Russell]
132 - [irods/irods#4796] add data object copy tests [Daniel Moore]
133 - [#5] Additional sections and examples in README [Daniel Moore]
134 - [#187] Allow query on metadata create and modify times [Daniel Moore]
135 - [#135] fix queries for multiple AVUs of same name [Daniel Moore]
136 - [#135] Allow multiple criteria based on column name [Daniel Moore]
137 - [#180] add the "in" genquery operator [Daniel Moore]
138 - [#183] fix key error when tables from order_by() not in query() [Daniel Moore]
139 - [#5] fix ssl example in README.rst [Terrell Russell]
140
141 v0.8.2 (2019-11-13)
142 -------------------
143 - [#8] Add PAM Authentication handling (still needs tests) [Mattia D'Antonio]
144 - [#5] Remove commented-out import [Alan King]
145 - [#5] Add .idea directory to .gitignore [Jonathan Landrum]
146 - [#150] Fix specific query argument labeling [Chris Klimowski]
147 - [#148] DataObjectManager.put() can return the new data_object [Jonathan Landrum]
148 - [#124] Convert strings going to irods to Unicode [Alan King]
149 - [#161] Allow dynamic I/O for rule from file [Mathijs Koymans]
150 - [#162] Include resc_hier in replica information [Brett Hartley]
151 - [#165] Fix CAT_STATEMENT_TABLE_FULL by auto closing queries [Chris Smeele]
152 - [#166] Test freeing statements in unfinished query [Daniel Moore]
153 - [#167] Add metadata for user and usergroup objects [Erwin van Wieringen]
154 - [#175] Add metadata property for instances of iRODSResource [Daniel Moore]
155 - [#163] add keywords to query objects [Daniel Moore]
2156
3157 v0.8.1 (2018-09-27)
4158 -------------------
0 ARG os_image
1 FROM ${os_image}
2 ARG log_output_dir=/tmp
3 ENV LOG_OUTPUT_DIR="$log_output_dir"
4 ARG py_N
5 ENV PY_N "$py_N"
6
7 RUN yum install -y epel-release
8 RUN yum install -y git nmap-ncat sudo
9 RUN yum install -y python${py_N} python${py_N}-pip
10 RUN useradd -md /home/user -s /bin/bash user
11 RUN echo "user ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
12 WORKDIR /home/user
13 COPY ./ ./repo/
14 RUN chown -R user repo/
15 USER user
16 RUN pip${py_N} install --user --upgrade pip==20.3.4 # - limit pip version for C7 system python2.7
17 RUN cd repo && python${py_N} -m pip install --user '.[tests]'
18 RUN python${py_N} repo/docker_build/iinit.py \
19 host irods-provider \
20 port 1247 \
21 user rods \
22 zone tempZone \
23 password rods
24 SHELL ["/bin/bash","-c"]
25 CMD echo "Waiting on iRODS server... " ; \
26 python${PY_N} repo/docker_build/recv_oneshot -h irods-provider -p 8888 -t 360 && \
27 sudo groupadd -o -g $(stat -c%g /irods_shared) irods && sudo usermod -aG irods user && \
28 newgrp irods < repo/run_python_tests.sh
0 ARG os_image
1 FROM ${os_image}
2 ARG log_output_dir=/tmp
3 ENV LOG_OUTPUT_DIR="$log_output_dir"
4 ARG py_N
5 ENV PY_N "$py_N"
6
7 RUN apt update
8 RUN apt install -y git netcat-openbsd sudo
9 RUN apt install -y python${py_N} python${py_N}-pip
10 RUN useradd -md /home/user -s /bin/bash user
11 RUN echo "user ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
12 WORKDIR /home/user
13 COPY ./ ./repo/
14 RUN chown -R user repo/
15 USER user
16 RUN pip${py_N} install --user --upgrade pip==20.3.4 # -- version specified for Ub16
17 RUN cd repo && python${py_N} -m pip install --user '.[tests]'
18 RUN python${py_N} repo/docker_build/iinit.py \
19 host irods-provider \
20 port 1247 \
21 user rods \
22 zone tempZone \
23 password rods
24
25 SHELL ["/bin/bash","-c"]
26
27 # -- At runtime: --
28 # 1. wait for provider to run.
29 # 2. give user group permissions to access shared irods directories
30 # 3. run python tests as the new group
31
32 CMD echo "Waiting on iRODS server... " ; \
33 python${PY_N} repo/docker_build/recv_oneshot -h irods-provider -p 8888 -t 360 && \
34 sudo groupadd -o -g $(stat -c%g /irods_shared) irods && sudo usermod -aG irods user && \
35 newgrp irods < repo/run_python_tests.sh
0 include AUTHORS CHANGELOG.rst LICENSE.txt README.rst irods/test/README.rst irods/test/unicode_sampler.xml
0 include AUTHORS CHANGELOG.rst LICENSE.txt README.rst irods/test/README.rst irods/test/unicode_sampler.xml irods/test/test-data/*.json
11 Python iRODS Client (PRC)
22 =========================
33
4 `iRODS <https://www.irods.org>`_ is an open source distributed data management system. This is a client API implemented in python.
4 `iRODS <https://www.irods.org>`_ is an open source distributed data management system. This is a client API implemented in Python.
55
66 Currently supported:
77
8 - Establish a connection to iRODS, authenticate
9 - Implement basic Gen Queries (select columns and filtering)
10 - Support more advanced Gen Queries with limits, offsets, and aggregations
8 - Python 2.7, 3.4 or newer
9 - Establish a connection to iRODS
10 - Authenticate via password, GSI, PAM
11 - iRODS connection over SSL
12 - Implement basic GenQueries (select columns and filtering)
13 - Support more advanced GenQueries with limits, offsets, and aggregations
1114 - Query the collections and data objects within a collection
1215 - Execute direct SQL queries
1316 - Execute iRODS rules
1417 - Support read, write, and seek operations for files
15 - PUT/GET data objects
18 - Parallel PUT/GET data objects
19 - Create collections
20 - Rename collections
21 - Delete collections
1622 - Create data objects
23 - Rename data objects
24 - Checksum data objects
1725 - Delete data objects
18 - Create collections
19 - Delete collections
20 - Rename data objects
21 - Rename collections
2226 - Register files and directories
2327 - Query metadata for collections and data objects
2428 - Add, edit, remove metadata
2529 - Replicate data objects to different resource servers
2630 - Connection pool management
27 - Implement gen query result sets as lazy queries
31 - Implement GenQuery result sets as lazy queries
2832 - Return empty result sets when CAT_NO_ROWS_FOUND is raised
2933 - Manage permissions
3034 - Manage users and groups
3135 - Manage resources
32 - GSI authentication
3336 - Unicode strings
3437 - Ticket based access
35 - iRODS connection over SSL
36 - Python 2.7, 3.4 or newer
3738
3839
3940 Installing
4041 ----------
4142
4243 PRC requires Python 2.7 or 3.4+.
43 To install with pip::
44 Canonically, to install with pip::
4445
4546 pip install python-irodsclient
4647
4748 or::
4849
4950 pip install git+https://github.com/irods/python-irodsclient.git[@branch|@commit|@tag]
50
5151
5252 Uninstalling
5353 ------------
5656
5757 pip uninstall python-irodsclient
5858
59
60 Establishing a connection
61 -------------------------
62
63 Using environment files in ``~/.irods/``:
59 Hazard: Outdated Python
60 --------------------------
61 With older versions of Python (as of this writing, the aforementioned 2.7 and 3.4), we
62 can take preparatory steps toward securing workable versions of pip and virtualenv by
63 using these commands::
64
65 $ pip install --upgrade --user pip'<21.0'
66 $ python -m pip install --user virtualenv
67
68 We are then ready to use whichever of the following commands the installation requires::
70
71 $ python -m virtualenv ...
72 $ python -m pip install ...
73
74
75 Establishing a (secure) connection
76 ----------------------------------
77
78 Using environment files (including any SSL settings) in ``~/.irods/``:
6479
6580 >>> import os
81 >>> import ssl
6682 >>> from irods.session import iRODSSession
6783 >>> try:
6884 ... env_file = os.environ['IRODS_ENVIRONMENT_FILE']
6985 ... except KeyError:
7086 ... env_file = os.path.expanduser('~/.irods/irods_environment.json')
7187 ...
72 >>> with iRODSSession(irods_env_file=env_file) as session:
73 ... pass
88 >>> ssl_context = ssl.create_default_context(purpose=ssl.Purpose.SERVER_AUTH, cafile=None, capath=None, cadata=None)
89 >>> ssl_settings = {'ssl_context': ssl_context}
90 >>> with iRODSSession(irods_env_file=env_file, **ssl_settings) as session:
91 ... # workload
7492 ...
7593 >>>
7694
7896
7997 >>> from irods.session import iRODSSession
8098 >>> with iRODSSession(host='localhost', port=1247, user='bob', password='1234', zone='tempZone') as session:
81 ... pass
99 ... # workload
82100 ...
83101 >>>
84102
87105 >>> from irods.session import iRODSSession
88106 >>> with iRODSSession(host='localhost', port=1247, user='rods', password='1234', zone='tempZone',
89107 client_user='bob', client_zone='possibly_another_zone') as session:
90 ... pass
108 ... # workload
91109 ...
92110 >>>
93111
94112 If no ``client_zone`` is provided, the ``zone`` parameter is used in its place.
113
114 A pure Python SSL session (without a local `env_file`) requires a few more things defined:
115
116 >>> import ssl
117 >>> from irods.session import iRODSSession
118 >>> ssl_context = ssl.create_default_context(purpose=ssl.Purpose.SERVER_AUTH, cafile='CERTNAME.crt', capath=None, cadata=None)
119 >>> ssl_settings = {'client_server_negotiation': 'request_server_negotiation',
120 ... 'client_server_policy': 'CS_NEG_REQUIRE',
121 ... 'encryption_algorithm': 'AES-256-CBC',
122 ... 'encryption_key_size': 32,
123 ... 'encryption_num_hash_rounds': 16,
124 ... 'encryption_salt_size': 8,
125 ... 'ssl_context': ssl_context}
126 >>>
127 >>> with iRODSSession(host='HOSTNAME_DEFINED_IN_CAFILE_ABOVE', port=1247, user='bob', password='1234', zone='tempZone', **ssl_settings) as session:
128 ... # workload
129 >>>
130
131
132 Maintaining a connection
133 ------------------------
134
135 The default library timeout for a connection to an iRODS Server is 120 seconds.
136
137 This can be overridden by changing the session `connection_timeout` immediately after creation of the session object:
138
139 >>> session.connection_timeout = 300
140
141 This will set the timeout to five minutes for any associated connections.
142
143 Session objects and cleanup
144 ---------------------------
145
146 When iRODSSession objects are kept as state in an application, spurious SYS_HEADER_READ_LEN_ERR errors
147 can sometimes be seen in the connected iRODS server's log file. This is frequently seen at program exit
148 because socket connections are terminated without having been closed out by the session object's
149 cleanup() method.
150
151 Starting with PRC Release 0.9.0, code has been included in the session object's __del__ method to call
152 cleanup(), properly closing out network connections. However, __del__ cannot be relied upon to run under all
153 circumstances (Python2 being more problematic), so an alternative may be to call session.cleanup() on
154 any session variable which might not be used again.
155
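For instance (an illustrative sketch; the connection parameters echo the password-login example above, and
the session name is hypothetical), a long-lived application might release a session's connections explicitly
as soon as it is done with them:

>>> from irods.session import iRODSSession
>>> work_session = iRODSSession(host='localhost', port=1247, user='bob', password='1234', zone='tempZone')
>>> # ... workload ...
>>> work_session.cleanup()  # close out network connections now, rather than relying on __del__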
156
157 Simple PUTs and GETs
158 --------------------
159
160 We can use the just-created session object to put files to (or get them from) iRODS.
161
162 >>> logical_path = "/{0.zone}/home/{0.username}/{1}".format(session,"myfile.dat")
163 >>> session.data_objects.put( "myfile.dat", logical_path)
164 >>> session.data_objects.get( logical_path, "/tmp/myfile.dat.copy" )
165
166 Note that local file paths may be relative, but iRODS data objects must always be referred to by
167 their absolute paths. This is in contrast to the ``iput`` and ``iget`` icommands, which keep
168 track of the current working collection (as modified by ``icd``) for the unix shell.
169
170
171 Parallel Transfer
172 -----------------
173
174 Starting with release 0.9.0, data object transfers using put() and get() will spawn a number
175 of threads in order to optimize performance for iRODS server versions 4.2.9+ and file sizes
176 larger than a default threshold value of 32 Megabytes.
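
No additional call is needed to engage this behavior; for instance (re-using the logical_path variable from the
"Simple PUTs and GETs" example above, with a hypothetical local file name), an ordinary put of a sufficiently
large file is transferred in parallel automatically:

>>> session.data_objects.put( "my_large_file.dat", logical_path )  # multiple transfer threads are spawned when the file exceeds the threshold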
95177
96178
97179 Working with collections
164246 56789
165247
166248
249 Specifying paths
250 ----------------
251
252 Path strings for collection and data objects are usually expected to be absolute in most contexts in the PRC. They
253 must also be normalized to a form including single slashes separating path elements and no slashes at the string's end.
254 If there is any doubt that a path string fulfills this requirement, the wrapper class :code:`irods.path.iRODSPath`
255 (a subclass of :code:`str`) may be used to normalize it::
256
257 if not session.collections.exists( iRODSPath( potentially_unnormalized_path )): #....
258
259 The wrapper serves also as a path joiner; thus::
260
261 iRODSPath( zone, "home", user )
262
263 may replace::
264
265 "/".join(["", zone, "home", user])
266
267 :code:`iRODSPath` is available beginning with PRC release :code:`v1.1.2`.
268
269
167270 Reading and writing files
168271 -------------------------
169272
180283 bar
181284
182285
286 Computing and Retrieving Checksums
287 ----------------------------------
288
289 Each data object may be associated with a checksum by calling chksum() on the object in question. Various
290 behaviors can be elicited by passing in combinations of keywords (for a description of which, please consult the
291 `header documentation <https://github.com/irods/irods/blob/4-2-stable/lib/api/include/dataObjChksum.h>`_ .)
292
293 As with most other iRODS APIs, it is straightforward to specify keywords by adding them to an option dictionary:
294
295 >>> data_object_1.chksum() # - returns the checksum if already in the catalog, otherwise computes and stores it
296 ... # (ie. default behavior with no keywords passed in.)
297 >>> from irods.manager.data_object_manager import Server_Checksum_Warning
298 >>> import irods.keywords as kw
299 >>> opts = { kw.VERIFY_CHKSUM_KW:'' }
300 >>> try:
301 ... data_object_2.chksum( **opts ) # - Uses verification option. (Does not auto-vivify a checksum field).
302 ... # or:
303 ... opts[ kw.NO_COMPUTE_KW ] = ''
304 ... data_object_2.chksum( **opts ) # - Uses both verification and no-compute options. (Like ichksum -K --no-compute)
305 ... except Server_Checksum_Warning:
306 ... print('some checksums are missing or wrong')
307
308 Additionally, if a freshly created irods.message.RErrorStack instance is given, information can be returned and read by
309 the client:
310
311 >>> from irods.message import RErrorStack; r_err_stk = RErrorStack()
312 >>> warn = None
313 >>> try: # Here, data_obj has one replica, not yet checksummed.
314 ... data_obj.chksum( r_error = r_err_stk , **{kw.VERIFY_CHKSUM_KW:''} )
315 ... except Server_Checksum_Warning as exc:
316 ... warn = exc
317 >>> print(r_err_stk)
318 [RError<message = u'WARNING: No checksum available for replica [0].', status = -862000 CAT_NO_CHECKSUM_FOR_REPLICA>]
319
320
183321 Working with metadata
184322 ---------------------
185323
324 To enumerate the AVU's on an object, we call items() on its metadata attribute. With no metadata attached, the result is an empty list:
325
326
327 >>> from irods.meta import iRODSMeta
186328 >>> obj = session.data_objects.get("/tempZone/home/rods/test1")
187329 >>> print(obj.metadata.items())
188330 []
189331
332
333 We then add some metadata.
334 Just as with the icommand equivalent "imeta add ...", we can add multiple AVU's with the same name field:
335
336
190337 >>> obj.metadata.add('key1', 'value1', 'units1')
191338 >>> obj.metadata.add('key1', 'value2')
192339 >>> obj.metadata.add('key2', 'value3')
340 >>> obj.metadata.add('key2', 'value4')
193341 >>> print(obj.metadata.items())
194 [<iRODSMeta (key1, value1, units1, 10014)>, <iRODSMeta (key2, value3, None, 10017)>,
195 <iRODSMeta (key1, value2, None, 10020)>]
196
197 >>> print(obj.metadata.get_all('key1'))
198 [<iRODSMeta (key1, value1, units1, 10014)>, <iRODSMeta (key1, value2, None, 10020)>]
342 [<iRODSMeta 13182 key1 value1 units1>, <iRODSMeta 13185 key2 value4 None>,
343 <iRODSMeta 13183 key1 value2 None>, <iRODSMeta 13184 key2 value3 None>]
344
345
346 We can also use Python's item indexing syntax to perform the equivalent of an "imeta set ...", e.g. overwriting
347 all AVU's with a name field of "key2" in a single update:
348
349
350 >>> new_meta = iRODSMeta('key2','value5','units2')
351 >>> obj.metadata[new_meta.name] = new_meta
352 >>> print(obj.metadata.items())
353 [<iRODSMeta 13182 key1 value1 units1>, <iRODSMeta 13183 key1 value2 None>,
354 <iRODSMeta 13186 key2 value5 units2>]
355
356
357 Now, with only one AVU on the object with a name of "key2", *get_one* is assured of not throwing an exception:
358
199359
200360 >>> print(obj.metadata.get_one('key2'))
201 <iRODSMeta (key2, value3, None, 10017)>
361 <iRODSMeta 13186 key2 value5 units2>
362
363
364 However, the same is not true of "key1":
365
366
367 >>> print(obj.metadata.get_one('key1'))
368 Traceback (most recent call last):
369 File "<stdin>", line 1, in <module>
370 File "/[...]/python-irodsclient/irods/meta.py", line 41, in get_one
371 raise KeyError
372 KeyError
373
374
375 Finally, to remove a specific AVU from an object:
376
202377
203378 >>> obj.metadata.remove('key1', 'value1', 'units1')
204379 >>> print(obj.metadata.items())
205 [<iRODSMeta (key2, value3, None, 10017)>, <iRODSMeta (key1, value2, None, 10020)>]
380 [<iRODSMeta 13186 key2 value5 units2>, <iRODSMeta 13183 key1 value2 None>]
381
382
383 Alternately, this form of the remove() method can also be useful:
384
385
386 >>> for avu in obj.metadata.items():
387 ... obj.metadata.remove(avu)
388 >>> print(obj.metadata.items())
389 []
390
391
392 If we intended on deleting the data object anyway, we could have just done this instead:
393
394
395 >>> obj.unlink(force=True)
396
397
398 But notice that the force option is important, since a data object in the trash may still have AVU's attached.
399
400 At the end of a long session of AVU add/manipulate/delete operations, one should make sure to delete all unused
401 AVU's. We can in fact use any *\*Meta* data model in the queries below, since unattached AVU's are not aware
402 of the (type of) catalog object they once annotated:
403
404
405 >>> from irods.models import (DataObjectMeta, ResourceMeta)
406 >>> len(list( session.query(ResourceMeta) ))
407 4
408 >>> from irods.test.helpers import remove_unused_metadata
409 >>> remove_unused_metadata(session)
410 >>> len(list( session.query(ResourceMeta) ))
411 0
412
413 When altering a fetched iRODSMeta, we must copy it first to avoid errors, because the fetched reference
414 is cached within the iRODS object it came from. A shallow copy is sufficient:
415
416 >>> meta = album.metadata.items()[0]
417 >>> meta.units
418 'quid'
419 >>> import copy; meta = copy.copy(meta); meta.units = 'pounds sterling'
420 >>> album.metadata[ meta.name ] = meta
421
422 Fortunately, as of PRC >= 1.1.4, we can simply do this instead:
423
424 >>> album.metadata.set( meta )
425
426 In versions of iRODS 4.2.12 and later, we can also do:
427
428 >>> album.metadata.set( meta, **{kw.ADMIN_KW: ''} )
429
430 or even:
431
432 >>> album.metadata(admin = True)[meta.name] = meta
433
434 In v1.1.5, the "timestamps" keyword is provided to enable the loading of create and modify timestamps
435 for every AVU returned from the server:
436
437 >>> avus = album.metadata(timestamps = True).items()
438 >>> avus[0].create_time
439 datetime.datetime(2022, 9, 19, 15, 26, 7)
440
441 Atomic operations on metadata
442 -----------------------------
443
444 With release 4.2.8 of iRODS, the atomic metadata API was introduced to allow a group of metadata add and remove
445 operations to be performed transactionally, within a single call to the server. This capability can be leveraged in
446 version 0.8.6 of the PRC.
447
448 So, for example, if 'obj' is a handle to an object in the iRODS catalog (whether a data object, collection, user or
449 storage resource), we can send an arbitrary number of AVUOperation instances to be executed together as one indivisible
450 operation on that object:
451
452 >>> from irods.meta import iRODSMeta, AVUOperation
453 >>> obj.metadata.apply_atomic_operations( AVUOperation(operation='remove', avu=iRODSMeta('a1','v1','these_units')),
454 ... AVUOperation(operation='add', avu=iRODSMeta('a2','v2','those_units')),
455 ... AVUOperation(operation='remove', avu=iRODSMeta('a3','v3')) # , ...
456 ... )
457
458 The list of operations will be applied in the order given, so that a "remove" followed by an "add" of the same AVU
459 is, in effect, a metadata "set" operation. Also note that a "remove" operation will be ignored if the AVU value given
460 does not exist on the target object at that point in the sequence of operations.
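
For instance (a small sketch reusing the placeholder attribute names from the example above), a transactional
"set" of an AVU can be expressed as a "remove" of the old value followed by an "add" of the new one:

>>> obj.metadata.apply_atomic_operations( AVUOperation(operation='remove', avu=iRODSMeta('a1','v1','these_units')),
...                                       AVUOperation(operation='add', avu=iRODSMeta('a1','v1_new','these_units')) )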
461
462 We can also source from a pre-built list of AVUOperations using Python's `f(*args_list)` syntax. For example, this
463 function uses the atomic metadata API to very quickly remove all AVUs from an object:
464
465 >>> def remove_all_avus( Object ):
466 ... avus_on_Object = Object.metadata.items()
467 ... Object.metadata.apply_atomic_operations( *[AVUOperation(operation='remove', avu=i) for i in avus_on_Object] )
468
469
470 Special Characters
471 ------------------
472
473 Of course, it is fine to put Unicode characters into your collection and data object names. However, certain
474 non-printable ASCII characters, and the backquote character as well, have historically presented problems -
475 especially for clients using iRODS's human-readable XML protocol. Consider this small, only slightly contrived,
476 application:
477 ::
478
479 from irods.test.helpers import make_session
480
481 def create_notes( session, obj_name, content = u'' ):
482 get_home_coll = lambda ses: "/{0.zone}/home/{0.username}".format(ses)
483 path = get_home_coll(session) + "/" + obj_name
484 with session.data_objects.open(path,"a") as f:
485 f.seek(0, 2) # SEEK_END
486 f.write(content.encode('utf8'))
487 return session.data_objects.get(path)
488
489 with make_session() as session:
490
491 # Example 1 : exception thrown when name has non-printable character
492 try:
493 create_notes( session, "lucky\033.dat", content = u'test' )
494 except:
495 pass
496
497 # Example 2 (Ref. issue: irods/irods #4132, fixed for 4.2.9 release of iRODS)
498 print(
499 create_notes( session, "Alice`s diary").name # note diff (' != `) in printed name
500 )
501
502
503 This creates two data objects, but with less than optimal success. The first example object
504 is created but receives no content because an exception is thrown trying to query its name after
505 creation. In the second example, for iRODS 4.2.8 and before, a deficiency in packStruct XML protocol causes
506 the backtick to be read back as an apostrophe, which could create problems manipulating or deleting the object later.
507
508 As of PRC v1.1.0, we can mitigate both problems by switching in the QUASI_XML parser for the default one:
509 ::
510
511 from irods.message import (XML_Parser_Type, ET)
512     ET( XML_Parser_Type.QUASI_XML, session.server_version )
513
514 Two dedicated environment variables may also be used to customize the Python client's XML parsing behavior via the
515 setting of global defaults during start-up.
516
517 For example, we can set the default parser to QUASI_XML, optimized for use with version 4.2.8 of the iRODS server,
518 in the following manner:
519 ::
520
521 Bash-Shell> export PYTHON_IRODSCLIENT_DEFAULT_XML=QUASI_XML PYTHON_IRODSCLIENT_QUASI_XML_SERVER_VERSION=4,2,8
522
523 Other alternatives for PYTHON_IRODSCLIENT_DEFAULT_XML are "STANDARD_XML" and "SECURE_XML". These two latter options
524 denote use of the xml.etree and defusedxml modules, respectively.
525
526 Only the choice of "QUASI_XML" is affected by the specification of a particular server version.
527
528 Finally, note that these global defaults, once set, may be overridden on a per-thread basis using
529 :code:`ET(parser_type, server_version)`. We can also revert the current thread's XML parser back to the
530 global default by calling :code:`ET(None)`.
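
For example (an illustrative sketch re-using the calls documented above), a per-thread override and subsequent
reversion might look like:

>>> from irods.message import (XML_Parser_Type, ET)
>>> ET( XML_Parser_Type.QUASI_XML, session.server_version )  # this thread now uses the QUASI_XML parser
>>> # ... XML-protocol interactions on this thread ...
>>> ET( None )                                               # revert this thread to the global default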
531
532
533 Rule Execution
534 --------------
535
536 A simple example of how to execute an iRODS rule from the Python client is as follows. Suppose we have a rule file
537 :code:`native1.r` which contains a rule in native iRODS Rule Language::
538
539 main() {
540 writeLine("*stream",
541 *X ++ " squared is " ++ str(double(*X)^2) )
542 }
543
544 INPUT *X="3", *stream="serverLog"
545 OUTPUT null
546
547 The following Python client code will run the rule and produce the appropriate output in the
548 iRODS server log::
549
550 r = irods.rule.Rule( session, rule_file = 'native1.r')
551 r.execute()
552
553 With release v1.1.1, not only can we target a specific rule engine instance by name (which is useful when
554 more than one is present), but we can also use a file-like object for the :code:`rule_file` parameter::
555
556 Rule( session, rule_file = io.StringIO(u'''mainRule() { anotherRule(*x); writeLine('stdout',*x) }\n'''
557 u'''anotherRule(*OUT) {*OUT='hello world!'}\n\n'''
558 u'''OUTPUT ruleExecOut\n'''),
559 instance_name = 'irods_rule_engine_plugin-irods_rule_language-instance' )
560
561 Incidentally, if we wanted the :code:`native1.r` rule code to print to stdout as well, we could set the
562 :code:`INPUT` parameter, :code:`*stream`, using the Rule constructor's :code:`params` keyword argument.
563 Similarly, we can change the :code:`OUTPUT` parameter from :code:`null` to :code:`ruleExecOut`, to accommodate
564 the output stream, via the :code:`output` argument::
565
566 r = irods.rule.Rule( session, rule_file = 'native1.r',
567 instance_name = 'irods_rule_engine_plugin-irods_rule_language-instance',
568 params={'*stream':'"stdout"'} , output = 'ruleExecOut' )
569 output = r.execute( )
570 if output and len(output.MsParam_PI):
571 buf = output.MsParam_PI[0].inOutStruct.stdoutBuf.buf
572 if buf: print(buf.rstrip(b'\0').decode('utf8'))
573
574 (Changing the input value to be squared in this example is left as an exercise for the reader!)
575
576 To deal with errors resulting from rule execution failure, two approaches can be taken. Suppose we
577 have defined this in the :code:`/etc/irods/core.re` rule-base::
578
579 rule_that_fails_with_error_code(*x) {
580 *y = (if (*x!="") then int(*x) else 0)
581 # if (SOME_PROCEDURE_GOES_WRONG) {
582 if (*y < 0) { failmsg(*y,"-- my error message --"); } #-> throws an error code of int(*x) in REPF
583 else { fail(); } #-> throws FAIL_ACTION_ENCOUNTERED_ERR in REPF
584 # }
585 }
586
587 We can run the rule thus:
588
589 >>> Rule( session, body='rule_that_fails_with_error_code("")', instance_name = 'irods_rule_engine_plugin-irods_rule_language-instance',
590 ... ).execute( r_error = (r_errs:= irods.message.RErrorStack()) )
591
592 Where we've used the Python 3.8 "walrus operator" for brevity. The error will automatically be caught and translated to a
593 returned-error stack::
594
595 >>> pprint.pprint([vars(r) for r in r_errs])
596 [{'raw_msg_': 'DEBUG: fail action encountered\n'
597 'line 14, col 15, rule base core\n'
598 ' else { fail(); }\n'
599 ' ^\n'
600 '\n',
601 'status_': -1220000}]
602
603 Note that, if a stringized negative integer is given, ie. as a special fail code to be thrown within the rule,
604 we must include this code in the :code:`acceptable_errors` parameter of :code:`execute` to have it automatically caught as well:
605
606 >>> Rule( session, body='rule_that_fails_with_error_code("-2")',instance_name = 'irods_rule_engine_plugin-irods_rule_language-instance'
607 ... ).execute( acceptable_errors = ( FAIL_ACTION_ENCOUNTERED_ERR, -2),
608 ... r_error = (r_errs := irods.message.RErrorStack()) )
609
610 Because the rule is written to emit a custom error message via failmsg in this case, the resulting r_error stack will now include that
611 custom error message as a substring::
612
613 >>> pprint.pprint([vars(r) for r in r_errs])
614 [{'raw_msg_': 'DEBUG: -- my error message --\n'
615 'line 21, col 20, rule base core\n'
616 ' if (*y < 0) { failmsg(*y,"-- my error message --"); } '
617 '#-> throws an error code of int(*x) in REPF\n'
618 ' ^\n'
619 '\n',
620 'status_': -1220000}]
621
622 Alternatively, or in combination with the automatic catching of errors, we may also catch errors as exceptions on the client
623 side. For example, if the Python rule engine is configured, and the following rule is placed in :code:`/etc/irods/core.py`::
624
625 def python_rule(rule_args, callback, rei):
626 # if some operation fails():
627 raise RuntimeError
628
629 we can trap the error thus::
630
631 try:
632 Rule( session, body = 'python_rule', instance_name = 'irods_rule_engine_plugin-python-instance' ).execute()
633 except irods.exception.RULE_ENGINE_ERROR:
634 print('Rule execution failed!')
635 exit(1)
636 print('Rule execution succeeded!')
637
638 As fail actions from native rules are not thrown by default (refer to the help text for :code:`Rule.execute`), if we
639 anticipate these and prefer to catch them as exceptions, we can do it this way::
640
641 try:
642 Rule( session, body = 'python_rule', instance_name = 'irods_rule_engine_plugin-python-instance'
643 ).execute( acceptable_errors = () )
644 except (irods.exception.RULE_ENGINE_ERROR,
645 irods.exception.FAIL_ACTION_ENCOUNTERED_ERR) as e:
646 print('Rule execution failed!')
647 exit(1)
648 print('Rule execution succeeded!')
649
650 Finally, keep in mind that rule code submitted through an :code:`irods.rule.Rule` object is processed by the
651 exec_rule_text function in the targeted plugin instance. This may be a limitation for plugins not equipped to
652 handle rule code in this way. In a sort of middle-ground case, the iRODS Python Rule Engine Plugin is not
653 currently able to handle simple rule calls and the manipulation of iRODS core primitives (like simple parameter
654 passing and variable expansion) as flexibly as the iRODS Rule Language.
655
656 Also, core.py rules may not be run directly (as is also true with :code:`irule`) by other than a rodsadmin user
657 pending the resolution of `this issue <https://github.com/irods/irods_rule_engine_plugin_python/issues/105>`_.
206658
207659
208660 General queries
233685 /tempZone/home/rods/manager/resource_manager.pyc id=212661 size=4570
234686 /tempZone/home/rods/manager/user_manager.py id=212669 size=5509
235687 /tempZone/home/rods/manager/user_manager.pyc id=212658 size=5233
688
689 Query using other models:
690
691 >>> from irods.column import Criterion
692 >>> from irods.models import DataObject, DataObjectMeta, Collection, CollectionMeta
693 >>> from irods.session import iRODSSession
694 >>> import os
695 >>> env_file = os.path.expanduser('~/.irods/irods_environment.json')
696 >>> with iRODSSession(irods_env_file=env_file) as session:
697 ... # by metadata
698 ... # equivalent to 'imeta qu -C type like Project'
699 ... results = session.query(Collection, CollectionMeta).filter( \
700 ... Criterion('=', CollectionMeta.name, 'type')).filter( \
701 ... Criterion('like', CollectionMeta.value, '%Project%'))
702 ... for r in results:
703 ... print(r[Collection.name], r[CollectionMeta.name], r[CollectionMeta.value], r[CollectionMeta.units])
704 ...
705 ('/tempZone/home/rods', 'type', 'Project', None)
706
707 Beginning with version 0.8.3 of PRC, the 'in' genquery operator is also available:
708
709 >>> from irods.models import Resource
710 >>> from irods.column import In
711 >>> [ resc[Resource.id] for resc in session.query(Resource).filter(In(Resource.name, ['thisResc','thatResc'])) ]
712 [10037,10038]
236713
237714 Query with aggregation (min, max, sum, avg, count):
238715
293770 __init__.py 212670
294771 __init__.pyc 212671
295772
773
296774 Recherché queries
297775 -----------------
298776
317795 >>> pprint( list( chained_results ) )
318796
319797
798 Instantiating iRODS objects from query results
799 ----------------------------------------------
800 The General query works well for getting information out of the ICAT if all we're interested in is
801 information representable with
802 primitive types (ie. object names, paths, and ID's, as strings or integers). But Python's object orientation also
803 allows us to create object references to mirror the persistent entities (instances of *Collection*, *DataObject*, *User*, or *Resource*, etc.)
804 inhabiting the ICAT.
805
806 **Background:**
807 Certain iRODS object types can be instantiated easily using the session object's custom type managers,
808 particularly if some parameter (often just the name or path) of the object is already known:
809
810 >>> type(session.users)
811 <class 'irods.manager.user_manager.UserManager'>
812 >>> u = session.users.get('rods')
813 >>> u.id
814 10003
815
816 Type managers are good for specific operations, including object creation and removal::
817
818 >>> session.collections.create('/tempZone/home/rods/subColln')
819 >>> session.collections.remove('/tempZone/home/rods/subColln')
820 >>> session.data_objects.create('/tempZone/home/rods/dataObj')
821 >>> session.data_objects.unlink('/tempZone/home/rods/dataObj')
822
823 When we retrieve a reference to an existing collection using *get* :
824
825 >>> c = session.collections.get('/tempZone/home/rods')
826 >>> c
827 <iRODSCollection 10011 rods>
828
829
830 we have, in that variable *c*, a reference to an iRODS *Collection* object whose properties provide
831 useful information:
832
833 >>> [ x for x in dir(c) if not x.startswith('__') ]
834 ['_meta', 'data_objects', 'id', 'manager', 'metadata', 'move', 'name', 'path', 'remove', 'subcollections', 'unregister', 'walk']
835 >>> c.name
836 'rods'
837 >>> c.path
838 '/tempZone/home/rods'
839 >>> c.data_objects
840 [<iRODSDataObject 10019 test1>]
841 >>> c.metadata.items()
842 [ <... list of AVU's attached to Collection c ... > ]
843
844 or whose methods can do useful things:
845
846 >>> for sub_coll in c.walk(): print('---'); pprint( sub_coll )
847 [ ...< series of Python data structures giving the complete tree structure below collection 'c'> ...]
848
849 This approach of finding objects by name, or via their relations with other objects (ie "contained by", or in the case of metadata, "attached to"),
850 is helpful if we know something about the location or identity of what we're searching for, but we don't always
851 have that kind of a-priori knowledge.
852
853 So, although we can (as seen in the last example) walk an *iRODSCollection* recursively to discover all subordinate
854 collections and their data objects, this approach will not always be best
855 for a given type of application or data discovery, especially in more advanced
856 use cases.
857
858 **A Different Approach:**
859 For the PRC to be sufficiently powerful for general use, we'll often need at least:
860
861 * general queries, and
862 * the capabilities afforded by the PRC's object-relational mapping.
863
864 Suppose, for example, we wish to enumerate all collections in the iRODS catalog.
865
866 Again, the object managers are the answer, but they are now invoked using a different scheme:
867
868 >>> from irods.collection import iRODSCollection; from irods.models import Collection
869 >>> all_collns = [ iRODSCollection(session.collections,result) for result in session.query(Collection) ]
870
871 From there, we have the ability to do useful work, or filtering based on the results of the enumeration.
872 And, because *all_collns* is an iterable of true objects, we can either use Python's list comprehensions or
873 execute more catalog queries to achieve further aims.
874
875 Note that, for similar system-wide queries of Data Objects (which, as it happens, are inextricably joined to their
876 parent Collection objects), a bit more finesse is required. Let us query, for example, to find all data
877 objects in a particular zone with an AVU that matches the following condition::
878
879 META_DATA_ATTR_NAME = "irods::alert_time" and META_DATA_ATTR_VALUE like '+0%'
880
881
882 >>> import irods.keywords
883 >>> from irods.data_object import iRODSDataObject
884 >>> from irods.models import DataObjectMeta, DataObject
885 >>> from irods.column import Like
886 >>> q = session.query(DataObject).filter( DataObjectMeta.name == 'irods::alert_time',
887 Like(DataObjectMeta.value, '+0%') )
888 >>> zone_hint = "" # --> add a zone name in quotes to search another zone
889 >>> if zone_hint: q = q.add_keyword( irods.keywords.ZONE_KW, zone_hint )
890 >>> for res in q:
891 ... colln_id = res [DataObject.collection_id]
892 ... collObject = get_collection( colln_id, session, zone = zone_hint)
893 ... dataObject = iRODSDataObject( session.data_objects, parent = collObject, results=[res])
894 ... print( '{coll}/{data}'.format (coll = collObject.path, data = dataObject.name))
895
896
897 In the above loop we have used a helper function, *get_collection*, to minimize the number of hits to the object
898 catalog. Otherwise, we might find within a typical application that some Collection objects are being queried at
899 a high rate of redundancy. *get_collection* can be implemented as follows:
900
901 .. code:: Python
902
903 import collections # of the Pythonic, not iRODS, kind
904 def makehash():
905 # see https://stackoverflow.com/questions/651794/whats-the-best-way-to-initialize-a-dict-of-dicts-in-python
906 return collections.defaultdict(makehash)
907 from irods.collection import iRODSCollection
908 from irods.models import Collection
909 def get_collection (Id, session, zone=None, memo = makehash()):
910 if not zone: zone = ""
911 c_obj = memo[session][zone].get(Id)
912 if c_obj is None:
913 q = session.query(Collection).filter(Collection.id==Id)
914 if zone != '': q = q.add_keyword( irods.keywords.ZONE_KW, zone )
915 c_id = q.one()
916            c_obj = iRODSCollection(session.collections, result = c_id)
917 memo[session][zone][Id] = c_obj
918 return c_obj
919
920
921 Once instantiated, of course, any *iRODSDataObject*'s data to which we have access permissions is available via its open() method.
922
923 As stated, this type of object discovery requires some extra study and effort, but the ability to search arbitrary iRODS zones
924 (to which we are federated and have the user permissions) is powerful indeed.
925
926
927 Tickets
928 -------
929
930 The :code:`irods.ticket.Ticket` class lets us issue "tickets" which grant limited
931 permissions for other users to access our own data objects (or collections of
932 data objects). As with the iticket client, the access may be either "read"
933 or "write". The recipient of the ticket could be a rodsuser, or even an
934 anonymous user.
935
936 Below is a demonstration of how to generate a new ticket for access to a
937 logical path - in this case, say a collection containing 1 or more data objects.
938 (We assume the creation of the granting_session and receiving_session, respectively, for the users
939 providing and consuming the ticket access; a sketch of their construction appears just below.)
940
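For concreteness, those session objects might be constructed as follows (a hypothetical sketch; the host, user
names, and passwords are placeholders):

>>> from irods.session import iRODSSession
>>> granting_session = iRODSSession(host='localhost', port=1247, user='alice', password='apass', zone='tempZone')
>>> receiving_session = iRODSSession(host='localhost', port=1247, user='bob', password='bpass', zone='tempZone')
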
941 The user who wishes to provide an access may execute the following:
942
943 >>> from irods.ticket import Ticket
944 >>> new_ticket = Ticket (granting_session)
945 >>> The_Ticket_String = new_ticket.issue('read',
946 ... '/zone/home/my/collection_with_data_objects_for/somebody').string
947
948 at which point that ticket's unique string may be given to other users, who can then apply the
949 ticket to any existing session object in order to gain access to the intended object(s):
950
951 >>> from irods.models import Collection, DataObject
952 >>> ses = receiving_session
953 >>> Ticket(ses, The_Ticket_String).supply()
954 >>> c_result = ses.query(Collection).one()
955 >>> c = iRODSCollection( ses.collections, c_result)
956 >>> for dobj in (c.data_objects):
957 ... ses.data_objects.get( dobj.path, '/tmp/' + dobj.name ) # download objects
958
959 In this case, however, modification will not be allowed because the ticket is for read only:
960
961 >>> c.data_objects[0].open('w').write( # raises
962 ... b'new content') # CAT_NO_ACCESS_PERMISSION
963
964 In another example, we could generate a ticket that explicitly allows 'write' access on a
965 specific data object, thus granting other users the permissions to modify as well as read it:
966
967 >>> ses = iRODSSession( user = 'anonymous', password = '', host = 'localhost',
968 port = 1247, zone = 'tempZone')
969 >>> Ticket(ses, write_data_ticket_string ).supply()
970 >>> d_result = ses.query(DataObject.name,Collection.name).one()
971 >>> d_path = ( d_result[Collection.name] + '/' +
972 ... d_result[DataObject.name] )
973 >>> old_content = ses.data_objects.open(d_path,'r').read()
974 >>> with tempfile.NamedTemporaryFile() as f:
975 ... f.write(b'blah'); f.flush()
976 ... ses.data_objects.put(f.name,d_path)
977
978 As with iticket, we may set a time limit on the availability of a ticket, either as a
979 timestamp or in seconds since the epoch:
980
981 >>> t = Ticket(ses); print(t.string)
982 vIOQ6qzrWWPO9X7
983 >>> t.issue('read','/some/path')
984 >>> t.modify('expiry','2021-04-01.12:34:56') # timestamp assumed as UTC
985
986 To check the results of the above, we could invoke this icommand elsewhere in a shell prompt:
987
988 :code:`iticket ls vIOQ6qzrWWPO9X7`
989
990 and the server should report back the same expiration timestamp.
991
992 And, if we are the issuer of a ticket, we may also query, filter on, and
993 extract information based on a ticket's attributes and catalog relations:
994
995 >>> from irods.models import TicketQuery
996 >>> delay = lambda secs: int( time.time() + secs + 1)
997 >>> Ticket(ses).issue('read','/path/to/data_object').modify(
998 'expiry',delay(7*24*3600)) # lasts 1 week
999 >>> Q = ses.query (TicketQuery.Ticket, TicketQuery.DataObject).filter(
1000 ... TicketQuery.DataObject.name == 'data_object')
1001 >>> print ([ _[TicketQuery.Ticket.expiry_ts] for _ in Q ])
1002 ['1636757427']
1003
1004
1005 Tracking and manipulating replicas of Data objects
1006 --------------------------------------------------
1007
1008 Putting together the techniques we've seen so far, it's not hard to write functions
1009 that achieve useful, common goals. Suppose that for all data objects containing replicas on
1010 a given named resource (the "source") we want those replicas "moved" to a second, or
1011 "destination" resource. We can achieve it with a function such as the one below. It
1012 achieves the move via a replication of the data objects found to the destination
1013 resource, followed by a trimming of each replica from the source. We assume for our current
1014 purposes that all replicas are "good", ie have a status of "1" ::
1015
1016     from irods.resource import iRODSResource
         import irods.keywords as kw   # needed for kw.DATA_REPL_KW used below
1017 from irods.collection import iRODSCollection
1018 from irods.data_object import iRODSDataObject
1019 from irods.models import Resource,Collection,DataObject
1020 def repl_and_trim (srcRescName, dstRescName = '', verbose = False):
1021 objects_trimmed = 0
1022 q = session.query(Resource).filter(Resource.name == srcRescName)
1023 srcResc = iRODSResource( session.resources, q.one())
1024 # loop over data objects found on srcResc
1025 for q_row in session.query(Collection,DataObject) \
1026 .filter(DataObject.resc_id == srcResc.id):
1027 collection = iRODSCollection (session.collections, result = q_row)
1028 data_object = iRODSDataObject (session.data_objects, parent = collection, results = (q_row,))
1029 objects_trimmed += 1
1030 if verbose :
1031 import pprint
1032 print( '--------', data_object.name, '--------')
1033 pprint.pprint( [vars(r) for r in data_object.replicas if
1034 r.resource_name == srcRescName] )
1035 if dstRescName:
1036 objects_trimmed += 1
1037 data_object.replicate(dstRescName)
1038 for replica_number in [r.number for r in data_object.replicas]:
1039 options = { kw.DATA_REPL_KW: replica_number }
1040 data_object.unlink( **options )
1041 return objects_trimmed
1042
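A hypothetical invocation (the resource names below are placeholders) could then be:

>>> repl_and_trim( 'source_resc', dstRescName = 'dest_resc', verbose = True )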
1043
1044 Listing Users and Groups ; calculating Group Membership
1045 -------------------------------------------------------
1046
1047 iRODS tracks groups and users using two tables, R_USER_MAIN and R_USER_GROUP.
1048 Under this database schema, all "user groups" are also users:
1049
1050 >>> from irods.models import User, UserGroup
1051 >>> from pprint import pprint
1052 >>> pprint(list( [ (x[User.id], x[User.name]) for x in session.query(User) ] ))
1053 [(10048, 'alice'),
1054 (10001, 'rodsadmin'),
1055 (13187, 'bobby'),
1056 (10045, 'collab'),
1057 (10003, 'rods'),
1058 (13193, 'empty'),
1059 (10002, 'public')]
1060
1061 But it's also worth noting that the User.type field will be 'rodsgroup' for any
1062 user ID that iRODS internally recognizes as a "Group":
1063
1064 >>> groups = session.query(User).filter( User.type == 'rodsgroup' )
1065
1066 >>> [x[User.name] for x in groups]
1067 ['collab', 'public', 'rodsadmin', 'empty']
1068
1069 Since we can instantiate iRODSUserGroup and iRODSUser objects directly from the rows of
1070 a general query on the corresponding tables, it is also straightforward to trace out
1071 the groups' memberships:
1072
1073 >>> from irods.user import iRODSUser, iRODSUserGroup
1074 >>> grp_usr_mapping = [ (iRODSUserGroup ( session.user_groups, result), iRODSUser (session.users, result)) \
1075 ... for result in session.query(UserGroup,User) ]
1076 >>> pprint( [ (x,y) for x,y in grp_usr_mapping if x.id != y.id ] )
1077 [(<iRODSUserGroup 10045 collab>, <iRODSUser 10048 alice rodsuser tempZone>),
1078 (<iRODSUserGroup 10001 rodsadmin>, <iRODSUser 10003 rods rodsadmin tempZone>),
1079 (<iRODSUserGroup 10002 public>, <iRODSUser 10003 rods rodsadmin tempZone>),
1080 (<iRODSUserGroup 10002 public>, <iRODSUser 10048 alice rodsuser tempZone>),
1081 (<iRODSUserGroup 10045 collab>, <iRODSUser 13187 bobby rodsuser tempZone>),
1082 (<iRODSUserGroup 10002 public>, <iRODSUser 13187 bobby rodsuser tempZone>)]
1083
1084 (Note that in general queries, fields cannot be compared to each other, only to literal constants; thus
1085 the '!=' comparison in the Python list comprehension.)
1086
1087 From the above, we can see that the group 'collab' (with user ID 10045) contains users 'bobby'(13187) and
1088 'alice'(10048) but not 'rods'(10003), as the tuple (10045,10003) is not listed. Group 'rodsadmin'(10001)
1089 contains user 'rods'(10003) but no other users; and group 'public'(10002) by default contains all canonical
1090 users (those whose User.type is 'rodsadmin' or 'rodsuser'). The empty group ('empty') has no users as
1091 members, so it doesn't show up in our final list.
1092
1093
1094 Getting and setting permissions
1095 -------------------------------
1096
1097 We can find the ID's of all the collections writable (ie having "modify" ACL) by, but not owned by,
1098 alice (or even alice#otherZone):
1099
1100 >>> from irods.models import Collection,CollectionAccess,CollectionUser,User
1101 >>> from irods.column import Like
1102 >>> q = session.query (Collection,CollectionAccess).filter(
1103 ... CollectionUser.name == 'alice', # User.zone == 'otherZone', # zone optional
1104 ... Like(CollectionAccess.name, 'modify%') ) #defaults to current zone
1105
1106 If we then want to downgrade those permissions to read-only, we can do the following:
1107
1108 >>> from irods.access import iRODSAccess
1109 >>> for c in q:
1110 ... session.permissions.set( iRODSAccess('read', c[Collection.name], 'alice', # 'otherZone' # zone optional
1111 ... ))
1112
1113 We can also query on access type using its numeric value, which will seem more natural to some:
1114
1115 >>> OWN = 1200; MODIFY = 1120 ; READ = 1050
1116 >>> from irods.models import DataAccess, DataObject, User
1117 >>> data_objects_writable = list(session.query(DataObject,DataAccess,User).filter(User.name=='alice', DataAccess.type >= MODIFY))
1118
1119
1120 Managing users
1121 --------------
1122
1123 You can create a user in the current zone (with an optional auth_str):
1124
1125 >>> session.users.create('user', 'rodsuser', 'MyZone', auth_str)
1126
1127 If you want to create a user in a federated zone, use:
1128
1129 >>> session.users.create('user', 'rodsuser', 'OtherZone', auth_str)
1130
1131
3201132 And more...
3211133 -----------
3221134
323 Additional code samples are available in the `test directory <https://github.com/irods/python-irodsclient/tree/master/irods/test>`_
1135 Additional code samples are available in the `test directory <https://github.com/irods/python-irodsclient/tree/main/irods/test>`_
1136
1137
1138 =======
1139 Testing
1140 =======
1141
1142 Setting up and running tests
1143 ----------------------------
1144
1145 The Python iRODS Client comes with its own suite of tests. Some amount of setting up may be necessary first:
1146
1147 1. Use :code:`iinit` to specify the iRODS client environment.
1148 For best results, point the client at a server running on the local host.
1149
1150 2. Install the python-irodsclient along with the :code:`unittest-xml-reporting` package (or the older :code:`xmlrunner` equivalent).
1151
1152 - for PRC versions 1.1.1 and later:
1153
1154 * :code:`pip install ./path-to-python-irodsclient-repo[tests]` (when using a local Git repo); or,
1155 * :code:`pip install python-irodsclient[tests]'>=1.1.1'` (when installing directly from PyPI).
1156
1157 - earlier releases (<= 1.1.0) will install the outdated :code:`xmlrunner` module automatically
1158
1159 3. Follow further instructions in the `test directory <https://github.com/irods/python-irodsclient/tree/main/irods/test>`_
1160
1161
1162 Testing S3 parallel transfer
1163 ----------------------------
1164
1165 System requirements::
1166
1167 - Ubuntu 18 user with Docker installed.
1168 - Local instance of iRODS server running.
1169   - Logged-in user with sudo privileges.
1170
1171 Run a MinIO service::
1172
1173 $ docker run -d -p 9000:9000 -p 9001:9001 minio/minio server /data --console-address ":9001"
1174
1175 Set up a bucket :code:`s3://irods` under MinIO::
1176
1177 $ pip install awscli
1178
1179 $ aws configure
1180 AWS Access Key ID [None]: minioadmin
1181 AWS Secret Access Key [None]: minioadmin
1182 Default region name [None]:
1183 Default output format [None]:
1184
1185 $ aws --endpoint-url http://127.0.0.1:9000 s3 mb s3://irods
1186
1187 Set up s3 credentials for the iRODS s3 storage resource::
1188
1189 $ sudo su - irods -c "/bin/echo -e 'minioadmin\nminioadmin' >/var/lib/irods/s3-credentials"
1190 $ sudo chmod 600 /var/lib/irods/s3-credentials
1191
1192 Create the s3 storage resource::
1193
1194 $ sudo apt install irods-resource-plugin-s3
1195
1196 As the 'irods' service account user::
1197
1198 $ iadmin mkresc s3resc s3 $(hostname):/irods/ \
1199 "S3_DEFAULT_HOSTNAME=localhost:9000;"\
1200 "S3_AUTH_FILE=/var/lib/irods/s3-credentials;"\
1201 "S3_REGIONNAME=us-east-1;"\
1202 "S3_RETRY_COUNT=1;"\
1203 "S3_WAIT_TIME_SEC=3;"\
1204 "S3_PROTO=HTTP;"\
1205 "ARCHIVE_NAMING_POLICY=consistent;"\
1206 "HOST_MODE=cacheless_attached"
1207
1208 $ dd if=/dev/urandom of=largefile count=40k bs=1k # create 40-megabyte test file
1209
1210 $ pip install 'python-irodsclient>=1.1.2'
1211
1212 $ python -c"from irods.test.helpers import make_session
1213 import irods.keywords as kw
1214 with make_session() as sess:
1215 sess.data_objects.put( 'largefile',
1216 '/tempZone/home/rods/largeFile1',
1217 **{kw.DEST_RESC_NAME_KW:'s3resc'} )
1218 sess.data_objects.get( '/tempZone/home/rods/largeFile1',
1219 '/tmp/largefile')"
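
As an optional sanity check (illustrative, not part of the original procedure), the round trip can be verified by comparing checksums of the uploaded source and the downloaded copy::

$ python -c"import hashlib
for f in ('largefile', '/tmp/largefile'):
    print(hashlib.sha256(open(f,'rb').read()).hexdigest(), f)"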
0 python-irodsclient (0.8.1-4) UNRELEASED; urgency=medium
0 python-irodsclient (1.1.5-1) UNRELEASED; urgency=medium
11
22 * Update standards version to 4.6.2, no changes needed.
3 * New upstream release.
34
4 -- Debian Janitor <janitor@jelmer.uk> Sun, 08 Jan 2023 17:38:38 -0000
5 -- Debian Janitor <janitor@jelmer.uk> Wed, 11 Jan 2023 04:24:14 -0000
56
67 python-irodsclient (0.8.1-3) unstable; urgency=medium
78
0 version: '3'
1 services:
2
3 icat:
4 image: postgres:10
5 environment:
6 - POSTGRES_HOST_AUTH_METHOD=md5
7 - POSTGRES_PASSWORD=pg_password
8
9 irods-provider:
10 environment:
11 - PYTHON_RULE_ENGINE_INSTALLED=${python_rule_engine_installed}
12 hostname: irods-provider
13 build:
14 context: docker_build
15 dockerfile: Dockerfile.provider
16 args:
17 server_py: "${server_python_version}"
18 volumes:
19 - "${irods_pkg_dir}:/irods_packages:ro"
20 - ./irods_shared:/irods_shared:rw
21 depends_on:
22 - icat
23 networks:
24 default:
25 aliases:
26 - irods-provider
27
28 client-runner:
29 env_file: client-runner.env
30 environment:
31 - PYTHON_RULE_ENGINE_INSTALLED=${python_rule_engine_installed}
32 volumes:
33 - ./irods_shared:/irods_shared:rw
34 build:
35 context: .
36 dockerfile: Dockerfile.prc_test.${client_os_generic}
37 args:
38 os_image: "$client_os_image"
39 py_N: "$client_python_version"
40 depends_on:
41 - irods-provider
0 FROM ubuntu:18.04
1
2 ARG irods_pkg_dir
3 ARG server_py=${server_python_version}
4 ENV SERVER_PY "${server_py}"
5
6 RUN apt update
7 RUN apt install -y wget sudo lsb-release apt-transport-https gnupg2 postgresql-client python3
8 RUN wget -qO - https://packages.irods.org/irods-signing-key.asc | sudo apt-key add -
9 RUN echo "deb [arch=amd64] https://packages.irods.org/apt/ $(lsb_release -sc) main" | sudo tee /etc/apt/sources.list.d/renci-irods.list
10 RUN apt update
11
12 SHELL [ "/bin/bash","-c" ]
13
14 COPY ICAT.sql /tmp
15 COPY pgpass root/.pgpass
16 RUN chmod 600 root/.pgpass
17
18 RUN apt install -y rsyslog gawk
19 RUN apt install -y jq
20 ADD build_deps_list wait_on_condition send_oneshot install_python_rule_engine setup_python_rule_engine /tmp/
21
22 # At Runtime: 1. Install apt dependencies for the iRODS package files given.
23 # 2. Install the package files.
24 # 3. Wait on database container.
25 # 4. Configure iRODS provider and make sure it is running.
26 # 5. Open a server port, informing the client to start tests now that iRODS is up.
27 # 6. Configure shared folder for tests that need to register data objects.
28 # (We opt out if /irods_shared does not exist, ie is omitted in the docker-compose.yml).
29 # 7. Wait forever.
30
31 CMD apt install -y $(/tmp/build_deps_list /irods_packages/irods*{serv,dev,icommand,runtime,database-*postgres}*.deb) && \
32 dpkg -i /irods_packages/irods*{serv,dev,icommand,runtime,database-*postgres}*.deb && \
33 /tmp/wait_on_condition -i 5 -n 12 "psql -h icat -U postgres -c '\\l' >/dev/null" && \
34 psql -h icat -U postgres -f /tmp/ICAT.sql && \
35 sed 's/localhost/icat/' < /var/lib/irods/packaging/localhost_setup_postgres.input \
36 | python${SERVER_PY} /var/lib/irods/scripts/setup_irods.py && \
37 { [ "${PYTHON_RULE_ENGINE_INSTALLED}" = '' ] || { /tmp/install_python_rule_engine "$PYTHON_RULE_ENGINE_INSTALLED" /irods_packages \
38 && /tmp/setup_python_rule_engine; } } && \
39 { pgrep -u irods irodsServer >/dev/null || su irods -c '~/irodsctl start'; \
40 env PORT=8888 /tmp/send_oneshot "iRODS is running..." & } && \
41 { [ ! -d /irods_shared ] || { mkdir -p /irods_shared/reg_resc && mkdir -p /irods_shared/tmp && \
42 chown -R irods.irods /irods_shared && chmod g+ws /irods_shared/tmp && \
43 chmod 777 /irods_shared/reg_resc ; } } && \
44 echo $'*********\n' $'*********\n' $'*********\n' $'*********\n' $'*********\n' IRODS IS UP && \
45 tail -f /dev/null
0 CREATE USER irods WITH PASSWORD 'testpassword';
1 CREATE DATABASE "ICAT";
2 GRANT ALL PRIVILEGES ON DATABASE "ICAT" TO irods;
0 #!/bin/bash
1
2 build_deps_list()
3 {
4 local -A pkglist
5 local pkg
6 while [ $# -gt 0 ]
7 do
8 while read f
9 do
10 if [[ ! $f =~ \(.*\)\s*$ ]]; then # todo: include version-specific ?
11 pkglist["$f"]=""
12 fi
13 done < <(dpkg -I "$1"|grep -i '^ *depends:'|tr ',:' \\n | tail -n +2)
14 shift
15 done
16 for pkg in "${!pkglist[@]}" # package list de-duped by associative array
17 do
18 echo "$pkg"
19 done
20 }
21 build_deps_list "$@"
0 from getpass import getpass
1 from irods.password_obfuscation import encode
2 import json
3 import os
4 import sys
5 from os import chmod
6 from os.path import expanduser,exists,join
7 from getopt import getopt
8
9
10 home_env_path = expanduser('~/.irods')
11 env_file_path = join(home_env_path,'irods_environment.json')
12 auth_file_path = join(home_env_path,'.irodsA')
13
14
15 def do_iinit(host, port, user, zone, password):
16 if not exists(home_env_path):
17 os.makedirs(home_env_path)
18 else:
19 raise RuntimeError('~/.irods already exists')
20
21 with open(env_file_path,'w') as env_file:
22 json.dump ( { "irods_host": host,
23 "irods_port": int(port),
24 "irods_user_name": user,
25 "irods_zone_name": zone }, env_file, indent=4)
26 with open(auth_file_path,'w') as auth_file:
27 auth_file.write(encode(password))
28 chmod (auth_file_path,0o600)
29
30
31 def get_kv_pairs_from_cmdline(*args):
32 arglist = list(args)
33 while arglist:
34 k = arglist.pop(0)
35 v = arglist.pop(0)
36 yield k,v
37
38
39 if __name__ == '__main__':
40 import sys
41 args = sys.argv[1:]
42 dct = {k:v for k,v in get_kv_pairs_from_cmdline(*args)}
43 do_iinit(**dct)
0 #!/bin/bash
1 # usage $0 [""|"y"|"/"*] [container_irods_packages_path]
2 if [[ $1 = /* ]]; then
3 apt install -y "$2"/irods*rule*python*.deb
4 elif [ "$1" != "" ]; then
5 apt install -y irods-rule.\*python
6 else
7 : # nop
8 fi
0 icat:5432:postgres:postgres:pg_password
0 #!/usr/bin/env python
1 from __future__ import print_function
2 import sys, os, time
3 from socket import *
4 import getopt
5
6 def try_connect(host,port):
7 try:
8 s=socket(AF_INET,SOCK_STREAM)
9 s.connect((host,port))
10 return s
11 except:
12 s.close()
13 return None
14
15 # Options:
16 #
17 # -t timeout
18 # -h host
19 # -p port
20
21 t = now = time.time()
22 opts = dict(getopt.getopt(sys.argv[1:],'t:h:p:')[0])
23
24 host = opts['-h']
25 port = int(opts['-p'])
26 timeout = float(opts['-t'])
27
28 while time.time() < now + timeout:
29 time.sleep(1)
30 s = try_connect(host, port)
31 if s:
32 print(s.recv(32767).decode('utf-8'),end='')
33 exit(0)
34 exit(1)
0 #!/usr/bin/gawk -f
1 BEGIN {
2 SERVER = "/inet/tcp/"ENVIRON["PORT"]"/0/0"
3 print ARGV[1] " - " strftime() |& SERVER
4 close(SERVER)
5 }
0 #!/bin/bash
1
2 jq_process_in_place() {
3 local filename=$1
4 shift
5 local basenm=$(basename "$filename")
6 local tempname=/tmp/.$$.$basenm
7
8 jq "$@" <"$filename" >"$tempname" && \
9 cp "$tempname" "$filename"
10 STATUS=$?
11 rm -f "$tempname"
12 [ $STATUS = 0 ] || echo "**** jq process error" >&2
13 }
14
15 jq_process_in_place /etc/irods/server_config.json \
16 '.plugin_configuration.rule_engines[1:1]=[ { "instance_name": "irods_rule_engine_plugin-python-instance",
17 "plugin_name": "irods_rule_engine_plugin-python",
18 "plugin_specific_configuration": {}
19 }
20 ]'
21
22 echo '
23 defined_in_both {
24 writeLine("stdout", "native rule")
25 }
26
27 generic_failing_rule {
28 fail
29 }
30
31 failing_with_message {
32 failmsg(-2, "error with code of minus 2")
33 }
34
35 ' >> /etc/irods/core.re
36
37 echo '
38 def defined_in_both(rule_args,callback,rei):
39 callback.writeLine("stdout", "python rule")
40
41 def generic_failing_rule(*_):
42 raise RuntimeError
43
44 def failing_with_message_py(rule_args,callback,rei):
45 callback.failing_with_message()
46
47 ' > /etc/irods/core.py
0 #!/bin/bash
1
2 # wait for a program to run with 0 return status
3
4 interval=3; ntimes=20; verbose=""
5
6 usage() {
7 echo "$0 [options] <command args...>"
8 printf "\t options are: -i <sleep interval_secs> (default %d)\n" $interval
9 printf "\t -n <integer_number_of_tries> (default %d)\n" $ntimes
10 printf "\t -v : for verbose reporting\n"
11 exit 1
12 } >&2
13
14 while [[ "$1" = -* ]] ; do
15 case $1 in
16 -i) shift; interval=$1; shift ;;
17 -n) shift; ntimes=$1; shift ;;
18 -v) verbose=1 ; shift;;
19 *) usage;;
20 esac
21 done
22 [ $# -eq 0 ] && usage
23
24 n=1
25 while : ; do
26 eval "$@"
27 STATUS=$?
28 [ -n "$verbose" ] && echo "$n:" 'STATUS =' $STATUS `date`
29 [ $((++n)) -gt $ntimes -o $STATUS -eq 0 ] && break
30 sleep $interval
31 done
32
33 exit $STATUS
00 from .version import __version__
1
2 import logging
3 logger = logging.getLogger(__name__)
4 logger.addHandler(logging.NullHandler())
5 gHandler = None
6
7 def client_logging(flag=True,handler=None):
8 """
9 Example of use:
10
11 import irods
12 # Enable / Disable general client logging
13 irods.client_logging(True[,handler]) -> handler
14 # (handler is a StreamHandler to stderr by default)
15 irods.client_logging(False) # - disable irods client logging
16 """
17 global gHandler
18 if flag:
19 if handler is not None:
20 if gHandler: logger.removeHandler(gHandler)
21 if not handler: handler = logging.StreamHandler()
22 gHandler = handler
23 logger.addHandler(handler)
24 else:
25 if gHandler: logger.removeHandler(gHandler)
26 gHandler = None
27 return gHandler
128
229 # Magic Numbers
330 MAX_PASSWORD_LENGTH = 50
936 MAX_SQL_ROWS = 256
1037 DEFAULT_CONNECTION_TIMEOUT = 120
1138
12 # Other variables
1339 AUTH_SCHEME_KEY = 'a_scheme'
40 AUTH_USER_KEY = 'a_user'
41 AUTH_PWD_KEY = 'a_pw'
42 AUTH_TTL_KEY = 'a_ttl'
43
44 NATIVE_AUTH_SCHEME = 'native'
45
1446 GSI_AUTH_PLUGIN = 'GSI'
1547 GSI_AUTH_SCHEME = GSI_AUTH_PLUGIN.lower()
1648 GSI_OID = "1.3.6.1.4.1.3536.1.1" # taken from http://j.mp/2hDeczm
49
50 PAM_AUTH_PLUGIN = 'PAM'
51 PAM_AUTH_SCHEME = PAM_AUTH_PLUGIN.lower()
175175 # 1100 - 1200 - SSL API calls
176176 "SSL_START_AN": 1100,
177177 "SSL_END_AN": 1101,
178 "ATOMIC_APPLY_METADATA_OPERATIONS_APN": 20002,
179 "GET_FILE_DESCRIPTOR_INFO_APN": 20000,
180 "REPLICA_CLOSE_APN": 20004
178181 }
55 from irods.data_object import iRODSDataObject, irods_basename
66 from irods.meta import iRODSMetaCollection
77
8 def _first_char( *Strings ):
9 for s in Strings:
10 if s: return s[0]
11 return ''
812
913 class iRODSCollection(object):
14
15 class AbsolutePathRequired(Exception):
16 """Exception raised by iRODSCollection.normalize_path.
17
18 AbsolutePathRequired is raised by normalize_path( *paths ) when the leading path element
19 does not start with '/'. The exception will not be raised, however, if enforce_absolute = False
20 is passed to normalize_path as a keyword option.
21 """
22 pass
1023
1124 def __init__(self, manager, result=None):
1225 self.manager = manager
1427 self.id = result[Collection.id]
1528 self.path = result[Collection.name]
1629 self.name = irods_basename(result[Collection.name])
30 self.create_time = result[Collection.create_time]
31 self.modify_time = result[Collection.modify_time]
32 self._inheritance = result[Collection.inheritance]
33 self.owner_name = result[Collection.owner_name]
34 self.owner_zone = result[Collection.owner_zone]
1735 self._meta = None
36
37 @property
38 def inheritance(self):
39 return bool(self._inheritance) and self._inheritance != "0"
1840
1941 @property
2042 def metadata(self):
6890 if not topdown:
6991 yield (self, self.subcollections, self.data_objects)
7092
93 @staticmethod
94 def normalize_path(*paths, **kw_):
95 """Normalize a path or list of paths.
96
97 We use the iRODSPath class to eliminate extra slashes in,
98 and (if more than one parameter is given) concatenate, paths.
99 If the keyword argument `enforce_absolute' is set to True, this
100 function requires that the first character of the path(s) passed
101 in be '/'.
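For example (illustrative): normalize_path('/tempZone//home/alice/') is
expected to yield '/tempZone/home/alice'.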
102 """
103 import irods.path
104 absolute = kw_.get('enforce_absolute',False)
105 if absolute and _first_char(*paths) != '/':
106 raise iRODSCollection.AbsolutePathRequired
107 return irods.path.iRODSPath(*paths, absolute = absolute)
108
71109 def __repr__(self):
72 return "<iRODSCollection {id} {name}>".format(id=self.id, name=self.name.encode('utf-8'))
110 return "<iRODSCollection {id} {name}>".format(id = self.id, name = self.name.encode('utf-8'))
00 from __future__ import absolute_import
1 import six
12 from datetime import datetime
23 from calendar import timegm
34
3738 def value(self):
3839 return self.query_key.column_type.to_irods(self._value)
3940
41 class In(Criterion):
42
43 def __init__(self, query_key, value):
44 super(In, self).__init__('in', query_key, value)
45
46 @property
47 def value(self):
48 v = "("
49 comma = ""
50 for element in self._value:
51 v += "{}'{}'".format(comma,element)
52 comma = ","
53 v += ")"
54 return v
4055
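# Illustrative sketch (not part of the original source): an In criterion can be passed
# to a query's filter() to match any value in a list, e.g.
#   session.query(User).filter(In(User.id, [10001, 10002]))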
4156 class Like(Criterion):
4257
112127
113128 @staticmethod
114129 def to_irods(data):
130 try:
131 # Convert to Unicode string (aka decode)
132 data = six.text_type(data, 'utf-8', 'replace')
133 except TypeError:
134 # Some strings are already Unicode so they do not need decoding
135 pass
115136 return u"'{}'".format(data)
116137
117138
33 import struct
44 import hashlib
55 import six
6 import struct
76 import os
87 import ssl
8 import datetime
9 import irods.password_obfuscation as obf
10 from irods import MAX_NAME_LEN
11 from ast import literal_eval as safe_eval
12 import re
13
14
15 PAM_PW_ESC_PATTERN = re.compile(r'([@=&;])')
916
1017
1118 from irods.message import (
12 iRODSMessage, StartupPack, AuthResponse, AuthChallenge,
19 iRODSMessage, StartupPack, AuthResponse, AuthChallenge, AuthPluginOut,
1320 OpenedDataObjRequest, FileSeekResponse, StringStringMap, VersionResponse,
14 GSIAuthMessage, ClientServerNegotiation, Error)
15 from irods.exception import get_exception_by_code, NetworkException
21 PluginAuthMessage, ClientServerNegotiation, Error, GetTempPasswordOut)
22 from irods.exception import (get_exception_by_code, NetworkException, nominal_code)
23 from irods.message import (PamAuthRequest, PamAuthRequestOut)
24
25
26 ALLOW_PAM_LONG_TOKENS = True # True to fix [#279]
27 # Message to be logged when the connection
28 # destructor is called. Used in a unit test
29 DESTRUCTOR_MSG = "connection __del__() called"
30
1631 from irods import (
1732 MAX_PASSWORD_LENGTH, RESPONSE_LEN,
18 AUTH_SCHEME_KEY, GSI_AUTH_PLUGIN, GSI_AUTH_SCHEME, GSI_OID)
33 AUTH_SCHEME_KEY, AUTH_USER_KEY, AUTH_PWD_KEY, AUTH_TTL_KEY,
34 NATIVE_AUTH_SCHEME,
35 GSI_AUTH_PLUGIN, GSI_AUTH_SCHEME, GSI_OID,
36 PAM_AUTH_SCHEME)
1937 from irods.client_server_negotiation import (
2038 perform_negotiation,
2139 validate_policy,
2846
2947 logger = logging.getLogger(__name__)
3048
49 class PlainTextPAMPasswordError(Exception): pass
3150
3251 class Connection(object):
52
53 DISALLOWING_PAM_PLAINTEXT = True
3354
3455 def __init__(self, pool, account):
3556
3859 self.account = account
3960 self._client_signature = None
4061 self._server_version = self._connect()
62 self._disconnected = False
4163
4264 scheme = self.account.authentication_scheme
4365
44 if scheme == 'native':
66 if scheme == NATIVE_AUTH_SCHEME:
4567 self._login_native()
46 elif scheme == 'gsi':
68 elif scheme == GSI_AUTH_SCHEME:
4769 self.client_ctx = None
4870 self._login_gsi()
71 elif scheme == PAM_AUTH_SCHEME:
72 self._login_pam()
4973 else:
5074 raise ValueError("Unknown authentication scheme %s" % scheme)
75 self.create_time = datetime.datetime.now()
76 self.last_used_time = self.create_time
5177
5278 @property
5379 def server_version(self):
54 return tuple(int(x) for x in self._server_version.relVersion.replace('rods', '').split('.'))
55
80 detected = tuple(int(x) for x in self._server_version.relVersion.replace('rods', '').split('.'))
81 return (safe_eval(os.environ.get('IRODS_SERVER_VERSION','()'))
82 or detected)
5683 @property
5784 def client_signature(self):
5885 return self._client_signature
5986
6087 def __del__(self):
61 if self.socket:
62 self.disconnect()
88 self.disconnect()
89 logger.debug(DESTRUCTOR_MSG)
6390
6491 def send(self, message):
6592 string = message.pack()
76103 self.release(True)
77104 raise NetworkException("Unable to send message")
78105
79 def recv(self):
106 def recv(self, into_buffer = None
107 , return_message = ()
108 , acceptable_errors = ()):
109 acceptable_codes = set(nominal_code(e) for e in acceptable_errors)
80110 try:
81 msg = iRODSMessage.recv(self.socket)
82 except socket.error:
111 if into_buffer is None:
112 msg = iRODSMessage.recv(self.socket)
113 else:
114 msg = iRODSMessage.recv_into(self.socket, into_buffer)
115 except (socket.error, socket.timeout) as e:
116 # If _recv_message_in_len() fails in recv() or recv_into(),
117 # it will throw a socket.error exception. The exception is
118 # caught here, a critical message is logged, and the error is
119 # wrapped in a NetworkException with a more user-friendly message.
120 logger.critical(e)
83121 logger.error("Could not receive server response")
84122 self.release(True)
85123 raise NetworkException("Could not receive server response")
124 if isinstance(return_message,list): return_message[:] = [msg]
86125 if msg.int_info < 0:
87126 try:
88127 err_msg = iRODSMessage(msg=msg.error).get_main_message(Error).RErrMsg_PI[0].msg
89128 except TypeError:
90 raise get_exception_by_code(msg.int_info)
91 raise get_exception_by_code(msg.int_info, err_msg)
129 err_msg = None
130 if nominal_code(msg.int_info) not in acceptable_codes:
131 raise get_exception_by_code(msg.int_info, err_msg)
92132 return msg
93133
94 def recv_into(self, buffer):
95 try:
96 msg = iRODSMessage.recv_into(self.socket, buffer)
97 except socket.error:
98 logger.error("Could not receive server response")
99 self.release(True)
100 raise NetworkException("Could not receive server response")
101
102 if msg.int_info < 0:
103 try:
104 err_msg = iRODSMessage(msg=msg.error).get_main_message(Error).RErrMsg_PI[0].msg
105 except TypeError:
106 raise get_exception_by_code(msg.int_info)
107 raise get_exception_by_code(msg.int_info, err_msg)
108
109 return msg
134 def recv_into(self, buffer, **options):
135 return self.recv( into_buffer = buffer, **options )
110136
111137 def __enter__(self):
112138 return self
146172 context = self.account.ssl_context
147173 except AttributeError:
148174 CA_file = getattr(self.account, 'ssl_ca_certificate_file', None)
175 verify_server_mode = getattr(self.account,'ssl_verify_server', 'hostname')
176 if verify_server_mode == 'none':
177 CA_file = None
149178 context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=CA_file)
179 if CA_file is None:
180 context.check_hostname = False
181 context.verify_mode = 0 # VERIFY_NONE
150182
151183 # Wrap socket with context
152184 wrapped_socket = context.wrap_socket(self.socket, server_hostname=host)
182214
183215 try:
184216 s = socket.create_connection(address, timeout)
217 self._disconnected = False
185218 except socket.error:
186219 raise NetworkException(
187220 "Could not connect to specified host and port: " +
188221 "{}:{}".format(*address))
189222
190223 self.socket = s
224
191225 main_message = StartupPack(
192226 (self.account.proxy_user, self.account.proxy_zone),
193 (self.account.client_user, self.account.client_zone)
227 (self.account.client_user, self.account.client_zone),
228 self.pool.application_name
194229 )
195230
196231 # No client-server negotiation
248283 return version_msg.get_main_message(VersionResponse)
249284
250285 def disconnect(self):
251 disconnect_msg = iRODSMessage(msg_type='RODS_DISCONNECT')
252 self.send(disconnect_msg)
286 # Moved the conditions for calling disconnect() inside the function.
287 # Added a new criterion for calling disconnect(): only call
288 # disconnect() if fileno is not -1 (fileno -1 indicates the socket
289 # is already closed). This makes it safe to call disconnect multiple
290 # times on the same connection. The first call cleans up the resources
291 # and subsequent calls are no-ops.
253292 try:
254 # SSL shutdown handshake
255 self.socket = self.socket.unwrap()
256 except AttributeError:
257 pass
258 self.socket.shutdown(socket.SHUT_RDWR)
259 self.socket.close()
260 self.socket = None
293 if self.socket and getattr(self, "_disconnected", False) == False and self.socket.fileno() != -1:
294 disconnect_msg = iRODSMessage(msg_type='RODS_DISCONNECT')
295 self.send(disconnect_msg)
296 try:
297 # SSL shutdown handshake
298 self.socket = self.socket.unwrap()
299 except AttributeError:
300 pass
301 self.socket.shutdown(socket.SHUT_RDWR)
302 self.socket.close()
303 finally:
304 self._disconnected = True # Issue 368 - because of undefined destruction order during interpreter shutdown,
305 self.socket = None # as well as the fact that unhandled exceptions are ignored in __del__, we'd at least
306 # like to ensure as much cleanup as possible, thus preventing the above socket shutdown
307 # procedure from running too many times and creating confusing messages
261308
262309 def recvall(self, n):
263310 # Helper function to recv n bytes or return None if EOF is hit
333380 def gsi_client_auth_request(self):
334381
335382 # Request for authentication with GSI on current user
336 message_body = GSIAuthMessage(
383
384 message_body = PluginAuthMessage(
337385 auth_scheme_=GSI_AUTH_PLUGIN,
338 context_='a_user=%s' % self.account.client_user
386 context_='%s=%s' % (AUTH_USER_KEY, self.account.client_user)
339387 )
340388 # GSI = 1201
341389 # https://github.com/irods/irods/blob/master/lib/api/include/apiNumber.h#L158
361409 username=self.account.proxy_user + '#' + self.account.proxy_zone
362410 )
363411 gsi_request = iRODSMessage(
364 msg_type='RODS_API_REQ', int_info=704, msg=gsi_msg)
412 msg_type='RODS_API_REQ', int_info=api_number['AUTH_RESPONSE_AN'], msg=gsi_msg)
365413 self.send(gsi_request)
366414 self.recv()
367415 # auth_response = self.recv()
379427 self.gsi_client_auth_response()
380428
381429 logger.info("GSI authorization validated")
430
431 def _login_pam(self):
432
433 time_to_live_in_seconds = 60
434
435 pam_password = PAM_PW_ESC_PATTERN.sub(lambda m: '\\'+m.group(1), self.account.password)
436
437 ctx_user = '%s=%s' % (AUTH_USER_KEY, self.account.client_user)
438 ctx_pwd = '%s=%s' % (AUTH_PWD_KEY, pam_password)
439 ctx_ttl = '%s=%s' % (AUTH_TTL_KEY, str(time_to_live_in_seconds))
440
441 ctx = ";".join([ctx_user, ctx_pwd, ctx_ttl])
442
443 if type(self.socket) is socket.socket:
444 if getattr(self,'DISALLOWING_PAM_PLAINTEXT',True):
445 raise PlainTextPAMPasswordError
446
447 Pam_Long_Tokens = (ALLOW_PAM_LONG_TOKENS and (len(ctx) >= MAX_NAME_LEN))
448
449 if Pam_Long_Tokens:
450
451 message_body = PamAuthRequest(
452 pamUser=self.account.client_user,
453 pamPassword=pam_password,
454 timeToLive=time_to_live_in_seconds)
455 else:
456
457 message_body = PluginAuthMessage(
458 auth_scheme_ = PAM_AUTH_SCHEME,
459 context_ = ctx)
460
461 auth_req = iRODSMessage(
462 msg_type='RODS_API_REQ',
463 msg=message_body,
464 int_info=(725 if Pam_Long_Tokens else 1201)
465 )
466
467 self.send(auth_req)
468 # Getting the new password
469 output_message = self.recv()
470
471 Pam_Response_Class = (PamAuthRequestOut if Pam_Long_Tokens
472 else AuthPluginOut)
473
474 auth_out = output_message.get_main_message( Pam_Response_Class )
475
476 self.disconnect()
477 self._connect()
478
479 if hasattr(self.account,'store_pw'):
480 drop = self.account.store_pw
481 if type(drop) is list:
482 drop[:] = [ auth_out.result_ ]
483
484 self._login_native(password=auth_out.result_)
485
486 logger.info("PAM authorization validated")
382487
383488 def read_file(self, desc, size=-1, buffer=None):
384489 if size < 0:
407512
408513 return response.bs
409514
410 def _login_native(self):
515 def _login_native(self, password=None):
516
517 # Default case, PAM login will send a new password
518 if password is None:
519 password = self.account.password or ''
411520
412521 # authenticate
413522 auth_req = iRODSMessage(msg_type='RODS_API_REQ', int_info=703)
429538 if six.PY3:
430539 challenge = challenge.strip()
431540 padded_pwd = struct.pack(
432 "%ds" % MAX_PASSWORD_LENGTH, self.account.password.encode(
541 "%ds" % MAX_PASSWORD_LENGTH, password.encode(
433542 'utf-8').strip())
434543 else:
435544 padded_pwd = struct.pack(
436 "%ds" % MAX_PASSWORD_LENGTH, self.account.password)
545 "%ds" % MAX_PASSWORD_LENGTH, password)
437546
438547 m = hashlib.md5()
439548 m.update(challenge)
446555 encoded_pwd_array = bytearray(encoded_pwd)
447556 encoded_pwd = bytes(encoded_pwd_array.replace(b'\x00', b'\x01'))
448557
558
449559 pwd_msg = AuthResponse(
450560 response=encoded_pwd, username=self.account.proxy_user)
451561 pwd_request = iRODSMessage(
452 msg_type='RODS_API_REQ', int_info=704, msg=pwd_msg)
562 msg_type='RODS_API_REQ', int_info=api_number['AUTH_RESPONSE_AN'], msg=pwd_msg)
453563 self.send(pwd_request)
454564 self.recv()
455565
503613
504614 self.send(message)
505615 self.recv()
616
617 def temp_password(self):
618 request = iRODSMessage("RODS_API_REQ", msg=None,
619 int_info=api_number['GET_TEMP_PASSWORD_AN'])
620
621 # Send and receive request
622 self.send(request)
623 response = self.recv()
624 logger.debug(response.int_info)
625
626 # Convert and return answer
627 msg = response.get_main_message(GetTempPasswordOut)
628 return obf.create_temp_password(msg.stringToHashWith, self.account.password)
22 import sys
33 import logging
44 import six
5 import os
6 import ast
57
68 from irods.models import DataObject
79 from irods.meta import iRODSMetaCollection
810 import irods.keywords as kw
11 from irods.api_number import api_number
12 from irods.message import (JSON_Message, iRODSMessage)
913
1014 logger = logging.getLogger(__name__)
1115
16 IRODS_SERVER_WITH_CLOSE_REPLICA_API = (4,2,9)
1217
1318 def chunks(f, chunksize=io.DEFAULT_BUFFER_SIZE):
1419 return iter(lambda: f.read(chunksize), b'')
2227
2328 class iRODSReplica(object):
2429
25 def __init__(self, number, status, resource_name, path, **kwargs):
30 def __init__(self, number, status, resource_name, path, resc_hier, **kwargs):
2631 self.number = number
2732 self.status = status
2833 self.resource_name = resource_name
2934 self.path = path
35 self.resc_hier = resc_hier
3036 for key, value in kwargs.items():
3137 setattr(self, key, value)
3238
6066 r[DataObject.replica_status],
6167 r[DataObject.resource_name],
6268 r[DataObject.path],
69 r[DataObject.resc_hier],
6370 checksum=r[DataObject.checksum],
6471 size=r[DataObject.size]
6572 ) for r in replicas]
6673 self._meta = None
74
75
76
6777
6878 def __repr__(self):
6979 return "<iRODSDataObject {id} {name}>".format(**vars(self))
7585 self.manager.sess.metadata, DataObject, self.path)
7686 return self._meta
7787
78 def open(self, mode='r', **options):
79 if kw.DEST_RESC_NAME_KW not in options:
80 options[kw.DEST_RESC_NAME_KW] = self.replicas[0].resource_name
81
82 return self.manager.open(self.path, mode, **options)
88 def open(self, mode='r', finalize_on_close = True, **options):
89 return self.manager.open(self.path, mode, finalize_on_close = finalize_on_close, **options)
90
91 def chksum(self, **options):
92 """
93 See: https://github.com/irods/irods/blob/4-2-stable/lib/api/include/dataObjChksum.h
94 for a list of applicable irods.keywords options.
95
96 NB options dict may also include a default-constructed RErrorStack object under the key r_error.
97 If passed, this object can receive a list of warnings, one for each existing replica lacking a
98 checksum. (Relevant only in combination with VERIFY_CHKSUM_KW).
99 """
100 return self.manager.chksum(self.path, **options)
101
102 def trim(self, **options):
103 self.manager.trim(self.path, **options)
83104
84105 def unlink(self, force=False, **options):
85106 self.manager.unlink(self.path, force, **options)
98119
99120 class iRODSDataObjectFileRaw(io.RawIOBase):
100121
101 def __init__(self, conn, descriptor, **options):
122 """The raw object supporting file-like operations (read/write/seek) for the
123 iRODSDataObject."""
124
125 def __init__(self, conn, descriptor, finalize_on_close = True, **options):
126 """
127 Constructor needs a connection and an iRODS data object descriptor. If the
128 finalize_on_close flag evaluates False, close() will invoke the REPLICA_CLOSE
129 API instead of closing and finalizing the object (useful for parallel
130 transfers using multiple threads).
131 """
132 super(iRODSDataObjectFileRaw,self).__init__()
102133 self.conn = conn
103134 self.desc = descriptor
104135 self.options = options
136 self.finalize_on_close = finalize_on_close
137
138 def replica_access_info(self):
139 message_body = JSON_Message( {'fd': self.desc},
140 server_version = self.conn.server_version )
141 message = iRODSMessage('RODS_API_REQ', msg = message_body,
142 int_info=api_number['GET_FILE_DESCRIPTOR_INFO_APN'])
143 self.conn.send(message)
144 result = None
145 try:
146 result = self.conn.recv()
147 except Exception as e:
148 logger.warning('''Couldn't receive or process response to GET_FILE_DESCRIPTOR_INFO_APN -- '''
149 '''caught: %r''',e)
150 raise
151 dobj_info = result.get_json_encoded_struct()
152 replica_token = dobj_info.get("replica_token","")
153 resc_hier = ( dobj_info.get("data_object_info") or {} ).get("resource_hierarchy","")
154 return (replica_token, resc_hier)
155
156 def _close_replica(self):
157 server_version = ast.literal_eval(os.environ.get('IRODS_VERSION_OVERRIDE', '()' ))
158 if (server_version or self.conn.server_version) < IRODS_SERVER_WITH_CLOSE_REPLICA_API: return False
159 message_body = JSON_Message( { "fd": self.desc,
160 "send_notification": False,
161 "update_size": False,
162 "update_status": False,
163 "compute_checksum": False },
164 server_version = self.conn.server_version )
165 self.conn.send( iRODSMessage('RODS_API_REQ', msg = message_body,
166 int_info=api_number['REPLICA_CLOSE_APN']) )
167 try:
168 self.conn.recv().int_info
169 except Exception:
170 logger.warning ('** ERROR on closing replica **')
171 raise
172 return True
105173
106174 def close(self):
107 self.conn.close_file(self.desc, **self.options)
175 if self.finalize_on_close or not self._close_replica():
176 self.conn.close_file(self.desc, **self.options)
108177 self.conn.release()
109178 super(iRODSDataObjectFileRaw, self).close()
110179 return None
33
44 from __future__ import absolute_import
55 import six
6 import numbers
7
8
69 class PycommandsException(Exception):
710 pass
811
2023
2124
2225 class CollectionDoesNotExist(DoesNotExist):
26 pass
27
28
29 class ZoneDoesNotExist(DoesNotExist):
2330 pass
2431
2532
6370 pass
6471
6572
66 def get_exception_by_code(code, message=None):
67 return iRODSExceptionMeta.codes[code](message)
73 def nominal_code( the_code, THRESHOLD = 1000 ):
74 nominal = []
75 c = rounded_code( the_code , nominal_int_repo = nominal )
76 negated = -abs(nominal[0])
77 return c if (negated <= -abs(THRESHOLD)) else negated # produce a negative for nonzero integer input
78
79 def rounded_code( the_code , nominal_int_repo = () ):
80 nom_err = None
81 try:
82 if isinstance(the_code,type) and \
83 issubclass(the_code, iRODSException): the_code = getattr( the_code, 'code', the_code )
84 if isinstance(the_code,str):
85 nom_err = globals()[the_code].code
86 return nom_err
87 elif isinstance(the_code,numbers.Integral):
88 nom_err = the_code
89 return 1000 * ((-abs(the_code) - 1) // 1000 + 1)
90 else:
91 message = 'Supplied code {the_code!r} must be integer or string'.format(**locals())
92 raise RuntimeError(message)
93 finally:
94 if nom_err is not None and isinstance(nominal_int_repo,list):
95 nominal_int_repo[:] = [nom_err]
96
97
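# Illustrative note (not part of the original source): server error codes may carry
# embedded sub-status digits, e.g. -1017001. rounded_code(-1017001) rounds this to the
# generic value -1017000, so get_exception_by_code(-1017001) returns a NO_RULE_FOUND_ERR
# instance whose .code attribute still records the exact value -1017001.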
98 def get_exception_class_by_code(code, name_only=False):
99 rounded = rounded_code (code) # rounded up to -1000 if code <= -1000
100 cls = iRODSExceptionMeta.codes.get( rounded )
101 return cls if not name_only \
102 else (cls.__name__ if cls is not None else 'Unknown_iRODS_error')
103
104
105 def get_exception_by_code(code, message = None):
106 exc_class = iRODSExceptionMeta.codes[ rounded_code( code ) ]
107 exc_instance = exc_class( message )
108 exc_instance.code = code
109 return exc_instance
110
111
112 class UnknowniRODSError(iRODSException):
113 code = 0 # covers rounded_code (errcode) if 0 > errcode > -1000
68114
69115
70116 class SystemException(iRODSException):
503549 code = -110000
504550
505551
552 class SYS_INVALID_INPUT_PARAM(SystemException):
553 code = -130000
554
555
556 class SYS_BAD_INPUT(iRODSException):
557 code = -158000
558
559
560 class SYS_REPLICA_DOES_NOT_EXIST(iRODSException):
561 code = -164000
562
563
564 class SYS_NOT_ALLOWED(iRODSException):
565 code = -169000
566
567
506568 class UserInputException(iRODSException):
507569 pass
508570
695757 code = -356000
696758
697759
760 class OBJ_PATH_DOES_NOT_EXIST(iRODSException):
761 code = -358000
762
763
764 class LOCKED_DATA_OBJECT_ACCESS(SystemException):
765 code = -406000
766
767
768 class USER_INCOMPATIBLE_PARAMS(iRODSException):
769 code = -402000
770
771
772 class CHECK_VERIFICATION_RESULTS(SystemException):
773 code = -407000
774
775
698776 class FileDriverException(iRODSException):
699777 pass
700778
9941072 class CATALOG_ALREADY_HAS_ITEM_BY_THAT_NAME(CatalogLibraryException):
9951073 code = -809000
9961074
1075 class CAT_NO_CHECKSUM_FOR_REPLICA (CatalogLibraryException):
1076 code = -862000
9971077
9981078 class CAT_INVALID_RESOURCE_TYPE(CatalogLibraryException):
9991079 code = -810000
11301210 class CAT_UNKNOWN_SPECIFIC_QUERY(CatalogLibraryException):
11311211 code = -853000
11321212
1213 class CAT_STATEMENT_TABLE_FULL(CatalogLibraryException):
1214 code = -860000
1215
11331216
11341217 class RDSException(iRODSException):
11351218 pass
11751258 code = -889000
11761259
11771260
1261 class TicketException(CatalogLibraryException):
1262 pass
1263
1264
1265 class CAT_TICKET_INVALID(TicketException):
1266 code = -890000
1267
1268
1269 class CAT_TICKET_EXPIRED(TicketException):
1270 code = -891000
1271
1272
11781273 class MiscException(iRODSException):
11791274 pass
11801275
14911586 code = -1016000
14921587
14931588
1589 class RE_TYPE_ERROR(RuleEngineException):
1590 code = -1230000
1591
1592
14941593 class NO_RULE_FOUND_ERR(RuleEngineException):
14951594 code = -1017000
14961595
18591958 code = -1110000
18601959
18611960
1961 class RULE_ENGINE_ERROR(RuleEngineException):
1962 code = -1828000
1963
1964
18621965 class PHPException(iRODSException):
18631966 pass
18641967
18731976
18741977 class PHP_OPEN_SCRIPT_FILE_ERR(PHPException):
18751978 code = -1602000
1979
1980 class PAMException(iRODSException):
1981 pass
1982
1983
1984 class PAM_AUTH_NOT_BUILT_INTO_CLIENT(PAMException):
1985 code = -991000
1986
1987
1988 class PAM_AUTH_NOT_BUILT_INTO_SERVER(PAMException):
1989 code = -992000
1990
1991
1992 class PAM_AUTH_PASSWORD_FAILED(PAMException):
1993 code = -993000
1994
1995
1996 class PAM_AUTH_PASSWORD_INVALID_TTL(PAMException):
1997 code = -994000
77 CLI_IN_SVR_FIREWALL_KW = "cliInSvrFirewall" # cli behind same firewall #
88 REG_CHKSUM_KW = "regChksum" # register checksum #
99 VERIFY_CHKSUM_KW = "verifyChksum" # verify checksum #
10 NO_COMPUTE_KW = "no_compute"
1011 VERIFY_BY_SIZE_KW = "verifyBySize" # verify by size - used by irsync #
1112 OBJ_PATH_KW = "objPath" # logical path of the object #
1213 RESC_NAME_KW = "rescName" # resource name #
209210 # =-=-=-=-=-=-=-
210211 # irods general keywords definitions
211212 RESC_HIER_STR_KW = "resc_hier"
213 REPLICA_TOKEN_KW = "replicaToken"
212214 DEST_RESC_HIER_STR_KW = "dest_resc_hier"
213215 IN_PDMO_KW = "in_pdmo"
214216 STAGE_OBJ_KW = "stage_object"
00 class Manager(object):
1
2 __server_version = ()
3
4 @property
5 def server_version(self):
6 if not self.__server_version:
7 p = self.sess.pool
8 if p is None : raise RuntimeError ("session not configured")
9 conn = getattr(p,"_conn",None) or p.get_connection()
10 if conn: self.__server_version = conn.server_version
11 return tuple( self.__server_version )
112
213 def __init__(self, sess):
314 self.sess = sess
33 from irods.manager import Manager
44 from irods.api_number import api_number
55 from irods.message import ModAclRequest, iRODSMessage
6 from irods.data_object import iRODSDataObject
6 from irods.data_object import ( iRODSDataObject, irods_dirname, irods_basename )
77 from irods.collection import iRODSCollection
8 from irods.models import (
9 DataObject, Collection, User, DataAccess, CollectionAccess, CollectionUser)
8 from irods.models import ( DataObject, Collection, User, CollectionUser,
9 DataAccess, CollectionAccess )
1010 from irods.access import iRODSAccess
11 from irods.column import In
12 from irods.user import iRODSUser
1113
14 import six
1215 import logging
1316
1417 logger = logging.getLogger(__name__)
1518
19 def users_by_ids(session,ids=()):
20 try:
21 ids=list(iter(ids))
22 except TypeError:
23 if type(ids) in (str,) + six.integer_types: ids=[int(ids)]
24 else: raise
25 cond = () if not ids \
26 else (In(User.id,list(map(int,ids))),) if len(ids)>1 \
27 else (User.id == int(ids[0]),)
28 return [ iRODSUser(session.users,i)
29 for i in session.query(User.id,User.name,User.type,User.zone).filter(*cond) ]
1630
1731 class AccessManager(Manager):
1832
19 def get(self, target):
33 def get(self, target, report_raw_acls = False, **kw):
34
35 if report_raw_acls:
36 return self.__get_raw(target, **kw) # prefer a behavior consistent with 'ils -A`
37
2038 # different query whether target is an object or a collection
2139 if type(target) == iRODSDataObject:
2240 access_type = DataAccess
4462 user_zone=row[user_type.zone]
4563 ) for row in results]
4664
65 def coll_access_query(self,path):
66 return self.sess.query(Collection, CollectionAccess).filter(Collection.name == path)
67
68 def data_access_query(self,path):
69 cn = irods_dirname(path)
70 dn = irods_basename(path)
71 return self.sess.query(DataObject, DataAccess).filter( Collection.name == cn, DataObject.name == dn )
72
73 def __get_raw(self, target, **kw):
74
75 ### sample usage: ###
76 #
77 # user_id_list = [] # simply to store the user id's from the discovered ACL's
78 # session.permissions.get( data_or_coll_target, report_raw_acls = True,
79 # acl_users = user_id_list,
80 # acl_users_transform = lambda u: u.id)
81 #
82 # -> returns list of iRODSAccess objects mapping one-to-one with ACL's stored in the catalog
83
84 users_out = kw.pop( 'acl_users', None )
85 T = kw.pop( 'acl_users_transform', lambda value : value )
86
87 # different choice of query based on whether target is an object or a collection
88 if isinstance(target, iRODSDataObject):
89 access_column = DataAccess
90 query_func = self.data_access_query
91
92 elif isinstance(target, iRODSCollection):
93 access_column = CollectionAccess
94 query_func = self.coll_access_query
95 else:
96 raise TypeError
97
98 rows = [ r for r in query_func(target.path) ]
99 userids = set( r[access_column.user_id] for r in rows )
100
101 user_lookup = { j.id:j for j in users_by_ids(self.sess, userids) }
102
103 if isinstance(users_out, dict): users_out.update (user_lookup)
104 elif isinstance (users_out, list): users_out += [T(v) for v in user_lookup.values()]
105 elif isinstance (users_out, set): users_out |= set(T(v) for v in user_lookup.values())
106 elif users_out is None: pass
107 else: raise TypeError
108
109 acls = [ iRODSAccess ( r[access_column.name],
110 target.path,
111 user_lookup[r[access_column.user_id]].name,
112 user_lookup[r[access_column.user_id]].zone ) for r in rows ]
113 return acls
114
115
47116 def set(self, acl, recursive=False, admin=False):
48117 prefix = 'admin:' if admin else ''
118
119 userName_=acl.user_name
120 zone_=acl.user_zone
121 if acl.access_name.endswith('inherit'): zone_ = userName_ = ''
49122
50123 message_body = ModAclRequest(
51124 recursiveFlag=int(recursive),
52125 accessLevel='{prefix}{access_name}'.format(prefix=prefix, **vars(acl)),
53 userName=acl.user_name,
54 zone=acl.user_zone,
126 userName=userName_,
127 zone=zone_,
55128 path=acl.path
56129 )
57130 request = iRODSMessage("RODS_API_REQ", msg=message_body,
00 from __future__ import absolute_import
1 from irods.models import Collection
1 from irods.models import Collection, DataObject
22 from irods.manager import Manager
33 from irods.message import iRODSMessage, CollectionRequest, FileOpenRequest, ObjCopyRequest, StringStringMap
44 from irods.exception import CollectionDoesNotExist, NoResultFound
1111 class CollectionManager(Manager):
1212
1313 def get(self, path):
14 query = self.sess.query(Collection).filter(Collection.name == path)
15 try:
16 result = query.one()
17 except NoResultFound:
18 raise CollectionDoesNotExist()
19 return iRODSCollection(self, result)
14 path = iRODSCollection.normalize_path( path )
15 filters = [Collection.name == path]
16 # if a ticket is supplied for this session, try both without and with DataObject join
17 repeats = (True,False) if hasattr(self.sess,'ticket__') \
18 else (False,)
19 for rep in repeats:
20 query = self.sess.query(Collection).filter(*filters)
21 try:
22 result = query.one()
23 except NoResultFound:
24 if rep:
25 filters += [DataObject.id != 0]
26 continue
27 raise CollectionDoesNotExist()
28 return iRODSCollection(self, result)
2029
2130
2231 def create(self, path, recurse=True, **options):
32 path = iRODSCollection.normalize_path( path )
2333 if recurse:
2434 options[kw.RECURSIVE_OPR__KW] = ''
2535
00 from __future__ import absolute_import
11 import os
22 import io
3 from irods.models import DataObject
3 from irods.models import DataObject, Collection
44 from irods.manager import Manager
55 from irods.message import (
6 iRODSMessage, FileOpenRequest, ObjCopyRequest, StringStringMap, DataObjInfo, ModDataObjMeta)
6 iRODSMessage, FileOpenRequest, ObjCopyRequest, StringStringMap, DataObjInfo, ModDataObjMeta,
7 DataObjChksumRequest, DataObjChksumResponse, RErrorStack)
78 import irods.exception as ex
89 from irods.api_number import api_number
10 from irods.collection import iRODSCollection
911 from irods.data_object import (
1012 iRODSDataObject, iRODSDataObjectFileRaw, chunks, irods_dirname, irods_basename)
1113 import irods.keywords as kw
14 import irods.parallel as parallel
15 from irods.parallel import deferred_call
16 import six
17 import ast
18 import json
19 import logging
20
21
22 MAXIMUM_SINGLE_THREADED_TRANSFER_SIZE = 32 * ( 1024 ** 2)
23
24 DEFAULT_NUMBER_OF_THREADS = 0 # Default to a reasonable number of threads -- optimized to be
25 # performant while allowing no more worker threads than available CPUs.
26
27 DEFAULT_QUEUE_DEPTH = 32
28
29
30 class Server_Checksum_Warning(Exception):
31 """Error from iRODS server indicating some replica checksums are missing or incorrect."""
32 def __init__(self,json_response):
33 """Initialize the exception object with a checksum field from the server response message."""
34 super(Server_Checksum_Warning,self).__init__()
35 self.response = json.loads(json_response)
1236
1337
1438 class DataObjectManager(Manager):
2549 O_EXCL = 128
2650 O_TRUNC = 512
2751
28 def _download(self, obj, local_path, **options):
52
53 def should_parallelize_transfer( self,
54 num_threads = 0,
55 obj_sz = 1+MAXIMUM_SINGLE_THREADED_TRANSFER_SIZE,
56 server_version_hint = (),
57 measured_obj_size = () ## output variable. If a list is provided, it shall
58 # be truncated to contain one value, the size of the
59 # seekable object (if one is provided for `obj_sz').
60 ):
61
62 # Allow an environment variable to override the detection of the server version.
63 # Example: $ export IRODS_VERSION_OVERRIDE="4,2,9" ; python -m irods.parallel ...
64 # ---
65 # Delete the following line on resolution of https://github.com/irods/irods/issues/5932 :
66 if getattr(self.sess,'ticket__',None) is not None: return False
67 server_version = ( ast.literal_eval(os.environ.get('IRODS_VERSION_OVERRIDE', '()' )) or server_version_hint or
68 self.server_version )
69 if num_threads == 1 or ( server_version < parallel.MINIMUM_SERVER_VERSION ):
70 return False
71 if getattr(obj_sz,'seek',None) :
72 pos = obj_sz.tell()
73 size = obj_sz.seek(0,os.SEEK_END)
74 if not isinstance(size,six.integer_types):
75 size = obj_sz.tell()
76 obj_sz.seek(pos,os.SEEK_SET)
77 if isinstance(measured_obj_size,list): measured_obj_size[:] = [size]
78 else:
79 size = obj_sz
80 assert (size > -1)
81 return size > MAXIMUM_SINGLE_THREADED_TRANSFER_SIZE
82
83
84 def _download(self, obj, local_path, num_threads, **options):
85 """Transfer the contents of a data object to a local file.
86
87 Called from get() when a local path is named.
88 """
2989 if os.path.isdir(local_path):
30 file = os.path.join(local_path, irods_basename(obj))
90 local_file = os.path.join(local_path, irods_basename(obj))
3191 else:
32 file = local_path
33
34 # Check for force flag if file exists
35 if os.path.exists(file) and kw.FORCE_FLAG_KW not in options:
92 local_file = local_path
93
94 # Check for force flag if local_file exists
95 if os.path.exists(local_file) and kw.FORCE_FLAG_KW not in options:
3696 raise ex.OVERWRITE_WITHOUT_FORCE_FLAG
3797
38 with open(file, 'wb') as f, self.open(obj, 'r', **options) as o:
39 for chunk in chunks(o, self.READ_BUFFER_SIZE):
40 f.write(chunk)
41
42
43 def get(self, path, file=None, **options):
98 with open(local_file, 'wb') as f, self.open(obj, 'r', **options) as o:
99
100 if self.should_parallelize_transfer (num_threads, o):
101 f.close()
102 if not self.parallel_get( (obj,o), local_path, num_threads = num_threads,
103 target_resource_name = options.get(kw.RESC_NAME_KW,'')):
104 raise RuntimeError("parallel get failed")
105 else:
106 for chunk in chunks(o, self.READ_BUFFER_SIZE):
107 f.write(chunk)
108
109
110 def get(self, path, local_path = None, num_threads = DEFAULT_NUMBER_OF_THREADS, **options):
111 """
112 Get a reference to the data object at the specified `path'.
113
114 Only download the object if the local_path is a string (specifying
115 a path in the local filesystem to use as a destination file).
116 """
44117 parent = self.sess.collections.get(irods_dirname(path))
45118
46119 # TODO: optimize
47 if file:
48 self._download(path, file, **options)
120 if local_path:
121 self._download(path, local_path, num_threads = num_threads, **options)
49122
50123 query = self.sess.query(DataObject)\
51124 .filter(DataObject.name == irods_basename(path))\
52 .filter(DataObject.collection_id == parent.id)
125 .filter(DataObject.collection_id == parent.id)\
126 .add_keyword(kw.ZONE_KW, path.split('/')[1])
127
128 if hasattr(self.sess,'ticket__'):
129 query = query.filter(Collection.id != 0) # a no-op, but necessary because CAT_SQL_ERR results if the ticket
130 # is for a DataObject and we don't explicitly join to Collection
131
53132 results = query.all() # get up to max_rows replicas
54133 if len(results) <= 0:
55134 raise ex.DataObjectDoesNotExist()
56135 return iRODSDataObject(self, parent, results)
57136
58137
59 def put(self, file, irods_path, **options):
60 if irods_path.endswith('/'):
61 obj = irods_path + os.path.basename(file)
138 def put(self, local_path, irods_path, return_data_object = False, num_threads = DEFAULT_NUMBER_OF_THREADS, **options):
139
140 if self.sess.collections.exists(irods_path):
141 obj = iRODSCollection.normalize_path(irods_path, os.path.basename(local_path))
62142 else:
63143 obj = irods_path
64144
65 # Set operation type to trigger acPostProcForPut
66 if kw.OPR_TYPE_KW not in options:
67 options[kw.OPR_TYPE_KW] = 1 # PUT_OPR
68
69 with open(file, 'rb') as f, self.open(obj, 'w', **options) as o:
70 for chunk in chunks(f, self.WRITE_BUFFER_SIZE):
71 o.write(chunk)
72
145 with open(local_path, 'rb') as f:
146 sizelist = []
147 if self.should_parallelize_transfer (num_threads, f, measured_obj_size = sizelist):
148 o = deferred_call( self.open, (obj, 'w'), options)
149 f.close()
150 if not self.parallel_put( local_path, (obj,o), total_bytes = sizelist[0], num_threads = num_threads,
151 target_resource_name = options.get(kw.RESC_NAME_KW,'') or
152 options.get(kw.DEST_RESC_NAME_KW,''),
153 open_options = options ):
154 raise RuntimeError("parallel put failed")
155 else:
156 with self.open(obj, 'w', **options) as o:
157 # Set operation type to trigger acPostProcForPut
158 if kw.OPR_TYPE_KW not in options:
159 options[kw.OPR_TYPE_KW] = 1 # PUT_OPR
160 for chunk in chunks(f, self.WRITE_BUFFER_SIZE):
161 o.write(chunk)
73162 if kw.ALL_KW in options:
74 options[kw.UPDATE_REPL_KW] = ''
75 self.replicate(obj, **options)
76
77
78 def create(self, path, resource=None, **options):
163 repl_options = options.copy()
164 repl_options[kw.UPDATE_REPL_KW] = ''
165 # Leaving REG_CHKSUM_KW set would raise the error:
166 # Requested to register checksum without verifying, but source replica has a checksum. This can result
167 # in multiple replicas being marked good with different checksums, which is an inconsistency.
168 del repl_options[kw.REG_CHKSUM_KW]
169 self.replicate(obj, **repl_options)
170
171
172 if return_data_object:
173 return self.get(obj)
174
175
176 def chksum(self, path, **options):
177 """
178 See: https://github.com/irods/irods/blob/4-2-stable/lib/api/include/dataObjChksum.h
179 for a list of applicable irods.keywords options.
180 """
181 r_error_stack = options.pop('r_error',None)
182 message_body = DataObjChksumRequest(path, **options)
183 message = iRODSMessage('RODS_API_REQ', msg=message_body,
184 int_info=api_number['DATA_OBJ_CHKSUM_AN'])
185 checksum = ""
186 msg_retn = []
187 with self.sess.pool.get_connection() as conn:
188 conn.send(message)
189 try:
190 response = conn.recv(return_message = msg_retn)
191 except ex.CHECK_VERIFICATION_RESULTS as exc:
192 # We'll get a response in the client to help qualify or elaborate on the error thrown.
193 if msg_retn: response = msg_retn[0]
194 logging.warning("Exception checksumming data object %r - %r",path,exc)
195 if 'response' in locals():
196 try:
197 results = response.get_main_message(DataObjChksumResponse, r_error = r_error_stack)
198 checksum = results.myStr.strip()
199 if checksum[0] in ( '[','{' ): # in iRODS 4.2.11 and later, myStr is in JSON format.
200 exc = Server_Checksum_Warning( checksum )
201 if not r_error_stack:
202 r_error_stack.fill(exc.response)
203 raise exc
204 except iRODSMessage.ResponseNotParseable:
205 # response.msg is None when VERIFY_CHKSUM_KW is used
206 pass
207 return checksum
208
209
210 def parallel_get(self,
211 data_or_path_ ,
212 file_ ,
213 async_ = False,
214 num_threads = 0,
215 target_resource_name = '',
216 progressQueue = False):
217 """Call into the irods.parallel library for multi-1247 GET.
218
219 Called from a session.data_objects.get(...) (via the _download method) on
220 the condition that the data object is determined to be of appropriate size
221 for parallel download.
222
223 """
224 return parallel.io_main( self.sess, data_or_path_, parallel.Oper.GET | (parallel.Oper.NONBLOCKING if async_ else 0), file_,
225 num_threads = num_threads, target_resource_name = target_resource_name,
226 queueLength = (DEFAULT_QUEUE_DEPTH if progressQueue else 0))
227
228 def parallel_put(self,
229 file_ ,
230 data_or_path_ ,
231 async_ = False,
232 total_bytes = -1,
233 num_threads = 0,
234 target_resource_name = '',
235 open_options = {},
236 progressQueue = False):
237 """Call into the irods.parallel library for multi-1247 PUT.
238
239 Called from a session.data_objects.put(...) on the condition that the
240 data object is determined to be of appropriate size for parallel upload.
241
242 """
243 return parallel.io_main( self.sess, data_or_path_, parallel.Oper.PUT | (parallel.Oper.NONBLOCKING if async_ else 0), file_,
244 num_threads = num_threads, total_bytes = total_bytes, target_resource_name = target_resource_name,
245 open_options = open_options,
246 queueLength = (DEFAULT_QUEUE_DEPTH if progressQueue else 0)
247 )
248
249
250 def create(self, path, resource=None, force=False, **options):
79251 options[kw.DATA_TYPE_KW] = 'generic'
80252
81253 if resource:
87259 except AttributeError:
88260 pass
89261
262 if force:
263 options[kw.FORCE_FLAG_KW] = ''
264
90265 message_body = FileOpenRequest(
91266 objPath=path,
92267 createMode=0o644,
109284 return self.get(path)
110285
111286
112 def open(self, path, mode, **options):
287 def open_with_FileRaw(self, *arg, **kw_options):
288 holder = []
289 handle = self.open(*arg,_raw_fd_holder=holder,**kw_options)
290 return (handle, holder[-1])
291
292 def open(self, path, mode, create = True, finalize_on_close = True, **options):
293 _raw_fd_holder = options.get('_raw_fd_holder',[])
113294 if kw.DEST_RESC_NAME_KW not in options:
114295 # Use client-side default resource if available
115296 try:
116297 options[kw.DEST_RESC_NAME_KW] = self.sess.default_resource
117298 except AttributeError:
118299 pass
119
300 createFlag = self.O_CREAT if create else 0
120301 flags, seek_to_end = {
121302 'r': (self.O_RDONLY, False),
122303 'r+': (self.O_RDWR, False),
123 'w': (self.O_WRONLY | self.O_CREAT | self.O_TRUNC, False),
124 'w+': (self.O_RDWR | self.O_CREAT | self.O_TRUNC, False),
125 'a': (self.O_WRONLY | self.O_CREAT, True),
126 'a+': (self.O_RDWR | self.O_CREAT, True),
304 'w': (self.O_WRONLY | createFlag | self.O_TRUNC, False),
305 'w+': (self.O_RDWR | createFlag | self.O_TRUNC, False),
306 'a': (self.O_WRONLY | createFlag, True),
307 'a+': (self.O_RDWR | createFlag, True),
127308 }[mode]
128309 # TODO: Use seek_to_end
129310
149330 conn.send(message)
150331 desc = conn.recv().int_info
151332
152 return io.BufferedRandom(iRODSDataObjectFileRaw(conn, desc, **options))
333 raw = iRODSDataObjectFileRaw(conn, desc, finalize_on_close = finalize_on_close, **options)
334 (_raw_fd_holder).append(raw)
335 return io.BufferedRandom(raw)
336
337
338 def trim(self, path, **options):
339
340 message_body = FileOpenRequest(
341 objPath=path,
342 createMode=0,
343 openFlags=0,
344 offset=0,
345 dataSize=-1,
346 numThreads=self.sess.numThreads,
347 KeyValPair_PI=StringStringMap(options),
348 )
349 message = iRODSMessage('RODS_API_REQ', msg=message_body,
350 int_info=api_number['DATA_OBJ_TRIM_AN'])
351
352 with self.sess.pool.get_connection() as conn:
353 conn.send(message)
354 response = conn.recv()
153355
154356
155357 def unlink(self, path, force=False, **options):
0 from __future__ import print_function
01 from __future__ import absolute_import
12 import logging
3 import copy
24 from os.path import dirname, basename
35
46 from irods.manager import Manager
5 from irods.message import MetadataRequest, iRODSMessage
7 from irods.message import (MetadataRequest, iRODSMessage, JSON_Message)
68 from irods.api_number import api_number
79 from irods.models import (DataObject, Collection, Resource,
810 User, DataObjectMeta, CollectionMeta, ResourceMeta, UserMeta)
9 from irods.meta import iRODSMeta
11 from irods.meta import iRODSMeta, AVUOperation
12 import irods.keywords as kw
13
1014
1115 logger = logging.getLogger(__name__)
1216
1317
18 class InvalidAtomicAVURequest(Exception): pass
19
20
1421 class MetadataManager(Manager):
22
23 @property
24 def use_timestamps(self):
25 return getattr(self,'_use_ts',False)
26
27 __kw = {} # default (empty) keywords
28
29 def _updated_keywords(self,opts):
30 kw_ = self.__kw.copy()
31 kw_.update(opts)
32 return kw_
33
34 def __call__(self, admin = False, timestamps = False, **irods_kw_opt):
35 if admin:
36 irods_kw_opt.update([(kw.ADMIN_KW,"")])
37 new_self = copy.copy(self)
38 new_self._use_ts = timestamps
39 new_self.__kw = irods_kw_opt
40 return new_self
1541
1642 @staticmethod
1743 def _model_class_to_resource_type(model_cls):
1844 return {
1945 DataObject: 'd',
20 Collection: 'c',
21 Resource: 'r',
46 Collection: 'C',
47 Resource: 'R',
2248 User: 'u',
2349 }[model_cls]
2450
51 @staticmethod
52 def _model_class_to_resource_description(model_cls):
53 return {
54 DataObject: 'data_object',
55 Collection: 'collection',
56 Resource: 'resource',
57 User: 'user',
58 }[model_cls]
59
2560 def get(self, model_cls, path):
2661 resource_type = self._model_class_to_resource_type(model_cls)
2762 model = {
2863 'd': DataObjectMeta,
29 'c': CollectionMeta,
30 'r': ResourceMeta,
64 'C': CollectionMeta,
65 'R': ResourceMeta,
3166 'u': UserMeta
3267 }[resource_type]
3368 conditions = {
3570 Collection.name == dirname(path),
3671 DataObject.name == basename(path)
3772 ],
38 'c': [Collection.name == path],
39 'r': [Resource.name == path],
73 'C': [Collection.name == path],
74 'R': [Resource.name == path],
4075 'u': [User.name == path]
4176 }[resource_type]
42 results = self.sess.query(model.id, model.name, model.value, model.units)\
43 .filter(*conditions).all()
77
78 columns = (model.id, model.name, model.value, model.units)
79 if self.use_timestamps:
80 columns += (model.create_time, model.modify_time)
81 results = self.sess.query(*columns).filter(*conditions).all()
82
83 def meta_opts(row):
84 opts = {'avu_id': row[model.id]}
85 if self.use_timestamps:
86 opts.update(create_time = row[model.create_time], modify_time=row[model.modify_time])
87 return opts
88
4489 return [iRODSMeta(
45 row[model.name],
46 row[model.value],
47 row[model.units],
48 avu_id=row[model.id]
49 ) for row in results]
50
51 def add(self, model_cls, path, meta):
90 row[model.name],
91 row[model.value],
92 row[model.units],
93 **meta_opts(row))
94 for row in results]
95
96 def add(self, model_cls, path, meta, **opts):
5297 # Avoid sending request with empty argument(s)
5398 if not(len(path) and len(meta.name) and len(meta.value)):
5499 raise ValueError('Empty value in ' + repr(meta))
60105 path,
61106 meta.name,
62107 meta.value,
63 meta.units
64 )
65 request = iRODSMessage("RODS_API_REQ", msg=message_body,
66 int_info=api_number['MOD_AVU_METADATA_AN'])
67 with self.sess.pool.get_connection() as conn:
68 conn.send(request)
69 response = conn.recv()
70 logger.debug(response.int_info)
71
72 def remove(self, model_cls, path, meta):
108 meta.units,
109 **self._updated_keywords(opts)
110 )
111 request = iRODSMessage("RODS_API_REQ", msg=message_body,
112 int_info=api_number['MOD_AVU_METADATA_AN'])
113 with self.sess.pool.get_connection() as conn:
114 conn.send(request)
115 response = conn.recv()
116 logger.debug(response.int_info)
117
118 def remove(self, model_cls, path, meta, **opts):
73119 resource_type = self._model_class_to_resource_type(model_cls)
74120 message_body = MetadataRequest(
75121 "rm",
77123 path,
78124 meta.name,
79125 meta.value,
80 meta.units
81 )
82 request = iRODSMessage("RODS_API_REQ", msg=message_body,
83 int_info=api_number['MOD_AVU_METADATA_AN'])
84 with self.sess.pool.get_connection() as conn:
85 conn.send(request)
86 response = conn.recv()
87 logger.debug(response.int_info)
88
89 def copy(self, src_model_cls, dest_model_cls, src, dest):
126 meta.units,
127 **self._updated_keywords(opts)
128 )
129 request = iRODSMessage("RODS_API_REQ", msg=message_body,
130 int_info=api_number['MOD_AVU_METADATA_AN'])
131 with self.sess.pool.get_connection() as conn:
132 conn.send(request)
133 response = conn.recv()
134 logger.debug(response.int_info)
135
136 def copy(self, src_model_cls, dest_model_cls, src, dest, **opts):
90137 src_resource_type = self._model_class_to_resource_type(src_model_cls)
91138 dest_resource_type = self._model_class_to_resource_type(dest_model_cls)
92139 message_body = MetadataRequest(
94141 "-" + src_resource_type,
95142 "-" + dest_resource_type,
96143 src,
97 dest
98 )
99 request = iRODSMessage("RODS_API_REQ", msg=message_body,
100 int_info=api_number['MOD_AVU_METADATA_AN'])
101
102 with self.sess.pool.get_connection() as conn:
103 conn.send(request)
104 response = conn.recv()
105 logger.debug(response.int_info)
106
107 def set(self, model_cls, path, meta):
144 dest,
145 **self._updated_keywords(opts)
146 )
147 request = iRODSMessage("RODS_API_REQ", msg=message_body,
148 int_info=api_number['MOD_AVU_METADATA_AN'])
149
150 with self.sess.pool.get_connection() as conn:
151 conn.send(request)
152 response = conn.recv()
153 logger.debug(response.int_info)
154
155 def set(self, model_cls, path, meta, **opts):
108156 resource_type = self._model_class_to_resource_type(model_cls)
109157 message_body = MetadataRequest(
110158 "set",
112160 path,
113161 meta.name,
114162 meta.value,
115 meta.units
116 )
117 request = iRODSMessage("RODS_API_REQ", msg=message_body,
118 int_info=api_number['MOD_AVU_METADATA_AN'])
119 with self.sess.pool.get_connection() as conn:
120 conn.send(request)
121 response = conn.recv()
122 logger.debug(response.int_info)
163 meta.units,
164 **self._updated_keywords(opts)
165 )
166 request = iRODSMessage("RODS_API_REQ", msg=message_body,
167 int_info=api_number['MOD_AVU_METADATA_AN'])
168 with self.sess.pool.get_connection() as conn:
169 conn.send(request)
170 response = conn.recv()
171 logger.debug(response.int_info)
172
173 @staticmethod
174 def _avu_operation_to_dict( op ):
175 opJSON = { "operation": op.operation,
176 "attribute": op.avu.name,
177 "value": op.avu.value
178 }
179 if op.avu.units not in ("",None):
180 opJSON["units"] = op.avu.units
181 return opJSON
182
183 def apply_atomic_operations(self, model_cls, path, *avu_ops):
184 if not all(isinstance(op,AVUOperation) for op in avu_ops):
185 raise InvalidAtomicAVURequest("avu_ops must contain 1 or more AVUOperations")
186 request = {
187 "entity_name": path,
188 "entity_type": self._model_class_to_resource_description(model_cls),
189 "operations" : [self._avu_operation_to_dict(op) for op in avu_ops]
190 }
191 self._call_atomic_metadata_api(request)
192
193 def _call_atomic_metadata_api(self, request_text):
194 with self.sess.pool.get_connection() as conn:
195 request_msg = iRODSMessage("RODS_API_REQ", JSON_Message( request_text, conn.server_version ),
196 int_info=api_number['ATOMIC_APPLY_METADATA_OPERATIONS_APN'])
197 conn.send( request_msg )
198 response = conn.recv()
199 response_msg = response.get_json_encoded_struct()
200 logger.debug("in atomic_metadata, server responded with: %r",response_msg)
00 from __future__ import absolute_import
11 import logging
2 import os
23
34 from irods.models import User, UserGroup
45 from irods.manager import Manager
5 from irods.message import GeneralAdminRequest, iRODSMessage
6 from irods.exception import UserDoesNotExist, UserGroupDoesNotExist, NoResultFound
6 from irods.message import UserAdminRequest, GeneralAdminRequest, iRODSMessage, GetTempPasswordForOtherRequest, GetTempPasswordForOtherOut
7 from irods.exception import UserDoesNotExist, UserGroupDoesNotExist, NoResultFound, CAT_SQL_ERR
78 from irods.api_number import api_number
89 from irods.user import iRODSUser, iRODSUserGroup
910 import irods.password_obfuscation as obf
2930 message_body = GeneralAdminRequest(
3031 "add",
3132 "user",
32 user_name,
33 user_name if not user_zone or user_zone == self.sess.zone \
34 else "{}#{}".format(user_name,user_zone),
3335 user_type,
3436 user_zone,
3537 auth_str
5456 with self.sess.pool.get_connection() as conn:
5557 conn.send(request)
5658 response = conn.recv()
59 logger.debug(response.int_info)
60
61 def temp_password_for_user(self, user_name):
62 with self.sess.pool.get_connection() as conn:
63 message_body = GetTempPasswordForOtherRequest(
64 targetUser=user_name,
65 unused=None
66 )
67 request = iRODSMessage("RODS_API_REQ", msg=message_body,
68 int_info=api_number['GET_TEMP_PASSWORD_FOR_OTHER_AN'])
69
70 # Send request
71 conn.send(request)
72
73 # Receive answer
74 try:
75 response = conn.recv()
76 logger.debug(response.int_info)
77 except CAT_SQL_ERR:
78 raise UserDoesNotExist()
79
80 # Convert and return answer
81 msg = response.get_main_message(GetTempPasswordForOtherOut)
82 return obf.create_temp_password(msg.stringToHashWith, conn.account.password)
83
84
85 class EnvStoredPasswordNotEdited(RuntimeError):
86
87 """
88 Error thrown by a password change attempt if a login password encoded in the
89 irods environment could not be updated.
90
91 This error will be seen when `modify_irods_authentication_file' is set to True and the
92 session's authentication scheme is anything other than iRODS native authentication
93 using a password loaded from the client environment.
94 """
95
96 pass
97
98 @staticmethod
99 def abspath_exists(path):
100 return (isinstance(path,str) and
101 os.path.isabs(path) and
102 os.path.exists(path))
103
104 def modify_password(self, old_value, new_value, modify_irods_authentication_file = False):
105
106 """
107 Change the password for the current user (in the manner of `ipasswd').
108
109 Parameters:
110 old_value - the currently valid (old) password
111 new_value - the desired (new) password
112 modify_irods_authentication_file - Can be False, True, or a string. If a string, it should indicate
113 the absolute path of an IRODS_AUTHENTICATION_FILE to be altered.
114 """
115 with self.sess.pool.get_connection() as conn:
116
117 hash_new_value = obf.obfuscate_new_password(new_value, old_value, conn.client_signature)
118
119 message_body = UserAdminRequest(
120 "userpw",
121 self.sess.username,
122 "password",
123 hash_new_value
124 )
125 request = iRODSMessage("RODS_API_REQ", msg=message_body,
126 int_info=api_number['USER_ADMIN_AN'])
127
128 conn.send(request)
129 response = conn.recv()
130 if modify_irods_authentication_file:
131 auth_file = self.sess.auth_file
132 if not auth_file or isinstance(modify_irods_authentication_file, str):
133 auth_file = (modify_irods_authentication_file if self.abspath_exists(modify_irods_authentication_file) else '')
134 if not auth_file:
135 message = "Session not loaded from an environment file."
136 raise UserManager.EnvStoredPasswordNotEdited(message)
137 else:
138 with open(auth_file) as f:
139 stored_pw = obf.decode(f.read())
140 if stored_pw != old_value:
141 message = "Not changing contents of '{}' - "\
142 "stored password is non-native or false match to old password".format(auth_file)
143 raise UserManager.EnvStoredPasswordNotEdited(message)
144 with open(auth_file,'w') as f:
145 f.write(obf.encode(new_value))
146
57147 logger.debug(response.int_info)
58148
59149 def modify(self, user_name, option, new_value, user_zone=""):
0 from __future__ import absolute_import
1 import logging
2
3 from irods.models import Zone
4 from irods.zone import iRODSZone
5 from irods.manager import Manager
6 from irods.message import GeneralAdminRequest, iRODSMessage
7 from irods.api_number import api_number
8 from irods.exception import ZoneDoesNotExist, NoResultFound
9
10 logger = logging.getLogger(__name__)
11
12 class ZoneManager(Manager):
13
14 def get(self, zone_name):
15 query = self.sess.query(Zone).filter(Zone.name == zone_name)
16
17 try:
18 result = query.one()
19 except NoResultFound:
20 raise ZoneDoesNotExist()
21 return iRODSZone(self, result)
22
23 def create(self, zone_name, zone_type):
24 message_body = GeneralAdminRequest(
25 "add",
26 "zone",
27 zone_name,
28 zone_type,
29 )
30 request = iRODSMessage("RODS_API_REQ", msg=message_body,
31 int_info=api_number['GENERAL_ADMIN_AN'])
32 with self.sess.pool.get_connection() as conn:
33 conn.send(request)
34 response = conn.recv()
35 logger.debug(response.int_info)
36 return self.get(zone_name)
37
38 def remove(self, zone_name):
39 message_body = GeneralAdminRequest(
40 "rm",
41 "zone",
42 zone_name
43 )
44 request = iRODSMessage("RODS_API_REQ", msg=message_body,
45 int_info=api_number['GENERAL_ADMIN_AN'])
46 with self.sess.pool.get_connection() as conn:
47 conn.send(request)
48 response = conn.recv()
49 logger.debug(response.int_info)
0 """Define objects related to communication with iRODS server API endpoints."""
1
2 import sys
03 import struct
14 import logging
25 import socket
3 import xml.etree.ElementTree as ET
6 import json
7 from six.moves import builtins
8 import irods.exception as ex
9 import xml.etree.ElementTree as ET_xml
10 import defusedxml.ElementTree as ET_secure_xml
11 from . import quasixml as ET_quasi_xml
12 from collections import namedtuple
13 import os
14 import ast
15 import threading
416 from irods.message.message import Message
517 from irods.message.property import (BinaryProperty, StringProperty,
618 IntegerProperty, LongProperty, ArrayProperty,
719 SubmessageProperty)
20
21 _TUPLE_LIKE_TYPES = (tuple, list)
22
23 def _qxml_server_version( var ):
24 val = os.environ.get( var, '()' )
25 vsn = (val and ast.literal_eval( val ))
26 if not isinstance( vsn, _TUPLE_LIKE_TYPES ): return None
27 return tuple( vsn )
28
29 if sys.version_info >= (3,):
30 import enum
31 class XML_Parser_Type(enum.Enum):
32 _invalid = 0
33 STANDARD_XML = 1
34 QUASI_XML = 2
35 SECURE_XML = 3
36 else:
37 class MyIntEnum(int):
38 """An integer enum class suited to the purpose. A shim until we get rid of Python2."""
39 def __init__(self,i):
40 """Initialize based on an integer or another instance."""
41 super(MyIntEnum,self).__init__()
42 try:self.i = i._value()
43 except AttributeError:
44 self.i = i
45 def _value(self): return self.i
46 @builtins.property
47 def value(self): return self._value()
48
49 class XML_Parser_Type(MyIntEnum):
50 """An enum specifying which XML parser is active."""
51 pass
52 XML_Parser_Type.STANDARD_XML = XML_Parser_Type (1)
53 XML_Parser_Type.QUASI_XML = XML_Parser_Type (2)
54 XML_Parser_Type.SECURE_XML = XML_Parser_Type (3)
55
56 # We maintain values on a per-thread basis of:
57 # - the server version with which we're communicating
58 # - which of the available parsers (STANDARD_XML, QUASI_XML, or SECURE_XML) we're using
59
60 _thrlocal = threading.local()
61
62 # The packStruct message parser defaults to STANDARD_XML but we can override it by setting the
63 # environment variable PYTHON_IRODSCLIENT_DEFAULT_XML to either 'SECURE_XML' or 'QUASI_XML'.
64 # If QUASI_XML is selected, the environment variable PYTHON_IRODSCLIENT_QUASI_XML_SERVER_VERSION
65 # may also be set to a tuple "X,Y,Z" to inform the client of the connected iRODS server version.
66 # If we set a value for the version, it can be either:
67 # * 4,2,8 to work with that server version and older ones which incorrectly encoded back-ticks as '&apos;'
68 # * an empty tuple "()" or something >= 4,2,9 to work with newer servers to allow a flexible character
69 # set within the iRODS protocol.
70
71 class BadXMLSpec(RuntimeError): pass
72
73 _Quasi_Xml_Server_Version = _qxml_server_version('PYTHON_IRODSCLIENT_QUASI_XML_SERVER_VERSION')
74 if _Quasi_Xml_Server_Version is None: # unspecified in environment yields empty tuple ()
75 raise BadXMLSpec('Must properly specify a server version to use QUASI_XML')
76
77 _XML_strings = { k:v for k,v in vars(XML_Parser_Type).items() if k.endswith('_XML')}
78
79
80 _default_XML = os.environ.get('PYTHON_IRODSCLIENT_DEFAULT_XML','')
81 if not _default_XML:
82 _default_XML = XML_Parser_Type.STANDARD_XML
83 else:
84 try:
85 _default_XML = _XML_strings[_default_XML]
86 except KeyError:
87 raise BadXMLSpec('XML parser type not recognized')
88
89
90 def current_XML_parser(get_module = False):
91 d = getattr(_thrlocal,'xml_type',_default_XML)
92 return d if not get_module else _XML_parsers[d]
93
94 def default_XML_parser(get_module = False):
95 d = _default_XML
96 return d if not get_module else _XML_parsers[d]
97
98 _XML_parsers = {
99 XML_Parser_Type.STANDARD_XML : ET_xml,
100 XML_Parser_Type.QUASI_XML : ET_quasi_xml,
101 XML_Parser_Type.SECURE_XML : ET_secure_xml
102 }
103
104
105 def XML_entities_active():
106 Server = getattr(_thrlocal,'irods_server_version',_Quasi_Xml_Server_Version)
107 return [ ('&', '&amp;'), # note: order matters. & must be encoded first.
108 ('<', '&lt;'),
109 ('>', '&gt;'),
110 ('"', '&quot;'),
111 ("'" if not(Server) or Server >= (4,2,9) else '`',
112 '&apos;') # https://github.com/irods/irods/issues/4132
113 ]
114
115
116 # ET() [no-args form] will just fetch the current thread's XML parser settings
117
118 def ET(xml_type = (), server_version = None):
119 """
120 Return the module used to parse XML from iRODS protocol messages text.
121
122 May also be used to specify the following attributes of the currently running thread:
123
124 `xml_type', if given, should be 1 for STANDARD_XML, 2 for QUASI_XML, or 3 for SECURE_XML.
125 * QUASI_XML is a custom parser designed to be more compatible with the use of
126 non-printable characters in object names.
127 * STANDARD_XML uses the standard module, xml.etree.ElementTree.
128 * an empty tuple is the default argument for `xml_type', imparting the same
129 semantics as for the argumentless form ET(), i.e., short-circuit any parser change.
130
131 `server_version', if given, should be a list or tuple specifying the version of the connected iRODS server.
132
133 """
134 if xml_type != ():
135 _thrlocal.xml_type = (default_XML_parser() if xml_type in (None, XML_Parser_Type(0))
136 else XML_Parser_Type(xml_type))
137 if isinstance(server_version, _TUPLE_LIKE_TYPES):
138 _thrlocal.irods_server_version = tuple(server_version) # A default server version for Quasi-XML parsing is set (from the environment) and
139 return _XML_parsers[current_XML_parser()] # applies to all threads in which ET() has not been called to update the value.
140
8141
9142 logger = logging.getLogger(__name__)
10143
18151 UNICODE = str
19152
20153
154
155 # Necessary for older python (<3.7):
156
157 def _socket_is_blocking(sk):
158 try:
159 return sk.getblocking()
160 except AttributeError:
161 # Python 3.7+ docs say sock.getblocking() is equivalent to checking if sock.gettimeout() == 0, but this is misleading.
162 # Manual testing shows this to be a more accurate equivalent:
163 timeout = sk.gettimeout()
164 return (timeout is None or timeout > 0)
165
21166 def _recv_message_in_len(sock, size):
22167 size_left = size
23168 retbuf = None
169
170 # Get socket properties for debug and exception messages.
171 is_blocking = _socket_is_blocking(sock)
172 timeout = sock.gettimeout()
173
174 logger.debug('is_blocking: %s',is_blocking)
175 logger.debug('timeout: %s',timeout)
176
24177 while size_left > 0:
25178 try:
26179 buf = sock.recv(size_left, socket.MSG_WAITALL)
31184 if getattr(e, 'winerror', 0) != 10045:
32185 raise
33186 buf = sock.recv(size_left)
187
188 # This prevents an infinite loop. If the call to recv()
189 # returns an empty buffer, break out of the loop.
190 if len(buf) == 0:
191 break
34192 size_left -= len(buf)
35193 if retbuf is None:
36194 retbuf = buf
37195 else:
38196 retbuf += buf
197
198 # This method is supposed to read and return 'size'
199 # bytes from the socket. If it reads no bytes (retbuf
200 # will be None), or if it reads fewer bytes
201 # than 'size', raise a socket.error exception.
202 if retbuf is None or len(retbuf) != size:
203 retbuf_size = len(retbuf) if retbuf is not None else 0
204 msg = 'Read {} bytes from socket instead of expected {} bytes'.format(retbuf_size, size)
205 raise socket.error(msg)
206
39207 return retbuf
40208
41209
57225 index += rsize
58226 return mv[:index]
59227
228 #------------------------------------
229
230 class BinBytesBuf(Message):
231 _name = 'BinBytesBuf_PI'
232 buflen = IntegerProperty()
233 buf = BinaryProperty()
234
235 class JSON_Binary_Response(BinBytesBuf):
236 pass
60237
61238 class iRODSMessage(object):
239
240 class ResponseNotParseable(Exception):
241
242 """
243 Raised by get_main_message(ResponseClass) to indicate a server response
244 wraps a msg string that is the `None' object rather than an XML String.
245 (Not raised when the ResponseClass is irods.message.Error; see the source of
246 get_main_message for further detail.)
247 """
248 pass
62249
63250 def __init__(self, msg_type=b'', msg=None, error=b'', bs=b'', int_info=0):
64251 self.msg_type = msg_type
66253 self.error = error
67254 self.bs = bs
68255 self.int_info = int_info
256
257 def get_json_encoded_struct (self):
258 Xml = ET().fromstring(self.msg.replace(b'\0',b''))
259 json_str = Xml.find('buf').text
260 if Xml.tag == 'BinBytesBuf_PI':
261 mybin = JSON_Binary_Response()
262 mybin.unpack(Xml)
263 json_str = mybin.buf.replace(b'\0',b'').decode()
264 return json.loads( json_str )
69265
70266 @staticmethod
71267 def recv(sock):
75271 # rsp_header = sock.recv(rsp_header_size, socket.MSG_WAITALL)
76272 rsp_header = _recv_message_in_len(sock, rsp_header_size)
77273
78 xml_root = ET.fromstring(rsp_header)
274 xml_root = ET().fromstring(rsp_header)
79275 msg_type = xml_root.find('type').text
80276 msg_len = int(xml_root.find('msgLen').text)
81277 err_len = int(xml_root.find('errorLen').text)
102298 rsp_header_size = struct.unpack(">i", rsp_header_size)[0]
103299 rsp_header = _recv_message_in_len(sock, rsp_header_size)
104300
105 xml_root = ET.fromstring(rsp_header)
301 xml_root = ET().fromstring(rsp_header)
106302 msg_type = xml_root.find('type').text
107303 msg_len = int(xml_root.find('msgLen').text)
108304 err_len = int(xml_root.find('errorLen').text)
164360 return packed_header + main_msg + self.error + self.bs
165361
166362
167 def get_main_message(self, cls):
363 def get_main_message(self, cls, r_error = None):
168364 msg = cls()
169 logger.debug(self.msg)
170 msg.unpack(ET.fromstring(self.msg))
365 logger.debug('Attempt to parse server response [%r] as class [%r].',self.msg,cls)
366 if self.error and isinstance(r_error, RErrorStack):
367 r_error.fill( iRODSMessage(msg=self.error).get_main_message(Error) )
368 if self.msg is None:
369 if cls is not Error:
370 # - For dedicated API response classes being built from server response, allow catching
371 # of the exception. However, let iRODS errors such as CAT_NO_ROWS_FOUND to filter
372 # through as usual for express reporting by instances of irods.connection.Connection .
373 message = "Server response was {self.msg} while parsing as [{cls}]".format(**locals())
374 raise self.ResponseNotParseable( message )
375 msg.unpack(ET().fromstring(self.msg))
171376 return msg
172377
173378
187392 class StartupPack(Message):
188393 _name = 'StartupPack_PI'
189394
190 def __init__(self, proxy_user, client_user):
395 def __init__(self, proxy_user, client_user, application_name = ''):
191396 super(StartupPack, self).__init__()
192397 if proxy_user and client_user:
193398 self.irodsProt = 1
196401 self.clientUser, self.clientRcatZone = client_user
197402 self.relVersion = "rods{}.{}.{}".format(*IRODS_VERSION)
198403 self.apiVersion = "{3}".format(*IRODS_VERSION)
199 self.option = ""
404 self.option = application_name
200405
201406 irodsProt = IntegerProperty()
202407 reconnFlag = IntegerProperty()
222427 _name = 'authRequestOut_PI'
223428 challenge = BinaryProperty(64)
224429
430
431 class AuthPluginOut(Message):
432 _name = 'authPlugReqOut_PI'
433 result_ = StringProperty()
434 # result_ = BinaryProperty(16)
435
436
437 # The following PamAuthRequest* classes correspond to an older, less generic
438 # PAM auth API in iRODS, but one which allowed longer password tokens.
439 # They are contributed by Rick van de Hoef at Utrecht Univ, c. June 2021:
440
441 class PamAuthRequest(Message):
442 _name = 'pamAuthRequestInp_PI'
443 pamUser = StringProperty()
444 pamPassword = StringProperty()
445 timeToLive = IntegerProperty()
446
447 class PamAuthRequestOut(Message):
448 _name = 'pamAuthRequestOut_PI'
449 irodsPamPassword = StringProperty()
450 @builtins.property
451 def result_(self): return self.irodsPamPassword
452
453
454
225455 # define InxIvalPair_PI "int iiLen; int *inx(iiLen); int *ivalue(iiLen);"
226456
227
228 class BinBytesBuf(Message):
229 _name = 'BinBytesBuf_PI'
457 class JSON_Binary_Request(BinBytesBuf):
458
459 """A message body whose payload is BinBytesBuf containing JSON."""
460
461 def __init__(self,msg_struct):
462 """Initialize with a Python data structure that will be converted to JSON."""
463 super(JSON_Binary_Request,self).__init__()
464 string = json.dumps(msg_struct)
465 self.buf = string
466 self.buflen = len(string)
467
468 class BytesBuf(Message):
469
470 """A generic structure carrying text content"""
471
472 _name = 'BytesBuf_PI'
230473 buflen = IntegerProperty()
231 buf = BinaryProperty()
232
233
234 class GSIAuthMessage(Message):
474 buf = StringProperty()
475 def __init__(self,string,*v,**kw):
476 super(BytesBuf,self).__init__(*v,**kw)
477 self.buf = string
478 self.buflen = len(self.buf)
479
480 class JSON_XMLFramed_Request(BytesBuf):
481
482 """A message body whose payload is a BytesBuf containing JSON."""
483 def __init__(self, msg_struct):
484 """Initialize with a Python data structure that will be converted to JSON."""
485 s = json.dumps(msg_struct)
486 super(JSON_XMLFramed_Request,self).__init__(s)
487
488 def JSON_Message( msg_struct , server_version = () ):
489 cls = JSON_XMLFramed_Request if server_version < (4,2,9) \
490 else JSON_Binary_Request
491 return cls(msg_struct)
492
493
494 class PluginAuthMessage(Message):
235495 _name = 'authPlugReqInp_PI'
236496 auth_scheme_ = StringProperty()
237497 context_ = StringProperty()
498
499
500 class _OrderedMultiMapping :
501 def keys(self):
502 return self._keys
503 def values(self):
504 return self._values
505 def __len__(self):
506 return len(self._keys)
507 def __init__(self, list_of_keyval_tuples ):
508 self._keys = []
509 self._values = []
510 for k,v in list_of_keyval_tuples:
511 self._keys.append(k)
512 self._values.append(v)
238513
239514
240515 class IntegerIntegerMap(Message):
340615 oprType = IntegerProperty()
341616 KeyValPair_PI = SubmessageProperty(StringStringMap)
342617
618 class DataObjChksumRequest(FileOpenRequest):
619 """Report and/or generate a data object's checksum."""
620
621 def __init__(self,path,**chksumOptions):
622 """Construct the request using the path of a data object."""
623 super(DataObjChksumRequest,self).__init__()
624 for attr,prop in vars(FileOpenRequest).items():
625 if isinstance(prop, (IntegerProperty,LongProperty)):
626 setattr(self, attr, 0)
627 self.objPath = path
628 self.KeyValPair_PI = StringStringMap(chksumOptions)
629
630 class DataObjChksumResponse(Message):
631 _name = 'Str_PI'
632 myStr = StringProperty()
633
343634 # define OpenedDataObjInp_PI "int l1descInx; int len; int whence; int
344635 # oprType; double offset; double bytesWritten; struct KeyValPair_PI;"
345636
369660 srcDataObjInp_PI = SubmessageProperty(FileOpenRequest)
370661 destDataObjInp_PI = SubmessageProperty(FileOpenRequest)
371662
663
372664 # define ModAVUMetadataInp_PI "str *arg0; str *arg1; str *arg2; str *arg3;
373 # str *arg4; str *arg5; str *arg6; str *arg7; str *arg8; str *arg9;"
374
665 # str *arg4; str *arg5; str *arg6; str *arg7; str *arg8; str *arg9; struct KeyValPair_PI"
375666
376667 class MetadataRequest(Message):
377668 _name = 'ModAVUMetadataInp_PI'
378669
379 def __init__(self, *args):
670 def __init__(self, *args, **metadata_opts):
380671 super(MetadataRequest, self).__init__()
381672 for i in range(len(args)):
382673 if args[i]:
383674 setattr(self, 'arg%d' % i, args[i])
675 self.KeyValPair_PI = StringStringMap(metadata_opts)
384676
385677 arg0 = StringProperty()
386678 arg1 = StringProperty()
393685 arg8 = StringProperty()
394686 arg9 = StringProperty()
395687
688 KeyValPair_PI = SubmessageProperty(StringStringMap)
689
690
396691 # define modAccessControlInp_PI "int recursiveFlag; str *accessLevel; str
397692 # *userName; str *zone; str *path;"
398693
430725 cookie = IntegerProperty()
431726
432727
433 # define generalAdminInp_PI "str *arg0; str *arg1; str *arg2; str *arg3;
434 # str *arg4; str *arg5; str *arg6; str *arg7; str *arg8; str *arg9;"
435
436 class GeneralAdminRequest(Message):
437 _name = 'generalAdminInp_PI'
728 class _admin_request_base(Message):
729
730 _name = None
438731
439732 def __init__(self, *args):
440 super(GeneralAdminRequest, self).__init__()
733 if self.__class__._name is None:
734 raise NotImplementedError
735 super(_admin_request_base, self).__init__()
441736 for i in range(10):
442737 if i < len(args) and args[i]:
443738 setattr(self, 'arg{0}'.format(i), args[i])
456751 arg9 = StringProperty()
457752
458753
754 # define generalAdminInp_PI "str *arg0; str *arg1; str *arg2; str *arg3;
755 # str *arg4; str *arg5; str *arg6; str *arg7; str *arg8; str *arg9;"
756
757 class GeneralAdminRequest(_admin_request_base):
758 _name = 'generalAdminInp_PI'
759
760
761 # define userAdminInp_PI "str *arg0; str *arg1; str *arg2; str *arg3;
762 # str *arg4; str *arg5; str *arg6; str *arg7; str *arg8; str *arg9;"
763
764 class UserAdminRequest(_admin_request_base):
765 _name = 'userAdminInp_PI'
766
767
768 class GetTempPasswordForOtherRequest(Message):
769 _name = 'getTempPasswordForOtherInp_PI'
770 targetUser = StringProperty()
771 unused = StringProperty()
772
773
774 class GetTempPasswordForOtherOut(Message):
775 _name = 'getTempPasswordForOtherOut_PI'
776 stringToHashWith = StringProperty()
777
778
779 class GetTempPasswordOut(Message):
780 _name = 'getTempPasswordOut_PI'
781 stringToHashWith = StringProperty()
782
783
784 #in iRODS <= 4.2.10:
459785 #define ticketAdminInp_PI "str *arg1; str *arg2; str *arg3; str *arg4; str *arg5; str *arg6;"
786
787 #in iRODS >= 4.2.11:
788 #define ticketAdminInp_PI "str *arg1; str *arg2; str *arg3; str *arg4; str *arg5; str *arg6; struct KeyValPair_PI;"
789
460790
461791 class TicketAdminRequest(Message):
462792 _name = 'ticketAdminInp_PI'
463793
464 def __init__(self, *args):
794 def __init__(self, *args,**ticketOpts):
465795 super(TicketAdminRequest, self).__init__()
466796 for i in range(6):
467797 if i < len(args) and args[i]:
468798 setattr(self, 'arg{0}'.format(i+1), str(args[i]))
469799 else:
470800 setattr(self, 'arg{0}'.format(i+1), "")
801 self.KeyValPair_PI = StringStringMap(ticketOpts)
471802
472803 arg1 = StringProperty()
473804 arg2 = StringProperty()
475806 arg4 = StringProperty()
476807 arg5 = StringProperty()
477808 arg6 = StringProperty()
809 KeyValPair_PI = SubmessageProperty(StringStringMap)
478810
479811
480812 #define specificQueryInp_PI "str *sql; str *arg1; str *arg2; str *arg3; str *arg4; str *arg5; str *arg6; str *arg7; str *arg8; str *arg9; str *arg10; int maxRows; int continueInx; int rowOffset; int options; struct KeyValPair_PI;"
653985 dataObjInfo = SubmessageProperty(DataObjInfo)
654986 regParam = SubmessageProperty(StringStringMap)
655987
988
989 # -- A tuple-descended class which facilitates filling in a
990 # quasi-RError stack from a JSON formatted list.
991
992 _Server_Status_Message = namedtuple('server_status_msg',('msg','status'))
993
994
995 class RErrorStack(list):
996
997 """A list of returned RErrors."""
998
999 def __init__(self,Err = None):
1000 """Initialize from the `errors' member of an API return message."""
1001 super(RErrorStack,self).__init__() # 'list' class initialization
1002 self.fill(Err)
1003
1004 def fill(self,Err = None):
1005
1006 # first, we try to parse from a JSON list, as this is how the Data.chksum call returns its message and status.
1007 if isinstance (Err, (tuple,list)):
1008 self[:] = [ RError( _Server_Status_Message( msg = elem["message"],
1009 status = elem["error_code"] )
1010 ) for elem in Err
1011 ]
1012 return
1013
1014 # next, we try to parse from a response message - e.g. as returned by the Rule.execute API call when a rule fails.
1015 if Err is not None:
1016 self[:] = [ RError(Err.RErrMsg_PI[i]) for i in range(Err.count) ]
1017
1018
1019 class RError(object):
1020
1021 """One of a list of RError messages potentially returned to the client
1022 from an iRODS API call. """
1023
1024 Encoding = 'utf-8'
1025
1026 def __init__(self,entry):
1027 """Initialize from one member of the RErrMsg_PI array."""
1028 super(RError,self).__init__()
1029 self.raw_msg_ = entry.msg
1030 self.status_ = entry.status
1031
1032
1033 @builtins.property
1034 def message(self): #return self.raw_msg_.decode(self.Encoding)
1035 msg_ = self.raw_msg_
1036 if type(msg_) is UNICODE:
1037 return msg_
1038 elif type(msg_) is bytes:
1039 return msg_.decode(self.Encoding)
1040 else:
1041 raise RuntimeError('bad msg type in',msg_)
1042
1043 @builtins.property
1044 def status(self): return int(self.status_)
1045
1046
1047 @builtins.property
1048 def status_str(self):
1049 """Retrieve the IRODS error identifier."""
1050 return ex.get_exception_class_by_code( self.status, name_only=True )
1051
1052
1053 def __str__(self):
1054 """Retrieve the error message text."""
1055 return self.message
1056
1057 def __int__(self):
1058 """Retrieve integer error code."""
1059 return self.status
1060
1061 def __repr__(self):
1062 """Show both the message and iRODS error type (both integer and human-readable)."""
1063 return "{self.__class__.__name__}"\
1064 "<message = {self.message!r}, status = {self.status} {self.status_str}>".format(**locals())
1065
1066
6561067 #define RErrMsg_PI "int status; str msg[ERR_MSG_LEN];"
6571068
6581069 class ErrorMessage(Message):
0 # A parser for the iRODS XML-like protocol.
1 # The interface aims to be compatible with xml.etree.ElementTree,
2 # at least for the features used by python-irodsclient.
3
4 class Element():
5 """
6 Represents <name>body</name>.
7
8 (Where `body' is either a string or a list of sub-elements.)
9 """
10
11 @property
12 def tag(self): return self.name
13
14 def __init__(self, name, body):
15 """Initialize with the tag's name and the body (i.e. content)."""
16 if body == []:
17 # Empty element.
18 self.text = None
19 elif type(body) is not list:
20 # String element: decode body.
21 body = decode_entities(body)
22 self.text = body
23
24 self.name = name
25 self.body = body
26
27 def find(self, name):
28 """Get first matching child element by name."""
29 for x in self.findall(name):
30 return x
31
32 def findall(self, name):
33 """Get matching child elements by name."""
34 return list(self.findall_(name))
35
36 def findall_(self, name):
37 """Get matching child elements by name (generator variant)."""
38 return (el for el in self.body if el.name == name)
39
40 # For debugging convenience:
41 def __str__(self):
42 if type(self.body) is list:
43 return '<{}>{}</{}>'.format(self.name, ''.join(map(str, self.body)), self.name)
44 else:
45 return '<{}>{}</{}>'.format(self.name, encode_entities(self.body), self.name)
46
47 def __repr__(self):
48 return '{}({})'.format(self.name, repr(self.body))
49
50
51 class Token(object):
52 """A utility class for parsing XML."""
53 def __init__(self, s):
54 """Create a `Token' object from `s', the text comprising the parsed token."""
55 self.text = s
56 def __repr__(self):
57 return str(type(self).__name__) + '(' + self.text.decode('utf-8') + ')'
58 def __str__(self):
59 return repr(self)
60
61 class TokenTagOpen(Token):
62 """An opening tag (<foo>)"""
63 class TokenTagClose(Token):
64 """An closing tag (</foo>)"""
65 class TokenCData(Token):
66 """Textual element body"""
67
68 class QuasiXmlParseError(Exception):
69 """Indicates parse failure of XML protocol data."""
70
71 def tokenize(s):
72 """Parse an XML-ish string into a list of tokens."""
73 tokens = []
74
75 # Consume input until empty.
76 while True:
77 nextclose = s.find(b'</')
78 nextopen = s.find(b'<')
79 if nextopen < nextclose or nextopen == -1:
80 # Either we have no tags left, or we are in a non-cdata element body: strip whitespace.
81 s = s.lstrip()
82
83 if len(s) == 0:
84 return tokens
85
86 # Closing tag?
87 elif s.startswith(b'</'):
88 try:
89 name, s = s[2:].split(b'>', 1)
90 except Exception:
91 raise QuasiXmlParseError('protocol error: unterminated close tag')
92 tokens.append(TokenTagClose(name))
93 s = s.lstrip() # consume space after closing tag
94
95 # Opening tag?
96 elif s.startswith(b'<'):
97 try:
98 name, s = s[1:].split(b'>', 1)
99 except Exception:
100 raise QuasiXmlParseError('protocol error: unterminated open tag')
101 tokens.append(TokenTagOpen(name))
102
103 else:
104 # capture cdata till next tag.
105 try:
106 cdata, s = s.split(b'<', 1)
107 except Exception:
108 raise QuasiXmlParseError('protocol error: unterminated cdata')
109 s = b'<' + s
110 tokens.append(TokenCData(cdata))
111
112 def fromtokens(tokens):
113 """Parse XML-ish tokens into an Element."""
114
115 def parse_elem(tokens):
116 """Parse some tokens into one Element, and return unconsumed tokens."""
117 topen, tokens = tokens[0], tokens[1:]
118 if type(topen) is not TokenTagOpen:
119 raise QuasiXmlParseError('protocol error: data does not start with open tag')
120
121 children = []
122 cdata = None
123
124 while len(tokens) > 0:
125 t, tokens = tokens[0], tokens[1:]
126 if type(t) is TokenTagOpen:
127 # Slurp a sub-element.
128 el, tokens = parse_elem([t] + tokens)
129 children.append(el)
130 # Continue with non-consumed tokens.
131 elif type(t) == TokenTagClose:
132 if t.text != topen.text:
133 raise QuasiXmlParseError('protocol error: close tag <{}> does not match opening tag <{}>'.format(t.text, topen.text))
134 elif cdata is not None and len(children):
135 raise QuasiXmlParseError('protocol error: mixed cdata and child elements')
136 return Element(topen.text.decode('utf-8'), cdata.decode('utf-8') if cdata is not None else children), tokens
137 else:
138 cdata = t.text
139
140 elem, rest = parse_elem(tokens)
141 if rest != []:
142 raise QuasiXmlParseError('protocol error: trailing data')
143
144 return elem
145
146
147 try:
148 unicode # Python 2
149 except NameError:
150 unicode = str
151
152
153 def fromstring(s):
154 if type(s) is unicode:
155 s = s.encode('utf-8')
156 if type(s) is not bytes:
157 raise TypeError('expected a bytes-object, got {}'.format(type(s).__name__))
158
159 return fromtokens(tokenize(s))
160
161
162 def encode_entities(s):
163 from . import XML_entities_active
164 for k, v in XML_entities_active():
165 s = s.replace(k, v)
166 return s
167
168 def decode_entities(s):
169 from . import XML_entities_active
170 rev = list(XML_entities_active())
171 rev.reverse() # (make sure &amp; is decoded last)
172 for k, v in rev:
173 s = s.replace(v, k)
174 return s
0
1
02 class iRODSMeta(object):
13
2 def __init__(self, name, value, units=None, avu_id=None):
4 def __init__(self, name, value, units=None, avu_id=None, create_time=None, modify_time=None):
35 self.avu_id = avu_id
46 self.name = name
57 self.value = value
68 self.units = units
9 self.create_time = create_time
10 self.modify_time = modify_time
11
12 def __eq__(self, other):
13 return tuple(self) == tuple(other)
14
15 def __iter__(self):
16 yield self.name
17 yield self.value
18 if self.units: yield self.units
719
820 def __repr__(self):
921 return "<iRODSMeta {avu_id} {name} {value} {units}>".format(**vars(self))
1022
1123
24 class BadAVUOperationKeyword(Exception): pass
25
26 class BadAVUOperationValue(Exception): pass
27
28
29 class AVUOperation(dict):
30
31 @property
32 def operation(self):
33 return self['operation']
34
35 @operation.setter
36 def operation(self,Oper):
37 self._check_operation(Oper)
38 self['operation'] = Oper
39
40 @property
41 def avu(self):
42 return self['avu']
43
44 @avu.setter
45 def avu(self,newAVU):
46 self._check_avu(newAVU)
47 self['avu'] = newAVU
48
49 def _check_avu(self,avu_param):
50 if not isinstance(avu_param, iRODSMeta):
51 error_msg = "Nonconforming avu {!r} of type {}; must be an iRODSMeta." \
52 "".format(avu_param,type(avu_param).__name__)
53 raise BadAVUOperationValue(error_msg)
54
55 def _check_operation(self,operation):
56 if operation not in ('add','remove'):
57 error_msg = "Nonconforming operation {!r}; must be 'add' or 'remove'.".format(operation)
58 raise BadAVUOperationValue(error_msg)
59
60 def __init__(self, operation, avu, **kw):
61 """Constructor:
62 AVUOperation( operation = opstr, # where opstr is "add" or "remove"
63 avu = metadata ) # where metadata is an irods.meta.iRODSMeta instance
64 """
65 super(AVUOperation,self).__init__()
66 self._check_operation (operation)
67 self._check_avu (avu)
68 if kw:
69 raise BadAVUOperationKeyword('''Nonconforming keyword(s) {}.'''.format(list(kw.keys())))
70 for atr in ('operation','avu'):
71 setattr(self,atr,locals()[atr])
72
73
74 import copy
75
1276 class iRODSMetaCollection(object):
77
78 def __call__(self, admin = False, timestamps = False, **opts):
79 x = copy.copy(self)
80 x._manager = (x._manager)(admin, timestamps, **opts)
81 x._reset_metadata()
82 return x
1383
1484 def __init__(self, manager, model_cls, path):
1585 self._manager = manager
46116 "Must specify an iRODSMeta object or key, value, units)")
47117 return args[0] if len(args) == 1 else iRODSMeta(*args)
48118
49 def add(self, *args):
119 def apply_atomic_operations(self, *avu_ops):
120 self._manager.apply_atomic_operations(self._model_cls, self._path, *avu_ops)
121 self._reset_metadata()
122
123 def set(self, *args, **opts):
124 """
125 Set an iRODSMeta for a key
126 """
127 meta = self._get_meta(*args)
128 self._manager.set(self._model_cls, self._path, meta, **opts)
129 self._reset_metadata()
130
131 def add(self, *args, **opts):
50132 """
51133 Add an iRODSMeta under a key
52134 """
53135 meta = self._get_meta(*args)
54 self._manager.add(self._model_cls, self._path, meta)
55 self._reset_metadata()
56
57 def remove(self, *args):
136 self._manager.add(self._model_cls, self._path, meta, **opts)
137 self._reset_metadata()
138
139 def remove(self, *args, **opts):
58140 """
59141 Removes an iRODSMeta
60142 """
61143 meta = self._get_meta(*args)
62 self._manager.remove(self._model_cls, self._path, meta)
144 self._manager.remove(self._model_cls, self._path, meta, **opts)
63145 self._reset_metadata()
64146
65147 def items(self):
114196 values = self.get_all(key)
115197 return len(values) > 0
116198
117 def remove_all(self):
199 def remove_all(self, **opts):
118200 for meta in self._meta:
119 self._manager.remove(self._model_cls, self._path, meta)
120 self._reset_metadata()
201 self._manager.remove(self._model_cls, self._path, meta, **opts)
202 self._reset_metadata()
1818 pass
1919
2020
21 class RuleExec(Model):
22 id = Column(Integer, 'RULE_EXEC_ID', 1000)
23 name = Column(String, 'RULE_EXEC_NAME', 1001)
24 rei_file_path = Column(String,'RULE_EXEC_REI_FILE_PATH', 1002)
25 user_name = Column(String, 'RULE_EXEC_USER_NAME', 1003)
26 time = Column(DateTime,'RULE_EXEC_TIME', 1005)
27 last_exe_time = Column(DateTime,'RULE_EXEC_LAST_EXE_TIME', 1010)
28 frequency = Column(String,'RULE_EXEC_FREQUENCY', 1006)
29 priority = Column(String, 'RULE_EXEC_PRIORITY', 1007)
30
31 # # If needed in 4.2.9, we can update the Query class to dynamically
32 # # attach this field based on server version:
33 # context = Column(String, 'RULE_EXEC_CONTEXT', 1012)
34
35 # # These are either unused or usually absent:
36 # exec_status = Column(String,'RULE_EXEC_STATUS', 1011)
37 # address = Column(String,'RULE_EXEC_ADDRESS', 1004)
38 # notification_addr = Column('RULE_EXEC_NOTIFICATION_ADDR', 1009)
39
40
2141 class Zone(Model):
2242 id = Column(Integer, 'ZONE_ID', 101)
2343 name = Column(String, 'ZONE_NAME', 102)
44 type = Column(String, 'ZONE_TYPE', 103)
2445
2546
2647 class User(Model):
111132 name = Column(String, 'COL_META_DATA_ATTR_NAME', 600)
112133 value = Column(String, 'COL_META_DATA_ATTR_VALUE', 601)
113134 units = Column(String, 'COL_META_DATA_ATTR_UNITS', 602)
135 create_time = Column(DateTime, 'COL_META_DATA_CREATE_TIME', 604)
136 modify_time = Column(DateTime, 'COL_META_DATA_MODIFY_TIME', 605)
114137
115138
116139 class CollectionMeta(Model):
118141 name = Column(String, 'COL_META_COLL_ATTR_NAME', 610)
119142 value = Column(String, 'COL_META_COLL_ATTR_VALUE', 611)
120143 units = Column(String, 'COL_META_COLL_ATTR_UNITS', 612)
144 create_time = Column(DateTime, 'COL_META_COLL_CREATE_TIME', 614)
145 modify_time = Column(DateTime, 'COL_META_COLL_MODIFY_TIME', 615)
146
121147
122148
123149 class ResourceMeta(Model):
125151 name = Column(String, 'COL_META_RESC_ATTR_NAME', 630)
126152 value = Column(String, 'COL_META_RESC_ATTR_VALUE', 631)
127153 units = Column(String, 'COL_META_RESC_ATTR_UNITS', 632)
154 create_time = Column(DateTime, 'COL_META_RESC_CREATE_TIME', 634)
155 modify_time = Column(DateTime, 'COL_META_RESC_MODIFY_TIME', 635)
156
128157
129158
130159 class UserMeta(Model):
132161 name = Column(String, 'COL_META_USER_ATTR_NAME', 640)
133162 value = Column(String, 'COL_META_USER_ATTR_VALUE', 641)
134163 units = Column(String, 'COL_META_USER_ATTR_UNITS', 642)
164 create_time = Column(DateTime, 'COL_META_USER_CREATE_TIME', 644)
165 modify_time = Column(DateTime, 'COL_META_USER_MODIFY_TIME', 645)
166
135167
136168
137169 class DataAccess(Model):
161193 class Keywords(Model):
162194 data_type = Keyword(String, 'dataType')
163195 chksum = Keyword(String, 'chksum')
196
197
198 class TicketQuery:
199 """Various model classes for querying attributes of iRODS tickets.
200
201 Namespacing these model classes under the TicketQuery parent class allows
202 a simple import (not conflicting with irods.ticket.Ticket) and a usage
203 that reflects ICAT table structure:
204
205 from irods.models import TicketQuery
206 # ...
207 for row in session.query( TicketQuery.Ticket )\
208 .filter( TicketQuery.Owner.name == 'alice' ):
209 print( row [TicketQuery.Ticket.string] )
210
211 (For more examples, see irods/test/ticket_test.py)
212
213 """
214 class Ticket(Model):
215 """For queries of R_TICKET_MAIN."""
216 id = Column(Integer, 'TICKET_ID', 2200)
217 string = Column(String, 'TICKET_STRING', 2201)
218 type = Column(String, 'TICKET_TYPE', 2202)
219 user_id = Column(Integer, 'TICKET_USER_ID', 2203)
220 object_id = Column(Integer, 'TICKET_OBJECT_ID', 2204)
221 object_type = Column(String, 'TICKET_OBJECT_TYPE', 2205)
222 uses_limit = Column(Integer, 'TICKET_USES_LIMIT', 2206)
223 uses_count = Column(Integer, 'TICKET_USES_COUNT', 2207)
224 expiry_ts = Column(String, 'TICKET_EXPIRY_TS', 2208)
225 write_file_count = Column(Integer, 'TICKET_WRITE_FILE_COUNT', 2211)
226 write_file_limit = Column(Integer, 'TICKET_WRITE_FILE_LIMIT', 2212)
227 write_byte_count = Column(Integer, 'TICKET_WRITE_BYTE_COUNT', 2213)
228 write_byte_limit = Column(Integer, 'TICKET_WRITE_BYTE_LIMIT', 2214)
229 ## For now, use of these columns raises CAT_SQL_ERR in both PRC and iquest: (irods/irods#5929)
230 # create_time = Column(String, 'TICKET_CREATE_TIME', 2209)
231 # modify_time = Column(String, 'TICKET_MODIFY_TIME', 2210)
232
233 class DataObject(Model):
234 """For queries of R_DATA_MAIN when joining to R_TICKET_MAIN.
235
236 The ticket(s) in question should be for a data object; otherwise
237 CAT_SQL_ERR is thrown.
238
239 """
240 name = Column(String, 'TICKET_DATA_NAME', 2226)
241 coll = Column(String, 'TICKET_DATA_COLL_NAME', 2227)
242
243 class Collection(Model):
244 """For queries of R_COLL_MAIN when joining to R_TICKET_MAIN.
245
246 The returned ticket(s) will be limited to those issued for collections.
247
248 """
249 name = Column(String, 'TICKET_COLL_NAME', 2228)
250
251 class Owner(Model):
252 """For queries concerning R_TICKET_USER_MAIN."""
253 name = Column(String, 'TICKET_OWNER_NAME', 2229)
254 zone = Column(String, 'TICKET_OWNER_ZONE', 2230)
255
256 class AllowedHosts(Model):
257 """For queries concerning R_TICKET_ALLOWED_HOSTS."""
258 ticket_id = Column(String, 'COL_TICKET_ALLOWED_HOST_TICKET_ID', 2220)
259 host = Column(String, 'COL_TICKET_ALLOWED_HOST', 2221)
260
261 class AllowedUsers(Model):
262 """For queries concerning R_TICKET_ALLOWED_USERS."""
263 ticket_id = Column(String, 'COL_TICKET_ALLOWED_USER_TICKET_ID', 2222)
264 user_name = Column(String, 'COL_TICKET_ALLOWED_USER', 2223)
265
266 class AllowedGroups(Model):
267 """For queries concerning R_TICKET_ALLOWED_GROUPS."""
268 ticket_id = Column(String, 'COL_TICKET_ALLOWED_GROUP_TICKET_ID', 2224)
269 group_name = Column(String, 'COL_TICKET_ALLOWED_GROUP', 2225)
0 #!/usr/bin/env python
1 from __future__ import print_function
2
3 import os
4 import ssl
5 import time
6 import sys
7 import logging
8 import contextlib
9 import concurrent.futures
10 import threading
11 import multiprocessing
12 import six
13
14 from irods.data_object import iRODSDataObject
15 from irods.exception import DataObjectDoesNotExist
16 import irods.keywords as kw
17 from six.moves.queue import Queue,Full,Empty
18
19
20 logger = logging.getLogger( __name__ )
21 _nullh = logging.NullHandler()
22 logger.addHandler( _nullh )
23
24
25 MINIMUM_SERVER_VERSION = (4,2,9)
26
27
28 class deferred_call(object):
29
30 """
31 A callable object that stores a function to be called later, along
32 with its parameters.
33 """
34
35 def __init__(self, function, args, keywords):
36 """Initialize the object with a function and its call parameters."""
37 self.function = function
38 self.args = args
39 self.keywords = keywords
40
41 def __setitem__(self, key, val):
42 """Allow changing a keyword option for the deferred function call."""
43 self.keywords[key] = val
44
45 def __call__(self):
46 """Call the stored function, using the arguments and keywords also stored
47 in the instance."""
48 return self.function(*self.args, **self.keywords)
49
50
51 try:
52 from threading import Barrier # Use 'Barrier' class if included (as in Python >= 3.2) ...
53 except ImportError: # ... but otherwise, use this ad hoc:
54 # Based on https://stackoverflow.com/questions/26622745/implementing-barrier-in-python2-7 :
55 class Barrier(object):
56 def __init__(self, n):
57 """Initialize a Barrier to wait on n threads."""
58 self.n = n
59 self.count = 0
60 self.mutex = threading.Semaphore(1)
61 self.barrier = threading.Semaphore(0)
62 def wait(self):
63 """Per-thread wait function.
64
65 As in Python3.2 threading, returns 0 <= wait_serial_int < n
66 """
67 self.mutex.acquire()
68 self.count += 1
69 count = self.count
70 self.mutex.release()
71 if count == self.n: self.barrier.release()
72 self.barrier.acquire()
73 self.barrier.release()
74 return count - 1
75
76 @contextlib.contextmanager
77 def enableLogging(handlerType,args,level_ = logging.INFO):
78 """Context manager for temporarily enabling a logger. For debug or test.
79
80 Usage Example -
81 with irods.parallel.enableLogging(logging.FileHandler,('/tmp/logfile.txt',)):
82 # parallel put/get code here
83 """
84 h = None
85 saveLevel = logger.level
86 try:
87 logger.setLevel(level_)
88 h = handlerType(*args)
89 h.setLevel( level_ )
90 logger.addHandler(h)
91 yield
92 finally:
93 logger.setLevel(saveLevel)
94 if h in logger.handlers:
95 logger.removeHandler(h)
96
97
98 RECOMMENDED_NUM_THREADS_PER_TRANSFER = 3
99
100 verboseConnection = False
101
102 class BadCallbackTarget(TypeError): pass
103
104 class AsyncNotify (object):
105
106 """A type returned when the PUT or GET operation passed includes NONBLOCKING.
107 If a callback is provided, the function (or callable object) will be triggered
108 when all parts of the parallel transfer are complete. It should accept
109 exactly one argument, the irods.parallel.AsyncNotify instance that
110 is calling it.
111 """
112
113 def set_transfer_done_callback( self, callback ):
114 if callback is not None:
115 if not callable(callback):
116 raise BadCallbackTarget( '"callback" must be a callable accepting at least 1 argument' )
117 self.done_callback = callback
118
119 def __init__(self, futuresList, callback = None, progress_Queue = None, total = None, keep_ = ()):
120 """AsyncNotify initialization (used internally to the io.parallel library).
121 The casual user will only be concerned with the callback parameter, called when all threads
122 of the parallel PUT or GET have been terminated and the data object closed.
123 """
124 self._futures = set(futuresList)
125 self._futures_done = dict()
126 self.keep = dict(keep_)
127 self._lock = threading.Lock()
128 self.set_transfer_done_callback (callback)
129 self.__done = False
130 if self._futures:
131 for future in self._futures: future.add_done_callback( self )
132 else:
133 self.__invoke_done_callback()
134
135 self.progress = [0, 0]
136 if (progress_Queue) and (total is not None):
137 self.progress[1] = total
138 def _progress(Q,this): # - thread to update progress indicator
139 while this.progress[0] < this.progress[1]:
140 i = None
141 try:
142 i = Q.get(timeout=0.1)
143 except Empty:
144 pass
145 if i is not None:
146 if isinstance(i,six.integer_types) and i >= 0: this.progress[0] += i
147 else: break
148 self._progress_fn = _progress
149 self._progress_thread = threading.Thread( target = self._progress_fn, args = (progress_Queue, self))
150 self._progress_thread.start()
151
152 @staticmethod
153 def asciiBar( lst, memo = [1] ):
154 memo[0] += 1
155 spinner = "|/-\\"[memo[0]%4]
156 percent = "%5.1f%%"%(lst[0]*100.0/lst[1])
157 mbytes = "%9.1f MB / %9.1f MB"%(lst[0]/1e6,lst[1]/1e6)
158 if lst[1] != 0:
159 s = " {spinner} {percent} [ {mbytes} ] "
160 else:
161 s = " {spinner} "
162 return s.format(**locals())
163
164 def wait_until_transfer_done (self, timeout=float('inf'), progressBar = False):
165 carriageReturn = '\r'
166 begin = t = time.time()
167 end = begin + timeout
168 while not self.__done:
169 time.sleep(min(0.1, max(0.0, end - t)))
170 t = time.time()
171 if t >= end: break
172 if progressBar:
173 print (' ' + self.asciiBar( self.progress ) + carriageReturn, end='', file=sys.stderr)
174 sys.stderr.flush()
175 return self.__done
176
177 def __call__(self,future): # Our instance is called by each future (individual file part) when done.
178 # When all futures are done, we invoke the configured callback.
179 with self._lock:
180 self._futures_done[future] = future.result()
181 if len(self._futures) == len(self._futures_done): self.__invoke_done_callback()
182
183 def __invoke_done_callback(self):
184 try:
185 if callable(self.done_callback): self.done_callback(self)
186 finally:
187 self.keep.pop('mgr',None)
188 self.__done = True
189 self.set_transfer_done_callback(None)
190
191 @property
192 def futures(self): return list(self._futures)
193
194 @property
195 def futures_done(self): return dict(self._futures_done)
196
197
198 class Oper(object):
199
200 """A custom enum-type class with utility methods.
201
202 It makes some logic clearer, including succinct calculation of file and data
203 object open() modes based on whether the operation is a PUT or GET and whether
204 we are doing the "initial" open of the file or object.
205 """
206
207 GET = 0
208 PUT = 1
209 NONBLOCKING = 2
210
211 def __int__(self):
212 """Return the stored flags as an integer bitmask. """
213 return self._opr
214
215 def __init__(self, rhs):
216 """Initialize with a bit mask of flags ie. whether Operation PUT or GET,
217 and whether NONBLOCKING."""
218 self._opr = int(rhs)
219
220 def isPut(self): return 0 != (self._opr & self.PUT)
221 def isGet(self): return not self.isPut()
222 def isNonBlocking(self): return 0 != (self._opr & self.NONBLOCKING)
223
224 def data_object_mode(self, initial_open = False):
225 if self.isPut():
226 return 'w' if initial_open else 'a'
227 else:
228 return 'r'
229
230 def disk_file_mode(self, initial_open = False, binary = True):
231 if self.isPut():
232 mode = 'r'
233 else:
234 mode = 'w' if initial_open else 'r+'
235 return ((mode + 'b') if binary else mode)
236
237
238 def _io_send_bytes_progress (queueObject, item):
239 try:
240 queueObject.put(item)
241 return True
242 except Full:
243 return False
244
245 COPY_BUF_SIZE = (1024 ** 2) * 4
246
247 def _copy_part( src, dst, length, queueObject, debug_info, mgr):
248 """
249 The work-horse for performing the copy between file and data object.
250
251 It also helps determine whether there has been a large enough increment of
252 bytes to inform the progress bar of a need to update.
253 """
254 bytecount = 0
255 accum = 0
256 while bytecount < length:
257 buf = src.read(min(COPY_BUF_SIZE, length - bytecount))
258 buf_len = len(buf)
259 if 0 == buf_len: break
260 dst.write(buf)
261 bytecount += buf_len
262 accum += buf_len
263 if queueObject and accum and _io_send_bytes_progress(queueObject,accum): accum = 0
264 if verboseConnection:
265 print ("("+debug_info+")",end='',file=sys.stderr)
266 sys.stderr.flush()
267
268 # In a put or get, exactly one of (src,dst) is a file. Find which and close that one first.
269 (file_,obj_) = (src,dst) if dst in mgr else (dst,src)
270 file_.close()
271 mgr.remove_io( obj_ ) # 1. closes obj if it is not the mgr's initial descriptor
272 # 2. blocks at barrier until all transfer threads are done copying
273 # 3. closes with finalize if obj is mgr's initial descriptor
274 return bytecount
275
276
277 class _Multipart_close_manager:
278 """An object used to ensure that the initial transfer thread is also the last one to
279 call the close method on its `Io' object. The caller is responsible for setting up the
280 conditions that the initial thread's close() is the one performing the catalog update.
281
282 All non-initial transfer threads just call close() as soon as they are done transferring
283 the byte range for which they are responsible, whereas we block the initial thread
284 using a threading Barrier until we know all other threads have called close().
285
286 """
287 def __init__(self, initial_io_, exit_barrier_):
288 self.exit_barrier = exit_barrier_
289 self.initial_io = initial_io_
290 self.__lock = threading.Lock()
291 self.aux = []
292
293 def __contains__(self,Io):
294 with self.__lock:
295 return Io is self.initial_io or \
296 Io in self.aux
297
298 # `add_io' - add an i/o object to be managed
299 # note: `remove_io' should only be called for managed i/o objects
300
301 def add_io(self,Io):
302 with self.__lock:
303 if Io is not self.initial_io:
304 self.aux.append(Io)
305
306 # `remove_io' is for closing a channel of parallel i/o and allowing the
307 # data object to flush write operations (if any) in a timely fashion. It also
308 # synchronizes all of the parallel threads just before exit, so that we know
309 # exactly when to perform a finalizing close on the data object
310
311 def remove_io(self,Io):
312 is_initial = True
313 with self.__lock:
314 if Io is not self.initial_io:
315 Io.close()
316 self.aux.remove(Io)
317 is_initial = False
318 self.exit_barrier.wait()
319 if is_initial: self.finalize()
320
321 def finalize(self):
322 self.initial_io.close()
323
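# A sketch of the intended close sequence (illustrative; `io0' and `io1' stand for raw
# data-object handles opened against the same replica, with two transfer threads expected):
#
#   mgr = _Multipart_close_manager(io0, Barrier(2))
#   mgr.add_io(io1)                       # register the non-initial handle
#   # thread B, when finished:  mgr.remove_io(io1)  -> closes io1, then waits at the barrier
#   # thread A, when finished:  mgr.remove_io(io0)  -> waits at the barrier, then finalize()
#   # io0 is thus always closed last, so its close() is the one that updates the catalog.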
324
325 def _io_part (objHandle, range_, file_, opr_, mgr_, thread_debug_id = '', queueObject = None ):
326 """
327 Runs in a separate thread to manage the transfer of a range of bytes within the data object.
328
329 The particular byte range is defined by the endpoints of the range_ parameter, which should be of type
330 (Py2) xrange or (Py3) range.
331 """
332 if 0 == len(range_): return 0
333 Operation = Oper(opr_)
334 (offset,length) = (range_[0], len(range_))
335 objHandle.seek(offset)
336 file_.seek(offset)
337 if thread_debug_id == '': # for more succinct thread identifiers while debugging.
338 thread_debug_id = str(threading.currentThread().ident)
339 return ( _copy_part (file_, objHandle, length, queueObject, thread_debug_id, mgr_) if Operation.isPut()
340 else _copy_part (objHandle, file_, length, queueObject, thread_debug_id, mgr_) )
341
342
343 def _io_multipart_threaded(operation_ , dataObj_and_IO, replica_token, hier_str, session, fname,
344 total_size, num_threads, **extra_options):
345 """Called by _io_main.
346
347 Carve up (0,total_size) range into `num_threads` parts and initiate a transfer thread for each one.
348 """
349 (Data_object, Io) = dataObj_and_IO
350 Operation = Oper( operation_ )
351
352 def bytes_range_for_thread( i, num_threads, total_bytes, chunk ):
353 begin_offs = i * chunk
354 if i + 1 < num_threads:
355 end_offs = (i + 1) * chunk
356 else:
357 end_offs = total_bytes
358 return six.moves.range(begin_offs, end_offs)
359
360 bytes_per_thread = total_size // num_threads
361
362 ranges = [bytes_range_for_thread(i, num_threads, total_size, bytes_per_thread) for i in range(num_threads)]
363
364 logger.info("num_threads = %s ; bytes_per_thread = %s", num_threads, bytes_per_thread)
365
366 _queueLength = extra_options.get('_queueLength',0)
367 if _queueLength > 0:
368 queueObject = Queue(_queueLength)
369 else:
370 queueObject = None
371
372 futures = []
373 executor = concurrent.futures.ThreadPoolExecutor(max_workers = num_threads)
374 num_threads = min(num_threads, len(ranges))
375 mgr = _Multipart_close_manager(Io, Barrier(num_threads))
376 counter = 1
377 gen_file_handle = lambda: open(fname, Operation.disk_file_mode(initial_open = (counter == 1)))
378 File = gen_file_handle()
379 for byte_range in ranges:
380 if Io is None:
381 Io = session.data_objects.open( Data_object.path, Operation.data_object_mode(initial_open = False),
382 create = False, finalize_on_close = False,
383 **{ kw.NUM_THREADS_KW: str(num_threads),
384 kw.DATA_SIZE_KW: str(total_size),
385 kw.RESC_HIER_STR_KW: hier_str,
386 kw.REPLICA_TOKEN_KW: replica_token })
387 mgr.add_io( Io )
388 if File is None: File = gen_file_handle()
389 futures.append(executor.submit( _io_part, Io, byte_range, File, Operation, mgr, str(counter), queueObject))
390 counter += 1
391 Io = File = None
392
393 if Operation.isNonBlocking():
394 if _queueLength:
395 return futures, queueObject, mgr
396 else:
397 return futures
398 else:
399 bytecounts = [ f.result() for f in futures ]
400 return sum(bytecounts), total_size
401
402
403
404 def io_main( session, Data, opr_, fname, R='', **kwopt):
405 """
406 The entry point for parallel transfers (multithreaded PUT and GET operations).
407
408 Here, we do the following:
409 * instantiate the data object, if this has not already been done.
410 * determine replica information and the appropriate number of threads.
411 * call the multithread manager to initiate multiple data transfer threads
412
413 """
414 total_bytes = kwopt.pop('total_bytes', -1)
415 Operation = Oper(opr_)
416 d_path = None
417 Io = None
418
419 if isinstance(Data,tuple):
420 (Data, Io) = Data[:2]
421
422 if isinstance (Data, six.string_types):
423 d_path = Data
424 try:
425 Data = session.data_objects.get( Data )
426 d_path = Data.path
427 except DataObjectDoesNotExist:
428 if Operation.isGet(): raise
429
430 R_via_libcall = kwopt.pop( 'target_resource_name', '')
431 if R_via_libcall:
432 R = R_via_libcall
433
434 num_threads = kwopt.get( 'num_threads', None)
435 if num_threads is None: num_threads = int(kwopt.get('N','0'))
436 if num_threads < 1:
437 num_threads = RECOMMENDED_NUM_THREADS_PER_TRANSFER
438 num_threads = max(1, min(multiprocessing.cpu_count(), num_threads))
439
440 open_options = {}
441 if Operation.isPut():
442 if R:
443 open_options [kw.RESC_NAME_KW] = R
444 open_options [kw.DEST_RESC_NAME_KW] = R
445 open_options[kw.NUM_THREADS_KW] = str(num_threads)
446 open_options[kw.DATA_SIZE_KW] = str(total_bytes)
447
448 if (not Io):
449 (Io, rawfile) = session.data_objects.open_with_FileRaw( (d_path or Data.path),
450 Operation.data_object_mode(initial_open = True),
451 finalize_on_close = True, **open_options )
452 else:
453 if type(Io) is deferred_call:
454 Io[kw.NUM_THREADS_KW] = str(num_threads)
455 Io[kw.DATA_SIZE_KW] = str(total_bytes)
456 Io = Io()
457 rawfile = Io.raw
458
459 # At this point, the data object's existence in the catalog is guaranteed,
460 # whether the Operation is a GET or PUT.
461
462 if not isinstance(Data,iRODSDataObject):
463 Data = session.data_objects.get(d_path)
464
465 # Determine total number of bytes for transfer.
466
467 if Operation.isGet():
468 total_bytes = Io.seek(0,os.SEEK_END)
469 Io.seek(0,os.SEEK_SET)
470 else: # isPut
471 if total_bytes < 0:
472 with open(fname, 'rb') as f:
473 f.seek(0,os.SEEK_END)
474 total_bytes = f.tell()
475
476 # Get necessary info and initiate threaded transfers.
477
478 (replica_token , resc_hier) = rawfile.replica_access_info()
479
480 queueLength = kwopt.get('queueLength',0)
481 retval = _io_multipart_threaded (Operation, (Data, Io), replica_token, resc_hier, session, fname, total_bytes,
482 num_threads = num_threads,
483 _queueLength = queueLength)
484
485 # SessionObject.data_objects.parallel_{put,get} will return:
486 # - immediately with an AsyncNotify instance, if Oper.NONBLOCKING flag is used.
487 # - upon completion with a boolean completion status, otherwise.
488
489 if Operation.isNonBlocking():
490
491 if queueLength > 0:
492 (futures, chunk_notify_queue, mgr) = retval
493 else:
494 futures = retval
495 chunk_notify_queue = total_bytes = None
496
497 return AsyncNotify( futures, # individual futures, one per transfer thread
498 progress_Queue = chunk_notify_queue, # for notifying the progress indicator thread
499 total = total_bytes, # total number of bytes for parallel transfer
500 keep_ = {'mgr': mgr} ) # an open raw i/o object needing to be persisted, if any
501 else:
502 (_bytes_transferred, _bytes_total) = retval
503 return (_bytes_transferred == _bytes_total)
504
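# A hedged sketch of calling io_main() directly as a library routine (the same call the
# __main__ block below makes); `sess' is an existing iRODSSession, and the object/file
# paths are placeholders:
#
#   ok = io_main(sess, '/tempZone/home/rods/test.dat', Oper.PUT, '/tmp/local_file',
#                num_threads = 3, target_resource_name = 'demoResc')
#   # -> True when the synchronous transfer completes successfully
#
#   handle = io_main(sess, '/tempZone/home/rods/test.dat', Oper.GET | Oper.NONBLOCKING,
#                    '/tmp/local_copy')
#   # -> an AsyncNotify instance; see its use at the bottom of this module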
505 if __name__ == '__main__':
506
507 import getopt
508 import atexit
509 from irods.session import iRODSSession
510
511 def setupLoggingWithDateTimeHeader(name,level = logging.DEBUG):
512 if _nullh in logger.handlers:
513 logger.removeHandler(_nullh)
514 if name:
515 handler = logging.FileHandler(name)
516 else:
517 handler = logging.StreamHandler()
518 handler.setFormatter(logging.Formatter('%(asctime)-15s - %(message)s'))
519 logger.addHandler(handler)
520 logger.setLevel( level )
521
522 try:
523 env_file = os.environ['IRODS_ENVIRONMENT_FILE']
524 except KeyError:
525 env_file = os.path.expanduser('~/.irods/irods_environment.json')
526 ssl_context = ssl.create_default_context(purpose=ssl.Purpose.SERVER_AUTH, cafile=None, capath=None, cadata=None)
527 ssl_settings = {'ssl_context': ssl_context}
528 sess = iRODSSession(irods_env_file=env_file, **ssl_settings)
529 atexit.register(lambda : sess.cleanup())
530
531 opt,arg = getopt.getopt( sys.argv[1:], 'vL:l:aR:N:')
532
533 opts = dict(opt)
534
535 logFilename = opts.pop('-L',None) # '' for console, non-empty for filesystem destination
536 logLevel = (logging.INFO if logFilename is None else logging.DEBUG)
537 logFilename = logFilename or opts.pop('-l',None)
538
539 if logFilename is not None:
540 setupLoggingWithDateTimeHeader(logFilename, logLevel)
541
542 verboseConnection = (opts.pop('-v',None) is not None)
543
544 async_xfer = opts.pop('-a',None)
545
546 kwarg = { k.lstrip('-'):v for k,v in opts.items() }
547
548 arg[1] = Oper.PUT if arg[1].lower() in ('w','put','a') \
549 else Oper.GET
550 if async_xfer is not None:
551 arg[1] |= Oper.NONBLOCKING
552
553 ret = io_main(sess, *arg, **kwarg) # arg[0] = data object or path
554 # arg[1] = operation: or'd flags : [PUT|GET] NONBLOCKING
555 # arg[2] = file path on local filesystem
556 # kwarg['queueLength'] sets progress-queue length (0 if no progress indication needed)
557 # kwarg options 'N' (num threads) and 'R' (target resource name) are via command-line
558 # kwarg['num_threads'] (overrides 'N' when called as a library)
559 # kwarg['target_resource_name'] (overrides 'R' when called as a library)
560 if isinstance( ret, AsyncNotify ):
561 print('waiting on completion...',file=sys.stderr)
562 ret.set_transfer_done_callback(lambda r: print('Async transfer done for:',r,file=sys.stderr))
563 done = ret.wait_until_transfer_done (timeout=10.0) # - or do other useful work here
564 if done:
565 bytes_transferred = sum(ret.futures_done.values())
566 print ('Async transfer complete. Total bytes transferred:', bytes_transferred,file=sys.stderr)
567 else:
568 print ('Async transfer was not completed before timeout expired.',file=sys.stderr)
569 else:
570 print('Synchronous transfer {}'.format('succeeded' if ret else 'failed'),file=sys.stderr)
571
572 # Note : This module requires concurrent.futures, included in Python3.x.
573 # On Python2.7, this dependency must be installed using 'pip install futures'.
574 # Demonstration :
575 #
576 # $ dd if=/dev/urandom bs=1k count=150000 of=$HOME/puttest
577 # $ time python -m irods.parallel -R demoResc -N 3 `ipwd`/test.dat put $HOME/puttest # add -v,-a for verbose, asynch
578 # $ time python -m irods.parallel -R demoResc -N 3 `ipwd`/test.dat get $HOME/gettest # add -v,-a for verbose, asynch
579 # $ diff puttest gettest
274274 new = new + padding[:lcopy]
275275
276276 return scramble_v2(new, old, signature)
277
278
279 def create_temp_password(temp_hash, source_password):
280 password = (temp_hash + source_password).ljust(100, chr(0))
281 password_md5 = hashlib.md5(password.encode('utf-8'))
282
283 # Return hexdigest
284 return password_md5.hexdigest()
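# Illustrative use of the helper above (the hash value is a made-up placeholder): the
# server issues `temp_hash' during temporary-password negotiation, and the client combines
# it with the source password to produce the digest it sends back.
#
#   create_temp_password('1a2b3c4d5e6f7a8b', 'rods')   # -> a 32-character md5 hex digest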
0 """A module providing tools for path normalization and manipulation."""
1
2 __all__ = ['iRODSPath']
3
4 import re
5 import logging
6 import os
7
8 class iRODSPath(str):
9 """A subclass of the Python string that normalizes iRODS logical paths."""
10
11 def __new__(cls, *elem_list, **kw):
12 """
13 Initializes our immutable string object with a normalized form.
14 An instance of iRODSPath is also a `str'.
15
16 Keywords may include only 'absolute'. The default is True, forcing a slash as
17 the leading character of the resulting string.
18
19 Variadic parameters are the path elements, strings which may name individual
20 collections or sub-hierarchies (internally slash-separated). These are then
21 joined using the path separator:
22
23 data_path = iRODSPath( 'myZone', 'home/user', './dir', 'mydata')
24 # => '/myZone/home/user/dir/mydata'
25
26 In the resulting object, any trailing and redundant path separators are removed,
27 as is the "trivial" path element ('.'), so this will work:
28
29 c = iRODSPath('/tempZone//home/./',username + '/')
30 session.collections.get( c )
31
32 If the `absolute' keyword hint is set to False, leading '..' elements are not
33 suppressed (since only for absolute paths is "/.." equivalent to "/"), and the
34 leading slash requirement will not be imposed on the resulting string.
35 Note also that a leading slash in the first argument will be preserved regardless
36 of the `absolute' hint, but subsequent arguments will act as relative paths
37 regardless of leading slashes. So this will do the "right thing":
38
39 my_dir = str(iRODSPath('dir1')) # => "/dir1"
40 my_rel = ""+iRODSPath('dir2', absolute=False) # => "dir2"
41 my_abs = iRODSPath('/Z/home/user', my_dir, my_rel) # => "/Z/home/user/dir1/dir2"
42
43 Finally, because iRODSPath has `str` as a base class, this is also possible:
44
45 iRODSPath('//zone/home/public/this', iRODSPath('../that',absolute=False))
46 # => "/zone/home/public/that"
47 """
48 absolute_ = kw.pop('absolute',True)
49 if kw:
50 logging.warning("These iRODSPath options have no effect: %r",kw.items())
51 normalized = _normalize_iRODS_logical_path(elem_list, absolute_)
52 obj = str.__new__(cls,normalized)
53 return obj
54
55
56 def _normalize_iRODS_logical_path(paths, make_absolute):
57 build = []
58
59 if paths and paths[0][:1] == '/':
60 make_absolute = True
61
62 p = '/'.join(paths).split('/')
63
64 while p and not p[0]:
65 p.pop(0)
66
67 prefixed_updirs = 0
68
69 # Break out and resolve updir('..') and trivial path elements like '.', ''
70
71 for elem in p:
72 if elem == '..':
73 if not build:
74 prefixed_updirs += (0 if make_absolute else 1)
75 else:
76 if build[-1]:
77 build.pop()
78 continue
79 elif elem in ('','.'):
80 continue
81 build.append(elem)
82
83 # Restore any initial updirs
84 build[:0] = ['..'] * prefixed_updirs
85
86 # Rejoin components, respecting 'make_absolute' flag
87 path_= ('/' if make_absolute else '') + '/'.join(build)
88
89 # Empty path equivalent to "current directory"
90 return '.' if not path_ else path_
00 from __future__ import absolute_import
1 import datetime
12 import logging
23 import threading
4 import os
35
46 from irods import DEFAULT_CONNECTION_TIMEOUT
57 from irods.connection import Connection
68
79 logger = logging.getLogger(__name__)
810
11 def attribute_from_return_value(attrname):
12 def deco(method):
13 def method_(self,*s,**kw):
14 ret = method(self,*s,**kw)
15 setattr(self,attrname,ret)
16 return ret
17 return method_
18 return deco
19
20 DEFAULT_APPLICATION_NAME = 'python-irodsclient'
921
1022 class Pool(object):
1123
12 def __init__(self, account):
24 def __init__(self, account, application_name='', connection_refresh_time=-1):
25 '''
26 Pool( account, application_name='', connection_refresh_time=-1 )
27 Create an iRODS connection pool; 'account' is an irods.account.iRODSAccount instance and
28 'application_name' specifies the application name as it should appear in an 'ips' listing. A positive 'connection_refresh_time' (in seconds) causes idle connections older than that interval to be replaced rather than reused.
29 '''
30
31 self._thread_local = threading.local()
1332 self.account = account
1433 self._lock = threading.RLock()
1534 self.active = set()
1635 self.idle = set()
1736 self.connection_timeout = DEFAULT_CONNECTION_TIMEOUT
37 self.application_name = ( os.environ.get('spOption','') or
38 application_name or
39 DEFAULT_APPLICATION_NAME )
1840
41 if connection_refresh_time > 0:
42 self.refresh_connection = True
43 self.connection_refresh_time = connection_refresh_time
44 else:
45 self.refresh_connection = False
46 self.connection_refresh_time = None
47
48 @property
49 def _conn(self): return getattr( self._thread_local, "_conn", None)
50
51 @_conn.setter
52 def _conn(self, conn_): setattr( self._thread_local, "_conn", conn_)
53
54 @attribute_from_return_value("_conn")
1955 def get_connection(self):
2056 with self._lock:
2157 try:
2258 conn = self.idle.pop()
59
60 curr_time = datetime.datetime.now()
61 # If 'refresh_connection' flag is True and the connection was
62 # created more than 'connection_refresh_time' seconds ago,
63 # release the connection (as it's stale) and create a new one
64 if self.refresh_connection and (curr_time - conn.create_time).total_seconds() > self.connection_refresh_time:
65 logger.debug('Connection with id {} was created more than {} seconds ago. Releasing the connection and creating a new one.'.format(id(conn), self.connection_refresh_time))
66 # Since calling disconnect() repeatedly is safe, we call disconnect()
67 # here explicitly, instead of relying on the garbage collector to clean
68 # up the object and call disconnect(). This makes the behavior of the
69 # code more predictable, as we are not relying on when the garbage collector is called.
70 conn.disconnect()
71 conn = Connection(self, self.account)
72 logger.debug("Created new connection with id: {}".format(id(conn)))
2373 except KeyError:
2474 conn = Connection(self, self.account)
75 logger.debug("No connection found in idle set. Created a new connection with id: {}".format(id(conn)))
76
2577 self.active.add(conn)
78 logger.debug("Adding connection with id {} to active set".format(id(conn)))
79
2680 logger.debug('num active: {}'.format(len(self.active)))
81 logger.debug('num idle: {}'.format(len(self.idle)))
2782 return conn
2883
2984 def release_connection(self, conn, destroy=False):
3085 with self._lock:
3186 if conn in self.active:
3287 self.active.remove(conn)
88 logger.debug("Removed connection with id: {} from active set".format(id(conn)))
3389 if not destroy:
90 # If 'refresh_connection' flag is True, update connection's 'last_used_time'
91 if self.refresh_connection:
92 conn.last_used_time = datetime.datetime.now()
3493 self.idle.add(conn)
94 logger.debug("Added connection with id: {} to idle set".format(id(conn)))
3595 elif conn in self.idle and destroy:
96 logger.debug("Destroyed connection with id: {}".format(id(conn)))
3697 self.idle.remove(conn)
98 logger.debug('num active: {}'.format(len(self.active)))
3799 logger.debug('num idle: {}'.format(len(self.idle)))
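# Pool objects are normally constructed by iRODSSession.configure() rather than directly.
# A sketch of how the application name reported in an `ips' listing can be influenced,
# per the precedence coded in __init__ above ('spOption' is the environment variable
# consulted first; `account' stands for an existing iRODSAccount instance):
#
#   import os
#   os.environ['spOption'] = 'my_ingest_tool'   # takes precedence over application_name
#   pool = Pool(account)
#   assert pool.application_name == 'my_ingest_tool'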
44 from irods.models import Model
55 from irods.column import Column, Keyword
66 from irods.message import (
7 IntegerIntegerMap, IntegerStringMap, StringStringMap,
7 IntegerIntegerMap, IntegerStringMap, StringStringMap, _OrderedMultiMapping,
88 GenQueryRequest, GenQueryResponse, empty_gen_query_out,
99 iRODSMessage, SpecificQueryRequest, GeneralAdminRequest)
1010 from irods.api_number import api_number
3535 self._limit = -1
3636 self._offset = 0
3737 self._continue_index = 0
38 self._keywords = {}
3839
3940 for arg in args:
4041 if isinstance(arg, type) and issubclass(arg, Model):
5354 new_q._limit = self._limit
5455 new_q._offset = self._offset
5556 new_q._continue_index = self._continue_index
57 new_q._keywords = self._keywords
58 return new_q
59
60 def add_keyword(self, keyword, value = ''):
61 new_q = self._clone()
62 new_q._keywords[keyword] = value
5663 return new_q
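# A brief, hedged sketch of the add_keyword() hook above; ZONE_KW is just one example of
# a condInput keyword the server accepts, and `session' is an existing iRODSSession:
#
#   import irods.keywords as kw
#   q = session.query(Collection.name).add_keyword(kw.ZONE_KW, 'otherZone')
#   for row in q:
#       print(row[Collection.name])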
5764
5865 def filter(self, *criteria):
6269
6370 def order_by(self, column, order='asc'):
6471 new_q = self._clone()
65 del new_q.columns[column]
72 new_q.columns.pop(column,None)
6673 if order == 'asc':
6774 new_q.columns[column] = query_number['ORDER_BY']
6875 elif order == 'desc':
123130 # todo store criteria for columns and criteria for keywords in separate
124131 # lists
125132 def _conds_message(self):
126 dct = dict([
133 dct = _OrderedMultiMapping([
127134 (criterion.query_key.icat_id, criterion.op + ' ' + criterion.value)
128135 for criterion in self.criteria
129136 if isinstance(criterion.query_key, Column)
137144 for criterion in self.criteria
138145 if isinstance(criterion.query_key, Keyword)
139146 ])
147 for key in self._keywords:
148 dct[ key ] = self._keywords[key]
140149 return StringStringMap(dct)
141150
142151 def _message(self):
183192
184193 def get_batches(self):
185194 result_set = self.execute()
186 yield result_set
187
188 while result_set.continue_index > 0:
189 try:
190 result_set = self.continue_index(
191 result_set.continue_index).execute()
192 yield result_set
193 except CAT_NO_ROWS_FOUND:
194 break
195
196 try:
197 yield result_set
198
199 while result_set.continue_index > 0:
200 try:
201 result_set = self.continue_index(
202 result_set.continue_index).execute()
203 yield result_set
204 except CAT_NO_ROWS_FOUND:
205 break
206 except GeneratorExit:
207 if result_set.continue_index > 0:
208 self.continue_index(result_set.continue_index).close()
195209
196210 def get_results(self):
197211 for result_set in self.get_batches():
203217
204218 def one(self):
205219 results = self.execute()
220 if results.continue_index > 0:
221 self.continue_index(results.continue_index).close()
206222 if not len(results):
207223 raise NoResultFound()
208224 if len(results) > 1:
212228 def first(self):
213229 query = self.limit(1)
214230 results = query.execute()
231 if results.continue_index > 0:
232 query.continue_index(results.continue_index).close()
215233 if not len(results):
216234 return None
217235 else:
278296 conditions = StringStringMap({})
279297
280298 sql_args = {}
281 for i, arg in enumerate(self._args[:10]):
299 for i, arg in enumerate(self._args[:10], start=1):
282300 sql_args['arg{}'.format(i)] = arg
283301
284302 message_body = SpecificQueryRequest(sql=target,
00 from __future__ import absolute_import
11 from irods.models import Resource
2 from irods.meta import iRODSMetaCollection
23 import six
34
45
3637
3738 self._meta = None
3839
40 @property
41 def metadata(self):
42 if not self._meta:
43 self._meta = iRODSMetaCollection(
44 self.manager.sess.metadata, Resource, self.name)
45 return self._meta
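# A hedged usage sketch for the new metadata property (assumes an existing `session' and
# a resource named 'demoResc'; the iRODSMetaCollection add()/items() calls follow the same
# pattern already used for collections and data objects):
#
#   resc = session.resources.get('demoResc')
#   resc.metadata.add('tier', 'archive')
#   print([(m.name, m.value) for m in resc.metadata.items()])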
3946
4047 @property
4148 def context_fields(self):
22
33 from irods.models import ModelBase
44 from six.moves import range
5 from six import PY3
6
7
8 try:
9 unicode # Python 2
10 except NameError:
11 unicode = str
512
613
714 class ResultSet(object):
4047 except (TypeError, ValueError):
4148 return (col, value)
4249
50 _str_encode = staticmethod(lambda x:x.encode('utf-8') if type(x) is unicode else x)
51
52 _get_column_values = ( lambda self,index: [(col, col.value[index]) for col in self.cols]
53 ) if PY3 else ( lambda self,index: [(col, self._str_encode(col.value[index])) for col in self.cols] )
54
4355 def _format_row(self, index):
44 values = [(col, col.value[index]) for col in self.cols]
56 values = self._get_column_values(index)
4557 return dict([self._format_attribute(col.attriInx, value) for col, value in values])
4658
4759 def __getitem__(self, index):
00 from __future__ import absolute_import
11 from irods.message import iRODSMessage, StringStringMap, RodsHostAddress, STR_PI, MsParam, MsParamArray, RuleExecutionRequest
22 from irods.api_number import api_number
3 import irods.exception as ex
4 from io import open as io_open
5 from irods.message import Message, StringProperty
6 import six
7
8 class RemoveRuleMessage(Message):
9 #define RULE_EXEC_DEL_INP_PI "str ruleExecId[NAME_LEN];"
10 _name = 'RULE_EXEC_DEL_INP_PI'
11 ruleExecId = StringProperty()
12 def __init__(self,id_):
13 super(RemoveRuleMessage,self).__init__()
14 self.ruleExecId = str(id_)
315
416 class Rule(object):
5 def __init__(self, session, rule_file=None, body='', params=None, output=''):
17 def __init__(self, session, rule_file=None, body='', params=None, output='', instance_name = None, irods_3_literal_style = False):
18 """
19 Initialize a rule object.
20
21 Arguments:
22 Use one of:
23 * rule_file : the name of an existing file containing "rule script" style code. In the context of
24 the native iRODS Rule Language, this is a file ending in '.r' and containing iRODS rules.
25 Optionally, this parameter can be a file-like object containing the rule script text.
26 * body: the text of a block of rule code (possibly including rule calls) to be run as if it were
27 the body of a rule, e.g. the part between the braces of a rule definition in the iRODS rule language.
28 * instance_name: the name of the rule engine instance in the context of which to run the rule(s).
29 * output may be set to 'ruleExecOut' if console output is expected on stderr or stdout streams.
30 * params are key/value pairs to be sent into a rule_file.
31 * irods_3_literal_style: affects the format of the @external directive. Use `True' for iRODS 3.x.
32
33 """
634 self.session = session
35
36 self.params = {}
37 self.output = ''
738
839 if rule_file:
940 self.load(rule_file)
1041 else:
11 self.body = '@external\n' + body
12 if params is None:
13 self.params = {}
42 self.body = '@external\n' + body if irods_3_literal_style \
43 else '@external rule { ' + body + ' }'
44
45 # overwrite params and output if received arguments
46 if isinstance( params , dict ):
47 if self.params:
48 self.params.update( params )
1449 else:
1550 self.params = params
51
52 if output != '':
1653 self.output = output
1754
18 def load(self, rule_file):
19 self.params = {}
20 self.output = ''
55 self.instance_name = instance_name
56
57 def remove_by_id(self,*ids):
58 with self.session.pool.get_connection() as conn:
59 for id_ in ids:
60 request = iRODSMessage("RODS_API_REQ", msg=RemoveRuleMessage(id_),
61 int_info=api_number['RULE_EXEC_DEL_AN'])
62 conn.send(request)
63 response = conn.recv()
64 if response.int_info != 0:
65 raise RuntimeError("Error removing rule {id_}".format(**locals()))
66
67 def load(self, rule_file, encoding = 'utf-8'):
68 """Load rule code with rule-file (*.r) semantics.
69
70 A "main" rule is defined first; its name does not matter. Other rules may follow, which will be
71 callable from the first rule. Any rules defined in active rule-bases within the server are
72 also callable.
73
74 The `rule_file' parameter is a filename or file-like object. It may be either:
75 - a string holding the path to a rule-file in the local filesystem, or
76 - an in-memory object (e.g. io.StringIO or io.BytesIO) whose content is that of a rule-file.
77
78 This addresses a regression in v1.1.0; see issue #336. In v1.1.1+, if rule code is passed in literally via
79 the `body' parameter of the Rule constructor, it is interpreted as if it were the body of a rule, and
80 therefore it may not contain internal rule definitions. However, if rule code is submitted as the content
81 of a file or file-like object referred to by the `rule_file' parameter of the Rule constructor, it will be
82 interpreted as .r-file content. Therefore, it must contain a main rule definition first, followed
83 possibly by others which will be callable from the main rule as if they were part of the core rule-base.
84
85 """
2186 self.body = '@external\n'
2287
23 # parse rule file
24 with open(rule_file) as f:
88
89 with (io_open(rule_file, encoding = encoding) if isinstance(rule_file,six.string_types) else rule_file
90 ) as f:
91
92 # parse rule file line-by-line
2593 for line in f:
94
95 # convert input line to Unicode if necessary
96 if isinstance(line, bytes):
97 line = line.decode(encoding)
98
2699 # parse input line
27100 if line.strip().lower().startswith('input'):
101
28102 input_header, input_line = line.split(None, 1)
103
104 if input_line.strip().lower() == 'null':
105 self.params = {}
106 continue
29107
30108 # sanity check
31109 if input_header.lower() != 'input':
52130 self.body += line
53131
54132
55 def execute(self):
56 # rule input
57 param_array = []
58 for label, value in self.params.items():
59 inOutStruct = STR_PI(myStr=value)
60 param_array.append(MsParam(label=label, type='STR_PI', inOutStruct=inOutStruct))
133 def execute(self, session_cleanup = True,
134 acceptable_errors = (ex.FAIL_ACTION_ENCOUNTERED_ERR,),
135 r_error = None,
136 return_message = ()):
137 try:
138 # rule input
139 param_array = []
140 for label, value in self.params.items():
141 inOutStruct = STR_PI(myStr=value)
142 param_array.append(MsParam(label=label, type='STR_PI', inOutStruct=inOutStruct))
61143
62 inpParamArray = MsParamArray(paramLen=len(param_array), oprType=0, MsParam_PI=param_array)
144 inpParamArray = MsParamArray(paramLen=len(param_array), oprType=0, MsParam_PI=param_array)
63145
64 # rule body
65 addr = RodsHostAddress(hostAddr='', rodsZone='', port=0, dummyInt=0)
66 condInput = StringStringMap({})
67 message_body = RuleExecutionRequest(myRule=self.body, addr=addr, condInput=condInput, outParamDesc=self.output, inpParamArray=inpParamArray)
146 # rule body
147 addr = RodsHostAddress(hostAddr='', rodsZone='', port=0, dummyInt=0)
148 condInput = StringStringMap( {} if self.instance_name is None
149 else {'instance_name':self.instance_name} )
150 message_body = RuleExecutionRequest(myRule=self.body, addr=addr, condInput=condInput, outParamDesc=self.output, inpParamArray=inpParamArray)
68151
69 request = iRODSMessage("RODS_API_REQ", msg=message_body, int_info=api_number['EXEC_MY_RULE_AN'])
152 request = iRODSMessage("RODS_API_REQ", msg=message_body, int_info=api_number['EXEC_MY_RULE_AN'])
70153
71 with self.session.pool.get_connection() as conn:
72 conn.send(request)
73 response = conn.recv()
74 out_param_array = response.get_main_message(MsParamArray)
75 self.session.cleanup()
154 with self.session.pool.get_connection() as conn:
155 conn.send(request)
156 response = conn.recv(acceptable_errors = acceptable_errors, return_message = return_message)
157 try:
158 out_param_array = response.get_main_message(MsParamArray, r_error = r_error)
159 except iRODSMessage.ResponseNotParseable:
160 return MsParamArray() # Ergo, no useful return value - but the RError stack will be accessible
161 finally:
162 if session_cleanup:
163 self.session.cleanup()
164
76165 return out_param_array
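# A hedged sketch of running a rule body against a specific rule-engine instance. The
# instance name below is the usual default for the iRODS rule language plugin in 4.2+,
# but should be adjusted to match the server's configuration:
#
#   from irods.rule import Rule
#   r = Rule(session,
#            body='writeLine("stdout","hello")',
#            output='ruleExecOut',
#            instance_name='irods_rule_engine_plugin-irods_rule_language-instance')
#   out = r.execute(session_cleanup = False)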
00 from __future__ import absolute_import
11 import os
2 import ast
23 import json
4 import errno
5 import logging
36 from irods.query import Query
47 from irods.pool import Pool
58 from irods.account import iRODSAccount
912 from irods.manager.access_manager import AccessManager
1013 from irods.manager.user_manager import UserManager, UserGroupManager
1114 from irods.manager.resource_manager import ResourceManager
15 from irods.manager.zone_manager import ZoneManager
1216 from irods.exception import NetworkException
1317 from irods.password_obfuscation import decode
14
18 from irods import NATIVE_AUTH_SCHEME, PAM_AUTH_SCHEME
19
20 logger = logging.getLogger(__name__)
21
22 class NonAnonymousLoginWithoutPassword(RuntimeError): pass
1523
1624 class iRODSSession(object):
25
26 @property
27 def env_file (self):
28 return self._env_file
29
30 @property
31 def auth_file (self):
32 return self._auth_file
1733
1834 def __init__(self, configure=True, **kwargs):
1935 self.pool = None
2036 self.numThreads = 0
21
37 self._env_file = ''
38 self._auth_file = ''
39 self.do_configure = (kwargs if configure else {})
40 self.__configured = None
2241 if configure:
23 self.configure(**kwargs)
42 self.__configured = self.configure(**kwargs)
2443
2544 self.collections = CollectionManager(self)
2645 self.data_objects = DataObjectManager(self)
2948 self.users = UserManager(self)
3049 self.user_groups = UserGroupManager(self)
3150 self.resources = ResourceManager(self)
51 self.zones = ZoneManager(self)
3252
3353 def __enter__(self):
3454 return self
3555
3656 def __exit__(self, exc_type, exc_value, traceback):
3757 self.cleanup()
58
59 def __del__(self):
60 self.do_configure = {}
61 # If self.pool has been fully initialized (i.e. no exception was
62 # raised during __init__), then try to clean up.
63 if self.pool is not None:
64 self.cleanup()
3865
3966 def cleanup(self):
4067 for conn in self.pool.active | self.pool.idle:
4370 except NetworkException:
4471 pass
4572 conn.release(True)
73 if self.do_configure:
74 self.__configured = self.configure(**self.do_configure)
4675
4776 def _configure_account(self, **kwargs):
77
4878 try:
4979 env_file = kwargs['irods_env_file']
5080
6191 return iRODSAccount(**kwargs)
6292
6393 # Get credentials from irods environment file
64 creds = self.get_irods_env(env_file)
94 creds = self.get_irods_env(env_file, session_ = self)
6595
6696 # Update with new keywords arguments only
6797 creds.update((key, value) for key, value in kwargs.items() if key not in creds)
73103 # default
74104 auth_scheme = 'native'
75105
76 if auth_scheme != 'native':
106 if auth_scheme.lower() == PAM_AUTH_SCHEME:
107 if 'password' in creds:
108 return iRODSAccount(**creds)
109 else:
110 # password will be read from the .irodsA file, therefore use native login
111 creds['irods_authentication_scheme'] = NATIVE_AUTH_SCHEME
112 elif auth_scheme != 'native':
77113 return iRODSAccount(**creds)
78114
79115 # Native auth, try to unscramble password
82118 except KeyError:
83119 pass
84120
85 creds['password'] = self.get_irods_password(**creds)
121 missing_file_path = []
122 error_args = []
123 pw = creds['password'] = self.get_irods_password(session_ = self, file_path_if_not_found = missing_file_path, **creds)
124 if not pw and creds.get('irods_user_name') != 'anonymous':
125 if missing_file_path:
126 error_args += ["Authentication file not found at {!r}".format(missing_file_path[0])]
127 raise NonAnonymousLoginWithoutPassword(*error_args)
86128
87129 return iRODSAccount(**creds)
88130
89
90131 def configure(self, **kwargs):
91 account = self._configure_account(**kwargs)
92 self.pool = Pool(account)
132 account = self.__configured
133 if not account:
134 account = self._configure_account(**kwargs)
135 connection_refresh_time = self.get_connection_refresh_time(**kwargs)
136 logger.debug("In iRODSSession's configure(). connection_refresh_time set to {}".format(connection_refresh_time))
137 self.pool = Pool(account, application_name=kwargs.pop('application_name',''), connection_refresh_time=connection_refresh_time)
138 return account
93139
94140 def query(self, *args):
95141 return Query(self, *args)
112158
113159 @property
114160 def server_version(self):
161 try:
162 reported_vsn = os.environ.get("PYTHON_IRODSCLIENT_REPORTED_SERVER_VERSION","")
163 return tuple(ast.literal_eval(reported_vsn))
164 except SyntaxError: # environment variable was malformed, empty, or unset
165 return self.__server_version()
166
167 def __server_version(self):
115168 try:
116169 conn = next(iter(self.pool.active))
117170 return conn.server_version
122175 return version
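# A hedged example of the override honored by server_version above (useful in testing;
# the tuple literal is parsed with ast.literal_eval):
#
#   $ PYTHON_IRODSCLIENT_REPORTED_SERVER_VERSION="(4,2,11)" python my_script.py
#
#   # ... session.server_version  ->  (4, 2, 11), bypassing the query to the connection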
123176
124177 @property
178 def pam_pw_negotiated(self):
179 self.pool.account.store_pw = []
180 conn = self.pool.get_connection()
181 pw = getattr(self.pool.account,'store_pw',[])
182 delattr( self.pool.account, 'store_pw')
183 conn.release()
184 return pw
185
186 @property
125187 def default_resource(self):
126188 return self.pool.account.default_resource
127189
145207 return os.path.expanduser('~/.irods/.irodsA')
146208
147209 @staticmethod
148 def get_irods_env(env_file):
149 with open(env_file, 'rt') as f:
150 return json.load(f)
210 def get_irods_env(env_file, session_ = None):
211 try:
212 with open(env_file, 'rt') as f:
213 j = json.load(f)
214 if session_ is not None:
215 session_._env_file = env_file
216 return j
217 except IOError:
218 logger.debug("Could not open file {}".format(env_file))
219 return {}
151220
152221 @staticmethod
153 def get_irods_password(**kwargs):
222 def get_irods_password(session_ = None, file_path_if_not_found = (), **kwargs):
223 path_memo = []
154224 try:
155225 irods_auth_file = kwargs['irods_authentication_file']
156226 except KeyError:
161231 except KeyError:
162232 uid = None
163233
164 with open(irods_auth_file, 'r') as f:
165 return decode(f.read().rstrip('\n'), uid)
234 _retval = ''
235
236 try:
237 with open(irods_auth_file, 'r') as f:
238 _retval = decode(f.read().rstrip('\n'), uid)
239 return _retval
240 except IOError as exc:
241 if exc.errno != errno.ENOENT:
242 raise # Auth file exists but can't be read
243 path_memo = [ irods_auth_file ]
244 return '' # No auth file (as with anonymous user)
245 finally:
246 if isinstance(file_path_if_not_found, list) and path_memo:
247 file_path_if_not_found[:] = path_memo
248 if session_ is not None and _retval:
249 session_._auth_file = irods_auth_file
250
251 def get_connection_refresh_time(self, **kwargs):
252 connection_refresh_time = -1
253
254 connection_refresh_time = int(kwargs.get('refresh_time', -1))
255 if connection_refresh_time != -1:
256 return connection_refresh_time
257
258 try:
259 env_file = kwargs['irods_env_file']
260 except KeyError:
261 return connection_refresh_time
262
263 if env_file is not None:
264 env_file_map = self.get_irods_env(env_file)
265 connection_refresh_time = int(env_file_map.get('irods_connection_refresh_time', -1))
266 if connection_refresh_time < 1:
267 # Values less than 1 are not allowed.
268 logger.debug('connection_refresh_time in {} file has value of {}. Only values of 1 or greater are allowed.'.format(env_file, connection_refresh_time))
269 connection_refresh_time = -1
270
271 return connection_refresh_time
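# Hedged examples of how the refresh interval above reaches Pool(): either pass it as a
# session keyword, or set it in the iRODS environment file the session reads.
#
#   sess = iRODSSession(irods_env_file=env_file, refresh_time=300)
#
#   # or, in ~/.irods/irods_environment.json :
#   #   { ..., "irods_connection_refresh_time": 300 }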
0 # The tests in this BATS module must be run as a (passwordless) sudo-enabled user.
1 # It is also required that the python-irodsclient be installed under the irods service account's ~/.local environment.
2
3
4 setup() {
5 local -A chars=(
6 [semicolon]=";"
7 [atsymbol]="@"
8 [equals]="="
9 [ampersand]="&"
10 )
11 [ $BATS_TEST_NUMBER = 1 ] && echo "---" >/tmp/PRC_test_issue_362
12 local name=${BATS_TEST_DESCRIPTION##*_}
13 CHR="${chars[$name]}"
14 }
15
16 TEST_THE_TEST=""
17
18 prc_test()
19 {
20 local USER="alissa"
21 local PASSWORD=$(tr "." "$CHR" <<<"my.pass")
22 echo "$USER:$PASSWORD" | sudo chpasswd
23 if [ "$TEST_THE_TEST" = 1 ]; then
24 echo -n `date`: "" >&2
25 { su - "$USER" -c "id" <<<"$PASSWORD" 2>/dev/null | grep $USER ; } >&2
26 else
27 sudo su - irods -c "env PYTHON_IRODSCLIENT_TEST_PAM_PW_OVERRIDE='$PASSWORD' python -m unittest \
28 irods.test.login_auth_test.TestLogins.test_escaped_pam_password_chars__362"
29 fi
30 } 2>> /tmp/PRC_test_issue_362
31
32 @test "test_with_atsymbol" { prc_test; }
33 @test "test_with_semicolon" { prc_test; }
34 @test "test_with_equals" { prc_test; }
35 @test "test_with_ampersand" { prc_test; }
33 import sys
44 import unittest
55 from irods.access import iRODSAccess
6 from irods.user import iRODSUser
7 from irods.session import iRODSSession
8 from irods.models import User,Collection,DataObject
9 from irods.collection import iRODSCollection
610 import irods.test.helpers as helpers
11 from irods.column import In, Like
712
813
914 class TestAccess(unittest.TestCase):
1419 # Create test collection
1520 self.coll_path = '/{}/home/{}/test_dir'.format(self.sess.zone, self.sess.username)
1621 self.coll = helpers.make_collection(self.sess, self.coll_path)
22 VERSION_DEPENDENT_STRINGS = { 'MODIFY':'modify_object', 'READ':'read_object' } if self.sess.server_version >= (4,3) \
23 else { 'MODIFY':'modify object', 'READ':'read object' }
24 self.mapping = dict( [(i,i) for i in ( 'own', VERSION_DEPENDENT_STRINGS['MODIFY'], VERSION_DEPENDENT_STRINGS['READ'])] +
25 [('write',VERSION_DEPENDENT_STRINGS['MODIFY']), ('read', VERSION_DEPENDENT_STRINGS['READ'])] )
1726
1827 def tearDown(self):
1928 '''Remove test data and close connections
2130 self.coll.remove(recurse=True, force=True)
2231 self.sess.cleanup()
2332
33
2434 def test_list_acl(self):
2535 # test args
2636 collection = self.coll_path
5666 # remove object
5767 self.sess.data_objects.unlink(path)
5868
69
70 def test_set_inherit_acl(self):
71
72 acl1 = iRODSAccess('inherit', self.coll_path)
73 self.sess.permissions.set(acl1)
74 c = self.sess.collections.get(self.coll_path)
75 self.assertTrue(c.inheritance)
76
77 acl2 = iRODSAccess('noinherit', self.coll_path)
78 self.sess.permissions.set(acl2)
79 c = self.sess.collections.get(self.coll_path)
80 self.assertFalse(c.inheritance)
81
82 def test_set_inherit_and_test_sub_objects (self):
83 DEPTH = 3
84 OBJ_PER_LVL = 1
85 deepcoll = user = None
86 test_coll_path = self.coll_path + "/test"
87 try:
88 deepcoll = helpers.make_deep_collection(self.sess, test_coll_path, object_content = 'arbitrary',
89 depth=DEPTH, objects_per_level=OBJ_PER_LVL)
90 user = self.sess.users.create('bob','rodsuser')
91 user.modify ('password','bpass')
92
93 acl_inherit = iRODSAccess('inherit', deepcoll.path)
94 acl_read = iRODSAccess('read', deepcoll.path, 'bob')
95
96 self.sess.permissions.set(acl_read)
97 self.sess.permissions.set(acl_inherit)
98
99 # create one new object and one new collection *after* ACL's are applied
100 new_object_path = test_coll_path + "/my_data_obj"
101 with self.sess.data_objects.open( new_object_path ,'w') as f: f.write(b'some_content')
102
103 new_collection_path = test_coll_path + "/my_colln_obj"
104 new_collection = self.sess.collections.create( new_collection_path )
105
106 coll_IDs = [c[Collection.id] for c in
107 self.sess.query(Collection.id).filter(Like(Collection.name , deepcoll.path + "%"))]
108
109 D_rods = list(self.sess.query(Collection.name,DataObject.name).filter(
110 In(DataObject.collection_id, coll_IDs )))
111
112 self.assertEqual (len(D_rods), OBJ_PER_LVL*DEPTH+1) # counts the 'older' objects plus one new object
113
114 with iRODSSession (port=self.sess.port, zone=self.sess.zone, host=self.sess.host,
115 user='bob', password='bpass') as bob:
116
117 D = list(bob.query(Collection.name,DataObject.name).filter(
118 In(DataObject.collection_id, coll_IDs )))
119
120 # - bob should only see the new data object, but none existing before ACLs were applied
121
122 self.assertEqual( len(D), 1 )
123 D_names = [_[Collection.name] + "/" + _[DataObject.name] for _ in D]
124 self.assertEqual( D[0][DataObject.name], 'my_data_obj' )
125
126 # - bob should be able to read the new data object
127
128 with bob.data_objects.get(D_names[0]).open('r') as f:
129 self.assertGreater( len(f.read()), 0)
130
131 C = list(bob.query(Collection).filter( In(Collection.id, coll_IDs )))
132 self.assertEqual( len(C), 2 ) # query should return only the top-level and newly created collections
133 self.assertEqual( sorted([c[Collection.name] for c in C]),
134 sorted([new_collection.path, deepcoll.path]) )
135 finally:
136 if user: user.remove()
137 if deepcoll: deepcoll.remove(force = True, recurse = True)
138
139 def test_set_inherit_acl_depth_test(self):
140 DEPTH = 3 # But test is valid for any DEPTH > 1
141 for recursionTruth in (True, False):
142 deepcoll = None
143 try:
144 test_coll_path = self.coll_path + "/test"
145 deepcoll = helpers.make_deep_collection(self.sess, test_coll_path, depth=DEPTH, objects_per_level=2)
146 acl1 = iRODSAccess('inherit', deepcoll.path)
147 self.sess.permissions.set( acl1, recursive = recursionTruth )
148 test_subcolls = set( iRODSCollection(self.sess.collections,_)
149 for _ in self.sess.query(Collection).filter(Like(Collection.name, deepcoll.path + "/%")) )
150
151 # assert top level collection affected
152 test_coll = self.sess.collections.get(test_coll_path)
153 self.assertTrue( test_coll.inheritance )
154 #
155 # assert lower level collections affected only for case when recursive = True
156 subcoll_truths = [ (_.inheritance == recursionTruth) for _ in test_subcolls ]
157 self.assertEqual( len(subcoll_truths), DEPTH-1 )
158 self.assertTrue( all(subcoll_truths) )
159 finally:
160 if deepcoll: deepcoll.remove(force = True, recurse = True)
161
162
59163 def test_set_data_acl(self):
60164 # test args
61165 collection = self.coll_path
79183 acl = self.sess.permissions.get(obj)[0] # owner's write access
80184
81185 # check values
82 self.assertEqual(acl.access_name, 'modify object')
186 self.assertEqual(acl.access_name, self.mapping['write'])
83187 self.assertEqual(acl.user_zone, user.zone)
84188 self.assertEqual(acl.user_name, user.name)
85189
105209 acl = self.sess.permissions.get(coll)[0] # owner's write access
106210
107211 # check values
108 self.assertEqual(acl.access_name, 'modify object')
212 self.assertEqual(acl.access_name, self.mapping['write'])
109213 self.assertEqual(acl.user_zone, user.zone)
110214 self.assertEqual(acl.user_name, user.name)
111215
112216 # reset permission to own
113217 acl1 = iRODSAccess('own', coll.path, user.name, user.zone)
114218 self.sess.permissions.set(acl1)
219
220 def perms_lists_symm_diff ( self, a_iter, b_iter ):
221 fields = lambda perm: (self.mapping[perm.access_name], perm.user_name, perm.user_zone)
222 A = set (map(fields,a_iter))
223 B = set (map(fields,b_iter))
224 return (A-B) | (B-A)
225
226 def test_raw_acls__207(self):
227 data = helpers.make_object(self.sess,"/".join((self.coll_path,"test_obj")))
228 eg = eu = fg = fu = None
229 try:
230 eg = self.sess.user_groups.create ('egrp')
231 eu = self.sess.users.create ('edith','rodsuser')
232 eg.addmember(eu.name,eu.zone)
233 fg = self.sess.user_groups.create ('fgrp')
234 fu = self.sess.users.create ('frank','rodsuser')
235 fg.addmember(fu.name,fu.zone)
236 my_ownership = set([('own', self.sess.username, self.sess.zone)])
237 #--collection--
238 perms1data = [ iRODSAccess ('write',self.coll_path, eg.name, self.sess.zone),
239 iRODSAccess ('read', self.coll_path, fu.name, self.sess.zone)
240 ]
241 for perm in perms1data: self.sess.permissions.set ( perm )
242 p1 = self.sess.permissions.get ( self.coll, report_raw_acls = True)
243 self.assertEqual(self.perms_lists_symm_diff( perms1data, p1 ), my_ownership)
244 #--data object--
245 perms2data = [ iRODSAccess ('write',data.path, fg.name, self.sess.zone),
246 iRODSAccess ('read', data.path, eu.name, self.sess.zone)
247 ]
248 for perm in perms2data: self.sess.permissions.set ( perm )
249 p2 = self.sess.permissions.get ( data, report_raw_acls = True)
250 self.assertEqual(self.perms_lists_symm_diff( perms2data, p2 ), my_ownership)
251 finally:
252 ids_for_delete = [ u.id for u in (fu,fg,eu,eg) if u is not None ]
253 for u in [ iRODSUser(self.sess.users,row)
254 for row in self.sess.query(User).filter(In(User.id, ids_for_delete)) ]:
255 u.remove()
115256
116257
117258 if __name__ == '__main__':
77 from irods.session import iRODSSession
88 from irods.resource import iRODSResource
99 import irods.test.helpers as helpers
10 import irods.keywords as kw
1011
1112
1213 class TestAdmin(unittest.TestCase):
152153 session.resources.add_child(comp.name, ufs1.name, 'archive')
153154 session.resources.add_child(comp.name, ufs2.name, 'cache')
154155
155 # create object on compound resource
156 obj = session.data_objects.create(obj_path, comp.name)
157
158 # write to object
159 with obj.open('w+') as obj_desc:
160 obj_desc.write(dummy_str)
161
162 # refresh object
163 obj = session.data_objects.get(obj_path)
164
165 # check that we have 2 replicas
166 self.assertEqual(len(obj.replicas), 2)
167
168 # remove object
169 obj.unlink(force=True)
170
171 # remove children from compound resource
172 session.resources.remove_child(comp.name, ufs1.name)
173 session.resources.remove_child(comp.name, ufs2.name)
174
175 # remove resources
176 ufs1.remove()
177 ufs2.remove()
178 comp.remove()
156 obj = None
157
158 try:
159 # create object on compound resource
160 obj = session.data_objects.create(obj_path, resource = comp.name)
161
162 # write to object
163 with obj.open('w+',**{kw.DEST_RESC_NAME_KW:comp.name}) as obj_desc:
164 obj_desc.write(dummy_str)
165
166 # refresh object
167 obj = session.data_objects.get(obj_path)
168
169 # check that we have 2 replicas
170 self.assertEqual(len(obj.replicas), 2)
171 finally:
172 # remove object
173 if obj: obj.unlink(force=True)
174
175 # remove children from compound resource
176 session.resources.remove_child(comp.name, ufs1.name)
177 session.resources.remove_child(comp.name, ufs2.name)
178
179 # remove resources
180 ufs1.remove()
181 ufs2.remove()
182 comp.remove()
179183
180184
181185 def test_get_resource_children(self):
262266
263267
264268 def test_make_ufs_resource(self):
269 RESC_PATH_BASE = helpers.irods_shared_tmp_dir()
270 if not(RESC_PATH_BASE) and not helpers.irods_session_host_local (self.sess):
271 self.skipTest('Requires a local server, or a tmp directory shared with the server')
265272 # test data
266273 resc_name = 'temporary_test_resource'
267274 if self.sess.server_version < (4, 0, 0):
303310 obj = self.sess.data_objects.create(obj_path, resc_name)
304311
305312 # write something to the file
306 with obj.open('w+') as obj_desc:
313 # (can omit use of DEST_RESC_NAME_KW on resolution of
314 # https://github.com/irods/irods/issues/5548 )
315 with obj.open('w+', **{kw.DEST_RESC_NAME_KW: resc_name} ) as obj_desc:
307316 obj_desc.write(dummy_str)
308317
309318 # refresh object (size has changed)
351360 self.sess.users.get(self.new_user_name)
352361
353362
363 def test_set_user_comment(self):
364 # make a new user
365 self.sess.users.create(self.new_user_name, self.new_user_type)
366
367 # modify user comment
368 new_comment = '''comment-abc123!"#$%&'()*+,-./:;<=>?@[\]^_{|}~Z''' # omitting backtick due to #170
369 self.sess.users.modify(self.new_user_name, 'comment', new_comment)
370
371 # check comment was modified
372 new_user = self.sess.users.get(self.new_user_name)
373 self.assertEqual(new_user.comment, new_comment)
374
375 # delete new user
376 self.sess.users.remove(self.new_user_name)
377
378 # user should be gone
379 with self.assertRaises(UserDoesNotExist):
380 self.sess.users.get(self.new_user_name)
381
382
383 def test_set_user_info(self):
384 # make a new user
385 self.sess.users.create(self.new_user_name, self.new_user_type)
386
387 # modify user info
388 new_info = '''info-abc123!"#$%&'()*+,-./:;<=>?@[\]^_{|}~Z''' # omitting backtick due to #170
389 self.sess.users.modify(self.new_user_name, 'info', new_info)
390
391 # check info was modified
392 new_user = self.sess.users.get(self.new_user_name)
393 self.assertEqual(new_user.info, new_info)
394
395 # delete new user
396 self.sess.users.remove(self.new_user_name)
397
398 # user should be gone
399 with self.assertRaises(UserDoesNotExist):
400 self.sess.users.get(self.new_user_name)
401
402
354403 if __name__ == '__main__':
355404 # let the tests find the parent irods lib
356405 sys.path.insert(0, os.path.abspath('../..'))
44 import socket
55 import shutil
66 import unittest
7 import time
78 from irods.meta import iRODSMetaCollection
89 from irods.exception import CollectionDoesNotExist
910 from irods.models import Collection, DataObject
1011 import irods.test.helpers as helpers
1112 import irods.keywords as kw
1213 from six.moves import range
14 from irods.test.helpers import my_function_name, unique_name
15 from irods.collection import iRODSCollection
1316
1417
1518 class TestCollection(unittest.TestCase):
3235 coll = self.sess.collections.get(self.test_coll_path)
3336 self.assertEqual(self.test_coll_path, coll.path)
3437
38 def test_irods_collection_information(self):
39 coll = self.sess.collections.get(self.test_coll_path)
40 self.assertIsNotNone(coll.create_time)
41 self.assertIsNotNone(coll.modify_time)
42 self.assertFalse(coll.inheritance)
43 self.assertIsNotNone(coll.owner_name)
44 self.assertIsNotNone(coll.owner_zone)
3545
3646 def test_append_to_collection(self):
3747 """ Append a new file to the collection"""
240250
241251
242252 def test_register_collection(self):
243 if self.sess.host not in ('localhost', socket.gethostname()):
253 tmp_dir = helpers.irods_shared_tmp_dir()
254 loc_server = self.sess.host in ('localhost', socket.gethostname())
255 if not(tmp_dir) and not(loc_server):
244256 self.skipTest('Requires access to server-side file(s)')
245257
246258 # test vars
247259 file_count = 10
248260 dir_name = 'register_test_dir'
249 dir_path = os.path.join('/tmp', dir_name)
261 dir_path = os.path.join((tmp_dir or '/tmp'), dir_name)
250262 coll_path = '{}/{}'.format(self.test_coll.path, dir_name)
251263
252264 # make test dir
271283
272284
273285 def test_register_collection_with_checksums(self):
274 if self.sess.host not in ('localhost', socket.gethostname()):
286 tmp_dir = helpers.irods_shared_tmp_dir()
287 loc_server = self.sess.host in ('localhost', socket.gethostname())
288 if not(tmp_dir) and not(loc_server):
275289 self.skipTest('Requires access to server-side file(s)')
276290
277291 # test vars
278292 file_count = 10
279 dir_name = 'register_test_dir'
280 dir_path = os.path.join('/tmp', dir_name)
293 dir_name = 'register_test_dir_with_chksums'
294 dir_path = os.path.join((tmp_dir or '/tmp'), dir_name)
281295 coll_path = '{}/{}'.format(self.test_coll.path, dir_name)
282296
283297 # make test dir
310324 shutil.rmtree(dir_path)
311325
312326
327 def test_collection_with_trailing_slash__323(self):
328 Home = helpers.home_collection(self.sess)
329 subcoll, dataobj = [unique_name(my_function_name(),time.time()) for x in range(2)]
330 subcoll_fullpath = "{}/{}".format(Home,subcoll)
331 subcoll_unnormalized = subcoll_fullpath + "/"
332 try:
333 # Test create and exists with trailing slashes.
334 self.sess.collections.create(subcoll_unnormalized)
335 c1 = self.sess.collections.get(subcoll_unnormalized)
336 c2 = self.sess.collections.get(subcoll_fullpath)
337 self.assertEqual(c1.id, c2.id)
338 self.assertTrue(self.sess.collections.exists(subcoll_unnormalized))
339
340 # Test data put to unnormalized collection name.
341 with open(dataobj, "wb") as f: f.write(b'hello')
342 self.sess.data_objects.put(dataobj, subcoll_unnormalized)
343 self.assertEqual(
344 self.sess.query(DataObject).filter(DataObject.name == dataobj).one()[DataObject.collection_id]
345 ,c1.id
346 )
347 finally:
348 if self.sess.collections.exists(subcoll_fullpath):
349 self.sess.collections.remove(subcoll_fullpath, recurse = True, force = True)
350 if os.path.exists(dataobj):
351 os.unlink(dataobj)
352
353
354 def test_concatenation__323(self):
355 coll = iRODSCollection.normalize_path('/zone/','/home/','/dan//','subdir///')
356 self.assertEqual(coll, '/zone/home/dan/subdir')
357
358 def test_object_paths_with_dot_and_dotdot__323(self):
359
360 normalize = iRODSCollection.normalize_path
361 session = self.sess
362 home = helpers.home_collection( session )
363
364 # Test requirement for collection names to be absolute
365 with self.assertRaises(iRODSCollection.AbsolutePathRequired):
366 normalize('../public', enforce_absolute = True)
367
368 # Test '.' and double slashes
369 public_home = normalize(home,'..//public/.//')
370 self.assertEqual(public_home, '/{sess.zone}/home/public'.format(sess = session))
371
372 # Test that '..' cancels the last nontrivial path element
373 subpath = normalize(home,'./collA/coll2/./../collB')
374 self.assertEqual(subpath, home + "/collA/collB")
375
376 # Test multiple '..'
377 home1 = normalize('/zone','holmes','public/../..','home','user')
378 self.assertEqual(home1, '/zone/home/user')
379 home2 = normalize('/zone','holmes','..','home','public','..','user')
380 self.assertEqual(home2, '/zone/home/user')
381
382
313383 if __name__ == "__main__":
314384 # let the tests find the parent irods lib
315385 sys.path.insert(0, os.path.abspath('../..'))
2323 def test_connection_destructor(self):
2424 conn = self.sess.pool.get_connection()
2525 conn.__del__()
26 # These asserts confirm that disconnect() is called by the connection destructor
27 self.assertIsNone(conn.socket)
28 self.assertTrue(conn._disconnected)
2629 conn.release(destroy=True)
2730
2831 def test_failed_connection(self):
3740 # set port back
3841 self.sess.pool.account.port = saved_port
3942
40 def test_send_failure(self):
43 def test_1_multiple_disconnect(self):
4144 with self.sess.pool.get_connection() as conn:
42 # try to close connection twice, 2nd one should fail
45 # disconnect() may now be called multiple times without error.
46 # (Note, here it is called implicitly upon exiting the with-block.)
4347 conn.disconnect()
44 with self.assertRaises(NetworkException):
45 conn.disconnect()
48
49 def test_2_multiple_disconnect(self):
50 conn = self.sess.pool.get_connection()
51 # disconnect() may now be called multiple times without error.
52 conn.disconnect()
53 conn.disconnect()
4654
4755 def test_reply_failure(self):
4856 with self.sess.pool.get_connection() as conn:
88 import random
99 import string
1010 import unittest
11 import contextlib # check if redundant
12 import logging
13 import io
14 import re
15 import time
16 import concurrent.futures
17 import xml.etree.ElementTree
18
1119 from irods.models import Collection, DataObject
12 from irods.session import iRODSSession
1320 import irods.exception as ex
1421 from irods.column import Criterion
1522 from irods.data_object import chunks
1623 import irods.test.helpers as helpers
1724 import irods.keywords as kw
25 from irods.manager import data_object_manager
26 from irods.message import RErrorStack
27 from irods.message import ( ET, XML_Parser_Type, default_XML_parser, current_XML_parser )
1828 from datetime import datetime
29 from tempfile import NamedTemporaryFile
30 from irods.test.helpers import (unique_name, my_function_name)
31 import irods.parallel
32 from irods.manager.data_object_manager import Server_Checksum_Warning
33
34
35 def make_ufs_resc_in_tmpdir(session, base_name, allow_local = False):
36 tmpdir = helpers.irods_shared_tmp_dir()
37 if not tmpdir and allow_local:
38 tmpdir = os.getenv('TMPDIR') or '/tmp'
39 if not tmpdir:
40 raise RuntimeError("Must have filesystem path shareable with server.")
41 full_phys_dir = os.path.join(tmpdir,base_name)
42 if not os.path.exists(full_phys_dir): os.mkdir(full_phys_dir)
43 session.resources.create(base_name,'unixfilesystem',session.host,full_phys_dir)
44 return full_phys_dir
45
1946
2047 class TestDataObjOps(unittest.TestCase):
2148
49
50 from irods.test.helpers import (create_simple_resc)
51
2252 def setUp(self):
53 # Create test collection
2354 self.sess = helpers.make_session()
24
25 # Create test collection
2655 self.coll_path = '/{}/home/{}/test_dir'.format(self.sess.zone, self.sess.username)
2756 self.coll = helpers.make_collection(self.sess, self.coll_path)
28
57 with self.sess.pool.get_connection() as conn:
58 self.SERVER_VERSION = conn.server_version
2959
3060 def tearDown(self):
3161 '''Remove test data and close connections
3363 self.coll.remove(recurse=True, force=True)
3464 self.sess.cleanup()
3565
66 @staticmethod
67 def In_Memory_Stream():
68 return io.BytesIO() if sys.version_info < (3,) else io.StringIO()
69
70
71 @contextlib.contextmanager
72 def create_resc_hierarchy (self, Root, Leaf = None):
73 if not Leaf:
74 Leaf = 'simple_leaf_resc_' + unique_name (my_function_name(), datetime.now())
75 y_value = (Root,Leaf)
76 else:
77 y_value = ';'.join([Root,Leaf])
78 self.sess.resources.create(Leaf,'unixfilesystem',
79 host = self.sess.host,
80 path='/tmp/' + Leaf)
81 self.sess.resources.create(Root,'passthru')
82 self.sess.resources.add_child(Root,Leaf)
83 try:
84 yield y_value
85 finally:
86 self.sess.resources.remove_child(Root,Leaf)
87 self.sess.resources.remove(Leaf)
88 self.sess.resources.remove(Root)
89
90 def test_data_write_stales_other_repls__ref_irods_5548(self):
91 test_data = 'irods_5548_testfile'
92 test_coll = '/{0.zone}/home/{0.username}'.format(self.sess)
93 test_path = test_coll + "/" + test_data
94 demoResc = self.sess.resources.get('demoResc').name
95 self.sess.data_objects.open(test_path, 'w',**{kw.DEST_RESC_NAME_KW: demoResc}).write(b'random dater')
96
97 with self.create_simple_resc() as newResc:
98 try:
99 with self.sess.data_objects.open(test_path, 'a', **{kw.DEST_RESC_NAME_KW: newResc}) as d:
100 d.seek(0,2)
101 d.write(b'z')
102 data = self.sess.data_objects.get(test_path)
103 statuses = { repl.resource_name: repl.status for repl in data.replicas }
104 self.assertEqual( '0', statuses[demoResc] )
105 self.assertEqual( '1', statuses[newResc] )
106 finally:
107 self.cleanup_data_object(test_path)
108
109
110 def cleanup_data_object(self,data_logical_path):
111 try:
112 self.sess.data_objects.get(data_logical_path).unlink(force = True)
113 except ex.DataObjectDoesNotExist:
114 pass
115
116
117 def write_and_check_replica_on_parallel_connections( self, data_object_path, root_resc, caller_func, required_num_replicas = 1, seconds_to_wait_for_replicas = 10):
118 """Helper function for testing irods/irods#5548 and irods/irods#5848.
119
120 Writes the string "books\n" to a replica, but not as a single write operation.
121 It is done piecewise on two independent connections, essentially simulating parallel "put".
122 Then we assert the file contents and dispose of the data object."""
123
124 try:
125 self.sess.data_objects.create(data_object_path, resource = root_resc)
126 for _ in range( seconds_to_wait_for_replicas ):
127 if required_num_replicas <= len( self.sess.data_objects.get(data_object_path).replicas ): break
128 time.sleep(1)
129 else:
130 raise RuntimeError("Did not see %d replicas" % required_num_replicas)
131 fd1 = self.sess.data_objects.open(data_object_path, 'w', **{kw.DEST_RESC_NAME_KW: root_resc} )
132 (replica_token, hier_str) = fd1.raw.replica_access_info()
133 fd2 = self.sess.data_objects.open(data_object_path, 'a', finalize_on_close = False, **{kw.RESC_HIER_STR_KW: hier_str,
134 kw.REPLICA_TOKEN_KW: replica_token})
135 fd2.seek(4) ; fd2.write(b's\n')
136 fd1.write(b'book')
137 fd2.close()
138 fd1.close()
139 with self.sess.data_objects.open(data_object_path, 'r', **{kw.DEST_RESC_NAME_KW: root_resc} ) as f:
140 self.assertEqual(f.read(), b'books\n')
141 except Exception as e:
142 logging.debug('Exception %r in [%s], called from [%s]', e, my_function_name(), caller_func)
143 raise
144 finally:
145 if 'fd2' in locals() and not fd2.closed: fd2.close()
146 if 'fd1' in locals() and not fd1.closed: fd1.close()
147 self.cleanup_data_object( data_object_path )
148
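
Distilled from the helper above, the two-connection ("parallel put") pattern it simulates looks roughly like the sketch below. This is illustrative only; 'some_path' and 'some_resc' are hypothetical, and the calls mirror those already used in the helper:

def _example_two_connection_write(session, some_path, some_resc):
    # First connection opens the replica for write and exposes its access info.
    fd1 = session.data_objects.open(some_path, 'w', **{kw.DEST_RESC_NAME_KW: some_resc})
    (replica_token, hier_str) = fd1.raw.replica_access_info()
    # Second connection addresses the same replica; finalize_on_close = False leaves
    # finalization to the connection that opened the replica first.
    fd2 = session.data_objects.open(some_path, 'a', finalize_on_close = False,
                                    **{kw.RESC_HIER_STR_KW: hier_str,
                                       kw.REPLICA_TOKEN_KW: replica_token})
    fd2.seek(4); fd2.write(b's\n')   # piecewise writes on independent connections
    fd1.write(b'book')
    fd2.close()
    fd1.close()                      # closing fd1 last finalizes the replica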
149
150 def test_parallel_conns_to_repl_with_cousin__irods_5848(self):
151 """Cousins = resource nodes not sharing any common parent nodes."""
152 data_path = '/{0.zone}/home/{0.username}/cousin_resc_5848.dat'.format(self.sess)
153
154 #
155 # -- Create replicas of a data object under two different root resources and test parallel write: --
156
157 with self.create_simple_resc() as newResc:
158
159 # - create empty data object on demoResc
160 self.sess.data_objects.open(data_path, 'w',**{kw.DEST_RESC_NAME_KW: 'demoResc'})
161
162 # - replicate data object to newResc
163 self.sess.data_objects.get(data_path).replicate(newResc)
164
165 # - test whether a write to the replica on newResc functions correctly.
166 self.write_and_check_replica_on_parallel_connections( data_path, newResc, my_function_name(), required_num_replicas = 2)
167
168
169 def test_parallel_conns_with_replResc__irods_5848(self):
170 session = self.sess
171 replication_resource = None
172 ufs_resources = []
173 replication_resource = self.sess.resources.create('repl_resc_1_5848', 'replication')
174 number_of_replicas = 2
175 # -- Create replicas of a data object by opening it on a replication resource; then, test parallel write --
176 try:
177 # Build up the replication resource with `number_of_replicas' being the # of children
178 for i in range(number_of_replicas):
179 resource_name = unique_name(my_function_name(),i)
180 resource_type = 'unixfilesystem'
181 resource_host = session.host
182 resource_path = '/tmp/' + resource_name
183 ufs_resources.append(session.resources.create(
184 resource_name, resource_type, resource_host, resource_path))
185 session.resources.add_child(replication_resource.name, resource_name)
186 data_path = '/{0.zone}/home/{0.username}/Replicated_5848.dat'.format(self.sess)
187
188 # -- Perform the parallel-write check against a single replica. Which one is unspecified;
189 # the server's vote selects it from among the `number_of_replicas` children.
190
191 self.write_and_check_replica_on_parallel_connections (data_path, replication_resource.name, my_function_name(), required_num_replicas = 2)
192 finally:
193 for resource in ufs_resources:
194 session.resources.remove_child(replication_resource.name, resource.name)
195 resource.remove()
196 if replication_resource:
197 replication_resource.remove()
198
199 def test_put_get_parallel_autoswitch_A__235(self):
200 if not self.sess.data_objects.should_parallelize_transfer(server_version_hint = self.SERVER_VERSION):
201 self.skipTest('Skip unless detected server version is 4.2.9 or later')
202 if getattr(data_object_manager,'DEFAULT_NUMBER_OF_THREADS',None) in (1, None):
203 self.skipTest('Data object manager not configured for parallel puts and gets')
204 Root = 'pt235'
205 Leaf = 'resc235'
206 files_to_delete = []
207 # This test does the following:
208 # - set up a small resource hierarchy and generate a file large enough to trigger parallel transfer
209 # - `put' the file to iRODS, then `get' it back, comparing the resulting two disk files and making
210 # sure that the parallel routines were invoked to do both transfers
211
212 with self.create_resc_hierarchy(Root) as (Root_ , Leaf):
213 self.assertEqual(Root , Root_)
214 self.assertIsInstance( Leaf, str)
215 datafile = NamedTemporaryFile (prefix='getfromhier_235_',delete=True)
216 datafile.write( os.urandom( data_object_manager.MAXIMUM_SINGLE_THREADED_TRANSFER_SIZE + 1 ))
217 datafile.flush()
218 base_name = os.path.basename(datafile.name)
219 data_obj_name = '/{0.zone}/home/{0.username}/{1}'.format(self.sess, base_name)
220 options = { kw.DEST_RESC_NAME_KW:Root,
221 kw.RESC_NAME_KW:Root }
222
223 PUT_LOG = self.In_Memory_Stream()
224 GET_LOG = self.In_Memory_Stream()
225 NumThreadsRegex = re.compile(r'^num_threads\s*=\s*(\d+)', re.MULTILINE)
226
227 try:
228 with irods.parallel.enableLogging( logging.StreamHandler, (PUT_LOG,), level_=logging.INFO):
229 self.sess.data_objects.put(datafile.name, data_obj_name, num_threads = 0, **options) # - PUT
230 match = NumThreadsRegex.search (PUT_LOG.getvalue())
231 self.assertTrue (match is not None and int(match.group(1)) >= 1) # - PARALLEL code path taken?
232
233 with irods.parallel.enableLogging( logging.StreamHandler, (GET_LOG,), level_=logging.INFO):
234 self.sess.data_objects.get(data_obj_name, datafile.name+".get", num_threads = 0, **options) # - GET
235 match = NumThreadsRegex.search (GET_LOG.getvalue())
236 self.assertTrue (match is not None and int(match.group(1)) >= 1) # - PARALLEL code path taken?
237
238 files_to_delete += [datafile.name + ".get"]
239
240 with open(datafile.name, "rb") as f1, open(datafile.name + ".get", "rb") as f2:
241 self.assertEqual ( f1.read(), f2.read() )
242
243 q = self.sess.query (DataObject.name,DataObject.resc_hier).filter( DataObject.name == base_name,
244 DataObject.resource_name == Leaf)
245 replicas = list(q)
246 self.assertEqual( len(replicas), 1 )
247 self.assertEqual( replicas[0][DataObject.resc_hier] , ';'.join([Root,Leaf]) )
248
249 finally:
250 self.sess.data_objects.unlink( data_obj_name, force = True)
251 for n in files_to_delete: os.unlink(n)
252
253 def test_open_existing_dataobj_in_resource_hierarchy__232(self):
254 Root = 'pt1'
255 Leaf = 'resc1'
256 with self.create_resc_hierarchy(Root,Leaf) as hier_str:
257 obj = None
258 try:
259 datafile = NamedTemporaryFile (prefix='getfromhier_232_',delete=True)
260 datafile.write(b'abc\n')
261 datafile.flush()
262 fname = datafile.name
263 bname = os.path.basename(fname)
264 LOGICAL = self.coll_path + '/' + bname
265 self.sess.data_objects.put(fname,LOGICAL, **{kw.DEST_RESC_NAME_KW:Root})
266 self.assertEqual([bname], [res[DataObject.name] for res in
267 self.sess.query(DataObject.name).filter(DataObject.resc_hier == hier_str)])
268 obj = self.sess.data_objects.get(LOGICAL)
269 obj.open('a') # prior to #232 fix, raises DIRECT_CHILD_ACCESS
270 finally:
271 if obj: obj.unlink(force=True)
36272
37273 def make_new_server_config_json(self, server_config_filename):
38274 # load server_config.json to inject a new rule base
54290 sha256.update(chunk)
55291 return sha256.hexdigest()
56292
293 def test_routine_verify_chksum_operation( self ):
294
295 if self.sess.server_version < (4, 2, 11):
296 self.skipTest('iRODS servers < 4.2.11 do not raise a checksum warning')
297
298 dobj_path = '/{0.zone}/home/{0.username}/verify_chksum.dat'.format(self.sess)
299 self.sess.data_objects.create(dobj_path)
300 try:
301 with self.sess.data_objects.open(dobj_path,'w') as f:
302 f.write(b'abcd')
303 checksum = self.sess.data_objects.chksum(dobj_path)
304 self.assertGreater(len(checksum),0)
305 r_err_stk = RErrorStack()
306 warning = None
307 try:
308 self.sess.data_objects.chksum(dobj_path, **{'r_error': r_err_stk, kw.VERIFY_CHKSUM_KW:''})
309 except Server_Checksum_Warning as exc_:
310 warning = exc_
311 # There's one replica and it has a checksum, so expect no errors or hints from error stack.
312 self.assertIsNone(warning)
313 self.assertEqual(0, len(r_err_stk))
314 finally:
315 self.sess.data_objects.unlink(dobj_path, force = True)
316
317 def test_verify_chksum__282_287( self ):
318
319 if self.sess.server_version < (4, 2, 11):
320 self.skipTest('iRODS servers < 4.2.11 do not raise a checksum warning')
321
322 with self.create_simple_resc() as R, self.create_simple_resc() as R2, NamedTemporaryFile(mode = 'wb') as f:
323 f.write(b'abcxyz\n')
324 f.flush()
325 coll_path = '/{0.zone}/home/{0.username}' .format(self.sess)
326 dobj_path = coll_path + '/' + os.path.basename(f.name)
327 Data = self.sess.data_objects
328 r_err_stk = RErrorStack()
329 try:
330 demoR = self.sess.resources.get('demoResc').name # Assert presence of demoResc and
331 Data.put( f.name, dobj_path ) # Establish three replicas of data object.
332 Data.replicate( dobj_path, resource = R)
333 Data.replicate( dobj_path, resource = R2)
334 my_object = Data.get(dobj_path)
335
336 my_object.chksum( **{kw.RESC_NAME_KW:demoR} ) # Make sure demoResc has the only checksummed replica of the three.
337 my_object = Data.get(dobj_path) # Refresh replica list to get checksum(s).
338
339 Baseline_repls_without_checksum = set( r.number for r in my_object.replicas if not r.checksum )
340
341 warn_exception = None
342 try:
343 my_object.chksum( r_error = r_err_stk, **{kw.VERIFY_CHKSUM_KW:''} ) # Verify checksums without auto-vivify.
344 except Server_Checksum_Warning as warn:
345 warn_exception = warn
346
347 self.assertIsNotNone(warn_exception, msg = "Expected exception of type [Server_Checksum_Warning] was not received.")
348
349 # -- Make sure integer codes are properly reflected for checksum warnings.
350 self.assertEqual (2, len([e for e in r_err_stk if e.status_ == ex.rounded_code('CAT_NO_CHECKSUM_FOR_REPLICA')]))
351
352 NO_CHECKSUM_MESSAGE_PATTERN = re.compile( r'No\s+Checksum\s+Available.+\s+Replica\s\[(\d+)\]', re.IGNORECASE)
353
354 Reported_repls_without_checksum = set( int(match.group(1)) for match in [ NO_CHECKSUM_MESSAGE_PATTERN.search(e.raw_msg_)
355 for e in r_err_stk ]
356 if match is not None )
357
358 # Ensure that VERIFY_CHKSUM_KW reported all replicas lacking a checksum
359 self.assertEqual (Reported_repls_without_checksum,
360 Baseline_repls_without_checksum)
361 finally:
362 if Data.exists (dobj_path):
363 Data.unlink (dobj_path, force = True)
364
365
366 def test_compute_chksum( self ):
367
368 with self.create_simple_resc() as R, NamedTemporaryFile(mode = 'wb') as f:
369 coll_path = '/{0.zone}/home/{0.username}' .format(self.sess)
370 dobj_path = coll_path + '/' + os.path.basename(f.name)
371 Data = self.sess.data_objects
372 try:
373 f.write(b'some content bytes ...\n')
374 f.flush()
375 Data.put( f.name, dobj_path )
376
377 # get original checksum and resource name
378 my_object = Data.get(dobj_path)
379 orig_resc = my_object.replicas[0].resource_name
380 chk1 = my_object.chksum()
381
382 # repl to new resource and iput to that new replica
383 Data.replicate( dobj_path, resource = R)
384 f.write(b'...added bytes\n')
385 f.flush()
386 Data.put( f.name, dobj_path, **{kw.DEST_RESC_NAME_KW: R,
387 kw.FORCE_FLAG_KW: '1'})
388 # compare checksums
389 my_object = Data.get(dobj_path)
390 chk2 = my_object.chksum( **{kw.RESC_NAME_KW : R} )
391 chk1b = my_object.chksum( **{kw.RESC_NAME_KW : orig_resc} )
392 self.assertEqual (chk1, chk1b)
393 self.assertNotEqual (chk1, chk2)
394
395 finally:
396 if Data.exists (dobj_path): Data.unlink (dobj_path, force = True)
397
57398
58399 def test_obj_exists(self):
59400 obj_name = 'this_object_will_exist_once_made'
69410 self.assertFalse(self.sess.data_objects.exists(does_not_exist_path))
70411
71412
413 def test_create_from_invalid_path__250(self):
414 possible_exceptions = { ex.SYS_INVALID_INPUT_PARAM: (lambda serv_vsn : serv_vsn <= (4,2,8)),
415 ex.CAT_UNKNOWN_COLLECTION: (lambda serv_vsn : (4,2,9) <= serv_vsn < (4,3,0)),
416 ex.SYS_INVALID_FILE_PATH: (lambda serv_vsn : (4,3,0) <= serv_vsn)
417 }
418 raisedExc = None
419 try:
420 self.sess.data_objects.create('t')
421 except Exception as exc:
422 raisedExc = exc
423 server_version_cond = possible_exceptions.get(type(raisedExc))
424 self.assertTrue(server_version_cond is not None)
425 self.assertTrue(server_version_cond(self.sess.server_version))
426
427
72428 def test_rename_obj(self):
73429 # test args
74430 collection = self.coll_path
98454 self.assertEqual(obj.id, saved_id)
99455
100456 # remove object
101 self.sess.data_objects.unlink(new_path)
457 self.sess.data_objects.unlink(new_path, force = True)
102458
103459
104460 def test_move_obj_to_coll(self):
130486 # remove new collection
131487 new_coll.remove(recurse=True, force=True)
132488
489 def test_copy_existing_obj_to_relative_dest_fails_irods4796(self):
490 if self.sess.server_version <= (4, 2, 7):
491 self.skipTest('iRODS servers <= 4.2.7 will give nondescriptive error')
492 obj_name = 'this_object_will_exist_once_made'
493 exists_path = '{}/{}'.format(self.coll_path, obj_name)
494 helpers.make_object(self.sess, exists_path)
495 self.assertTrue(self.sess.data_objects.exists(exists_path))
496 non_existing_zone = 'this_zone_absent'
497 relative_dst_path = '{non_existing_zone}/{obj_name}'.format(**locals())
498 options = {}
499 with self.assertRaises(ex.USER_INPUT_PATH_ERR):
500 self.sess.data_objects.copy(exists_path, relative_dst_path, **options)
501
502 def test_copy_from_nonexistent_absolute_data_obj_path_fails_irods4796(self):
503 if self.sess.server_version <= (4, 2, 7):
504 self.skipTest('iRODS servers <= 4.2.7 will hang the client')
505 non_existing_zone = 'this_zone_absent'
506 src_path = '/{non_existing_zone}/non_existing.src'.format(**locals())
507 dst_path = '/{non_existing_zone}/non_existing.dst'.format(**locals())
508 options = {}
509 with self.assertRaises(ex.USER_INPUT_PATH_ERR):
510 self.sess.data_objects.copy(src_path, dst_path, **options)
511
512 def test_copy_from_relative_path_fails_irods4796(self):
513 if self.sess.server_version <= (4, 2, 7):
514 self.skipTest('iRODS servers <= 4.2.7 will hang the client')
515 src_path = 'non_existing.src'
516 dst_path = 'non_existing.dst'
517 options = {}
518 with self.assertRaises(ex.USER_INPUT_PATH_ERR):
519 self.sess.data_objects.copy(src_path, dst_path, **options)
133520
134521 def test_copy_obj_to_obj(self):
135522 # test args
291678 obj_path = "{collection}/{filename}".format(**locals())
292679 contents = 'blah' * 100
293680 checksum = base64.b64encode(
294 hashlib.sha256(contents).digest()).decode()
681 hashlib.sha256(contents.encode()).digest()).decode()
295682
296683 # make object in test collection
297684 options = {kw.OPR_TYPE_KW: 1} # PUT_OPR
352739 # make pseudo-random test file
353740 filename = 'test_put_file_trigger_pep.txt'
354741 test_file = os.path.join('/tmp', filename)
355 contents = ''.join(random.choice(string.printable) for _ in range(1024))
742 contents = ''.join(random.choice(string.printable) for _ in range(1024)).encode()
743 contents = contents[:1024]
356744 with open(test_file, 'wb') as f:
357745 f.write(contents)
358746
463851 # delete second resource
464852 self.sess.resources.remove(resc_name)
465853
466
467854 def test_replica_number(self):
468855 if self.sess.server_version < (4, 0, 0):
469856 self.skipTest('For iRODS 4+')
481868 # make ufs resources
482869 ufs_resources = []
483870 for i in range(number_of_replicas):
484 resource_name = 'ufs{}'.format(i)
871 resource_name = unique_name(my_function_name(),i)
485872 resource_type = 'unixfilesystem'
486873 resource_host = session.host
487874 resource_path = '/tmp/' + resource_name
515902 self.assertEqual(replica.number, i)
516903
517904 # now trim odd-numbered replicas
905 # note (see irods/irods#4861): COPIES_KW might disappear in the future
906 options = {kw.COPIES_KW: 1}
518907 for i in [1, 3, 5]:
519 options = {kw.REPL_NUM_KW: str(i)}
520 obj.unlink(**options)
908 options[kw.REPL_NUM_KW] = str(i)
909 obj.trim(**options)
521910
522911 # refresh object
523912 obj = session.data_objects.get(obj_path)
542931
543932 def test_repave_replicas(self):
544933 # Can't do one step open/create with older servers
545 if self.sess.server_version <= (4, 1, 4):
934 server_vsn = self.sess.server_version
935 if server_vsn <= (4, 1, 4):
546936 self.skipTest('For iRODS 4.1.5 and newer')
547
548 number_of_replicas = 7
937 try:
938 number_of_replicas = 7
939 session = self.sess
940 zone = session.zone
941 username = session.username
942 test_dir = '/tmp'
943 filename = 'repave_replica_test_file.txt'
944 test_file = os.path.join(test_dir, filename)
945 obj_path = '/{zone}/home/{username}/{filename}'.format(**locals())
946 ufs_resources = []
947
948 # make test file
949 obj_content = u'foobar'
950 checksum = base64.b64encode(hashlib.sha256(obj_content.encode('utf-8')).digest()).decode()
951 with open(test_file, 'w') as f:
952 f.write(obj_content)
953
954 # put test file onto default resource
955 options = {kw.REG_CHKSUM_KW: ''}
956 session.data_objects.put(test_file, obj_path, **options)
957
958 # make ufs resources and replicate object
959 for i in range(number_of_replicas):
960 resource_name = unique_name(my_function_name(),i)
961 resource_type = 'unixfilesystem'
962 resource_host = session.host
963 resource_path = '/tmp/{}'.format(resource_name)
964 ufs_resources.append(session.resources.create(
965 resource_name, resource_type, resource_host, resource_path))
966
967 session.data_objects.replicate(obj_path, resource=resource_name)
968
969 # refresh object
970 obj = session.data_objects.get(obj_path)
971
972 # verify each replica's checksum
973 for replica in obj.replicas:
974 self.assertEqual(replica.checksum, 'sha2:{}'.format(checksum))
975
976 # now repave test file
977 obj_content = u'bar'
978 checksum = base64.b64encode(hashlib.sha256(obj_content.encode('utf-8')).digest()).decode()
979 with open(test_file, 'w') as f:
980 f.write(obj_content)
981
982 options = {kw.REG_CHKSUM_KW: '', kw.ALL_KW: ''}
983 session.data_objects.put(test_file, obj_path, **options)
984 obj = session.data_objects.get(obj_path)
985
986 # verify each replica's checksum
987 for replica in obj.replicas:
988 self.assertEqual(replica.checksum, 'sha2:{}'.format(checksum))
989
990 finally:
991 # remove data object
992 data = self.sess.data_objects
993 if data.exists(obj_path):
994 data.unlink(obj_path,force=True)
995 # remove ufs resources
996 for resource in ufs_resources:
997 resource.remove()
998
999 def test_get_replica_size(self):
5491000 session = self.sess
550 zone = session.zone
551 username = session.username
1001
1002 # Can't do one step open/create with older servers
1003 if session.server_version <= (4, 1, 4):
1004 self.skipTest('For iRODS 4.1.5 and newer')
1005
1006 # test vars
5521007 test_dir = '/tmp'
553 filename = 'repave_replica_test_file.txt'
1008 filename = 'get_replica_size_test_file'
5541009 test_file = os.path.join(test_dir, filename)
555 obj_path = '/{zone}/home/{username}/{filename}'.format(**locals())
556
557 # make test file
558 obj_content = u'foobar'
559 checksum = base64.b64encode(hashlib.sha256(obj_content.encode('utf-8')).digest()).decode()
560 with open(test_file, 'w') as f:
561 f.write(obj_content)
562
563 # put test file onto default resource
564 options = {kw.REG_CHKSUM_KW: ''}
565 session.data_objects.put(test_file, obj_path, **options)
566
567 # make ufs resources and replicate object
1010 collection = self.coll.path
1011
1012 # make random 16byte binary file
1013 original_size = 16
1014 with open(test_file, 'wb') as f:
1015 f.write(os.urandom(original_size))
1016
1017 # make ufs resources
5681018 ufs_resources = []
569 for i in range(number_of_replicas):
570 resource_name = 'ufs{}'.format(i)
1019 for i in range(2):
1020 resource_name = unique_name(my_function_name(),i)
5711021 resource_type = 'unixfilesystem'
5721022 resource_host = session.host
5731023 resource_path = '/tmp/{}'.format(resource_name)
5741024 ufs_resources.append(session.resources.create(
5751025 resource_name, resource_type, resource_host, resource_path))
5761026
577 session.data_objects.replicate(obj_path, resource=resource_name)
578
579 # refresh object
580 obj = session.data_objects.get(obj_path)
581
582 # verify each replica's checksum
583 for replica in obj.replicas:
584 self.assertEqual(replica.checksum, 'sha2:{}'.format(checksum))
585
586 # now repave test file
587 obj_content = u'bar'
588 checksum = base64.b64encode(hashlib.sha256(obj_content.encode('utf-8')).digest()).decode()
589 with open(test_file, 'w') as f:
590 f.write(obj_content)
591
592 # update all replicas
593 options = {kw.REG_CHKSUM_KW: '', kw.ALL_KW: ''}
594 session.data_objects.put(test_file, obj_path, **options)
595 obj = session.data_objects.get(obj_path)
596
597 # verify each replica's checksum
598 for replica in obj.replicas:
599 self.assertEqual(replica.checksum, 'sha2:{}'.format(checksum))
600
601 # remove object
602 obj.unlink(force=True)
603
604 # remove ufs resources
605 for resource in ufs_resources:
606 resource.remove()
607
608 def test_get_replica_size(self):
609 session = self.sess
610
611 # Can't do one step open/create with older servers
612 if session.server_version <= (4, 1, 4):
613 self.skipTest('For iRODS 4.1.5 and newer')
614
615 # test vars
616 test_dir = '/tmp'
617 filename = 'get_replica_size_test_file'
618 test_file = os.path.join(test_dir, filename)
619 collection = self.coll.path
620
621 # make random 16byte binary file
622 original_size = 16
623 with open(test_file, 'wb') as f:
624 f.write(os.urandom(original_size))
625
626 # make ufs resources
627 ufs_resources = []
628 for i in range(2):
629 resource_name = 'ufs{}'.format(i)
630 resource_type = 'unixfilesystem'
631 resource_host = session.host
632 resource_path = '/tmp/{}'.format(resource_name)
633 ufs_resources.append(session.resources.create(
634 resource_name, resource_type, resource_host, resource_path))
635
6361027 # put file in test collection and replicate
6371028 obj_path = '{collection}/{filename}'.format(**locals())
6381029 options = {kw.DEST_RESC_NAME_KW: ufs_resources[0].name}
6641055 # remove ufs resources
6651056 for resource in ufs_resources:
6661057 resource.remove()
1058
6671059
6681060 def test_obj_put_get(self):
6691061 # Can't do one step open/create with older servers
8241216 os.remove(new_env_file)
8251217
8261218
1219 def test_obj_put_and_return_data_object(self):
1220 # Can't do one step open/create with older servers
1221 if self.sess.server_version <= (4, 1, 4):
1222 self.skipTest('For iRODS 4.1.5 and newer')
1223
1224 # make another UFS resource
1225 session = self.sess
1226 resource_name = 'ufs'
1227 resource_type = 'unixfilesystem'
1228 resource_host = session.host
1229 resource_path = '/tmp/' + resource_name
1230 session.resources.create(resource_name, resource_type, resource_host, resource_path)
1231
1232 # set default resource to new UFS resource
1233 session.default_resource = resource_name
1234
1235 # make a local file with random text content
1236 content = ''.join(random.choice(string.printable) for _ in range(1024))
1237 filename = 'testfile.txt'
1238 file_path = os.path.join('/tmp', filename)
1239 with open(file_path, 'w') as f:
1240 f.write(content)
1241
1242 # put file
1243 collection = self.coll_path
1244 obj_path = '{collection}/{filename}'.format(**locals())
1245
1246 new_file = session.data_objects.put(file_path, obj_path, return_data_object=True)
1247
1248 # get object and confirm resource
1249 obj = session.data_objects.get(obj_path)
1250 self.assertEqual(new_file.replicas[0].resource_name, obj.replicas[0].resource_name)
1251
1252 # cleanup
1253 os.remove(file_path)
1254 obj.unlink(force=True)
1255 session.resources.remove(resource_name)
1256
1257
1258
8271259 def test_force_get(self):
8281260 # Can't do one step open/create with older servers
8291261 if self.sess.server_version <= (4, 1, 4):
8551287 os.remove(test_file)
8561288
8571289
858 def test_register(self):
1290 def test_modDataObjMeta(self):
1291 test_dir = helpers.irods_shared_tmp_dir()
8591292 # skip if server is remote
860 if self.sess.host not in ('localhost', socket.gethostname()):
1293 loc_server = self.sess.host in ('localhost', socket.gethostname())
1294 if not(test_dir) and not (loc_server):
8611295 self.skipTest('Requires access to server-side file(s)')
8621296
8631297 # test vars
864 test_dir = '/tmp'
1298 resc_name = 'testDataObjMetaResc'
8651299 filename = 'register_test_file'
866 test_file = os.path.join(test_dir, filename)
8671300 collection = self.coll.path
8681301 obj_path = '{collection}/{filename}'.format(**locals())
1302 test_path = make_ufs_resc_in_tmpdir(self.sess, resc_name, allow_local = loc_server)
1303 test_file = os.path.join(test_path, filename)
8691304
8701305 # make random 4K binary file
8711306 with open(test_file, 'wb') as f:
8721307 f.write(os.urandom(1024 * 4))
8731308
8741309 # register file in test collection
1310 self.sess.data_objects.register(test_file, obj_path, **{kw.RESC_NAME_KW:resc_name})
1311
1312 qu = self.sess.query(Collection.id).filter(Collection.name == collection)
1313 for res in qu:
1314 collection_id = res[Collection.id]
1315
1316 qu = self.sess.query(DataObject.size, DataObject.modify_time).filter(DataObject.name == filename, DataObject.collection_id == collection_id)
1317 for res in qu:
1318 self.assertEqual(int(res[DataObject.size]), 1024 * 4)
1319 self.sess.data_objects.modDataObjMeta({"objPath" : obj_path}, {"dataSize":1024, "dataModify":4096})
1320
1321 qu = self.sess.query(DataObject.size, DataObject.modify_time).filter(DataObject.name == filename, DataObject.collection_id == collection_id)
1322 for res in qu:
1323 self.assertEqual(int(res[DataObject.size]), 1024)
1324 self.assertEqual(res[DataObject.modify_time], datetime.utcfromtimestamp(4096))
1325
1326 # leave physical file on disk
1327 self.sess.data_objects.unregister(obj_path)
1328
1329 # delete file
1330 os.remove(test_file)
1331
1332
1333 def test_get_data_objects(self):
1334 # Can't do one step open/create with older servers
1335 if self.sess.server_version <= (4, 1, 4):
1336 self.skipTest('For iRODS 4.1.5 and newer')
1337
1338 # test vars
1339 test_dir = '/tmp'
1340 filename = 'get_data_objects_test_file'
1341 test_file = os.path.join(test_dir, filename)
1342 collection = self.coll.path
1343
1344 # make random 16byte binary file
1345 original_size = 16
1346 with open(test_file, 'wb') as f:
1347 f.write(os.urandom(original_size))
1348
1349 # make ufs resources
1350 ufs_resources = []
1351 for i in range(2):
1352 resource_name = unique_name(my_function_name(),i)
1353 resource_type = 'unixfilesystem'
1354 resource_host = self.sess.host
1355 resource_path = '/tmp/{}'.format(resource_name)
1356 ufs_resources.append(self.sess.resources.create(
1357 resource_name, resource_type, resource_host, resource_path))
1358
1359
1360 # make passthru resource and add ufs1 as a child
1361 passthru_resource = self.sess.resources.create('pt', 'passthru')
1362 self.sess.resources.add_child(passthru_resource.name, ufs_resources[1].name)
1363
1364 # put file in test collection and replicate
1365 obj_path = '{collection}/{filename}'.format(**locals())
1366 options = {kw.DEST_RESC_NAME_KW: ufs_resources[0].name}
1367 self.sess.data_objects.put(test_file, '{collection}/'.format(**locals()), **options)
1368 self.sess.data_objects.replicate(obj_path, passthru_resource.name)
1369
1370 # ensure that replica info is populated
1371 obj = self.sess.data_objects.get(obj_path)
1372 for i in ["number","status","resource_name","path","resc_hier"]:
1373 self.assertIsNotNone(obj.replicas[0].__getattribute__(i))
1374 self.assertIsNotNone(obj.replicas[1].__getattribute__(i))
1375
1376 # ensure replica info is sensible
1377 for i in range(2):
1378 self.assertEqual(obj.replicas[i].number, i)
1379 self.assertEqual(obj.replicas[i].status, '1')
1380 self.assertEqual(obj.replicas[i].path.split('/')[-1], filename)
1381 self.assertEqual(obj.replicas[i].resc_hier.split(';')[-1], ufs_resources[i].name)
1382
1383 self.assertEqual(obj.replicas[0].resource_name, ufs_resources[0].name)
1384 if self.sess.server_version < (4, 2, 0):
1385 self.assertEqual(obj.replicas[i].resource_name, passthru_resource.name)
1386 else:
1387 self.assertEqual(obj.replicas[i].resource_name, ufs_resources[1].name)
1388 self.assertEqual(obj.replicas[1].resc_hier.split(';')[0], passthru_resource.name)
1389
1390 # remove object
1391 obj.unlink(force=True)
1392 # delete file
1393 os.remove(test_file)
1394
1395 # remove resources
1396 self.sess.resources.remove_child(passthru_resource.name, ufs_resources[1].name)
1397 passthru_resource.remove()
1398 for resource in ufs_resources:
1399 resource.remove()
1400
1401
1402 def test_register(self):
1403 test_dir = helpers.irods_shared_tmp_dir()
1404 loc_server = self.sess.host in ('localhost', socket.gethostname())
1405 if not(test_dir) and not(loc_server):
1406 self.skipTest('data_obj register requires that the server have access to local or shared files')
1407
1408 # test vars
1409 resc_name = "testRegisterOpResc"
1410 filename = 'register_test_file'
1411 collection = self.coll.path
1412 obj_path = '{collection}/{filename}'.format(**locals())
1413
1414 test_path = make_ufs_resc_in_tmpdir(self.sess,resc_name, allow_local = loc_server)
1415 test_file = os.path.join(test_path, filename)
1416
1417 # make random 4K binary file
1418 with open(test_file, 'wb') as f:
1419 f.write(os.urandom(1024 * 4))
1420
1421 # register file in test collection
8751422 self.sess.data_objects.register(test_file, obj_path)
8761423
8771424 # confirm object presence
8861433
8871434
8881435 def test_register_with_checksum(self):
889 # skip if server is remote
890 if self.sess.host not in ('localhost', socket.gethostname()):
891 self.skipTest('Requires access to server-side file(s)')
1436 test_dir = helpers.irods_shared_tmp_dir()
1437 loc_server = self.sess.host in ('localhost', socket.gethostname())
1438 if not(test_dir) and not(loc_server):
1439 self.skipTest('data_obj register requires that the server have access to local or shared files')
8921440
8931441 # test vars
894 test_dir = '/tmp'
1442 resc_name= 'regWithChksumResc'
8951443 filename = 'register_test_file'
896 test_file = os.path.join(test_dir, filename)
8971444 collection = self.coll.path
8981445 obj_path = '{collection}/{filename}'.format(**locals())
1446
1447 test_path = make_ufs_resc_in_tmpdir(self.sess, resc_name, allow_local = loc_server)
1448 test_file = os.path.join(test_path, filename)
8991449
9001450 # make random 4K binary file
9011451 with open(test_file, 'wb') as f:
9021452 f.write(os.urandom(1024 * 4))
9031453
9041454 # register file in test collection
905 options = {kw.VERIFY_CHKSUM_KW: ''}
1455 options = {kw.VERIFY_CHKSUM_KW: '', kw.RESC_NAME_KW: resc_name}
9061456 self.sess.data_objects.register(test_file, obj_path, **options)
9071457
9081458 # confirm object presence and verify checksum
9191469 # delete file
9201470 os.remove(test_file)
9211471
922 def test_modDataObjMeta(self):
923 # skip if server is remote
924 if self.sess.host not in ('localhost', socket.gethostname()):
925 self.skipTest('Requires access to server-side file(s)')
1472
1473 def test_object_names_with_nonprintable_chars (self):
1474 if (4,2,8) < self.sess.server_version < (4,2,11):
1475 self.skipTest('4.2.9 and 4.2.10 are known to fail as apostrophes in object names are problematic')
1476 test_dir = helpers.irods_shared_tmp_dir()
1477 loc_server = self.sess.host in ('localhost', socket.gethostname())
1478 if not(test_dir) and not(loc_server):
1479 self.skipTest('data_obj register requires that the server have access to local or shared files')
1480 temp_names = []
1481 vault = ''
1482 try:
1483 resc_name = 'regWithNonPrintableNamesResc'
1484 vault = make_ufs_resc_in_tmpdir(self.sess, resc_name, allow_local = loc_server)
1485 def enter_file_into_irods( session, filename, **kw_opt ):
1486 ET( XML_Parser_Type.QUASI_XML, session.server_version)
1487 basename = os.path.basename(filename)
1488 logical_path = '/{0.zone}/home/{0.username}/{basename}'.format(session,**locals())
1489 bound_method = getattr(session.data_objects, kw_opt['method'])
1490 bound_method( os.path.abspath(filename), logical_path, **kw_opt['options'] )
1491 d = session.data_objects.get(logical_path)
1492 Path_Good = (d.path == logical_path)
1493 session.data_objects.unlink( logical_path, force = True )
1494 session.cleanup()
1495 return Path_Good
1496 futr = []
1497 threadpool = concurrent.futures.ThreadPoolExecutor()
1498 fname = re.sub( r'[/]', '',
1499 ''.join(map(chr,range(1,128))) )
1500 for opts in [
1501 {'method':'put', 'options':{}},
1502 {'method':'register','options':{kw.RESC_NAME_KW: resc_name}, 'dir':(test_dir or None)}
1503 ]:
1504 with NamedTemporaryFile(prefix=opts["method"]+"_"+fname, dir=opts.get("dir"), delete=False) as f:
1505 f.write(b'hello')
1506 temp_names += [f.name]
1507 ses = helpers.make_session()
1508 futr.append( threadpool.submit( enter_file_into_irods, ses, f.name, **opts ))
1509 results = [ f.result() for f in futr ]
1510 self.assertEqual (results, [True, True])
1511 finally:
1512 for name in temp_names:
1513 if os.path.exists(name):
1514 os.unlink(name)
1515 if vault:
1516 self.sess.resources.remove( resc_name )
1517 self.assertIs( default_XML_parser(), current_XML_parser() )
1518
1519 def test_register_with_xml_special_chars(self):
1520 test_dir = helpers.irods_shared_tmp_dir()
1521 loc_server = self.sess.host in ('localhost', socket.gethostname())
1522 if not(test_dir) and not(loc_server):
1523 self.skipTest('data_obj register requires that the server have access to local or shared files')
9261524
9271525 # test vars
928 test_dir = '/tmp'
929 filename = 'register_test_file'
930 test_file = os.path.join(test_dir, filename)
1526 resc_name = 'regWithXmlSpecialCharsResc'
9311527 collection = self.coll.path
932 obj_path = '{collection}/{filename}'.format(**locals())
933
934 # make random 4K binary file
935 with open(test_file, 'wb') as f:
936 f.write(os.urandom(1024 * 4))
937
938 # register file in test collection
939 self.sess.data_objects.register(test_file, obj_path)
940
941 qu = self.sess.query(Collection.id).filter(Collection.name == collection)
942 for res in qu:
943 collection_id = res[Collection.id]
944
945 qu = self.sess.query(DataObject.size, DataObject.modify_time).filter(DataObject.name == filename, DataObject.collection_id == collection_id)
946 for res in qu:
947 self.assertEqual(int(res[DataObject.size]), 1024 * 4)
948 self.sess.data_objects.modDataObjMeta({"objPath" : obj_path}, {"dataSize":1024, "dataModify":4096})
949
950 qu = self.sess.query(DataObject.size, DataObject.modify_time).filter(DataObject.name == filename, DataObject.collection_id == collection_id)
951 for res in qu:
952 self.assertEqual(int(res[DataObject.size]), 1024)
953 self.assertEqual(res[DataObject.modify_time], datetime.utcfromtimestamp(4096))
954
955 # leave physical file on disk
956 self.sess.data_objects.unregister(obj_path)
957
958 # delete file
959 os.remove(test_file)
960
961 def test_register_with_xml_special_chars(self):
962 # skip if server is remote
963 if self.sess.host not in ('localhost', socket.gethostname()):
964 self.skipTest('Requires access to server-side file(s)')
965
966 # test vars
967 test_dir = '/tmp'
9681528 filename = '''aaa'"<&test&>"'_file'''
969 test_file = os.path.join(test_dir, filename)
970 collection = self.coll.path
971 obj_path = '{collection}/{filename}'.format(**locals())
972
973 # make random 4K binary file
974 with open(test_file, 'wb') as f:
975 f.write(os.urandom(1024 * 4))
976
977 # register file in test collection
978 print('registering [' + obj_path + ']')
979 self.sess.data_objects.register(test_file, obj_path)
980
981 # confirm object presence
982 print('getting [' + obj_path + ']')
983 obj = self.sess.data_objects.get(obj_path)
984
985 # in a real use case we would likely
986 # want to leave the physical file on disk
987 print('unregistering [' + obj.path + ']')
988 obj.unregister()
989
990 # delete file
991 os.remove(test_file)
1529 test_path = make_ufs_resc_in_tmpdir(self.sess, resc_name, allow_local = loc_server)
1530 try:
1531 test_file = os.path.join(test_path, filename)
1532 obj_path = '{collection}/{filename}'.format(**locals())
1533
1534 # make random 4K binary file
1535 with open(test_file, 'wb') as f:
1536 f.write(os.urandom(1024 * 4))
1537
1538 # register file in test collection
1539 self.sess.data_objects.register(test_file, obj_path, **{kw.RESC_NAME_KW: resc_name})
1540
1541 # confirm object presence
1542 obj = self.sess.data_objects.get(obj_path)
1543
1544 finally:
1545 # in a real use case we would likely
1546 # want to leave the physical file on disk
1547 obj.unregister()
1548 # delete file
1549 os.remove(test_file)
1550 # delete resource
1551 self.sess.resources.get(resc_name).remove()
9921552
9931553
9941554 if __name__ == '__main__':
00 #! /usr/bin/env python
1 from __future__ import print_function
12 from __future__ import absolute_import
23 import os
34 import sys
89
910 class TestContinueQuery(unittest.TestCase):
1011
12 @classmethod
13 def setUpClass(cls):
14 # once only (before all tests), set up large collection
15 print ("Creating a large collection...", file = sys.stderr)
16 with helpers.make_session() as sess:
17 # Create test collection
18 cls.coll_path = '/{}/home/{}/test_dir'.format(sess.zone, sess.username)
19 cls.obj_count = 2500
20 cls.coll = helpers.make_test_collection( sess, cls.coll_path, cls.obj_count)
21
1122 def setUp(self):
23 # open the session (per-test)
1224 self.sess = helpers.make_session()
1325
14 # Create test collection
15 self.coll_path = '/{}/home/{}/test_dir'.format(self.sess.zone, self.sess.username)
16 self.obj_count = 2500
17 self.coll = helpers.make_test_collection(
18 self.sess, self.coll_path, self.obj_count)
26 def tearDown(self):
27 # close the session (per-test)
28 self.sess.cleanup()
1929
20 def tearDown(self):
21 '''Remove test data and close connections
22 '''
23 self.coll.remove(recurse=True, force=True)
24 self.sess.cleanup()
30 @classmethod
31 def tearDownClass(cls):
32 """Remove test data."""
33 # once only (after all tests), delete large collection
34 print ("Deleting the large collection...", file = sys.stderr)
35 with helpers.make_session() as sess:
36 sess.collections.remove(cls.coll_path, recurse=True, force=True)
2537
2638 def test_walk_large_collection(self):
2739 for current_coll, subcolls, objects in self.coll.walk():
0 #! /usr/bin/env python
1 from __future__ import absolute_import
2 import os
3 import sys
4 import unittest
5
6 from irods.exception import OVERWRITE_WITHOUT_FORCE_FLAG
7 import irods.test.helpers as helpers
8
9 class TestForceCreate(unittest.TestCase):
10
11 def setUp(self):
12 self.sess = helpers.make_session()
13
14 def tearDown(self):
15 """Close connections."""
16 self.sess.cleanup()
17
18 # This test should pass whether or not federation is configured:
19 def test_force_create(self):
20 if self.sess.server_version > (4, 2, 8):
21 self.skipTest('force flag unneeded for create in iRODS > 4.2.8')
22 session = self.sess
23 FILE = '/{session.zone}/home/{session.username}/a.txt'.format(**locals())
24 try:
25 session.data_objects.unlink(FILE)
26 except:
27 pass
28 error = None
29 try:
30 session.data_objects.create(FILE)
31 session.data_objects.create(FILE)
32 except OVERWRITE_WITHOUT_FORCE_FLAG:
33 error = "OVERWRITE_WITHOUT_FORCE_FLAG"
34 self.assertEqual (error, "OVERWRITE_WITHOUT_FORCE_FLAG")
35 error = None
36 try:
37 session.data_objects.create(FILE, force=True)
38 except:
39 error = "Error creating with force"
40 self.assertEqual (error, None)
41 try:
42 session.data_objects.unlink(FILE)
43 except:
44 error = "Error cleaning up"
45 self.assertEqual (error, None)
46
47
48 if __name__ == '__main__':
49 # let the tests find the parent irods lib
50 sys.path.insert(0, os.path.abspath('../..'))
51 unittest.main()
66 import hashlib
77 import base64
88 import math
9 import socket
10 import inspect
11 import threading
12 import random
13 import datetime
14 import json
915 from pwd import getpwnam
1016 from irods.session import iRODSSession
1117 from irods.message import iRODSMessage
18 from irods.password_obfuscation import encode
1219 from six.moves import range
1320
1421
22 def my_function_name():
23 """Returns the name of the calling function or method"""
24 return inspect.getframeinfo(inspect.currentframe().f_back).function
25
26
27 _thrlocal = threading.local()
28
29 def unique_name(*seed_tuple):
30 '''For deterministic pseudo-random identifiers based on function/method name
31 to prevent e.g. ICAT collisions within and between tests. Example use:
32
33 def f(session):
34 seq_num = 1
35 a_name = unique_name( my_function_name(), seq_num # [, *optional_further_args]
36 )
37 seq_num += 1
38 session.resources.create( a_name, 'unixfilesystem', session.host, '/tmp/' + a_name )
39 '''
40 if not getattr(_thrlocal,"rand_gen",None) : _thrlocal.rand_gen = random.Random()
41 _thrlocal.rand_gen.seed(seed_tuple)
42 return '%016X' % _thrlocal.rand_gen.randint(0,(1<<64)-1)
43
44
45 IRODS_SHARED_DIR = os.path.join( os.path.sep, 'irods_shared' )
46 IRODS_SHARED_TMP_DIR = os.path.join(IRODS_SHARED_DIR,'tmp')
47 IRODS_SHARED_REG_RESC_VAULT = os.path.join(IRODS_SHARED_DIR,'reg_resc')
48
49 IRODS_REG_RESC = 'MyRegResc'
50
51 def irods_shared_tmp_dir():
52 pth = IRODS_SHARED_TMP_DIR
53 can_write = False
54 if os.path.exists(pth):
55 try: tempfile.NamedTemporaryFile(dir = pth)
56 except: pass
57 else: can_write = True
58 return pth if can_write else ''
59
60 def irods_shared_reg_resc_vault() :
61 vault = IRODS_SHARED_REG_RESC_VAULT
62 if os.path.exists(vault):
63 return vault
64 else:
65 return None
66
67 def get_register_resource(session):
68 vault_path = irods_shared_reg_resc_vault()
69 Reg_Resc_Name = ''
70 if vault_path:
71 session.resources.create(IRODS_REG_RESC, 'unixfilesystem', session.host, vault_path)
72 Reg_Resc_Name = IRODS_REG_RESC
73 return Reg_Resc_Name
74
75
76 def make_environment_and_auth_files( dir_, **params ):
77 if not os.path.exists(dir_): os.mkdir(dir_)
78 def recast(k):
79 return 'irods_' + k + ('_name' if k in ('user','zone') else '')
80 config = os.path.join(dir_,'irods_environment.json')
81 with open(config,'w') as f1:
82 json.dump({recast(k):v for k,v in params.items() if k != 'password'},f1,indent=4)
83 auth = os.path.join(dir_,'.irodsA')
84 with open(auth,'w') as f2:
85 f2.write(encode(params['password']))
86 os.chmod(auth,0o600)
87 return (config, auth)
88
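A hedged usage sketch of the helper above (the directory and credential values below are placeholders, not part of the test suite). The helper writes a client environment file plus an obfuscated .irodsA; a session can then be built from them by pointing iRODSSession at the environment file and IRODS_AUTHENTICATION_FILE at the auth file:

def _example_environment_and_auth_files():
    # Hypothetical values; substitute those of an actual test deployment.
    config, auth = make_environment_and_auth_files(
        os.path.expanduser('~/.irods.example'),
        host = 'localhost', port = 1247, zone = 'tempZone',
        user = 'alice', password = 'apass')
    os.environ['IRODS_AUTHENTICATION_FILE'] = auth
    with iRODSSession(irods_env_file = config) as session:
        print(session.server_version)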
89
1590 def make_session(**kwargs):
1691 try:
17 env_file = kwargs['irods_env_file']
92 env_file = kwargs.pop('irods_env_file')
1893 except KeyError:
1994 try:
2095 env_file = os.environ['IRODS_ENVIRONMENT_FILE']
27102 except KeyError:
28103 uid = None
29104
30 return iRODSSession(irods_authentication_uid=uid, irods_env_file=env_file)
105 return iRODSSession( irods_authentication_uid = uid, irods_env_file = env_file, **kwargs )
106
107
108 def home_collection(session):
109 return "/{0.zone}/home/{0.username}".format(session)
31110
32111
33112 def make_object(session, path, content=None, **options):
36115
37116 content = iRODSMessage.encode_unicode(content)
38117
39 # 2 step open-create necessary for iRODS 4.1.4 or older
40 obj = session.data_objects.create(path)
41 with obj.open('w', **options) as obj_desc:
42 obj_desc.write(content)
118 if session.server_version <= (4,1,4):
119 # 2 step open-create necessary for iRODS 4.1.4 or older
120 obj = session.data_objects.create(path)
121 with obj.open('w', **options) as obj_desc:
122 obj_desc.write(content)
123 else:
124 with session.data_objects.open(path, 'w', **options) as obj_desc:
125 obj_desc.write(content)
43126
44127 # refresh object after write
45128 return session.data_objects.get(path)
108191 with open(file_path, 'wb') as f:
109192 f.write(os.urandom(file_size))
110193
194 @contextlib.contextmanager
195 def create_simple_resc (self, rescName = None):
196 if not rescName:
197 rescName = 'simple_resc_' + unique_name (my_function_name() + '_simple_resc', datetime.datetime.now())
198 created = False
199 try:
200 self.sess.resources.create(rescName,
201 'unixfilesystem',
202 host = self.sess.host,
203 path = '/tmp/' + rescName)
204 created = True
205 yield rescName
206 finally:
207 if created:
208 self.sess.resources.remove(rescName)
209
210 @contextlib.contextmanager
211 def create_simple_resc_hierarchy (self, Root, Leaf):
212 d = tempfile.mkdtemp()
213 self.sess.resources.create(Leaf,'unixfilesystem',
214 host = self.sess.host,
215 path=d)
216 self.sess.resources.create(Root,'passthru')
217 self.sess.resources.add_child(Root,Leaf)
218 try:
219 yield ';'.join([Root,Leaf])
220 finally:
221 self.sess.resources.remove_child(Root,Leaf)
222 self.sess.resources.remove(Leaf)
223 self.sess.resources.remove(Root)
224 shutil.rmtree(d)
225
111226
112227 def chunks(f, chunksize=io.DEFAULT_BUFFER_SIZE):
113228 return iter(lambda: f.read(chunksize), b'')
121236 hasher.update(chunk)
122237
123238 return base64.b64encode(hasher.digest()).decode()
239
240
241 def remove_unused_metadata(session):
242 from irods.message import GeneralAdminRequest
243 from irods.api_number import api_number
244 message_body = GeneralAdminRequest( 'rm', 'unusedAVUs', '','','','')
245 req = iRODSMessage("RODS_API_REQ", msg = message_body,int_info=api_number['GENERAL_ADMIN_AN'])
246 with session.pool.get_connection() as conn:
247 conn.send(req)
248 response=conn.recv()
249 if (response.int_info != 0): raise RuntimeError("Error removing unused AVUs")
124250
125251
126252 @contextlib.contextmanager
131257 yield filename
132258 finally:
133259 shutil.copyfile(f.name, filename)
260
261
262 def irods_session_host_local (sess):
263 return socket.gethostbyname(sess.host) == \
264 socket.gethostbyname(socket.gethostname())
0 #! /usr/bin/env python
1 from __future__ import print_function
2 from __future__ import absolute_import
3 import os
4 import sys
5 import tempfile
6 import unittest
7 import textwrap
8 import json
9 import shutil
10 import ssl
11 import irods.test.helpers as helpers
12 from irods.connection import Connection
13 from irods.session import iRODSSession, NonAnonymousLoginWithoutPassword
14 from irods.rule import Rule
15 from irods.models import User
16 from socket import gethostname
17 from irods.password_obfuscation import (encode as pw_encode)
18 from irods.connection import PlainTextPAMPasswordError
19 from irods.access import iRODSAccess
20 import irods.exception as ex
21 import contextlib
22 import socket
23 from re import compile as regex
24 import gc
25 import six
26
27 #
28 # Allow override to specify the PAM password in effect for the test rodsuser.
29 #
30 TEST_PAM_PW_OVERRIDE = os.environ.get('PYTHON_IRODSCLIENT_TEST_PAM_PW_OVERRIDE','')
31 TEST_PAM_PW = TEST_PAM_PW_OVERRIDE or 'test123'
32
33 TEST_IRODS_PW = 'apass'
34 TEST_RODS_USER = 'alissa'
35
36
37 try:
38 from re import _pattern_type as regex_type
39 except ImportError:
40 from re import Pattern as regex_type # Python 3.7+
41
42
43 def json_file_update(fname,keys_to_delete=(),**kw):
44 with open(fname,'r') as f:
45 j = json.load(f)
46 j.update(**kw)
47 for k in keys_to_delete:
48 if k in j: del j [k]
49 elif isinstance(k,regex_type):
50 jk = [i for i in j.keys() if k.search(i)]
51 for ky in jk: del j[ky]
52 with open(fname,'w') as out:
53 json.dump(j, out, indent=4)
54
55 def env_dir_fullpath(authtype): return os.path.join( os.environ['HOME'] , '.irods.' + authtype)
56 def json_env_fullpath(authtype): return os.path.join( env_dir_fullpath(authtype), 'irods_environment.json')
57 def secrets_fullpath(authtype): return os.path.join( env_dir_fullpath(authtype), '.irodsA')
58
59 SERVER_ENV_PATH = os.path.expanduser('~irods/.irods/irods_environment.json')
60
61 SERVER_ENV_SSL_SETTINGS = {
62 "irods_ssl_certificate_chain_file": "/etc/irods/ssl/irods.crt",
63 "irods_ssl_certificate_key_file": "/etc/irods/ssl/irods.key",
64 "irods_ssl_dh_params_file": "/etc/irods/ssl/dhparams.pem",
65 "irods_ssl_ca_certificate_file": "/etc/irods/ssl/irods.crt",
66 "irods_ssl_verify_server": "cert"
67 }
68
69 def update_service_account_for_SSL():
70 json_file_update( SERVER_ENV_PATH, **SERVER_ENV_SSL_SETTINGS )
71
72 CLIENT_OPTIONS_FOR_SSL = {
73 "irods_client_server_policy": "CS_NEG_REQUIRE",
74 "irods_client_server_negotiation": "request_server_negotiation",
75 "irods_ssl_ca_certificate_file": "/etc/irods/ssl/irods.crt",
76 "irods_ssl_verify_server": "cert",
77 "irods_encryption_key_size": 16,
78 "irods_encryption_salt_size": 8,
79 "irods_encryption_num_hash_rounds": 16,
80 "irods_encryption_algorithm": "AES-256-CBC"
81 }
82
83
84 def client_env_from_server_env(user_name, auth_scheme=""):
85 cli_env = {}
86 with open(SERVER_ENV_PATH) as f:
87 srv_env = json.load(f)
88 for k in [ "irods_host", "irods_zone_name", "irods_port" ]:
89 cli_env [k] = srv_env[k]
90 cli_env["irods_user_name"] = user_name
91 if auth_scheme:
92 cli_env["irods_authentication_scheme"] = auth_scheme
93 return cli_env
94
95 @contextlib.contextmanager
96 def pam_password_in_plaintext(allow=True):
97 saved = bool(Connection.DISALLOWING_PAM_PLAINTEXT)
98 try:
99 Connection.DISALLOWING_PAM_PLAINTEXT = not(allow)
100 yield
101 finally:
102 Connection.DISALLOWING_PAM_PLAINTEXT = saved
103
104
105 class TestLogins(unittest.TestCase):
106 '''
107 Ideally, these tests should move into CI, but that would require the server
108 (currently a different node than the client) to have SSL certs created and
109 enabled.
110
111 Until then, we require these tests to be run manually on a server node,
112 with:
113
114 python -m unittest 'irods.test.login_auth_test[.XX[.YY]]'
115
116 Additionally:
117
118 1. The PAM/SSL tests under the TestLogins class should be run on a
119 single-node iRODS system, by the service account user. This ensures
120 the /etc/irods directory is local and writable.
121
122 2. ./setupssl.py (sets up SSL keys etc. in /etc/irods/ssl) should be run
123 first to create (or overwrite, if appropriate) the /etc/irods/ssl directory
124 and its contents.
125
126 3. Configuration entries must be added and overridden in /var/lib/irods/irods_environment,
127 per https://slides.com/irods/ugm2018-ssl-and-pam-configuration#/3/7
128
129 '''
130
131 user_auth_envs = {
132 '.irods.pam': {
133 'USER': TEST_RODS_USER,
134 'PASSWORD': TEST_PAM_PW,
135 'AUTH': 'pam'
136 },
137 '.irods.native': {
138 'USER': TEST_RODS_USER,
139 'PASSWORD': TEST_IRODS_PW,
140 'AUTH': 'native'
141 }
142 }
143
144 env_save = {}
145
146 @contextlib.contextmanager
147 def setenv(self,var,newvalue):
148 try:
149 self.env_save[var] = os.environ.get(var,None)
150 os.environ[var] = newvalue
151 yield newvalue
152 finally:
153 oldvalue = self.env_save[var]
154 if oldvalue is None:
155 del os.environ[var]
156 else:
157 os.environ[var]=oldvalue
158
159 def create_env_dirs(self):
160 dirs = {}
161 retval = []
162 # -- create environment configurations and secrets
163 with pam_password_in_plaintext():
164 for dirname,lookup in self.user_auth_envs.items():
165 if lookup['AUTH'] == 'pam':
166 ses = iRODSSession( host=gethostname(),
167 user=lookup['USER'],
168 zone='tempZone',
169 authentication_scheme=lookup['AUTH'],
170 password=lookup['PASSWORD'],
171 port= 1247 )
172 try:
173 pam_hashes = ses.pam_pw_negotiated
174 except AttributeError:
175 pam_hashes = []
176 if not pam_hashes: print('Warning ** PAM pw could not be generated'); break
177 scrambled_pw = pw_encode( pam_hashes[0] )
178 #elif lookup['AUTH'] == 'XXXXXX': # TODO: insert other authentication schemes here
179 elif lookup['AUTH'] in ('native', '',None):
180 scrambled_pw = pw_encode( lookup['PASSWORD'] )
181 cl_env = client_env_from_server_env(TEST_RODS_USER)
182 if lookup.get('AUTH',None) is not None: # - specify auth scheme only if given
183 cl_env['irods_authentication_scheme'] = lookup['AUTH']
184 dirbase = os.path.join(os.environ['HOME'],dirname)
185 dirs[dirbase] = { 'secrets':scrambled_pw , 'client_environment':cl_env }
186
187 # -- create the environment directories and write into them the configurations just created
188 for absdir in dirs.keys():
189 shutil.rmtree(absdir,ignore_errors=True)
190 os.mkdir(absdir)
191 with open(os.path.join(absdir,'irods_environment.json'),'w') as envfile:
192 envfile.write('{}')
193 json_file_update(envfile.name, **dirs[absdir]['client_environment'])
194 with open(os.path.join(absdir,'.irodsA'),'w') as secrets_file:
195 secrets_file.write(dirs[absdir]['secrets'])
196 os.chmod(secrets_file.name,0o600)
197
198 retval = dirs.keys()
199 return retval
200
201
202 @classmethod
203 def setUpClass(cls):
204 cls.admin = helpers.make_session()
205
206 @classmethod
207 def tearDownClass(cls):
208 cls.admin.cleanup()
209
210 def setUp(self):
211 if os.environ['HOME'] != '/var/lib/irods':
212 self.skipTest('Must be run as irods')
213 super(TestLogins,self).setUp()
214
215 def tearDown(self):
216 for envdir in getattr(self, 'envdirs', []):
217 shutil.rmtree(envdir, ignore_errors=True)
218 super(TestLogins,self).tearDown()
219
220 def validate_session(self, session, verbose=False, **options):
221
222 # - try to get the home collection
223 home_coll = '/{0.zone}/home/{0.username}'.format(session)
224 self.assertTrue(session.collections.get(home_coll).path == home_coll)
225 if verbose: print(home_coll)
226 # - check user is as expected
227 self.assertEqual( session.username, TEST_RODS_USER )
228 # - check socket type (normal vs SSL) against whether ssl requested
229 use_ssl = options.pop('ssl',None)
230 if use_ssl is not None:
231 my_connect = [s for s in (session.pool.active|session.pool.idle)] [0]
232 self.assertEqual( bool( use_ssl ), my_connect.socket.__class__ is ssl.SSLSocket )
233
234
235 @contextlib.contextmanager
236 def _setup_rodsuser_and_optional_pw(self, name, make_irods_pw = False):
237 try:
238 self.admin.users.create(name, 'rodsuser')
239 if make_irods_pw:
240 self.admin.users.modify(name,'password',TEST_IRODS_PW)
241 yield
242 finally:
243 self.admin.users.remove( name )
244
245 def tst0(self, ssl_opt, auth_opt, env_opt, name = TEST_RODS_USER, make_irods_pw = False):
246
247 with self._setup_rodsuser_and_optional_pw(name = name, make_irods_pw = make_irods_pw):
248 self.envdirs = self.create_env_dirs()
249 if not self.envdirs:
250 raise RuntimeError('Could not create one or more client environments')
251 auth_opt_explicit = 'native' if auth_opt=='' else auth_opt
252 verbosity=False
253 #verbosity='' # -- debug - sanity check by printing out options applied
254 out = {'':''}
255 if env_opt:
256 with self.setenv('IRODS_ENVIRONMENT_FILE', json_env_fullpath(auth_opt_explicit)) as env_file,\
257 self.setenv('IRODS_AUTHENTICATION_FILE', secrets_fullpath(auth_opt_explicit)):
258 cli_env_extras = {} if not(ssl_opt) else dict( CLIENT_OPTIONS_FOR_SSL )
259 if auth_opt:
260 cli_env_extras.update( irods_authentication_scheme = auth_opt )
261 remove=[]
262 else:
263 remove=[regex('authentication_')]
264 with helpers.file_backed_up(env_file):
265 json_file_update( env_file, keys_to_delete=remove, **cli_env_extras )
266 session = iRODSSession(irods_env_file=env_file)
267 with open(env_file) as f:
268 out = json.load(f)
269 self.validate_session( session, verbose = verbosity, ssl = ssl_opt )
270 session.cleanup()
271 out['ARGS']='no'
272 else:
273 session_options = {}
274 if auth_opt:
275 session_options.update (authentication_scheme = auth_opt)
276 if ssl_opt:
277 SSL_cert = CLIENT_OPTIONS_FOR_SSL["irods_ssl_ca_certificate_file"]
278 session_options.update(
279 ssl_context = ssl.create_default_context ( purpose = ssl.Purpose.SERVER_AUTH,
280 capath = None,
281 cadata = None,
282 cafile = SSL_cert),
283 **CLIENT_OPTIONS_FOR_SSL )
284 lookup = self.user_auth_envs ['.irods.'+('native' if not(auth_opt) else auth_opt)]
285 session = iRODSSession ( host=gethostname(),
286 user=lookup['USER'],
287 zone='tempZone',
288 password=lookup['PASSWORD'],
289 port= 1247,
290 **session_options )
291 out = session_options
292 self.validate_session( session, verbose = verbosity, ssl = ssl_opt )
293 session.cleanup()
294 out['ARGS']='yes'
295
296 if verbosity == '':
297 print ('--- ssl:',ssl_opt,'/ auth:',repr(auth_opt),'/ env:',env_opt)
298 print ('--- > ',json.dumps({k:v for k,v in out.items() if k != 'ssl_context'},indent=4))
299 print ('---')
300
301
302
303 # == test defaulting to 'native'
304
305 def test_01(self):
306 self.tst0 ( ssl_opt = True , auth_opt = '' , env_opt = False , make_irods_pw = True)
307 def test_02(self):
308 self.tst0 ( ssl_opt = False, auth_opt = '' , env_opt = False , make_irods_pw = True)
309 def test_03(self):
310 self.tst0 ( ssl_opt = True , auth_opt = '' , env_opt = True , make_irods_pw = True )
311 def test_04(self):
312 self.tst0 ( ssl_opt = False, auth_opt = '' , env_opt = True , make_irods_pw = True )
313
314 # == test explicit scheme 'native'
315
316 def test_1(self):
317 self.tst0 ( ssl_opt = True , auth_opt = 'native' , env_opt = False, make_irods_pw = True)
318
319 def test_2(self):
320 self.tst0 ( ssl_opt = False, auth_opt = 'native' , env_opt = False, make_irods_pw = True)
321
322 def test_3(self):
323 self.tst0 ( ssl_opt = True , auth_opt = 'native' , env_opt = True, make_irods_pw = True)
324
325 def test_4(self):
326 self.tst0 ( ssl_opt = False, auth_opt = 'native' , env_opt = True, make_irods_pw = True)
327
328 # == test explicit scheme 'pam'
329
330 def test_5(self):
331 self.tst0 ( ssl_opt = True, auth_opt = 'pam' , env_opt = False )
332
333 def test_6(self):
334 try:
335 self.tst0 ( ssl_opt = False, auth_opt = 'pam' , env_opt = False )
336 except PlainTextPAMPasswordError:
337 pass
338 else:
339 # -- no exception raised
340 self.fail("PlainTextPAMPasswordError should have been raised")
341
342 def test_7(self):
343 self.tst0 ( ssl_opt = True , auth_opt = 'pam' , env_opt = True )
344
345 def test_8(self):
346 self.tst0 ( ssl_opt = False, auth_opt = 'pam' , env_opt = True )
347
348 @unittest.skipUnless(TEST_PAM_PW_OVERRIDE, "Skipping unless pam password is overridden (e.g. to test special characters)")
349 def test_escaped_pam_password_chars__362(self):
350 with self._setup_rodsuser_and_optional_pw(name = TEST_RODS_USER):
351 context = ssl._create_unverified_context(
352 purpose=ssl.Purpose.SERVER_AUTH, capath=None, cadata=None, cafile=None,
353 )
354 ssl_settings = {
355 'client_server_negotiation': 'request_server_negotiation',
356 'client_server_policy': 'CS_NEG_REQUIRE',
357 'encryption_algorithm': 'AES-256-CBC',
358 'encryption_key_size': 32,
359 'encryption_num_hash_rounds': 16,
360 'encryption_salt_size': 8,
361 'ssl_ca_certificate_file': '/etc/irods/ssl/irods.crt',
362 'ssl_context': context
363 }
364 irods_session = iRODSSession(
365 host = self.admin.host,
366 port = self.admin.port,
367 zone = self.admin.zone,
368 user = TEST_RODS_USER,
369 password = TEST_PAM_PW_OVERRIDE,
370 authentication_scheme = 'pam',
371 **ssl_settings
372 )
373 home_coll = '/{0.zone}/home/{0.username}'.format(irods_session)
374 self.assertEqual(irods_session.collections.get(home_coll).path, home_coll)
375
376 class TestAnonymousUser(unittest.TestCase):
377
378 def setUp(self):
379 admin = self.admin = helpers.make_session()
380
381 user = self.user = admin.users.create('anonymous', 'rodsuser', admin.zone)
382 self.home = '/{admin.zone}/home/{user.name}'.format(**locals())
383
384 admin.collections.create(self.home)
385 acl = iRODSAccess('own', self.home, user.name)
386 admin.permissions.set(acl)
387
388 self.env_file = os.path.expanduser('~/.irods.anon/irods_environment.json')
389 self.env_dir = os.path.dirname(self.env_file)
390 self.auth_file = os.path.expanduser('~/.irods.anon/.irodsA')
391 os.mkdir( os.path.dirname(self.env_file))
392 json.dump( { "irods_host": admin.host,
393 "irods_port": admin.port,
394 "irods_user_name": user.name,
395 "irods_zone_name": admin.zone }, open(self.env_file,'w'), indent=4 )
396
397 def tearDown(self):
398 self.admin.collections.remove(self.home, recurse = True, force = True)
399 self.admin.users.remove(self.user.name)
400 shutil.rmtree (self.env_dir, ignore_errors = True)
401
402 def test_login_from_environment(self):
403 orig_env = os.environ.copy()
404 try:
405 os.environ["IRODS_ENVIRONMENT_FILE"] = self.env_file
406 os.environ["IRODS_AUTHENTICATION_FILE"] = self.auth_file
407 ses = helpers.make_session()
408 ses.collections.get(self.home)
409 finally:
410 os.environ.clear()
411 os.environ.update( orig_env )
412
413 class TestMiscellaneous(unittest.TestCase):
414
415 def test_nonanonymous_login_without_auth_file_fails__290(self):
416 ses = self.admin
417 if ses.users.get( ses.username ).type != 'rodsadmin':
418 self.skipTest( 'Only a rodsadmin may run this test.')
419 try:
420 ENV_DIR = tempfile.mkdtemp()
421 ses.users.create('bob', 'rodsuser')
422 ses.users.modify('bob', 'password', 'bpass')
423 d = dict(password = 'bpass', user = 'bob', host = ses.host, port = ses.port, zone = ses.zone)
424 (bob_env, bob_auth) = helpers.make_environment_and_auth_files(ENV_DIR, **d)
425 login_options = { 'irods_env_file': bob_env, 'irods_authentication_file': bob_auth }
426 with helpers.make_session(**login_options) as s:
427 s.users.get('bob')
428 os.unlink(bob_auth)
429 # -- Check that we raise an appropriate exception pointing to the missing auth file path --
430 with self.assertRaisesRegexp(NonAnonymousLoginWithoutPassword, bob_auth):
431 with helpers.make_session(**login_options) as s:
432 s.users.get('bob')
433 finally:
434 try:
435 shutil.rmtree(ENV_DIR,ignore_errors=True)
436 ses.users.get('bob').remove()
437 except ex.UserDoesNotExist:
438 pass
439
440
441 def setUp(self):
442 admin = self.admin = helpers.make_session()
443 if admin.users.get(admin.username).type != 'rodsadmin':
444 self.skipTest('need admin privilege')
445 admin.users.create('alice','rodsuser')
446
447 def tearDown(self):
448 self.admin.users.remove('alice')
449 self.admin.cleanup()
450
451 @unittest.skipUnless(six.PY3, "Skipping in Python2 because it doesn't reliably do cyclic GC.")
452 def test_destruct_session_with_no_pool_315(self):
453
454 destruct_flag = [False]
455
456 class mySess( iRODSSession ):
457 def __del__(self):
458 self.pool = None
459 super(mySess,self).__del__() # call parent destructor(s) - will raise
460 # an error before the #315 fix
461 destruct_flag[:] = [True]
462
463 admin = self.admin
464 admin.users.modify('alice','password','apass')
465
466 my_sess = mySess( user = 'alice',
467 password = 'apass',
468 host = admin.host,
469 port = admin.port,
470 zone = admin.zone)
471 my_sess.cleanup()
472 del my_sess
473 gc.collect()
474 self.assertEqual( destruct_flag, [True] )
475
476 def test_non_anon_native_login_omitting_password_fails_1__290(self):
477 # rodsuser with password unset
478 with self.assertRaises(ex.CAT_INVALID_USER):
479 self._non_anon_native_login_omitting_password_fails_N__290()
480
481 def test_non_anon_native_login_omitting_password_fails_2__290(self):
482 # rodsuser with a password set
483 self.admin.users.modify('alice','password','apass')
484 with self.assertRaises(ex.CAT_INVALID_AUTHENTICATION):
485 self._non_anon_native_login_omitting_password_fails_N__290()
486
487 def _non_anon_native_login_omitting_password_fails_N__290(self):
488 admin = self.admin
489 with iRODSSession(zone = admin.zone, port = admin.port, host = admin.host, user = 'alice') as alice:
490 alice.collections.get(helpers.home_collection(alice))
491
492 class TestWithSSL(unittest.TestCase):
493 '''
494 The tests within this class should be run by an account other than the
495 service account. Otherwise there is risk of corrupting the server setup.
496 '''
497
498 def setUp(self):
499 if os.path.expanduser('~') == '/var/lib/irods':
500 self.skipTest('TestWithSSL may not be run by user irods')
501 if not os.path.exists('/etc/irods/ssl'):
502 self.skipTest('Running setupssl.py as the irods user is a prerequisite for this test.')
503 with helpers.make_session() as session:
504 if session.host not in ('localhost', socket.gethostname()):
505 self.skipTest('Test must be run co-resident with server')
506
507
508 def test_ssl_with_server_verify_set_to_none_281(self):
509 env_file = os.path.expanduser('~/.irods/irods_environment.json')
510 with helpers.file_backed_up(env_file):
511 with open(env_file) as env_file_handle:
512 env = json.load( env_file_handle )
513 env.update({ "irods_client_server_negotiation": "request_server_negotiation",
514 "irods_client_server_policy": "CS_NEG_REQUIRE",
515 "irods_ssl_ca_certificate_file": "/path/to/some/file.crt", # does not need to exist
516 "irods_ssl_verify_server": "none",
517 "irods_encryption_key_size": 32,
518 "irods_encryption_salt_size": 8,
519 "irods_encryption_num_hash_rounds": 16,
520 "irods_encryption_algorithm": "AES-256-CBC" })
521 with open(env_file,'w') as f:
522 json.dump(env,f)
523 with helpers.make_session() as session:
524 session.collections.get('/{session.zone}/home/{session.username}'.format(**locals()))
525
526
527 if __name__ == '__main__':
528 # let the tests find the parent irods lib
529 sys.path.insert(0, os.path.abspath('../..'))
530 unittest.main()
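For reference, each environment directory assembled by TestLogins above reduces to a JSON client environment plus an obfuscated password file. A minimal sketch follows; the directory name, hostname, user and zone are purely illustrative:

import json, os

env_dir = os.path.expanduser('~/.irods.native')            # hypothetical directory name
if not os.path.exists(env_dir):
    os.mkdir(env_dir)
client_env = {
    "irods_host": "localhost",                             # assumed server location
    "irods_port": 1247,
    "irods_user_name": "rods",
    "irods_zone_name": "tempZone",
    "irods_authentication_scheme": "native",               # omitted when the default scheme is wanted
}
with open(os.path.join(env_dir, 'irods_environment.json'), 'w') as f:
    json.dump(client_env, f, indent=4)
# The companion '.irodsA' file holds the scrambled password and is written with mode 0o600,
# as create_env_dirs() does above.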
77 if __name__ == '__main__':
88 sys.path.insert(0, os.path.abspath('../..'))
99
10 from xml.etree import ElementTree as ET
10 from irods.message import ET
11
1112 # from base64 import b64encode, b64decode
1213 from irods.message import (StartupPack, AuthResponse, IntegerIntegerMap,
1314 IntegerStringMap, StringStringMap, GenQueryRequest,
4344 self.assertEqual(xml_str, expected)
4445
4546 sup2 = StartupPack(('rods', 'tempZone'), ('rods', 'tempZone'))
46 sup2.unpack(ET.fromstring(expected))
47 sup2.unpack(ET().fromstring(expected))
4748 self.assertEqual(sup2.irodsProt, 2)
4849 self.assertEqual(sup2.reconnFlag, 3)
4950 self.assertEqual(sup2.proxyUser, "rods")
6566 self.assertEqual(ar.pack(), expected)
6667
6768 ar2 = AuthResponse()
68 ar2.unpack(ET.fromstring(expected))
69 ar2.unpack(ET().fromstring(expected))
6970 self.assertEqual(ar2.response, b"hello")
7071 self.assertEqual(ar2.username, "rods")
7172
8485 self.assertEqual(iip.pack(), expected)
8586
8687 iip2 = IntegerIntegerMap()
87 iip2.unpack(ET.fromstring(expected))
88 iip2.unpack(ET().fromstring(expected))
8889 self.assertEqual(iip2.iiLen, 2)
8990 self.assertEqual(iip2.inx, [4, 5])
9091 self.assertEqual(iip2.ivalue, [1, 2])
104105 self.assertEqual(kvp.pack(), expected)
105106
106107 kvp2 = StringStringMap()
107 kvp2.unpack(ET.fromstring(expected))
108 kvp2.unpack(ET().fromstring(expected))
108109 self.assertEqual(kvp2.ssLen, 2)
109110 self.assertEqual(kvp2.keyWord, ["one", "two"])
110111 self.assertEqual(kvp2.svalue, ["three", "four"])
139140 self.assertEqual(gq.pack(), expected)
140141
141142 gq2 = GenQueryRequest()
142 gq2.unpack(ET.fromstring(expected))
143 gq2.unpack(ET().fromstring(expected))
143144 self.assertEqual(gq2.maxRows, 4)
144145 self.assertEqual(gq2.continueInx, 3)
145146 self.assertEqual(gq2.partialStartIndex, 2)
169170 self.assertEqual(sr.pack(), expected)
170171
171172 sr2 = GenQueryResponseColumn()
172 sr2.unpack(ET.fromstring(expected))
173 sr2.unpack(ET().fromstring(expected))
173174 self.assertEqual(sr2.attriInx, 504)
174175 self.assertEqual(sr2.reslen, 64)
175176 self.assertEqual(sr2.value, ["one", "two"])
192193 self.assertEqual(gqo.pack(), expected)
193194
194195 gqo2 = GenQueryResponse()
195 gqo2.unpack(ET.fromstring(expected))
196 gqo2.unpack(ET().fromstring(expected))
196197
197198 self.assertEqual(gqo2.rowCnt, 2)
198199 self.assertEqual(gqo2.pack(), expected)
22 from __future__ import absolute_import
33 import os
44 import sys
5 import time
6 import datetime
57 import unittest
6 from irods.meta import iRODSMeta
7 from irods.models import DataObject, Collection
8 from irods.meta import (iRODSMeta, AVUOperation, BadAVUOperationValue, BadAVUOperationKeyword)
9 from irods.manager.metadata_manager import InvalidAtomicAVURequest
10 from irods.models import (DataObject, Collection, Resource)
811 import irods.test.helpers as helpers
12 import irods.keywords as kw
13 from irods.session import iRODSSession
914 from six.moves import range
15 from six import PY3
1016
1117
1218 class TestMeta(unittest.TestCase):
1824
1925 def setUp(self):
2026 self.sess = helpers.make_session()
21
2227 # test data
2328 self.coll_path = '/{}/home/{}/test_dir'.format(self.sess.zone, self.sess.username)
2429 self.obj_name = 'test1'
2833 self.coll = self.sess.collections.create(self.coll_path)
2934 self.obj = self.sess.data_objects.create(self.obj_path)
3035
31
3236 def tearDown(self):
3337 '''Remove test data and close connections
3438 '''
3539 self.coll.remove(recurse=True, force=True)
40 helpers.remove_unused_metadata(self.sess)
3641 self.sess.cleanup()
3742
43 from irods.test.helpers import create_simple_resc_hierarchy
44
45 def test_atomic_metadata_operations_244(self):
46 user = self.sess.users.get("rods")
47 group = self.sess.user_groups.get("public")
48 m = ( "attr_244","value","units")
49
50 with self.assertRaises(BadAVUOperationValue):
51 AVUOperation(operation="add", avu=m)
52
53 with self.assertRaises(BadAVUOperationValue):
54 AVUOperation(operation="not_add_or_remove", avu=iRODSMeta(*m))
55
56 with self.assertRaises(BadAVUOperationKeyword):
57 AVUOperation(operation="add", avu=iRODSMeta(*m), extra_keyword=None)
58
59
60 with self.assertRaises(InvalidAtomicAVURequest):
61 user.metadata.apply_atomic_operations( tuple() )
62
63 user.metadata.apply_atomic_operations() # no AVUs applied - no-op without error
64
65 for n,obj in enumerate((group, user, self.coll, self.obj)):
66 avus = [ iRODSMeta('some_attribute',str(i),'some_units') for i in range(n*100,(n+1)*100) ]
67 obj.metadata.apply_atomic_operations(*[AVUOperation(operation="add", avu=avu_) for avu_ in avus])
68 obj.metadata.apply_atomic_operations(*[AVUOperation(operation="remove", avu=avu_) for avu_ in avus])
69
70
71 def test_atomic_metadata_operation_for_resource_244(self):
72 (root,leaf)=('ptX','rescX')
73 with self.create_simple_resc_hierarchy(root,leaf):
74 root_resc = self.sess.resources.get(root) # resource objects
75 leaf_resc = self.sess.resources.get(leaf)
76 root_tuple = ('role','root','new units #1') # AVU tuples to apply
77 leaf_tuple = ('role','leaf','new units #2')
78 root_resc.metadata.add( *root_tuple[:2] ) # first apply without units ...
79 leaf_resc.metadata.add( *leaf_tuple[:2] )
80 for resc,resc_tuple in ((root_resc, root_tuple), (leaf_resc, leaf_tuple)):
81 resc.metadata.apply_atomic_operations( # metadata set operation (remove + add) to add units
82 AVUOperation(operation="remove", avu=iRODSMeta(*resc_tuple[:2])),
83 AVUOperation(operation="add", avu=iRODSMeta(*resc_tuple[:3]))
84 )
85 resc_meta = self.sess.metadata.get(Resource, resc.name)
86 avus_to_tuples = lambda avu_list: sorted([(i.name,i.value,i.units) for i in avu_list])
87 self.assertEqual(avus_to_tuples(resc_meta), avus_to_tuples([iRODSMeta(*resc_tuple)]))
88
89
90 def test_atomic_metadata_operation_for_data_object_244(self):
91 AVUs_Equal = lambda avu1,avu2,fn=(lambda x:x): fn(avu1)==fn(avu2)
92 AVU_As_Tuple = lambda avu,length=3:(avu.name,avu.value,avu.units)[:length]
93 AVU_Units_String = lambda avu:"" if not avu.units else avu.units
94 m = iRODSMeta( "attr_244","value","units")
95 self.obj.metadata.add(m)
96 meta = self.sess.metadata.get(DataObject, self.obj_path)
97 self.assertEqual(len(meta), 1)
98 self.assertTrue(AVUs_Equal(m,meta[0],AVU_As_Tuple))
99 self.obj.metadata.apply_atomic_operations( # remove original AVU and replace
100 AVUOperation(operation="remove",avu=m), # with two altered versions
101 AVUOperation(operation="add",avu=iRODSMeta(m.name,m.value,"units_244")), # (one of them without units) ...
102 AVUOperation(operation="add",avu=iRODSMeta(m.name,m.value))
103 )
104 meta = self.sess.metadata.get(DataObject, self.obj_path) # ... check integrity of change
105 self.assertEqual(sorted([AVU_Units_String(i) for i in meta]), ["","units_244"])
106
107 def test_atomic_metadata_operations_255(self):
108 my_resc = self.sess.resources.create('dummyResc','passthru')
109 avus = [iRODSMeta('a','b','c'), iRODSMeta('d','e','f')]
110 objects = [ self.sess.users.get("rods"), self.sess.user_groups.get("public"), my_resc,
111 self.sess.collections.get(self.coll_path), self.sess.data_objects.get(self.obj_path) ]
112 try:
113 for obj in objects:
114 self.assertEqual(len(obj.metadata.items()), 0)
115 for n,item in enumerate(avus):
116 obj.metadata.apply_atomic_operations(AVUOperation(operation='add',avu=item))
117 self.assertEqual(len(obj.metadata.items()), n+1)
118 finally:
119 for obj in objects: obj.metadata.remove_all()
120 my_resc.remove()
38121
39122 def test_get_obj_meta(self):
40123 # get object metadata
43126 # there should be no metadata at this point
44127 assert len(meta) == 0
45128
129 def test_resc_meta(self):
130 rescname = 'demoResc'
131 self.sess.resources.get(rescname).metadata.remove_all()
132 self.sess.metadata.set(Resource, rescname, iRODSMeta('zero','marginal','cost'))
133 self.sess.metadata.add(Resource, rescname, iRODSMeta('zero','marginal'))
134 self.sess.metadata.set(Resource, rescname, iRODSMeta('for','ever','after'))
135 meta = self.sess.resources.get(rescname).metadata
136 self.assertTrue( len(meta) == 3 )
137 resource = self.sess.resources.get(rescname)
138 all_AVUs= resource.metadata.items()
139 for avu in all_AVUs:
140 resource.metadata.remove(avu)
141 self.assertTrue(0 == len(self.sess.resources.get(rescname).metadata))
46142
47143 def test_add_obj_meta(self):
48144 # add metadata to test object
73169 assert meta[1].units == self.unit1
74170
75171 assert meta[2].name == attribute
76 assert meta[2].value == value
172 testValue = (value if PY3 else value.encode('utf8'))
173 assert meta[2].value == testValue
77174
78175
79176 def test_add_obj_meta_empty(self):
122219 # check that metadata is gone
123220 meta = self.sess.metadata.get(DataObject, self.obj_path)
124221 assert len(meta) == 0
222
223
224 def test_metadata_manipulations_with_admin_kw__364__365(self):
225 try:
226 d = user = None
227 adm = self.sess
228
229 if adm.server_version <= (4,2,11):
230 self.skipTest('ADMIN_KW not valid for Metadata API in iRODS 4.2.11 and previous')
231
232 # Create a rodsuser, and a session for that roduser.
233 user = adm.users.create ( 'bobby','rodsuser' )
234 user.modify('password','bpass')
235 with iRODSSession (port=adm.port,zone=adm.zone,host=adm.host, user=user.name,password='bpass') as ses:
236 # Create a data object owned by the rodsuser. Set AVUs in various ways and guarantee each attempt
237 # has the desired effect.
238 d = ses.data_objects.create('/{adm.zone}/home/{user.name}/testfile'.format(**locals()))
239
240 d.metadata.set('a','aa','1')
241 self.assertIn(('a','aa','1'), d.metadata.items())
242
243 d.metadata.set('a','aa')
244 self.assertEqual([('a','aa')], [tuple(_) for _ in d.metadata.items()])
245
246 d.metadata['a'] = iRODSMeta('a','bb')
247 self.assertEqual([('a','bb')], [tuple(_) for _ in d.metadata.items()])
248
249 # Now the admin does two AVU-set operations. For these to count as successful,
250 # ('x','y') must have been added and ('a','b','c') must have overwritten ('a','bb').
251
252 da = adm.data_objects.get(d.path)
253 da.metadata.set('a','b','c',**{kw.ADMIN_KW:''})
254 da.metadata(admin = True)['x'] = iRODSMeta('x','y')
255 d = ses.data_objects.get(d.path) # assure metadata are not cached
256 self.assertEqual(set([('x','y'), ('a','b','c')]),
257 set(tuple(_) for _ in d.metadata.items()))
258 finally:
259 if d: d.unlink(force=True)
260 if user: user.remove()
125261
126262
127263 def test_add_coll_meta(self):
262398 test_obj.unlink(force=True)
263399
264400
401 @staticmethod
402 def check_timestamps(metadata_accessor, key):
403 avu = metadata_accessor[key]
404 create = getattr(avu,'create_time',None)
405 modify = getattr(avu,'modify_time',None)
406 return (create,modify)
407
408
409 def test_timestamp_access_386(self):
410 with helpers.make_session() as session:
411 def units():
412 return str(time.time())
413 d = None
414 try:
415 d = session.data_objects.create('/tempZone/home/rods/issue_386')
416
417 # Test metadata access without timestamps
418
419 meta = d.metadata
420 avu = iRODSMeta('no_ts','val',units())
421 meta.set(avu)
422 self.assertEqual((None, None), # Assert no timestamps are stored.
423 self.check_timestamps(meta, key = avu.name))
424
425 # -- Test metadata access with timestamps
426
427 meta_ts = meta(timestamps = True)
428 avu_use_ts = iRODSMeta('use_ts','val',units())
429 meta_ts.set(avu_use_ts)
430 time.sleep(1.5)
431 now = datetime.datetime.utcnow()
432 time.sleep(1.5)
433 avu_use_ts.units = units()
434 meta_ts.set(avu_use_ts) # Set an AVU with modified units.
435
436 (create, modify) = self.check_timestamps(meta_ts, key = avu_use_ts.name)
437
438 self.assertLess(create, now) # Ensure timestamps are in proper order.
439 self.assertLess(now, modify)
440 finally:
441 if d: d.unlink(force = True)
442 helpers.remove_unused_metadata(session)
443
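In condensed form, the timestamp feature exercised above works like this: request a timestamp-aware accessor via metadata(timestamps=True), then read create_time and modify_time from the AVUs it returns. A short sketch, with a hypothetical object name:

from irods.meta import iRODSMeta
import irods.test.helpers as helpers

with helpers.make_session() as session:
    path = '/{0.zone}/home/{0.username}/ts_example'.format(session)   # hypothetical object
    obj = session.data_objects.create(path)
    try:
        meta_ts = obj.metadata(timestamps=True)       # timestamp-aware accessor
        meta_ts.set(iRODSMeta('attr', 'val'))
        avu = meta_ts['attr']
        print(avu.create_time, avu.modify_time)       # populated only when timestamps were requested
    finally:
        obj.unlink(force=True)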
265444 if __name__ == '__main__':
266445 # let the tests find the parent irods lib
267446 sys.path.insert(0, os.path.abspath('../..'))
00 #! /usr/bin/env python
11 from __future__ import absolute_import
2 import datetime
3 import gc
4 import logging
25 import os
6 import re
37 import sys
8 import tempfile
9 import time
10 import json
411 import unittest
12 import socket
513 import irods.test.helpers as helpers
14 from irods.connection import DESTRUCTOR_MSG
15
16 # Regular expression to match common synonyms for localhost.
17 #
18
19 LOCALHOST_REGEX = re.compile(r"""^(127(\.\d+){1,3}|[0:]+1|(.*-)?localhost(\.\w+)?)$""",re.IGNORECASE)
20 USE_ONLY_LOCALHOST = False
621
722
823 class TestPool(unittest.TestCase):
924
25 config_extension = ".json"
26 test_extension = ""
27 preferred_parameters = {}
28
29 @classmethod
30 def setUpClass(cls): # generate test env files using connect data from ~/.irods environment
31 if USE_ONLY_LOCALHOST: return
32 Nonlocal_Ext = ".test"
33 with helpers.make_session() as session:
34 cls.preferred_parameters = { 'irods_host':session.host,
35 'irods_port':session.port,
36 'irods_user_name':session.username,
37 'irods_zone_name':session.zone }
38 test_configs_dir = os.path.join(irods_test_path(),"test-data")
39 for config in [os.path.join(test_configs_dir,f) for f in os.listdir(test_configs_dir)
40 if f.endswith(cls.config_extension)]:
41 with open(config,"r") as in_, open(config + Nonlocal_Ext,"w") as out_:
42 cf = json.load(in_)
43 cf.update(cls.preferred_parameters)
44 json.dump(cf, out_,indent=4)
45 cls.test_extension = Nonlocal_Ext
46
47
1048 def setUp(self):
11 self.sess = helpers.make_session()
49 self.sess = helpers.make_session(
50 irods_env_file=os.path.join(irods_test_path(),"test-data","irods_environment.json" + self.test_extension))
51 if USE_ONLY_LOCALHOST and not LOCALHOST_REGEX.match (self.sess.host):
52 self.skipTest('for non-local server')
1253
1354 def tearDown(self):
1455 '''Close connections
1657 self.sess.cleanup()
1758
1859 def test_release_connection(self):
19 with self.sess.pool.get_connection() as conn:
60 with self.sess.pool.get_connection():
2061 self.assertEqual(1, len(self.sess.pool.active))
2162 self.assertEqual(0, len(self.sess.pool.idle))
2263
3374 self.assertEqual(0, len(self.sess.pool.idle))
3475
3576 def test_destroy_idle(self):
36 with self.sess.pool.get_connection() as conn:
77 with self.sess.pool.get_connection():
3778 self.assertEqual(1, len(self.sess.pool.active))
3879 self.assertEqual(0, len(self.sess.pool.idle))
3980
5697 self.sess.cleanup()
5798 self.assertEqual(0, len(self.sess.pool.active))
5899 self.assertEqual(0, len(self.sess.pool.idle))
100
101 def test_connection_create_time(self):
102 # Get a connection and record its object ID and create_time
103 # Release the connection (goes from active to idle queue)
104 # Again, get a connection. Should get the same connection back.
105 # I.e., the object IDs should match. However, the re-acquired connection
106 # should have a more recent 'last_used_time'.
107 conn_obj_id_1 = None
108 conn_obj_id_2 = None
109 create_time_1 = None
110 create_time_2 = None
111 last_used_time_1 = None
112 last_used_time_2 = None
113
114 with self.sess.pool.get_connection() as conn:
115 conn_obj_id_1 = id(conn)
116 curr_time = datetime.datetime.now()
117 create_time_1 = conn.create_time
118 last_used_time_1 = conn.last_used_time
119 self.assertTrue(curr_time >= create_time_1)
120 self.assertTrue(curr_time >= last_used_time_1)
121 self.assertEqual(1, len(self.sess.pool.active))
122 self.assertEqual(0, len(self.sess.pool.idle))
123
124 self.sess.pool.release_connection(conn)
125 self.assertEqual(0, len(self.sess.pool.active))
126 self.assertEqual(1, len(self.sess.pool.idle))
127
128 with self.sess.pool.get_connection() as conn:
129 conn_obj_id_2 = id(conn)
130 curr_time = datetime.datetime.now()
131 create_time_2 = conn.create_time
132 last_used_time_2 = conn.last_used_time
133 self.assertEqual(conn_obj_id_1, conn_obj_id_2)
134 self.assertTrue(curr_time >= create_time_2)
135 self.assertTrue(curr_time >= last_used_time_2)
136 self.assertTrue(last_used_time_2 >= last_used_time_1)
137 self.assertEqual(1, len(self.sess.pool.active))
138 self.assertEqual(0, len(self.sess.pool.idle))
139
140 self.sess.pool.release_connection(conn)
141 self.assertEqual(0, len(self.sess.pool.active))
142 self.assertEqual(1, len(self.sess.pool.idle))
143
144 self.sess.pool.release_connection(conn, True)
145 self.assertEqual(0, len(self.sess.pool.active))
146 self.assertEqual(0, len(self.sess.pool.idle))
147
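Condensed, the pool interactions these tests exercise look roughly like this (a sketch; sess is a connected iRODSSession, as created in setUp):

with sess.pool.get_connection() as conn:
    # while held, the connection is counted in sess.pool.active
    sess.pool.release_connection(conn)        # hand it back: it moves to sess.pool.idle for reuse
    sess.pool.release_connection(conn, True)  # or pass True to drop it entirely (idle stays empty)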
148 def test_refresh_connection(self):
149 # Set 'irods_connection_refresh_time' to '3' (in seconds) in
150 # ~/.irods/irods_environment.json file. This means any connection
151 # that was created more than 3 seconds ago will be dropped and
152 # a new connection is created/returned. This is to avoid
153 # issues with idle connections that have been dropped.
154 conn_obj_id_1 = None
155 conn_obj_id_2 = None
156 create_time_1 = None
157 create_time_2 = None
158 last_used_time_1 = None
159 last_used_time_2 = None
160
161 with self.sess.pool.get_connection() as conn:
162 conn_obj_id_1 = id(conn)
163 curr_time = datetime.datetime.now()
164 create_time_1 = conn.create_time
165 last_used_time_1 = conn.last_used_time
166 self.assertTrue(curr_time >= create_time_1)
167 self.assertTrue(curr_time >= last_used_time_1)
168 self.assertEqual(1, len(self.sess.pool.active))
169 self.assertEqual(0, len(self.sess.pool.idle))
170
171 self.sess.pool.release_connection(conn)
172 self.assertEqual(0, len(self.sess.pool.active))
173 self.assertEqual(1, len(self.sess.pool.idle))
174
175 # Wait more than 'irods_connection_refresh_time' seconds,
176 # which is set to 3. Connection object should have a different
177 # object ID (as a new connection is created)
178 time.sleep(5)
179
180 with self.sess.pool.get_connection() as conn:
181 conn_obj_id_2 = id(conn)
182 curr_time = datetime.datetime.now()
183 create_time_2 = conn.create_time
184 last_used_time_2 = conn.last_used_time
185 self.assertTrue(curr_time >= create_time_2)
186 self.assertTrue(curr_time >= last_used_time_2)
187 self.assertNotEqual(conn_obj_id_1, conn_obj_id_2)
188 self.assertTrue(create_time_2 > create_time_1)
189 self.assertEqual(1, len(self.sess.pool.active))
190 self.assertEqual(0, len(self.sess.pool.idle))
191
192 self.sess.pool.release_connection(conn, True)
193 self.assertEqual(0, len(self.sess.pool.active))
194 self.assertEqual(0, len(self.sess.pool.idle))
195
196 def test_no_refresh_connection(self):
197 # Set 'irods_connection_refresh_time' to '3' (in seconds) in
198 # ~/.irods/irods_environment.json file. This means any connection
199 # created more than 3 seconds ago will be dropped and
200 # a new connection is created/returned. This is to avoid
201 # issues with idle connections that have been dropped.
202 conn_obj_id_1 = None
203 conn_obj_id_2 = None
204 create_time_1 = None
205 create_time_2 = None
206 last_used_time_1 = None
207 last_used_time_2 = None
208
209 with self.sess.pool.get_connection() as conn:
210 conn_obj_id_1 = id(conn)
211 curr_time = datetime.datetime.now()
212 create_time_1 = conn.create_time
213 last_used_time_1 = conn.last_used_time
214 self.assertTrue(curr_time >= create_time_1)
215 self.assertTrue(curr_time >= last_used_time_1)
216 self.assertEqual(1, len(self.sess.pool.active))
217 self.assertEqual(0, len(self.sess.pool.idle))
218
219 self.sess.pool.release_connection(conn)
220 self.assertEqual(0, len(self.sess.pool.active))
221 self.assertEqual(1, len(self.sess.pool.idle))
222
223 # Wait less than 'irods_connection_refresh_time' seconds,
224 # which is set to 3. Connection object should have the same
225 # object ID (as idle time is less than 'irods_connection_refresh_time')
226 time.sleep(1)
227
228 with self.sess.pool.get_connection() as conn:
229 conn_obj_id_2 = id(conn)
230 curr_time = datetime.datetime.now()
231 create_time_2 = conn.create_time
232 last_used_time_2 = conn.last_used_time
233 self.assertTrue(curr_time >= create_time_2)
234 self.assertTrue(curr_time >= last_used_time_2)
235 self.assertEqual(conn_obj_id_1, conn_obj_id_2)
236 self.assertTrue(create_time_2 >= create_time_1)
237 self.assertEqual(1, len(self.sess.pool.active))
238 self.assertEqual(0, len(self.sess.pool.idle))
239
240 self.sess.pool.release_connection(conn, True)
241 self.assertEqual(0, len(self.sess.pool.active))
242 self.assertEqual(0, len(self.sess.pool.idle))
243
244 # Test that the connection destructor's log message is actually written to the
245 # log file, confirming that the destructor has been called.
246 def test_connection_destructor_called(self):
247
248 if self.sess.host != socket.gethostname() and not LOCALHOST_REGEX.match (self.sess.host):
249 self.skipTest('local test only - the client does not like the extra logging')
250
251 # Set 'irods_connection_refresh_time' to '3' (in seconds) in
252 # ~/.irods/irods_environment.json file. This means any connection
253 # that was created more than 3 seconds ago will be dropped and
254 # a new connection is created/returned. This is to avoid
255 # issues with idle connections that have been dropped.
256 conn_obj_id_1 = None
257 conn_obj_id_2 = None
258 create_time_1 = None
259 create_time_2 = None
260 last_used_time_1 = None
261 last_used_time_2 = None
262
263 try:
264
265 # Create a temporary log file
266 my_log_file = tempfile.NamedTemporaryFile()
267
268 logging.getLogger('irods.connection').setLevel(logging.DEBUG)
269 file_handler = logging.FileHandler(my_log_file.name, mode='a')
270 file_handler.setLevel(logging.DEBUG)
271 logging.getLogger('irods.connection').addHandler(file_handler)
272
273 with self.sess.pool.get_connection() as conn:
274 conn_obj_id_1 = id(conn)
275 curr_time = datetime.datetime.now()
276 create_time_1 = conn.create_time
277 last_used_time_1 = conn.last_used_time
278 self.assertTrue(curr_time >= create_time_1)
279 self.assertTrue(curr_time >= last_used_time_1)
280 self.assertEqual(1, len(self.sess.pool.active))
281 self.assertEqual(0, len(self.sess.pool.idle))
282
283 self.sess.pool.release_connection(conn)
284 self.assertEqual(0, len(self.sess.pool.active))
285 self.assertEqual(1, len(self.sess.pool.idle))
286
287 # Wait more than 'irods_connection_refresh_time' seconds,
288 # which is set to 3. Connection object should have a different
289 # object ID (as a new connection is created)
290 time.sleep(5)
291
292 # Call garbage collector, so the unreferenced conn object is garbage collected
293 gc.collect()
294
295 with self.sess.pool.get_connection() as conn:
296 conn_obj_id_2 = id(conn)
297 curr_time = datetime.datetime.now()
298 create_time_2 = conn.create_time
299 last_used_time_2 = conn.last_used_time
300 self.assertTrue(curr_time >= create_time_2)
301 self.assertTrue(curr_time >= last_used_time_2)
302 self.assertNotEqual(conn_obj_id_1, conn_obj_id_2)
303 self.assertTrue(create_time_2 > create_time_1)
304 self.assertEqual(1, len(self.sess.pool.active))
305 self.assertEqual(0, len(self.sess.pool.idle))
306
307 self.sess.pool.release_connection(conn, True)
308 self.assertEqual(0, len(self.sess.pool.active))
309 self.assertEqual(0, len(self.sess.pool.idle))
310
311 # Assert that connection destructor called
312 with open(my_log_file.name, 'r') as fh:
313 lines = fh.read().splitlines()
314 self.assertTrue(DESTRUCTOR_MSG in lines)
315 finally:
316 # Remove irods.connection's file_handler that was added just for this test
317 logging.getLogger('irods.connection').removeHandler(file_handler)
318
319 def test_get_connection_refresh_time_no_env_file_input_param(self):
320 connection_refresh_time = self.sess.get_connection_refresh_time(first_name="Magic", last_name="Johnson")
321 self.assertEqual(connection_refresh_time, -1)
322
323 def test_get_connection_refresh_time_nonexistent_env_file(self):
324 connection_refresh_time = self.sess.get_connection_refresh_time(
325 irods_env_file=os.path.join(irods_test_path(),"test-data","irods_environment_non_existant.json" + self.test_extension))
326 self.assertEqual(connection_refresh_time, -1)
327
328 def test_get_connection_refresh_time_no_connection_refresh_field(self):
329 connection_refresh_time = self.sess.get_connection_refresh_time(
330 irods_env_file=os.path.join(irods_test_path(),"test-data","irods_environment_no_refresh_field.json" + self.test_extension))
331 self.assertEqual(connection_refresh_time, -1)
332
333 def test_get_connection_refresh_time_negative_connection_refresh_field(self):
334 connection_refresh_time = self.sess.get_connection_refresh_time(
335 irods_env_file=os.path.join(irods_test_path(),"test-data","irods_environment_negative_refresh_field.json" + self.test_extension))
336 self.assertEqual(connection_refresh_time, -1)
337
338 def test_get_connection_refresh_time(self):
339 default_path = os.path.join (irods_test_path(),"test-data","irods_environment.json" + self.test_extension)
340 connection_refresh_time = self.sess.get_connection_refresh_time(irods_env_file=default_path)
341 self.assertEqual(connection_refresh_time, 3)
342
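The refresh interval these tests read comes from the chosen client environment file. A sketch of the relevant entry (file name is hypothetical; other connection keys omitted for brevity):

import json

with open('irods_environment.json', 'w') as f:
    json.dump({"irods_connection_refresh_time": 3}, f, indent=4)   # seconds; older connections are replaced
# get_connection_refresh_time() returns -1 when the file is missing, the key is absent,
# or the value is negative, as the tests above verify.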
343
344 def irods_test_path():
345 return os.path.dirname(__file__)
59346
60347
61348 if __name__ == '__main__':
00 #! /usr/bin/env python
1 # -*- coding: utf-8 -*-
2 from __future__ import print_function
13 from __future__ import absolute_import
24 import os
5 import six
36 import sys
7 import tempfile
48 import unittest
9 import time
10 import uuid
511 from datetime import datetime
6 from irods.models import User, Collection, DataObject, Resource
12 from irods.models import (User, UserMeta,
13 Resource, ResourceMeta,
14 Collection, CollectionMeta,
15 DataObject, DataObjectMeta,
16 RuleExec)
17
18 from tempfile import NamedTemporaryFile
719 from irods.exception import MultipleResultsFound, CAT_UNKNOWN_SPECIFIC_QUERY, CAT_INVALID_ARGUMENT
820 from irods.query import SpecificQuery
9 from irods.column import Like, Between
21 from irods.column import Like, Between, In
22 from irods.meta import iRODSMeta
23 from irods.rule import Rule
1024 from irods import MAX_SQL_ROWS
25 from irods.test.helpers import irods_shared_reg_resc_vault
1126 import irods.test.helpers as helpers
27 from six.moves import range as py3_range
28 import irods.keywords as kw
29
30 IRODS_STATEMENT_TABLE_SIZE = 50
31
32
33 def rows_returned(query):
34 return len( list(query) )
1235
1336
1437 class TestQuery(unittest.TestCase):
38
39 Iterate_to_exhaust_statement_table = range(IRODS_STATEMENT_TABLE_SIZE + 1)
40
41 More_than_one_batch = 2*MAX_SQL_ROWS # may need to increase if PRC default page
42 # size is increased beyond 500
43
44 register_resc = ''
45
46 @classmethod
47 def setUpClass(cls):
48 with helpers.make_session() as sess:
49 resource_name = helpers.get_register_resource(sess)
50 if resource_name:
51 cls.register_resc = resource_name
52
53 @classmethod
54 def tearDownClass(cls):
55 with helpers.make_session() as sess:
56 try:
57 if cls.register_resc:
58 sess.resources.get(cls.register_resc).remove()
59 except Exception as e:
60 print( "Could not remove resc {!r} due to: {} ".format(cls.register_resc,e),
61 file=sys.stderr)
62
1563
1664 def setUp(self):
1765 self.sess = helpers.make_session()
2573 self.coll = self.sess.collections.create(self.coll_path)
2674 self.obj = self.sess.data_objects.create(self.obj_path)
2775
28
2976 def tearDown(self):
3077 '''Remove test data and close connections
3178 '''
3279 self.coll.remove(recurse=True, force=True)
3380 self.sess.cleanup()
34
3581
3682 def test_collections_query(self):
3783 # collection query test
144190 results = self.sess.query(User.name).order_by(
145191 User.name, order='moo').all()
146192
193 def test_query_order_by_col_not_in_result__183(self):
194 test_collection_size = 8
195 test_collection_path = '/{0}/home/{1}/testcoln_for_col_not_in_result'.format(self.sess.zone, self.sess.username)
196 c1 = c2 = None
197 try:
198 c1 = helpers.make_test_collection( self.sess, test_collection_path+"1", obj_count=test_collection_size)
199 c2 = helpers.make_test_collection( self.sess, test_collection_path+"2", obj_count=test_collection_size)
200 d12 = [ sorted([d.id for d in c.data_objects]) for c in sorted((c1,c2),key=lambda c:c.id) ]
201 query = self.sess.query(DataObject).filter(Like(Collection.name, test_collection_path+"_")).order_by(Collection.id)
202 q12 = list(map(lambda res:res[DataObject.id], query))
203 self.assertTrue(d12[0] + d12[1] == sorted( q12[:test_collection_size] ) + sorted( q12[test_collection_size:]))
204 finally:
205 if c1: c1.remove(recurse=True,force=True)
206 if c2: c2.remove(recurse=True,force=True)
147207
148208 def test_query_with_like_condition(self):
149209 '''Equivalent to:
153213 query = self.sess.query(Resource).filter(Like(Resource.name, 'dem%'))
154214 self.assertIn('demoResc', [row[Resource.name] for row in query])
155215
156
157216 def test_query_with_between_condition(self):
158217 '''Equivalent to:
159218 iquest "select RESC_NAME, COLL_NAME, DATA_NAME where DATA_MODIFY_TIME between '01451606400' '...'"
169228 for result in query:
170229 res_str = '{} {}/{}'.format(result[Resource.name], result[Collection.name], result[DataObject.name])
171230 self.assertIn(session.zone, res_str)
231
232 def test_query_with_in_condition(self):
233 collection = self.coll_path
234 filename = 'test_query_id_in_list.txt'
235 file_path = '{collection}/{filename}'.format(**locals())
236 obj1 = helpers.make_object(self.sess, file_path+'-1')
237 obj2 = helpers.make_object(self.sess, file_path+'-2')
238 ids = [x.id for x in (obj1,obj2)]
239 for number in range(3): # slice for empty(:0), first(:1) or both(:2)
240 search_tuple = (ids[:number] if number >= 1 else [0] + ids[:number])
241 q = self.sess.query(DataObject.name).filter(In( DataObject.id, search_tuple ))
242 self.assertEqual (number, rows_returned(q))
243
244 def test_simultaneous_multiple_AVU_joins(self):
245 objects = []
246 decoys = []
247 try:
248 collection = self.coll_path
249 filename = 'test_multiple_AVU_joins'
250 file_path = '{collection}/{filename}'.format(**locals())
251 for x in range(3,9):
252 obj = helpers.make_object(self.sess, file_path+'-{}'.format(x)) # with metadata
253 objects.append(obj)
254 obj.metadata.add('A_meta','1{}'.format(x))
255 obj.metadata.add('B_meta','2{}'.format(x))
256 decoys.append(helpers.make_object(self.sess, file_path+'-dummy{}'.format(x))) # without metadata
257 self.assertTrue( len(objects) > 0 )
258
259 # -- test simple repeat of same column --
260 q = self.sess.query(DataObject,DataObjectMeta).\
261 filter(DataObjectMeta.name == 'A_meta', DataObjectMeta.value < '20').\
262 filter(DataObjectMeta.name == 'B_meta', DataObjectMeta.value >= '20')
263 self.assertTrue( rows_returned(q) == len(objects) )
264
265 # -- test no-stomp of previous filter --
266 self.assertTrue( ('B_meta','28') in [ (x.name,x.value) for x in objects[-1].metadata.items() ] )
267 q = self.sess.query(DataObject,DataObjectMeta).\
268 filter(DataObjectMeta.name == 'B_meta').filter(DataObjectMeta.value < '28').\
269 filter(DataObjectMeta.name == 'B_meta').filter(Like(DataObjectMeta.value, '2_'))
270 self.assertTrue( rows_returned(q) == len(objects)-1 )
271
272 # -- test multiple AVU's by same attribute name --
273 objects[-1].metadata.add('B_meta','29')
274 q = self.sess.query(DataObject,DataObjectMeta).\
275 filter(DataObjectMeta.name == 'B_meta').filter(DataObjectMeta.value == '28').\
276 filter(DataObjectMeta.name == 'B_meta').filter(DataObjectMeta.value == '29')
277 self.assertTrue(rows_returned(q) == 1)
278 finally:
279 for x in (objects + decoys):
280 x.unlink(force=True)
281 helpers.remove_unused_metadata( self.sess )
282
283 def test_query_on_AVU_times(self):
284 test_collection_path = '/{zone}/home/{user}/test_collection'.format( zone = self.sess.zone, user = self.sess.username)
285 testColl = helpers.make_test_collection(self.sess, test_collection_path, obj_count = 1)
286 testData = testColl.data_objects[0]
287 testResc = self.sess.resources.get('demoResc')
288 testUser = self.sess.users.get(self.sess.username)
289 objects = { 'r': testResc, 'u': testUser, 'c':testColl, 'd':testData }
290 object_IDs = { sfx:obj.id for sfx,obj in objects.items() }
291 tables = { 'r': (Resource, ResourceMeta),
292 'u': (User, UserMeta),
293 'd': (DataObject, DataObjectMeta),
294 'c': (Collection, CollectionMeta) }
295 try:
296 str_number_incr = lambda str_numbers : str(1+max([0]+[int(n) if n.isdigit() else 0 for n in str_numbers]))
297 AVU_unique_incr = lambda obj,suffix='' : ( 'a_'+suffix,
298 'v_'+suffix,
299 str_number_incr(avu.units for avu in obj.metadata.items()) )
300 before = datetime.utcnow()
301 time.sleep(1.5)
302 for suffix,obj in objects.items(): obj.metadata.add( *AVU_unique_incr(obj,suffix) )
303 after = datetime.utcnow()
304 for suffix, tblpair in tables.items():
305 self.sess.query( *tblpair ).filter(tblpair[1].modify_time <= after )\
306 .filter(tblpair[1].modify_time > before )\
307 .filter(tblpair[0].id == object_IDs[suffix] ).one()
308 self.sess.query( *tblpair ).filter(tblpair[1].create_time <= after )\
309 .filter(tblpair[1].create_time > before )\
310 .filter(tblpair[0].id == object_IDs[suffix] ).one()
311 finally:
312 for obj in objects.values():
313 for avu in obj.metadata.items(): obj.metadata.remove(avu)
314 testColl.remove(recurse=True,force=True)
315 helpers.remove_unused_metadata( self.sess )
316
317
318 def test_multiple_criteria_on_one_column_name(self):
319 collection = self.coll_path
320 filename = 'test_multiple_AVU_joins'
321 file_path = '{collection}/{filename}'.format(**locals())
322 objects = []
323 nobj = 0
324 for x in range(3,9):
325 nobj += 2
326 obj1 = helpers.make_object(self.sess, file_path+'-{}'.format(x))
327 obj2 = helpers.make_object(self.sess, file_path+'-dummy{}'.format(x))
328 objects.extend([obj1,obj2])
329 self.assertTrue( nobj > 0 and len(objects) == nobj )
330 q = self.sess.query(Collection,DataObject)
331 dummy_test = [d for d in q if d[DataObject.name][-1:] != '8'
332 and d[DataObject.name][-7:-1] == '-dummy' ]
333 self.assertTrue( len(dummy_test) > 0 )
334 q = q. filter(Like(DataObject.name, '%-dummy_')).\
335 filter(Collection.name == collection) .\
336 filter(DataObject.name != (filename + '-dummy8'))
337 results = [r[DataObject.name] for r in q]
338 self.assertTrue(len(results) == len(dummy_test))
339
340
341 def common_dir_or_vault_info(self):
342 register_opts= {}
343 dir_ = None
344 if self.register_resc:
345 dir_ = irods_shared_reg_resc_vault()
346 register_opts[ kw.RESC_NAME_KW ] = self.register_resc
347 if not(dir_) and helpers.irods_session_host_local (self.sess):
348 dir_ = tempfile.gettempdir()
349 if not dir_:
350 return ()
351 else:
352 return (dir_ , register_opts)
353
354
355 @unittest.skipIf(six.PY3, 'Test is for python2 only')
356 def test_query_for_data_object_with_utf8_name_python2(self):
357 reg_info = self.common_dir_or_vault_info()
358 if not reg_info:
359 self.skipTest('server is non-localhost and no common path exists for object registration')
360 (dir_,resc_option) = reg_info
361 filename_prefix = '_prefix_ǠǡǢǣǤǥǦǧǨǩǪǫǬǭǮǯǰDZDzdzǴǵǶǷǸ'
362 self.assertEqual(self.FILENAME_PREFIX.encode('utf-8'), filename_prefix)
363 _,test_file = tempfile.mkstemp(dir=dir_,prefix=filename_prefix)
364 obj_path = os.path.join(self.coll.path, os.path.basename(test_file))
365 results = None
366 try:
367 self.sess.data_objects.register(test_file, obj_path, **resc_option)
368 results = self.sess.query(DataObject, Collection.name).filter(DataObject.path == test_file).first()
369 result_logical_path = os.path.join(results[Collection.name], results[DataObject.name])
370 result_physical_path = results[DataObject.path]
371 self.assertEqual(result_logical_path, obj_path)
372 self.assertEqual(result_physical_path, test_file)
373 finally:
374 if results: self.sess.data_objects.unregister(obj_path)
375 os.remove(test_file)
376
377 # view/change this line in text editors at your own risk:
378 FILENAME_PREFIX = u'_prefix_ǠǡǢǣǤǥǦǧǨǩǪǫǬǭǮǯǰDZDzdzǴǵǶǷǸ'
379
380 @unittest.skipIf(six.PY2, 'Test is for python3 only')
381 def test_query_for_data_object_with_utf8_name_python3(self):
382 reg_info = self.common_dir_or_vault_info()
383 if not reg_info:
384 self.skipTest('server is non-localhost and no common path exists for object registration')
385 (dir_,resc_option) = reg_info
386 def python34_unicode_mkstemp( prefix, dir = None, open_mode = 0o777 ):
387 file_path = os.path.join ((dir or os.environ.get('TMPDIR') or '/tmp'), prefix+'-'+str(uuid.uuid1()))
388 encoded_file_path = file_path.encode('utf-8')
389 return os.open(encoded_file_path,os.O_CREAT|os.O_RDWR,mode=open_mode), encoded_file_path
390 fd = None
391 filename_prefix = u'_prefix_'\
392 u'\u01e0\u01e1\u01e2\u01e3\u01e4\u01e5\u01e6\u01e7\u01e8\u01e9\u01ea\u01eb\u01ec\u01ed\u01ee\u01ef'\
393 u'\u01f0\u01f1\u01f2\u01f3\u01f4\u01f5\u01f6\u01f7\u01f8' # make more visible/changeable in VIM
394 self.assertEqual(self.FILENAME_PREFIX, filename_prefix)
395 (fd,encoded_test_file) = tempfile.mkstemp(dir = dir_.encode('utf-8'),prefix=filename_prefix.encode('utf-8')) \
396 if sys.version_info >= (3,5) \
397 else python34_unicode_mkstemp(dir = dir_, prefix = filename_prefix)
398 self.assertTrue(os.path.exists(encoded_test_file))
399 test_file = encoded_test_file.decode('utf-8')
400 obj_path = os.path.join(self.coll.path, os.path.basename(test_file))
401 results = None
402 try:
403 self.sess.data_objects.register(test_file, obj_path, **resc_option)
404 results = list(self.sess.query(DataObject, Collection.name).filter(DataObject.path == test_file))
405 if results:
406 results = results[0]
407 result_logical_path = os.path.join(results[Collection.name], results[DataObject.name])
408 result_physical_path = results[DataObject.path]
409 self.assertEqual(result_logical_path, obj_path)
410 self.assertEqual(result_physical_path, test_file)
411 finally:
412 if results: self.sess.data_objects.unregister(obj_path)
413 if fd is not None: os.close(fd)
414 os.remove(encoded_test_file)
415
416 class Issue_166_context:
417 '''
418 For [irods/python-irodsclient#166] related tests
419 '''
420
421 def __init__(self, session, coll_path='test_collection_issue_166', num_objects=8, num_avus_per_object=0):
422 self.session = session
423 if '/' not in coll_path:
424 coll_path = '/{}/home/{}/{}'.format(self.session.zone, self.session.username, coll_path)
425 self.coll_path = coll_path
426 self.num_objects = num_objects
427 self.test_collection = None
428 self.nAVUs = num_avus_per_object
429
430 def __enter__(self): # - prepare for context block ("with" statement)
431
432 self.test_collection = helpers.make_test_collection( self.session, self.coll_path, obj_count=self.num_objects)
433 q_params = (Collection.name, DataObject.name)
434
435 if self.nAVUs > 0:
436
437 # - set the AVUs on the collection's objects:
438 for data_obj_path in map(lambda d:d[Collection.name]+"/"+d[DataObject.name],
439 self.session.query(*q_params).filter(Collection.name == self.test_collection.path)):
440 data_obj = self.session.data_objects.get(data_obj_path)
441 for key in (str(x) for x in py3_range(self.nAVUs)):
442 data_obj.metadata[key] = iRODSMeta(key, "1")
443
444 # - in subsequent test searches, match on each AVU of every data object in the collection:
445 q_params += (DataObjectMeta.name,)
446
447 # - The "with" statement receives, as context variable, a zero-arg function to build the query
448 return lambda : self.session.query( *q_params ).filter( Collection.name == self.test_collection.path)
449
450 def __exit__(self,*_): # - clean up after context block
451
452 if self.test_collection is not None:
453 self.test_collection.remove(recurse=True, force=True)
454
455 if self.nAVUs > 0 and self.num_objects > 0:
456 helpers.remove_unused_metadata(self.session) # delete unused AVU's
457
458 def test_query_first__166(self):
459
460 with self.Issue_166_context(self.sess) as buildQuery:
461 for dummy_i in self.Iterate_to_exhaust_statement_table:
462 buildQuery().first()
463
464 def test_query_one__166(self):
465
466 with self.Issue_166_context(self.sess, num_objects = self.More_than_one_batch) as buildQuery:
467
468 for dummy_i in self.Iterate_to_exhaust_statement_table:
469 query = buildQuery()
470 try:
471 query.one()
472 except MultipleResultsFound:
473 pass # irrelevant result
474
475 def test_query_one_iter__166(self):
476
477 with self.Issue_166_context(self.sess, num_objects = self.More_than_one_batch) as buildQuery:
478
479 for dummy_i in self.Iterate_to_exhaust_statement_table:
480
481 for dummy_row in buildQuery():
482 break # single iteration
483
484 def test_paging_get_batches_and_check_paging__166(self):
485
486 with self.Issue_166_context( self.sess, num_objects = 1,
487 num_avus_per_object = 2 * self.More_than_one_batch) as buildQuery:
488
489 pages = [b for b in buildQuery().get_batches()]
490 self.assertTrue(len(pages) > 2 and len(pages[0]) < self.More_than_one_batch)
491
492 to_compare = []
493
494 for _ in self.Iterate_to_exhaust_statement_table:
495
496 for batch in buildQuery().get_batches():
497 to_compare.append(batch)
498 if len(to_compare) == 2: break #leave query unfinished, but save two pages to compare
499
500 # - To make sure paging was done, we ensure that this "key" tuple (collName/dataName , metadataKey)
501 # is not repeated between first two pages:
502
503 Compare_Key = lambda d: ( d[Collection.name] + "/" + d[DataObject.name],
504 d[DataObjectMeta.name] )
505 Set0 = { Compare_Key(dct) for dct in to_compare[0] }
506 Set1 = { Compare_Key(dct) for dct in to_compare[1] }
507 self.assertTrue(len(Set0 & Set1) == 0) # assert intersection is null set
508
509 def test_paging_get_results__166(self):
510
511 with self.Issue_166_context( self.sess, num_objects = self.More_than_one_batch) as buildQuery:
512 batch_size = 0
513 for result_set in buildQuery().get_batches():
514 batch_size = len(result_set)
515 break
516
517 self.assertTrue(0 < batch_size < self.More_than_one_batch)
518
519 for dummy_iter in self.Iterate_to_exhaust_statement_table:
520 iters = 0
521 for dummy_row in buildQuery().get_results():
522 iters += 1
523 if iters == batch_size - 1:
524 break # leave iteration unfinished
525
526 def test_rules_query__267(self):
527 unique = "Testing prc #267: queryable rule objects"
528 with NamedTemporaryFile(mode='w') as rfile:
529 rfile.write("""f() {{ delay('<EF>1m</EF>') {{ writeLine('serverLog','{unique}') }} }}\n"""
530 """OUTPUT null\n""".format(**locals()))
531 rfile.flush()
532 ## create a delayed rule we can query against
533 myrule = Rule(self.sess, rule_file = rfile.name)
534 myrule.execute()
535 qu = self.sess.query(RuleExec.id).filter( Like(RuleExec.frequency,'%1m%'),
536 Like(RuleExec.name, '%{unique}%'.format(**locals())) )
537 results = [row for row in qu]
538 self.assertEqual(1, len(results))
539 if results:
540 Rule(self.sess).remove_by_id( results[0][RuleExec.id] )
172541
173542
174543 class TestSpecificQuery(unittest.TestCase):
260629 # remove query
261630 query.remove()
262631
263
264632 def test_list_specific_queries(self):
265633 query = SpecificQuery(self.session, alias='ls')
266634
269637 self.assertIn('SELECT', result[1].upper()) # query string
270638
271639
272 def test_list_specific_queries_with_wrong_alias(self):
640 def test_list_specific_queries_with_arguments(self):
641 query = SpecificQuery(self.session, alias='lsl', args=['%OFFSET%'])
642
643 for result in query:
644 self.assertIsNotNone(result[0]) # query alias
645 self.assertIn('SELECT', result[1].upper()) # query string
646
647
648 def test_list_specific_queries_with_unknown_alias(self):
273649 query = SpecificQuery(self.session, alias='foo')
274650
275651 with self.assertRaises(CAT_UNKNOWN_SPECIFIC_QUERY):
276652 res = query.get_results()
277653 next(res)
654
278655
279656
280657 if __name__ == '__main__':
66 import textwrap
77 import unittest
88 from irods.models import DataObject
9 from irods.exception import (FAIL_ACTION_ENCOUNTERED_ERR, RULE_ENGINE_ERROR, UnknowniRODSError)
910 import irods.test.helpers as helpers
1011 from irods.rule import Rule
1112 import six
13 from io import open as io_open
14 import io
15
16
17 RE_Plugins_installed_run_condition_args = ( os.environ.get('PYTHON_RULE_ENGINE_INSTALLED','*').lower()[:1]=='y',
18 'Test depends on server having Python-REP installed beyond the default options' )
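The gate above enables the Python-rule-engine tests only when the PYTHON_RULE_ENGINE_INSTALLED environment variable starts with 'y' (case-insensitive). A runner could set it before this module is imported, for example:

import os
os.environ['PYTHON_RULE_ENGINE_INSTALLED'] = 'yes'   # any value beginning with 'y' enables these tests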
1219
1320
1421 class TestRule(unittest.TestCase):
7481 # remove rule file
7582 os.remove(rule_file_path)
7683
84 def test_set_metadata_288(self):
85
86 session = self.sess
87
88 # rule body
89 rule_body = textwrap.dedent('''\
90 *attribute.*attr_name = "*attr_value"
91 msiAssociateKeyValuePairsToObj(*attribute, *object, "-d")
92 # (: -- just a comment -- :) writeLine("serverLog","*value")
93 ''')
94
95 input_params = { '*value': "3334" , "*object": '/tempZone/home/rods/runner.py' ,
96 '*attr_name':'XX',
97 '*attr_value':'YY'
98 }
99
100 output = 'ruleExecOut'
101
102 myrule = Rule(session, body=rule_body, params=input_params, output=output)
103 myrule.execute()
104
105
106 # test catching fail-type actions initiated directly in the instance being called.
107 #
108 @unittest.skipUnless (*RE_Plugins_installed_run_condition_args)
109 def test_with_fail_in_targeted_rule_engines(self):
110 self._failing_in_targeted_rule_engines(rule_to_call = "generic_failing_rule")
111
112
113 # test catching rule fail actions initiated using the native 'failmsg' microservice.
114 #
115 @unittest.skipUnless (*RE_Plugins_installed_run_condition_args)
116 def test_with_using_native_fail_msvc(self):
117 error_dict = \
118 self._failing_in_targeted_rule_engines(rule_to_call = [('irods_rule_engine_plugin-python-instance','failing_with_message_py'),
119 ('irods_rule_engine_plugin-irods_rule_language-instance','failing_with_message')])
120 for v in error_dict.values():
121 self.assertIn('code of minus 2', v[1].args[0])
122
123 # helper for the previous two tests.
124 #
125 def _failing_in_targeted_rule_engines(self, rule_to_call = None):
126 session = self.sess
127 if isinstance(rule_to_call,(list,tuple)):
128 rule_dict = dict(rule_to_call)
129 else:
130 rule_dict = {}
131
132 rule_instances_list = ( 'irods_rule_engine_plugin-irods_rule_language-instance',
133 'irods_rule_engine_plugin-python-instance' )
134 err_hash = {}
135
136 for i in rule_instances_list:
137
138 if rule_dict:
139 rule_to_call = rule_dict[i]
140
141 rule = Rule( session, body = rule_to_call,
142 instance_name = i )
143 try:
144 rule.execute( acceptable_errors = (-1,) )
145 except UnknowniRODSError as e:
146 err_hash[i] = ('rule exec failed! - misc - ',(e)) # 2-tuple = failure
147 except RULE_ENGINE_ERROR as e:
148 err_hash[i] = ('rule exec failed! - python - ',(e)) # 2-tuple = failure
149 except FAIL_ACTION_ENCOUNTERED_ERR as e:
150 err_hash[i] = ('rule exec failed! - native - ',(e))
151 else:
152 err_hash[i] = ('rule exec succeeded!',) # 1-tuple = success
153
154 self.assertEqual( len(err_hash), len(rule_instances_list) )
155 self.assertEqual( len(err_hash), len([val for val in err_hash.values() if val[0].startswith('rule exec failed')]) )
156 return err_hash
157
158
159 @unittest.skipUnless (*RE_Plugins_installed_run_condition_args)
160 def test_targeting_Python_instance_when_rule_multiply_defined(self):
161 self._with_X_instance_when_rule_multiply_defined(
162 instance_name = 'irods_rule_engine_plugin-python-instance',
163 test_condition = lambda bstring: b'python' in bstring
164 )
165
166 @unittest.skipUnless (*RE_Plugins_installed_run_condition_args)
167 def test_targeting_Native_instance_when_rule_multiply_defined(self):
168 self._with_X_instance_when_rule_multiply_defined(
169 instance_name = 'irods_rule_engine_plugin-irods_rule_language-instance',
170 test_condition = lambda bstring: b'native' in bstring
171 )
172
173 @unittest.skipUnless (*RE_Plugins_installed_run_condition_args)
174 def test_targeting_Unspecified_instance_when_rule_multiply_defined(self):
175 self._with_X_instance_when_rule_multiply_defined(
176 test_condition = lambda bstring: b'native' in bstring and b'python' in bstring
177 )
178
179 def _with_X_instance_when_rule_multiply_defined(self,**kw):
180 session = self.sess
181 rule = Rule( session, body = 'defined_in_both',
182 output = 'ruleExecOut',
183 **{key:val for key,val in kw.items() if key == 'instance_name'}
184 )
185 output = rule.execute()
186 buf = output.MsParam_PI[0].inOutStruct.stdoutBuf.buf
187 self.assertTrue(kw['test_condition'](buf.rstrip(b'\0').rstrip()))
188
189
190 def test_specifying_rule_instance(self):
191
192 self._with_writeline_to_stream(
193 stream_name = 'stdout',
194 rule_engine_instance = "irods_rule_engine_plugin-irods_rule_language-instance" )
195
196
197 def _with_writeline_to_stream(self, stream_name = "serverLog",
198 output_string = 'test-writeline-to-stream',
199 alternate_input_params = (),
200 rule_engine_instance = ""):
201
202 session = self.sess
203
204 # rule body
205 rule_body = textwrap.dedent('''\
206 writeLine("{stream_name}","*value")
207 '''.format(**locals()))
208
209 input_params = { '*value': output_string }
210 input_params.update( alternate_input_params )
211
212 output_param = 'ruleExecOut'
213
214 extra_options = {}
215
216 if rule_engine_instance:
217 extra_options [ 'instance_name' ] = rule_engine_instance
218
219 myrule = Rule(session, body=rule_body, params=input_params, output=output_param, **extra_options)
220 output = myrule.execute()
221
222 buf = None
223 if stream_name == 'stdout':
224 buf = output.MsParam_PI[0].inOutStruct.stdoutBuf.buf
225 elif stream_name == 'stderr':
226 buf = output.MsParam_PI[0].inOutStruct.stderrBuf.buf
227
228 if buf is not None:
229 buf = buf.decode('utf-8')
230 self.assertEqual (output_string, buf.rstrip('\0').rstrip())
231
232
77233 def test_add_metadata_from_rule(self):
78234 '''
79235 Runs a rule whose body and input parameters are created in our script.
113269 output = 'ruleExecOut'
114270
115271 # run test rule
116 myrule = Rule(session, body=rule_body,
272 myrule = Rule(session, body=rule_body, irods_3_literal_style = True,
117273 params=input_params, output=output)
118274 myrule.execute()
119275
124280
125281 # remove test object
126282 obj.unlink(force=True)
283
127284
128285 def test_retrieve_std_streams_from_rule(self):
129286 '''
156313 INPUT *some_string="{some_string}",*some_other_string="{some_other_string}",*err_string="{err_string}"
157314 OUTPUT ruleExecOut'''.format(**locals()))
158315
159 with open(rule_file_path, "w") as rule_file:
160 if six.PY2:
161 rule_file.write(rule.encode('utf-8'))
162 else:
163 rule_file.write(rule)
316 with io_open(rule_file_path, "w", encoding='utf-8') as rule_file:
317 rule_file.write(rule)
164318
165319 # run test rule
166320 myrule = Rule(session, rule_file_path)
185339
186340 # remove rule file
187341 os.remove(rule_file_path)
342
343
344 @staticmethod
345 def lines_from_stdout_buf(output):
346 buf = ""
347 if output and len(output.MsParam_PI):
348 buf = output.MsParam_PI[0].inOutStruct.stdoutBuf.buf
349 if buf:
350 buf = buf.rstrip(b'\0').decode('utf8')
351 return buf.splitlines()
352
353
354 def test_rulefile_in_file_like_object_1__336(self):
355
356 rule_file_contents = textwrap.dedent(u"""\
357 hw {
358 helloWorld(*message);
359 writeLine("stdout", "Message is: [*message] ...");
360 }
361 helloWorld(*OUT)
362 {
363 *OUT = "Hello world!"
364 }
365 """)
366 r = Rule(self.sess, rule_file = io.StringIO( rule_file_contents ),
367 output = 'ruleExecOut', instance_name='irods_rule_engine_plugin-irods_rule_language-instance')
368 output = r.execute()
369 lines = self.lines_from_stdout_buf(output)
370 self.assertRegexpMatches (lines[0], '.*\[Hello world!\]')
371
372
373 def test_rulefile_in_file_like_object_2__336(self):
374
375 rule_file_contents = textwrap.dedent("""\
376 main {
377 other_rule()
378 writeLine("stdout","["++type(*msg2)++"][*msg2]");
379 }
380 other_rule {
381 writeLine("stdout","["++type(*msg1)++"][*msg1]");
382 }
383
384 INPUT *msg1="",*msg2=""
385 OUTPUT ruleExecOut
386 """)
387
388 r = Rule(self.sess, rule_file = io.BytesIO( rule_file_contents.encode('utf-8') ))
389 output = r.execute()
390 lines = self.lines_from_stdout_buf(output)
391 self.assertRegexpMatches (lines[0], '\[STRING\]\[\]')
392 self.assertRegexpMatches (lines[1], '\[STRING\]\[\]')
393
394 r = Rule(self.sess, rule_file = io.BytesIO( rule_file_contents.encode('utf-8') )
395 , params = {'*msg1':5, '*msg2':'"A String"'})
396 output = r.execute()
397 lines = self.lines_from_stdout_buf(output)
398 self.assertRegexpMatches (lines[0], '\[INTEGER\]\[5\]')
399 self.assertRegexpMatches (lines[1], '\[STRING\]\[A String\]')
188400
189401
190402 if __name__ == '__main__':
0 #!/usr/bin/env python
1
2 from __future__ import print_function
3 import os
4 import sys
5 import socket
6 import posix
7 import shutil
8 from subprocess import (Popen, PIPE)
9
10 IRODS_SSL_DIR = '/etc/irods/ssl'
11
12 def create_ssl_dir():
13 save_cwd = os.getcwd()
14 silent_run = { 'shell': True, 'stderr' : PIPE, 'stdout' : PIPE }
15 try:
16 if not (os.path.exists(IRODS_SSL_DIR)):
17 os.mkdir(IRODS_SSL_DIR)
18 os.chdir(IRODS_SSL_DIR)
19 Popen("openssl genrsa -out irods.key 2048",**silent_run).communicate()
20 with open("/dev/null","wb") as dev_null:
21 p = Popen("openssl req -new -x509 -key irods.key -out irods.crt -days 365 <<EOF{_sep_}"
22 "US{_sep_}North Carolina{_sep_}Chapel Hill{_sep_}UNC{_sep_}RENCI{_sep_}"
23 "{host}{_sep_}anon@mail.com{_sep_}EOF\n""".format(
24 host = socket.gethostname(), _sep_="\n"),shell=True, stdout=dev_null, stderr=dev_null)
25 p.wait()
26 if 0 == p.returncode:
27 Popen('openssl dhparam -2 -out dhparams.pem',**silent_run).communicate()
28 return os.listdir(".")
29 finally:
30 os.chdir(save_cwd)
31
32 def test(opts,args=()):
33 if args: print ('warning: non-option args are ignored',file=sys.stderr)
34 affirm = 'n' if os.path.exists(IRODS_SSL_DIR) else 'y'
35 if not [v for k,v in opts if k == '-f'] and affirm == 'n' and posix.isatty(sys.stdin.fileno()):
36 try:
37 input_ = raw_input
38 except NameError:
39 input_ = input
40 affirm = input_("This will overwrite directory '{}'. Proceed(Y/N)? ".format(IRODS_SSL_DIR))
41 if affirm[:1].lower() == 'y':
42 shutil.rmtree(IRODS_SSL_DIR,ignore_errors=True)
43 print("Generating new '{}'. This may take a while.".format(IRODS_SSL_DIR), file=sys.stderr)
44 ssl_dir_files = create_ssl_dir()
45 print('ssl_dir_files=', ssl_dir_files)
46
47 if __name__ == '__main__':
48 import getopt
49 test(*getopt.getopt(sys.argv[1:],'f')) # f = force
0 #! /usr/bin/env python
1 from __future__ import absolute_import
2 import os
3 import sys
4 import unittest
5 from irods.exception import UserDoesNotExist
6 from irods.session import iRODSSession
7 import irods.test.helpers as helpers
8
9
10 class TestTempPassword(unittest.TestCase):
11 """ Suite of tests for setting and getting temporary passwords as rodsadmin or rodsuser
12 """
13 admin = None
14 new_user = 'bobby'
15 password = 'foobar'
16
17 @classmethod
18 def setUpClass(cls):
19 cls.admin = helpers.make_session()
20
21 @classmethod
22 def tearDownClass(cls):
23 cls.admin.cleanup()
24
25 def test_temp_password(self):
26 # Make a new user
27 self.admin.users.create(self.new_user, 'rodsuser')
28 self.admin.users.modify(self.new_user, 'password', self.password)
29
30 # Login as the new test user, to retrieve a temporary password
31 with iRODSSession(host=self.admin.host,
32 port=self.admin.port,
33 user=self.new_user,
34 password=self.password,
35 zone=self.admin.zone) as session:
36 # Obtain the temporary password
37 conn = session.pool.get_connection()
38 temp_password = conn.temp_password()
39
40 # Open a new session with the temporary password
41 with iRODSSession(host=self.admin.host,
42 port=self.admin.port,
43 user=self.new_user,
44 password=temp_password,
45 zone=self.admin.zone) as session:
46
47 # do something that connects to the server
48 session.users.get(self.admin.username)
49
50 # delete new user
51 self.admin.users.remove(self.new_user)
52
53 # user should be gone
54 with self.assertRaises(UserDoesNotExist):
55 self.admin.users.get(self.new_user)
56
57 def test_set_temp_password(self):
58 # make a new user
59 temp_user = self.admin.users.create(self.new_user, 'rodsuser')
60
61 # obtain a temporary password as rodsadmin for another user
62 temp_password = temp_user.temp_password()
63
64 # open a session as the new user
65 with iRODSSession(host=self.admin.host,
66 port=self.admin.port,
67 user=self.new_user,
68 password=temp_password,
69 zone=self.admin.zone) as session:
70
71 # do something that connects to the server
72 session.users.get(self.new_user)
73
74 # delete new user
75 self.admin.users.remove(self.new_user)
76
77 # user should be gone
78 with self.assertRaises(UserDoesNotExist):
79 self.admin.users.get(self.new_user)
80
81
82 if __name__ == '__main__':
83 # let the tests find the parent irods lib
84 sys.path.insert(0, os.path.abspath('../..'))
85 unittest.main()
0 {
1 "irods_host": "127.0.0.1",
2 "irods_port": "1247",
3 "irods_user_name": "rods",
4 "irods_zone_name": "tempZone",
5 "irods_connection_refresh_time": "3"
6 }
0 {
1 "irods_host": "127.0.0.1",
2 "irods_port": "1247",
3 "irods_user_name": "rods",
4 "irods_zone_name": "tempZone",
5 "irods_connection_refresh_time": "-3"
6 }
0 {
1 "irods_host": "127.0.0.1",
2 "irods_port": "1247",
3 "irods_user_name": "rods",
4 "irods_zone_name": "tempZone"
5 }
0 import unittest
1 import os.path
2 from irods.path import iRODSPath
3
4 _normalization_test_cases = [
5 # -- test case -- -- reference --
6 ("/zone", "/zone"), # a normal path (1 element)
7 ("/zone/", "/zone"), # single-slash (1 element)
8 ("/zone/abc", "/zone/abc"), # a normal path (2 elements)
9 ("/zone/abc/", "/zone/abc"), # single-slash (2 elements)
10 ("/zone/abc/.", "/zone/abc"), # final "."
11 ("/zone/abc/./", "/zone/abc"), # final "." and "/"
12 ("/zone/abc/..", "/zone"), # final ".."
13 ("/zone/abc/../", "/zone"), # final ".." and "/"
14 ("/zone1/../zone2", "/zone2"), # replace one path element with another
15 ("/zone/home1/../home2", "/zone/home2"), # same for a later path element
16 ("/..", "/"), # go up (1x) above root collection
17 ("/../..", "/"), # go up (2x) above root collection
18 ("", "/"), # absolute makes a blank into the root collection
19 (".", "/"), # absolute makes a single "." into the root collection
20 ("./.", "/"), # absolute makes "." (2x) into the root collection
21 ("././zone", "/zone"), # absolute makes initial "." (2x) a NO-OP before a normal elem
22 ("/./zone/abc", "/zone/abc"), # initial (no-op) '.'
23 ("/../zone", "/zone"), # go up (1x) above root collection and back down
24 ("/../zone/..", "/"), # go up (when first, this is a NO-OP); then down, up
25 ("/../../zone", "/zone"), # go up (2x) above root collection and back down
26 ("//zone1/../.././zone2", "/zone2"), # double-slashes, multiple relative elems
27 ("//zone1//../.././zone2", "/zone2"), # double-slashes (2x), multiple relative elems
28 ("//zone//abc/.", "/zone/abc"), # same with final "."
29 ("//zone//abc/..", "/zone"), # same with final ".."
30 ("//zone//abc/./..", "/zone"), # same with final "." and ".."
31 ("/zone//abc/./../", "/zone"), # mixed relative elems (./..) and final slashes
32 ("/zone//abc/.././", "/zone"), # mixed relative elems (../.) and final slashes
33 ("/zone/home1//user/./../trash", "/zone/home1/trash"), # intermediately situated double-slash and relative elems (vsn 1)
34 ("/zone/home1//user/.././trash", "/zone/home1/trash"), # intermediately situated double-slash and relative elems (vsn 2)
35 ]
36
37
38 class PathsTest(unittest.TestCase):
39 def test_path_normalization__383(self):
40 for test_path, reference in _normalization_test_cases:
41 normalized_path = iRODSPath(test_path)
42 self.assertEqual( normalized_path, reference )
43
44
45 if __name__ == '__main__':
46 import sys
47 # let the tests find the parent irods lib
48 sys.path.insert(0, os.path.abspath('../..'))
49 unittest.main()
0 #! /usr/bin/env python
1 from __future__ import print_function
2 from __future__ import absolute_import
3 import os
4 import sys
5 import unittest
6 import time
7 import calendar
8
9 import irods.test.helpers as helpers
10 import tempfile
11 from irods.session import iRODSSession
12 import irods.exception as ex
13 import irods.keywords as kw
14 from irods.ticket import Ticket
15 from irods.models import (TicketQuery,DataObject,Collection)
16
17
18 # As with most of the modules in this test suite, session objects created via
19 # make_session() are implicitly agents of a rodsadmin unless otherwise indicated.
20 # Counterexamples within this module are easy to spot: they are instantiated by
21 # the login() method and always tied to one of the traditional rodsuser names
22 # widely used in iRODS test suites, i.e. 'alice' or 'bob' (see the sketch below).
23
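# A minimal sketch of the two session flavors described above (credentials mirror the
# 'users' dictionary defined in TestRodsUserTicketOps; host, port and zone are taken
# from the admin session):
#
#   admin = helpers.make_session()        # rodsadmin, built from the local iRODS environment
#   alice = iRODSSession(host=admin.host, port=admin.port, zone=admin.zone,
#                        user='alice', password='apass')   # rodsuser, as login() does below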
24
25 def gmtime_to_timestamp (gmt_struct):
26 return "{0.tm_year:04d}-{0.tm_mon:02d}-{0.tm_mday:02d}."\
27 "{0.tm_hour:02d}:{0.tm_min:02d}:{0.tm_sec:02d}".format(gmt_struct)
28
29
30 def delete_my_tickets(session):
31 my_userid = session.users.get( session.username ).id
32 my_tickets = session.query(TicketQuery.Ticket).filter(TicketQuery.Ticket.user_id == my_userid)
33 for res in my_tickets:
34 Ticket(session, result = res).delete()
35
36
37 class TestRodsUserTicketOps(unittest.TestCase):
38
39 def login(self,user):
40 return iRODSSession (port=self.port,zone=self.zone,host=self.host,
41 user=user.name,password=self.users[user.name])
42
43 @staticmethod
44 def irods_homedir(sess, path_only = False):
45 path = '/{0.zone}/home/{0.username}'.format(sess)
46 if path_only:
47 return path
48 return sess.collections.get(path)
49
50 @staticmethod
51 def list_objects (sess):
52 return [ '{}/{}'.format(o[Collection.name],o[DataObject.name]) for o in
53 sess.query(Collection.name,DataObject.name) ]
54
55 users = {
56 'alice':'apass',
57 'bob':'bpass'
58 }
59
60 def setUp(self):
61
62 self.alice = self.bob = None
63
64 with helpers.make_session() as ses:
65 u = ses.users.get(ses.username)
66 if u.type != 'rodsadmin':
67 self.skipTest('''Test runnable only by rodsadmin.''')
68 self.host = ses.host
69 self.port = ses.port
70 self.zone = ses.zone
71 for newuser,passwd in self.users.items():
72 u = ses.users.create( newuser, 'rodsuser')
73 setattr(self,newuser,u)
74 u.modify('password', passwd)
75
76 def tearDown(self):
77 with helpers.make_session() as ses:
78 for u in self.users:
79 ses.users.remove(u)
80
81
82 def test_admin_keyword_for_tickets (self):
83
84 if helpers.make_session().server_version < (4,2,11):
85 self.skipTest('ADMIN_KW not valid for Tickets API before iRODS 4.2.11')
86
87 N_TICKETS = 3
88
89 # Create some tickets as alice.
90
91 with self.login(self.alice) as alice:
92 alice_home_path = self.irods_homedir(alice, path_only = True)
93 ticket_strings = [ Ticket(alice).issue('read', alice_home_path).string for _ in range(N_TICKETS) ]
94
95 # As rodsadmin, use the ADMIN_KW flag to delete alice's tickets.
96
97 with helpers.make_session() as ses:
98 alices_tickets = [t[TicketQuery.Ticket.string] for t in ses.query(TicketQuery.Ticket).filter(TicketQuery.Owner.name == 'alice')]
99 self.assertEqual(len(alices_tickets),N_TICKETS)
100 for s in alices_tickets:
101 Ticket( ses, s ).delete(**{kw.ADMIN_KW:''})
102 alices_tickets = [t[TicketQuery.Ticket.string] for t in ses.query(TicketQuery.Ticket).filter(TicketQuery.Owner.name == 'alice')]
103 self.assertEqual(len(alices_tickets),0)
104
105
106 def test_ticket_expiry (self):
107 with helpers.make_session() as ses:
108 t1 = t2 = dobj = None
109 try:
110 gm_now = time.gmtime()
111 gm_later = time.gmtime( calendar.timegm( gm_now ) + 10 )
112 home = self.irods_homedir(ses)
113 dobj = helpers.make_object(ses, home.path+'/dummy', content='abcxyz')
114
115 later_ts = gmtime_to_timestamp (gm_later)
116 later_epoch = calendar.timegm (gm_later)
117
118 t1 = Ticket(ses)
119 t2 = Ticket(ses)
120
121 tickets = [ _.issue('read',dobj.path).string for _ in (t1,
122 t2,) ]
123 t1.modify('expire',later_ts) # - Specify expiry with the human readable timestamp.
124 t2.modify('expire',later_epoch) # - Specify expiry formatted as epoch seconds.
125
126 # Check normal access succeeds prior to expiration
127 for ticket_string in tickets:
128 with self.login(self.alice) as alice:
129 Ticket(alice, ticket_string).supply()
130 alice.data_objects.get(dobj.path)
131
132 # Check that both time formats have effected the same expiry time (The catalog returns epoch secs.)
133 timestamps = []
134 for ticket_string in tickets:
135 t = ses.query(TicketQuery.Ticket).filter(TicketQuery.Ticket.string == ticket_string).one()
136 timestamps.append( t [TicketQuery.Ticket.expiry_ts] )
137 self.assertEqual (len(timestamps),2)
138 self.assertEqual (timestamps[0],timestamps[1])
139
140 # Wait for tickets to expire.
141 epoch = int(time.time())
142 while epoch <= later_epoch:
143 time.sleep(later_epoch - epoch + 1)
144 epoch = int(time.time())
145
146 Expected_Exception = ex.CAT_TICKET_EXPIRED if ses.server_version >= (4,2,9) \
147 else ex.SYS_FILE_DESC_OUT_OF_RANGE
148
149 # Check tickets no longer allow access.
150 for ticket_string in tickets:
151 with self.login(self.alice) as alice, tempfile.NamedTemporaryFile() as f:
152 Ticket(alice, ticket_string).supply()
153 with self.assertRaises( Expected_Exception ):
154 alice.data_objects.get(dobj.path,f.name, **{kw.FORCE_FLAG_KW:''})
155
156 finally:
157 if t1: t1.delete()
158 if t2: t2.delete()
159 if dobj: dobj.unlink(force=True)
160
161
162 def test_object_read_and_write_tickets(self):
163 if self.alice is None or self.bob is None:
164 self.skipTest("A rodsuser (alice and/or bob) could not be created.")
165 t=None
166 data_objs=[]
167 tmpfiles=[]
168 try:
169             # Log in as alice and locate her home collection.
170 alice = self.login(self.alice)
171 home = self.irods_homedir(alice)
172
173 # Create 'R' and 'W' in alice's home collection.
174 data_objs = [helpers.make_object(alice,home.path+"/"+name,content='abcxyz') for name in ('R','W')]
175 tickets = {
176 'R': Ticket(alice).issue('read', home.path + "/R").string,
177 'W': Ticket(alice).issue('write', home.path + "/W").string
178 }
179             # Test that only the write ticket allows upload.
180 with self.login(self.bob) as bob:
181 rw_names={}
182 for name in ('R','W'):
183 Ticket( bob, tickets[name] ).supply()
184 with tempfile.NamedTemporaryFile (delete=False) as tmpf:
185 tmpfiles += [tmpf]
186 rw_names[name] = tmpf.name
187 tmpf.write(b'hello')
188 if name=='W':
189 bob.data_objects.put(tmpf.name,home.path+"/"+name)
190 else:
191 try:
192 bob.data_objects.put(tmpf.name,home.path+"/"+name)
193 except ex.CAT_NO_ACCESS_PERMISSION:
194 pass
195 else:
196 raise AssertionError("A read ticket allowed a data object write operation to happen without error.")
197
198 # Test upload was successful, by getting and confirming contents.
199
200 with self.login(self.bob) as bob: # This check must be in a new session or we get CollectionDoesNotExist. - Possibly a new issue [ ]
201 for name in ('R','W'):
202 Ticket( bob, tickets[name] ).supply()
203 bob.data_objects.get(home.path+"/"+name,rw_names[ name ],**{kw.FORCE_FLAG_KW:''})
204 with open(rw_names[ name ],'r') as tmpread:
205 self.assertEqual(tmpread.read(),
206 'abcxyz' if name == 'R' else 'hello')
207 finally:
208 if t: t.delete()
209 for d in data_objs:
210 d.unlink(force=True)
211 for file_ in tmpfiles: os.unlink( file_.name )
212 alice.cleanup()
213
214
215 def test_coll_read_ticket_between_rodsusers(self):
216 t=None
217 data_objs=[]
218 tmpfiles=[]
219 try:
220 # Create ticket for read access to alice's home collection.
221 alice = self.login(self.alice)
222 tc = Ticket(alice)
223 home = self.irods_homedir(alice)
224 tc.issue('read', home.path)
225
226 # Create 'x' and 'y' in alice's home collection
227 data_objs = [helpers.make_object(alice,home.path+"/"+name,content='abcxyz') for name in ('x','y')]
228
229 with self.login(self.bob) as bob:
230 ts = Ticket( bob, tc.string )
231 ts.supply()
232 # Check collection access ticket allows bob to list both subobjects
233 self.assertEqual(len(self.list_objects(bob)),2)
234 # and that we can get (and read) them properly.
235 for name in ('x','y'):
236 with tempfile.NamedTemporaryFile (delete=False) as tmpf:
237 tmpfiles += [tmpf]
238 bob.data_objects.get(home.path+"/"+name,tmpf.name,**{kw.FORCE_FLAG_KW:''})
239 with open(tmpf.name,'r') as tmpread:
240 self.assertEqual(tmpread.read(),'abcxyz')
241
242 td = Ticket(alice)
243 td.issue('read', home.path+"/x")
244
245 with self.login(self.bob) as bob:
246 ts = Ticket( bob, td.string )
247 ts.supply()
248
249 # Check data access ticket allows bob to list only one data object
250 self.assertEqual(len(self.list_objects(bob)),1)
251
252 # ... and fetch that object (verifying content)
253 with tempfile.NamedTemporaryFile (delete=False) as tmpf:
254 tmpfiles += [tmpf]
255 bob.data_objects.get(home.path+"/x",tmpf.name,**{kw.FORCE_FLAG_KW:''})
256 with open(tmpf.name,'r') as tmpread:
257 self.assertEqual(tmpread.read(),'abcxyz')
258
259 # ... but not fetch the other data object owned by alice.
260 with self.assertRaises(ex.DataObjectDoesNotExist):
261 bob.data_objects.get(home.path+"/y")
262 finally:
263 if t: t.delete()
264 for d in data_objs:
265 d.unlink(force=True)
266 for file_ in tmpfiles: os.unlink( file_.name )
267 alice.cleanup()
268
269
270 class TestTicketOps(unittest.TestCase):
271
272 def setUp(self):
273 """Create objects for test"""
274 self.sess = helpers.make_session()
275 user = self.sess.users.get(self.sess.username)
276 if user.type != 'rodsadmin':
277 self.skipTest('''Test runnable only by rodsadmin.''')
278
279 admin = self.sess
280 delete_my_tickets( admin )
281
282 # Create test collection
283
284 self.coll_path = '/{}/home/{}/ticket_test_dir'.format(admin.zone, admin.username)
285 self.coll = helpers.make_collection(admin, self.coll_path)
286
287 # Create anonymous test user
288 self.user = admin.users.create('anonymous','rodsuser')
289 self.rodsuser_params = { 'host':admin.host,
290 'port':admin.port,
291 'user': 'anonymous',
292 'password':'',
293 'zone':admin.zone }
294
295 # make new data object in the test collection with some initialized content
296
297 self.INITIALIZED_DATA = b'1'*16
298 self.data_path = '{self.coll_path}/ticketed_data'.format(**locals())
299 helpers.make_object (admin, self.data_path, content = self.INITIALIZED_DATA)
300
301 self.MODIFIED_DATA = b'2'*16
302
303 # make new tickets for the various combinations
304
305 self.tickets = {'coll':{},'data':{}}
306 for obj_type in ('coll','data'):
307 for access in ('read','write'):
308 ticket = Ticket(admin)
309 self.tickets [obj_type] [access] = ticket.string
310 ticket.issue( access , getattr(self, obj_type + '_path'))
311
312
313 def tearDown(self):
314 """Clean up tickets , collections and data objects used for test."""
315 admin = self.sess
316 delete_my_tickets( admin )
317 if getattr(self,'coll',None):
318 self.coll.remove(recurse=True, force=True)
319 if getattr(self,'user',None):
320 self.user.remove()
321 admin.cleanup()
322
323
324 def _ticket_read_helper( self, obj_type, download = False ):
325 with iRODSSession( ** self.rodsuser_params ) as user_sess:
326 temp_file = []
327 if download: temp_file += [tempfile.mktemp()]
328 try:
329 Ticket(user_sess, self.tickets[obj_type]['read']).supply()
330 data = user_sess.data_objects.get(self.data_path,*temp_file)
331 self.assertEqual (data.open('r').read(), self.INITIALIZED_DATA)
332 if temp_file:
333 with open(temp_file[0],'rb') as local_file:
334 self.assertEqual (local_file.read(), self.INITIALIZED_DATA)
335 finally:
336 if temp_file and os.path.exists(temp_file[0]):
337 os.unlink(temp_file[0])
338
339
340 def test_data_ticket_read(self): self._ticket_read_helper( obj_type = 'data' )
341
342 def test_coll_ticket_read(self): self._ticket_read_helper( obj_type = 'coll' )
343
344 def test_data_ticket_read_with_download(self): self._ticket_read_helper( obj_type = 'data', download = True )
345
346 def test_coll_ticket_read_with_download(self): self._ticket_read_helper( obj_type = 'coll', download = True )
347
348
349 def _ticket_write_helper( self, obj_type ):
350 with iRODSSession( ** self.rodsuser_params ) as user_sess:
351 Ticket(user_sess, self.tickets[obj_type]['write']).supply()
352 data = user_sess.data_objects.get(self.data_path)
353 with data.open('w') as obj:
354 obj.write(self.MODIFIED_DATA)
355 self.assertEqual (data.open('r').read(), self.MODIFIED_DATA)
356
357
358 def test_data_ticket_write(self): self._ticket_write_helper( obj_type = 'data' )
359
360 def test_coll_ticket_write(self): self._ticket_write_helper( obj_type = 'coll' )
361
362
363 if __name__ == '__main__':
364 # let the tests find the parent irods lib
365 sys.path.insert(0, os.path.abspath('../..'))
366 unittest.main()
55 import unittest
66 from irods.models import Collection, DataObject
77 import xml.etree.ElementTree as ET
8 from irods.message import (ET as ET_set, XML_Parser_Type, current_XML_parser, default_XML_parser)
89 import logging
910 import irods.test.helpers as helpers
11 from six import PY3
1012
1113 logger = logging.getLogger(__name__)
1214
6971 self.coll.remove(recurse=True, force=True)
7072 self.sess.cleanup()
7173
74 def test_object_name_containing_unicode__318(self):
75 dataname = u"réprouvé"
76 homepath = helpers.home_collection( self.sess )
77 try:
78 ET_set( XML_Parser_Type.QUASI_XML, self.sess.server_version )
79 path = homepath + "/" + dataname
80 self.sess.data_objects.create( path )
81 finally:
82 ET_set( None )
83 self.sess.data_objects.unlink (path, force = True)
84
85 # assert successful switch back to global default
86 self.assertIs( current_XML_parser(), default_XML_parser() )
87
7288 def test_files(self):
7389 # Query for all files in test collection
7490 query = self.sess.query(DataObject.name, Collection.name).filter(
7591 Collection.name == self.coll_path)
7692
93         # Python2 compatibility note: in keeping with the principle of least surprise, queries now
94         # return values of 'str' type in Python2. Where those values may contain unicode, they can
95         # be passed through a decode stage, as done below.
96
97 encode_unless_PY3 = (lambda x:x) if PY3 else (lambda x:x.encode('utf8'))
98 decode_unless_PY3 = (lambda x:x) if PY3 else (lambda x:x.decode('utf8'))
99
77100 for result in query:
78101 # check that we got back one of our original names
79 assert result[DataObject.name] in self.names
102 assert result[DataObject.name] in ( [encode_unless_PY3(n) for n in self.names] )
80103
81104 # fyi
82 logger.info(
83 u"{0}/{1}".format(result[Collection.name], result[DataObject.name]))
105 logger.info( u"{0}/{1}".format( decode_unless_PY3(result[Collection.name]),
106 decode_unless_PY3(result[DataObject.name]) )
107 )
84108
85109 # remove from set
86 self.names.remove(result[DataObject.name])
110 self.names.remove(decode_unless_PY3(result[DataObject.name]))
87111
88112 # make sure we got all of them
89113 self.assertEqual(0, len(self.names))
22 import os
33 import sys
44 import unittest
5 from irods.exception import UserGroupDoesNotExist
5 import tempfile
6 import shutil
7 from irods.exception import UserGroupDoesNotExist, UserDoesNotExist
8 from irods.meta import iRODSMetaCollection, iRODSMeta
9 from irods.models import User, UserGroup, UserMeta
10 from irods.session import iRODSSession
11 import irods.exception as ex
612 import irods.test.helpers as helpers
713 from six.moves import range
814
1622 '''Close connections
1723 '''
1824 self.sess.cleanup()
25
26 def test_modify_password__328(self):
27 ses = self.sess
28 if ses.users.get( ses.username ).type != 'rodsadmin':
29 self.skipTest( 'Only a rodsadmin may run this test.')
30
31 OLDPASS = 'apass'
32 NEWPASS = 'newpass'
33 try:
34 ses.users.create('alice', 'rodsuser')
35 ses.users.modify('alice', 'password', OLDPASS)
36
37 with iRODSSession(user='alice', password=OLDPASS, host=ses.host, port=ses.port, zone=ses.zone) as alice:
38 me = alice.users.get(alice.username)
39 me.modify_password(OLDPASS, NEWPASS)
40
41 with iRODSSession(user='alice', password=NEWPASS, host=ses.host, port=ses.port, zone=ses.zone) as alice:
42 home = helpers.home_collection( alice )
43 alice.collections.get( home ) # Non-trivial operation to test success!
44 finally:
45 try:
46 ses.users.get('alice').remove()
47 except UserDoesNotExist:
48 pass
49
50 @staticmethod
51 def do_something(session):
52 return session.username in [i[User.name] for i in session.query(User)]
53
54 def test_modify_password_with_changing_auth_file__328(self):
55 ses = self.sess
56 if ses.users.get( ses.username ).type != 'rodsadmin':
57 self.skipTest( 'Only a rodsadmin may run this test.')
58 OLDPASS = 'apass'
59 def generator(p = OLDPASS):
60 n = 1
61 old_pw = p
62 while True:
63 pw = p + str(n)
64 yield old_pw, pw
65 n += 1; old_pw = pw
66 password_generator = generator()
67 ENV_DIR = tempfile.mkdtemp()
68 d = dict(password = OLDPASS, user = 'alice', host = ses.host, port = ses.port, zone = ses.zone)
69 (alice_env, alice_auth) = helpers.make_environment_and_auth_files(ENV_DIR, **d)
70 try:
71 ses.users.create('alice', 'rodsuser')
72 ses.users.modify('alice', 'password', OLDPASS)
73 for modify_option, sess_factory in [ (alice_auth, lambda: iRODSSession(**d)),
74 (True,
75 lambda: helpers.make_session(irods_env_file = alice_env,
76 irods_authentication_file = alice_auth)) ]:
77 OLDPASS,NEWPASS=next(password_generator)
78 with sess_factory() as alice_ses:
79 alice = alice_ses.users.get(alice_ses.username)
80 alice.modify_password(OLDPASS, NEWPASS, modify_irods_authentication_file = modify_option)
81 d['password'] = NEWPASS
82 with iRODSSession(**d) as session:
83 self.do_something(session) # can we still do stuff with the final value of the password?
84 finally:
85 shutil.rmtree(ENV_DIR)
86 ses.users.remove('alice')
87
88 def test_modify_password_with_incorrect_old_value__328(self):
89 ses = self.sess
90 if ses.users.get( ses.username ).type != 'rodsadmin':
91 self.skipTest( 'Only a rodsadmin may run this test.')
92 OLDPASS = 'apass'
93 NEWPASS = 'newpass'
94 ENV_DIR = tempfile.mkdtemp()
95 try:
96 ses.users.create('alice', 'rodsuser')
97 ses.users.modify('alice', 'password', OLDPASS)
98 d = dict(password = OLDPASS, user = 'alice', host = ses.host, port = ses.port, zone = ses.zone)
99 (alice_env, alice_auth) = helpers.make_environment_and_auth_files(ENV_DIR, **d)
100 session_factories = [
101 (lambda: iRODSSession(**d)),
102 (lambda: helpers.make_session( irods_env_file = alice_env, irods_authentication_file = alice_auth)),
103 ]
104 for factory in session_factories:
105 with factory() as alice_ses:
106 alice = alice_ses.users.get(alice_ses.username)
107 with self.assertRaises( ex.CAT_PASSWORD_ENCODING_ERROR ):
108 alice.modify_password(OLDPASS + ".", NEWPASS)
109 with iRODSSession(**d) as alice_ses:
110 self.do_something(alice_ses)
111 finally:
112 shutil.rmtree(ENV_DIR)
113 ses.users.remove('alice')
19114
20115 def test_create_group(self):
21116 group_name = "test_group"
84179 with self.assertRaises(UserGroupDoesNotExist):
85180 self.sess.user_groups.get(group_name)
86181
87
88182 def test_user_dn(self):
89183 # https://github.com/irods/irods/issues/3620
90184 if self.sess.server_version == (4, 2, 1):
105199
106200 # add other dn
107201 user.modify('addAuth', user_DNs[1])
108 self.assertEqual(user.dn, user_DNs)
202 self.assertEqual( sorted(user.dn), sorted(user_DNs) )
109203
110204 # remove first dn
111205 user.modify('rmAuth', user_DNs[0])
112206
113207 # confirm removal
114 self.assertEqual(user.dn, user_DNs[1:])
208 self.assertEqual(sorted(user.dn), sorted(user_DNs[1:]))
115209
116210 # delete user
117211 user.remove()
118212
213 def test_group_metadata(self):
214 group_name = "test_group"
215
216 # group should not be already present
217 with self.assertRaises(UserGroupDoesNotExist):
218 self.sess.user_groups.get(group_name)
219
220 group = None
221
222 try:
223 # create group
224 group = self.sess.user_groups.create(group_name)
225
226 # add metadata to group
227 triple = ['key', 'value', 'unit']
228 group.metadata[triple[0]] = iRODSMeta(*triple)
229
230 result = self.sess.query(UserMeta, UserGroup).filter(UserGroup.name == group_name,
231 UserMeta.name == 'key').one()
232
233 self.assertTrue([result[k] for k in (UserMeta.name, UserMeta.value, UserMeta.units)] == triple)
234
235 finally:
236 if group:
237 group.remove()
238 helpers.remove_unused_metadata(self.sess)
239
240 def test_user_metadata(self):
241 user_name = "testuser"
242 user = None
243
244 try:
245 user = self.sess.users.create(user_name, 'rodsuser')
246
247 # metadata collection is the right type?
248 self.assertIsInstance(user.metadata, iRODSMetaCollection)
249
250 # add three AVUs, two having the same key
251 user.metadata['key0'] = iRODSMeta('key0', 'value', 'units')
252 sorted_triples = sorted( [ ['key1', 'value0', 'units0'],
253 ['key1', 'value1', 'units1'] ] )
254 for m in sorted_triples:
255 user.metadata.add(iRODSMeta(*m))
256
257 # general query gives the right results?
258 result_0 = self.sess.query(UserMeta, User)\
259 .filter( User.name == user_name, UserMeta.name == 'key0').one()
260
261 self.assertTrue( [result_0[k] for k in (UserMeta.name, UserMeta.value, UserMeta.units)]
262 == ['key0', 'value', 'units'] )
263
264 results_1 = self.sess.query(UserMeta, User)\
265 .filter(User.name == user_name, UserMeta.name == 'key1')
266
267 retrieved_triples = [ [ res[k] for k in (UserMeta.name, UserMeta.value, UserMeta.units) ]
268 for res in results_1
269 ]
270
271 self.assertTrue( sorted_triples == sorted(retrieved_triples))
272
273 finally:
274 if user:
275 user.remove()
276 helpers.remove_unused_metadata(self.sess)
277
278 def test_get_user_metadata(self):
279 user_name = "testuser"
280 user = None
281
282 try:
283 # create user
284 user = self.sess.users.create(user_name, 'rodsuser')
285 meta = user.metadata.get_all('key')
286
287 # There should be no metadata
288 self.assertEqual(len(meta), 0)
289 finally:
290 if user: user.remove()
291
292 def test_add_user_metadata(self):
293 user_name = "testuser"
294 user = None
295
296 try:
297 # create user
298 user = self.sess.users.create(user_name, 'rodsuser')
299
300 user.metadata.add('key0', 'value0')
301 user.metadata.add('key1', 'value1', 'unit1')
302 user.metadata.add('key2', 'value2a', 'unit2')
303 user.metadata.add('key2', 'value2b', 'unit2')
304
305 meta0 = user.metadata.get_all('key0')
306 self.assertEqual(len(meta0),1)
307 self.assertEqual(meta0[0].name, 'key0')
308 self.assertEqual(meta0[0].value, 'value0')
309
310 meta1 = user.metadata.get_all('key1')
311 self.assertEqual(len(meta1),1)
312 self.assertEqual(meta1[0].name, 'key1')
313 self.assertEqual(meta1[0].value, 'value1')
314 self.assertEqual(meta1[0].units, 'unit1')
315
316 meta2 = sorted(user.metadata.get_all('key2'), key = lambda AVU : AVU.value)
317 self.assertEqual(len(meta2),2)
318 self.assertEqual(meta2[0].name, 'key2')
319 self.assertEqual(meta2[0].value, 'value2a')
320 self.assertEqual(meta2[0].units, 'unit2')
321 self.assertEqual(meta2[1].name, 'key2')
322 self.assertEqual(meta2[1].value, 'value2b')
323 self.assertEqual(meta2[1].units, 'unit2')
324
325 user.metadata.remove('key1', 'value1', 'unit1')
326 metadata = user.metadata.items()
327 self.assertEqual(len(metadata), 3)
328
329 user.metadata.remove('key2', 'value2a', 'unit2')
330 metadata = user.metadata.items()
331 self.assertEqual(len(metadata), 2)
332
333 finally:
334 if user:
335 user.remove()
336 helpers.remove_unused_metadata(self.sess)
119337
120338 if __name__ == '__main__':
121339 # let the tests find the parent irods lib
0 #! /usr/bin/env python
1 from __future__ import absolute_import
2 import os
3 import sys
4 import unittest
5
6 from irods.models import User,Collection
7 from irods.access import iRODSAccess
8 from irods.collection import iRODSCollection
9 from irods.exception import CollectionDoesNotExist
10 import irods.test.helpers as helpers
11
12 class TestRemoteZone(unittest.TestCase):
13
14 def setUp(self):
15 self.sess = helpers.make_session()
16
17 def tearDown(self):
18 """Close connections."""
19 self.sess.cleanup()
20
21 # This test should pass whether or not federation is configured:
22 def test_create_other_zone_user_227_228(self):
23 usercolls = []
24 session = self.sess
25 A_ZONE_NAME = 'otherZone'
26 A_ZONE_USER = 'alice'
27 try:
28 zoneB = session.zones.create(A_ZONE_NAME,'remote')
29 zBuser = session.users.create(A_ZONE_USER,'rodsuser', A_ZONE_NAME, '')
30 usercolls = [ iRODSCollection(session.collections, result) for result in
31 session.query(Collection).filter(Collection.owner_name == zBuser.name and
32 Collection.owner_zone == zBuser.zone) ]
33 self.assertEqual ([(u[User.name],u[User.zone]) for u in session.query(User).filter(User.zone == A_ZONE_NAME)],
34 [(A_ZONE_USER,A_ZONE_NAME)])
35 zBuser.remove()
36 zoneB.remove()
37 finally:
38 for p in usercolls:
39 try:
40 session.collections.get( p.path )
41 except CollectionDoesNotExist:
42 continue
43 perm = iRODSAccess( 'own', p.path, session.username, session.zone)
44 session.permissions.set( perm, admin=True)
45 p.remove(force=True)
46
47
48 if __name__ == '__main__':
49 # let the tests find the parent irods lib
50 sys.path.insert(0, os.path.abspath('../..'))
51 unittest.main()
00 from __future__ import absolute_import
1
2 from irods.api_number import api_number
3 from irods.message import iRODSMessage, TicketAdminRequest
4 from irods.models import TicketQuery
5
16 import random
27 import string
3 from irods.api_number import api_number
4 from irods.message import (
5 iRODSMessage, TicketAdminRequest)
8 import logging
9 import datetime
10 import calendar
611
7 import logging
812
913 logger = logging.getLogger(__name__)
1014
1115
16 def get_epoch_seconds (utc_timestamp):
17 epoch = None
18 try:
19 epoch = int(utc_timestamp)
20 except ValueError:
21 pass
22 if epoch is not None:
23 return epoch
24 HUMAN_READABLE_DATE = '%Y-%m-%d.%H:%M:%S'
25 try:
26 x = datetime.datetime.strptime(utc_timestamp,HUMAN_READABLE_DATE)
27 return calendar.timegm( x.timetuple() )
28 except ValueError:
29 raise # final try at conversion, so a failure is an error
30
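# A brief illustration of the two timestamp forms accepted above (values are examples only):
#
#   get_epoch_seconds('1640995200')           # already epoch seconds   -> 1640995200
#   get_epoch_seconds('2022-01-01.00:00:00')  # human-readable UTC time -> 1640995200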
31
1232 class Ticket(object):
13 def __init__(self, session, ticket=None):
33 def __init__(self, session, ticket = '', result = None, allow_punctuation = False):
1434 self._session = session
15 self._ticket = ticket if ticket else self.generate()
35 try:
36 if result is not None: ticket = result[TicketQuery.Ticket.string]
37 except TypeError:
38 raise RuntimeError( "If specified, 'result' parameter must be a TicketQuery.Ticket search result")
39 self._ticket = ticket if ticket else self._generate(allow_punctuation = allow_punctuation)
1640
1741 @property
1842 def session(self):
2044
2145 @property
2246 def ticket(self):
47 """Return the unique string associated with the ticket object."""
2348 return self._ticket
2449
50 # Provide 'string' property such that self.string is a synonym for self.ticket
51 string = ticket
2552
26 def generate(self, length=15):
27 return ''.join(random.SystemRandom().choice(string.ascii_letters + string.digits + string.punctuation) for _ in range(length))
53 def _generate(self, length=15, allow_punctuation = False):
54 source_characters = string.ascii_letters + string.digits
55 if allow_punctuation:
56 source_characters += string.punctuation
57 return ''.join(random.SystemRandom().choice(source_characters) for _ in range(length))
2858
29
30 def supply(self):
31 message_body = TicketAdminRequest("session", self.ticket)
59 def _api_request(self,cmd_string,*args, **opts):
60 message_body = TicketAdminRequest(cmd_string, self.ticket, *args, **opts)
3261 message = iRODSMessage("RODS_API_REQ", msg=message_body, int_info=api_number['TICKET_ADMIN_AN'])
3362
3463 with self.session.pool.get_connection() as conn:
3564 conn.send(message)
3665 response = conn.recv()
66 return self
3767
68 def issue(self,permission,target,**opt): return self._api_request("create",permission,target,**opt)
3869
39 def issue(self, permission, target):
40 message_body = TicketAdminRequest("create", self.ticket, permission, target)
41 message = iRODSMessage("RODS_API_REQ", msg=message_body, int_info=api_number['TICKET_ADMIN_AN'])
70 create = issue
4271
43 with self.session.pool.get_connection() as conn:
44 conn.send(message)
45 response = conn.recv()
72 def modify(self,*args,**opt):
73 arglist = list(args)
74 if arglist[0].lower().startswith('expir'):
75 arglist[1] = str(get_epoch_seconds(utc_timestamp = arglist[1]))
76 return self._api_request("mod",*arglist,**opt)
77
78 def supply(self,**opt):
79 object_ = self._api_request("session",**opt)
80 self.session.ticket__ = self._ticket
81 return object_
82
83 def delete(self,**opt):
84 """
85 Delete the iRODS ticket.
86
87 This applies to a Ticket object on which issue() has been called or, as the case may
88 be, to a Ticket initialized with a ticket string already existing in the object catalog.
89 The deleted object is returned, but may not be used further except for local purposes
90 such as extracting the string. E.g.
91
92 for t in tickets:
93 print(t.delete().string, "being deleted")
94
95 """
96 return self._api_request("delete",**opt)
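# A minimal usage sketch for the class above ('session' is assumed to be an authenticated
# iRODSSession and 'path' an existing collection or data object):
#
#   t = Ticket(session)                        # a random alphanumeric ticket string is generated
#   t.issue('read', path)                      # or t.create(...); 'write' is also accepted
#   t.modify('expire', '2030-01-01.00:00:00')  # epoch seconds are accepted here as well
#   Ticket(session, t.string).supply()         # attach the ticket string to a session
#   t.delete()                                 # remove the ticket from the catalog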
00 from __future__ import absolute_import
11 from irods.models import User, UserGroup, UserAuth
2 from irods.meta import iRODSMetaCollection
23 from irods.exception import NoResultFound
34
5 _Not_Defined = ()
46
57 class iRODSUser(object):
68
1113 self.name = result[User.name]
1214 self.type = result[User.type]
1315 self.zone = result[User.zone]
16         self._comment = result.get(User.comment, _Not_Defined) # these are not needed in results for object identification,
17         self._info = result.get(User.info, _Not_Defined)       # so we fetch them lazily via a property
1418 self._meta = None
19
20 @property
21 def comment(self):
22 if self._comment == _Not_Defined:
23 query = self.manager.sess.query(User.id,User.comment).filter(User.id == self.id)
24 self._comment = query.one()[User.comment]
25 return self._comment
26
27 @property
28 def info(self):
29 if self._info == _Not_Defined:
30 query = self.manager.sess.query(User.id,User.info).filter(User.id == self.id)
31 self._info = query.one()[User.info]
32 return self._info
1533
1634 @property
1735 def dn(self):
1836 query = self.manager.sess.query(UserAuth.user_dn).filter(UserAuth.user_id == self.id)
1937 return [res[UserAuth.user_dn] for res in query]
38
39 @property
40 def metadata(self):
41 if not self._meta:
42 self._meta = iRODSMetaCollection(
43 self.manager.sess.metadata, User, self.name)
44 return self._meta
45
46 def modify_password(self, old_value, new_value, modify_irods_authentication_file = False):
47 self.manager.modify_password(old_value,
48 new_value,
49 modify_irods_authentication_file = modify_irods_authentication_file)
2050
2151 def modify(self, *args, **kwargs):
2252 self.manager.modify(self.name, *args, **kwargs)
2656
2757 def remove(self):
2858 self.manager.remove(self.name, self.zone)
59
60 def temp_password(self):
61 return self.manager.temp_password_for_user(self.name)
2962
3063
3164 class iRODSUserGroup(object):
4780 def members(self):
4881 return self.manager.getmembers(self.name)
4982
83 @property
84 def metadata(self):
85 if not self._meta:
86 self._meta = iRODSMetaCollection(
87 self.manager.sess.metadata, User, self.name)
88 return self._meta
89
5090 def addmember(self, user_name, user_zone=""):
5191 self.manager.addmember(self.name, user_name, user_zone)
5292
0 __version__ = '0.8.1'
0 __version__ = '1.1.5'
0 from __future__ import absolute_import
1 from irods.models import Zone
2
3
4 class iRODSZone(object):
5
6 def __init__(self, manager, result=None):
7 """Construct an iRODSZone object."""
8 self.manager = manager
9 if result:
10 self.id = result[Zone.id]
11 self.name = result[Zone.name]
12 self.type = result[Zone.type]
13
14 def remove(self):
15 self.manager.remove(self.name)
16
17 def __repr__(self):
18 """Render a user-friendly string representation for the iRODSZone object."""
19 return "<iRODSZone {id} {name} {type}>".format(**vars(self))
20
0 import json
1 import sys
2
3 def run (CI):
4
5 final_config = CI.store_config(
6 {
7 "yaml_substitutions": { # -> written to ".env"
8 "client_python_version" : "3",
9 "client_os_generic": "ubuntu",
10 "client_os_image": "ubuntu:18.04",
11 "python_rule_engine_installed": "y"
12 },
13 "container_environments": {
14 "client-runner" : { # -> written to "client-runner.env"
15 "TESTS_TO_RUN": "" # run test subset, e.g. "irods.test.data_obj_test"
16 }
17
18 }
19 }
20 )
21
22 print ('----------\nconfig after CI modify pass\n----------',file=sys.stderr)
23 print(json.dumps(final_config,indent=4),file=sys.stderr)
24
25 return CI.run_and_wait_on_client_exit ()
0 #!/bin/bash
1
2 set -o pipefail
3 cd repo/irods/test
4
5 export PYTHONUNBUFFERED="Y"
6
7 if [ -z "${TESTS_TO_RUN}" ] ; then
8 python"${PY_N}" runner.py 2>&1 | tee "${LOG_OUTPUT_DIR}"/prc_test_logs.txt
9 else
10 python"${PY_N}" -m unittest -v ${TESTS_TO_RUN} 2>&1 | tee "${LOG_OUTPUT_DIR}"/prc_test_logs.txt
11 fi
12
2020 author_email='support@irods.org',
2121 description='A python API for iRODS',
2222 long_description=long_description,
23 long_description_content_type='text/x-rst',
2324 license='BSD',
2425 url='https://github.com/irods/python-irodsclient',
2526 keywords='irods',
3738 install_requires=[
3839 'six>=1.10.0',
3940 'PrettyTable>=0.7.2',
40 'xmlrunner>=1.7.7'
41 ]
41 'defusedxml',
42 # - the new syntax:
43 #'futures; python_version == "2.7"'
44 ],
45 # - the old syntax:
46 extras_require={ ':python_version == "2.7"': ['futures'],
47 'tests': ['unittest-xml-reporting'] # for xmlrunner
48 }
4249 )