Codebase: bundlewrap / be4cb9e
Imported Upstream version 2.12.2 (SVN-Git migration)
140 changed files with 18130 additions and 0 deletions.
/dist/
/docs/build/
/tests/.coveralls.yml
language: python
python:
  - 2.7
  - 3.3
  - 3.4
  - 3.5
  - 3.6
install:
  - pip install .
before_script:
  - ssh-keygen -f ~/.ssh/id_rsa -N ""
  - cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
  - ssh -o StrictHostKeyChecking=no localhost id
script:
  - py.test tests
notifications:
  irc:
    channels:
      - "irc.freenode.org#bundlewrap"
    use_notice: true
    skip_join: true
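The `before_script` steps above provision passwordless SSH to localhost so the test suite can exercise real SSH connections. A rough local equivalent of that CI fragment (a sketch assuming OpenSSH on a Unix machine, not an official setup script from this repository) might look like:

```shell
#!/bin/sh
# Sketch: replicate the CI's passwordless-SSH-to-localhost setup.
set -e

# Generate a key without a passphrase (-N "") unless one already exists.
[ -f ~/.ssh/id_rsa ] || ssh-keygen -f ~/.ssh/id_rsa -N ""

# Authorize that key for logins to this same machine.
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

# Accept the host key on first contact, then verify the loopback login works.
ssh -o StrictHostKeyChecking=no localhost id
```

Note that `StrictHostKeyChecking=no` is acceptable for a throwaway CI machine but not a practice to carry over to production hosts.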
# By adding your name to this file you agree to the Copyright Assignment
# Agreement found in the CAA.md file in this repository.

Torsten Rehn <torsten@rehn.email>
Peter Hofmann <scm@uninformativ.de>
Tim Buchwaldt <tim@buchwaldt.ws>
Rico Ullmann <rico@erinnerungsfragmente.de>
# BundleWrap Individual Contributor Copyright Assignment Agreement

Thank you for your interest in contributing to the BundleWrap open-source project, currently owned and represented by [Torsten Rehn](mailto:torsten@rehn.email) ("We" or "Us").

This contributor agreement ("Agreement") documents the rights granted by contributors to Us. To make this document effective, please sign it and send it to Us by email or electronic submission, following the instructions at [http://docs.bundlewrap.org/misc/contributing](http://docs.bundlewrap.org/misc/contributing). This is a legally binding document, so please read it carefully before agreeing to it. The Agreement may cover more than one software project managed by Us.

## 1. Definitions

"You" means the individual who Submits a Contribution to Us.

"Contribution" means any work of authorship that is Submitted by You to Us in which You own or assert ownership of the Copyright. If You do not own the Copyright in the entire work of authorship, please follow the instructions in [http://docs.bundlewrap.org/misc/contributing](http://docs.bundlewrap.org/misc/contributing).

"Copyright" means all rights protecting works of authorship owned or controlled by You, including copyright, moral and neighboring rights, as appropriate, for the full term of their existence including any extensions by You.

"Material" means the work of authorship which is made available by Us to third parties. When this Agreement covers more than one software project, the Material means the work of authorship to which the Contribution was Submitted. After You Submit the Contribution, it may be included in the Material.

"Submit" means any form of electronic, verbal, or written communication sent to Us or our representatives, including but not limited to electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, Us for the purpose of discussing and improving the Material, but excluding communication that is conspicuously marked or otherwise designated in writing by You as "Not a Contribution."

"Submission Date" means the date on which You Submit a Contribution to Us.

"Effective Date" means the date You execute this Agreement or the date You first Submit a Contribution to Us, whichever is earlier.

## 2. Grant of Rights

### 2.1 Copyright Assignment

1) At the time the Contribution is Submitted, You assign to Us all right, title, and interest worldwide in all Copyright covering the Contribution; provided that this transfer is conditioned upon compliance with Section 2.3.

2) To the extent that any of the rights in Section 2.1.1 cannot be assigned by You to Us, You grant to Us a perpetual, worldwide, exclusive, royalty-free, transferable, irrevocable license under such non-assigned rights, with rights to sublicense through multiple tiers of sublicensees, to practice such non-assigned rights, including, but not limited to, the right to reproduce, modify, display, perform and distribute the Contribution; provided that this license is conditioned upon compliance with Section 2.3.

3) To the extent that any of the rights in Section 2.1.1 can neither be assigned nor licensed by You to Us, You irrevocably waive and agree never to assert such rights against Us, any of our successors in interest, or any of our licensees, either direct or indirect; provided that this agreement not to assert is conditioned upon compliance with Section 2.3.

4) Upon such transfer of rights to Us, to the maximum extent possible, We immediately grant to You a perpetual, worldwide, non-exclusive, royalty-free, transferable, irrevocable license under such rights covering the Contribution, with rights to sublicense through multiple tiers of sublicensees, to reproduce, modify, display, perform, and distribute the Contribution. The intention of the parties is that this license will be as broad as possible and to provide You with rights as similar as possible to the owner of the rights that You transferred. This license back is limited to the Contribution and does not provide any rights to the Material.

### 2.2 Patent License

For patent claims including, without limitation, method, process, and apparatus claims which You own, control or have the right to grant, now or in the future, You grant to Us a perpetual, worldwide, non-exclusive, transferable, royalty-free, irrevocable patent license, with the right to sublicense these rights to multiple tiers of sublicensees, to make, have made, use, sell, offer for sale, import and otherwise transfer the Contribution and the Contribution in combination with the Material (and portions of such combination). This license is granted only to the extent that the exercise of the licensed rights infringes such patent claims; and provided that this license is conditioned upon compliance with Section 2.3.

### 2.3 Outbound License

As a condition on the grant of rights in Sections 2.1 and 2.2, We agree to license the Contribution only under the terms of the license or licenses which We are using on the Submission Date for the Material (including any rights to adopt any future version of a license if permitted).

### 2.4 Moral Rights

If moral rights apply to the Contribution, to the maximum extent permitted by law, You waive and agree not to assert such moral rights against Us or our successors in interest, or any of our licensees, either direct or indirect.

### 2.5 Our Rights

You acknowledge that We are not obligated to use Your Contribution as part of the Material and may decide to include any Contribution We consider appropriate.

### 2.6 Reservation of Rights

Any rights not expressly assigned or licensed under this section are expressly reserved by You.

## 3. Agreement

You confirm that:

1) You have the legal authority to enter into this Agreement.

2) You own the Copyright and patent claims covering the Contribution which are required to grant the rights under Section 2.

3) The grant of rights under Section 2 does not violate any grant of rights which You have made to third parties, including Your employer. If You are an employee, You have had Your employer approve this Agreement or sign the Entity version of this document. If You are less than eighteen years old, please have Your parents or guardian sign the Agreement.

4) You have followed the instructions in [http://docs.bundlewrap.org/misc/contributing](http://docs.bundlewrap.org/misc/contributing), if You do not own the Copyright in the entire work of authorship Submitted.

## 4. Disclaimer

EXCEPT FOR THE EXPRESS WARRANTIES IN SECTION 3, THE CONTRIBUTION IS PROVIDED "AS IS". MORE PARTICULARLY, ALL EXPRESS OR IMPLIED WARRANTIES INCLUDING, WITHOUT LIMITATION, ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT ARE EXPRESSLY DISCLAIMED BY YOU TO US AND BY US TO YOU. TO THE EXTENT THAT ANY SUCH WARRANTIES CANNOT BE DISCLAIMED, SUCH WARRANTY IS LIMITED IN DURATION TO THE MINIMUM PERIOD PERMITTED BY LAW.

## 5. Consequential Damage Waiver

TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, IN NO EVENT WILL YOU OR US BE LIABLE FOR ANY LOSS OF PROFITS, LOSS OF ANTICIPATED SAVINGS, LOSS OF DATA, INDIRECT, SPECIAL, INCIDENTAL, CONSEQUENTIAL AND EXEMPLARY DAMAGES ARISING OUT OF THIS AGREEMENT REGARDLESS OF THE LEGAL OR EQUITABLE THEORY (CONTRACT, TORT OR OTHERWISE) UPON WHICH THE CLAIM IS BASED.

## 6. Miscellaneous

### 6.1

This Agreement will be governed by and construed in accordance with the laws of Germany excluding its conflicts of law provisions. Under certain circumstances, the governing law in this section might be superseded by the United Nations Convention on Contracts for the International Sale of Goods ("UN Convention") and the parties intend to avoid the application of the UN Convention to this Agreement and, thus, exclude the application of the UN Convention in its entirety to this Agreement.

### 6.2

This Agreement sets out the entire agreement between You and Us for Your Contributions to Us and overrides all other agreements or understandings.

### 6.3

If You or We assign the rights or obligations received through this Agreement to a third party, as a condition of the assignment, that third party must agree in writing to abide by all the rights and obligations in the Agreement.

### 6.4

The failure of either party to require performance by the other party of any provision of this Agreement in one situation shall not affect the right of a party to require such performance at any time in the future. A waiver of performance under a provision in one situation shall not be considered a waiver of the performance of the provision in the future or a waiver of the provision in its entirety.

### 6.5

If any provision of this Agreement is found void and unenforceable, such provision will be replaced to the extent possible with a provision that comes closest to the meaning of the original provision and which is enforceable. The terms and conditions set forth in this Agreement shall apply notwithstanding any failure of essential purpose of this Agreement or any limited remedy to the maximum extent possible under law.
# 2.12.2

2016-12-23

* added support for Python 3.6
* changed diff line length limit from 128 to 1024 characters
* fixed deadlock in Group.members_remove
* fixed unknown subgroups not being detected properly


# 2.12.1

2016-12-20

* fixed exception when changing owner of postgres databases
* fixed postgres roles requiring a password even when deleted
* fixed incorrect exit codes in some situations with `bw test`


# 2.12.0

2016-11-28

* added `BW_DEBUG_LOG_DIR`
* improved reporting of action failures
* fixed `bw plot groups` and `bw plot groups-for-node`
* fixed access to partial metadata in `Group.members_add` and `_remove`


# 2.11.0

2016-11-14

* added `bw nodes --inline`
* added `Group.members_add` and `.members_remove`
* fixed symlinks not overwriting other path types
* fixed `precedes` and `triggers` for bundle, tag and type items
* fixed diffs for sets and tuples


# 2.10.0

2016-11-03

* added pkg_dnf items
* added rudimentary string operations on Faults
* added Fault documentation
* added `bw test --config-determinism` and `--metadata-determinism`
* improved debugging facilities for metadata processor loops
* improved handling and reporting of missing Faults


# 2.9.1

2016-10-18

* fixed `bw verify` without `-S`
* fixed asking for changes to directory items


# 2.9.0

2016-10-17

* added directory purging
* added `bw --adhoc-nodes`
* improved handling of unknown nodes/groups
* improvements to `bw nodes`


# 2.8.0

2016-09-12

* added `BW_HARDLOCK_EXPIRY` env var
* added `bw hash --group`
* added `subgroup_patterns`
* added `bw test --ignore-missing-faults`
* added `node.cmd_wrapper_inner` and `_outer`
* added `node.os_version`
* fixed exception handling under Python 2
* fixed partial metadata not being completed in some cases


# 2.7.1

2016-07-15

* improved responsiveness to SIGINT during metadata generation
* fixed SIGINT handling on Python 2.7


# 2.7.0

2016-07-15

* `bw lock show` can now show entire groups
* `bw` can now be invoked from any subdirectory of a repository
* added `bw hash --metadata`
* added `bw nodes --attrs`
* added `repo.vault.format`
* added graceful handling of SIGINT
* added log level indicator to debug output
* added `node.dummy` attribute
* added `BW_SSH_ARGS` environment variable
* `bash` is no longer required on nodes
* `node.os` and `node.use_shadow_passwords` can now be set at the group level
* sets are now allowed in metadata
* optimized execution of metadata processors
* fixed `bw apply --force` with unlocked nodes
* fixed `bw test` not detecting merge of lists in unrelated groups' metadata
* fixed installation of some pkg_openbsd
* fixed piping into `bw apply -i`
* fixed handling user names with non-ASCII characters
* fixed skipped and failed items sometimes being handled incorrectly
* fixed error with autoskipped triggered items
* fixed skip reason for some soft locked items


# 2.6.1

2016-05-29

* fixed accidentally changed default salt for user items


# 2.6.0

2016-05-29

* added support for OpenBSD packages and services
* added soft locking mechanism
* added `enabled` option for `svc_systemd`
* fixed running compound commands


# 2.5.2

2016-05-04

* fixed compatibility with some exotic node shells
* fixed quitting at question prompts
* fixed creating files with content_type 'any'


# 2.5.1

2016-04-07

* fixed false positive on metadata collision check


# 2.5.0

2016-04-04

* improved performance and memory usage
* added metadata conflict detection to `bw test`
* added metadata type validation
* added `BW_VAULT_DUMMY_MODE`
* added q(uit) option to questions
* output disabled by default when using as a library
* fixed `bw hash -d`
* fixed excessive numbers of open files
* fixed partial metadata access from metadata processors


# 2.4.0

2016-03-20

* added `bw plot group`
* added `bw plot groups-for-node`
* `bw` will now check requirements.txt in your repo before doing anything
* improved output of `--help`
* metadata processors now have access to partial node metadata while it is being compiled
* fixed `bw test` when using more than the default number of node workers
* fixed passing Faults to `postgres_role` and `users`
* fixed detection of non-existent paths on CentOS and others


# 2.3.1

2016-03-15

* fixed handling of 'generate' keys for `repo.vault`


# 2.3.0

2016-03-15

* added `repo.vault` for handling secrets
* circular dependencies are now detected by `bw test`
* fixed handling of broken pipes in internal subprocesses
* fixed previous input being read when asking a question
* fixed reading non-ASCII templates on systems with ASCII locale
* `bw apply` and `bw verify` now exit with return code 1 if there are errors


# 2.2.0

2016-03-02

* added item tagging
* added `bw apply --skip`
* fixed newline warning on long diff files
* fixed calling `bw` without arguments


# 2.1.0

2016-02-25

* added `bw stats`
* added `bw items --file-preview`
* added hooks for `bw test`
* reason for skipping an item is now displayed in regular output
* fixed exception handling for invalid cdicts/sdicts
* fixed handling of SSH errors
* fixed broken diffs caused by partial file downloads
* fixed interactive prompts sometimes not reading input correctly


# 2.0.1

2016-02-22

* fixed display of failed actions
* updated display of interactive lock override prompt
* improved robustness of internal output subsystem


# 2.0.0

2016-02-22

* added support for Python 3.3+
* switched from Fabric/Paramiko to OpenSSH
* removed SSH and sudo passwords **(BACKWARDS INCOMPATIBLE)**
* metadata is now merged recursively **(BACKWARDS INCOMPATIBLE)**
* file items: the source attribute now has a default **(BACKWARDS INCOMPATIBLE)**
* file items: the default content_type is now text **(BACKWARDS INCOMPATIBLE)**
* reworked command line options for `bw verify` **(BACKWARDS INCOMPATIBLE)**
* `cascade_skip` now defaults to `False` if the item is triggered or uses `unless` **(BACKWARDS INCOMPATIBLE)**
* `bw verify` and `bw apply` now show incorrect/fixed/failed attributes
* `bw apply` now uses a status line to show current activity
* generally improved output formatting

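The recursive metadata merge introduced in 2.0.0 means nested dictionaries from different metadata sources are combined key by key instead of one dict wholesale replacing the other. A minimal illustration of the idea (generic Python, not BundleWrap's actual merge code, which additionally applies its own precedence and conflict rules):

```python
def merge_dict(base, update):
    # Recursively merge 'update' into a copy of 'base':
    # nested dicts are merged key by key, all other values are replaced.
    result = dict(base)
    for key, value in update.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge_dict(result[key], value)
        else:
            result[key] = value
    return result

# Hypothetical example: group metadata refined by node metadata.
group_metadata = {"nginx": {"workers": 4, "port": 80}}
node_metadata = {"nginx": {"port": 443}}

print(merge_dict(group_metadata, node_metadata))
# → {'nginx': {'workers': 4, 'port': 443}}
```

With a non-recursive merge, the node's `{"nginx": {"port": 443}}` would have replaced the whole `nginx` dict and silently dropped `workers`, which is exactly the pre-2.0.0 behavior the change marked as backwards incompatible.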

# 1.6.0

2016-02-22

* added `bw migrate` **(will be removed in 2.0.0)**
* added warnings for upgrading to 2.0.0 **(will be removed in 2.0.0)**


# 1.5.1

2015-06-11

* clean up local lock files
* fixed detection of some types of directories
* fixed exception spam when trying to load internal attributes as libs


# 1.5.0

2015-05-10

* added postgres_db and postgres_role items
* added `bw verify --only-needs-fixing`
* added `bw verify --summary`
* added `Repository.nodes_in_group()`
* added `verify_with` attribute for file items
* libs now have access to `repo_path`
* user items: fixed asking for password hash change
* file items: fixed `bw items -w` with `content_type: 'any'`
* improved various error messages


# 1.4.0

2015-03-02

* added virtualenv support for pkg_pip
* added reverse syntax for triggers and preceded_by
* lots of fixes and internal improvements around preceded_by


# 1.3.0

2014-12-31

* added pkg_pip items
* added pkg_yum items
* added pkg_zypper items
* added preceded_by item attribute
* fixed detection of non-existing files on CentOS/RHEL
* fixed detection of special files on Arch Linux
* fixed handling UTF-8 output of failed commands


# 1.2.2

2014-10-27

* fixed item classes not being restored after repo serialization


# 1.2.1

2014-10-21

* fixed a critical bug in bundle serialization


# 1.2.0

2014-10-19

* added item generators
* added `bw test --plugin-conflict-error`
* added `bw debug -c`
* improved unicode handling
* fixed logging issues


# 1.1.0

2014-08-11

* added metadata processors
* added `bw metadata`
* added `bw apply --profiling`
* added Repository.nodes_in_all_groups()
* added Repository.nodes_in_any_group()
* added the data subdirectory
* improved various error messages


# 1.0.0

2014-07-19

* API will now remain stable until 2.0.0
* added hooks for actions
* added support for Jinja2 templates
* fixed some CLI commands not terminating correctly


# 0.14.0

2014-07-13

* files, directories and symlinks don't care about ownership and mode by default **(BACKWARDS INCOMPATIBLE)**
* Mako file templates can now use include


# 0.13.0

2014-06-19

* added password-based SSH/sudo authentication
* fixed symlink items not checking existing link targets
* fixed exception when triggering skipped items
* output is now prefixed with `node:bundle:item_type:item_name`
* `bw repo debug` is now a top-level command **(BACKWARDS INCOMPATIBLE)**
* `bw repo plot` is now a top-level command **(BACKWARDS INCOMPATIBLE)**
* `bw repo test` is now a top-level command **(BACKWARDS INCOMPATIBLE)**


# 0.12.0

2014-05-11

* added plugins
* added group metadata
* user and group attributes are now optional
* user groups may no longer contain primary group **(BACKWARDS INCOMPATIBLE)**
* improvements to logging and output
* fixed a critical bug preventing per-node customization of bundles
* fixed pkg_apt choking on interactive dpkg prompts
* fixed hashing of plaintext user passwords without salt


# 0.11.2

2014-04-02

* packaging fixes only


# 0.11.1

2014-04-02

* packaging fixes only


# 0.11.0

2014-03-23

* renamed builtin item attribute 'depends' to 'needs' **(BACKWARDS INCOMPATIBLE)**
* removed PARALLEL_APPLY on custom items in favor of BLOCK_CONCURRENT **(BACKWARDS INCOMPATIBLE)**
* added builtin item attribute 'needed_by'
* added canned actions for services
* added deletion of files, groups and users
* simplified output of `bw apply`
* `bw repo test` now also verifies dependencies
* fixed `bw repo test` for files without a template
* fixed triggered actions being run every time
* various fixes and improvements around dependency handling


# 0.10.0

2014-03-08

* removed the 'timing' attribute on actions **(BACKWARDS INCOMPATIBLE)**
* actions are now first-class items
* items can now trigger each other (most useful with actions)
* added System V service item
* added `bw repo test`
* added negated bundle and group selectors to CLI
* can now manage files while ignoring their content
* more control over how actions are run in interactive mode
* bundles can now be assigned to nodes directly
* fixed creating symlinks in nonexistent unmanaged directories


# 0.9.0

2014-02-24

* added 'unless' for actions
* improved exception handling
* fixed actions not triggering in noninteractive mode
* fixed noninteractive installation of Debian packages
* slightly more verbose output


# 0.8.0

2014-02-21

* move from Alpha into Beta stage
* added builtin item attribute 'unless'
* added lightweight git/hg/bzr integration
* added -f switch to `bw apply`
* template context can now be customized
* added Node.has_bundle, .in_group etc.
* fixed a LineBuffer bug
* prevented output of some extraneous whitespace


# 0.7.0

2014-02-16

* added safety checks to prevent diffs of unwieldy files
* added a "text" content type for files
* added support for arbitrary encodings in managed files
* added systemd and Upstart service items
* added hooks
* added action triggers (for service restarts after config changes)
* lots of new documentation
* better error messages when defining duplicate items
* better dependencies between files, directories and symlinks
* fixed a bug that prevented managing /etc/sudoers


# 0.6.0

2014-01-01

* added actions
* reworked group patterns **(BACKWARDS INCOMPATIBLE)**
* reworked output verbosity **(BACKWARDS INCOMPATIBLE)**
* added support for libs directory
* fixed high CPU load while waiting for interactive response
* various other minor fixes and improvements


# 0.5.0

2013-11-09

* manage users and groups
* manage symlinks
* node locking
* PARALLEL_APPLY setting for items
* manage Arch Linux packages
* plot item dependencies
* encoding fixes for file handling


# 0.4.0

2013-08-25

* manage directories
* manage Debian packages
* UI improvements


# 0.3.0

2013-08-04

* basic file management
* concurrency improvements
* logging/output improvements
* use Fabric for remote operations
* lots of other small improvements


# 0.2.0

2013-07-12

* bundle management
* item APIs
* new concurrency helpers


# 0.1.0

2013-06-16

* initial release
* node and group management
* running commands on nodes
Please see [the docs on contributing](http://docs.bundlewrap.org/misc/contributing).
                    GNU GENERAL PUBLIC LICENSE
                       Version 3, 29 June 2007

 Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

                            Preamble

  The GNU General Public License is a free, copyleft license for
software and other kinds of works.

  The licenses for most software and other practical works are designed
to take away your freedom to share and change the works.  By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.  We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors.  You can apply it to
your programs, too.

  When we speak of free software, we are referring to freedom, not
price.  Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.

  To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights.  Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.

  For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received.  You must make sure that they, too, receive
or can get the source code.  And you must show them these terms so they
know their rights.

  Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.

  For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software.  For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.

  Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so.  This is fundamentally incompatible with the aim of
protecting users' freedom to change the software.  The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable.  Therefore, we
have designed this version of the GPL to prohibit the practice for those
products.  If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.

  Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary.  To prevent this, the GPL assures that
patents cannot be used to render the program non-free.

  The precise terms and conditions for copying, distribution and
modification follow.

                       TERMS AND CONDITIONS

  0. Definitions.

  "This License" refers to version 3 of the GNU General Public License.

  "Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.

  "The Program" refers to any copyrightable work licensed under this
License.  Each licensee is addressed as "you".  "Licensees" and
"recipients" may be individuals or organizations.

  To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy.  The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.

  A "covered work" means either the unmodified Program or a work based
on the Program.

  To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy.  Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.

  To "convey" a work means any kind of propagation that enables other
parties to make or receive copies.  Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.

  An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License.  If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.

  1. Source Code.

  The "source code" for a work means the preferred form of the work
for making modifications to it.  "Object code" means any non-source
form of a work.

  A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.

  The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form.  A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.

  The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities.  However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work.  For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.

  The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.

  The Corresponding Source for a work in source code form is that
same work.

  2. Basic Permissions.

  All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met.  This License explicitly affirms your unlimited
permission to run the unmodified Program.  The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work.  This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.

  You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force.  You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright.  Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.

  Conveying under any other circumstances is permitted solely under
the conditions stated below.  Sublicensing is not allowed; section 10
makes it unnecessary.

  3. Protecting Users' Legal Rights From Anti-Circumvention Law.

  No covered work shall be deemed part of an effective technological
181 measure under any applicable law fulfilling obligations under article
182 11 of the WIPO copyright treaty adopted on 20 December 1996, or
183 similar laws prohibiting or restricting circumvention of such
184 measures.
185
186 When you convey a covered work, you waive any legal power to forbid
187 circumvention of technological measures to the extent such circumvention
188 is effected by exercising rights under this License with respect to
189 the covered work, and you disclaim any intention to limit operation or
190 modification of the work as a means of enforcing, against the work's
191 users, your or third parties' legal rights to forbid circumvention of
192 technological measures.
193
194 4. Conveying Verbatim Copies.
195
196 You may convey verbatim copies of the Program's source code as you
197 receive it, in any medium, provided that you conspicuously and
198 appropriately publish on each copy an appropriate copyright notice;
199 keep intact all notices stating that this License and any
200 non-permissive terms added in accord with section 7 apply to the code;
201 keep intact all notices of the absence of any warranty; and give all
202 recipients a copy of this License along with the Program.
203
204 You may charge any price or no price for each copy that you convey,
205 and you may offer support or warranty protection for a fee.
206
207 5. Conveying Modified Source Versions.
208
209 You may convey a work based on the Program, or the modifications to
210 produce it from the Program, in the form of source code under the
211 terms of section 4, provided that you also meet all of these conditions:
212
213 a) The work must carry prominent notices stating that you modified
214 it, and giving a relevant date.
215
216 b) The work must carry prominent notices stating that it is
217 released under this License and any conditions added under section
218 7. This requirement modifies the requirement in section 4 to
219 "keep intact all notices".
220
221 c) You must license the entire work, as a whole, under this
222 License to anyone who comes into possession of a copy. This
223 License will therefore apply, along with any applicable section 7
224 additional terms, to the whole of the work, and all its parts,
225 regardless of how they are packaged. This License gives no
226 permission to license the work in any other way, but it does not
227 invalidate such permission if you have separately received it.
228
229 d) If the work has interactive user interfaces, each must display
230 Appropriate Legal Notices; however, if the Program has interactive
231 interfaces that do not display Appropriate Legal Notices, your
232 work need not make them do so.
233
234 A compilation of a covered work with other separate and independent
235 works, which are not by their nature extensions of the covered work,
236 and which are not combined with it such as to form a larger program,
237 in or on a volume of a storage or distribution medium, is called an
238 "aggregate" if the compilation and its resulting copyright are not
239 used to limit the access or legal rights of the compilation's users
240 beyond what the individual works permit. Inclusion of a covered work
241 in an aggregate does not cause this License to apply to the other
242 parts of the aggregate.
243
244 6. Conveying Non-Source Forms.
245
246 You may convey a covered work in object code form under the terms
247 of sections 4 and 5, provided that you also convey the
248 machine-readable Corresponding Source under the terms of this License,
249 in one of these ways:
250
251 a) Convey the object code in, or embodied in, a physical product
252 (including a physical distribution medium), accompanied by the
253 Corresponding Source fixed on a durable physical medium
254 customarily used for software interchange.
255
256 b) Convey the object code in, or embodied in, a physical product
257 (including a physical distribution medium), accompanied by a
258 written offer, valid for at least three years and valid for as
259 long as you offer spare parts or customer support for that product
260 model, to give anyone who possesses the object code either (1) a
261 copy of the Corresponding Source for all the software in the
262 product that is covered by this License, on a durable physical
263 medium customarily used for software interchange, for a price no
264 more than your reasonable cost of physically performing this
265 conveying of source, or (2) access to copy the
266 Corresponding Source from a network server at no charge.
267
268 c) Convey individual copies of the object code with a copy of the
269 written offer to provide the Corresponding Source. This
270 alternative is allowed only occasionally and noncommercially, and
271 only if you received the object code with such an offer, in accord
272 with subsection 6b.
273
274 d) Convey the object code by offering access from a designated
275 place (gratis or for a charge), and offer equivalent access to the
276 Corresponding Source in the same way through the same place at no
277 further charge. You need not require recipients to copy the
278 Corresponding Source along with the object code. If the place to
279 copy the object code is a network server, the Corresponding Source
280 may be on a different server (operated by you or a third party)
281 that supports equivalent copying facilities, provided you maintain
282 clear directions next to the object code saying where to find the
283 Corresponding Source. Regardless of what server hosts the
284 Corresponding Source, you remain obligated to ensure that it is
285 available for as long as needed to satisfy these requirements.
286
287 e) Convey the object code using peer-to-peer transmission, provided
288 you inform other peers where the object code and Corresponding
289 Source of the work are being offered to the general public at no
290 charge under subsection 6d.
291
292 A separable portion of the object code, whose source code is excluded
293 from the Corresponding Source as a System Library, need not be
294 included in conveying the object code work.
295
296 A "User Product" is either (1) a "consumer product", which means any
297 tangible personal property which is normally used for personal, family,
298 or household purposes, or (2) anything designed or sold for incorporation
299 into a dwelling. In determining whether a product is a consumer product,
300 doubtful cases shall be resolved in favor of coverage. For a particular
301 product received by a particular user, "normally used" refers to a
302 typical or common use of that class of product, regardless of the status
303 of the particular user or of the way in which the particular user
304 actually uses, or expects or is expected to use, the product. A product
305 is a consumer product regardless of whether the product has substantial
306 commercial, industrial or non-consumer uses, unless such uses represent
307 the only significant mode of use of the product.
308
309 "Installation Information" for a User Product means any methods,
310 procedures, authorization keys, or other information required to install
311 and execute modified versions of a covered work in that User Product from
312 a modified version of its Corresponding Source. The information must
313 suffice to ensure that the continued functioning of the modified object
314 code is in no case prevented or interfered with solely because
315 modification has been made.
316
317 If you convey an object code work under this section in, or with, or
318 specifically for use in, a User Product, and the conveying occurs as
319 part of a transaction in which the right of possession and use of the
320 User Product is transferred to the recipient in perpetuity or for a
321 fixed term (regardless of how the transaction is characterized), the
322 Corresponding Source conveyed under this section must be accompanied
323 by the Installation Information. But this requirement does not apply
324 if neither you nor any third party retains the ability to install
325 modified object code on the User Product (for example, the work has
326 been installed in ROM).
327
328 The requirement to provide Installation Information does not include a
329 requirement to continue to provide support service, warranty, or updates
330 for a work that has been modified or installed by the recipient, or for
331 the User Product in which it has been modified or installed. Access to a
332 network may be denied when the modification itself materially and
333 adversely affects the operation of the network or violates the rules and
334 protocols for communication across the network.
335
336 Corresponding Source conveyed, and Installation Information provided,
337 in accord with this section must be in a format that is publicly
338 documented (and with an implementation available to the public in
339 source code form), and must require no special password or key for
340 unpacking, reading or copying.
341
342 7. Additional Terms.
343
344 "Additional permissions" are terms that supplement the terms of this
345 License by making exceptions from one or more of its conditions.
346 Additional permissions that are applicable to the entire Program shall
347 be treated as though they were included in this License, to the extent
348 that they are valid under applicable law. If additional permissions
349 apply only to part of the Program, that part may be used separately
350 under those permissions, but the entire Program remains governed by
351 this License without regard to the additional permissions.
352
353 When you convey a copy of a covered work, you may at your option
354 remove any additional permissions from that copy, or from any part of
355 it. (Additional permissions may be written to require their own
356 removal in certain cases when you modify the work.) You may place
357 additional permissions on material, added by you to a covered work,
358 for which you have or can give appropriate copyright permission.
359
360 Notwithstanding any other provision of this License, for material you
361 add to a covered work, you may (if authorized by the copyright holders of
362 that material) supplement the terms of this License with terms:
363
364 a) Disclaiming warranty or limiting liability differently from the
365 terms of sections 15 and 16 of this License; or
366
367 b) Requiring preservation of specified reasonable legal notices or
368 author attributions in that material or in the Appropriate Legal
369 Notices displayed by works containing it; or
370
371 c) Prohibiting misrepresentation of the origin of that material, or
372 requiring that modified versions of such material be marked in
373 reasonable ways as different from the original version; or
374
375 d) Limiting the use for publicity purposes of names of licensors or
376 authors of the material; or
377
378 e) Declining to grant rights under trademark law for use of some
379 trade names, trademarks, or service marks; or
380
381 f) Requiring indemnification of licensors and authors of that
382 material by anyone who conveys the material (or modified versions of
383 it) with contractual assumptions of liability to the recipient, for
384 any liability that these contractual assumptions directly impose on
385 those licensors and authors.
386
387 All other non-permissive additional terms are considered "further
388 restrictions" within the meaning of section 10. If the Program as you
389 received it, or any part of it, contains a notice stating that it is
390 governed by this License along with a term that is a further
391 restriction, you may remove that term. If a license document contains
392 a further restriction but permits relicensing or conveying under this
393 License, you may add to a covered work material governed by the terms
394 of that license document, provided that the further restriction does
395 not survive such relicensing or conveying.
396
397 If you add terms to a covered work in accord with this section, you
398 must place, in the relevant source files, a statement of the
399 additional terms that apply to those files, or a notice indicating
400 where to find the applicable terms.
401
402 Additional terms, permissive or non-permissive, may be stated in the
403 form of a separately written license, or stated as exceptions;
404 the above requirements apply either way.
405
406 8. Termination.
407
408 You may not propagate or modify a covered work except as expressly
409 provided under this License. Any attempt otherwise to propagate or
410 modify it is void, and will automatically terminate your rights under
411 this License (including any patent licenses granted under the third
412 paragraph of section 11).
413
414 However, if you cease all violation of this License, then your
415 license from a particular copyright holder is reinstated (a)
416 provisionally, unless and until the copyright holder explicitly and
417 finally terminates your license, and (b) permanently, if the copyright
418 holder fails to notify you of the violation by some reasonable means
419 prior to 60 days after the cessation.
420
421 Moreover, your license from a particular copyright holder is
422 reinstated permanently if the copyright holder notifies you of the
423 violation by some reasonable means, this is the first time you have
424 received notice of violation of this License (for any work) from that
425 copyright holder, and you cure the violation prior to 30 days after
426 your receipt of the notice.
427
428 Termination of your rights under this section does not terminate the
429 licenses of parties who have received copies or rights from you under
430 this License. If your rights have been terminated and not permanently
431 reinstated, you do not qualify to receive new licenses for the same
432 material under section 10.
433
434 9. Acceptance Not Required for Having Copies.
435
436 You are not required to accept this License in order to receive or
437 run a copy of the Program. Ancillary propagation of a covered work
438 occurring solely as a consequence of using peer-to-peer transmission
439 to receive a copy likewise does not require acceptance. However,
440 nothing other than this License grants you permission to propagate or
441 modify any covered work. These actions infringe copyright if you do
442 not accept this License. Therefore, by modifying or propagating a
443 covered work, you indicate your acceptance of this License to do so.
444
445 10. Automatic Licensing of Downstream Recipients.
446
447 Each time you convey a covered work, the recipient automatically
448 receives a license from the original licensors, to run, modify and
449 propagate that work, subject to this License. You are not responsible
450 for enforcing compliance by third parties with this License.
451
452 An "entity transaction" is a transaction transferring control of an
453 organization, or substantially all assets of one, or subdividing an
454 organization, or merging organizations. If propagation of a covered
455 work results from an entity transaction, each party to that
456 transaction who receives a copy of the work also receives whatever
457 licenses to the work the party's predecessor in interest had or could
458 give under the previous paragraph, plus a right to possession of the
459 Corresponding Source of the work from the predecessor in interest, if
460 the predecessor has it or can get it with reasonable efforts.
461
462 You may not impose any further restrictions on the exercise of the
463 rights granted or affirmed under this License. For example, you may
464 not impose a license fee, royalty, or other charge for exercise of
465 rights granted under this License, and you may not initiate litigation
466 (including a cross-claim or counterclaim in a lawsuit) alleging that
467 any patent claim is infringed by making, using, selling, offering for
468 sale, or importing the Program or any portion of it.
469
470 11. Patents.
471
472 A "contributor" is a copyright holder who authorizes use under this
473 License of the Program or a work on which the Program is based. The
474 work thus licensed is called the contributor's "contributor version".
475
476 A contributor's "essential patent claims" are all patent claims
477 owned or controlled by the contributor, whether already acquired or
478 hereafter acquired, that would be infringed by some manner, permitted
479 by this License, of making, using, or selling its contributor version,
480 but do not include claims that would be infringed only as a
481 consequence of further modification of the contributor version. For
482 purposes of this definition, "control" includes the right to grant
483 patent sublicenses in a manner consistent with the requirements of
484 this License.
485
486 Each contributor grants you a non-exclusive, worldwide, royalty-free
487 patent license under the contributor's essential patent claims, to
488 make, use, sell, offer for sale, import and otherwise run, modify and
489 propagate the contents of its contributor version.
490
491 In the following three paragraphs, a "patent license" is any express
492 agreement or commitment, however denominated, not to enforce a patent
493 (such as an express permission to practice a patent or covenant not to
494 sue for patent infringement). To "grant" such a patent license to a
495 party means to make such an agreement or commitment not to enforce a
496 patent against the party.
497
498 If you convey a covered work, knowingly relying on a patent license,
499 and the Corresponding Source of the work is not available for anyone
500 to copy, free of charge and under the terms of this License, through a
501 publicly available network server or other readily accessible means,
502 then you must either (1) cause the Corresponding Source to be so
503 available, or (2) arrange to deprive yourself of the benefit of the
504 patent license for this particular work, or (3) arrange, in a manner
505 consistent with the requirements of this License, to extend the patent
506 license to downstream recipients. "Knowingly relying" means you have
507 actual knowledge that, but for the patent license, your conveying the
508 covered work in a country, or your recipient's use of the covered work
509 in a country, would infringe one or more identifiable patents in that
510 country that you have reason to believe are valid.
511
512 If, pursuant to or in connection with a single transaction or
513 arrangement, you convey, or propagate by procuring conveyance of, a
514 covered work, and grant a patent license to some of the parties
515 receiving the covered work authorizing them to use, propagate, modify
516 or convey a specific copy of the covered work, then the patent license
517 you grant is automatically extended to all recipients of the covered
518 work and works based on it.
519
520 A patent license is "discriminatory" if it does not include within
521 the scope of its coverage, prohibits the exercise of, or is
522 conditioned on the non-exercise of one or more of the rights that are
523 specifically granted under this License. You may not convey a covered
524 work if you are a party to an arrangement with a third party that is
525 in the business of distributing software, under which you make payment
526 to the third party based on the extent of your activity of conveying
527 the work, and under which the third party grants, to any of the
528 parties who would receive the covered work from you, a discriminatory
529 patent license (a) in connection with copies of the covered work
530 conveyed by you (or copies made from those copies), or (b) primarily
531 for and in connection with specific products or compilations that
532 contain the covered work, unless you entered into that arrangement,
533 or that patent license was granted, prior to 28 March 2007.
534
535 Nothing in this License shall be construed as excluding or limiting
536 any implied license or other defenses to infringement that may
537 otherwise be available to you under applicable patent law.
538
539 12. No Surrender of Others' Freedom.
540
541 If conditions are imposed on you (whether by court order, agreement or
542 otherwise) that contradict the conditions of this License, they do not
543 excuse you from the conditions of this License. If you cannot convey a
544 covered work so as to satisfy simultaneously your obligations under this
545 License and any other pertinent obligations, then as a consequence you may
546 not convey it at all. For example, if you agree to terms that obligate you
547 to collect a royalty for further conveying from those to whom you convey
548 the Program, the only way you could satisfy both those terms and this
549 License would be to refrain entirely from conveying the Program.
550
551 13. Use with the GNU Affero General Public License.
552
553 Notwithstanding any other provision of this License, you have
554 permission to link or combine any covered work with a work licensed
555 under version 3 of the GNU Affero General Public License into a single
556 combined work, and to convey the resulting work. The terms of this
557 License will continue to apply to the part which is the covered work,
558 but the special requirements of the GNU Affero General Public License,
559 section 13, concerning interaction through a network will apply to the
560 combination as such.
561
562 14. Revised Versions of this License.
563
564 The Free Software Foundation may publish revised and/or new versions of
565 the GNU General Public License from time to time. Such new versions will
566 be similar in spirit to the present version, but may differ in detail to
567 address new problems or concerns.
568
569 Each version is given a distinguishing version number. If the
570 Program specifies that a certain numbered version of the GNU General
571 Public License "or any later version" applies to it, you have the
572 option of following the terms and conditions either of that numbered
573 version or of any later version published by the Free Software
574 Foundation. If the Program does not specify a version number of the
575 GNU General Public License, you may choose any version ever published
576 by the Free Software Foundation.
577
578 If the Program specifies that a proxy can decide which future
579 versions of the GNU General Public License can be used, that proxy's
580 public statement of acceptance of a version permanently authorizes you
581 to choose that version for the Program.
582
583 Later license versions may give you additional or different
584 permissions. However, no additional obligations are imposed on any
585 author or copyright holder as a result of your choosing to follow a
586 later version.
587
588 15. Disclaimer of Warranty.
589
590 THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
591 APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
592 HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
593 OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
594 THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
595 PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
596 IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
597 ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
598
599 16. Limitation of Liability.
600
601 IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
602 WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
603 THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
604 GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
605 USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
606 DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
607 PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
608 EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
609 SUCH DAMAGES.
610
611 17. Interpretation of Sections 15 and 16.
612
613 If the disclaimer of warranty and limitation of liability provided
614 above cannot be given local legal effect according to their terms,
615 reviewing courts shall apply local law that most closely approximates
616 an absolute waiver of all civil liability in connection with the
617 Program, unless a warranty or assumption of liability accompanies a
618 copy of the Program in return for a fee.
619
620 END OF TERMS AND CONDITIONS
621
622 How to Apply These Terms to Your New Programs
623
624 If you develop a new program, and you want it to be of the greatest
625 possible use to the public, the best way to achieve this is to make it
626 free software which everyone can redistribute and change under these terms.
627
628 To do so, attach the following notices to the program. It is safest
629 to attach them to the start of each source file to most effectively
630 state the exclusion of warranty; and each file should have at least
631 the "copyright" line and a pointer to where the full notice is found.
632
633 <one line to give the program's name and a brief idea of what it does.>
634 Copyright (C) <year> <name of author>
635
636 This program is free software: you can redistribute it and/or modify
637 it under the terms of the GNU General Public License as published by
638 the Free Software Foundation, either version 3 of the License, or
639 (at your option) any later version.
640
641 This program is distributed in the hope that it will be useful,
642 but WITHOUT ANY WARRANTY; without even the implied warranty of
643 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
644 GNU General Public License for more details.
645
646 You should have received a copy of the GNU General Public License
647 along with this program. If not, see <http://www.gnu.org/licenses/>.
648
649 Also add information on how to contact you by electronic and paper mail.
650
651 If the program does terminal interaction, make it output a short
652 notice like this when it starts in an interactive mode:
653
654 <program> Copyright (C) <year> <name of author>
655 This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
656 This is free software, and you are welcome to redistribute it
657 under certain conditions; type `show c' for details.
658
659 The hypothetical commands `show w' and `show c' should show the appropriate
660 parts of the General Public License. Of course, your program's commands
661 might be different; for a GUI interface, you would use an "about box".
662
663 You should also get your employer (if you work as a programmer) or school,
664 if any, to sign a "copyright disclaimer" for the program, if necessary.
665 For more information on this, and how to apply and follow the GNU GPL, see
666 <http://www.gnu.org/licenses/>.
667
668 The GNU General Public License does not permit incorporating your program
669 into proprietary programs. If your program is a subroutine library, you
670 may consider it more useful to permit linking proprietary applications with
671 the library. If this is what you want to do, use the GNU Lesser General
672 Public License instead of this License. But first, please read
673 <http://www.gnu.org/philosophy/why-not-lgpl.html>.
0 include AUTHORS CHANGELOG.md LICENSE README.md
0 BundleWrap is a decentralized configuration management system that is designed to be powerful, easy to extend, and extremely versatile.
1
2 For more information, have a look at [bundlewrap.org](http://bundlewrap.org) and [docs.bundlewrap.org](http://docs.bundlewrap.org).
3
4 ------------------------------------------------------------------------
5
6 <a href="https://pypi.python.org/pypi/bundlewrap/">
7 <img src="http://img.shields.io/pypi/v/bundlewrap.svg" alt="Latest Version">
8 </a>
9 &nbsp;
10 <a href="https://travis-ci.org/bundlewrap/bundlewrap">
11 <img src="http://img.shields.io/travis/bundlewrap/bundlewrap/master.svg" alt="Build status">
12 </a>
13 &nbsp;
14 <a href="https://landscape.io/github/bundlewrap/bundlewrap/master">
15 <img src="https://landscape.io/github/bundlewrap/bundlewrap/master/landscape.svg?style=flat" alt="Code health">
16 </a>
17 &nbsp;
18 <a href="https://pypi.python.org/pypi/bundlewrap/">
19 <img src="http://img.shields.io/pypi/pyversions/bundlewrap.svg" alt="Python compatibility">
20 </a>
21
22 ------------------------------------------------------------------------
23
24 BundleWrap is © 2013 - 2016 [Torsten Rehn](mailto:torsten@rehn.email)
Binary diff not shown
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 VERSION = (2, 12, 2)
4 VERSION_STRING = ".".join([str(v) for v in VERSION])
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from os.path import exists, join
4
5 from .exceptions import NoSuchBundle, RepositoryError
6 from .utils import cached_property, get_all_attrs_from_file
7 from .utils.text import mark_for_translation as _
8 from .utils.text import validate_name
9 from .utils.ui import io
10
11
12 FILENAME_BUNDLE = "items.py"
13 FILENAME_METADATA = "metadata.py"
14
15
16 class Bundle(object):
17 """
18 A collection of config items, bound to a node.
19 """
20 def __init__(self, node, name):
21 self.name = name
22 self.node = node
23 self.repo = node.repo
24
25 if not validate_name(name):
26 raise RepositoryError(_("invalid bundle name: {}").format(name))
27
28 if name not in self.repo.bundle_names:
29 raise NoSuchBundle(_("bundle not found: {}").format(name))
30
31 self.bundle_dir = join(self.repo.bundles_dir, self.name)
32 self.bundle_data_dir = join(self.repo.data_dir, self.name)
33 self.bundle_file = join(self.bundle_dir, FILENAME_BUNDLE)
34 self.metadata_file = join(self.bundle_dir, FILENAME_METADATA)
35
36 @cached_property
37 def bundle_attrs(self):
38 if not exists(self.bundle_file):
39 return {}
40 else:
41 with io.job(_(" {node} {bundle} collecting items...").format(
42 node=self.node.name,
43 bundle=self.name,
44 )):
45 return get_all_attrs_from_file(
46 self.bundle_file,
47 base_env={
48 'node': self.node,
49 'repo': self.repo,
50 },
51 )
52
53 @cached_property
54 def items(self):
55 for item_class in self.repo.item_classes:
56 for item_name, item_attrs in self.bundle_attrs.get(
57 item_class.BUNDLE_ATTRIBUTE_NAME,
58 {},
59 ).items():
60 yield self.make_item(
61 item_class.BUNDLE_ATTRIBUTE_NAME,
62 item_name,
63 item_attrs,
64 )
65
66 def make_item(self, attribute_name, item_name, item_attrs):
67 for item_class in self.repo.item_classes:
68 if item_class.BUNDLE_ATTRIBUTE_NAME == attribute_name:
69 return item_class(self, item_name, item_attrs)
70 raise RuntimeError(
71 _("bundle '{bundle}' tried to generate item '{item}' from "
72 "unknown attribute '{attr}'").format(
73 attr=attribute_name,
74 bundle=self.name,
75 item=item_name,
76 )
77 )
78
79 @cached_property
80 def metadata_processors(self):
81 with io.job(_(" {node} {bundle} collecting metadata processors...").format(
82 node=self.node.name,
83 bundle=self.name,
84 )):
85 if not exists(self.metadata_file):
86 return []
87 result = []
88 for name, attr in get_all_attrs_from_file(
89 self.metadata_file,
90 base_env={
91 'node': self.node,
92 'repo': self.repo,
93 },
94 ).items():
95 if name.startswith("_") or not callable(attr):
96 continue
97 result.append(attr)
98 return result
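The `BUNDLE_ATTRIBUTE_NAME` dispatch performed by `Bundle.make_item` above can be sketched in isolation. Note that `FileItem` and `PkgItem` here are hypothetical stand-ins, not BundleWrap's real item classes, and `ITEM_CLASSES` stands in for `repo.item_classes`:

```python
# Minimal standalone sketch of the attribute-name dispatch in
# Bundle.make_item. Each item class declares which bundle attribute
# (e.g. a dict named "files" in items.py) it is built from.
class FileItem:
    BUNDLE_ATTRIBUTE_NAME = "files"

    def __init__(self, name, attrs):
        self.name, self.attrs = name, attrs


class PkgItem:
    BUNDLE_ATTRIBUTE_NAME = "pkg_apt"

    def __init__(self, name, attrs):
        self.name, self.attrs = name, attrs


ITEM_CLASSES = [FileItem, PkgItem]  # stand-in for repo.item_classes


def make_item(attribute_name, item_name, item_attrs):
    for item_class in ITEM_CLASSES:
        if item_class.BUNDLE_ATTRIBUTE_NAME == attribute_name:
            return item_class(item_name, item_attrs)
    raise RuntimeError("unknown attribute: {}".format(attribute_name))


item = make_item("files", "/etc/motd", {"content": "hello\n"})
print(type(item).__name__, item.name)  # → FileItem /etc/motd
```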
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from functools import wraps
4 from os import environ, getcwd
5 from os.path import dirname
6 from sys import argv, exit, stderr, stdout
7 from traceback import print_exc
8
9
10 from ..exceptions import NoSuchRepository, MissingRepoDependency
11 from ..repo import Repository
12 from ..utils.text import force_text, mark_for_translation as _, red
13 from ..utils.ui import io
14 from .parser import build_parser_bw
15
16
17 def suppress_broken_pipe_msg(f):
18 """
19 Oh boy.
20
21 CPython does funny things with SIGPIPE. By default, it is caught and
22 raised as a BrokenPipeError. When do we get a SIGPIPE? Most commonly
23 when piping into head:
24
25 bw nodes | head -n 1
26
27 head will exit after receiving the first line, causing the kernel to
28 send SIGPIPE to our process. Since in most cases we can't just quit
29 early, we simply ignore BrokenPipeError in utils.ui.write_to_stream.
30
31 Unfortunately, Python will still print a message:
32
33 Exception ignored in: <_io.TextIOWrapper name='<stdout>'
34 mode='w' encoding='UTF-8'>
35 BrokenPipeError: [Errno 32] Broken pipe
36
37 See also http://bugs.python.org/issue11380. The crazy try/finally
38 construct below is taken from there and I quote:
39
40 This will:
41 - capture any exceptions *you've* raised as the context for the
42 errors raised in this handler
43 - expose any exceptions generated during this thing itself
44 - prevent the interpreter dying during shutdown in
45 flush_std_files by closing the files (you can't easily wipe
46 out the pending writes that have failed)
47
48 CAVEAT: There is a seemingly easier method floating around on the
49 net (http://stackoverflow.com/a/16865106) that restores the default
50 behavior for SIGPIPE (i.e. not turning it into a BrokenPipeError):
51
52 from signal import signal, SIGPIPE, SIG_DFL
53 signal(SIGPIPE,SIG_DFL)
54
55 This worked fine for a while but broke when using
56 multiprocessing.Manager() to share the list of jobs in utils.ui
57 between processes. When the main process terminated, it quit with
58 return code 141 (indicating a broken pipe), and the background
59 process used for the manager continued to hang around indefinitely.
60 Bonus fun: This was observed only on Ubuntu Trusty (14.04).
61 """
62 @wraps(f)
63 def wrapper(*args, **kwargs):
64 try:
65 return f(*args, **kwargs)
66 except SystemExit:
67 raise
68 except:
69 print_exc()
70 exit(1)
71 finally:
72 try:
73 stdout.flush()
74 finally:
75 try:
76 stdout.close()
77 finally:
78 try:
79 stderr.flush()
80 finally:
81 stderr.close()
82 return wrapper
83
84
85 @suppress_broken_pipe_msg
86 def main(*args, **kwargs):
87 """
88 Entry point for the 'bw' command line utility.
89
90 The args and path parameters are used for integration tests.
91 """
92 if not args:
93 args = argv[1:]
94 path = kwargs.get('path', getcwd())
95
96 text_args = [force_text(arg) for arg in args]
97
98 parser_bw = build_parser_bw()
99 pargs = parser_bw.parse_args(args)
100 if not hasattr(pargs, 'func'):
101 parser_bw.print_help()
102 exit(2)
103
104 io.debug_mode = pargs.debug
105 io.activate()
106 io.debug(_("invocation: {}").format(" ".join(argv)))
107
108 if 'BWADDHOSTKEYS' in environ: # TODO remove in 3.0.0
109 environ.setdefault('BW_ADD_HOST_KEYS', environ['BWADDHOSTKEYS'])
110 if 'BWCOLORS' in environ: # TODO remove in 3.0.0
111 environ.setdefault('BW_COLORS', environ['BWCOLORS'])
112 if 'BWITEMWORKERS' in environ: # TODO remove in 3.0.0
113 environ.setdefault('BW_ITEM_WORKERS', environ['BWITEMWORKERS'])
114 if 'BWNODEWORKERS' in environ: # TODO remove in 3.0.0
115 environ.setdefault('BW_NODE_WORKERS', environ['BWNODEWORKERS'])
116
117 environ.setdefault('BW_ADD_HOST_KEYS', "1" if pargs.add_ssh_host_keys else "0")
118
119 if len(text_args) >= 1 and (
120 text_args[0] == "--version" or
121 (len(text_args) >= 2 and text_args[0] == "repo" and text_args[1] == "create") or
122 text_args[0] == "zen" or
123 "-h" in text_args or
124 "--help" in text_args
125 ):
126 # 'bw repo create' is a special case that only takes a path
127 repo = path
128 else:
129 while True:
130 try:
131 repo = Repository(path)
132 break
133 except NoSuchRepository:
134 if path == dirname(path):
135 io.stderr(_(
136 "{x} The current working directory "
137 "is not a BundleWrap repository."
138 ).format(x=red("!!!")))
139 exit(1)
140 else:
141 path = dirname(path)
142 except MissingRepoDependency as exc:
143 io.stderr(str(exc))
144 exit(1)
145
146 # convert all string args into text
147 text_pargs = {key: force_text(value) for key, value in vars(pargs).items()}
148
149 try:
150 pargs.func(repo, text_pargs)
151 finally:
152 io.deactivate()
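The SIGPIPE situation described in the `suppress_broken_pipe_msg` docstring above can be reproduced with a minimal POSIX-only sketch: once the reading end of a pipe is gone (as when `head` exits early), CPython turns the failed write into a BrokenPipeError rather than letting the default SIGPIPE disposition kill the process.

```python
import errno
import os

# Simulate `bw nodes | head -n 1`: the reader end of the pipe goes
# away, and the next write fails with EPIPE instead of killing us.
r, w = os.pipe()
os.close(r)  # the "reader" exits early
caught = None
try:
    os.write(w, b"data\n")
except BrokenPipeError as exc:
    caught = exc.errno
finally:
    os.close(w)

print(caught == errno.EPIPE)  # → True
```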
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from datetime import datetime
4 from sys import exit
5
6 from ..concurrency import WorkerPool
7 from ..utils.cmdline import get_target_nodes
8 from ..utils.text import bold
9 from ..utils.text import error_summary, mark_for_translation as _
10 from ..utils.ui import io
11
12
13 def bw_apply(repo, args):
14 errors = []
15 target_nodes = get_target_nodes(repo, args['target'], adhoc_nodes=args['adhoc_nodes'])
16 pending_nodes = target_nodes[:]
17
18 repo.hooks.apply_start(
19 repo,
20 args['target'],
21 target_nodes,
22 interactive=args['interactive'],
23 )
24
25 start_time = datetime.now()
26
27 def tasks_available():
28 return bool(pending_nodes)
29
30 def next_task():
31 node = pending_nodes.pop()
32 return {
33 'target': node.apply,
34 'task_id': node.name,
35 'kwargs': {
36 'autoskip_selector': args['autoskip'],
37 'force': args['force'],
38 'interactive': args['interactive'],
39 'workers': args['item_workers'],
40 'profiling': args['profiling'],
41 },
42 }
43
44 def handle_result(task_id, return_value, duration):
45 if (
46 return_value is not None and # node skipped because it had no items
47 args['profiling']
48 ):
49 total_time = 0.0
50 io.stdout(_(" {}").format(bold(task_id)))
51 io.stdout(_(" {} BEGIN PROFILING DATA "
52 "(most expensive items first)").format(bold(task_id)))
53 io.stdout(_(" {} seconds item").format(bold(task_id)))
54 for time_elapsed, item_id in return_value.profiling_info:
55 io.stdout(" {} {:10.3f} {}".format(
56 bold(task_id),
57 time_elapsed.total_seconds(),
58 item_id,
59 ))
60 total_time += time_elapsed.total_seconds()
61 io.stdout(_(" {} {:10.3f} (total)").format(bold(task_id), total_time))
62 io.stdout(_(" {} END PROFILING DATA").format(bold(task_id)))
63 io.stdout(_(" {}").format(bold(task_id)))
64
65 def handle_exception(task_id, exception, traceback):
66 msg = "{}: {}".format(task_id, exception)
67 io.stderr(traceback)
68 io.stderr(repr(exception))
69 io.stderr(msg)
70 errors.append(msg)
71
72 worker_pool = WorkerPool(
73 tasks_available,
74 next_task,
75 handle_result=handle_result,
76 handle_exception=handle_exception,
77 pool_id="apply",
78 workers=args['node_workers'],
79 )
80 worker_pool.run()
81
82 error_summary(errors)
83
84 repo.hooks.apply_end(
85 repo,
86 args['target'],
87 target_nodes,
88 duration=datetime.now() - start_time,
89 )
90
91 exit(1 if errors else 0)
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from code import interact
4
5 from .. import VERSION_STRING
6 from ..utils.cmdline import get_node
7 from ..utils.text import mark_for_translation as _
8 from ..utils.ui import io
9
10
11 DEBUG_BANNER = _("BundleWrap {version} interactive repository inspector\n"
12 "> You can access the current repository as 'repo'."
13 "").format(version=VERSION_STRING)
14
15 DEBUG_BANNER_NODE = DEBUG_BANNER + "\n" + \
16 _("> You can access the selected node as 'node'.")
17
18
19 def bw_debug(repo, args):
20 if args['node'] is None:
21 env = {'repo': repo}
22 banner = DEBUG_BANNER
23 else:
24 env = {'node': get_node(repo, args['node']), 'repo': repo}
25 banner = DEBUG_BANNER_NODE
26
27 io.deactivate()
28 if args['command']:
29 exec(args['command'], env)
30 else:
31 interact(banner, local=env)
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from ..utils import names
4 from ..utils.ui import io
5
6
7 def bw_groups(repo, args):
8 for group in repo.groups:
9 line = group.name
10 if args['show_nodes']:
11 line += ": " + ", ".join(names(group.nodes))
12 io.stdout(line)
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from sys import exit
4
5 from ..exceptions import NoSuchGroup, NoSuchNode
6 from ..utils.cmdline import get_item
7 from ..utils.text import mark_for_translation as _, red
8 from ..utils.ui import io
9
10
11 def bw_hash(repo, args):
12 if args['group_membership'] and args['metadata']:
13 io.stdout(_(
14 "{x} Cannot hash group membership and metadata at the same time"
15 ).format(x=red("!!!")))
16 exit(1)
17 if args['group_membership'] and args['item']:
18 io.stdout(_("{x} Cannot hash group membership for an item").format(x=red("!!!")))
19 exit(1)
20 if args['item'] and args['metadata']:
21 io.stdout(_("{x} Items don't have metadata").format(x=red("!!!")))
22 exit(1)
23
24 if args['node_or_group']:
25 try:
26 target = repo.get_node(args['node_or_group'])
27 target_type = 'node'
28 except NoSuchNode:
29 try:
30 target = repo.get_group(args['node_or_group'])
31 target_type = 'group'
32 except NoSuchGroup:
33 if args['adhoc_nodes']:
34 target = repo.create_node(args['node_or_group'])
35 target_type = 'node'
36 else:
37 io.stderr(_("{x} No such node or group: {node_or_group}").format(
38 node_or_group=args['node_or_group'],
39 x=red("!!!"),
40 ))
41 exit(1)
42 else:
43 if args['item']:
44 target = get_item(target, args['item'])
45 target_type = 'item'
46 else:
47 target = repo
48 target_type = 'repo'
49
50 if target_type == 'node' and args['dict'] and args['metadata']:
51 io.stdout(_("{x} Cannot show a metadata dict for a single node").format(x=red("!!!")))
52 exit(1)
53 if target_type == 'group' and args['item']:
54 io.stdout(_("{x} Cannot select item for group").format(x=red("!!!")))
55 exit(1)
56
57 if args['dict']:
58 if args['group_membership']:
59 if target_type in ('node', 'repo'):
60 for group in target.groups:
61 io.stdout(group.name)
62 else:
63 for node in target.nodes:
64 io.stdout(node.name)
65 elif args['metadata']:
66 for node in target.nodes:
67 io.stdout("{}\t{}".format(node.name, node.metadata_hash()))
68 else:
69 cdict = target.cached_cdict if args['item'] else target.cdict
70 if cdict is None:
71 io.stdout("REMOVE")
72 else:
73 for key, value in sorted(cdict.items()):
74 io.stdout("{}\t{}".format(key, value) if args['item'] else "{} {}".format(value, key))
75 else:
76 if args['group_membership']:
77 io.stdout(target.group_membership_hash())
78 elif args['metadata']:
79 io.stdout(target.metadata_hash())
80 else:
81 io.stdout(target.hash())
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from os import makedirs
4 from os.path import dirname, exists, join
5 from sys import exit
6
7 from ..utils.cmdline import get_node
8 from ..utils.text import force_text, mark_for_translation as _
9 from ..utils.ui import io
10
11
12 def write_preview(file_item, base_path):
13 """
14 Writes the content of the given file item to the given path.
15 """
16 file_path = join(base_path, file_item.name.lstrip("/"))
17 dir_path = dirname(file_path)
18 if not exists(dir_path):
19 makedirs(dir_path)
20 with open(file_path, 'wb') as f:
21 f.write(file_item.content)
22
23
24 def bw_items(repo, args):
25 node = get_node(repo, args['node'], adhoc_nodes=args['adhoc_nodes'])
26 if args['file_preview']:
27 item = node.get_item("file:{}".format(args['file_preview']))
28 if (
29 item.attributes['content_type'] in ('any', 'base64', 'binary') or
30 item.attributes['delete'] is True
31 ):
32 io.stderr(_(
33 "cannot preview {node} (unsuitable content_type or deleted)"
34 ).format(node=node.name))
35 exit(1)
36 else:
37 io.stdout(item.content.decode(item.attributes['encoding']), append_newline=False)
38 elif args['file_preview_path']:
39 if exists(args['file_preview_path']):
40 io.stderr(_(
41 "not writing to existing path: {path}"
42 ).format(path=args['file_preview_path']))
43 exit(1)
44 for item in node.items:
45 if not item.id.startswith("file:"):
46 continue
47 if item.attributes['content_type'] == 'any':
48 io.stderr(_(
49 "skipping file with 'any' content ({filename})..."
50 ).format(filename=item.name))
51 continue
52 if item.attributes['content_type'] == 'binary':
53 io.stderr(_(
54 "skipping binary file ({filename})..."
55 ).format(filename=item.name))
56 continue
57 if item.attributes['delete']:
58 io.stderr(_(
59 "skipping file with 'delete' flag ({filename})..."
60 ).format(filename=item.name))
61 continue
62 io.stdout(_("writing {path}...").format(path=join(
63 args['file_preview_path'],
64 item.name.lstrip("/"),
65 )))
66 write_preview(item, args['file_preview_path'])
67 else:
68 for item in node.items:
69 if args['show_repr']:
70 io.stdout(force_text(repr(item)))
71 else:
72 io.stdout(force_text(str(item)))
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from ..concurrency import WorkerPool
4 from ..lock import softlock_add, softlock_list, softlock_remove
5 from ..utils.cmdline import get_target_nodes
6 from ..utils.text import blue, bold, cyan, error_summary, green, mark_for_translation as _, \
7 randstr
8 from ..utils.time import format_timestamp
9 from ..utils.ui import io
10
11
12 def bw_lock_add(repo, args):
13 errors = []
14 target_nodes = get_target_nodes(repo, args['target'], adhoc_nodes=args['adhoc_nodes'])
15 pending_nodes = target_nodes[:]
16 max_node_name_length = max([len(node.name) for node in target_nodes])
17 lock_id = randstr(length=4).upper()
18
19 def tasks_available():
20 return bool(pending_nodes)
21
22 def next_task():
23 node = pending_nodes.pop()
24 return {
25 'target': softlock_add,
26 'task_id': node.name,
27 'args': (node, lock_id),
28 'kwargs': {
29 'comment': args['comment'],
30 'expiry': args['expiry'],
31 'item_selectors': args['items'].split(","),
32 },
33 }
34
35 def handle_result(task_id, return_value, duration):
36 io.stdout(_("{x} {node} locked with ID {id} (expires in {exp})").format(
37 x=green("✓"),
38 node=bold(task_id.ljust(max_node_name_length)),
39 id=return_value,
40 exp=args['expiry'],
41 ))
42
43 def handle_exception(task_id, exception, traceback):
44 msg = "{}: {}".format(task_id, exception)
45 io.stderr(traceback)
46 io.stderr(repr(exception))
47 io.stderr(msg)
48 errors.append(msg)
49
50 worker_pool = WorkerPool(
51 tasks_available,
52 next_task,
53 handle_exception=handle_exception,
54 handle_result=handle_result,
55 pool_id="lock",
56 workers=args['node_workers'],
57 )
58 worker_pool.run()
59
60 error_summary(errors)
61
62
63 def bw_lock_remove(repo, args):
64 errors = []
65 target_nodes = get_target_nodes(repo, args['target'], adhoc_nodes=args['adhoc_nodes'])
66 pending_nodes = target_nodes[:]
67 max_node_name_length = max([len(node.name) for node in target_nodes])
68
69 def tasks_available():
70 return bool(pending_nodes)
71
72 def next_task():
73 node = pending_nodes.pop()
74 return {
75 'target': softlock_remove,
76 'task_id': node.name,
77 'args': (node, args['lock_id'].upper()),
78 }
79
80 def handle_result(task_id, return_value, duration):
81 io.stdout(_("{x} {node} lock {id} removed").format(
82 x=green("✓"),
83 node=bold(task_id.ljust(max_node_name_length)),
84 id=args['lock_id'].upper(),
85 ))
86
87 def handle_exception(task_id, exception, traceback):
88 msg = "{}: {}".format(task_id, exception)
89 io.stderr(traceback)
90 io.stderr(repr(exception))
91 io.stderr(msg)
92 errors.append(msg)
93
94 worker_pool = WorkerPool(
95 tasks_available,
96 next_task,
97 handle_exception=handle_exception,
98 handle_result=handle_result,
99 pool_id="lock_remove",
100 workers=args['node_workers'],
101 )
102 worker_pool.run()
103
104 error_summary(errors)
105
106
107 def bw_lock_show(repo, args):
108 errors = []
109 target_nodes = get_target_nodes(repo, args['target'], adhoc_nodes=args['adhoc_nodes'])
110 pending_nodes = target_nodes[:]
111 max_node_name_length = max([len(node.name) for node in target_nodes])
112 locks_on_node = {}
113
114 def tasks_available():
115 return bool(pending_nodes)
116
117 def next_task():
118 node = pending_nodes.pop()
119 return {
120 'target': softlock_list,
121 'task_id': node.name,
122 'args': (node,),
123 }
124
125 def handle_result(task_id, return_value, duration):
126 locks_on_node[task_id] = return_value
127
128 def handle_exception(task_id, exception, traceback):
129 msg = "{}: {}".format(task_id, exception)
130 io.stderr(traceback)
131 io.stderr(repr(exception))
132 io.stderr(msg)
133 errors.append(msg)
134
135 worker_pool = WorkerPool(
136 tasks_available,
137 next_task,
138 handle_exception=handle_exception,
139 handle_result=handle_result,
140 pool_id="lock_show",
141 workers=args['node_workers'],
142 )
143 worker_pool.run()
144
145 if errors:
146 error_summary(errors)
147 return
148
149 headers = (
150 ('id', _("ID")),
151 ('formatted_date', _("Created")),
152 ('formatted_expiry', _("Expires")),
153 ('user', _("User")),
154 ('items', _("Items")),
155 ('comment', _("Comment")),
156 )
157
158 locked_nodes = 0
159 for node_name, locks in locks_on_node.items():
160 if locks:
161 locked_nodes += 1
162
163 previous_node_was_unlocked = False
164 for node_name, locks in sorted(locks_on_node.items()):
165 if not locks:
166 io.stdout(_("{x} {node} no soft locks present").format(
167 x=green("✓"),
168 node=bold(node_name.ljust(max_node_name_length)),
169 ))
170 previous_node_was_unlocked = True
171
172 output_counter = 0
173 for node_name, locks in sorted(locks_on_node.items()):
174 if locks:
175 # Unlocked nodes are printed without empty lines in
176 # between them. Locked nodes can produce lengthy output,
177 # though, so we add empty lines.
178 if (
179 previous_node_was_unlocked or (
180 output_counter > 0 and output_counter < locked_nodes
181 )
182 ):
183 previous_node_was_unlocked = False
184 io.stdout('')
185
186 for lock in locks:
187 lock['formatted_date'] = format_timestamp(lock['date'])
188 lock['formatted_expiry'] = format_timestamp(lock['expiry'])
189
190 lengths = {}
191 headline = "{x} {node} ".format(
192 x=blue("i"),
193 node=bold(node_name.ljust(max_node_name_length)),
194 )
195
196 for column, title in headers:
197 lengths[column] = len(title)
198 for lock in locks:
199 if column == 'items':
200 length = max([len(selector) for selector in lock[column]])
201 else:
202 length = len(lock[column])
203 lengths[column] = max(lengths[column], length)
204 headline += bold(title.ljust(lengths[column] + 2))
205
206 io.stdout(headline.rstrip())
207 for lock in locks:
208 for lineno, item_selectors in enumerate(lock['items']):
209 line = "{x} {node} ".format(
210 x=cyan("›"),
211 node=bold(node_name.ljust(max_node_name_length)),
212 )
213 for column, title in headers:
214 if column == 'items':
215 line += lock[column][lineno].ljust(lengths[column] + 2)
216 elif lineno == 0:
217 line += lock[column].ljust(lengths[column] + 2)
218 else:
219 line += " " * (lengths[column] + 2)
220 io.stdout(line.rstrip())
221
222 output_counter += 1
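The column-width computation used by `bw_lock_show` above (grow each column to fit its widest cell or header, then `ljust` everything) can be sketched standalone; the `rows` and `headers` data here are made up for illustration:

```python
# Standalone sketch of the table alignment in bw_lock_show: widths
# start at the header lengths and grow to fit the widest cell.
rows = [
    {"id": "A1B2", "user": "alice", "comment": "kernel upgrade"},
    {"id": "C3D4", "user": "bob", "comment": "debugging"},
]
headers = (("id", "ID"), ("user", "User"), ("comment", "Comment"))

widths = {col: len(title) for col, title in headers}
for row in rows:
    for col, _title in headers:
        widths[col] = max(widths[col], len(row[col]))

print("  ".join(title.ljust(widths[col]) for col, title in headers).rstrip())
for row in rows:
    print("  ".join(row[col].ljust(widths[col]) for col, _t in headers).rstrip())
```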
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from json import dumps
4
5 from ..metadata import MetadataJSONEncoder
6 from ..utils.cmdline import get_node
7 from ..utils.text import force_text
8 from ..utils.ui import io
9
10
11 def bw_metadata(repo, args):
12 node = get_node(repo, args['node'], adhoc_nodes=args['adhoc_nodes'])
13 for line in dumps(
14 node.metadata,
15 cls=MetadataJSONEncoder,
16 indent=4,
17 sort_keys=True,
18 ).splitlines():
19 io.stdout(force_text(line))
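The stable rendering used by `bw_metadata` above (sorted keys, fixed indent, line-by-line output) makes metadata dumps diff-friendly. A standalone sketch with a made-up metadata dict, using plain `json.dumps` without BundleWrap's `MetadataJSONEncoder`:

```python
# Standalone sketch of bw_metadata's JSON rendering: sort_keys and a
# fixed indent give deterministic, diffable output.
from json import dumps

metadata = {"roles": ["web"], "interfaces": {"eth0": {"dhcp": True}}}
rendered = dumps(metadata, indent=4, sort_keys=True)
for line in rendered.splitlines():
    print(line)
```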
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from ..utils import names
4 from ..utils.cmdline import get_group, get_target_nodes
5 from ..utils.text import bold
6 from ..utils.ui import io
7 from ..group import GROUP_ATTR_DEFAULTS
8
9
10 ATTR_MAX_LENGTH = max([len(attr) for attr in GROUP_ATTR_DEFAULTS])
11
12
13 def bw_nodes(repo, args):
14 if args['filter_group'] is not None:
15 nodes = get_group(repo, args['filter_group']).nodes
16 elif args['target'] is not None:
17 nodes = get_target_nodes(repo, args['target'], adhoc_nodes=args['adhoc_nodes'])
18 else:
19 nodes = repo.nodes
20 max_node_name_length = 0 if not nodes else max([len(name) for name in names(nodes)])
21 for node in nodes:
22 if args['show_attrs']:
23 for attr in sorted(list(GROUP_ATTR_DEFAULTS) + ['hostname']):
24 io.stdout("{}\t{}\t{}".format(
25 node.name.ljust(max_node_name_length),
26 bold(attr.ljust(ATTR_MAX_LENGTH)),
27 getattr(node, attr),
28 ))
29
30 if args['inline']:
31 io.stdout("{}\t{}\t{}".format(
32 node.name.ljust(max_node_name_length),
33 bold("group".ljust(ATTR_MAX_LENGTH)),
34 ", ".join([group.name for group in node.groups]),
35 ))
36 else:
37 for group in node.groups:
38 io.stdout("{}\t{}\t{}".format(
39 node.name.ljust(max_node_name_length),
40 bold("group".ljust(ATTR_MAX_LENGTH)),
41 group.name,
42 ))
43
44 if args['inline']:
45 io.stdout("{}\t{}\t{}".format(
46 node.name.ljust(max_node_name_length),
47 bold("bundle".ljust(ATTR_MAX_LENGTH)),
48 ", ".join([bundle.name for bundle in node.bundles]),
49 ))
50 else:
51 for bundle in node.bundles:
52 io.stdout("{}\t{}\t{}".format(
53 node.name.ljust(max_node_name_length),
54 bold("bundle".ljust(ATTR_MAX_LENGTH)),
55 bundle.name,
56 ))
57 continue
58 line = ""
59 if args['show_hostnames']:
60 line += node.hostname
61 else:
62 line += node.name
63 if args['show_bundles']:
64 line += ": " + ", ".join(sorted(names(node.bundles)))
65 elif args['show_groups']:
66 line += ": " + ", ".join(sorted(names(node.groups)))
67 elif args['show_os']:
68 line += ": " + node.os
69 io.stdout(line)
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from argparse import ArgumentParser
4 from os import environ
5
6 from .. import VERSION_STRING
7 from ..utils.text import mark_for_translation as _
8 from .apply import bw_apply
9 from .debug import bw_debug
10 from .groups import bw_groups
11 from .hash import bw_hash
12 from .items import bw_items
13 from .lock import bw_lock_add, bw_lock_remove, bw_lock_show
14 from .metadata import bw_metadata
15 from .nodes import bw_nodes
16 from .plot import bw_plot_group, bw_plot_node, bw_plot_node_groups
17 from .repo import bw_repo_bundle_create, bw_repo_create, bw_repo_plugin_install, \
18 bw_repo_plugin_list, bw_repo_plugin_search, bw_repo_plugin_remove, bw_repo_plugin_update
19 from .run import bw_run
20 from .stats import bw_stats
21 from .test import bw_test
22 from .verify import bw_verify
23 from .zen import bw_zen
24
25
26 def build_parser_bw():
27 parser = ArgumentParser(
28 prog="bw",
29 description=_("BundleWrap - Config Management with Python"),
30 )
31 parser.add_argument(
32 "-a",
33 "--add-host-keys",
34 action='store_true',
35 default=False,
36 dest='add_ssh_host_keys',
37 help=_("set StrictHostKeyChecking=no instead of yes for SSH"),
38 )
39 parser.add_argument(
40 "-A",
41 "--adhoc-nodes",
42 action='store_true',
43 default=False,
44 dest='adhoc_nodes',
45 help=_(
46 "treat unknown node names as adhoc 'virtual' nodes that receive configuration only "
47 "through groups whose member_patterns match the node name given on the command line "
48 "(which also has to be a resolvable hostname)"),
49 )
50 parser.add_argument(
51 "-d",
52 "--debug",
53 action='store_true',
54 default=False,
55 dest='debug',
56 help=_("print debugging info (implies -v)"),
57 )
58 parser.add_argument(
59 "--version",
60 action='version',
61 version=VERSION_STRING,
62 )
63 subparsers = parser.add_subparsers(
64 title=_("subcommands"),
65 help=_("use 'bw <subcommand> --help' for more info"),
66 )
67
68 # bw apply
69 help_apply = _("Applies the configuration defined in your repository to your nodes")
70 parser_apply = subparsers.add_parser("apply", description=help_apply, help=help_apply)
71 parser_apply.set_defaults(func=bw_apply)
72 parser_apply.add_argument(
73 'target',
74 metavar=_("NODE1,NODE2,GROUP1,bundle:BUNDLE1..."),
75 type=str,
76 help=_("target nodes, groups and/or bundle selectors"),
77 )
78 parser_apply.add_argument(
79 "-f",
80 "--force",
81 action='store_true',
82 default=False,
83 dest='force',
84 help=_("ignore existing hard node locks"),
85 )
86 parser_apply.add_argument(
87 "-i",
88 "--interactive",
89 action='store_true',
90 default=False,
91 dest='interactive',
92 help=_("ask before applying each item"),
93 )
94 bw_apply_p_default = int(environ.get("BW_NODE_WORKERS", "4"))
95 parser_apply.add_argument(
96 "-p",
97 "--parallel-nodes",
98 default=bw_apply_p_default,
99 dest='node_workers',
100 help=_("number of nodes to apply to simultaneously "
101 "(defaults to {})").format(bw_apply_p_default),
102 type=int,
103 )
104 bw_apply_p_items_default = int(environ.get("BW_ITEM_WORKERS", "4"))
105 parser_apply.add_argument(
106 "-P",
107 "--parallel-items",
108 default=bw_apply_p_items_default,
109 dest='item_workers',
110 help=_("number of items to apply simultaneously on each node "
111 "(defaults to {})").format(bw_apply_p_items_default),
112 type=int,
113 )
114 parser_apply.add_argument(
115 "--profiling",
116 action='store_true',
117 default=False,
118 dest='profiling',
119 help=_("print time elapsed for each item"),
120 )
121 parser_apply.add_argument(
122 "-s",
123 "--skip",
124 default="",
125 dest='autoskip',
126 help=_(
127 "e.g. 'file:/foo,tag:foo,bundle:bar,node:baz,group:frob' "
128 "to skip all instances of file:/foo "
129 "and items with tag 'foo', "
130 "or in bundle 'bar', "
131 "or on node 'baz', "
132 "or on a node in group 'frob'"
133 ),
134 metavar=_("SELECTOR"),
135 type=str,
136 )
137
138 # bw debug
139 help_debug = _("Start an interactive Python shell for this repository")
140 parser_debug = subparsers.add_parser("debug", description=help_debug, help=help_debug)
141 parser_debug.set_defaults(func=bw_debug)
142 parser_debug.add_argument(
143 "-c",
144 "--command",
145 default=None,
146 dest='command',
147 metavar=_("COMMAND"),
148 required=False,
149 type=str,
150 help=_("command to execute in lieu of REPL"),
151 )
152 parser_debug.add_argument(
153 "-n",
154 "--node",
155 default=None,
156 dest='node',
157 metavar=_("NODE"),
158 required=False,
159 type=str,
160 help=_("name of node to inspect"),
161 )
162
163 # bw groups
164 help_groups = _("Lists groups in this repository (deprecated, use `bw nodes -a`)")
165 parser_groups = subparsers.add_parser("groups", description=help_groups, help=help_groups)
166 parser_groups.set_defaults(func=bw_groups)
167 parser_groups.add_argument(
168 "-n",
169 "--nodes",
170 action='store_true',
171 dest='show_nodes',
172 help=_("show nodes for each group"),
173 )
174
175 # bw hash
176 help_hash = _("Shows a SHA1 hash that summarizes the entire configuration for this repo, node, group, or item.")
177 parser_hash = subparsers.add_parser("hash", description=help_hash, help=help_hash)
178 parser_hash.set_defaults(func=bw_hash)
179 parser_hash.add_argument(
180 "-d",
181 "--dict",
182 action='store_true',
183 default=False,
184 dest='dict',
185 help=_("instead show the data this hash is derived from"),
186 )
187 parser_hash.add_argument(
188 "-g",
189 "--group",
190 action='store_true',
191 default=False,
192 dest='group_membership',
193 help=_("hash group membership instead of configuration"),
194 )
195 parser_hash.add_argument(
196 "-m",
197 "--metadata",
198 action='store_true',
199 default=False,
200 dest='metadata',
201 help=_("hash metadata instead of configuration (not available for items)"),
202 )
203 parser_hash.add_argument(
204 'node_or_group',
205 metavar=_("NODE|GROUP"),
206 type=str,
207 nargs='?',
208 help=_("show config hash for this node or group"),
209 )
210 parser_hash.add_argument(
211 'item',
212 metavar=_("ITEM"),
213 type=str,
214 nargs='?',
215 help=_("show config hash for this item on the given node"),
216 )
217
218 # bw items
219 help_items = _("List and preview items for a specific node")
220 parser_items = subparsers.add_parser("items", description=help_items, help=help_items)
221 parser_items.set_defaults(func=bw_items)
222 parser_items.add_argument(
223 'node',
224 metavar=_("NODE"),
225 type=str,
226 help=_("list items for this node"),
227 )
228 parser_items.add_argument(
229 "-f",
230 "--file-preview",
231 dest='file_preview',
232 help=_("print preview of given file"),
233 metavar=_("FILE"),
234 required=False,
235 type=str,
236 )
237 parser_items.add_argument(
238 "-w",
239 "--write-file-previews",
240 default=None,
241 dest='file_preview_path',
242 metavar=_("DIRECTORY"),
243 required=False,
244 type=str,
245 help=_("create DIRECTORY and fill it with rendered file previews"),
246 )
247 parser_items.add_argument(
248 "--repr",
249 action='store_true',
250 dest='show_repr',
251 help=_("show more verbose representation of each item"),
252 )
253
254 # bw lock
255 help_lock = _("Manage locks on nodes used to prevent collisions between BundleWrap users")
256 parser_lock = subparsers.add_parser("lock", description=help_lock, help=help_lock)
257 parser_lock_subparsers = parser_lock.add_subparsers()
258
259 # bw lock add
260 help_lock_add = _("Add a new lock to one or more nodes")
261 parser_lock_add = parser_lock_subparsers.add_parser(
262 "add",
263 description=help_lock_add,
264 help=help_lock_add,
265 )
266 parser_lock_add.set_defaults(func=bw_lock_add)
267 parser_lock_add.add_argument(
268 'target',
269 metavar=_("NODE1,NODE2,GROUP1,bundle:BUNDLE1..."),
270 type=str,
271 help=_("target nodes, groups and/or bundle selectors"),
272 )
273 parser_lock_add.add_argument(
274 "-c",
275 "--comment",
276 default="",
277 dest='comment',
278 help=_("brief description of the purpose of the lock"),
279 type=str,
280 )
281 bw_lock_add_e_default = environ.get("BW_SOFTLOCK_EXPIRY", "8h")
282 parser_lock_add.add_argument(
283 "-e",
284 "--expires-in",
285 default=bw_lock_add_e_default,
286 dest='expiry',
287 help=_("how long before the lock is ignored and removed automatically "
288 "(defaults to \"{}\")").format(bw_lock_add_e_default),
289 type=str,
290 )
291 parser_lock_add.add_argument(
292 "-i",
293 "--items",
294 default="*",
295 dest='items',
296 help=_("comma-separated list of item selectors the lock applies to "
297 "(defaults to \"*\" meaning all)"),
298 type=str,
299 )
300 bw_lock_add_p_default = int(environ.get("BW_NODE_WORKERS", "4"))
301 parser_lock_add.add_argument(
302 "-p",
303 "--parallel-nodes",
304 default=bw_lock_add_p_default,
305 dest='node_workers',
306 help=_("number of nodes to lock simultaneously "
307 "(defaults to {})").format(bw_lock_add_p_default),
308 type=int,
309 )
310
311 # bw lock remove
312 help_lock_remove = _("Remove a lock from a node")
313 parser_lock_remove = parser_lock_subparsers.add_parser(
314 "remove",
315 description=help_lock_remove,
316 help=help_lock_remove,
317 )
318 parser_lock_remove.set_defaults(func=bw_lock_remove)
319 parser_lock_remove.add_argument(
320 'target',
321 metavar=_("NODE1,NODE2,GROUP1,bundle:BUNDLE1..."),
322 type=str,
323 help=_("target nodes, groups and/or bundle selectors"),
324 )
325 parser_lock_remove.add_argument(
326 'lock_id',
327 metavar=_("LOCK_ID"),
328 type=str,
329 help=_("ID of the lock to remove (obtained with `bw lock show`)"),
330 )
331 bw_lock_remove_p_default = int(environ.get("BW_NODE_WORKERS", "4"))
332 parser_lock_remove.add_argument(
333 "-p",
334 "--parallel-nodes",
335 default=bw_lock_remove_p_default,
336 dest='node_workers',
337 help=_("number of nodes to remove lock from simultaneously "
338 "(defaults to {})").format(bw_lock_remove_p_default),
339 type=int,
340 )
341
342 # bw lock show
343 help_lock_show = _("Show details of locks present on a node")
344 parser_lock_show = parser_lock_subparsers.add_parser(
345 "show",
346 description=help_lock_show,
347 help=help_lock_show,
348 )
349 parser_lock_show.set_defaults(func=bw_lock_show)
350 parser_lock_show.add_argument(
351 'target',
352 metavar=_("NODE1,NODE2,GROUP1,bundle:BUNDLE1..."),
353 type=str,
354 help=_("target node"),
355 )
356 bw_lock_show_p_default = int(environ.get("BW_NODE_WORKERS", "4"))
357 parser_lock_show.add_argument(
358 "-p",
359 "--parallel-nodes",
360 default=bw_lock_show_p_default,
361 dest='node_workers',
362 help=_("number of nodes to retrieve locks from simultaneously "
363 "(defaults to {})").format(bw_lock_show_p_default),
364 type=int,
365 )
366
367 # bw metadata
368     help_metadata = _("View a JSON representation of a node's metadata")
369 parser_metadata = subparsers.add_parser(
370 "metadata",
371 description=help_metadata,
372 help=help_metadata,
373 )
374 parser_metadata.set_defaults(func=bw_metadata)
375 parser_metadata.add_argument(
376 'node',
377 metavar=_("NODE"),
378 type=str,
379 help=_("node to print JSON-formatted metadata for"),
380 )
381
382 # bw nodes
383 help_nodes = _("List all nodes in this repository")
384 parser_nodes = subparsers.add_parser("nodes", description=help_nodes, help=help_nodes)
385 parser_nodes.set_defaults(func=bw_nodes)
386 parser_nodes.add_argument(
387 "-a",
388 "--attrs",
389 action='store_true',
390 dest='show_attrs',
391 help=_("show attributes for each node"),
392 )
393 parser_nodes.add_argument(
394 "--bundles",
395 action='store_true',
396 dest='show_bundles',
397 help=_("show bundles for each node (deprecated, use --attrs)"),
398 )
399 parser_nodes.add_argument(
400 "--hostnames",
401 action='store_true',
402 dest='show_hostnames',
403 help=_("show hostnames instead of node names (deprecated, use --attrs)"),
404 )
405 parser_nodes.add_argument(
406 "-g",
407 "--filter-group",
408 default=None,
409 dest='filter_group',
410 metavar=_("GROUP"),
411 required=False,
412 type=str,
413 help=_("show only nodes in the given group (deprecated)"),
414 )
415 parser_nodes.add_argument(
416 "--groups",
417 action='store_true',
418 dest='show_groups',
419 help=_("show group membership for each node (deprecated, use --attrs)"),
420 )
421 parser_nodes.add_argument(
422 "-i",
423 "--inline",
424 action='store_true',
425 dest='inline',
426 help=_("show multiple values on the same line (use with --attrs)"),
427 )
428 parser_nodes.add_argument(
429 "--os",
430 action='store_true',
431 dest='show_os',
432 help=_("show OS for each node (deprecated, use --attrs)"),
433 )
434 parser_nodes.add_argument(
435 'target',
436 default=None,
437 metavar=_("NODE1,NODE2,GROUP1,bundle:BUNDLE1..."),
438 nargs='?',
439 type=str,
440 help=_("filter according to nodes, groups and/or bundle selectors"),
441 )
442
443 # bw plot
444     help_plot = _("Generate DOT output that can be piped into `dot -Tsvg -ooutput.svg`. "
445 "The resulting output.svg can be viewed using most browsers.")
446 parser_plot = subparsers.add_parser("plot", description=help_plot, help=help_plot)
447 parser_plot_subparsers = parser_plot.add_subparsers()
448
449 # bw plot group
450 help_plot_group = _("Plot subgroups and node members for the given group "
451 "or the entire repository")
452 parser_plot_subparsers_group = parser_plot_subparsers.add_parser(
453 "group",
454 description=help_plot_group,
455 help=help_plot_group,
456 )
457 parser_plot_subparsers_group.set_defaults(func=bw_plot_group)
458 parser_plot_subparsers_group.add_argument(
459 'group',
460 default=None,
461 metavar=_("GROUP"),
462 nargs='?',
463 type=str,
464 help=_("group to plot"),
465 )
466 parser_plot_subparsers_group.add_argument(
467 "-N", "--no-nodes",
468 action='store_false',
469 dest='show_nodes',
470 help=_("do not include nodes in output"),
471 )
472
473 # bw plot node
474 help_plot_node = _("Plot items and their dependencies for the given node")
475 parser_plot_subparsers_node = parser_plot_subparsers.add_parser(
476 "node",
477 description=help_plot_node,
478 help=help_plot_node,
479 )
480 parser_plot_subparsers_node.set_defaults(func=bw_plot_node)
481 parser_plot_subparsers_node.add_argument(
482 'node',
483 metavar=_("NODE"),
484 type=str,
485 help=_("node to plot"),
486 )
487 parser_plot_subparsers_node.add_argument(
488 "--no-cluster",
489 action='store_false',
490 dest='cluster',
491 help=_("do not cluster items by bundle"),
492 )
493 parser_plot_subparsers_node.add_argument(
494 "--no-depends-auto",
495 action='store_false',
496 dest='depends_auto',
497 help=_("do not show auto-generated dependencies and items"),
498 )
499 parser_plot_subparsers_node.add_argument(
500 "--no-depends-conc",
501 action='store_false',
502 dest='depends_concurrency',
503 help=_("do not show concurrency blocker dependencies"),
504 )
505 parser_plot_subparsers_node.add_argument(
506 "--no-depends-regular",
507 action='store_false',
508 dest='depends_regular',
509 help=_("do not show regular user-defined dependencies"),
510 )
511 parser_plot_subparsers_node.add_argument(
512 "--no-depends-reverse",
513 action='store_false',
514 dest='depends_reverse',
515 help=_("do not show reverse dependencies ('needed_by')"),
516 )
517 parser_plot_subparsers_node.add_argument(
518 "--no-depends-static",
519 action='store_false',
520 dest='depends_static',
521 help=_("do not show static dependencies"),
522 )
523
524 # bw plot groups-for-node
525 help_plot_node_groups = _("Show where a specific node gets its groups from")
526 parser_plot_subparsers_node_groups = parser_plot_subparsers.add_parser(
527 "groups-for-node",
528 description=help_plot_node_groups,
529 help=help_plot_node_groups,
530 )
531 parser_plot_subparsers_node_groups.set_defaults(func=bw_plot_node_groups)
532 parser_plot_subparsers_node_groups.add_argument(
533 'node',
534 metavar=_("NODE"),
535 type=str,
536 help=_("node to plot"),
537 )
538
539 # bw repo
540 help_repo = _("Various subcommands to manipulate your repository")
541 parser_repo = subparsers.add_parser("repo", description=help_repo, help=help_repo)
542 parser_repo_subparsers = parser_repo.add_subparsers()
543
544 # bw repo bundle
545 parser_repo_subparsers_bundle = parser_repo_subparsers.add_parser("bundle")
546 parser_repo_subparsers_bundle_subparsers = parser_repo_subparsers_bundle.add_subparsers()
547
548 # bw repo bundle create
549 parser_repo_subparsers_bundle_create = \
550 parser_repo_subparsers_bundle_subparsers.add_parser("create")
551 parser_repo_subparsers_bundle_create.set_defaults(func=bw_repo_bundle_create)
552 parser_repo_subparsers_bundle_create.add_argument(
553 'bundle',
554 metavar=_("BUNDLE"),
555 type=str,
556 help=_("name of bundle to create"),
557 )
558
559 # bw repo create
560 parser_repo_subparsers_create = parser_repo_subparsers.add_parser("create")
561 parser_repo_subparsers_create.set_defaults(func=bw_repo_create)
562
563 # bw repo plugin
564 parser_repo_subparsers_plugin = parser_repo_subparsers.add_parser("plugin")
565 parser_repo_subparsers_plugin_subparsers = parser_repo_subparsers_plugin.add_subparsers()
566
567 # bw repo plugin install
568 parser_repo_subparsers_plugin_install = parser_repo_subparsers_plugin_subparsers.add_parser("install")
569 parser_repo_subparsers_plugin_install.set_defaults(func=bw_repo_plugin_install)
570 parser_repo_subparsers_plugin_install.add_argument(
571 'plugin',
572 metavar=_("PLUGIN_NAME"),
573 type=str,
574 help=_("name of plugin to install"),
575 )
576 parser_repo_subparsers_plugin_install.add_argument(
577 "-f",
578 "--force",
579 action='store_true',
580 dest='force',
581 help=_("overwrite existing files when installing"),
582 )
583
584 # bw repo plugin list
585 parser_repo_subparsers_plugin_list = parser_repo_subparsers_plugin_subparsers.add_parser("list")
586 parser_repo_subparsers_plugin_list.set_defaults(func=bw_repo_plugin_list)
587
588 # bw repo plugin remove
589 parser_repo_subparsers_plugin_remove = parser_repo_subparsers_plugin_subparsers.add_parser("remove")
590 parser_repo_subparsers_plugin_remove.set_defaults(func=bw_repo_plugin_remove)
591 parser_repo_subparsers_plugin_remove.add_argument(
592 'plugin',
593 metavar=_("PLUGIN_NAME"),
594 type=str,
595 help=_("name of plugin to remove"),
596 )
597 parser_repo_subparsers_plugin_remove.add_argument(
598 "-f",
599 "--force",
600 action='store_true',
601 dest='force',
602 help=_("remove files even if locally modified"),
603 )
604
605 # bw repo plugin search
606 parser_repo_subparsers_plugin_search = parser_repo_subparsers_plugin_subparsers.add_parser("search")
607 parser_repo_subparsers_plugin_search.set_defaults(func=bw_repo_plugin_search)
608 parser_repo_subparsers_plugin_search.add_argument(
609 'term',
610 metavar=_("SEARCH_STRING"),
611 nargs='?',
612 type=str,
613 help=_("look for this string in plugin names and descriptions"),
614 )
615
616 # bw repo plugin update
617 parser_repo_subparsers_plugin_update = parser_repo_subparsers_plugin_subparsers.add_parser("update")
618 parser_repo_subparsers_plugin_update.set_defaults(func=bw_repo_plugin_update)
619 parser_repo_subparsers_plugin_update.add_argument(
620 'plugin',
621 default=None,
622 metavar=_("PLUGIN_NAME"),
623 nargs='?',
624 type=str,
625 help=_("name of plugin to update"),
626 )
627 parser_repo_subparsers_plugin_update.add_argument(
628 "-c",
629 "--check-only",
630 action='store_true',
631 dest='check_only',
632 help=_("only show what would be updated"),
633 )
634 parser_repo_subparsers_plugin_update.add_argument(
635 "-f",
636 "--force",
637 action='store_true',
638 dest='force',
639 help=_("overwrite local modifications when updating"),
640 )
641
642 # bw run
643 help_run = _("Run a one-off command on a number of nodes")
644 parser_run = subparsers.add_parser("run", description=help_run, help=help_run)
645 parser_run.set_defaults(func=bw_run)
646 parser_run.add_argument(
647 'target',
648 metavar=_("NODE1,NODE2,GROUP1,bundle:BUNDLE1..."),
649 type=str,
650 help=_("target nodes, groups and/or bundle selectors"),
651 )
652 parser_run.add_argument(
653 'command',
654 metavar=_("COMMAND"),
655 type=str,
656 help=_("command to run"),
657 )
658 parser_run.add_argument(
659 "-f",
660 "--may-fail",
661 action='store_true',
662 dest='may_fail',
663 help=_("ignore non-zero exit codes"),
664 )
665 parser_run.add_argument(
666 "--force",
667 action='store_true',
668 dest='ignore_locks',
669 help=_("ignore soft locks on target nodes"),
670 )
671 bw_run_p_default = int(environ.get("BW_NODE_WORKERS", "1"))
672 parser_run.add_argument(
673 "-p",
674 "--parallel-nodes",
675 default=bw_run_p_default,
676 dest='node_workers',
677 help=_("number of nodes to run command on simultaneously "
678 "(defaults to {})").format(bw_run_p_default),
679 type=int,
680 )
681
682 # bw stats
683 help_stats = _("Show some statistics about your repository")
684 parser_stats = subparsers.add_parser("stats", description=help_stats, help=help_stats)
685 parser_stats.set_defaults(func=bw_stats)
686
687 # bw test
688 help_test = _("Test your repository for consistency "
689 "(you can use this with a CI tool like Jenkins)")
690 parser_test = subparsers.add_parser("test", description=help_test, help=help_test)
691 parser_test.set_defaults(func=bw_test)
692 parser_test.add_argument(
693 'target',
694 default=None,
695 metavar=_("NODE1,NODE2,GROUP1,bundle:BUNDLE1..."),
696 nargs='?',
697 type=str,
698 help=_("target nodes, groups and/or bundle selectors"),
699 )
700 parser_test.add_argument(
701 "-c",
702 "--plugin-conflict-error",
703 action='store_true',
704 dest='plugin_conflict_error',
705 help=_("check for local modifications to files installed by plugins"),
706 )
707 parser_test.add_argument(
708 "-d",
709 "--config-determinism",
710 default=0,
711 dest='determinism_config',
712 help=_("verify determinism of configuration by running `bw hash` N times "
713 "and checking for consistent results (with N > 1)"),
714 metavar="N",
715 type=int,
716 )
717 parser_test.add_argument(
718 "-i",
719 "--ignore-missing-faults",
720 action='store_true',
721 dest='ignore_missing_faults',
722 help=_("do not fail when encountering a missing Fault"),
723 )
724 parser_test.add_argument(
725 "-m",
726 "--metadata-determinism",
727 default=0,
728 dest='determinism_metadata',
729 help=_("verify determinism of metadata by running `bw hash -m` N times "
730 "and checking for consistent results (with N > 1)"),
731 metavar="N",
732 type=int,
733 )
734 bw_test_p_default = int(environ.get("BW_NODE_WORKERS", "1"))
735 parser_test.add_argument(
736 "-p",
737 "--parallel-nodes",
738 default=bw_test_p_default,
739 dest='node_workers',
740 help=_("number of nodes to test simultaneously "
741 "(defaults to {})").format(bw_test_p_default),
742 type=int,
743 )
744 bw_test_p_items_default = int(environ.get("BW_ITEM_WORKERS", "4"))
745 parser_test.add_argument(
746 "-P",
747 "--parallel-items",
748 default=bw_test_p_items_default,
749 dest='item_workers',
750 help=_("number of items to test simultaneously for each node "
751 "(defaults to {})").format(bw_test_p_items_default),
752 type=int,
753 )
754
755 # bw verify
756 help_verify = _("Inspect the health or 'correctness' of a node without changing it")
757 parser_verify = subparsers.add_parser("verify", description=help_verify, help=help_verify)
758 parser_verify.set_defaults(func=bw_verify)
759 parser_verify.add_argument(
760 'target',
761 metavar=_("NODE1,NODE2,GROUP1,bundle:BUNDLE1..."),
762 type=str,
763 help=_("target nodes, groups and/or bundle selectors"),
764 )
765 parser_verify.add_argument(
766 "-a",
767 "--show-all",
768 action='store_true',
769 dest='show_all',
770 help=_("show correct items as well as incorrect ones"),
771 )
772 bw_verify_p_default = int(environ.get("BW_NODE_WORKERS", "4"))
773 parser_verify.add_argument(
774 "-p",
775 "--parallel-nodes",
776 default=bw_verify_p_default,
777 dest='node_workers',
778 help=_("number of nodes to verify simultaneously "
779 "(defaults to {})").format(bw_verify_p_default),
780 type=int,
781 )
782 bw_verify_p_items_default = int(environ.get("BW_ITEM_WORKERS", "4"))
783 parser_verify.add_argument(
784 "-P",
785 "--parallel-items",
786 default=bw_verify_p_items_default,
787 dest='item_workers',
788 help=_("number of items to verify simultaneously on each node "
789 "(defaults to {})").format(bw_verify_p_items_default),
790 type=int,
791 )
792 parser_verify.add_argument(
793 "-S",
794 "--no-summary",
795 action='store_false',
796 dest='summary',
797 help=_("don't show stats summary"),
798 )
799
800 # bw zen
801 parser_zen = subparsers.add_parser("zen")
802 parser_zen.set_defaults(func=bw_zen)
803 return parser
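The parser construction above repeatedly resolves defaults from environment variables (`BW_NODE_WORKERS`, `BW_SOFTLOCK_EXPIRY`) at build time so the help text can embed the effective default. A minimal, self-contained sketch of that pattern, mirroring `bw lock add` (the function and prog names here are illustrative, not part of the module):

```python
from argparse import ArgumentParser
from os import environ


def build_lock_add_parser():
    # Defaults are read from the environment once, when the parser is
    # built, so help strings can show the value actually in effect.
    expiry_default = environ.get("BW_SOFTLOCK_EXPIRY", "8h")
    workers_default = int(environ.get("BW_NODE_WORKERS", "4"))
    parser = ArgumentParser(prog="bw-lock-add-sketch")
    parser.add_argument("target")
    parser.add_argument(
        "-e", "--expires-in",
        default=expiry_default,
        dest="expiry",
        help="how long before the lock expires (defaults to {})".format(expiry_default),
    )
    parser.add_argument(
        "-p", "--parallel-nodes",
        default=workers_default,
        dest="node_workers",
        type=int,
        help="nodes to lock simultaneously (defaults to {})".format(workers_default),
    )
    return parser
```

One consequence of this design: changing the environment variable after the parser has been built does not affect the default, which is why each subcommand re-reads `environ` into its own `*_default` variable.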
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 import re
4
5 from ..deps import prepare_dependencies
6 from ..utils import graph_for_items, names
7 from ..utils.cmdline import get_group, get_node
8 from ..utils.ui import io
9
10
11 def bw_plot_group(repo, args):
12 group = get_group(repo, args['group']) if args['group'] else None
13
14 if args['show_nodes']:
15 nodes = group.nodes if group else repo.nodes
16 else:
17 nodes = []
18
19 if group:
20 groups = [group]
21 groups.extend(group.subgroups)
22 else:
23 groups = repo.groups
24
25 for line in plot_group(groups, nodes, args['show_nodes']):
26 io.stdout(line)
27
28
29 def plot_group(groups, nodes, show_nodes):
30 yield "digraph bundlewrap"
31 yield "{"
32
33 # Print subgraphs *below* each other
34 yield "rankdir = LR"
35
36 # Global attributes
37 yield ("node [color=\"#303030\"; "
38 "fillcolor=\"#303030\"; "
39 "fontname=Helvetica]")
40 yield "edge [arrowhead=vee]"
41
42 for group in groups:
43 yield "\"{}\" [fontcolor=white,style=filled];".format(group.name)
44
45 for node in nodes:
46 yield "\"{}\" [fontcolor=\"#303030\",shape=box,style=rounded];".format(node.name)
47
48 for group in groups:
49 for subgroup in group.immediate_subgroup_names:
50 yield "\"{}\" -> \"{}\" [color=\"#6BB753\",penwidth=2]".format(group.name, subgroup)
51
52 if show_nodes:
53 for group in groups:
54 for node in group._nodes_from_members:
55 yield "\"{}\" -> \"{}\" [color=\"#D18C57\",penwidth=2]".format(
56 group.name, node.name)
57
58 for node in group._nodes_from_patterns:
59 yield "\"{}\" -> \"{}\" [color=\"#714D99\",penwidth=2]".format(
60 group.name, node.name)
61
62 for node in nodes:
63 if group in node._groups_dynamic:
64 yield "\"{}\" -> \"{}\" [color=\"#FF0000\",penwidth=2]".format(
65 group.name, node.name)
66
67 yield "}"
68
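The generator above emits one DOT statement per yield: a fixed header, one styled declaration per group/node, one colored edge per relationship, and a closing brace. A stripped-down sketch of the same emission pattern, with group and node objects replaced by plain strings for illustration:

```python
def plot_sketch(groups, edges):
    # Header and global layout, as in plot_group() above.
    yield "digraph bundlewrap"
    yield "{"
    yield "rankdir = LR"
    # One declaration per group node in the graph.
    for group in groups:
        yield '"{}" [fontcolor=white,style=filled];'.format(group)
    # One edge per (parent, child) relationship.
    for parent, child in edges:
        yield '"{}" -> "{}" [color="#6BB753",penwidth=2]'.format(parent, child)
    yield "}"
```

Joining the yielded lines with newlines produces input suitable for `dot -Tsvg`.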
69
70 def bw_plot_node(repo, args):
71 node = get_node(repo, args['node'], adhoc_nodes=args['adhoc_nodes'])
72 for line in graph_for_items(
73 node.name,
74 prepare_dependencies(node.items),
75 cluster=args['cluster'],
76 concurrency=args['depends_concurrency'],
77 static=args['depends_static'],
78 regular=args['depends_regular'],
79 reverse=args['depends_reverse'],
80 auto=args['depends_auto'],
81 ):
82 io.stdout(line)
83
84
85 def bw_plot_node_groups(repo, args):
86 node = get_node(repo, args['node'], adhoc_nodes=args['adhoc_nodes'])
87 for line in plot_node_groups(node):
88 io.stdout(line)
89
90
91 def plot_node_groups(node):
92 yield "digraph bundlewrap"
93 yield "{"
94
95 # Print subgraphs *below* each other
96 yield "rankdir = LR"
97
98 # Global attributes
99 yield ("node [color=\"#303030\"; "
100 "fillcolor=\"#303030\"; "
101 "fontname=Helvetica]")
102 yield "edge [arrowhead=vee]"
103
104 for group in node.groups:
105 yield "\"{}\" [fontcolor=white,style=filled];".format(group.name)
106
107 yield "\"{}\" [fontcolor=\"#303030\",shape=box,style=rounded];".format(node.name)
108
109 for group in node.groups:
110 for subgroup in group.immediate_subgroup_names:
111 if subgroup in names(node.groups):
112 yield "\"{}\" -> \"{}\" [color=\"#6BB753\",penwidth=2]".format(group.name, subgroup)
113 for pattern in group.immediate_subgroup_patterns:
114 compiled_pattern = re.compile(pattern)
115 for group2 in node.groups:
116 if compiled_pattern.search(group2.name) is not None and group2 != group:
117 yield "\"{}\" -> \"{}\" [color=\"#6BB753\",penwidth=2]".format(group.name, group2.name)
118
119 for group in node.groups:
120 if node in group._nodes_from_members:
121 yield "\"{}\" -> \"{}\" [color=\"#D18C57\",penwidth=2]".format(
122 group.name, node.name)
123 elif node in group._nodes_from_patterns:
124 yield "\"{}\" -> \"{}\" [color=\"#714D99\",penwidth=2]".format(
125 group.name, node.name)
126 elif group in node._groups_dynamic:
127 yield "\"{}\" -> \"{}\" [color=\"#FF0000\",penwidth=2]".format(
128 group.name, node.name)
129
130 yield "}"
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from sys import exit
4
5 from ..exceptions import NoSuchPlugin, PluginLocalConflict
6 from ..plugins import PluginManager
7 from ..repo import Repository
8 from ..utils.text import blue, bold, mark_for_translation as _, red
9 from ..utils.ui import io
10
11
12 def bw_repo_bundle_create(repo, args):
13 repo.create_bundle(args['bundle'])
14
15
16 def bw_repo_create(path, args):
17 Repository.create(path)
18
19
20 def bw_repo_plugin_install(repo, args):
21 pm = PluginManager(repo.path)
22 try:
23 manifest = pm.install(args['plugin'], force=args['force'])
24 io.stdout(_("{x} Installed '{plugin}' (v{version})").format(
25 x=blue("i"),
26 plugin=args['plugin'],
27 version=manifest['version'],
28 ))
29 if 'help' in manifest:
30 io.stdout("")
31 for line in manifest['help'].split("\n"):
32 io.stdout(line)
33 except NoSuchPlugin:
34 io.stderr(_("{x} No such plugin: {plugin}").format(x=red("!!!"), plugin=args['plugin']))
35 exit(1)
36 except PluginLocalConflict as e:
37 io.stderr(_("{x} Plugin installation failed: {reason}").format(
38             reason=str(e),
39 x=red("!!!"),
40 ))
41 exit(1)
42
43
44 def bw_repo_plugin_list(repo, args):
45 pm = PluginManager(repo.path)
46 for plugin, version in pm.list():
47 io.stdout(_("{plugin} (v{version})").format(plugin=plugin, version=version))
48
49
50 def bw_repo_plugin_remove(repo, args):
51 pm = PluginManager(repo.path)
52 try:
53 pm.remove(args['plugin'], force=args['force'])
54 except NoSuchPlugin:
55         io.stderr(_("{x} Plugin '{plugin}' is not installed").format(
56 x=red("!!!"),
57 plugin=args['plugin'],
58 ))
59 exit(1)
60
61
62 def bw_repo_plugin_search(repo, args):
63 pm = PluginManager(repo.path)
64 for plugin, desc in pm.search(args['term']):
65 io.stdout(_("{plugin} {desc}").format(desc=desc, plugin=bold(plugin)))
66
67
68 def bw_repo_plugin_update(repo, args):
69 pm = PluginManager(repo.path)
70 if args['plugin']:
71 old_version, new_version = pm.update(
72 args['plugin'],
73 check_only=args['check_only'],
74 force=args['force'],
75 )
76 if old_version != new_version:
77 io.stdout(_("{plugin} {old_version} → {new_version}").format(
78 new_version=new_version,
79 old_version=old_version,
80 plugin=bold(args['plugin']),
81 ))
82 else:
83 for plugin, version in pm.list():
84 old_version, new_version = pm.update(
85 plugin,
86 check_only=args['check_only'],
87 force=args['force'],
88 )
89 if old_version != new_version:
90 io.stdout(_("{plugin} {old_version} → {new_version}").format(
91 new_version=new_version,
92 old_version=old_version,
93 plugin=bold(plugin),
94 ))
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from datetime import datetime
4
5 from ..concurrency import WorkerPool
6 from ..exceptions import NodeLockedException
7 from ..utils.cmdline import get_target_nodes
8 from ..utils.text import mark_for_translation as _
9 from ..utils.text import bold, error_summary, green, red, yellow
10 from ..utils.ui import io
11
12
13 def run_on_node(node, command, may_fail, ignore_locks, log_output):
14 if node.dummy:
15 io.stdout(_("{x} {node} is a dummy node").format(node=bold(node.name), x=yellow("!")))
16 return
17
18 node.repo.hooks.node_run_start(
19 node.repo,
20 node,
21 command,
22 )
23
24 start = datetime.now()
25 result = node.run(
26 command,
27 may_fail=may_fail,
28 log_output=log_output,
29 )
30 end = datetime.now()
31 duration = end - start
32
33 node.repo.hooks.node_run_end(
34 node.repo,
35 node,
36 command,
37 duration=duration,
38 return_code=result.return_code,
39 stdout=result.stdout,
40 stderr=result.stderr,
41 )
42
43 if result.return_code == 0:
44 io.stdout("{x} {node} {msg}".format(
45 msg=_("completed successfully after {time}s").format(
46 time=duration.total_seconds(),
47 ),
48 node=bold(node.name),
49 x=green("✓"),
50 ))
51 else:
52 io.stderr("{x} {node} {msg}".format(
53 msg=_("failed after {time}s (return code {rcode})").format(
54 rcode=result.return_code,
55 time=duration.total_seconds(),
56 ),
57 node=bold(node.name),
58 x=red("✘"),
59 ))
60
61
62 def bw_run(repo, args):
63 errors = []
64 target_nodes = get_target_nodes(repo, args['target'], adhoc_nodes=args['adhoc_nodes'])
65 pending_nodes = target_nodes[:]
66
67 repo.hooks.run_start(
68 repo,
69 args['target'],
70 target_nodes,
71 args['command'],
72 )
73 start_time = datetime.now()
74
75 def tasks_available():
76 return bool(pending_nodes)
77
78 def next_task():
79 node = pending_nodes.pop()
80 return {
81 'target': run_on_node,
82 'task_id': node.name,
83 'args': (
84 node,
85 args['command'],
86 args['may_fail'],
87 args['ignore_locks'],
88 True,
89 ),
90 }
91
92 def handle_exception(task_id, exception, traceback):
93 if isinstance(exception, NodeLockedException):
94 msg = _(
95 "{node_bold} locked by {user} "
96 "(see `bw lock show {node}` for details)"
97 ).format(
98 node_bold=bold(task_id),
99 node=task_id,
100 user=exception.args[0]['user'],
101 )
102 else:
103 msg = "{} {}".format(bold(task_id), exception)
104 io.stderr(traceback)
105 io.stderr(repr(exception))
106 io.stderr("{} {}".format(red("!"), msg))
107 errors.append(msg)
108
109 worker_pool = WorkerPool(
110 tasks_available,
111 next_task,
112 handle_exception=handle_exception,
113 pool_id="run",
114 workers=args['node_workers'],
115 )
116 worker_pool.run()
117
118 error_summary(errors)
119
120 repo.hooks.run_end(
121 repo,
122 args['target'],
123 target_nodes,
124 args['command'],
125 duration=datetime.now() - start_time,
126 )
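`bw_run` drives `WorkerPool` through three callbacks: `tasks_available` reports whether work remains, `next_task` pops one node, and `handle_exception` collects failures instead of aborting the whole run. That callback shape can be emulated with the standard library (this is a sketch of the pattern, not BundleWrap's actual `WorkerPool`):

```python
from concurrent.futures import ThreadPoolExecutor


def run_tasks(pending, worker, workers=4):
    errors = []

    def wrapped(task):
        # Mirror handle_exception(): record the failure and keep going.
        try:
            return worker(task)
        except Exception as exc:
            errors.append((task, exc))

    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(wrapped, pending))
    return results, errors
```

Failed tasks yield `None` in `results` and an entry in `errors`, analogous to how `bw_run` prints per-node errors and summarizes them at the end.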
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from operator import itemgetter
4
5 from ..utils.text import mark_for_translation as _
6 from ..utils.ui import io
7
8
9 def bw_stats(repo, args):
10 io.stdout(_("{} nodes").format(len(repo.nodes)))
11 io.stdout(_("{} groups").format(len(repo.groups)))
12 io.stdout(_("{} items").format(sum([len(list(node.items)) for node in repo.nodes])))
13 items = {}
14 for node in repo.nodes:
15 for item in node.items:
16 items.setdefault(item.ITEM_TYPE_NAME, 0)
17 items[item.ITEM_TYPE_NAME] += 1
18 for item_type, count in sorted(items.items(), key=itemgetter(1), reverse=True):
19 io.stdout(" {} {}".format(count, item_type))
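The `setdefault`/increment tally in `bw_stats` is equivalent to a `collections.Counter`, whose `most_common()` returns the same sorted-by-count-descending ordering as the `sorted(..., key=itemgetter(1), reverse=True)` call above:

```python
from collections import Counter


def item_type_counts(item_types):
    # item_types is an iterable of ITEM_TYPE_NAME strings, one per item.
    return Counter(item_types).most_common()
```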
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from copy import copy
4 from sys import exit
5
6 from ..concurrency import WorkerPool
7 from ..plugins import PluginManager
8 from ..repo import Repository
9 from ..utils.cmdline import get_target_nodes
10 from ..utils.text import bold, green, mark_for_translation as _, red
11 from ..utils.ui import io
12
13
14 def bw_test(repo, args):
15 if args['target']:
16 pending_nodes = get_target_nodes(repo, args['target'], adhoc_nodes=args['adhoc_nodes'])
17 else:
18 pending_nodes = copy(list(repo.nodes))
19
20 def tasks_available():
21 return bool(pending_nodes)
22
23 def next_task():
24 node = pending_nodes.pop()
25 return {
26 'target': node.test,
27 'task_id': node.name,
28 'kwargs': {
29 'ignore_missing_faults': args['ignore_missing_faults'],
30 'workers': args['item_workers'],
31 },
32 }
33
34 worker_pool = WorkerPool(
35 tasks_available,
36 next_task,
37 pool_id="test",
38 workers=args['node_workers'],
39 )
40 worker_pool.run()
41
42 checked_groups = []
43 for group in repo.groups:
44 if group in checked_groups:
45 continue
46 with io.job(_(" {group} checking for subgroup loops...").format(group=group.name)):
47 checked_groups.extend(group.subgroups) # the subgroups property has the check built in
48 io.stdout(_("{x} {group} has no subgroup loops").format(
49 x=green("✓"),
50 group=bold(group.name),
51 ))
52
53 # check for plugin inconsistencies
54 if args['plugin_conflict_error']:
55 pm = PluginManager(repo.path)
56 for plugin, version in pm.list():
57 local_changes = pm.local_modifications(plugin)
58 if local_changes:
59 io.stderr(_("{x} Plugin '{plugin}' has local modifications:").format(
60 plugin=plugin,
61 x=red("✘"),
62 ))
63 for path, actual_checksum, should_checksum in local_changes:
64 io.stderr(_("\t{path} ({actual_checksum}) should be {should_checksum}").format(
65 actual_checksum=actual_checksum,
66 path=path,
67 should_checksum=should_checksum,
68 ))
69 exit(1)
70 else:
71 io.stdout(_("{x} Plugin '{plugin}' has no local modifications.").format(
72 plugin=plugin,
73 x=green("✓"),
74 ))
75
76 # generate metadata a couple of times for every node and see if
77 # anything changes between iterations
78 if args['determinism_metadata'] > 1:
79 hashes = {}
80 for i in range(args['determinism_metadata']):
81 repo = Repository(repo.path)
82 if args['target']:
83 nodes = get_target_nodes(repo, args['target'], adhoc_nodes=args['adhoc_nodes'])
84 else:
85 nodes = repo.nodes
86 for node in nodes:
87 with io.job(_(" {node} generating metadata ({i}/{n})... ").format(
88 i=i + 1,
89 n=args['determinism_metadata'],
90 node=node.name,
91 )):
92 result = node.metadata_hash()
93 hashes.setdefault(node.name, result)
94 if hashes[node.name] != result:
95 io.stderr(_(
96 "{x} Metadata for node {node} changed when generated repeatedly "
97 "(use `bw hash -d {node}` to debug)"
98 ).format(node=node.name, x=red("✘")))
99 exit(1)
100 io.stdout(_("{x} Metadata remained the same after being generated {n} times").format(
101 n=args['determinism_metadata'],
102 x=green("✓"),
103 ))
104
105 # generate configuration a couple of times for every node and see if
106 # anything changes between iterations
107 if args['determinism_config'] > 1:
108 hashes = {}
109 for i in range(args['determinism_config']):
110 repo = Repository(repo.path)
111 if args['target']:
112 nodes = get_target_nodes(repo, args['target'], adhoc_nodes=args['adhoc_nodes'])
113 else:
114 nodes = repo.nodes
115 for node in nodes:
116 with io.job(_(" {node} generating configuration ({i}/{n})...").format(
117 i=i + 1,
118 n=args['determinism_config'],
119 node=node.name,
120 )):
121 result = node.hash()
122 hashes.setdefault(node.name, result)
123 if hashes[node.name] != result:
124 io.stderr(_(
125 "{x} Configuration for node {node} changed when generated repeatedly "
126 "(use `bw hash -d {node}` to debug)"
127 ).format(node=node.name, x=red("✘")))
128 exit(1)
129 io.stdout(_("{x} Configuration remained the same after being generated {n} times").format(
130 n=args['determinism_config'],
131 x=green("✓"),
132 ))
133
134 if not args['target']:
135 repo.hooks.test(repo)
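Both determinism checks above (`-m` for metadata, `-d` for configuration) follow the same scheme: recompute a hash N times from a fresh `Repository` and fail on the first deviation. The core of that check, with `hash_func` standing in for `node.metadata_hash()` or `node.hash()`:

```python
def is_deterministic(hash_func, rounds=3):
    # Compute once, then verify every subsequent round matches.
    first = hash_func()
    for _ in range(rounds - 1):
        if hash_func() != first:
            return False
    return True
```

Note that `bw test` additionally rebuilds the `Repository` between rounds, so caches cannot mask nondeterminism; this sketch only shows the comparison logic.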
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from sys import exit
4
5 from ..concurrency import WorkerPool
6 from ..utils.cmdline import get_target_nodes
7 from ..utils.text import error_summary, mark_for_translation as _
8 from ..utils.ui import io
9
10
11 def stats_summary(node_stats):
12 for node in node_stats.keys():
13 node_stats[node]['total'] = node_stats[node]['good'] + node_stats[node]['bad']
14 try:
15 node_stats[node]['health'] = \
16 (node_stats[node]['good'] / float(node_stats[node]['total'])) * 100.0
17 except ZeroDivisionError:
18 node_stats[node]['health'] = 0
19
20 total_items = 0
21 total_good = 0
22
23 node_ranking = []
24
25 for node_name, stats in node_stats.items():
26 total_items += stats['total']
27 total_good += stats['good']
28 node_ranking.append((
29 stats['health'],
30 node_name,
31 stats['good'],
32 stats['total'],
33 ))
34
35 node_ranking = sorted(node_ranking)
36 node_ranking.reverse()
37
38 try:
39 overall_health = (total_good / float(total_items)) * 100.0
40 except ZeroDivisionError:
41 overall_health = 0
42
43 if len(node_ranking) == 1:
44 io.stdout(_("node health: {health:.1f}% ({good}/{total} OK)").format(
45 good=node_ranking[0][2],
46 health=node_ranking[0][0],
47 total=node_ranking[0][3],
48 ))
49 else:
50 io.stdout(_("node health:"))
51 for health, node_name, good, total in node_ranking:
52 io.stdout(_(" {health}% {node_name} ({good}/{total} OK)").format(
53 good=good,
54 health="{:.1f}".format(health).rjust(5, " "),
55 node_name=node_name,
56 total=total,
57 ))
58 io.stdout(_("overall: {health:.1f}% ({good}/{total} OK)").format(
59 good=total_good,
60 health=overall_health,
61 total=total_items,
62 ))
63
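The health figure in `stats_summary` is simply the percentage of verified items that were good, with nodes that have no items scored as 0 rather than raising:

```python
def health_percent(good, total):
    # Matches the try/except ZeroDivisionError guard in stats_summary().
    try:
        return (good / float(total)) * 100.0
    except ZeroDivisionError:
        return 0
```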
64
65 def bw_verify(repo, args):
66 errors = []
67 node_stats = {}
68 pending_nodes = get_target_nodes(repo, args['target'], adhoc_nodes=args['adhoc_nodes'])
69
70 def tasks_available():
71 return bool(pending_nodes)
72
73 def next_task():
74 node = pending_nodes.pop()
75 return {
76 'target': node.verify,
77 'task_id': node.name,
78 'kwargs': {
79 'show_all': args['show_all'],
80 'workers': args['item_workers'],
81 },
82 }
83
84 def handle_result(task_id, return_value, duration):
85 node_stats[task_id] = return_value
86
87 def handle_exception(task_id, exception, traceback):
88 msg = "{}: {}".format(
89 task_id,
90 exception,
91 )
92 io.stderr(traceback)
93 io.stderr(repr(exception))
94 io.stderr(msg)
95 errors.append(msg)
96
97 worker_pool = WorkerPool(
98 tasks_available,
99 next_task,
100 handle_result=handle_result,
101 handle_exception=handle_exception,
102 pool_id="verify",
103 workers=args['node_workers'],
104 )
105 worker_pool.run()
106
107 if args['summary']:
108 stats_summary(node_stats)
109
110 error_summary(errors)
111
112 exit(1 if errors else 0)
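The per-node health figure computed in `stats_summary` above boils down to a guarded percentage; a minimal standalone sketch (the `node_health` helper is invented for illustration, not part of this file):

```python
def node_health(good, bad):
    # mirrors stats_summary: good / total as a percentage,
    # with 0 for nodes that have no items at all
    total = good + bad
    try:
        return (good / float(total)) * 100.0
    except ZeroDivisionError:
        return 0
```

A node with 9 good and 1 bad item reports 90.0; a node with no items reports 0 rather than crashing.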
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from ..utils.text import mark_for_translation as _
4 from ..utils.ui import io
5
6 ZEN = _("""
7 ,
8 @@
9 @@@@
10 @@@@@
11 @@@@@
12 @@@@@
13 @@@@@
14 @@@@@
15 @@@@@ '@@@@@@, .@@@@@@+ +@@@@@@.
16 @@@@@@, `@@@@@@@ +@@@@@@, `@@@@@@#
17 @@@@@@@@+ :@@@@@@' `@@@@@@@ ;@@@@@@:
18 @@@@@@@@@@@` #@@@@@@. :@@@@@@' @@@@@@@`
19 @@@@@ ;@@@@@@; .@@@@@@# #@@@@@@` ,@@@@@@+
20 @@@@@ `@@@@@@#'@@@@@@: .@@@@@@+ +@@@@@@.
21 @@@@@ +@@@@@@@@@ +@@@@@@, `@@@@@@#
22 @@@@@ ,@@@@@@+ `@@@@@@@@@` ;@@@@@@:
23 @@@@@ @@@@@@@` :@@@@@@'@@@@@@' @@@@@@@`
24 @@@@@ ;@@@@@@#@@@@@@` `@@@@@@@@@@@@@+
25 @@@@@@@@@@@@@@@@@@@@@@@@@@@@@# +@@@@@@@@.
26 @@@@@@@@@@@@@@@@@@@@@@@@@@@, .@@@#
27
28
29 The Zen of BundleWrap
30 ─────────────────────
31
32 BundleWrap is a tool, not a solution.
33 BundleWrap will not write your configuration for you.
34 BundleWrap is Python all the way down.
35 BundleWrap will adapt rather than grow.
36 BundleWrap is the single point of truth.
37 """)
38
39 def bw_zen(repo, args):
40 io.stdout(ZEN)
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED
4 from datetime import datetime
5 from random import randint
6 from sys import exit
7 from traceback import format_tb
8
9 from .utils.text import mark_for_translation as _
10 from .utils.ui import io, QUIT_EVENT
11
12 JOIN_TIMEOUT = 5 # seconds
13
14
15 class WorkerPool(object):
16 """
17 Manages a bunch of worker threads.
18 """
19 def __init__(
20 self,
21 tasks_available,
22 next_task,
23 handle_result=None,
24 handle_exception=None,
25 pool_id=None,
26 workers=4,
27 ):
28 if workers < 1:
29 raise ValueError(_("at least one worker is required"))
30
31 self.tasks_available = tasks_available
32 self.next_task = next_task
33 self.handle_result = handle_result
34 self.handle_exception = handle_exception
35
36 self.number_of_workers = workers
37 self.idle_workers = set(range(self.number_of_workers))
38
39 self.pool_id = "unnamed_pool_{}".format(randint(1, 99999)) if pool_id is None else pool_id
40 self.pending_futures = {}
41
42 def _get_result(self):
43 """
44 Blocks until a result from a worker is received.
45 """
46 io.debug(_("worker pool {pool} waiting for next task to complete").format(
47 pool=self.pool_id,
48 ))
49 while True:
50 # we must use a timeout here to allow Python <3.3 to call
51 # its SIGINT handler
52 # see also http://stackoverflow.com/q/25676835
53 completed, pending = wait(
54 self.pending_futures.keys(),
55 return_when=FIRST_COMPLETED,
56 timeout=0.1,
57 )
58 if completed:
59 break
60 future = completed.pop()
61
62 start_time = self.pending_futures[future]['start_time']
63 task_id = self.pending_futures[future]['task_id']
64 worker_id = self.pending_futures[future]['worker_id']
65
66 del self.pending_futures[future]
67 self.idle_workers.add(worker_id)
68
69 exception = future.exception()
70 if exception:
71 io.debug(_(
72 "exception raised while executing task {task} on worker #{worker} "
73 "of worker pool {pool}"
74 ).format(
75 pool=self.pool_id,
76 task=task_id,
77 worker=worker_id,
78 ))
79 if not hasattr(exception, '__traceback__'): # Python 2
80 exception.__traceback__ = future.exception_info()[1]
81 exception.__task_id = task_id
82 raise exception
83 else:
84 io.debug(_(
85 "worker pool {pool} delivering result of {task} on worker #{worker}"
86 ).format(
87 pool=self.pool_id,
88 task=task_id,
89 worker=worker_id,
90 ))
91 return (task_id, future.result(), datetime.now() - start_time)
92
93 def start_task(self, target=None, task_id=None, args=None, kwargs=None):
94 """
95 target any callable (includes bound methods)
96 task_id something to remember this worker by
97 args list of positional arguments passed to target
98 kwargs dictionary of keyword arguments passed to target
99 """
100 if args is None:
101 args = []
102 else:
103 args = list(args)
104 if kwargs is None:
105 kwargs = {}
106
107 task_id = "unnamed_task_{}".format(randint(1, 99999)) if task_id is None else task_id
108 worker_id = self.idle_workers.pop()
109
110 io.debug(_("worker pool {pool} is starting task {task} on worker #{worker}").format(
111 pool=self.pool_id,
112 task=task_id,
113 worker=worker_id,
114 ))
115 self.pending_futures[self.executor.submit(target, *args, **kwargs)] = {
116 'start_time': datetime.now(),
117 'task_id': task_id,
118 'worker_id': worker_id,
119 }
120
121 def run(self):
122 io.debug(_("spinning up worker pool {pool}").format(pool=self.pool_id))
123 processed_results = []
124 exit_code = None
125 self.executor = ThreadPoolExecutor(max_workers=self.number_of_workers)
126 try:
127 while (
128 (self.tasks_available() and not QUIT_EVENT.is_set()) or
129 self.workers_are_running
130 ):
131 while (
132 self.tasks_available() and
133 self.workers_are_available and
134 not QUIT_EVENT.is_set()
135 ):
136 task = self.next_task()
137 if task is not None:
138 self.start_task(**task)
139
140 if self.workers_are_running:
141 try:
142 result = self._get_result()
143 except SystemExit as exc:
144 if exit_code is None:
145 # Don't overwrite exit code if it has already been set.
146 # This may be a worker exiting with 0 only because
147 # a previous worker raised SystemExit with 1.
148 # We must preserve that original exit code.
149 exit_code = exc.code
150 # just make sure QUIT_EVENT is set and continue
151 # waiting for pending results
152 QUIT_EVENT.set()
153 except Exception as exc:
154 traceback = "".join(format_tb(exc.__traceback__))
155 if self.handle_exception is None:
156 raise exc
157 else:
158 processed_results.append(
159 self.handle_exception(exc.__task_id, exc, traceback)
160 )
161 else:
162 if self.handle_result is not None:
163 processed_results.append(self.handle_result(*result))
164 if QUIT_EVENT.is_set():
165 # we have reaped all our workers, let's stop this thread
166 # before it does anything else
167 exit(0 if exit_code is None else exit_code)
168 return processed_results
169 finally:
170 io.debug(_("shutting down worker pool {pool}").format(pool=self.pool_id))
171 self.executor.shutdown()
172 io.debug(_("worker pool {pool} has been shut down").format(pool=self.pool_id))
173
174 @property
175 def workers_are_available(self):
176 return bool(self.idle_workers)
177
178 @property
179 def workers_are_running(self):
180 return bool(self.pending_futures)
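The callback protocol above (`tasks_available` / `next_task` / `handle_result`) can be exercised with a much smaller stand-in; this sketch reimplements just the submit-and-reap loop on a plain `ThreadPoolExecutor` and is not BundleWrap code (the squaring task is made up):

```python
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def run_pool(tasks_available, next_task, handle_result, workers=4):
    # simplified analogue of WorkerPool.run(): keep submitting while
    # tasks remain, reap completed futures, collect handled results
    results = []
    pending = set()
    with ThreadPoolExecutor(max_workers=workers) as executor:
        while tasks_available() or pending:
            while tasks_available() and len(pending) < workers:
                task = next_task()
                pending.add(executor.submit(task['target']))
            done, pending = wait(pending, return_when=FIRST_COMPLETED)
            for future in done:
                results.append(handle_result(future.result()))
    return results

queue = [1, 2, 3, 4]

def next_task():
    n = queue.pop()
    return {'target': lambda: n * n}  # made-up task: square a number

squares = run_pool(lambda: bool(queue), next_task, lambda r: r, workers=2)
```

Results arrive in completion order, which is why `bw verify` sorts its node ranking afterwards instead of relying on submission order.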
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from .exceptions import BundleError, NoSuchItem
4 from .items import Item
5 from .items.actions import Action
6 from .utils.text import mark_for_translation as _
7 from .utils.ui import io
8
9
10 class DummyItem(object):
11 bundle = None
12 triggered = False
13
14 def __init__(self, *args, **kwargs):
15 self.needed_by = []
16 self.needs = []
17 self.preceded_by = []
18 self.precedes = []
19 self.tags = []
20 self.triggered_by = []
21 self.triggers = []
22 self._deps = []
23 self._precedes_items = []
24
25 def __lt__(self, other):
26 return self.id < other.id
27
28 def _precedes_incorrect_item(self):
29 return False
30
31 def apply(self, *args, **kwargs):
32 return (Item.STATUS_OK, [])
33
34 def test(self):
35 pass
36
37
38 class BundleItem(DummyItem):
39 """
40 Represents a dependency on all items in a certain bundle.
41 """
42 ITEM_TYPE_NAME = 'bundle'
43
44 def __init__(self, bundle):
45 self.bundle = bundle
46 super(BundleItem, self).__init__()
47
48 def __repr__(self):
49 return "<BundleItem: {}>".format(self.bundle.name)
50
51 @property
52 def id(self):
53 return "bundle:{}".format(self.bundle.name)
54
55
56 class TagItem(DummyItem):
57 """
58 Represents a dependency on all items with the given tag.
59 """
60 ITEM_TYPE_NAME = 'tag'
61
62 def __init__(self, tag_name):
63 self.tag_name = tag_name
64 super(TagItem, self).__init__()
65
66 def __repr__(self):
67 return "<TagItem: {}>".format(self.tag_name)
68
69 @property
70 def id(self):
71 return "tag:{}".format(self.tag_name)
72
73
74 class TypeItem(DummyItem):
75 """
76 Represents a dependency on all items of a certain type.
77 """
78 ITEM_TYPE_NAME = 'type'
79
80 def __init__(self, item_type):
81 self.item_type = item_type
82 super(TypeItem, self).__init__()
83
84 def __repr__(self):
85 return "<TypeItem: {}>".format(self.item_type)
86
87 @property
88 def id(self):
89 return "{}:".format(self.item_type)
90
91
92 def find_item(item_id, items):
93 """
94 Returns the first item with the given ID within the given list of
95 items.
96 """
97 try:
98 item = list(filter(lambda item: item.id == item_id, items))[0]
99 except IndexError:
100 raise NoSuchItem(_("item not found: {}").format(item_id))
101 return item
102
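`find_item` is a plain linear scan over item ids; a toy illustration with stand-in classes (everything below is invented for the example):

```python
class NoSuchItem(Exception):
    pass

class FakeItem(object):
    def __init__(self, item_id):
        self.id = item_id

def find_item(item_id, items):
    # same approach as above: first match wins, unknown ids raise
    try:
        return [item for item in items if item.id == item_id][0]
    except IndexError:
        raise NoSuchItem("item not found: {}".format(item_id))

items = [FakeItem("file:/etc/motd"), FakeItem("pkg_apt:vim")]
found = find_item("pkg_apt:vim", items)
```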
103
104 def _find_items_of_types(item_types, items, include_dummy=False):
105 """
106 Returns a subset of items with any of the given types.
107 """
108 return list(filter(
109 lambda item:
110 item.id.split(":", 1)[0] in item_types and (
111 include_dummy or not isinstance(item, DummyItem)
112 ),
113 items,
114 ))
115
116
117 def _flatten_dependencies(items):
118 """
119 This will cause all dependencies - direct AND inherited - to be
120 listed in item._flattened_deps.
121 """
122 for item in items:
123 item._flattened_deps = list(set(
124 item._deps + _get_deps_for_item(item, items)
125 ))
126 return items
127
128
129 def _get_deps_for_item(item, items, deps_found=None):
130 """
131 Recursively retrieves and returns a list of all inherited
132 dependencies of the given item.
133
134 Note: This can handle loops, but won't detect them.
135 """
136 if deps_found is None:
137 deps_found = []
138 deps = []
139 for dep in item._deps:
140 if dep not in deps_found:
141 deps.append(dep)
142 deps_found.append(dep)
143 deps += _get_deps_for_item(
144 find_item(dep, items),
145 items,
146 deps_found,
147 )
148 return deps
149
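The recursion in `_get_deps_for_item` tolerates dependency loops because the `deps_found` list is shared across all recursive calls; a self-contained sketch of the same walk (the item class and ids are invented):

```python
class FakeItem(object):
    def __init__(self, item_id, deps):
        self.id = item_id
        self._deps = deps

def inherited_deps(item, items_by_id, deps_found=None):
    # mirrors _get_deps_for_item: the shared deps_found list stops the
    # recursion from revisiting ids, so cycles terminate (undetected)
    if deps_found is None:
        deps_found = []
    deps = []
    for dep in item._deps:
        if dep not in deps_found:
            deps.append(dep)
            deps_found.append(dep)
            deps += inherited_deps(items_by_id[dep], items_by_id, deps_found)
    return deps

a = FakeItem("a", ["b"])
b = FakeItem("b", ["c"])
c = FakeItem("c", ["a"])  # cycle back to "a"
items_by_id = {i.id: i for i in (a, b, c)}
flattened = inherited_deps(a, items_by_id)
```

The cycle a -> b -> c -> a terminates and "a" simply ends up in its own flattened deps.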
150
151 def _has_trigger_path(items, item, target_item_id):
152 """
153 Returns True if the given item directly or indirectly (through
154 other items) triggers the item with the given target item id.
155 """
156 if target_item_id in item.triggers:
157 return True
158 for triggered_id in item.triggers:
159 try:
160 triggered_item = find_item(triggered_id, items)
161 except NoSuchItem:
162 # the triggered item may already have been skipped by
163 # `bw apply -s`
164 continue
165 if _has_trigger_path(items, triggered_item, target_item_id):
166 return True
167 return False
168
169
170 def _inject_bundle_items(items):
171 """
172 Adds virtual items that depend on every item in a bundle.
173 """
174 bundle_items = {}
175 for item in items:
176 if item.bundle is None:
177 continue
178 if item.bundle.name not in bundle_items:
179 bundle_items[item.bundle.name] = BundleItem(item.bundle)
180 bundle_items[item.bundle.name]._deps.append(item.id)
181 return list(bundle_items.values()) + items
182
183
184 def _inject_canned_actions(items):
185 """
186 Looks for canned actions like "svc_upstart:mysql:reload" in item
187 triggers and adds them to the list of items.
188 """
189 added_actions = {}
190 for item in items:
191 for triggered_item_id in item.triggers:
192 if triggered_item_id in added_actions:
193 # action has already been triggered
194 continue
195
196 try:
197 type_name, item_name, action_name = triggered_item_id.split(":")
198 except ValueError:
199 # not a canned action
200 continue
201
202 target_item_id = "{}:{}".format(type_name, item_name)
203
204 try:
205 target_item = find_item(target_item_id, items)
206 except NoSuchItem:
207 raise BundleError(_(
208 "{item} in bundle '{bundle}' triggers unknown item '{target_item}'"
209 ).format(
210 bundle=item.bundle.name,
211 item=item.id,
212 target_item=target_item_id,
213 ))
214
215 try:
216 action_attrs = target_item.get_canned_actions()[action_name]
217 except KeyError:
218 raise BundleError(_(
219 "{item} in bundle '{bundle}' triggers unknown "
220 "canned action '{action}' on {target_item}"
221 ).format(
222 action=action_name,
223 bundle=item.bundle.name,
224 item=item.id,
225 target_item=target_item_id,
226 ))
227
228 action_attrs.update({'triggered': True})
229 action = Action(
230 item.bundle,
231 triggered_item_id,
232 action_attrs,
233 skip_name_validation=True,
234 )
235 action._prepare_deps(items)
236 added_actions[triggered_item_id] = action
237
238 return items + list(added_actions.values())
239
240
241 def _inject_concurrency_blockers(items):
242 """
243 Looks for items with BLOCK_CONCURRENT set to True and inserts
244 dependencies to force a sequential apply.
245 """
246 # find every item type that cannot be applied in parallel
247 item_types = set()
248 for item in items:
249 item._concurrency_deps = []
250 if (
251 not isinstance(item, DummyItem) and
252 item.BLOCK_CONCURRENT
253 ):
254 item_types.add(item.__class__)
255
256 # daisy-chain all items of the blocking type and all items of the
257 # blocked types while respecting existing dependencies between them
258 for item_type in item_types:
259 blocked_types = item_type.BLOCK_CONCURRENT + [item_type.ITEM_TYPE_NAME]
260 type_items = _find_items_of_types(
261 blocked_types,
262 items,
263 )
264 processed_items = []
265 for item in type_items:
266 # disregard deps to items of other types
267 item.__deps = list(filter(
268 lambda dep: dep.split(":", 1)[0] in blocked_types,
269 item._flattened_deps,
270 ))
271 previous_item = None
272 while len(processed_items) < len(type_items):
273 # find the first item without same-type deps we haven't
274 # processed yet
275 try:
276 item = list(filter(
277 lambda item: not item.__deps and item not in processed_items,
278 type_items,
279 ))[0]
280 except IndexError:
281 # this can happen if the flattened deps of all items of
282 # this type already contain a dependency on another
283 # item of this type
284 break
285 if previous_item is not None: # unless we're at the first item
286 # add dep to previous item -- unless it's already in there
287 if previous_item.id not in item._deps:
288 item._deps.append(previous_item.id)
289 item._concurrency_deps.append(previous_item.id)
290 item._flattened_deps.append(previous_item.id)
291 previous_item = item
292 processed_items.append(item)
293 for other_item in type_items:
294 try:
295 other_item.__deps.remove(item.id)
296 except ValueError:
297 pass
298 return items
299
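The daisy-chaining performed by `_inject_concurrency_blockers` effectively turns a set of mutually independent same-type items into a linear chain; sketched in isolation (item ids invented):

```python
def daisy_chain(item_ids):
    # each item gains a dependency on the previously processed item,
    # forcing a strictly sequential apply for this item type
    deps = {}
    previous = None
    for item_id in item_ids:
        deps[item_id] = [] if previous is None else [previous]
        previous = item_id
    return deps

chain = daisy_chain(["svc_upstart:a", "svc_upstart:b", "svc_upstart:c"])
```

The real implementation additionally respects pre-existing dependencies between the chained items, which is why it repeatedly picks the first item without unprocessed same-type deps instead of iterating blindly.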
300
301 def _inject_tag_items(items):
302 """
303 Takes a list of items and adds tag items depending on all items
304 carrying each tag. Returns the extended list.
305 """
306 tag_items = {}
307 items = list(items)
308 for item in items:
309 for tag in item.tags:
310 if tag not in tag_items:
311 tag_items[tag] = TagItem(tag)
312 tag_items[tag]._deps.append(item.id)
313
314 return list(tag_items.values()) + items
315
316
317 def _inject_type_items(items):
318 """
319 Takes a list of items and adds dummy items depending on each type of
320 item in the list. Returns the extended list.
321 """
322 # first, find all types of items and add dummy deps
323 type_items = {}
324 items = list(items)
325 for item in items:
326 # create dummy items that depend on each item of their type
327 item_type = item.id.split(":")[0]
328 if item_type not in type_items:
329 type_items[item_type] = TypeItem(item_type)
330 type_items[item_type]._deps.append(item.id)
331
332 # create DummyItem for every type
333 for dep in item._deps:
334 item_type = dep.split(":")[0]
335 if item_type not in type_items:
336 type_items[item_type] = TypeItem(item_type)
337 return list(type_items.values()) + items
338
339
340 def _inject_reverse_dependencies(items):
341 """
342 Looks for 'needed_by' deps and creates standard dependencies
343 accordingly.
344 """
345 def add_dep(item, dep):
346 if dep not in item._deps:
347 item._deps.append(dep)
348 item._reverse_deps.append(dep)
349
350 for item in items:
351 item._reverse_deps = []
352
353 for item in items:
354 for depending_item_id in item.needed_by:
355 # bundle items
356 if depending_item_id.startswith("bundle:"):
357 depending_bundle_name = depending_item_id.split(":")[1]
358 for depending_item in items:
359 if depending_item.bundle and depending_item.bundle.name == depending_bundle_name:
360 add_dep(depending_item, item.id)
361
362 # tag items
363 elif depending_item_id.startswith("tag:"):
364 tag_name = depending_item_id.split(":")[1]
365 for depending_item in items:
366 if tag_name in depending_item.tags:
367 add_dep(depending_item, item.id)
368
369 # type items
370 elif depending_item_id.endswith(":"):
371 target_type = depending_item_id[:-1]
372 for depending_item in _find_items_of_types([target_type], items):
373 add_dep(depending_item, item.id)
374
375 # single items
376 else:
377 depending_item = find_item(depending_item_id, items)
378 add_dep(depending_item, item.id)
379 return items
380
381
382 def _inject_reverse_triggers(items):
383 """
384 Looks for 'triggered_by' and 'precedes' attributes and turns them
385 into standard triggers (defined on the opposing end).
386 """
387 for item in items:
388 for triggering_item_id in item.triggered_by:
389 triggering_item = find_item(triggering_item_id, items)
390 if triggering_item.id.startswith("bundle:"): # bundle items
391 bundle_name = triggering_item.id.split(":")[1]
392 for actual_triggering_item in items:
393 if actual_triggering_item.bundle and actual_triggering_item.bundle.name == bundle_name:
394 actual_triggering_item.triggers.append(item.id)
395 elif triggering_item.id.startswith("tag:"): # tag items
396 tag_name = triggering_item.id.split(":")[1]
397 for actual_triggering_item in items:
398 if tag_name in actual_triggering_item.tags:
399 actual_triggering_item.triggers.append(item.id)
400 elif triggering_item.id.endswith(":"): # type items
401 target_type = triggering_item.id[:-1]
402 for actual_triggering_item in _find_items_of_types([target_type], items):
403 actual_triggering_item.triggers.append(item.id)
404 else:
405 triggering_item.triggers.append(item.id)
406 for preceded_item_id in item.precedes:
407 preceded_item = find_item(preceded_item_id, items)
408 if preceded_item.id.startswith("bundle:"): # bundle items
409 bundle_name = preceded_item.id.split(":")[1]
410 for actual_preceded_item in items:
411 if actual_preceded_item.bundle and actual_preceded_item.bundle.name == bundle_name:
412 actual_preceded_item.preceded_by.append(item.id)
413 elif preceded_item.id.startswith("tag:"): # tag items
414 tag_name = preceded_item.id.split(":")[1]
415 for actual_preceded_item in items:
416 if tag_name in actual_preceded_item.tags:
417 actual_preceded_item.preceded_by.append(item.id)
418 elif preceded_item.id.endswith(":"): # type items
419 target_type = preceded_item.id[:-1]
420 for actual_preceded_item in _find_items_of_types([target_type], items):
421 actual_preceded_item.preceded_by.append(item.id)
422 else:
423 preceded_item.preceded_by.append(item.id)
424 return items
425
426
427 def _inject_trigger_dependencies(items):
428 """
429 Injects dependencies from all triggered items to their triggering
430 items.
431 """
432 for item in items:
433 for triggered_item_id in item.triggers:
434 try:
435 triggered_item = find_item(triggered_item_id, items)
436 except NoSuchItem:
437 raise BundleError(_(
438 "unable to find definition of '{item1}' triggered "
439 "by '{item2}' in bundle '{bundle}'"
440 ).format(
441 bundle=item.bundle.name,
442 item1=triggered_item_id,
443 item2=item.id,
444 ))
445 if not triggered_item.triggered:
446 raise BundleError(_(
447 "'{item1}' in bundle '{bundle1}' triggered "
448 "by '{item2}' in bundle '{bundle2}', "
449 "but missing 'triggered' attribute"
450 ).format(
451 item1=triggered_item.id,
452 bundle1=triggered_item.bundle.name,
453 item2=item.id,
454 bundle2=item.bundle.name,
455 ))
456 triggered_item._deps.append(item.id)
457 return items
458
459
460 def _inject_preceded_by_dependencies(items):
461 """
462 Injects dependencies from all triggering items to their
463 preceded_by items and attaches triggering items to preceding items.
464 """
465 for item in items:
466 if item.preceded_by and item.triggered:
467 raise BundleError(_(
468 "triggered item '{item}' in bundle '{bundle}' must not use "
469 "'preceded_by' (use chained triggers instead)"
470 ).format(
471 bundle=item.bundle.name,
472 item=item.id,
473 ))
474 for triggered_item_id in item.preceded_by:
475 try:
476 triggered_item = find_item(triggered_item_id, items)
477 except NoSuchItem:
478 raise BundleError(_(
479 "unable to find definition of '{item1}' preceding "
480 "'{item2}' in bundle '{bundle}'"
481 ).format(
482 bundle=item.bundle.name,
483 item1=triggered_item_id,
484 item2=item.id,
485 ))
486 if not triggered_item.triggered:
487 raise BundleError(_(
488 "'{item1}' in bundle '{bundle1}' precedes "
489 "'{item2}' in bundle '{bundle2}', "
490 "but missing 'triggered' attribute"
491 ).format(
492 item1=triggered_item.id,
493 bundle1=triggered_item.bundle.name,
494 item2=item.id,
495 bundle2=item.bundle.name if item.bundle else "N/A",
496 ))
497 triggered_item._precedes_items.append(item)
498 item._deps.append(triggered_item.id)
499 return items
500
501
502 def prepare_dependencies(items):
503 """
504 Performs all dependency preprocessing on a list of items.
505 """
506 items = list(items)
507
508 for item in items:
509 item._check_bundle_collisions(items)
510 item._prepare_deps(items)
511
512 items = _inject_bundle_items(items)
513 items = _inject_tag_items(items)
514 items = _inject_type_items(items)
515 items = _inject_canned_actions(items)
516 items = _inject_reverse_triggers(items)
517 items = _inject_reverse_dependencies(items)
518 items = _inject_trigger_dependencies(items)
519 items = _inject_preceded_by_dependencies(items)
520 items = _flatten_dependencies(items)
521 items = _inject_concurrency_blockers(items)
522
523 for item in items:
524 if not isinstance(item, DummyItem):
525 item._check_redundant_dependencies()
526
527 return items
528
529
530 def remove_dep_from_items(items, dep):
531 """
532 Removes the given item id (dep) from the temporary list of
533 dependencies of all items in the given list.
534 """
535 for item in items:
536 try:
537 item._deps.remove(dep)
538 except ValueError:
539 pass
540 return items
541
542
543 def remove_item_dependents(items, dep_item, skipped=False):
544 """
545 Removes the items depending on the given item from the list of items.
546 """
547 removed_items = []
548 for item in items:
549 if dep_item.id in item._deps:
550 if _has_trigger_path(items, dep_item, item.id):
551 # triggered items cannot be removed here since they
552 # may yet be triggered by another item and will be
553 # skipped anyway if they aren't
554 item._deps.remove(dep_item.id)
555 elif skipped and isinstance(item, DummyItem) and \
556 dep_item.triggered and not dep_item.has_been_triggered:
557 # don't skip dummy items because of untriggered members
558 # see issue #151; separate elif for clarity
559 item._deps.remove(dep_item.id)
560 else:
561 removed_items.append(item)
562
563 for item in removed_items:
564 items.remove(item)
565
566 if removed_items:
567 io.debug(
568 "skipped these items because they depend on {item}, which was "
569 "skipped previously: {skipped}".format(
570 item=dep_item.id,
571 skipped=", ".join([item.id for item in removed_items]),
572 )
573 )
574
575 all_recursively_removed_items = []
576 for removed_item in removed_items:
577 items, recursively_removed_items = \
578 remove_item_dependents(items, removed_item, skipped=skipped)
579 all_recursively_removed_items += recursively_removed_items
580
581 return (items, removed_items + all_recursively_removed_items)
582
583
584 def split_items_without_deps(items):
585 """
586 Takes a list of items and extracts the ones that don't have any
587 dependencies. The extracted items are returned as a list.
588 """
589 items = list(items) # make sure we're not returning a generator
590 removed_items = []
591 for item in items:
592 if not item._deps:
593 removed_items.append(item)
594 for item in removed_items:
595 items.remove(item)
596 return (items, removed_items)
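`split_items_without_deps` and `remove_dep_from_items` are the building blocks of the apply loop: repeatedly peel off items with no remaining deps, process them, and erase them from everyone else's dep lists. A self-contained sketch over a plain dep mapping (not BundleWrap code):

```python
def topo_apply(dep_map):
    # sketch of the queue loop built on split_items_without_deps and
    # remove_dep_from_items: repeatedly apply the items whose deps are
    # all done, then strip them from the remaining items' dep lists
    pending = {item: set(deps) for item, deps in dep_map.items()}
    order = []
    while pending:
        ready = sorted(item for item, deps in pending.items() if not deps)
        if not ready:
            raise ValueError("dependency loop detected")
        for item in ready:
            del pending[item]
            order.append(item)
            for deps in pending.values():
                deps.discard(item)
    return order

order = topo_apply({"a": ["b"], "b": [], "c": ["a", "b"]})
```

When nothing is ready but items remain, the deps form a loop, which is the condition `ItemDependencyError` covers in the real code.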
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from sys import version_info
4
5
6 class UnicodeException(Exception):
7 def __init__(self, msg=""):
8 if version_info >= (3, 0):
9 super(UnicodeException, self).__init__(msg)
10 else:
11 super(UnicodeException, self).__init__(msg.encode('utf-8'))
12
13
14 class ActionFailure(UnicodeException):
15 """
16 Raised when an action fails to meet the expected rcode/output.
17 """
18 pass
19
20
21 class DontCache(Exception):
22 """
23 Used in the cached_property decorator to temporarily prevent
24 caching of the returned result.
25 """
26 def __init__(self, obj):
27 self.obj = obj
28
29
30 class FaultUnavailable(UnicodeException):
31 """
32 Raised when a Fault object cannot be resolved.
33 """
34 pass
35
36
37 class NoSuchBundle(UnicodeException):
38 """
39 Raised when a bundle of unknown name is requested.
40 """
41 pass
42
43
44 class NoSuchGroup(UnicodeException):
45 """
46 Raised when a group of unknown name is requested.
47 """
48 pass
49
50
51 class NoSuchItem(UnicodeException):
52 """
53 Raised when an item of unknown name is requested.
54 """
55 pass
56
57
58 class NoSuchNode(UnicodeException):
59 """
60 Raised when a node of unknown name is requested.
61 """
62 pass
63
64
65 class NoSuchPlugin(UnicodeException):
66 """
67 Raised when a plugin of unknown name is requested.
68 """
69 pass
70
71
72 class RemoteException(UnicodeException):
73 """
74 Raised when a shell command on a node fails.
75 """
76 pass
77
78
79 class RepositoryError(UnicodeException):
80 """
81 Indicates that something is wrong with the current repository.
82 """
83 pass
84
85
86 class BundleError(RepositoryError):
87 """
88 Indicates an error in a bundle.
89 """
90 pass
91
92
93 class ItemDependencyError(RepositoryError):
94 """
95 Indicates a problem with item dependencies (e.g. loops).
96 """
97 pass
98
99
100 class NoSuchRepository(RepositoryError):
101 """
102 Raised when trying to get a Repository object from a directory that
103 is not in fact a repository.
104 """
105 pass
106
107
108 class MissingRepoDependency(RepositoryError):
109 """
110 Raised when a dependency from requirements.txt is missing.
111 """
112 pass
113
114
115 class PluginError(RepositoryError):
116 """
117 Indicates an error related to a plugin.
118 """
119 pass
120
121
122 class PluginLocalConflict(PluginError):
123 """
124 Raised when a plugin tries to overwrite locally-modified files.
125 """
126 pass
127
128
129 class TemplateError(RepositoryError):
130 """
131 Raised when an error occurs while rendering a template.
132 """
133 pass
134
135
136 class UsageException(UnicodeException):
137 """
138 Raised when command line options don't make sense.
139 """
140 pass
141
142
143 class NodeLockedException(Exception):
144 """
145 Raised when a node is already locked during an 'apply' run.
146 """
147 pass
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 import re
4
5 from .exceptions import NoSuchGroup, NoSuchNode, RepositoryError
6 from .utils import cached_property, names
7 from .utils.statedict import hash_statedict
8 from .utils.text import mark_for_translation as _, validate_name
9
10
11 GROUP_ATTR_DEFAULTS = {
12 'cmd_wrapper_inner': "export LANG=C; {}",
13 'cmd_wrapper_outer': "sudo sh -c {}",
14 'dummy': False,
15 'os': 'linux',
16 # Setting os_version to 0 by default will probably yield less
17 # surprises than setting it to max_int. Users will probably
18 # start at a certain version and then gradually update their
19 # systems, adding conditions like this:
20 #
21 # if node.os_version >= (2,):
22 # new_behavior()
23 # else:
24 # old_behavior()
25 #
26 # If we set os_version to max_int, nodes without an explicit
27 # os_version would automatically adopt the new_behavior() as
28 # soon as it appears in the repo - which is probably not what
29 # people want.
30 'os_version': (0,),
31 'use_shadow_passwords': True,
32 }
33
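The `os_version` comment above relies on Python comparing tuples lexicographically, element by element; a few illustrative checks:

```python
# the (0,) default loses to any explicitly configured version
assert (0,) < (2,)
# equal prefix: the longer tuple compares greater
assert (2, 1) >= (2,)
# elements compare numerically, so (10,) beats (9, 5)
assert (10,) > (9, 5)
```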
34
35 def _build_error_chain(loop_node, last_node, nodes_in_between):
36 """
37 Used to illustrate subgroup loop paths in error messages.
38
39 loop_node: name of node that loops back to itself
40 last_node: name of last node pointing back to loop_node,
41 causing the loop
42 nodes_in_between: names of nodes traversed during loop detection,
43 does include loop_node if not a direct loop,
44 but not last_node
45 """
46 error_chain = []
47 for visited in nodes_in_between:
48 if (loop_node in error_chain) != (loop_node == visited):
49 error_chain.append(visited)
50 error_chain.append(last_node)
51 error_chain.append(loop_node)
52 return error_chain
53
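The filter condition in `_build_error_chain` drops any nodes visited before the loop actually starts; the same logic in isolation, with hypothetical group names:

```python
def build_error_chain(loop_node, last_node, nodes_in_between):
    # identical logic to _build_error_chain: keep only the nodes from
    # the first occurrence of loop_node onward, then close the loop
    error_chain = []
    for visited in nodes_in_between:
        if (loop_node in error_chain) != (loop_node == visited):
            error_chain.append(visited)
    error_chain.append(last_node)
    error_chain.append(loop_node)
    return error_chain

# "x" was visited before the loop began and is filtered out
chain = build_error_chain("a", "c", ["x", "a", "b"])
```

The result reads as the cycle itself, e.g. a -> b -> c -> a, which is what the subgroup-loop error message joins with " -> ".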
54
55 class Group(object):
56 """
57 A group of nodes.
58 """
59 def __init__(self, group_name, infodict=None):
60 if infodict is None:
61 infodict = {}
62
63 if not validate_name(group_name):
64 raise RepositoryError(_("'{}' is not a valid group name.").format(group_name))
65
66 self.name = group_name
67 self.bundle_names = infodict.get('bundles', [])
68 self.immediate_subgroup_names = infodict.get('subgroups', [])
69 self.immediate_subgroup_patterns = infodict.get('subgroup_patterns', [])
70 self.members_add = infodict.get('members_add', None)
71 self.members_remove = infodict.get('members_remove', None)
72 self.metadata = infodict.get('metadata', {})
73 self.node_patterns = infodict.get('member_patterns', [])
74 self.static_member_names = infodict.get('members', [])
75
76 for attr in GROUP_ATTR_DEFAULTS:
77 # defaults are applied in node.py
78 setattr(self, attr, infodict.get(attr))
79
80 def __lt__(self, other):
81 return self.name < other.name
82
83 def __repr__(self):
84 return "<Group: {}>".format(self.name)
85
86 def __str__(self):
87 return self.name
88
89 @cached_property
90 def cdict(self):
91 group_dict = {}
92 for node in self.nodes:
93 group_dict[node.name] = node.hash()
94 return group_dict
95
96 def group_membership_hash(self):
97 return hash_statedict(sorted(names(self.nodes)))
98
99 def hash(self):
100 return hash_statedict(self.cdict)
101
102 def metadata_hash(self):
103 group_dict = {}
104 for node in self.nodes:
105 group_dict[node.name] = node.metadata_hash()
106 return hash_statedict(group_dict)
107
108 @cached_property
109 def nodes(self):
110 for node in self.repo.nodes:
111 if node.in_group(self.name):
112 yield node
113
114 @cached_property
115 def _static_nodes(self):
116 result = set()
117 result.update(self._nodes_from_members)
118 result.update(self._nodes_from_patterns)
119 return result
120
121 @property
122 def _nodes_from_members(self):
123 for node_name in self.static_member_names:
124 try:
125 yield self.repo.get_node(node_name)
126 except NoSuchNode:
127 raise RepositoryError(_(
128 "Group '{group}' has '{node}' listed as a member in groups.py, "
129 "but no such node could be found."
130 ).format(
131 group=self.name,
132 node=node_name,
133 ))
134
135 @property
136 def _nodes_from_patterns(self):
137 for pattern in self.node_patterns:
138 compiled_pattern = re.compile(pattern)
139 for node in self.repo.nodes:
140 if compiled_pattern.search(node.name) is not None:
141 yield node
142
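`_nodes_from_patterns` treats each entry in `member_patterns` as an unanchored regular expression matched via `search`; a quick standalone illustration (node names and pattern invented):

```python
import re

node_names = ["web1", "web2", "db1"]
pattern = re.compile(r"^web")  # hypothetical member pattern
members = [name for name in node_names if pattern.search(name) is not None]
```

Because `search` is used rather than `match`, a pattern like `"prod"` would also match anywhere inside a node name, not only at the start.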
143 def _check_subgroup_names(self, visited_names):
144 """
145 Recursively finds subgroups and checks for loops.
146 """
147 names_from_patterns = []
148 for pattern in self.immediate_subgroup_patterns:
149 compiled_pattern = re.compile(pattern)
150 for group in self.repo.groups:
151 if compiled_pattern.search(group.name) is not None and group != self:
152 names_from_patterns.append(group.name)
153
154 for name in list(self.immediate_subgroup_names) + names_from_patterns:
155 if name not in visited_names:
156 try:
157 group = self.repo.get_group(name)
158 except NoSuchGroup:
159 raise RepositoryError(_(
160 "Group '{group}' has '{subgroup}' listed as a subgroup in groups.py, "
161 "but no such group could be found."
162 ).format(
163 group=self.name,
164 subgroup=name,
165 ))
166 for group_name in group._check_subgroup_names(
167 visited_names + [self.name],
168 ):
169 yield group_name
170 else:
171 error_chain = _build_error_chain(
172 name,
173 self.name,
174 visited_names,
175 )
176 raise RepositoryError(_(
177 "Group '{group}' can't be a subgroup of itself. "
178 "({chain})"
179 ).format(
180 group=name,
181 chain=" -> ".join(error_chain),
182 ))
183 if self.name not in visited_names:
184 yield self.name
185
186 @cached_property
187 def parent_groups(self):
188 for group in self.repo.groups:
189 if self in group.subgroups:
190 yield group
191
192 @cached_property
193 def subgroups(self):
194 """
195 Iterator over all subgroups as group objects.
196 """
197 for group_name in self._check_subgroup_names([self.name]):
198 yield self.repo.get_group(group_name)
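The `_check_subgroup_names` recursion above walks subgroup definitions while carrying the list of names already visited, so a group that (directly or transitively) contains itself is reported instead of looping forever. A minimal standalone sketch of the same idea, using a plain dict of subgroup names instead of BundleWrap's repository objects (all group names here are hypothetical):

```python
def walk_subgroups(groups, name, visited=()):
    """Yield all transitive subgroups of `name`, raising on a membership loop."""
    for sub in groups.get(name, []):
        if sub in visited or sub == name:
            raise ValueError("{} can't be a subgroup of itself".format(sub))
        yield sub
        # recurse, remembering the path taken so far
        for nested in walk_subgroups(groups, sub, visited + (name,)):
            yield nested

groups = {"all": ["web", "db"], "web": ["frontend"]}
print(sorted(walk_subgroups(groups, "all")))  # → ['db', 'frontend', 'web']
```

Like the real method, the loop check only needs the chain of ancestors, not a global set, because a repeated name inside one recursion path is exactly what a cycle is.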
0 from .deps import (
1 DummyItem,
2 find_item,
3 prepare_dependencies,
4 remove_item_dependents,
5 remove_dep_from_items,
6 split_items_without_deps,
7 )
8 from .exceptions import NoSuchItem
9 from .utils.text import mark_for_translation as _
10 from .utils.ui import io
11
12
13 class BaseQueue(object):
14 def __init__(self, items):
15 self.items_with_deps = prepare_dependencies(items)
16 self.items_without_deps = []
17 self._split()
18 self.pending_items = []
19
20 def _split(self):
21 self.items_with_deps, self.items_without_deps = \
22 split_items_without_deps(self.all_items)
23
24 @property
25 def all_items(self):
26 return self.items_with_deps + self.items_without_deps
27
28
29 class ItemQueue(BaseQueue):
30 def item_failed(self, item):
31 """
32 Called when an item could not be fixed. Yields all items that
33 have been skipped as a result by cascading.
34 """
35 for skipped_item in self.item_skipped(item, _skipped=False):
36 yield skipped_item
37
38 def item_fixed(self, item):
39 """
40 Called when an item has successfully been fixed.
41 """
42 self.item_ok(item)
43 self._fire_triggers_for_item(item)
44
45 def item_ok(self, item):
46 """
47 Called when an item didn't need to be fixed.
48 """
49 self.pending_items.remove(item)
50 # if an item is applied successfully, all dependencies on it can
51 # be removed from the remaining items
52 self.items_with_deps = remove_dep_from_items(
53 self.items_with_deps,
54 item.id,
55 )
56 self._split()
57
58 def item_skipped(self, item, _skipped=True):
59 """
60 Called when an item has been skipped. Yields all items that have
61 been skipped as a result by cascading.
62 """
63 self.pending_items.remove(item)
64 if item.cascade_skip:
65 # if an item fails or is skipped, all items that depend on
66 # it shall be removed from the queue
67 self.items_with_deps, skipped_items = remove_item_dependents(
68 self.items_with_deps,
69 item,
70 skipped=_skipped,
71 )
72 # since we removed them from further processing, we
73 # fake the status of the removed items so they still
74 # show up in the result statistics
75 for skipped_item in skipped_items:
76 if not isinstance(skipped_item, DummyItem):
77 yield skipped_item
78 else:
79 self.items_with_deps = remove_dep_from_items(
80 self.items_with_deps,
81 item.id,
82 )
83 self._split()
84
85 def pop(self, interactive=False):
86 """
87 Gets the next item available for processing and moves it into
88 self.pending_items. Will raise IndexError if no item is
89 available. Otherwise, it will return the item and a list of
90 items that have been skipped while looking for the item.
91 """
92 skipped_items = []
93
94 if not self.items_without_deps:
95 raise IndexError
96
97 while self.items_without_deps:
98 item = self.items_without_deps.pop()
99
100 if item._precedes_items:
101 if item._precedes_incorrect_item(interactive=interactive):
102 item.has_been_triggered = True
103 else:
104 # we do not have to cascade here at all because
105 # all chained preceding items will be skipped by
106 # this same mechanism
107 io.debug(
108 _("skipping {node}:{bundle}:{item} because its precede trigger "
109 "did not fire").format(
110 bundle=item.bundle.name,
111 item=item.id,
112 node=item.node.name,
113 ),
114 )
115 self.items_with_deps = remove_dep_from_items(self.items_with_deps, item.id)
116 self._split()
117 skipped_items.append(item)
118 item = None
119 continue
120 break
121 assert item is not None
122 self.pending_items.append(item)
123 return (item, skipped_items)
124
125 def _fire_triggers_for_item(self, item):
126 for triggered_item_id in item.triggers:
127 try:
128 triggered_item = find_item(
129 triggered_item_id,
130 self.all_items,
131 )
132 triggered_item.has_been_triggered = True
133 except NoSuchItem:
134 io.debug(_(
135 "{item} tried to trigger {triggered_item}, "
136 "but it wasn't available. It must have been skipped previously."
137 ).format(
138 item=item.id,
139 triggered_item=triggered_item_id,
140 ))
141
142
143 class ItemTestQueue(BaseQueue):
144 """
145 A simpler variation of ItemQueue that is used by `bw test` to check
146 for circular dependencies.
147 """
148 def pop(self):
149 item = self.items_without_deps.pop()
150 self.items_with_deps = remove_dep_from_items(self.items_with_deps, item.id)
151 self._split()
152 return item
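The queues above repeatedly pop items that have no outstanding dependencies and then strip the satisfied dependency from everything that remains (`remove_dep_from_items` plus `_split`). A rough standalone illustration of that drain loop, with items modeled as `(id, deps)` pairs rather than BundleWrap Item objects:

```python
def drain(items):
    """Yield item ids in an order that satisfies their declared dependencies."""
    remaining = {item_id: set(deps) for item_id, deps in items}
    while remaining:
        # pick the items whose dependencies are all satisfied
        ready = [i for i, deps in remaining.items() if not deps]
        if not ready:
            raise RuntimeError("circular dependency")
        for item_id in sorted(ready):
            del remaining[item_id]
            for deps in remaining.values():
                deps.discard(item_id)  # like remove_dep_from_items()
            yield item_id

order = list(drain([
    ("pkg:nginx", []),
    ("svc:nginx", ["file:conf", "pkg:nginx"]),
    ("file:conf", ["pkg:nginx"]),
]))
print(order)  # → ['pkg:nginx', 'file:conf', 'svc:nginx']
```

The real ItemQueue does more work per pop (triggers, skips, cascading), but the core loop is this same alternation between "take a dependency-free item" and "remove it as a dependency of the rest", which is also why `ItemTestQueue` can detect circular dependencies simply by running out of poppable items.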
0 # -*- coding: utf-8 -*-
1 """
2 Note that modules in this package have to use absolute imports because
3 Repository.item_classes loads them as files.
4 """
5 from __future__ import unicode_literals
6 from copy import copy
7 from datetime import datetime
8 from os.path import join
9
10 from bundlewrap.exceptions import BundleError, FaultUnavailable
11 from bundlewrap.utils import cached_property
12 from bundlewrap.utils.statedict import diff_keys, diff_value, hash_statedict, validate_statedict
13 from bundlewrap.utils.text import force_text, mark_for_translation as _
14 from bundlewrap.utils.text import blue, bold, wrap_question
15 from bundlewrap.utils.ui import io
16
17 BUILTIN_ITEM_ATTRIBUTES = {
18 'cascade_skip': None,
19 'needed_by': [],
20 'needs': [],
21 'preceded_by': [],
22 'precedes': [],
23 'error_on_missing_fault': False,
24 'tags': [],
25 'triggered': False,
26 'triggered_by': [],
27 'triggers': [],
28 'unless': "",
29 }
30
31
32 class ItemStatus(object):
33 """
34 Holds information on a particular Item such as whether it needs
35 fixing and what's broken.
36 """
37
38 def __init__(self, cdict, sdict):
39 self.cdict = cdict
40 self.sdict = sdict
41 self.keys_to_fix = []
42 self.must_be_deleted = (self.sdict is not None and self.cdict is None)
43 self.must_be_created = (self.cdict is not None and self.sdict is None)
44 if not self.must_be_deleted and not self.must_be_created:
45 self.keys_to_fix = diff_keys(cdict, sdict)
46
47 def __repr__(self):
48 return "<ItemStatus correct:{}>".format(self.correct)
49
50 @property
51 def correct(self):
52 return not self.must_be_deleted and not self.must_be_created and not bool(self.keys_to_fix)
53
54
55 class Item(object):
56 """
57 A single piece of configuration (e.g. a file, a package, a service).
58 """
59 BINARY_ATTRIBUTES = []
60 BLOCK_CONCURRENT = []
61 BUNDLE_ATTRIBUTE_NAME = None
62 ITEM_ATTRIBUTES = {}
63 ITEM_TYPE_NAME = None
64 REQUIRED_ATTRIBUTES = []
65 STATUS_OK = 1
66 STATUS_FIXED = 2
67 STATUS_FAILED = 3
68 STATUS_SKIPPED = 4
69 STATUS_ACTION_SUCCEEDED = 5
70
71 def __init__(
72 self,
73 bundle,
74 name,
75 attributes,
76 has_been_triggered=False,
77 skip_validation=False,
78 skip_name_validation=False,
79 ):
80 self.attributes = {}
81 self.bundle = bundle
82 self.has_been_triggered = has_been_triggered
83 self.item_dir = join(bundle.bundle_dir, self.BUNDLE_ATTRIBUTE_NAME)
84 self.item_data_dir = join(bundle.bundle_data_dir, self.BUNDLE_ATTRIBUTE_NAME)
85 self.name = name
86 self.node = bundle.node
87 self._faults_missing_for_attributes = set()
88 self._precedes_items = []
89
90 if not skip_validation:
91 if not skip_name_validation:
92 self._validate_name(bundle, name)
93 self.validate_name(bundle, name)
94 self._validate_attribute_names(bundle, self.id, attributes)
95 self._validate_required_attributes(bundle, self.id, attributes)
96 self.validate_attributes(bundle, self.id, attributes)
97
98 try:
99 attributes = self.patch_attributes(attributes)
100 except FaultUnavailable:
101 self._faults_missing_for_attributes.add(_("unknown"))
102
103 for attribute_name, attribute_default in \
104 BUILTIN_ITEM_ATTRIBUTES.items():
105 setattr(self, attribute_name, force_text(attributes.get(
106 attribute_name,
107 copy(attribute_default),
108 )))
109
110 for attribute_name, attribute_default in \
111 self.ITEM_ATTRIBUTES.items():
112 if attribute_name not in BUILTIN_ITEM_ATTRIBUTES:
113 try:
114 self.attributes[attribute_name] = force_text(attributes.get(
115 attribute_name,
116 attribute_default,
117 ))
118 except FaultUnavailable:
119 self._faults_missing_for_attributes.add(attribute_name)
120
121 if self.cascade_skip is None:
122 self.cascade_skip = not (self.unless or self.triggered)
123
124 if self.id in self.triggers:
125 raise BundleError(_(
126 "item {item} in bundle '{bundle}' can't trigger itself"
127 ).format(
128 bundle=self.bundle.name,
129 item=self.id,
130 ))
131
132 def __lt__(self, other):
133 return self.id < other.id
134
135 def __str__(self):
136 return self.id
137
138 def __repr__(self):
139 return "<Item {}>".format(self.id)
140
141 def _check_bundle_collisions(self, items):
142 for item in items:
143 if item == self:
144 continue
145 if item.id == self.id:
146 raise BundleError(_(
147 "duplicate definition of {item} in bundles '{bundle1}' and '{bundle2}'"
148 ).format(
149 item=item.id,
150 bundle1=item.bundle.name,
151 bundle2=self.bundle.name,
152 ))
153
154 def _check_redundant_dependencies(self):
155 """
156 Alerts the user if they have defined a redundant dependency
157 (such as setting 'needs' on a triggered item pointing to the
158 triggering item).
159 """
160 for dep in self._deps:
161 if self._deps.count(dep) > 1:
162 raise BundleError(_(
163 "redundant dependency of {item1} in bundle '{bundle}' on {item2}"
164 ).format(
165 bundle=self.bundle.name,
166 item1=self.id,
167 item2=dep,
168 ))
169
170 @cached_property
171 def cached_cdict(self):
172 if self._faults_missing_for_attributes:
173 self._raise_for_faults()
174
175 cdict = self.cdict()
176 try:
177 validate_statedict(cdict)
178 except ValueError as e:
179 raise ValueError(_(
180 "{item} from bundle '{bundle}' returned invalid cdict: {msg}"
181 ).format(
182 bundle=self.bundle.name,
183 item=self.id,
184 msg=repr(e),
185 ))
186 return cdict
187
188 @cached_property
189 def cached_sdict(self):
190 status = self.sdict()
191 try:
192 validate_statedict(status)
193 except ValueError as e:
194 raise ValueError(_(
195 "{item} from bundle '{bundle}' returned invalid status: {msg}"
196 ).format(
197 bundle=self.bundle.name,
198 item=self.id,
199 msg=repr(e),
200 ))
201 return status
202
203 @cached_property
204 def cached_status(self):
205 return self.get_status()
206
207 @cached_property
208 def cached_unless_result(self):
209 if self.unless and not self.cached_status.correct:
210 unless_result = self.node.run(self.unless, may_fail=True)
211 return unless_result.return_code == 0
212 else:
213 return False
214
215 def _precedes_incorrect_item(self, interactive=False):
216 """
217 Returns True if this item precedes another and the triggering
218 item is in need of fixing.
219 """
220 for item in self._precedes_items:
221 if item._precedes_incorrect_item():
222 return True
223 if self.cached_unless_result:
224 # triggering item failed unless, so there is nothing to do
225 return False
226 if self.ITEM_TYPE_NAME == 'action':
227 if self.attributes['interactive'] != interactive or \
228 self.attributes['interactive'] is None:
229 return False
230 else:
231 return True
232 return not self.cached_status.correct
233
234 def _prepare_deps(self, items):
235 # merge automatic and user-defined deps
236 self._deps = list(self.needs) + list(self.get_auto_deps(items))
237
238 def _raise_for_faults(self):
239 raise FaultUnavailable(_(
240 "{item} on {node} is missing faults "
241 "for these attributes: {attrs} "
242 "(most of the time this means you're missing "
243 "a required key in your .secrets.cfg)"
244 ).format(
245 attrs=", ".join(sorted(self._faults_missing_for_attributes)),
246 item=self.id,
247 node=self.node.name,
248 ))
249
250 def _skip_with_soft_locks(self, mine, others):
251 """
252 Returns True/False depending on whether the item should be
253 skipped based on the given set of locks.
254 """
255 for lock in mine:
256 for selector in lock['items']:
257 if self.covered_by_autoskip_selector(selector):
258 io.debug(_("{item} on {node} whitelisted by lock {lock}").format(
259 item=self.id,
260 lock=lock['id'],
261 node=self.node.name,
262 ))
263 return False
264 for lock in others:
265 for selector in lock['items']:
266 if self.covered_by_autoskip_selector(selector):
267 io.debug(_("{item} on {node} blacklisted by lock {lock}").format(
268 item=self.id,
269 lock=lock['id'],
270 node=self.node.name,
271 ))
272 return True
273 return False
274
275 def _test(self):
276 if self._faults_missing_for_attributes:
277 self._raise_for_faults()
278 return self.test()
279
280 @classmethod
281 def _validate_attribute_names(cls, bundle, item_id, attributes):
282 invalid_attributes = set(attributes.keys()).difference(
283 set(cls.ITEM_ATTRIBUTES.keys()).union(
284 set(BUILTIN_ITEM_ATTRIBUTES.keys())
285 ),
286 )
287 if invalid_attributes:
288 raise BundleError(
289 _("invalid attribute(s) for '{item}' in bundle '{bundle}': {attrs}").format(
290 item=item_id,
291 bundle=bundle.name,
292 attrs=", ".join(invalid_attributes),
293 )
294 )
295
296 @classmethod
297 def _validate_name(cls, bundle, name):
298 if ":" in name:
299 raise BundleError(_(
300 "invalid name for {type} in bundle '{bundle}': {name} (must not contain colon)"
301 ).format(
302 bundle=bundle.name,
303 name=name,
304 type=cls.ITEM_TYPE_NAME,
305 ))
306
307 @classmethod
307 def _validate_required_attributes(cls, bundle, item_id, attributes):
308 missing = []
309 for attrname in cls.REQUIRED_ATTRIBUTES:
310 if attrname not in attributes:
311 missing.append(attrname)
312 if missing:
313 raise BundleError(_(
314 "{item} in bundle '{bundle}' missing required attribute(s): {attrs}"
315 ).format(
316 item=item_id,
317 bundle=bundle.name,
318 attrs=", ".join(missing),
319 ))
320
321 def apply(
322 self,
323 autoskip_selector="",
324 my_soft_locks=(),
325 other_peoples_soft_locks=(),
326 interactive=False,
327 interactive_default=True,
328 ):
329 self.node.repo.hooks.item_apply_start(
330 self.node.repo,
331 self.node,
332 self,
333 )
334 keys_to_fix = None
335 status_code = None
336 status_before = None
337 status_after = None
338 start_time = datetime.now()
339
340 if self.covered_by_autoskip_selector(autoskip_selector):
341 io.debug(_(
342 "autoskip matches {item} on {node}"
343 ).format(item=self.id, node=self.node.name))
344 status_code = self.STATUS_SKIPPED
345 keys_to_fix = [_("cmdline")]
346
347 if self._skip_with_soft_locks(my_soft_locks, other_peoples_soft_locks):
348 status_code = self.STATUS_SKIPPED
349 keys_to_fix = [_("soft locked")]
350
351 if self.triggered and not self.has_been_triggered and status_code is None:
352 io.debug(_(
353 "skipping {item} on {node} because it wasn't triggered"
354 ).format(item=self.id, node=self.node.name))
355 status_code = self.STATUS_SKIPPED
356 keys_to_fix = [_("not triggered")]
357
358 if status_code is None and self.cached_unless_result:
359 io.debug(_(
360 "'unless' for {item} on {node} succeeded, not fixing"
361 ).format(item=self.id, node=self.node.name))
362 status_code = self.STATUS_SKIPPED
363 keys_to_fix = ["unless"]
364
365 if self._faults_missing_for_attributes and status_code is None:
366 if self.error_on_missing_fault:
367 self._raise_for_faults()
368 else:
369 io.debug(_(
370 "skipping {item} on {node} because it is missing faults "
371 "for these attributes: {attrs} "
372 "(most of the time this means you're missing "
373 "a required key in your .secrets.cfg)"
374 ).format(
375 attrs=", ".join(sorted(self._faults_missing_for_attributes)),
376 item=self.id,
377 node=self.node.name,
378 ))
379 status_code = self.STATUS_SKIPPED
380 keys_to_fix = [_("Fault unavailable")]
381
382 if status_code is None:
383 try:
384 status_before = self.cached_status
385 except FaultUnavailable:
386 if self.error_on_missing_fault:
387 self._raise_for_faults()
388 else:
389 io.debug(_(
390 "skipping {item} on {node} because it is missing Faults "
391 "(most of the time this means you're missing "
392 "a required key in your .secrets.cfg)"
393 ).format(
394 item=self.id,
395 node=self.node.name,
396 ))
397 status_code = self.STATUS_SKIPPED
398 keys_to_fix = [_("Fault unavailable")]
399 else:
400 if status_before.correct:
401 status_code = self.STATUS_OK
402
403 if status_code is None:
404 keys_to_fix = self.display_keys(
405 copy(self.cached_cdict),
406 copy(status_before.sdict),
407 status_before.keys_to_fix[:],
408 )
409 if not interactive:
410 with io.job(_(" {node} {bundle} {item} fixing...").format(
411 bundle=self.bundle.name,
412 item=self.id,
413 node=self.node.name,
414 )):
415 self.fix(status_before)
416 else:
417 if status_before.must_be_created:
418 question_text = _("Doesn't exist. Will be created.")
419 elif status_before.must_be_deleted:
420 question_text = _("Found on node. Will be removed.")
421 else:
422 cdict, sdict = self.display_dicts(
423 copy(self.cached_cdict),
424 copy(status_before.sdict),
425 keys_to_fix,
426 )
427 question_text = self.ask(cdict, sdict, keys_to_fix)
428 question = wrap_question(
429 self.id,
430 question_text,
431 _("Fix {}?").format(bold(self.id)),
432 prefix="{x} {node} ".format(
433 node=bold(self.node.name),
434 x=blue("?"),
435 ),
436 )
437 answer = io.ask(
438 question,
439 interactive_default,
440 epilogue="{x} {node}".format(
441 node=bold(self.node.name),
442 x=blue("?"),
443 ),
444 )
445 if answer:
446 with io.job(_(" {node} {bundle} {item} fixing...").format(
447 bundle=self.bundle.name,
448 item=self.id,
449 node=self.node.name,
450 )):
451 self.fix(status_before)
452 else:
453 status_code = self.STATUS_SKIPPED
454 keys_to_fix = [_("interactive")]
455
456 if status_code is None:
457 status_after = self.get_status(cached=False)
458 status_code = self.STATUS_FIXED if status_after.correct else self.STATUS_FAILED
459
460 if status_code == self.STATUS_SKIPPED:
461 # can't use else for this because status_before is None
462 changes = keys_to_fix
463 elif status_before.must_be_created:
464 changes = True
465 elif status_before.must_be_deleted:
466 changes = False
467 elif status_code == self.STATUS_FAILED:
468 changes = self.display_keys(
469 self.cached_cdict.copy(),
470 status_after.sdict.copy(),
471 status_after.keys_to_fix[:],
472 )
473 else:
474 changes = keys_to_fix
475
476 self.node.repo.hooks.item_apply_end(
477 self.node.repo,
478 self.node,
479 self,
480 duration=datetime.now() - start_time,
481 status_code=status_code,
482 status_before=status_before,
483 status_after=status_after,
484 )
485 return (status_code, changes)
486
487 def ask(self, status_should, status_actual, relevant_keys):
488 """
489 Returns a string asking the user if this item should be
490 implemented.
491 """
492 result = []
493 for key in relevant_keys:
494 result.append(diff_value(key, status_actual[key], status_should[key]))
495 return "\n\n".join(result)
496
497 def cdict(self):
498 """
499 Return a statedict that describes the target state of this item
500 as configured in the repo. An empty dict means that the item
501 should not exist.
502
503 MAY be overridden by subclasses.
504 """
505 return self.attributes
506
507 def covered_by_autoskip_selector(self, autoskip_selector):
508 """
509 True if this item should be skipped based on the given selector
510 string (e.g. "tag:foo,bundle:bar").
511 """
512 components = [c.strip() for c in autoskip_selector.split(",")]
513 if (
514 "*" in components or
515 self.id in components or
516 "bundle:{}".format(self.bundle.name) in components or
517 "{}:".format(self.ITEM_TYPE_NAME) in components
518 ):
519 return True
520 for tag in self.tags:
521 if "tag:{}".format(tag) in components:
522 return True
523 return False
524
525 def fix(self, status):
526 """
527 This is supposed to actually implement stuff on the target node.
528
529 MUST be overridden by subclasses.
530 """
531 raise NotImplementedError()
532
533 def get_auto_deps(self, items):
534 """
535 Return a list of item IDs this item should have dependencies on.
536
537 Be very careful when using this. There are few circumstances
538 where this is really necessary. Only use this if you really need
539 to examine the actual list of items in order to figure out your
540 dependencies.
541
542 MAY be overridden by subclasses.
543 """
544 return []
545
546 def get_canned_actions(self):
547 """
548 Return a dictionary of action definitions (mapping action names
549 to dicts of action attributes, as in bundles).
550
551 MAY be overridden by subclasses.
552 """
553 return {}
554
555 def get_status(self, cached=True):
556 """
557 Returns an ItemStatus instance describing the current status of
558 the item on the actual node.
559 """
560 with io.job(_(" {node} {bundle} {item} checking...").format(
561 bundle=self.bundle.name,
562 item=self.id,
563 node=self.node.name,
564 )):
565 if not cached:
566 del self._cache['cached_sdict']
567 return ItemStatus(self.cached_cdict, self.cached_sdict)
568
569 def hash(self):
570 return hash_statedict(self.cached_cdict)
571
572 @property
573 def id(self):
574 if self.ITEM_TYPE_NAME == 'action' and ":" in self.name:
575 # canned actions don't have an "action:" prefix
576 return self.name
577 return "{}:{}".format(self.ITEM_TYPE_NAME, self.name)
578
579 def display_dicts(self, cdict, sdict, keys):
580 """
581 Given cdict and sdict as implemented above, modify them to
582 better suit interactive presentation. The keys parameter is the
583 return value of display_keys (see below) and provided for
584 reference only.
585
586 MAY be overridden by subclasses.
587 """
588 return (cdict, sdict)
589
590 def display_keys(self, cdict, sdict, keys):
591 """
592 Given a list of keys whose values differ between cdict and
593 sdict, modify them to better suit presentation to the user.
594
595 MAY be overridden by subclasses.
596 """
597 return keys
598
599 def patch_attributes(self, attributes):
600 """
601 Allows an item to preprocess the attributes it is initialized
602 with. Returns the modified attributes dictionary.
603
604 MAY be overridden by subclasses.
605 """
606 return attributes
607
608 def sdict(self):
609 """
610 Return a statedict that describes the actual state of this item
611 on the node. An empty dict means that the item does not exist
612 on the node.
613
614 For the item to validate as correct, the values for all keys in
615 self.cdict() have to match this statedict.
616
617 MUST be overridden by subclasses.
618 """
619 raise NotImplementedError()
620
621 def test(self):
622 """
623 Used by `bw repo test`. Should do as much as possible to detect
624 what would become a runtime error during a `bw apply`. Files
625 will attempt to render their templates for example.
626
627 SHOULD be overridden by subclasses.
628 """
629 pass
630
631 @classmethod
632 def validate_attributes(cls, bundle, item_id, attributes):
633 """
634 Raises BundleError if something is amiss with the user-specified
635 attributes.
636
637 SHOULD be overridden by subclasses.
638 """
639 pass
640
641 @classmethod
642 def validate_name(cls, bundle, name):
643 """
644 Raise BundleError if the given name is not valid (e.g. contains
645 invalid characters for this kind of item).
646
647 MAY be overridden by subclasses.
648 """
649 pass
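The create/delete/fix triage that `ItemStatus` performs boils down to comparing a configured statedict (cdict) against an observed one (sdict). A simplified standalone version of that decision, where the inline key comparison stands in for the real `diff_keys` helper from `bundlewrap.utils.statedict`:

```python
def triage(cdict, sdict):
    """Classify an item as 'create', 'delete', 'ok', or a list of keys to fix."""
    if cdict is not None and sdict is None:
        return "create"   # like ItemStatus.must_be_created
    if sdict is not None and cdict is None:
        return "delete"   # like ItemStatus.must_be_deleted
    keys_to_fix = sorted(k for k in cdict if sdict.get(k) != cdict[k])
    return keys_to_fix or "ok"

print(triage(
    {"mode": "0644", "owner": "root"},
    {"mode": "0600", "owner": "root"},
))  # → ['mode']
```

Note the asymmetry also visible in `Item.sdict()`'s docstring: only keys present in the cdict are compared, so extra keys in the observed state don't make an item incorrect.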
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from datetime import datetime
4
5 from bundlewrap.exceptions import ActionFailure, BundleError
6 from bundlewrap.items import Item
7 from bundlewrap.utils.ui import io
8 from bundlewrap.utils.text import mark_for_translation as _
9 from bundlewrap.utils.text import blue, bold, wrap_question
10
11
12 class Action(Item):
13 """
14 A command that is run on a node.
15 """
16 BUNDLE_ATTRIBUTE_NAME = 'actions'
17 ITEM_ATTRIBUTES = {
18 'command': None,
19 'expected_stderr': None,
20 'expected_stdout': None,
21 'expected_return_code': 0,
22 'interactive': None,
23 }
24 ITEM_TYPE_NAME = 'action'
25 REQUIRED_ATTRIBUTES = ['command']
26
27 def _get_result(
28 self,
29 autoskip_selector="",
30 my_soft_locks=(),
31 other_peoples_soft_locks=(),
32 interactive=False,
33 interactive_default=True,
34 ):
35
36 if self.covered_by_autoskip_selector(autoskip_selector):
37 io.debug(_(
38 "autoskip matches {item} on {node}"
39 ).format(item=self.id, node=self.node.name))
40 return (self.STATUS_SKIPPED, [_("cmdline")])
41
42 if self._skip_with_soft_locks(my_soft_locks, other_peoples_soft_locks):
43 return (self.STATUS_SKIPPED, [_("soft locked")])
44
45 if interactive is False and self.attributes['interactive'] is True:
46 return (self.STATUS_SKIPPED, [_("interactive only")])
47
48 if self.triggered and not self.has_been_triggered:
49 io.debug(_("skipping {} because it wasn't triggered").format(self.id))
50 return (self.STATUS_SKIPPED, [_("no trigger")])
51
52 if self.unless:
53 with io.job(_(" {node} {bundle} {item} checking 'unless' condition...").format(
54 bundle=self.bundle.name,
55 item=self.id,
56 node=self.node.name,
57 )):
58 unless_result = self.bundle.node.run(
59 self.unless,
60 may_fail=True,
61 )
62 if unless_result.return_code == 0:
63 io.debug(_("{node}:{bundle}:action:{name}: failed 'unless', not running").format(
64 bundle=self.bundle.name,
65 name=self.name,
66 node=self.bundle.node.name,
67 ))
68 return (self.STATUS_SKIPPED, ["unless"])
69
70 if (
71 interactive and
72 self.attributes['interactive'] is not False and
73 not io.ask(
74 wrap_question(
75 self.id,
76 self.attributes['command'],
77 _("Run action {}?").format(
78 bold(self.name),
79 ),
80 prefix="{x} {node} ".format(
81 node=bold(self.node.name),
82 x=blue("?"),
83 ),
84 ),
85 interactive_default,
86 epilogue="{x} {node}".format(
87 node=bold(self.node.name),
88 x=blue("?"),
89 ),
90 )
91 ):
92 return (self.STATUS_SKIPPED, [_("interactive")])
93 try:
94 self.run()
95 return (self.STATUS_ACTION_SUCCEEDED, None)
96 except ActionFailure as exc:
97 return (self.STATUS_FAILED, [str(exc)])
98
99 def apply(self, *args, **kwargs):
100 return self.get_result(*args, **kwargs)
101
102 def cdict(self):
103 raise AttributeError(_("actions don't have cdicts"))
104
105 def get_result(self, *args, **kwargs):
106 self.node.repo.hooks.action_run_start(
107 self.node.repo,
108 self.node,
109 self,
110 )
111 start_time = datetime.now()
112
113 status_code = self._get_result(*args, **kwargs)
114
115 self.node.repo.hooks.action_run_end(
116 self.node.repo,
117 self.node,
118 self,
119 duration=datetime.now() - start_time,
120 status=status_code[0],
121 )
122
123 return status_code
124
125 def run(self):
126 with io.job(_(" {node} {bundle} {item} running...").format(
127 bundle=self.bundle.name,
128 item=self.id,
129 node=self.node.name,
130 )):
131 result = self.bundle.node.run(
132 self.attributes['command'],
133 may_fail=True,
134 )
135
136 if self.attributes['expected_return_code'] is not None and \
137 result.return_code != self.attributes['expected_return_code']:
138 raise ActionFailure(_("wrong return code: {}").format(result.return_code))
139
140 if self.attributes['expected_stderr'] is not None and \
141 result.stderr_text != self.attributes['expected_stderr']:
142 raise ActionFailure(_("wrong stderr"))
143
144 if self.attributes['expected_stdout'] is not None and \
145 result.stdout_text != self.attributes['expected_stdout']:
146 raise ActionFailure(_("wrong stdout"))
147
148 return result
149
150 @classmethod
151 def validate_attributes(cls, bundle, item_id, attributes):
152 if attributes.get('interactive', None) not in (True, False, None):
153 raise BundleError(_(
154 "invalid interactive setting for action '{item}' in bundle '{bundle}'"
155 ).format(item=item_id, bundle=bundle.name))
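`Action.run()` above validates the command result against the `expected_*` attributes, treating `None` as "don't care". The same checks in standalone form, with the result passed as plain values instead of the object `node.run()` returns:

```python
class ActionFailure(Exception):
    pass

def check_result(return_code, stdout, stderr, expected):
    """Raise ActionFailure when a result deviates from the expected values.

    `expected` keys mirror the action attributes; a value of None (or a
    missing key) skips that check, just like the item attributes do.
    """
    if expected.get("return_code") is not None and return_code != expected["return_code"]:
        raise ActionFailure("wrong return code: {}".format(return_code))
    if expected.get("stderr") is not None and stderr != expected["stderr"]:
        raise ActionFailure("wrong stderr")
    if expected.get("stdout") is not None and stdout != expected["stdout"]:
        raise ActionFailure("wrong stdout")

check_result(0, "ok\n", "", {"return_code": 0, "stdout": "ok\n"})  # passes silently
```

Because `expected_return_code` defaults to 0 (not None), an action fails on any non-zero exit unless the bundle explicitly opts out, while stdout/stderr are only checked when set.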
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from collections import defaultdict
4 from os.path import normpath
5 from pipes import quote
6
7 from bundlewrap.exceptions import BundleError
8 from bundlewrap.items import Item
9 from bundlewrap.utils.remote import PathInfo
10 from bundlewrap.utils.text import mark_for_translation as _
11 from bundlewrap.utils.text import is_subdirectory
12 from bundlewrap.utils.ui import io
13
14
15 UNMANAGED_PATH_DESC = _("unmanaged subpaths")
16
17
18 def validator_mode(item_id, value):
19 value = str(value)
20 if not value.isdigit():
21 raise BundleError(
22 _("mode for {item} should be written as digits, got: '{value}'"
23 "").format(item=item_id, value=value)
24 )
25 for digit in value:
26 if int(digit) > 7 or int(digit) < 0:
27 raise BundleError(_(
28 "invalid mode for {item}: '{value}'"
29 ).format(item=item_id, value=value))
30 if not len(value) == 3 and not len(value) == 4:
31 raise BundleError(_(
32 "mode for {item} should be three or four digits long, was: '{value}'"
33 ).format(item=item_id, value=value))
34
35 ATTRIBUTE_VALIDATORS = defaultdict(lambda: lambda id, value: None)
36 ATTRIBUTE_VALIDATORS.update({
37 'mode': validator_mode,
38 })
39
40
41 class Directory(Item):
42 """
43 A directory.
44 """
45 BUNDLE_ATTRIBUTE_NAME = "directories"
46 ITEM_ATTRIBUTES = {
47 'group': None,
48 'mode': None,
49 'owner': None,
50 'purge': False,
51 }
52 ITEM_TYPE_NAME = "directory"
53
54 def __repr__(self):
55 return "<Directory path:{}>".format(
56 quote(self.name),
57 )
58
59 def cdict(self):
60 cdict = {
61 'paths_to_purge': [],
62 'type': 'directory',
63 }
64 for optional_attr in ('group', 'mode', 'owner'):
65 if self.attributes[optional_attr] is not None:
66 cdict[optional_attr] = self.attributes[optional_attr]
67 return cdict
68
69 def display_dicts(self, cdict, sdict, keys):
70 if UNMANAGED_PATH_DESC in keys:
71 cdict[UNMANAGED_PATH_DESC] = cdict['paths_to_purge']
72 sdict[UNMANAGED_PATH_DESC] = sdict['paths_to_purge']
73 del cdict['paths_to_purge']
74 del sdict['paths_to_purge']
75 return (cdict, sdict)
76
77 def display_keys(self, cdict, sdict, keys):
78 try:
79 keys.remove('paths_to_purge')
80 except ValueError:
81 pass
82 else:
83 keys.append(UNMANAGED_PATH_DESC)
84 return keys
85
86 def fix(self, status):
87 if status.must_be_created or 'type' in status.keys_to_fix:
88 # fixing the type fixes everything
89 self._fix_type(status)
90 return
91
92 for path in status.sdict.get('paths_to_purge', []):
93 self.node.run("rm -rf -- {}".format(quote(path)))
94
95 for fix_type in ('mode', 'owner', 'group'):
96 if fix_type in status.keys_to_fix:
97 if fix_type == 'group' and 'owner' in status.keys_to_fix:
98 # owner and group are fixed with a single chown
99 continue
100 getattr(self, "_fix_" + fix_type)(status)
101
102 def _fix_mode(self, status):
103 if self.node.os in self.node.OS_FAMILY_BSD:
104 chmod_command = "chmod {} {}"
105 else:
106 chmod_command = "chmod {} -- {}"
107 self.node.run(chmod_command.format(
108 self.attributes['mode'],
109 quote(self.name),
110 ))
111
112 def _fix_owner(self, status):
113 group = self.attributes['group'] or ""
114 if group:
115 group = ":" + quote(group)
116 if self.node.os in self.node.OS_FAMILY_BSD:
117 command = "chown {}{} {}"
118 else:
119 command = "chown {}{} -- {}"
120 self.node.run(command.format(
121 quote(self.attributes['owner'] or ""),
122 group,
123 quote(self.name),
124 ))
125 _fix_group = _fix_owner
126
127 def _fix_type(self, status):
128 self.node.run("rm -rf -- {}".format(quote(self.name)))
129 self.node.run("mkdir -p -- {}".format(quote(self.name)))
130 if self.attributes['mode']:
131 self._fix_mode(status)
132 if self.attributes['owner'] or self.attributes['group']:
133 self._fix_owner(status)
134
135 def _get_paths_to_purge(self):
136 result = self.node.run("find {} -maxdepth 1 -print0".format(quote(self.name)))
137 for line in result.stdout.split(b"\0"):
138 line = line.decode('utf-8')
139 found = False
140 for item_type in ('directory', 'file', 'symlink'):
141 if found:
142 break
143 for item in self.node.items:
144 if (
145 item.id == "{}:{}".format(item_type, line) or
146 item.id.startswith("{}:{}/".format(item_type, line))
147 ):
148 found = True
149 break
150 if not found:
151 # this file or directory is not managed
152 io.debug((
153 "found unmanaged path below {dirpath} on {node}, "
154 "marking for removal: {path}"
155 ).format(
156 dirpath=self.name,
157 node=self.node.name,
158 path=line,
159 ))
160 yield line
161
162
163
164 def get_auto_deps(self, items):
165 deps = []
166 for item in items:
167 if item == self:
168 continue
169 if ((
170 item.ITEM_TYPE_NAME == "file" and
171 is_subdirectory(item.name, self.name)
172 ) or (
173 item.ITEM_TYPE_NAME in ("file", "symlink") and
174 item.name == self.name
175 )):
176 raise BundleError(_(
177 "{item1} (from bundle '{bundle1}') blocking path to "
178 "{item2} (from bundle '{bundle2}')"
179 ).format(
180 item1=item.id,
181 bundle1=item.bundle.name,
182 item2=self.id,
183 bundle2=self.bundle.name,
184 ))
185 elif item.ITEM_TYPE_NAME == "user" and item.name == self.attributes['owner']:
186 if item.attributes['delete']:
187 raise BundleError(_(
188 "{item1} (from bundle '{bundle1}') depends on item "
189 "{item2} (from bundle '{bundle2}') which is set to be deleted"
190 ).format(
191 item1=self.id,
192 bundle1=self.bundle.name,
193 item2=item.id,
194 bundle2=item.bundle.name,
195 ))
196 else:
197 deps.append(item.id)
198 elif item.ITEM_TYPE_NAME == "group" and item.name == self.attributes['group']:
199 if item.attributes['delete']:
200 raise BundleError(_(
201 "{item1} (from bundle '{bundle1}') depends on item "
202 "{item2} (from bundle '{bundle2}') which is set to be deleted"
203 ).format(
204 item1=self.id,
205 bundle1=self.bundle.name,
206 item2=item.id,
207 bundle2=item.bundle.name,
208 ))
209 else:
210 deps.append(item.id)
211 elif item.ITEM_TYPE_NAME in ("directory", "symlink"):
212 if is_subdirectory(item.name, self.name):
213 deps.append(item.id)
214 return deps
215
216 def sdict(self):
217 path_info = PathInfo(self.node, self.name)
218 if not path_info.exists:
219 return None
220 else:
221 paths_to_purge = []
222 if self.attributes['purge']:
223 paths_to_purge = list(self._get_paths_to_purge())
224 return {
225 'type': path_info.path_type,
226 'mode': path_info.mode,
227 'owner': path_info.owner,
228 'group': path_info.group,
229 'paths_to_purge': paths_to_purge,
230 }
231
232 def patch_attributes(self, attributes):
233 if 'mode' in attributes and attributes['mode'] is not None:
234 attributes['mode'] = str(attributes['mode']).zfill(4)
235 return attributes
236
237 @classmethod
238 def validate_attributes(cls, bundle, item_id, attributes):
239 for key, value in attributes.items():
240 ATTRIBUTE_VALIDATORS[key](item_id, value)
241
242 @classmethod
243 def validate_name(cls, bundle, name):
244 if normpath(name) != name:
245 raise BundleError(_(
246 "'{path}' is an invalid directory path, "
247 "should be '{normpath}' (bundle '{bundle}')"
248 ).format(
249 bundle=bundle.name,
250 normpath=normpath(name),
251 path=name,
252 ))
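The `validate_name` method above accepts a directory path only if it is already normalized. A minimal standalone sketch of the same check using `os.path.normpath` (the function name here is illustrative, not part of BundleWrap's API):

```python
from os.path import normpath

def is_normalized_path(path):
    # Same test as validate_name above: a path passes only if
    # normpath() leaves it unchanged (no "//", "..", or trailing
    # slashes).
    return normpath(path) == path
```

So `is_normalized_path("/etc/ssh")` holds, while `"/etc//ssh/"` or `"/etc/ssh/../ssl"` would be rejected with a BundleError naming the normalized form.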
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from base64 import b64decode
4 from collections import defaultdict
5 from contextlib import contextmanager
6 from datetime import datetime
7 from os.path import basename, dirname, exists, join, normpath
8 from pipes import quote
9 from subprocess import call
10 from sys import exc_info
11 from traceback import format_exception
12
13 from bundlewrap.exceptions import BundleError, FaultUnavailable, TemplateError
14 from bundlewrap.items import BUILTIN_ITEM_ATTRIBUTES, Item
15 from bundlewrap.items.directories import validator_mode
16 from bundlewrap.utils import cached_property, hash_local_file, sha1, tempfile
17 from bundlewrap.utils.remote import PathInfo
18 from bundlewrap.utils.text import force_text, mark_for_translation as _
19 from bundlewrap.utils.text import is_subdirectory
20 from bundlewrap.utils.ui import io
21
22
23 DIFF_MAX_FILE_SIZE = 1024 * 1024 * 5 # bytes
24
25
26 def content_processor_base64(item):
27 # .encode() is required for pypy3 only
28 return b64decode(item._template_content.encode())
29
30
31 def content_processor_jinja2(item):
32 try:
33 from jinja2 import Environment, FileSystemLoader
34 except ImportError:
35 raise TemplateError(_(
36 "Unable to load Jinja2 (required to render {item}). "
37 "You probably have to install it using `pip install Jinja2`."
38 ).format(item=item.id))
39
40 loader = FileSystemLoader(searchpath=[item.item_data_dir, item.item_dir])
41 env = Environment(loader=loader)
42
43 template = env.from_string(item._template_content)
44
45 io.debug("{node}:{bundle}:{item}: rendering with Jinja2...".format(
46 bundle=item.bundle.name,
47 item=item.id,
48 node=item.node.name,
49 ))
50 start = datetime.now()
51 try:
52 content = template.render(
53 item=item,
54 bundle=item.bundle,
55 node=item.node,
56 repo=item.node.repo,
57 **item.attributes['context']
58 )
59 except FaultUnavailable:
60 raise
61 except Exception as e:
62 io.debug("".join(format_exception(*exc_info())))
63 raise TemplateError(_(
64 "Error while rendering template for {node}:{bundle}:{item}: {error}"
65 ).format(
66 bundle=item.bundle.name,
67 error=e,
68 item=item.id,
69 node=item.node.name,
70 ))
71 duration = datetime.now() - start
72 io.debug("{node}:{bundle}:{item}: rendered in {time}s".format(
73 bundle=item.bundle.name,
74 item=item.id,
75 node=item.node.name,
76 time=duration.total_seconds(),
77 ))
78 return content.encode(item.attributes['encoding'])
79
80
81 def content_processor_mako(item):
82 from mako.lookup import TemplateLookup
83 from mako.template import Template
84 template = Template(
85 item._template_content.encode('utf-8'),
86 input_encoding='utf-8',
87 lookup=TemplateLookup(directories=[item.item_data_dir, item.item_dir]),
88 output_encoding=item.attributes['encoding'],
89 )
90 io.debug("{node}:{bundle}:{item}: rendering with Mako...".format(
91 bundle=item.bundle.name,
92 item=item.id,
93 node=item.node.name,
94 ))
95 start = datetime.now()
96 try:
97 content = template.render(
98 item=item,
99 bundle=item.bundle,
100 node=item.node,
101 repo=item.node.repo,
102 **item.attributes['context']
103 )
104 except FaultUnavailable:
105 raise
106 except Exception as e:
107 io.debug("".join(format_exception(*exc_info())))
108 if isinstance(e, NameError) and str(e) == "Undefined":
109 # Mako isn't very verbose here. Try to give a more useful
110 # error message - even though we can't pinpoint the exact
111 # location of the error. :/
112 e = _("Undefined variable (look for '${...}')")
113 raise TemplateError(_(
114 "Error while rendering template for {node}:{bundle}:{item}: {error}"
115 ).format(
116 bundle=item.bundle.name,
117 error=e,
118 item=item.id,
119 node=item.node.name,
120 ))
121 duration = datetime.now() - start
122 io.debug("{node}:{bundle}:{item}: rendered in {time}s".format(
123 bundle=item.bundle.name,
124 item=item.id,
125 node=item.node.name,
126 time=duration.total_seconds(),
127 ))
128 return content
129
130
131 def content_processor_text(item):
132 return item._template_content.encode(item.attributes['encoding'])
133
134
135 CONTENT_PROCESSORS = {
136 'any': lambda item: b"",
137 'base64': content_processor_base64,
138 'binary': None,
139 'jinja2': content_processor_jinja2,
140 'mako': content_processor_mako,
141 'text': content_processor_text,
142 }
143
144
145 def get_remote_file_contents(node, path):
146 """
147 Returns the contents of the given path as a string.
148 """
149 with tempfile() as tmp_file:
150 node.download(path, tmp_file)
151 with open(tmp_file, 'rb') as f:
152 content = f.read()
153 return content
154
155
156 def validator_content_type(item_id, value):
157 if value not in CONTENT_PROCESSORS:
158 raise BundleError(_(
159 "invalid content_type for {item}: '{value}'"
160 ).format(item=item_id, value=value))
161
162
163 ATTRIBUTE_VALIDATORS = defaultdict(lambda: lambda id, value: None)
164 ATTRIBUTE_VALIDATORS.update({
165 'content_type': validator_content_type,
166 'mode': validator_mode,
167 })
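The `defaultdict` construction above means attribute names without a registered validator silently get a no-op, so only `content_type` and `mode` are actually checked. A self-contained sketch of the pattern (`validate_positive` and the `'size'` key are made up for illustration):

```python
from collections import defaultdict

def validate_positive(item_id, value):
    if value <= 0:
        raise ValueError("{}: must be positive".format(item_id))

# Unknown keys fall back to a validator that accepts anything,
# mirroring ATTRIBUTE_VALIDATORS above.
VALIDATORS = defaultdict(lambda: lambda item_id, value: None)
VALIDATORS.update({'size': validate_positive})
```

This lets `validate_attributes` loop over every attribute uniformly without first checking whether a validator exists.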
168
169
170 class File(Item):
171 """
172 A file.
173 """
174 BUNDLE_ATTRIBUTE_NAME = "files"
175 ITEM_ATTRIBUTES = {
176 'content': None,
177 'content_type': 'text',
178 'context': None,
179 'delete': False,
180 'encoding': "utf-8",
181 'group': None,
182 'mode': None,
183 'owner': None,
184 'source': None,
185 'verify_with': None,
186 }
187 ITEM_TYPE_NAME = "file"
188
189 def __repr__(self):
190 return "<File path:{}>".format(quote(self.name))
191
192 @property
193 def _template_content(self):
194 if self.attributes['source'] is not None:
195 filename = join(self.item_data_dir, self.attributes['source'])
196 if exists(filename):
197 with open(filename, 'rb') as f:
198 content = f.read()
199 else:
200 filename = join(self.item_dir, self.attributes['source'])
201 with open(filename, 'rb') as f:
202 content = f.read()
203 return force_text(content)
204 else:
205 return force_text(self.attributes['content'])
206
207 @cached_property
208 def content(self):
209 return CONTENT_PROCESSORS[self.attributes['content_type']](self)
210
211 @cached_property
212 def content_hash(self):
213 if self.attributes['content_type'] == 'binary':
214 return hash_local_file(self.template)
215 else:
216 return sha1(self.content)
217
218 @cached_property
219 def template(self):
220 data_template = join(self.item_data_dir, self.attributes['source'])
221 if exists(data_template):
222 return data_template
223 return join(self.item_dir, self.attributes['source'])
224
225 def cdict(self):
226 if self.attributes['delete']:
227 return None
228 cdict = {'type': 'file'}
229 if self.attributes['content_type'] != 'any':
230 cdict['content_hash'] = self.content_hash
231 for optional_attr in ('group', 'mode', 'owner'):
232 if self.attributes[optional_attr] is not None:
233 cdict[optional_attr] = self.attributes[optional_attr]
234 return cdict
235
236 def fix(self, status):
237 if status.must_be_created or status.must_be_deleted or 'type' in status.keys_to_fix:
238 self._fix_type(status)
239 else:
240 for fix_type in ('content_hash', 'mode', 'owner', 'group'):
241 if fix_type in status.keys_to_fix:
242 if fix_type == 'group' and \
243 'owner' in status.keys_to_fix:
244 # owner and group are fixed with a single chown
245 continue
246 if fix_type in ('mode', 'owner', 'group') and \
247 'content' in status.keys_to_fix:
248 # fixing content implies setting mode and owner/group
249 continue
250 getattr(self, "_fix_" + fix_type)(status)
251
252 def _fix_content_hash(self, status):
253 with self._write_local_file() as local_path:
254 self.node.upload(
255 local_path,
256 self.name,
257 mode=self.attributes['mode'],
258 owner=self.attributes['owner'] or "",
259 group=self.attributes['group'] or "",
260 )
261
262 def _fix_mode(self, status):
263 if self.node.os in self.node.OS_FAMILY_BSD:
264 command = "chmod {} {}"
265 else:
266 command = "chmod {} -- {}"
267 self.node.run(command.format(
268 self.attributes['mode'],
269 quote(self.name),
270 ))
271
272 def _fix_owner(self, status):
273 group = self.attributes['group'] or ""
274 if group:
275 group = ":" + quote(group)
276 if self.node.os in self.node.OS_FAMILY_BSD:
277 command = "chown {}{} {}"
278 else:
279 command = "chown {}{} -- {}"
280 self.node.run(command.format(
281 quote(self.attributes['owner'] or ""),
282 group,
283 quote(self.name),
284 ))
285 _fix_group = _fix_owner
286
287 def _fix_type(self, status):
288 if status.sdict:
289 self.node.run("rm -rf -- {}".format(quote(self.name)))
290 if not status.must_be_deleted:
291 self.node.run("mkdir -p -- {}".format(quote(dirname(self.name))))
292 self._fix_content_hash(status)
293
294 def get_auto_deps(self, items):
295 deps = []
296 for item in items:
297 if item.ITEM_TYPE_NAME == "file" and is_subdirectory(item.name, self.name):
298 raise BundleError(_(
299 "{item1} (from bundle '{bundle1}') blocking path to "
300 "{item2} (from bundle '{bundle2}')"
301 ).format(
302 item1=item.id,
303 bundle1=item.bundle.name,
304 item2=self.id,
305 bundle2=self.bundle.name,
306 ))
307 elif item.ITEM_TYPE_NAME == "user" and item.name == self.attributes['owner']:
308 if item.attributes['delete']:
309 raise BundleError(_(
310 "{item1} (from bundle '{bundle1}') depends on item "
311 "{item2} (from bundle '{bundle2}') which is set to be deleted"
312 ).format(
313 item1=self.id,
314 bundle1=self.bundle.name,
315 item2=item.id,
316 bundle2=item.bundle.name,
317 ))
318 else:
319 deps.append(item.id)
320 elif item.ITEM_TYPE_NAME == "group" and item.name == self.attributes['group']:
321 if item.attributes['delete']:
322 raise BundleError(_(
323 "{item1} (from bundle '{bundle1}') depends on item "
324 "{item2} (from bundle '{bundle2}') which is set to be deleted"
325 ).format(
326 item1=self.id,
327 bundle1=self.bundle.name,
328 item2=item.id,
329 bundle2=item.bundle.name,
330 ))
331 else:
332 deps.append(item.id)
333 elif item.ITEM_TYPE_NAME in ("directory", "symlink"):
334 if is_subdirectory(item.name, self.name):
335 deps.append(item.id)
336 return deps
337
338 def sdict(self):
339 path_info = PathInfo(self.node, self.name)
340 if not path_info.exists:
341 return None
342 else:
343 return {
344 'type': path_info.path_type,
345 'content_hash': path_info.sha1 if path_info.path_type == 'file' else None,
346 'mode': path_info.mode,
347 'owner': path_info.owner,
348 'group': path_info.group,
349 'size': path_info.size,
350 }
351
352 def display_dicts(self, cdict, sdict, keys):
353 if 'content' in keys:
354 del cdict['content_hash']
355 del sdict['content_hash']
356 cdict['content'] = self.content
357 sdict['content'] = get_remote_file_contents(self.node, self.name)
358 return (cdict, sdict)
359
360 def display_keys(self, cdict, sdict, keys):
361 if (
362 'content_hash' in keys and
363 self.attributes['content_type'] not in ('base64', 'binary') and
364 sdict['size'] < DIFF_MAX_FILE_SIZE and
365 len(self.content) < DIFF_MAX_FILE_SIZE
366 ):
367 keys.remove('content_hash')
368 keys.append('content')
369 return keys
370
371 def patch_attributes(self, attributes):
372 if (
373 'content' not in attributes and
374 'source' not in attributes and
375 attributes.get('content_type', 'text') != 'any' and
376 attributes.get('delete', False) is False
377 ):
378 attributes['source'] = basename(self.name)
379 if 'context' not in attributes:
380 attributes['context'] = {}
381 if 'mode' in attributes and attributes['mode'] is not None:
382 attributes['mode'] = str(attributes['mode']).zfill(4)
383 return attributes
384
385 def test(self):
386 if self.attributes['source'] and not exists(self.template):
387 raise BundleError(_(
388 "{item} from bundle '{bundle}' refers to missing "
389 "file '{path}' in its 'source' attribute"
390 ).format(
391 bundle=self.bundle.name,
392 item=self.id,
393 path=self.template,
394 ))
395
396 if not self.attributes['delete'] and self.attributes['content_type'] != 'any':
397 with self._write_local_file():
398 pass
399
400 @classmethod
401 def validate_attributes(cls, bundle, item_id, attributes):
402 if attributes.get('delete', False):
403 for attr in attributes.keys():
404 if attr not in ['delete'] + list(BUILTIN_ITEM_ATTRIBUTES.keys()):
405 raise BundleError(_(
406 "{item} from bundle '{bundle}' cannot have other "
407 "attributes besides 'delete'"
408 ).format(item=item_id, bundle=bundle.name))
409 if 'content' in attributes and 'source' in attributes:
410 raise BundleError(_(
411 "{item} from bundle '{bundle}' cannot have both 'content' and 'source'"
412 ).format(item=item_id, bundle=bundle.name))
413
414 if 'content' in attributes and attributes.get('content_type') == 'binary':
415 raise BundleError(_(
416 "{item} from bundle '{bundle}' cannot have binary inline content "
417 "(use content_type 'base64' instead)"
418 ).format(item=item_id, bundle=bundle.name))
419
420 if 'encoding' in attributes and attributes.get('content_type') in (
421 'any',
422 'base64',
423 'binary',
424 ):
425 raise BundleError(_(
426 "content_type of {item} from bundle '{bundle}' cannot provide different encoding "
427 "(remove the 'encoding' attribute)"
428 ).format(item=item_id, bundle=bundle.name))
429
430 if (
431 attributes.get('content_type', None) == "any" and (
432 'content' in attributes or
433 'encoding' in attributes or
434 'source' in attributes
435 )
436 ):
437 raise BundleError(_(
438 "{item} from bundle '{bundle}' with content_type 'any' "
439 "must not define 'content', 'encoding' and/or 'source'"
440 ).format(item=item_id, bundle=bundle.name))
441
442 for key, value in attributes.items():
443 ATTRIBUTE_VALIDATORS[key](item_id, value)
444
445 @classmethod
446 def validate_name(cls, bundle, name):
447 if normpath(name) == "/":
448 raise BundleError(_("'/' cannot be a file"))
449 if normpath(name) != name:
450 raise BundleError(_(
451 "'{path}' is an invalid file path, should be '{normpath}' (bundle '{bundle}')"
452 ).format(
453 bundle=bundle.name,
454 normpath=normpath(name),
455 path=name,
456 ))
457
458 @contextmanager
459 def _write_local_file(self):
460 """
461 Makes the file contents available at the returned temporary path
462 and performs local verification if necessary or requested.
463
464 The calling method is responsible for cleaning up the file at
465 the returned path (only if not a binary).
466 """
467 with tempfile() as tmp_file:
468 if self.attributes['content_type'] == 'binary':
469 local_path = self.template
470 else:
471 local_path = tmp_file
472 with open(local_path, 'wb') as f:
473 f.write(self.content)
474
475 if self.attributes['verify_with']:
476 cmd = self.attributes['verify_with'].format(quote(local_path))
477 io.debug("calling local verify command for {i}: {c}".format(c=cmd, i=self.id))
478 if call(cmd, shell=True) == 0:
479 io.debug("{i} passed local validation".format(i=self.id))
480 else:
481 raise BundleError(_(
482 "{i} failed local validation using: {c}"
483 ).format(c=cmd, i=self.id))
484
485 yield local_path
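Both `Directory.patch_attributes` and `File.patch_attributes` above zero-pad the mode to four characters, so an int like `644` and the string `"0644"` reported from the node compare equal. A minimal sketch of that normalization (function name is illustrative):

```python
def normalize_mode(mode):
    # Mirrors patch_attributes above: accept int or str and
    # left-pad with zeros to four characters ("644" -> "0644").
    return str(mode).zfill(4) if mode is not None else None
```

Modes that already carry four digits, such as setgid `"2755"`, pass through unchanged.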
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from bundlewrap.exceptions import BundleError
4 from bundlewrap.items import BUILTIN_ITEM_ATTRIBUTES, Item
5 from bundlewrap.items.users import _USERNAME_VALID_CHARACTERS
6 from bundlewrap.utils.text import mark_for_translation as _
7
8
9 def _parse_group_line(line):
10 """
11 Parses a line from /etc/group and returns the information as a
12 dictionary.
13 """
14 result = dict(zip(
15 ('groupname', 'password', 'gid', 'members'),
16 line.strip().split(":"),
17 ))
18 # gid stays a string; patch_attributes stringifies int gids for comparison
19 del result['password'] # nothing useful here
20 return result
21
22
23 class Group(Item):
24 """
25 A group.
26 """
27 BUNDLE_ATTRIBUTE_NAME = "groups"
28 ITEM_ATTRIBUTES = {
29 'delete': False,
30 'gid': None,
31 }
32 ITEM_TYPE_NAME = "group"
33 REQUIRED_ATTRIBUTES = []
34
35 def __repr__(self):
36 return "<Group name:{}>".format(self.name)
37
38 def cdict(self):
39 if self.attributes['delete']:
40 return None
41 cdict = {}
42 if self.attributes.get('gid') is not None:
43 cdict['gid'] = self.attributes['gid']
44 return cdict
45
46 def fix(self, status):
47 if status.must_be_created:
48 if self.attributes['gid'] is None:
49 command = "groupadd {}".format(self.name)
50 else:
51 command = "groupadd -g {gid} {groupname}".format(
52 gid=self.attributes['gid'],
53 groupname=self.name,
54 )
55 self.node.run(command, may_fail=True)
56 elif status.must_be_deleted:
57 self.node.run("groupdel {}".format(self.name), may_fail=True)
58 else:
59 self.node.run(
60 "groupmod -g {gid} {groupname}".format(
61 gid=self.attributes['gid'],
62 groupname=self.name,
63 ),
64 may_fail=True,
65 )
66
67 def sdict(self):
68 # verify content of /etc/group
69 grep_result = self.node.run(
70 "grep -e '^{}:' /etc/group".format(self.name),
71 may_fail=True,
72 )
73 if grep_result.return_code != 0:
74 return None
75 else:
76 return _parse_group_line(grep_result.stdout_text)
77
78 def patch_attributes(self, attributes):
79 if isinstance(attributes.get('gid'), int):
80 attributes['gid'] = str(attributes['gid'])
81 return attributes
82
83 @classmethod
84 def validate_attributes(cls, bundle, item_id, attributes):
85 if attributes.get('delete', False):
86 for attr in attributes.keys():
87 if attr not in ['delete'] + list(BUILTIN_ITEM_ATTRIBUTES.keys()):
88 raise BundleError(_(
89 "{item} from bundle '{bundle}' cannot have other "
90 "attributes besides 'delete'"
91 ).format(item=item_id, bundle=bundle.name))
92
93 @classmethod
94 def validate_name(cls, bundle, name):
95 for char in name:
96 if char not in _USERNAME_VALID_CHARACTERS:
97 raise BundleError(_(
98 "Invalid character in group name '{name}': {char} (bundle '{bundle}')"
99 ).format(
100 char=char,
101 bundle=bundle.name,
102 name=name,
103 ))
104
105 if name.endswith("_") or name.endswith("-"):
106 raise BundleError(_(
107 "Group name '{name}' must not end in dash or underscore (bundle '{bundle}')"
108 ).format(
109 bundle=bundle.name,
110 name=name,
111 ))
112
113 if len(name) > 30:
114 raise BundleError(_(
115 "Group name '{name}' is longer than 30 characters (bundle '{bundle}')"
116 ).format(
117 bundle=bundle.name,
118 name=name,
119 ))
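The `_parse_group_line` helper above maps one colon-separated `/etc/group` line onto named fields and drops the password column. A standalone sketch of the same parse (the sample line is illustrative):

```python
def parse_group_line(line):
    # Same field layout as _parse_group_line above:
    # groupname:password:gid:members, one line from /etc/group.
    result = dict(zip(
        ('groupname', 'password', 'gid', 'members'),
        line.strip().split(":"),
    ))
    del result['password']  # nothing useful here
    return result
```

For example, `parse_group_line("wheel:x:10:root,alice\n")` yields `{'groupname': 'wheel', 'gid': '10', 'members': 'root,alice'}`; note the gid stays a string, which is why `patch_attributes` stringifies integer gids before comparison.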
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from pipes import quote
4
5 from bundlewrap.exceptions import BundleError
6 from bundlewrap.items import Item
7 from bundlewrap.utils.text import mark_for_translation as _
8
9
10 def pkg_install(node, pkgname):
11 return node.run("DEBIAN_FRONTEND=noninteractive "
12 "apt-get -qy -o Dpkg::Options::=--force-confold --no-install-recommends "
13 "install {}".format(quote(pkgname)))
14
15
16 def pkg_installed(node, pkgname):
17 result = node.run(
18 "dpkg -s {} | grep '^Status: '".format(quote(pkgname)),
19 may_fail=True,
20 )
21 if result.return_code != 0 or " installed" not in result.stdout_text:
22 return False
23 else:
24 return True
25
26
27 def pkg_remove(node, pkgname):
28 return node.run("DEBIAN_FRONTEND=noninteractive "
29 "apt-get -qy purge {}".format(quote(pkgname)))
30
31
32 class AptPkg(Item):
33 """
34 A package installed by apt.
35 """
36 BLOCK_CONCURRENT = ["pkg_apt"]
37 BUNDLE_ATTRIBUTE_NAME = "pkg_apt"
38 ITEM_ATTRIBUTES = {
39 'installed': True,
40 }
41 ITEM_TYPE_NAME = "pkg_apt"
42
43 def __repr__(self):
44 return "<AptPkg name:{} installed:{}>".format(
45 self.name,
46 self.attributes['installed'],
47 )
48
49 def fix(self, status):
50 if self.attributes['installed'] is False:
51 pkg_remove(self.node, self.name)
52 else:
53 pkg_install(self.node, self.name)
54
55 def sdict(self):
56 return {
57 'installed': pkg_installed(self.node, self.name),
58 }
59
60 @classmethod
61 def validate_attributes(cls, bundle, item_id, attributes):
62 if not isinstance(attributes.get('installed', True), bool):
63 raise BundleError(_(
64 "expected boolean for 'installed' on {item} in bundle '{bundle}'"
65 ).format(
66 bundle=bundle.name,
67 item=item_id,
68 ))
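`pkg_installed` above derives the install state from two signals: the grep return code and the presence of `" installed"` in the dpkg status line. A minimal standalone sketch of that decision (function name is illustrative):

```python
def parse_dpkg_status(return_code, status_line):
    # Mirrors pkg_installed above: "dpkg -s" prints e.g.
    # "Status: install ok installed" for installed packages and
    # "Status: deinstall ok config-files" for removed ones that
    # still have config files around.
    return return_code == 0 and " installed" in status_line
```

The leading space in `" installed"` matters: it keeps states like `config-files` from matching on a substring.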
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from pipes import quote
4
5 from bundlewrap.exceptions import BundleError
6 from bundlewrap.items import Item
7 from bundlewrap.utils.text import mark_for_translation as _
8
9
10 def pkg_install(node, pkgname):
11 return node.run("dnf -d0 -e0 -y install {}".format(quote(pkgname)))
12
13
14 def pkg_installed(node, pkgname):
15 result = node.run(
16 "dnf -d0 -e0 list installed {}".format(quote(pkgname)),
17 may_fail=True,
18 )
19 if result.return_code != 0:
20 return False
21 else:
22 return True
23
24
25 def pkg_remove(node, pkgname):
26 return node.run("dnf -d0 -e0 -y remove {}".format(quote(pkgname)))
27
28
29 class DnfPkg(Item):
30 """
31 A package installed by dnf.
32 """
33 BLOCK_CONCURRENT = ["pkg_dnf", "pkg_yum"]
34 BUNDLE_ATTRIBUTE_NAME = "pkg_dnf"
35 ITEM_ATTRIBUTES = {
36 'installed': True,
37 }
38 ITEM_TYPE_NAME = "pkg_dnf"
39
40 def __repr__(self):
41 return "<DnfPkg name:{} installed:{}>".format(
42 self.name,
43 self.attributes['installed'],
44 )
45
46 def fix(self, status):
47 if self.attributes['installed'] is False:
48 pkg_remove(self.node, self.name)
49 else:
50 pkg_install(self.node, self.name)
51
52 def sdict(self):
53 return {
54 'installed': pkg_installed(self.node, self.name),
55 }
56
57 @classmethod
58 def validate_attributes(cls, bundle, item_id, attributes):
59 if not isinstance(attributes.get('installed', True), bool):
60 raise BundleError(_(
61 "expected boolean for 'installed' on {item} in bundle '{bundle}'"
62 ).format(
63 bundle=bundle.name,
64 item=item_id,
65 ))
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from pipes import quote
4 import re
5
6 from bundlewrap.exceptions import BundleError
7 from bundlewrap.items import Item
8 from bundlewrap.utils.text import mark_for_translation as _
9
10
11 PKGSPEC_REGEX = re.compile(r"^(.+)-(\d.+)$")
12
13
14 def pkg_install(node, pkgname, version):
15 full_name = "{}-{}".format(pkgname, version) if version else pkgname
16 return node.run("pkg_add -r -I {}".format(quote(full_name)))
17
18
19 def pkg_installed(node, pkgname):
20 result = node.run(
21 "pkg_info | cut -f 1 -d ' '",
22 may_fail=True,
23 )
24 for line in result.stdout.decode('utf-8').strip().split("\n"):
25 installed_package, installed_version = PKGSPEC_REGEX.match(line).groups()
26 if installed_package == pkgname:
27 return installed_version
28 return False
29
30
31 def pkg_remove(node, pkgname):
32 return node.run("pkg_delete -I -D dependencies {}".format(quote(pkgname)))
33
34
35 class OpenBSDPkg(Item):
36 """
37 A package installed by pkg_add/pkg_delete.
38 """
39 BLOCK_CONCURRENT = ["pkg_openbsd"]
40 BUNDLE_ATTRIBUTE_NAME = "pkg_openbsd"
41 ITEM_ATTRIBUTES = {
42 'installed': True,
43 'version': None,
44 }
45 ITEM_TYPE_NAME = "pkg_openbsd"
46
47 def __repr__(self):
48 return "<OpenBSDPkg name:{} installed:{}>".format(
49 self.name,
50 self.attributes['installed'],
51 )
52
53 def cdict(self):
54 cdict = self.attributes.copy()
55 if cdict['version'] is None or not cdict['installed']:
56 del cdict['version']
57 return cdict
58
59 def fix(self, status):
60 if self.attributes['installed'] is False:
61 pkg_remove(self.node, self.name)
62 else:
63 pkg_install(self.node, self.name, self.attributes['version'])
64
65 def sdict(self):
66 version = pkg_installed(self.node, self.name)
67 return {
68 'installed': bool(version),
69 'version': version if version else _("none"),
70 }
71
72 @classmethod
73 def validate_attributes(cls, bundle, item_id, attributes):
74 if not isinstance(attributes.get('installed', True), bool):
75 raise BundleError(_(
76 "expected boolean for 'installed' on {item} in bundle '{bundle}'"
77 ).format(
78 bundle=bundle.name,
79 item=item_id,
80 ))
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from os.path import basename, join
4 from pipes import quote
5
6 from bundlewrap.exceptions import BundleError
7 from bundlewrap.items import Item
8 from bundlewrap.utils.text import mark_for_translation as _
9
10
11 def pkg_install(node, pkgname, operation='S'):
12 return node.run("pacman --noconfirm -{} {}".format(operation,
13 quote(pkgname)))
14
15
16 def pkg_install_tarball(node, local_file):
17 remote_file = "/tmp/{}".format(basename(local_file))
18 node.upload(local_file, remote_file)
19 pkg_install(node, remote_file, operation='U')
20 node.run("rm -- {}".format(quote(remote_file)))
21
22
23 def pkg_installed(node, pkgname):
24 result = node.run(
25 "pacman -Q {}".format(quote(pkgname)),
26 may_fail=True,
27 )
28 if result.return_code != 0:
29 return False
30 else:
31 return True
32
33
34 def pkg_remove(node, pkgname):
35 return node.run("pacman --noconfirm -Rs {}".format(quote(pkgname)))
36
37
38 class PacmanPkg(Item):
39 """
40 A package installed by pacman.
41 """
42 BLOCK_CONCURRENT = ["pkg_pacman"]
43 BUNDLE_ATTRIBUTE_NAME = "pkg_pacman"
44 ITEM_ATTRIBUTES = {
45 'installed': True,
46 'tarball': None,
47 }
48 ITEM_TYPE_NAME = "pkg_pacman"
49
50 def __repr__(self):
51 return "<PacmanPkg name:{} installed:{} tarball:{}>".format(
52 self.name,
53 self.attributes['installed'],
54 self.attributes['tarball'],
55 )
56
57 def cdict(self):
58 # TODO/FIXME: this is bad because it ignores tarball
59 return {'installed': self.attributes['installed']}
60
61 def fix(self, status):
62 if self.attributes['installed'] is False:
63 pkg_remove(self.node, self.name)
64 else:
65 if self.attributes['tarball']:
66 pkg_install_tarball(self.node, join(self.item_dir,
67 self.attributes['tarball']))
68 else:
69 pkg_install(self.node, self.name)
70
71 def sdict(self):
72 return {
73 'installed': pkg_installed(self.node, self.name),
74 }
75
76 @classmethod
77 def validate_attributes(cls, bundle, item_id, attributes):
78 if not isinstance(attributes.get('installed', True), bool):
79 raise BundleError(_(
80 "expected boolean for 'installed' on {item} in bundle '{bundle}'"
81 ).format(
82 bundle=bundle.name,
83 item=item_id,
84 ))
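`pkg_install_tarball` above uploads the local package file to `/tmp` under its original file name before installing it with `pacman -U` and removing it again. A sketch of just the path mapping (function name is illustrative):

```python
from os.path import basename

def remote_tarball_path(local_file):
    # Mirrors pkg_install_tarball above: the tarball lands in /tmp
    # under its original file name on the target node.
    return "/tmp/{}".format(basename(local_file))
```

For example, a bundle-local `/repo/bundles/foo/pkg-1.0.tar.xz` is uploaded to `/tmp/pkg-1.0.tar.xz` on the node.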
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from os.path import join, split
4 from pipes import quote
5
6 from bundlewrap.exceptions import BundleError
7 from bundlewrap.items import Item
8 from bundlewrap.utils.text import mark_for_translation as _
9
10
11 def pkg_install(node, pkgname, version=None):
12 if version:
13 pkgname = "{}=={}".format(pkgname, version)
14 pip_path, pkgname = split_path(pkgname)
15 return node.run("{} install -U {}".format(quote(pip_path), quote(pkgname)))
16
17
18 def pkg_installed(node, pkgname):
19 pip_path, pkgname = split_path(pkgname)
20 result = node.run(
21 "{} freeze | grep -i '^{}=='".format(quote(pip_path), pkgname),
22 may_fail=True,
23 )
24 if result.return_code != 0:
25 return False
26 else:
27 return result.stdout_text.split("=")[-1].strip()
28
29
30 def pkg_remove(node, pkgname):
31 pip_path, pkgname = split_path(pkgname)
32 return node.run("{} uninstall -y {}".format(quote(pip_path), quote(pkgname)))
33
34
35 class PipPkg(Item):
36 """
37 A package installed by pip.
38 """
39 BLOCK_CONCURRENT = ["pkg_pip"]
40 BUNDLE_ATTRIBUTE_NAME = "pkg_pip"
41 ITEM_ATTRIBUTES = {
42 'installed': True,
43 'version': None,
44 }
45 ITEM_TYPE_NAME = "pkg_pip"
46
47 def __repr__(self):
48 return "<PipPkg name:{} installed:{}>".format(
49 self.name,
50 self.attributes['installed'],
51 )
52
53 def cdict(self):
54 cdict = {'installed': self.attributes['installed']}
55 if self.attributes.get('version') is not None:
56 cdict['version'] = self.attributes['version']
57 return cdict
58
59 def get_auto_deps(self, items):
60 for item in items:
61 if item == self:
62 continue
63 if (
64 item.ITEM_TYPE_NAME == self.ITEM_TYPE_NAME and
65 item.name.lower() == self.name.lower()
66 ):
67 raise BundleError(_(
68 "{item1} (from bundle '{bundle1}') has name collision with "
69 "{item2} (from bundle '{bundle2}')"
70 ).format(
71 item1=item.id,
72 bundle1=item.bundle.name,
73 item2=self.id,
74 bundle2=self.bundle.name,
75 ))
76 return []
77
78 def fix(self, status):
79 if self.attributes['installed'] is False:
80 pkg_remove(self.node, self.name)
81 else:
82 pkg_install(self.node, self.name, version=self.attributes['version'])
83
84 def sdict(self):
85 install_status = pkg_installed(self.node, self.name)
86 return {
87 'installed': bool(install_status),
88 'version': None if install_status is False else install_status,
89 }
90
91 @classmethod
92 def validate_attributes(cls, bundle, item_id, attributes):
93 if not isinstance(attributes.get('installed', True), bool):
94 raise BundleError(_(
95 "expected boolean for 'installed' on {item} in bundle '{bundle}'"
96 ).format(
97 bundle=bundle.name,
98 item=item_id,
99 ))
100
101 if 'version' in attributes and attributes.get('installed', True) is False:
102 raise BundleError(_(
103 "cannot set version for uninstalled package on {item} in bundle '{bundle}'"
104 ).format(
105 bundle=bundle.name,
106 item=item_id,
107 ))
108
109
110 def split_path(pkgname):
111 virtualenv, pkgname = split(pkgname)
112 pip_path = join(virtualenv, "bin", "pip") if virtualenv else "pip"
113 return pip_path, pkgname
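The `split_path` helper above turns a possibly virtualenv-prefixed package name into the matching pip binary path. A standalone sketch of the same logic (the paths below are illustrative):

```python
from os.path import join, split

def split_path(pkgname):
    # "/opt/venv/requests" -> use the pip inside the virtualenv;
    # a bare "requests" -> fall back to the system-wide pip.
    virtualenv, pkgname = split(pkgname)
    pip_path = join(virtualenv, "bin", "pip") if virtualenv else "pip"
    return pip_path, pkgname

print(split_path("/opt/venv/requests"))  # ('/opt/venv/bin/pip', 'requests')
print(split_path("requests"))            # ('pip', 'requests')
```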
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from pipes import quote
4
5 from bundlewrap.exceptions import BundleError
6 from bundlewrap.items import Item
7 from bundlewrap.utils.text import mark_for_translation as _
8
9
10 def pkg_install(node, pkgname):
11 return node.run("yum -d0 -e0 -y install {}".format(quote(pkgname)))
12
13
14 def pkg_installed(node, pkgname):
15 result = node.run(
16 "yum -d0 -e0 list installed {}".format(quote(pkgname)),
17 may_fail=True,
18 )
19 return result.return_code == 0
23
24
25 def pkg_remove(node, pkgname):
26 return node.run("yum -d0 -e0 -y remove {}".format(quote(pkgname)))
27
28
29 class YumPkg(Item):
30 """
31 A package installed by yum.
32 """
33 BLOCK_CONCURRENT = ["pkg_yum", "pkg_dnf"]
34 BUNDLE_ATTRIBUTE_NAME = "pkg_yum"
35 ITEM_ATTRIBUTES = {
36 'installed': True,
37 }
38 ITEM_TYPE_NAME = "pkg_yum"
39
40 def __repr__(self):
41 return "<YumPkg name:{} installed:{}>".format(
42 self.name,
43 self.attributes['installed'],
44 )
45
46 def fix(self, status):
47 if self.attributes['installed'] is False:
48 pkg_remove(self.node, self.name)
49 else:
50 pkg_install(self.node, self.name)
51
52 def sdict(self):
53 return {
54 'installed': pkg_installed(self.node, self.name),
55 }
56
57 @classmethod
58 def validate_attributes(cls, bundle, item_id, attributes):
59 if not isinstance(attributes.get('installed', True), bool):
60 raise BundleError(_(
61 "expected boolean for 'installed' on {item} in bundle '{bundle}'"
62 ).format(
63 bundle=bundle.name,
64 item=item_id,
65 ))
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from pipes import quote
4
5 from bundlewrap.exceptions import BundleError
6 from bundlewrap.items import Item
7 from bundlewrap.utils.text import mark_for_translation as _
8
9
10 ZYPPER_OPTS = "--non-interactive " + \
11 "--non-interactive-include-reboot-patches " + \
12 "--quiet"
13
14
15 def pkg_install(node, pkgname):
16 return node.run("zypper {} install {}".format(ZYPPER_OPTS, quote(pkgname)))
17
18
19 def pkg_installed(node, pkgname):
20 result = node.run(
21 "zypper search --match-exact --installed-only "
22 "--type package {}".format(quote(pkgname)),
23 may_fail=True,
24 )
25 return result.return_code == 0
29
30
31 def pkg_remove(node, pkgname):
32 return node.run("zypper {} remove {}".format(ZYPPER_OPTS, quote(pkgname)))
33
34
35 class ZypperPkg(Item):
36 """
37 A package installed by zypper.
38 """
39 BLOCK_CONCURRENT = ["pkg_zypper"]
40 BUNDLE_ATTRIBUTE_NAME = "pkg_zypper"
41 ITEM_ATTRIBUTES = {
42 'installed': True,
43 }
44 ITEM_TYPE_NAME = "pkg_zypper"
45
46 def __repr__(self):
47 return "<ZypperPkg name:{} installed:{}>".format(
48 self.name,
49 self.attributes['installed'],
50 )
51
52 def fix(self, status):
53 if self.attributes['installed'] is False:
54 pkg_remove(self.node, self.name)
55 else:
56 pkg_install(self.node, self.name)
57
58 def sdict(self):
59 return {
60 'installed': pkg_installed(self.node, self.name),
61 }
62
63 @classmethod
64 def validate_attributes(cls, bundle, item_id, attributes):
65 if not isinstance(attributes.get('installed', True), bool):
66 raise BundleError(_(
67 "expected boolean for 'installed' on {item} in bundle '{bundle}'"
68 ).format(
69 bundle=bundle.name,
70 item=item_id,
71 ))
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from pipes import quote
4
5 from bundlewrap.exceptions import BundleError
6 from bundlewrap.items import Item
7 from bundlewrap.utils.text import force_text, mark_for_translation as _
8
9
10 def create_db(node, name, owner):
11 return node.run("sudo -u postgres createdb -wO {owner} {name}".format(
12 name=quote(name),
13 owner=quote(owner),
14 ))
15
16
17 def drop_db(node, name):
18 return node.run("sudo -u postgres dropdb -w {}".format(quote(name)))
19
20
21 def get_databases(node):
22 output = node.run("echo '\\l' | sudo -u postgres psql -Anqt -F '|' | grep '|'").stdout
23 result = {}
24 for line in force_text(output).strip().split("\n"):
25 db, owner = line.strip().split("|", 2)[:2]
26 result[db] = {
27 'owner': owner,
28 }
29 return result
30
31
32 def set_owner(node, name, owner):
33 return node.run(
34 "echo 'ALTER DATABASE {name} OWNER TO {owner}' | "
35 "sudo -u postgres psql -nqw".format(
36 name=name,
37 owner=owner,
38 ),
39 )
40
41
42 class PostgresDB(Item):
43 """
44 A PostgreSQL database.
45 """
46 BUNDLE_ATTRIBUTE_NAME = "postgres_dbs"
47 ITEM_ATTRIBUTES = {
48 'delete': False,
49 'owner': "postgres",
50 }
51 ITEM_TYPE_NAME = "postgres_db"
52
53 def __repr__(self):
54 return "<PostgresDB name:{}>".format(self.name)
55
56 def cdict(self):
57 if self.attributes['delete']:
58 return None
59 else:
60 return {'owner': self.attributes['owner']}
61
62 def fix(self, status):
63 if status.must_be_deleted:
64 drop_db(self.node, self.name)
65 elif status.must_be_created:
66 create_db(self.node, self.name, self.attributes['owner'])
67 elif 'owner' in status.keys_to_fix:
68 set_owner(self.node, self.name, self.attributes['owner'])
69 else:
70 raise AssertionError("this shouldn't happen")
71
72 def get_auto_deps(self, items):
73 deps = []
74 for item in items:
75 if item.ITEM_TYPE_NAME == "postgres_role" and item.name == self.attributes['owner']:
76 deps.append(item.id)
77 return deps
78
79 def sdict(self):
80 databases = get_databases(self.node)
81 if self.name not in databases:
82 return None
83 else:
84 return {'owner': databases[self.name]['owner']}
85
86 @classmethod
87 def validate_attributes(cls, bundle, item_id, attributes):
88 if not isinstance(attributes.get('delete', True), bool):
89 raise BundleError(_(
90 "expected boolean for 'delete' on {item} in bundle '{bundle}'"
91 ).format(
92 bundle=bundle.name,
93 item=item_id,
94 ))
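The line-splitting step inside `get_databases` above can be exercised offline. This sketch reproduces just that parsing on sample `psql -Anqt -F '|'` output (the sample rows are made up):

```python
def parse_psql_dblist(output):
    # Each line looks like "dbname|owner|encoding|...". Only the first two
    # fields matter here; later fields (e.g. access privileges) may contain
    # extra "|" characters, hence the split limit.
    result = {}
    for line in output.strip().split("\n"):
        db, owner = line.strip().split("|", 2)[:2]
        result[db] = {'owner': owner}
    return result

sample = "postgres|postgres|UTF8|en_US.UTF-8\nmyapp|appuser|UTF8|en_US.UTF-8\n"
print(parse_psql_dblist(sample))
```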
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from pipes import quote
4 
5 from passlib.apps import postgres_context
6 
7 from bundlewrap.exceptions import BundleError
8 from bundlewrap.items import Item
9 from bundlewrap.utils.text import force_text, mark_for_translation as _
10 
11 
12 AUTHID_COLUMNS = {
13 "rolcanlogin": 'can_login',
14 "rolsuper": 'superuser',
15 "rolpassword": 'password_hash',
16 }
17 
18 
19 def delete_role(node, role):
20 node.run("sudo -u postgres dropuser -w {}".format(quote(role)))
19
20
21 def fix_role(node, role, attrs, create=False):
22 password = " PASSWORD '{}'".format(attrs['password_hash'])
23 node.run(
24 "echo \"{operation} ROLE {role} WITH LOGIN {superuser}SUPERUSER{password}\" "
25 "| sudo -u postgres psql -nqw".format(
26 operation="CREATE" if create else "ALTER",
27 password="" if attrs['password_hash'] is None else password,
28 role=role,
29 superuser="" if attrs['superuser'] is True else "NO",
30 )
31 )
32
33
34 def get_role(node, role):
35 result = node.run("echo \"SELECT rolcanlogin, rolsuper, rolpassword from pg_authid "
36 "WHERE rolname='{}'\" "
37 "| sudo -u postgres psql -Anqwx -F '|'".format(role))
38
39 role_attrs = {}
40 for line in force_text(result.stdout).strip().split("\n"):
41 try:
42 key, value = line.split("|")
43 except ValueError:
44 pass
45 else:
46 role_attrs[AUTHID_COLUMNS[key]] = value
47
48 for bool_attr in ('can_login', 'superuser'):
49 if bool_attr in role_attrs:
50 role_attrs[bool_attr] = role_attrs[bool_attr] == "t"
51
52 return role_attrs if role_attrs else None
53
54
55 class PostgresRole(Item):
56 """
57 A PostgreSQL role.
58 """
59 BUNDLE_ATTRIBUTE_NAME = "postgres_roles"
60 ITEM_ATTRIBUTES = {
61 'can_login': True,
62 'delete': False,
63 'password': None,
64 'password_hash': None,
65 'superuser': False,
66 }
67 ITEM_TYPE_NAME = "postgres_role"
68
69 def __repr__(self):
70 return "<PostgresRole name:{}>".format(self.name)
71
72 def cdict(self):
73 if self.attributes['delete']:
74 return None
75 cdict = self.attributes.copy()
76 del cdict['delete']
77 del cdict['password']
78 return cdict
79
80 def fix(self, status):
81 if status.must_be_deleted:
82 delete_role(self.node, self.name)
83 elif status.must_be_created:
84 fix_role(self.node, self.name, self.attributes, create=True)
85 else:
86 fix_role(self.node, self.name, self.attributes)
87
88 def sdict(self):
89 return get_role(self.node, self.name)
90
91 def patch_attributes(self, attributes):
92 if 'password' in attributes:
93 attributes['password_hash'] = postgres_context.encrypt(
94 force_text(attributes['password']),
95 user=self.name,
96 )
97 return attributes
98
99 @classmethod
100 def validate_attributes(cls, bundle, item_id, attributes):
101 if not attributes.get('delete', False):
102 if attributes.get('password') is None and attributes.get('password_hash') is None:
103 raise BundleError(_(
104 "expected either 'password' or 'password_hash' on {item} in bundle '{bundle}'"
105 ).format(
106 bundle=bundle.name,
107 item=item_id,
108 ))
109 if attributes.get('password') is not None and attributes.get('password_hash') is not None:
110 raise BundleError(_(
111 "can't define both 'password' and 'password_hash' on {item} in bundle '{bundle}'"
112 ).format(
113 bundle=bundle.name,
114 item=item_id,
115 ))
116 if not isinstance(attributes.get('delete', True), bool):
117 raise BundleError(_(
118 "expected boolean for 'delete' on {item} in bundle '{bundle}'"
119 ).format(
120 bundle=bundle.name,
121 item=item_id,
122 ))
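For reference, `postgres_context.encrypt` used in `patch_attributes` above produces PostgreSQL's legacy md5 scheme, which is simply `"md5" + md5(password + username)`. A minimal sketch using only hashlib (the credentials below are made up):

```python
from hashlib import md5

def postgres_md5_hash(password, username):
    # PostgreSQL's legacy md5 auth stores "md5" followed by the hex digest
    # of md5(password || username) -- 35 characters in total.
    return "md5" + md5((password + username).encode("utf-8")).hexdigest()

print(postgres_md5_hash("secret", "appuser"))
```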
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from pipes import quote
4
5 from bundlewrap.exceptions import BundleError
6 from bundlewrap.items import Item
7 from bundlewrap.utils.text import mark_for_translation as _
8
9
10 def svc_start(node, svcname):
11 return node.run("/etc/rc.d/{} start".format(quote(svcname)))
12
13
14 def svc_running(node, svcname):
15 result = node.run("/etc/rc.d/{} check".format(quote(svcname)), may_fail=True)
16 return "ok" in result.stdout_text
17
18
19 def svc_stop(node, svcname):
20 return node.run("/etc/rc.d/{} stop".format(quote(svcname)))
21
22
23 def svc_enable(node, svcname):
24 return node.run("rcctl set {} status on".format(quote(svcname)))
25
26
27 def svc_enabled(node, svcname):
28 result = node.run(
29 "rcctl ls on | grep '^{}$'".format(svcname),
30 may_fail=True,
31 )
32 return result.return_code == 0
33
34
35 def svc_disable(node, svcname):
36 return node.run("rcctl set {} status off".format(quote(svcname)))
37
38
39 class SvcOpenBSD(Item):
40 """
41 A service managed by OpenBSD rc.d.
42 """
43 BUNDLE_ATTRIBUTE_NAME = "svc_openbsd"
44 ITEM_ATTRIBUTES = {
45 'running': True,
46 'enabled': True
47 }
48 ITEM_TYPE_NAME = "svc_openbsd"
49
50 def __repr__(self):
51 return "<SvcOpenBSD name:{} running:{} enabled:{}>".format(
52 self.name,
53 self.attributes['running'],
54 self.attributes['enabled'],
55 )
56
57 def fix(self, status):
58 if 'enabled' in status.keys_to_fix:
59 if self.attributes['enabled'] is False:
60 svc_disable(self.node, self.name)
61 else:
62 svc_enable(self.node, self.name)
63 
64 if 'running' in status.keys_to_fix:
65 if self.attributes['running'] is False:
66 svc_stop(self.node, self.name)
67 else:
68 svc_start(self.node, self.name)
68
69 def get_canned_actions(self):
70 return {
71 'restart': {
72 'command': "/etc/rc.d/{} restart".format(self.name),
73 'needs': [self.id],
74 },
75 'stopstart': {
76 'command': "/etc/rc.d/{0} stop && /etc/rc.d/{0} start".format(self.name),
77 'needs': [self.id],
78 },
79 }
80
81 def sdict(self):
82 return {
83 'enabled': svc_enabled(self.node, self.name),
84 'running': svc_running(self.node, self.name),
85 }
86
87 @classmethod
88 def validate_attributes(cls, bundle, item_id, attributes):
89 for attribute in ('enabled', 'running'):
90 if not isinstance(attributes.get(attribute, True), bool):
91 raise BundleError(_(
92 "expected boolean for '{attribute}' on {item} in bundle '{bundle}'"
93 ).format(
94 attribute=attribute,
95 bundle=bundle.name,
96 item=item_id,
97 ))
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from pipes import quote
4
5 from bundlewrap.exceptions import BundleError
6 from bundlewrap.items import Item
7 from bundlewrap.utils.text import mark_for_translation as _
8
9
10 def svc_start(node, svcname):
11 return node.run("systemctl start -- {}".format(quote(svcname)))
12
13
14 def svc_running(node, svcname):
15 result = node.run(
16 "systemctl status -- {}".format(quote(svcname)),
17 may_fail=True,
18 )
19 return result.return_code == 0
20
21
22 def svc_stop(node, svcname):
23 return node.run("systemctl stop -- {}".format(quote(svcname)))
24
25
26 def svc_enable(node, svcname):
27 return node.run("systemctl enable -- {}".format(quote(svcname)))
28
29
30 def svc_enabled(node, svcname):
31 result = node.run(
32 "systemctl is-enabled -- {}".format(quote(svcname)),
33 may_fail=True,
34 )
35 return result.return_code == 0
36
37
38 def svc_disable(node, svcname):
39 return node.run("systemctl disable -- {}".format(quote(svcname)))
40
41
42 class SvcSystemd(Item):
43 """
44 A service managed by systemd.
45 """
46 BUNDLE_ATTRIBUTE_NAME = "svc_systemd"
47 ITEM_ATTRIBUTES = {
48 'enabled': None,
49 'running': True,
50 }
51 ITEM_TYPE_NAME = "svc_systemd"
52
53 def __repr__(self):
54 return "<SvcSystemd name:{} enabled:{} running:{}>".format(
55 self.name,
56 self.attributes['enabled'],
57 self.attributes['running'],
58 )
59
60 # Note for bw 3.0: We're planning to make "True" the default value
61 # for "enabled". Once that's done, we can remove this custom cdict.
62 def cdict(self):
63 cdict = self.attributes.copy()
64 if 'enabled' in cdict and cdict['enabled'] is None:
65 del cdict['enabled']
66 return cdict
67
68 def fix(self, status):
69 if 'enabled' in status.keys_to_fix:
70 if self.attributes['enabled'] is False:
71 svc_disable(self.node, self.name)
72 else:
73 svc_enable(self.node, self.name)
74
75 if 'running' in status.keys_to_fix:
76 if self.attributes['running'] is False:
77 svc_stop(self.node, self.name)
78 else:
79 svc_start(self.node, self.name)
80
81 def get_canned_actions(self):
82 return {
83 'reload': {
84 'command': "systemctl reload -- {}".format(self.name),
85 'needs': [self.id],
86 },
87 'restart': {
88 'command': "systemctl restart -- {}".format(self.name),
89 'needs': [self.id],
90 },
91 }
92
93 def sdict(self):
94 return {
95 'enabled': svc_enabled(self.node, self.name),
96 'running': svc_running(self.node, self.name),
97 }
98
99 @classmethod
100 def validate_attributes(cls, bundle, item_id, attributes):
101 for attribute in ('enabled', 'running'):
102 if not isinstance(attributes.get(attribute, True), bool):
103 raise BundleError(_(
104 "expected boolean for '{attribute}' on {item} in bundle '{bundle}'"
105 ).format(
106 attribute=attribute,
107 bundle=bundle.name,
108 item=item_id,
109 ))
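The custom `cdict` above implements a tri-state: setting 'enabled' to None drops the key entirely, so enablement is never compared against the actual state reported by `sdict`. The effect in isolation:

```python
def systemd_cdict(attributes):
    # 'enabled': None means "do not manage enablement at all" -- the key
    # is removed so it never shows up as a difference against sdict().
    cdict = dict(attributes)
    if cdict.get('enabled') is None:
        cdict.pop('enabled', None)
    return cdict

print(systemd_cdict({'enabled': None, 'running': True}))  # {'running': True}
print(systemd_cdict({'enabled': True, 'running': True}))
```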
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from pipes import quote
4
5 from bundlewrap.exceptions import BundleError
6 from bundlewrap.items import Item
7 from bundlewrap.utils.text import mark_for_translation as _
8
9
10 def svc_start(node, svcname):
11 return node.run("/etc/init.d/{} start".format(quote(svcname)))
12
13
14 def svc_running(node, svcname):
15 result = node.run(
16 "/etc/init.d/{} status".format(quote(svcname)),
17 may_fail=True,
18 )
19 return result.return_code == 0
20
21
22 def svc_stop(node, svcname):
23 return node.run("/etc/init.d/{} stop".format(quote(svcname)))
24
25
26 class SvcSystemV(Item):
27 """
28 A service managed by traditional System V init scripts.
29 """
30 BUNDLE_ATTRIBUTE_NAME = "svc_systemv"
31 ITEM_ATTRIBUTES = {
32 'running': True,
33 }
34 ITEM_TYPE_NAME = "svc_systemv"
35
36 def __repr__(self):
37 return "<SvcSystemV name:{} running:{}>".format(
38 self.name,
39 self.attributes['running'],
40 )
41
42 def fix(self, status):
43 if self.attributes['running'] is False:
44 svc_stop(self.node, self.name)
45 else:
46 svc_start(self.node, self.name)
47
48 def get_canned_actions(self):
49 return {
50 'reload': {
51 'command': "/etc/init.d/{} reload".format(self.name),
52 'needs': [self.id],
53 },
54 'restart': {
55 'command': "/etc/init.d/{} restart".format(self.name),
56 'needs': [self.id],
57 },
58 }
59
60 def sdict(self):
61 return {'running': svc_running(self.node, self.name)}
62
63 @classmethod
64 def validate_attributes(cls, bundle, item_id, attributes):
65 if not isinstance(attributes.get('running', True), bool):
66 raise BundleError(_(
67 "expected boolean for 'running' on {item} in bundle '{bundle}'"
68 ).format(
69 bundle=bundle.name,
70 item=item_id,
71 ))
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from pipes import quote
4
5 from bundlewrap.exceptions import BundleError
6 from bundlewrap.items import Item
7 from bundlewrap.utils.text import mark_for_translation as _
8
9
10 def svc_start(node, svcname):
11 return node.run("initctl start --no-wait -- {}".format(quote(svcname)))
12
13
14 def svc_running(node, svcname):
15 result = node.run("initctl status -- {}".format(quote(svcname)), may_fail=True)
16 return " start/" in result.stdout_text
17
18
19 def svc_stop(node, svcname):
20 return node.run("initctl stop --no-wait -- {}".format(quote(svcname)))
21
22
23 class SvcUpstart(Item):
24 """
25 A service managed by Upstart.
26 """
27 BUNDLE_ATTRIBUTE_NAME = "svc_upstart"
28 ITEM_ATTRIBUTES = {
29 'running': True,
30 }
31 ITEM_TYPE_NAME = "svc_upstart"
32
33 def __repr__(self):
34 return "<SvcUpstart name:{} running:{}>".format(
35 self.name,
36 self.attributes['running'],
37 )
38
39 def fix(self, status):
40 if self.attributes['running'] is False:
41 svc_stop(self.node, self.name)
42 else:
43 svc_start(self.node, self.name)
44
45 def get_canned_actions(self):
46 return {
47 'reload': {
48 'command': "reload {}".format(self.name),
49 'needs': [self.id],
50 },
51 'restart': {
52 'command': "restart {}".format(self.name),
53 'needs': [self.id],
54 },
55 'stopstart': {
56 'command': "stop {0} && start {0}".format(self.name),
57 'needs': [self.id],
58 },
59 }
60
61 def sdict(self):
62 return {'running': svc_running(self.node, self.name)}
63
64 @classmethod
65 def validate_attributes(cls, bundle, item_id, attributes):
66 if not isinstance(attributes.get('running', True), bool):
67 raise BundleError(_(
68 "expected boolean for 'running' on {item} in bundle '{bundle}'"
69 ).format(
70 bundle=bundle.name,
71 item=item_id,
72 ))
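`svc_running` above decides whether an Upstart job is active by looking for " start/" in the `initctl status` output. The check on its own, against typical status lines (the job name is illustrative):

```python
def upstart_running(status_output):
    # Upstart reports e.g. "ssh start/running, process 1234" for an active
    # job and "ssh stop/waiting" for a stopped one.
    return " start/" in status_output

print(upstart_running("ssh start/running, process 1234"))  # True
print(upstart_running("ssh stop/waiting"))                 # False
```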
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from collections import defaultdict
4 from os.path import dirname, normpath
5 from pipes import quote
6
7 from bundlewrap.exceptions import BundleError
8 from bundlewrap.items import Item
9 from bundlewrap.utils.remote import PathInfo
10 from bundlewrap.utils.text import mark_for_translation as _
11 from bundlewrap.utils.text import is_subdirectory
12
13
14 ATTRIBUTE_VALIDATORS = defaultdict(lambda: lambda id, value: None)
15
16
17 class Symlink(Item):
18 """
19 A symbolic link.
20 """
21 BUNDLE_ATTRIBUTE_NAME = "symlinks"
22 ITEM_ATTRIBUTES = {
23 'group': None,
24 'owner': None,
25 'target': None,
26 }
27 ITEM_TYPE_NAME = "symlink"
28 REQUIRED_ATTRIBUTES = ['target']
29
30 def __repr__(self):
31 return "<Symlink path:{} target:{}>".format(
32 quote(self.name),
33 self.attributes['target'],
34 )
35
36 def cdict(self):
37 cdict = {
38 'target': self.attributes['target'],
39 'type': 'symlink',
40 }
41 for optional_attr in ('group', 'owner'):
42 if self.attributes[optional_attr] is not None:
43 cdict[optional_attr] = self.attributes[optional_attr]
44 return cdict
45
46 def fix(self, status):
47 if status.must_be_created or 'type' in status.keys_to_fix:
48 # fixing the type fixes everything
49 self._fix_type(status)
50 return
51
52 for fix_type in ('owner', 'group', 'target'):
53 if fix_type in status.keys_to_fix:
54 if fix_type == 'group' and 'owner' in status.keys_to_fix:
55 # owner and group are fixed with a single chown
56 continue
57 getattr(self, "_fix_" + fix_type)(status)
58
59 def _fix_owner(self, status):
60 group = self.attributes['group'] or ""
61 if group:
62 group = ":" + quote(group)
63 self.node.run("chown -h {}{} -- {}".format(
64 quote(self.attributes['owner'] or ""),
65 group,
66 quote(self.name),
67 ))
68 _fix_group = _fix_owner
69
70 def _fix_target(self, status):
71 self.node.run("ln -sf -- {} {}".format(
72 quote(self.attributes['target']),
73 quote(self.name),
74 ))
75
76 def _fix_type(self, status):
77 self.node.run("rm -rf -- {}".format(quote(self.name)))
78 self.node.run("mkdir -p -- {}".format(quote(dirname(self.name))))
79 self.node.run("ln -s -- {} {}".format(
80 quote(self.attributes['target']),
81 quote(self.name),
82 ))
83 if self.attributes['owner'] or self.attributes['group']:
84 self._fix_owner(status)
85
86 def get_auto_deps(self, items):
87 deps = []
88 for item in items:
89 if item == self:
90 continue
91 if item.ITEM_TYPE_NAME == "file" and (
92 is_subdirectory(item.name, self.name) or
93 item.name == self.name
94 ):
95 raise BundleError(_(
96 "{item1} (from bundle '{bundle1}') blocking path to "
97 "{item2} (from bundle '{bundle2}')"
98 ).format(
99 item1=item.id,
100 bundle1=item.bundle.name,
101 item2=self.id,
102 bundle2=self.bundle.name,
103 ))
104 elif item.ITEM_TYPE_NAME == "user" and item.name == self.attributes['owner']:
105 if item.attributes['delete']:
106 raise BundleError(_(
107 "{item1} (from bundle '{bundle1}') depends on item "
108 "{item2} (from bundle '{bundle2}') which is set to be deleted"
109 ).format(
110 item1=self.id,
111 bundle1=self.bundle.name,
112 item2=item.id,
113 bundle2=item.bundle.name,
114 ))
115 else:
116 deps.append(item.id)
117 elif item.ITEM_TYPE_NAME == "group" and item.name == self.attributes['group']:
118 if item.attributes['delete']:
119 raise BundleError(_(
120 "{item1} (from bundle '{bundle1}') depends on item "
121 "{item2} (from bundle '{bundle2}') which is set to be deleted"
122 ).format(
123 item1=self.id,
124 bundle1=self.bundle.name,
125 item2=item.id,
126 bundle2=item.bundle.name,
127 ))
128 else:
129 deps.append(item.id)
130 elif item.ITEM_TYPE_NAME in ("directory", "symlink"):
131 if is_subdirectory(item.name, self.name):
132 deps.append(item.id)
133 return deps
134
135 def sdict(self):
136 path_info = PathInfo(self.node, self.name)
137 if not path_info.exists:
138 return None
139 else:
140 return {
141 'target': path_info.symlink_target if path_info.path_type == 'symlink' else "",
142 'type': path_info.path_type,
143 'owner': path_info.owner,
144 'group': path_info.group,
145 }
146
147 @classmethod
148 def validate_attributes(cls, bundle, item_id, attributes):
149 for key, value in attributes.items():
150 ATTRIBUTE_VALIDATORS[key](item_id, value)
151
152 @classmethod
153 def validate_name(cls, bundle, name):
154 if normpath(name) == "/":
155 raise BundleError(_("'/' cannot be a symlink"))
156 if normpath(name) != name:
157 raise BundleError(_(
158 "'{path}' is an invalid symlink path, should be '{normpath}' (bundle '{bundle}')"
159 ).format(
160 path=name,
161 normpath=normpath(name),
162 bundle=bundle.name,
163 ))
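`get_auto_deps` above leans on `is_subdirectory` from bundlewrap.utils.text. A hypothetical standalone version of such a check (not the library's actual implementation) behaves like this:

```python
from os.path import normpath

def is_subdirectory(parent, child):
    # Sketch only: returns True if `child` lives somewhere below `parent`.
    # A path is not considered a subdirectory of itself.
    parent = normpath(parent)
    child = normpath(child)
    if parent == child:
        return False
    return child.startswith(parent.rstrip("/") + "/")

print(is_subdirectory("/opt/app", "/opt/app/current"))  # True
print(is_subdirectory("/opt/app", "/opt/application"))  # False
```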
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from logging import ERROR, getLogger
4 from pipes import quote
5 from string import ascii_lowercase, digits
6
7 from passlib.hash import bcrypt, md5_crypt, sha256_crypt, sha512_crypt
8
9 from bundlewrap.exceptions import BundleError
10 from bundlewrap.items import BUILTIN_ITEM_ATTRIBUTES, Item
11 from bundlewrap.utils.text import force_text, mark_for_translation as _
12
13
14 getLogger('passlib').setLevel(ERROR)
15
16 _ATTRIBUTE_NAMES = {
17 'full_name': _("full name"),
18 'gid': _("GID"),
19 'groups': _("groups"),
20 'home': _("home dir"),
21 'password_hash': _("password hash"),
22 'shell': _("shell"),
23 'uid': _("UID"),
24 }
25
26 _ATTRIBUTE_OPTIONS = {
27 'full_name': "-c",
28 'gid': "-g",
29 'groups': "-G",
30 'home': "-d",
31 'password_hash': "-p",
32 'shell': "-s",
33 'uid': "-u",
34 }
35
36 # a random static salt if users don't provide one
37 _DEFAULT_SALT = "uJzJlYdG"
38
39 # bcrypt needs special salts. 22 characters long, ending in ".", "O", "e", "u"
40 # see https://bitbucket.org/ecollins/passlib/issues/25
41 _DEFAULT_BCRYPT_SALT = "oo2ahgheen9Tei0IeJohTO"
42
43 HASH_METHODS = {
44 'md5': md5_crypt,
45 'sha256': sha256_crypt,
46 'sha512': sha512_crypt,
47 'bcrypt': bcrypt
48 }
49
50 _USERNAME_VALID_CHARACTERS = ascii_lowercase + digits + "-_"
51
52
53 def _group_name_for_gid(node, gid):
54 """
55 Returns the group name that matches the gid.
56 """
57 group_output = node.run("grep -e ':{}:[^:]*$' /etc/group".format(gid), may_fail=True)
58 if group_output.return_code != 0:
59 return None
60 else:
61 return group_output.stdout_text.split(":")[0]
62
63
64 def _groups_for_user(node, username):
65 """
66 Returns the list of group names for the given username on the given
67 node.
68 """
69 groups = node.run("id -Gn {}".format(username)).stdout_text.strip().split(" ")
70 primary_group = node.run("id -gn {}".format(username)).stdout_text.strip()
71 groups.remove(primary_group)
72 return groups
73
74
75 def _parse_passwd_line(line, entries):
76 """
77 Parses a line from /etc/passwd and returns the information as a
78 dictionary.
79 """
80
81 result = dict(zip(
82 entries,
83 line.strip().split(":"),
84 ))
85 result['full_name'] = result['gecos'].split(",")[0]
86 return result
87
88
89 class User(Item):
90 """
91 A user account.
92 """
93 BUNDLE_ATTRIBUTE_NAME = "users"
94 ITEM_ATTRIBUTES = {
95 'delete': False,
96 'full_name': None,
97 'gid': None,
98 'groups': None,
99 'hash_method': 'sha512',
100 'home': None,
101 'password': None,
102 'password_hash': None,
103 'salt': None,
104 'shell': None,
105 'uid': None,
106 'use_shadow': None,
107 }
108 ITEM_TYPE_NAME = "user"
109
110 def __repr__(self):
111 return "<User name:{}>".format(self.name)
112
113 def cdict(self):
114 if self.attributes['delete']:
115 return None
116 cdict = self.attributes.copy()
117 del cdict['delete']
118 del cdict['hash_method']
119 del cdict['password']
120 del cdict['salt']
121 del cdict['use_shadow']
122 for key in list(cdict.keys()):
123 if cdict[key] is None:
124 del cdict[key]
125 if 'groups' in cdict:
126 cdict['groups'] = set(cdict['groups'])
127 return cdict
128
129 def fix(self, status):
130 if status.must_be_deleted:
131 self.node.run("userdel {}".format(self.name), may_fail=True)
132 else:
133 command = "useradd " if status.must_be_created else "usermod "
134 for attr, option in sorted(_ATTRIBUTE_OPTIONS.items()):
135 if (attr in status.keys_to_fix or status.must_be_created) and \
136 self.attributes[attr] is not None:
137 if attr == 'groups':
138 value = ",".join(self.attributes[attr])
139 else:
140 value = str(self.attributes[attr])
141 command += "{} {} ".format(option, quote(value))
142 command += self.name
143 self.node.run(command, may_fail=True)
144
145 def get_auto_deps(self, items):
146 deps = []
147 for item in items:
148 if item.ITEM_TYPE_NAME == "group":
149 if item.attributes['delete']:
150 raise BundleError(_(
151 "{item1} (from bundle '{bundle1}') depends on item "
152 "{item2} (from bundle '{bundle2}') which is set to be deleted"
153 ).format(
154 item1=self.id,
155 bundle1=self.bundle.name,
156 item2=item.id,
157 bundle2=item.bundle.name,
158 ))
159 else:
160 deps.append(item.id)
161 return deps
162
163 def sdict(self):
164 # verify content of /etc/passwd
165 if self.node.os in self.node.OS_FAMILY_BSD:
166 password_command = "grep -ae '^{}:' /etc/master.passwd"
167 else:
168 password_command = "grep -ae '^{}:' /etc/passwd"
169 passwd_grep_result = self.node.run(
170 password_command.format(self.name),
171 may_fail=True,
172 )
173 if passwd_grep_result.return_code != 0:
174 return None
175
176 if self.node.os in self.node.OS_FAMILY_BSD:
177 entries = (
178 'username',
179 'passwd_hash',
180 'uid',
181 'gid',
182 'class',
183 'change',
184 'expire',
185 'gecos',
186 'home',
187 'shell',
188 )
189 else:
190 entries = ('username', 'passwd_hash', 'uid', 'gid', 'gecos', 'home', 'shell')
191
192 sdict = _parse_passwd_line(passwd_grep_result.stdout_text, entries)
193
194 if self.attributes['gid'] is not None and not self.attributes['gid'].isdigit():
195 sdict['gid'] = _group_name_for_gid(self.node, sdict['gid'])
196
197 if self.attributes['password_hash'] is not None:
198 if self.attributes['use_shadow'] and self.node.os not in self.node.OS_FAMILY_BSD:
199 # verify content of /etc/shadow unless we are on OpenBSD
200 shadow_grep_result = self.node.run(
201 "grep -e '^{}:' /etc/shadow".format(self.name),
202 may_fail=True,
203 )
204 if shadow_grep_result.return_code != 0:
205 sdict['password_hash'] = None
206 else:
207 sdict['password_hash'] = shadow_grep_result.stdout_text.split(":")[1]
208 else:
209 sdict['password_hash'] = sdict['passwd_hash']
210 del sdict['passwd_hash']
211
212 # verify content of /etc/group
213 sdict['groups'] = set(_groups_for_user(self.node, self.name))
214
215 return sdict
216
217 def patch_attributes(self, attributes):
218 if attributes.get('password', None) is not None:
219 # defaults aren't set yet
220 hash_method = HASH_METHODS[attributes.get(
221 'hash_method',
222 self.ITEM_ATTRIBUTES['hash_method'],
223 )]
224 salt = attributes.get('salt', None)
225 if self.node.os in self.node.OS_FAMILY_BSD:
226 attributes['password_hash'] = bcrypt.encrypt(
227 force_text(attributes['password']),
228 rounds=8, # default rounds for OpenBSD accounts
229 salt=_DEFAULT_BCRYPT_SALT if salt is None else salt,
230 )
231 else:
232 attributes['password_hash'] = hash_method.encrypt(
233 force_text(attributes['password']),
234 rounds=5000, # default from glibc
235 salt=_DEFAULT_SALT if salt is None else salt,
236 )
237
238 if 'use_shadow' not in attributes:
239 attributes['use_shadow'] = self.node.use_shadow_passwords
240
241 for attr in ('gid', 'uid'):
242 if isinstance(attributes.get(attr), int):
243 attributes[attr] = str(attributes[attr])
244
245 return attributes
246
247 @classmethod
248 def validate_attributes(cls, bundle, item_id, attributes):
249 if attributes.get('delete', False):
250 for attr in attributes.keys():
251 if attr not in ['delete'] + list(BUILTIN_ITEM_ATTRIBUTES.keys()):
252 raise BundleError(_(
253 "{item} from bundle '{bundle}' cannot have other "
254 "attributes besides 'delete'"
255 ).format(item=item_id, bundle=bundle.name))
256
257 if 'hash_method' in attributes and \
258 attributes['hash_method'] not in HASH_METHODS:
259 raise BundleError(
260 _("Invalid hash method for {item} in bundle '{bundle}': '{method}'").format(
261 bundle=bundle.name,
262 item=item_id,
263 method=attributes['hash_method'],
264 )
265 )
266
267 if 'password_hash' in attributes and (
268 'password' in attributes or
269 'salt' in attributes
270 ):
271 raise BundleError(_(
272 "{item} in bundle '{bundle}': 'password_hash' "
273 "cannot be used with 'password' or 'salt'"
274 ).format(bundle=bundle.name, item=item_id))
275
276 if 'salt' in attributes and 'password' not in attributes:
277 raise BundleError(
278 _("{}: salt given without a password").format(item_id)
279 )
280
281 @classmethod
282 def validate_name(cls, bundle, name):
283 for char in name:
284 if char not in _USERNAME_VALID_CHARACTERS:
285 raise BundleError(_(
286 "Invalid character in username '{user}': {char} (bundle '{bundle}')"
287 ).format(bundle=bundle.name, char=char, user=name))
288
289 if name.endswith("_") or name.endswith("-"):
290 raise BundleError(_(
291 "Username '{user}' must not end in dash or underscore (bundle '{bundle}')"
292 ).format(bundle=bundle.name, user=name))
293
294 if len(name) > 30:
295 raise BundleError(_(
296 "Username '{user}' is longer than 30 characters (bundle '{bundle}')"
297 ).format(bundle=bundle.name, user=name))
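`_parse_passwd_line` above can be tried in isolation; this standalone copy parses a sample /etc/passwd line (the account data is made up):

```python
def parse_passwd_line(line, entries):
    # Zip the colon-separated fields against the expected field names and
    # pull the full name out of the first GECOS subfield.
    result = dict(zip(entries, line.strip().split(":")))
    result['full_name'] = result['gecos'].split(",")[0]
    return result

entries = ('username', 'passwd_hash', 'uid', 'gid', 'gecos', 'home', 'shell')
line = "alice:x:1000:1000:Alice Example,,,:/home/alice:/bin/bash\n"
print(parse_passwd_line(line, entries))
```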
0 from datetime import datetime
1 from getpass import getuser
2 import json
3 from os import environ
4 from pipes import quote
5 from socket import gethostname
6 from time import time
7
8 from .exceptions import NodeLockedException
9 from .utils import cached_property, tempfile
10 from .utils.text import blue, bold, mark_for_translation as _, red, wrap_question
11 from .utils.time import format_duration, format_timestamp, parse_duration
12 from .utils.ui import io
13
14
15 HARD_LOCK_PATH = "/tmp/bundlewrap.lock"
16 HARD_LOCK_FILE = HARD_LOCK_PATH + "/info"
17 SOFT_LOCK_PATH = "/tmp/bundlewrap.softlock.d"
18 SOFT_LOCK_FILE = "/tmp/bundlewrap.softlock.d/{id}"
19
20
21 def identity():
22 return environ.get('BW_IDENTITY', "{}@{}".format(
23 getuser(),
24 gethostname(),
25 ))
26
27
28 class NodeLock(object):
29 def __init__(self, node, interactive=False, ignore=False):
30 self.node = node
31 self.ignore = ignore
32 self.interactive = interactive
33
34 def __enter__(self):
35 with tempfile() as local_path:
36 if not self.ignore:
37 with io.job(_(" {node} checking hard lock status...").format(node=self.node.name)):
38 result = self.node.run("mkdir " + quote(HARD_LOCK_PATH), may_fail=True)
39 if result.return_code != 0:
40 self.node.download(HARD_LOCK_FILE, local_path, ignore_failure=True)
41 with open(local_path, 'r') as f:
42 try:
43 info = json.loads(f.read())
44 except (IOError, ValueError):
45 io.stderr(_(
46 "{warning} corrupted lock on {node}: "
47 "unable to read or parse lock file contents "
48 "(clear it with `bw run {node} 'rm -R {path}'`)"
49 ).format(
50 node=self.node.name,
51 path=HARD_LOCK_FILE,
52 warning=red(_("WARNING")),
53 ))
54 info = {}
55 expired = False
56 try:
57 d = info['date']
58 except KeyError:
59 info['date'] = _("<unknown>")
60 info['duration'] = _("<unknown>")
61 else:
62 duration = datetime.now() - datetime.fromtimestamp(d)
63 info['date'] = format_timestamp(d)
64 info['duration'] = format_duration(duration)
65 if duration > parse_duration(environ.get('BW_HARDLOCK_EXPIRY', "8h")):
66 expired = True
67 io.debug("ignoring expired hard lock on {}".format(self.node.name))
68 if 'user' not in info:
69 info['user'] = _("<unknown>")
70 if expired or self.ignore or (self.interactive and io.ask(
71 self._warning_message_hard(info),
72 False,
73 epilogue=blue("?") + " " + bold(self.node.name),
74 )):
75 pass
76 else:
77 raise NodeLockedException(info)
78
79 with io.job(_(" {node} uploading lock file...").format(node=self.node.name)):
80 if self.ignore:
81 self.node.run("mkdir -p " + quote(HARD_LOCK_PATH))
82 with open(local_path, 'w') as f:
83 f.write(json.dumps({
84 'date': time(),
85 'user': identity(),
86 }))
87 self.node.upload(local_path, HARD_LOCK_FILE)
88
89 return self
90
91 def __exit__(self, type, value, traceback):
92 with io.job(_(" {node} removing hard lock...").format(node=self.node.name)):
93 result = self.node.run("rm -R {}".format(quote(HARD_LOCK_PATH)), may_fail=True)
94
95 if result.return_code != 0:
96 io.stderr(_("{x} {node} could not release hard lock").format(
97 node=bold(self.node.name),
98 x=red("!"),
99 ))
100
101 def _warning_message_hard(self, info):
102 return wrap_question(
103 red(_("NODE LOCKED")),
104 _(
105 "Looks like somebody is currently using BundleWrap on this node.\n"
106 "You should let them finish or override the lock if it has gone stale.\n"
107 "\n"
108 "locked by {user}\n"
109 " since {date} ({duration} ago)"
110 ).format(
111 user=bold(info['user']),
112 date=info['date'],
113 duration=info['duration'],
114 ),
115 bold(_("Override lock?")),
116 prefix="{x} {node} ".format(node=bold(self.node.name), x=blue("?")),
117 )
118
119 @cached_property
120 def soft_locks(self):
121 return softlock_list(self.node)
122
123 @cached_property
124 def my_soft_locks(self):
125 for lock in self.soft_locks:
126 if lock['user'] == identity():
127 yield lock
128
129 @cached_property
130 def other_peoples_soft_locks(self):
131 for lock in self.soft_locks:
132 if lock['user'] != identity():
133 yield lock
134
135
136 def softlock_add(node, lock_id, comment="", expiry="8h", item_selectors=None):
137 if "\n" in comment:
138 raise ValueError(_("Lock comments must not contain any newlines"))
139 if not item_selectors:
140 item_selectors = ["*"]
141
142 expiry_timedelta = parse_duration(expiry)
143 now = time()
144 expiry_timestamp = now + expiry_timedelta.days * 86400 + expiry_timedelta.seconds
145
146 content = json.dumps({
147 'comment': comment,
148 'date': now,
149 'expiry': expiry_timestamp,
150 'id': lock_id,
151 'items': item_selectors,
152 'user': identity(),
153 }, indent=None, sort_keys=True)
154
155 with tempfile() as local_path:
156 with open(local_path, 'w') as f:
157 f.write(content + "\n")
158 node.run("mkdir -p " + quote(SOFT_LOCK_PATH))
159 node.upload(local_path, SOFT_LOCK_FILE.format(id=lock_id), mode='0644')
160
161 return lock_id
162
163
164 def softlock_list(node):
165 with io.job(_(" {} checking soft locks...").format(node.name)):
166 cat = node.run("cat {}".format(SOFT_LOCK_FILE.format(id="*")), may_fail=True)
167 if cat.return_code != 0:
168 return []
169 result = []
170 for line in cat.stdout.decode('utf-8').strip().split("\n"):
171 try:
172 result.append(json.loads(line.strip()))
173 except ValueError:  # json.decoder.JSONDecodeError is py3-only
174 io.stderr(_(
175 "{x} {node} unable to parse soft lock file contents, ignoring: {line}"
176 ).format(
177 x=red("!"),
178 node=bold(node.name),
179 line=line.strip(),
180 ))
181 for lock in result[:]:
182 if lock['expiry'] < time():
183 io.debug(_("removing expired soft lock {id} from node {node}").format(
184 id=lock['id'],
185 node=node.name,
186 ))
187 softlock_remove(node, lock['id'])
188 result.remove(lock)
189 return result
190
191
192 def softlock_remove(node, lock_id):
193 io.debug(_("removing soft lock {id} from node {node}").format(
194 id=lock_id,
195 node=node.name,
196 ))
197 node.run("rm {}".format(SOFT_LOCK_FILE.format(id=lock_id)))
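The expiry arithmetic in `softlock_add()` above flattens a `timedelta` into epoch seconds (ignoring microseconds, which is fine for durations like "8h"). A standalone sketch, using a fixed timestamp instead of `time()` so the result is reproducible:

```python
from datetime import timedelta

# hypothetical result of parse_duration("8h"); BundleWrap's
# parse_duration returns a timedelta
expiry_timedelta = timedelta(hours=8)
now = 1_000_000.0  # stand-in for time()

# same arithmetic as softlock_add(): days and seconds flattened
# into an absolute epoch timestamp
expiry_timestamp = now + expiry_timedelta.days * 86400 + expiry_timedelta.seconds
print(expiry_timestamp)  # → 1028800.0
```

`expiry_timedelta.total_seconds()` would give the same offset here; the explicit days/seconds form simply avoids the fractional part.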
0 from copy import copy
1 from hashlib import sha1
2 from json import dumps, JSONEncoder
3
4 from .exceptions import RepositoryError
5 from .utils import ATOMIC_TYPES, Fault, merge_dict
6 from .utils.text import force_text, mark_for_translation as _
7
8
9 try:
10 text_type = unicode
11 byte_type = str
12 except NameError:
13 text_type = str
14 byte_type = bytes
15
16 METADATA_TYPES = (
17 bool,
18 byte_type,
19 Fault,
20 int,
21 text_type,
22 type(None),
23 )
24
25
26 def atomic(obj):
27 try:
28 cls = ATOMIC_TYPES[type(obj)]
29 except KeyError:
30 raise ValueError("atomic() can only be applied to dicts, lists, sets, or tuples "
31 "(not: {})".format(repr(obj)))
32 else:
33 return cls(obj)
34
35
36 def check_for_unsolvable_metadata_key_conflicts(node):
37 """
38 Finds metadata keys defined by two groups that are not part of a
39 shared subgroup hierarchy.
40 """
41 # First, we build a list of subgroup chains.
42 #
43 # A chain is simply a list of groups starting with a parent group
44 # that has no parent groups itself and then descends depth-first
45 # into its subgroups until a subgroup is reached that the node is
46 # not a member of.
47 # Every possible path on every subgroup tree is a separate chain.
48 #
49 # group4
50 # / \
51 # group2 group3
52 # \ /
53 # group1
54 #
55 # This example has two chains, even though both start and end at
56 # the same groups:
57 #
58 # group1 -> group2 -> group4
59 # group1 -> group3 -> group4
60 #
61
62 # find all groups whose subgroups this node is *not* a member of
63 lowest_subgroups = set()
64 for group in node.groups:
65 in_subgroup = False
66 for subgroup in group.subgroups:
67 if subgroup in node.groups:
68 in_subgroup = True
69 break
70 if not in_subgroup:
71 lowest_subgroups.add(group)
72
73 chains = []
74 incomplete_chains = [[group] for group in lowest_subgroups]
75
76 while incomplete_chains:
77 for chain in incomplete_chains[:]:
78 highest_group = chain[-1]
79 if list(highest_group.parent_groups):
80 chain_so_far = chain[:]
81 # continue this chain with the first parent group
82 chain.append(list(highest_group.parent_groups)[0])
83 # further parent groups form new chains
84 for further_parents in list(highest_group.parent_groups)[1:]:
85 new_chain = chain_so_far[:]
86 new_chain.append(further_parents)
87 incomplete_chains.append(new_chain)
88 else:
89 # chain has ended
90 chains.append(chain)
91 incomplete_chains.remove(chain)
92
93 # chains now look like this (parents right of children):
94 # [
95 # [group1],
96 # [group2, group3, group5],
97 # [group2, group4, group5],
98 # [group2, group4, group6, group7],
99 # ]
100
101 # let's merge metadata for each chain
102 chain_metadata = []
103 for chain in chains:
104 metadata = {}
105 for group in chain:
106 metadata = merge_dict(metadata, group.metadata)
107 chain_metadata.append(metadata)
108
109 # create a "key path map" for each chain's metadata
110 chain_metadata_keys = [list(dictionary_key_map(metadata)) for metadata in chain_metadata]
111
112 # compare all metadata keys with other chains and find matches
113 for index1, keymap1 in enumerate(chain_metadata_keys):
114 for keypath in keymap1:
115 for index2, keymap2 in enumerate(chain_metadata_keys):
116 if index1 == index2:
117 # same keymap, don't compare
118 continue
119 else:
120 if keypath in keymap2:
121 if (
122 type(value_at_key_path(chain_metadata[index1], keypath)) ==
123 type(value_at_key_path(chain_metadata[index2], keypath)) and
124 type(value_at_key_path(chain_metadata[index2], keypath)) in
125 (set, dict)
126 ):
127 continue
128 # We now know that there is a conflict between the first
129 # and second chain we're looking at right now.
130 # That is however not a problem if the conflict is caused
131 # by a group that is present in both chains.
132 # So all that's left is to figure out which two single groups
133 # within those chains are at fault so we can report them
134 # to the user if necessary.
135 find_groups_causing_metadata_conflict(
136 node.name,
137 chains[index1],
138 chains[index2],
139 keypath,
140 )
141
142
143 def deepcopy_metadata(obj):
144 """
145 Our own version of copy.deepcopy that doesn't pickle and ensures
146 a limited range of types is used in metadata.
147 """
148 if isinstance(obj, dict):
149 new_obj = {}
150 for key, value in obj.items():
151 if not isinstance(key, METADATA_TYPES):
152 raise ValueError(_("illegal metadata key type: {}").format(repr(key)))
153 new_key = copy(key)
154 new_obj[new_key] = deepcopy_metadata(value)
155 elif isinstance(obj, (list, tuple)):
156 new_obj = []
157 for member in obj:
158 new_obj.append(deepcopy_metadata(member))
159 elif isinstance(obj, set):
160 new_obj = set()
161 for member in obj:
162 new_obj.add(deepcopy_metadata(member))
163 elif isinstance(obj, METADATA_TYPES):
164 return obj
165 else:
166 raise ValueError(_("illegal metadata value type: {}").format(repr(obj)))
167 return new_obj
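The whitelist-based deep copy above can be sketched standalone. This is a simplified Python 3 version: it omits the key-type check and the `Fault` type, but shows the key property — containers are copied recursively, so mutating the copy never touches the original:

```python
METADATA_TYPES = (bool, bytes, int, str, type(None))  # simplified whitelist

def deepcopy_metadata(obj):
    # recursively copy containers, pass whitelisted atoms through,
    # reject everything else
    if isinstance(obj, dict):
        return {key: deepcopy_metadata(value) for key, value in obj.items()}
    elif isinstance(obj, (list, tuple)):
        return [deepcopy_metadata(member) for member in obj]
    elif isinstance(obj, set):
        return {deepcopy_metadata(member) for member in obj}
    elif isinstance(obj, METADATA_TYPES):
        return obj
    raise ValueError("illegal metadata value type: {}".format(repr(obj)))

original = {"port": 80, "tags": {"web"}}
clone = deepcopy_metadata(original)
clone["tags"].add("db")
print(original["tags"])  # → {'web'}
```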
168
169
170 def dictionary_key_map(mapdict):
171 """
172 For the dict
173
174 {
175 "key1": 1,
176 "key2": {
177 "key3": 3,
178 "key4": ["foo"],
179 },
180 }
181
182 the key map would look like this:
183
184 [
185 ("key1",),
186 ("key2",),
187 ("key2", "key3"),
188 ("key2", "key4"),
189 ]
190
191 """
192 for key, value in mapdict.items():
193 if isinstance(value, dict):
194 for child_keys in dictionary_key_map(value):
195 yield (key,) + child_keys
196 yield (key,)
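The docstring above can be exercised directly; a standalone copy of the generator reproduces the documented key map (sorted here, since the raw yield order follows dict traversal rather than the docstring's listing):

```python
def dictionary_key_map(mapdict):
    # yield a tuple key path for every key, descending into nested dicts
    for key, value in mapdict.items():
        if isinstance(value, dict):
            for child_keys in dictionary_key_map(value):
                yield (key,) + child_keys
        yield (key,)

paths = sorted(dictionary_key_map({
    "key1": 1,
    "key2": {"key3": 3, "key4": ["foo"]},
}))
print(paths)
# → [('key1',), ('key2',), ('key2', 'key3'), ('key2', 'key4')]
```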
197
198
199 def find_groups_causing_metadata_conflict(node_name, chain1, chain2, keypath):
200 """
201 Given two chains (lists of groups), find one group in each chain
202 that has conflicting metadata with the other for the given key path.
203 """
204 chain1_metadata = [list(dictionary_key_map(group.metadata)) for group in chain1]
205 chain2_metadata = [list(dictionary_key_map(group.metadata)) for group in chain2]
206
207 bad_keypath = None
208
209 for index1, keymap1 in enumerate(chain1_metadata):
210 for index2, keymap2 in enumerate(chain2_metadata):
211 if chain1[index1] == chain2[index2]:
212 # same group, ignore
213 continue
214 if (
215 keypath in keymap1 and
216 keypath in keymap2 and
217 chain1[index1] not in chain2[index2].subgroups and
218 chain2[index2] not in chain1[index1].subgroups
219 ):
220 bad_keypath = keypath
221 bad_group1 = chain1[index1]
222 bad_group2 = chain2[index2]
223
224 if bad_keypath is not None:
225 raise RepositoryError(_(
226 "Conflicting metadata keys between groups '{group1}' and '{group2}' on node '{node}':\n\n"
227 " metadata['{keypath}']\n\n"
228 "You must either connect both groups through subgroups or have them not define "
229 "conflicting metadata keys. Otherwise there is no way for BundleWrap to determine "
230 "which group's metadata should win when they are merged."
231 ).format(
232 keypath="']['".join(bad_keypath),
233 group1=bad_group1.name,
234 group2=bad_group2.name,
235 node=node_name,
236 ))
237
238
239 class MetadataJSONEncoder(JSONEncoder):
240 def default(self, obj):
241 if isinstance(obj, Fault):
242 return obj.value
243 if isinstance(obj, set):
244 return sorted(obj)
245 if isinstance(obj, bytes):
246 return force_text(obj)
247 else:
248 raise ValueError(_("illegal metadata value type: {}").format(repr(obj)))
249
250
251 def hash_metadata(sdict):
252 """
253 Returns a canonical SHA1 hash to describe this dict.
254 """
255 return sha1(dumps(
256 sdict,
257 cls=MetadataJSONEncoder,
258 indent=None,
259 sort_keys=True,
260 ).encode('utf-8')).hexdigest()
261
262
263 def value_at_key_path(dict_obj, path):
264 if not path:
265 return dict_obj
266 else:
267 return value_at_key_path(dict_obj[path[0]], path[1:])
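`value_at_key_path()` is the lookup counterpart to `dictionary_key_map()`: it walks a nested dict one path component at a time. A standalone copy with illustrative (made-up) metadata:

```python
def value_at_key_path(dict_obj, path):
    # recurse into the dict, consuming one path component per call
    if not path:
        return dict_obj
    return value_at_key_path(dict_obj[path[0]], path[1:])

# hypothetical node metadata for illustration
metadata = {"interfaces": {"eth0": {"ipv4": "10.0.0.1"}}}
ip = value_at_key_path(metadata, ("interfaces", "eth0", "ipv4"))
print(ip)  # → 10.0.0.1
```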
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from datetime import datetime, timedelta
4 from os import environ
5 from sys import exit
6 from threading import Lock
7
8 from . import operations
9 from .bundle import Bundle
10 from .concurrency import WorkerPool
11 from .deps import (
12 DummyItem,
13 find_item,
14 )
15 from .exceptions import (
16 DontCache,
17 FaultUnavailable,
18 ItemDependencyError,
19 NodeLockedException,
20 NoSuchBundle,
21 RepositoryError,
22 )
23 from .group import GROUP_ATTR_DEFAULTS
24 from .itemqueue import ItemQueue, ItemTestQueue
25 from .items import Item
26 from .lock import NodeLock
27 from .metadata import check_for_unsolvable_metadata_key_conflicts, hash_metadata
28 from .utils import cached_property, graph_for_items, names
29 from .utils.statedict import hash_statedict
30 from .utils.text import blue, bold, cyan, green, red, validate_name, yellow
31 from .utils.text import force_text, mark_for_translation as _
32 from .utils.ui import io
33
34
35 class ApplyResult(object):
36 """
37 Holds information about an apply run for a node.
38 """
39 def __init__(self, node, item_results):
40 self.node_name = node.name
41 self.correct = 0
42 self.fixed = 0
43 self.skipped = 0
44 self.failed = 0
45 self.profiling_info = []
46
47 for item_id, result, time_elapsed in item_results:
48 self.profiling_info.append((time_elapsed, item_id))
49 if result == Item.STATUS_ACTION_SUCCEEDED:
50 self.correct += 1
51 elif result == Item.STATUS_OK:
52 self.correct += 1
53 elif result == Item.STATUS_FIXED:
54 self.fixed += 1
55 elif result == Item.STATUS_SKIPPED:
56 self.skipped += 1
57 elif result == Item.STATUS_FAILED:
58 self.failed += 1
59 else:
60 raise RuntimeError(_(
61 "can't make sense of results for {} on {}: {}"
62 ).format(item_id, self.node_name, result))
63
64 self.profiling_info.sort()
65 self.profiling_info.reverse()
66
67 self.start = None
68 self.end = None
69
70 @property
71 def duration(self):
72 return self.end - self.start
73
74
75 def format_node_result(result):
76 output = []
77 output.append(_("{count} OK").format(count=result.correct))
78
79 if result.fixed:
80 output.append(green(_("{count} fixed").format(count=result.fixed)))
81 else:
82 output.append(_("{count} fixed").format(count=result.fixed))
83
84 if result.skipped:
85 output.append(yellow(_("{count} skipped").format(count=result.skipped)))
86 else:
87 output.append(_("{count} skipped").format(count=result.skipped))
88
89 if result.failed:
90 output.append(red(_("{count} failed").format(count=result.failed)))
91 else:
92 output.append(_("{count} failed").format(count=result.failed))
93
94 return ", ".join(output)
95
96
97 def handle_apply_result(node, item, status_code, interactive, changes=None):
98 formatted_result = format_item_result(
99 status_code,
100 node.name,
101 item.bundle.name if item.bundle else "", # dummy items don't have bundles
102 item.id,
103 interactive=interactive,
104 changes=changes,
105 )
106 if formatted_result is not None:
107 if status_code == Item.STATUS_FAILED:
108 io.stderr(formatted_result)
109 else:
110 io.stdout(formatted_result)
111
112
113 def apply_items(
114 node,
115 autoskip_selector="",
116 my_soft_locks=(),
117 other_peoples_soft_locks=(),
118 workers=1,
119 interactive=False,
120 profiling=False,
121 ):
122 with io.job(_(" {node} processing dependencies...").format(node=node.name)):
123 item_queue = ItemQueue(node.items)
124
125 results = []
126
127 def tasks_available():
128 return bool(item_queue.items_without_deps)
129
130 def next_task():
131 item, skipped_items = item_queue.pop()
132 for skipped_item in skipped_items:
133 handle_apply_result(
134 node,
135 skipped_item,
136 Item.STATUS_SKIPPED,
137 interactive,
138 changes=[_("no pre-trigger")],
139 )
140 results.append((skipped_item.id, Item.STATUS_SKIPPED, timedelta(0)))
141
142 return {
143 'task_id': "{}:{}".format(node.name, item.id),
144 'target': item.apply,
145 'kwargs': {
146 'autoskip_selector': autoskip_selector,
147 'my_soft_locks': my_soft_locks,
148 'other_peoples_soft_locks': other_peoples_soft_locks,
149 'interactive': interactive,
150 },
151 }
152
153 def handle_result(task_id, return_value, duration):
154 item_id = task_id.split(":", 1)[1]
155 item = find_item(item_id, item_queue.pending_items)
156
157 status_code, changes = return_value
158
159 if status_code == Item.STATUS_FAILED:
160 for skipped_item in item_queue.item_failed(item):
161 handle_apply_result(
162 node,
163 skipped_item,
164 Item.STATUS_SKIPPED,
165 interactive,
166 changes=[_("dep failed")],
167 )
168 results.append((skipped_item.id, Item.STATUS_SKIPPED, timedelta(0)))
169 elif status_code in (Item.STATUS_FIXED, Item.STATUS_ACTION_SUCCEEDED):
170 item_queue.item_fixed(item)
171 elif status_code == Item.STATUS_OK:
172 item_queue.item_ok(item)
173 elif status_code == Item.STATUS_SKIPPED:
174 for skipped_item in item_queue.item_skipped(item):
175 skipped_reason = [_("dep skipped")]
176 for lock in other_peoples_soft_locks:
177 for selector in lock['items']:
178 if skipped_item.covered_by_autoskip_selector(selector):
179 skipped_reason = [_("soft locked")]
180 break
181 handle_apply_result(
182 node,
183 skipped_item,
184 Item.STATUS_SKIPPED,
185 interactive,
186 changes=skipped_reason,
187 )
188 results.append((skipped_item.id, Item.STATUS_SKIPPED, timedelta(0)))
189 else:
190 raise AssertionError(_(
191 "unknown item status returned for {item}: {status}"
192 ).format(
193 item=item.id,
194 status=repr(status_code),
195 ))
196
197 handle_apply_result(node, item, status_code, interactive, changes=changes)
198 if not isinstance(item, DummyItem):
199 results.append((item.id, status_code, duration))
200
201 worker_pool = WorkerPool(
202 tasks_available,
203 next_task,
204 handle_result=handle_result,
205 pool_id="apply_{}".format(node.name),
206 workers=workers,
207 )
208 worker_pool.run()
209
210 # we have no items without deps left and none are processing
211 # there must be a loop
212 if item_queue.items_with_deps:
213 io.debug(_(
214 "There was a dependency problem. Look at the debug.svg generated "
215 "by the following command and try to find a loop:\n"
216 "printf '{}' | dot -Tsvg -odebug.svg"
217 ).format("\\n".join(graph_for_items(node.name, item_queue.items_with_deps))))
218
219 raise ItemDependencyError(
220 _("bad dependencies between these items: {}").format(
221 ", ".join([i.id for i in item_queue.items_with_deps]),
222 )
223 )
224
225 return results
226
227
228 def _flatten_group_hierarchy(groups):
229 """
230 Takes a list of groups and returns a list of group names ordered so
231 that parent groups will appear before any of their subgroups.
232 """
233 # dict mapping groups to subgroups
234 child_groups = {}
235 for group in groups:
236 child_groups[group.name] = list(names(group.subgroups))
237
238 # dict mapping groups to parent groups
239 parent_groups = {}
240 for child_group in child_groups.keys():
241 parent_groups[child_group] = []
242 for parent_group, subgroups in child_groups.items():
243 if child_group in subgroups:
244 parent_groups[child_group].append(parent_group)
245
246 order = []
247
248 while True:
249 top_level_group = None
250 for group, parents in parent_groups.items():
251 if parents:
252 continue
253 else:
254 top_level_group = group
255 break
256 if not top_level_group:
257 if parent_groups:
258 raise RuntimeError(
259 _("encountered subgroup loop that should have been detected")
260 )
261 else:
262 break
263 order.append(top_level_group)
264 del parent_groups[top_level_group]
265 for group in parent_groups.keys():
266 if top_level_group in parent_groups[group]:
267 parent_groups[group].remove(top_level_group)
268
269 return order
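The ordering logic above is a topological sort by repeated removal of parentless groups. A standalone sketch that takes the child-group mapping directly (group names are made up; unlike the original, a subgroup loop here would surface as `StopIteration` rather than `RuntimeError`):

```python
def flatten_group_hierarchy(child_groups):
    # child_groups maps each group name to the names of its direct subgroups
    # invert it: map each group to its parent groups
    parent_groups = {group: [] for group in child_groups}
    for parent, children in child_groups.items():
        for child in children:
            parent_groups[child].append(parent)
    order = []
    while parent_groups:
        # emit any group with no remaining parents, then detach it
        top = next(g for g, parents in parent_groups.items() if not parents)
        order.append(top)
        del parent_groups[top]
        for parents in parent_groups.values():
            if top in parents:
                parents.remove(top)
    return order

order = flatten_group_hierarchy({"all": ["web"], "web": ["web-eu"], "web-eu": []})
print(order)  # → ['all', 'web', 'web-eu']
```

Parents always precede children in the result, which is exactly the property node.py relies on when merging group attributes top-down.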
270
271
272 def format_item_result(result, node, bundle, item, interactive=False, changes=None):
273 if changes is True:
274 changes_text = "({})".format(_("create"))
275 elif changes is False:
276 changes_text = "({})".format(_("remove"))
277 elif changes is None:
278 changes_text = ""
279 else:
280 changes_text = "({})".format(", ".join(sorted(changes)))
281 if result == Item.STATUS_FAILED:
282 return "{x} {node} {bundle} {item} {status} {changes}".format(
283 bundle=bold(bundle),
284 changes=changes_text,
285 item=item,
286 node=bold(node),
287 status=red(_("failed")),
288 x=bold(red("✘")),
289 )
290 elif result == Item.STATUS_ACTION_SUCCEEDED:
291 return "{x} {node} {bundle} {item} {status}".format(
292 bundle=bold(bundle),
293 item=item,
294 node=bold(node),
295 status=green(_("succeeded")),
296 x=bold(green("✓")),
297 )
298 elif result == Item.STATUS_SKIPPED:
299 return "{x} {node} {bundle} {item} {status} {changes}".format(
300 bundle=bold(bundle),
301 changes=changes_text,
302 item=item,
303 node=bold(node),
304 x=bold(yellow("»")),
305 status=yellow(_("skipped")),
306 )
307 elif result == Item.STATUS_FIXED:
308 return "{x} {node} {bundle} {item} {status} {changes}".format(
309 bundle=bold(bundle),
310 changes=changes_text,
311 item=item,
312 node=bold(node),
313 x=bold(green("✓")),
314 status=green(_("fixed")),
315 )
316
317
318 class Node(object):
319 OS_FAMILY_BSD = (
320 'freebsd',
321 'macos',
322 'netbsd',
323 'openbsd',
324 )
325 OS_FAMILY_DEBIAN = (
326 'debian',
327 'ubuntu',
328 'raspbian',
329 )
330 OS_FAMILY_REDHAT = (
331 'rhel',
332 'centos',
333 'fedora',
334 )
335
336 OS_FAMILY_LINUX = (
337 'amazonlinux',
338 'arch',
339 'opensuse',
340 'gentoo',
341 'linux',
342 'oraclelinux',
343 ) + \
344 OS_FAMILY_DEBIAN + \
345 OS_FAMILY_REDHAT
346
347 OS_KNOWN = OS_FAMILY_BSD + OS_FAMILY_LINUX
348
349 def __init__(self, name, infodict=None):
350 if infodict is None:
351 infodict = {}
352
353 if not validate_name(name):
354 raise RepositoryError(_("'{}' is not a valid node name").format(name))
355
356 self.name = name
357 self._bundles = infodict.get('bundles', [])
358 self._compiling_metadata = Lock()
359 self._dynamic_group_lock = Lock()
360 self._dynamic_groups_resolved = False # None means we're currently doing it
361 self._metadata_so_far = {}
362 self._node_metadata = infodict.get('metadata', {})
363 self._ssh_conn_established = False
364 self._ssh_first_conn_lock = Lock()
365 self.add_ssh_host_keys = False
366 self.hostname = infodict.get('hostname', self.name)
367
368 for attr in GROUP_ATTR_DEFAULTS:
369 setattr(self, "_{}".format(attr), infodict.get(attr))
370
371 def __lt__(self, other):
372 return self.name < other.name
373
374 def __repr__(self):
375 return "<Node '{}'>".format(self.name)
376
377 @cached_property
378 def bundles(self):
379 with io.job(_(" {node} loading bundles...").format(node=self.name)):
380 added_bundles = []
381 found_bundles = []
382 for group in self.groups:
383 for bundle_name in group.bundle_names:
384 found_bundles.append(bundle_name)
385
386 for bundle_name in found_bundles + list(self._bundles):
387 if bundle_name not in added_bundles:
388 added_bundles.append(bundle_name)
389 try:
390 yield Bundle(self, bundle_name)
391 except NoSuchBundle:
392 raise NoSuchBundle(_(
393 "Node '{node}' wants bundle '{bundle}', but it doesn't exist."
394 ).format(
395 bundle=bundle_name,
396 node=self.name,
397 ))
398
399 @cached_property
400 def cdict(self):
401 node_dict = {}
402 for item in self.items:
403 try:
404 node_dict[item.id] = item.hash()
405 except AttributeError: # actions have no cdict
406 pass
407 return node_dict
408
409 def covered_by_autoskip_selector(self, autoskip_selector):
410 """
411 True if this node should be skipped based on the given selector
412 string (e.g. "node:foo,group:bar").
413 """
414 components = [c.strip() for c in autoskip_selector.split(",")]
415 if "node:{}".format(self.name) in components:
416 return True
417 for group in self.groups:
418 if "group:{}".format(group.name) in components:
419 return True
420 return False
421
422 def group_membership_hash(self):
423 return hash_statedict(sorted(names(self.groups)))
424
425 @cached_property
426 def groups(self):
427 _groups = set(self.repo._static_groups_for_node(self))
428 # lock to avoid infinite recursion when .members_add/remove
429 # use stuff like node.in_group() that in turn calls this function
430 if self._dynamic_group_lock.acquire(False):
431 cache_result = True
432 self._dynamic_groups_resolved = None
433 # first we remove ourselves from all static groups whose
434 # .members_remove matches us
435 for group in list(_groups):
436 if group.members_remove is not None and group.members_remove(self):
437 try:
438 _groups.remove(group)
439 except KeyError:
440 pass
441 # now add all groups whose .members_add (but not .members_remove)
442 # matches us
443 _groups = _groups.union(self._groups_dynamic)
444 self._dynamic_groups_resolved = True
445 self._dynamic_group_lock.release()
446 else:
447 cache_result = False
448
449 # we have to add parent groups at the very end, since we might
450 # have added or removed subgroups through .members_add/remove
451 for group in list(_groups):
452 for parent_group in group.parent_groups:
453 if cache_result:
454 with self._dynamic_group_lock:
455 self._dynamic_groups_resolved = None
456 if (
457 not parent_group.members_remove or
458 not parent_group.members_remove(self)
459 ):
460 _groups.add(parent_group)
461 self._dynamic_groups_resolved = True
462 else:
463 _groups.add(parent_group)
464
465 if cache_result:
466 return sorted(_groups)
467 else:
468 raise DontCache(sorted(_groups))
469
470 @property
471 def _groups_dynamic(self):
472 """
473 Returns all groups whose members_add matches this node.
474 """
475 _groups = set()
476 for group in self.repo.groups:
477 if group.members_add is not None and group.members_add(self):
478 _groups.add(group)
479 if group.members_remove is not None and group.members_remove(self):
480 try:
481 _groups.remove(group)
482 except KeyError:
483 pass
484 return _groups
485
486 def has_any_bundle(self, bundle_list):
487 for bundle_name in bundle_list:
488 if self.has_bundle(bundle_name):
489 return True
490 return False
491
492 def has_bundle(self, bundle_name):
493 for bundle in self.bundles:
494 if bundle.name == bundle_name:
495 return True
496 return False
497
498 def hash(self):
499 return hash_statedict(self.cdict)
500
501 def in_any_group(self, group_list):
502 for group_name in group_list:
503 if self.in_group(group_name):
504 return True
505 return False
506
507 def in_group(self, group_name):
508 for group in self.groups:
509 if group.name == group_name:
510 return True
511 return False
512
513 @cached_property
514 def items(self):
515 if not self.dummy:
516 for bundle in self.bundles:
517 for item in bundle.items:
518 yield item
519
520 @property
521 def _static_items(self):
522 for bundle in self.bundles:
523 for item in bundle._static_items:
524 yield item
525
526 def apply(
527 self,
528 autoskip_selector="",
529 interactive=False,
530 force=False,
531 workers=4,
532 profiling=False,
533 ):
534 if not list(self.items):
535 io.stdout(_("{x} {node} has no items").format(node=bold(self.name), x=yellow("!")))
536 return None
537
538 if self.covered_by_autoskip_selector(autoskip_selector):
539 io.debug(_("skipping {}, matches autoskip selector").format(self.name))
540 return None
541
542 start = datetime.now()
543
544 io.stdout(_("{x} {node} run started at {time}").format(
545 node=bold(self.name),
546 time=start.strftime("%Y-%m-%d %H:%M:%S"),
547 x=blue("i"),
548 ))
549 self.repo.hooks.node_apply_start(
550 self.repo,
551 self,
552 interactive=interactive,
553 )
554
555 try:
556 with NodeLock(self, interactive=interactive, ignore=force) as lock:
557 item_results = apply_items(
558 self,
559 autoskip_selector=autoskip_selector,
560 my_soft_locks=lock.my_soft_locks,
561 other_peoples_soft_locks=lock.other_peoples_soft_locks,
562 workers=workers,
563 interactive=interactive,
564 profiling=profiling,
565 )
566 except NodeLockedException as e:
567 if not interactive:
568 io.stderr(_("{x} {node} already locked by {user} at {date} ({duration} ago, `bw apply -f` to override)").format(
569 date=bold(e.args[0]['date']),
570 duration=e.args[0]['duration'],
571 node=bold(self.name),
572 user=bold(e.args[0]['user']),
573 x=red("!"),
574 ))
575 item_results = []
576 result = ApplyResult(self, item_results)
577 result.start = start
578 result.end = datetime.now()
579
580 io.stdout(_("{x} {node} run completed after {time}s").format(
581 node=bold(self.name),
582 time=(result.end - start).total_seconds(),
583 x=blue("i"),
584 ))
585 io.stdout(_("{x} {node} stats: {stats}").format(
586 node=bold(self.name),
587 stats=format_node_result(result),
588 x=blue("i"),
589 ))
590
591 self.repo.hooks.node_apply_end(
592 self.repo,
593 self,
594 duration=result.duration,
595 interactive=interactive,
596 result=result,
597 )
598
599 return result
600
601 def download(self, remote_path, local_path, ignore_failure=False):
602 return operations.download(
603 self.hostname,
604 remote_path,
605 local_path,
606 add_host_keys=environ.get('BW_ADD_HOST_KEYS', "0") == "1",
607 wrapper_inner=self.cmd_wrapper_inner,
608 wrapper_outer=self.cmd_wrapper_outer,
609 )
610
611 def get_item(self, item_id):
612 return find_item(item_id, self.items)
613
614 @property
615 def metadata(self):
616 """
617 Returns full metadata for a node. MUST NOT be used from inside a
618 metadata processor. Use .partial_metadata instead.
619 """
620 if self._dynamic_groups_resolved is None:
621 # return only metadata set directly at the node level if
622 # we're still in the process of figuring out which groups
623 # we belong to
624 return self._node_metadata
625 else:
626 return self.repo._metadata_for_node(self.name, partial=False)
627
628 def metadata_hash(self):
629 return hash_metadata(self.metadata)
630
631 @property
632 def metadata_processors(self):
633 for bundle in self.bundles:
634 for metadata_processor in bundle.metadata_processors:
635 yield (
636 "{}.{}".format(
637 bundle.name,
638 metadata_processor.__name__,
639 ),
640 metadata_processor,
641 )
642
643 @property
644 def partial_metadata(self):
645 """
646 Only to be used from inside metadata processors. Can't use the
647 normal .metadata there because it might deadlock when nodes
648 have interdependent metadata.
649
650 It's OK for metadata processors to work with partial metadata
651 because they will be fed all metadata updates until no more
652 changes are made by any metadata processor.
653 """
654 return self.repo._metadata_for_node(self.name, partial=True)
655
656 def run(self, command, may_fail=False, log_output=False):
657 if log_output:
658 def log_function(msg):
659 io.stdout("{x} {node} {msg}".format(
660 node=bold(self.name),
661 msg=force_text(msg).rstrip("\n"),
662 x=cyan("›"),
663 ))
664 else:
665 log_function = None
666
667 add_host_keys = environ.get('BW_ADD_HOST_KEYS', "0") == "1"
668
669 if not self._ssh_conn_established:
670 # Sometimes we open SSH connections to a node faster than
671 # OpenSSH can establish the ControlMaster socket for
672 # subsequent connections to reuse.
673 # To prevent this, we wait until a first dummy command has
674 # completed on the node before trying to reuse the
675 # multiplexed connection.
676 if self._ssh_first_conn_lock.acquire(False):
677 try:
678 operations.run(self.hostname, "true", add_host_keys=add_host_keys)
679 self._ssh_conn_established = True
680 finally:
681 self._ssh_first_conn_lock.release()
682 else:
683 # we didn't get the lock immediately, now we just wait
684 # until it is released before we proceed
685 with self._ssh_first_conn_lock:
686 pass
687
688 return operations.run(
689 self.hostname,
690 command,
691 add_host_keys=add_host_keys,
692 ignore_failure=may_fail,
693 log_function=log_function,
694 wrapper_inner=self.cmd_wrapper_inner,
695 wrapper_outer=self.cmd_wrapper_outer,
696 )
697
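The `_ssh_first_conn_lock` logic above is a non-blocking-acquire pattern: one thread wins the race and performs the one-time setup, while every other thread blocks on the lock until the winner releases it. A minimal standalone sketch (the names are illustrative; in a rare race the setup can run more than once, which is harmless as long as it is idempotent, like BundleWrap's dummy `true` command):

```python
from threading import Lock, Thread

class Connection(object):
    def __init__(self):
        self._established = False
        self._first_conn_lock = Lock()
        self.setup_calls = 0

    def _establish(self):
        # stands in for running the dummy command that forces
        # the ControlMaster socket into existence
        self.setup_calls += 1

    def run_command(self):
        if not self._established:
            if self._first_conn_lock.acquire(False):
                # we won the race: perform the one-time setup
                try:
                    self._establish()
                    self._established = True
                finally:
                    self._first_conn_lock.release()
            else:
                # someone else is setting up; wait for them to finish
                with self._first_conn_lock:
                    pass
        # ... the actual command would run here ...

conn = Connection()
threads = [Thread(target=conn.run_command) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```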
698 def test(self, ignore_missing_faults=False, workers=4):
699 with io.job(_(" {node} checking for metadata collisions...").format(node=self.name)):
700 check_for_unsolvable_metadata_key_conflicts(self)
701 io.stdout(_("{x} {node} has no metadata collisions").format(
702 x=green("✓"),
703 node=bold(self.name),
704 ))
705 if self.items:
706 test_items(self, ignore_missing_faults=ignore_missing_faults, workers=workers)
707 else:
708 io.stdout(_("{x} {node} has no items").format(node=bold(self.name), x=yellow("!")))
709
710 self.repo.hooks.test_node(self.repo, self)
711
712 def upload(self, local_path, remote_path, mode=None, owner="", group=""):
713 return operations.upload(
714 self.hostname,
715 local_path,
716 remote_path,
717 add_host_keys=environ.get('BW_ADD_HOST_KEYS', False) == "1",
718 group=group,
719 mode=mode,
720 owner=owner,
721 wrapper_inner=self.cmd_wrapper_inner,
722 wrapper_outer=self.cmd_wrapper_outer,
723 )
724
725 def verify(self, show_all=False, workers=4):
726 bad = 0
727 good = 0
728 if not self.items:
729 io.stdout(_("{x} {node} has no items").format(node=bold(self.name), x=yellow("!")))
730 else:
731 for item_status in verify_items(
732 self,
733 show_all=show_all,
734 workers=workers,
735 ):
736 if item_status:
737 good += 1
738 else:
739 bad += 1
740
741 return {'good': good, 'bad': bad}
742
743
744 def build_attr_property(attr, default):
745 def method(self):
746 attr_source = None
747 attr_value = None
748 group_order = [
749 self.repo.get_group(group_name)
750 for group_name in _flatten_group_hierarchy(self.groups)
751 ]
752
753 for group in group_order:
754 if getattr(group, attr) is not None:
755 attr_source = "group:{}".format(group.name)
756 attr_value = getattr(group, attr)
757
758 if getattr(self, "_{}".format(attr)) is not None:
759 attr_source = "node"
760 attr_value = getattr(self, "_{}".format(attr))
761
762 if attr_value is None:
763 attr_source = "default"
764 attr_value = default
765
766 io.debug(_("node {node} gets its {attr} attribute from: {source}").format(
767 node=self.name,
768 attr=attr,
769 source=attr_source,
770 ))
771 if self._dynamic_groups_resolved:
772 return attr_value
773 else:
774 raise DontCache(attr_value)
775 method.__name__ = str("_group_attr_{}".format(attr)) # required for cached_property
776 # str() for Python 2 compatibility
777 return cached_property(method)
778
779 for attr, default in GROUP_ATTR_DEFAULTS.items():
780 setattr(Node, attr, build_attr_property(attr, default))
781
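build_attr_property() resolves each node attribute with last-writer-wins precedence: groups in flattened hierarchy order are applied first, then a value set directly on the node, then the default. A simplified sketch of that resolution using plain data instead of the actual Node/Group objects:

```python
def resolve_attr(group_values, node_value, default):
    """Return (value, source) using the precedence implemented above:
    later groups override earlier ones, the node overrides all groups,
    and the default applies only if nothing else is set."""
    source, value = "default", default
    for group_name, group_value in group_values:
        if group_value is not None:
            source, value = "group:{}".format(group_name), group_value
    if node_value is not None:
        source, value = "node", node_value
    return value, source

# groups in hierarchy order; "prod" sets nothing, so "base" wins
print(resolve_attr([("base", "ssh"), ("prod", None)], None, "local"))
# → ('ssh', 'group:base')
```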
782
783 def test_items(node, ignore_missing_faults=False, workers=1):
784 item_queue = ItemTestQueue(node.items)
785
786 def tasks_available():
787 return bool(item_queue.items_without_deps)
788
789 def next_task():
790 try:
791 # Get the next non-DummyItem in the queue.
792 while True:
793 item = item_queue.pop()
794 if not isinstance(item, DummyItem):
795 break
796 except IndexError: # no more items available right now
797 return None
798 else:
799 return {
800 'task_id': item.node.name + ":" + item.bundle.name + ":" + item.id,
801 'target': item._test,
802 }
803
804 def handle_result(task_id, return_value, duration):
805 node_name, bundle_name, item_id = task_id.split(":", 2)
806 io.stdout("{x} {node} {bundle} {item}".format(
807 bundle=bold(bundle_name),
808 item=item_id,
809 node=bold(node_name),
810 x=green("✓"),
811 ))
812
813 def handle_exception(task_id, exception, traceback):
814 node_name, bundle_name, item_id = task_id.split(":", 2)
815 if ignore_missing_faults and isinstance(exception, FaultUnavailable):
816 io.stderr(_("{x} {node} {bundle} {item} ({msg})").format(
817 bundle=bold(bundle_name),
818 item=item_id,
819 msg=yellow(_("Fault unavailable")),
820 node=bold(node_name),
821 x=yellow("»"),
822 ))
823 else:
824 io.stderr("{x} {node} {bundle} {item}".format(
825 bundle=bold(bundle_name),
826 item=item_id,
827 node=bold(node_name),
828 x=red("!"),
829 ))
830 io.stderr(traceback)
831 io.stderr("{}: {}".format(type(exception), str(exception)))
832 exit(1)
833
834 worker_pool = WorkerPool(
835 tasks_available,
836 next_task,
837 handle_result=handle_result,
838 handle_exception=handle_exception,
839 pool_id="test_{}".format(node.name),
840 workers=workers,
841 )
842 worker_pool.run()
843
844 if item_queue.items_with_deps:
845 io.stderr(_(
846 "There was a dependency problem. Look at the debug.svg generated "
847 "by the following command and try to find a loop:\n"
848 "printf '{}' | dot -Tsvg -odebug.svg"
849 ).format("\\n".join(graph_for_items(node.name, item_queue.items_with_deps))))
850
851 raise ItemDependencyError(
852 _("bad dependencies between these items: {}").format(
853 ", ".join([i.id for i in item_queue.items_with_deps]),
854 )
855 )
856
857
858 def verify_items(node, show_all=False, workers=1):
859 items = []
860 for item in node.items:
861 if (
862 not item.ITEM_TYPE_NAME == 'action' and
863 not item.triggered
864 ):
865 items.append(item)
866
867 def tasks_available():
868 return bool(items)
869
870 def next_task():
871 while True:
872 try:
873 item = items.pop()
874 except IndexError:
875 return None
876 if item._faults_missing_for_attributes:
877 if item.error_on_missing_fault:
878 item._raise_for_faults()
879 else:
880 io.stdout(_("{x} {node} {bundle} {item} ({msg})").format(
881 bundle=bold(item.bundle.name),
882 item=item.id,
883 msg=yellow(_("Fault unavailable")),
884 node=bold(node.name),
885 x=yellow("»"),
886 ))
887 else:
888 return {
889 'task_id': node.name + ":" + item.bundle.name + ":" + item.id,
890 'target': item.get_status,
891 }
892
893 def handle_result(task_id, item_status, duration):
894 node_name, bundle_name, item_id = task_id.split(":", 2)
895 if not item_status.correct:
896 if item_status.must_be_created:
897 changes_text = _("create")
898 elif item_status.must_be_deleted:
899 changes_text = _("remove")
900 else:
901 changes_text = ", ".join(sorted(item_status.keys_to_fix))
902 io.stderr("{x} {node} {bundle} {item} ({changes})".format(
903 bundle=bold(bundle_name),
904 changes=changes_text,
905 item=item_id,
906 node=bold(node_name),
907 x=red("✘"),
908 ))
909 return False
910 else:
911 if show_all:
912 io.stdout("{x} {node} {bundle} {item}".format(
913 bundle=bold(bundle_name),
914 item=item_id,
915 node=bold(node_name),
916 x=green("✓"),
917 ))
918 return True
919
920 worker_pool = WorkerPool(
921 tasks_available,
922 next_task,
923 handle_result,
924 pool_id="verify_{}".format(node.name),
925 workers=workers,
926 )
927 return worker_pool.run()
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from pipes import quote
4 from select import select
5 from shlex import split
6 from subprocess import Popen, PIPE
7 from threading import Event, Thread
8 from os import close, environ, pipe, read, setpgrp
9
10 from .exceptions import RemoteException
11 from .utils import cached_property
12 from .utils.text import force_text, LineBuffer, mark_for_translation as _, randstr
13 from .utils.ui import io
14
15
16 def output_thread_body(line_buffer, read_fd, quit_event, read_until_eof):
17 # see run() for details
18 while True:
19 r, w, x = select([read_fd], [], [], 0.1)
20 if r:
21 chunk = read(read_fd, 1024)
22 if chunk:
23 line_buffer.write(chunk)
24 else: # EOF
25 return
26 elif quit_event.is_set() and not read_until_eof:
27 # one last chance to read output after the child process
28 # has died
29 while True:
30 r, w, x = select([read_fd], [], [], 0)
31 if r:
32 line_buffer.write(read(read_fd, 1024))
33 else:
34 break
35 return
36
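output_thread_body() above polls the pipe with select() so the thread can also watch quit_event instead of blocking forever in read(). A self-contained sketch of the same select-until-EOF loop (POSIX only, since select() on pipe file descriptors is not portable to Windows):

```python
from os import close, pipe, read, write
from select import select
from threading import Thread

def drain(read_fd, chunks):
    # poll with a timeout so a real implementation could also
    # check a quit event between iterations
    while True:
        r, _, _ = select([read_fd], [], [], 0.1)
        if r:
            chunk = read(read_fd, 1024)
            if not chunk:  # empty read means EOF: writer closed its end
                return
            chunks.append(chunk)

read_fd, write_fd = pipe()
chunks = []
reader = Thread(target=drain, args=(read_fd, chunks))
reader.start()
write(write_fd, b"hello ")
write(write_fd, b"world")
close(write_fd)  # signals EOF to the reading thread
reader.join()
close(read_fd)
```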
37
38 def download(
39 hostname,
40 remote_path,
41 local_path,
42 add_host_keys=False,
43 wrapper_inner="{}",
44 wrapper_outer="{}",
45 ):
46 """
47 Download a file.
48 """
49 io.debug(_("downloading {host}:{path} -> {target}").format(
50 host=hostname, path=remote_path, target=local_path))
51
52 result = run(
53 hostname,
54 "cat {}".format(quote(remote_path)), # See issue #39.
55 add_host_keys=add_host_keys,
56 wrapper_inner=wrapper_inner,
57 wrapper_outer=wrapper_outer,
58 )
59
60 if result.return_code == 0:
61 with open(local_path, "wb") as f:
62 f.write(result.stdout)
63 else:
64 raise RemoteException(_(
65 "reading file '{path}' on {host} failed: {error}"
66 ).format(
67 error=force_text(result.stderr) + force_text(result.stdout),
68 host=hostname,
69 path=remote_path,
70 ))
71
72
73 class RunResult(object):
74 def __init__(self):
75 self.return_code = None
76 self.stderr = None
77 self.stdout = None
78
79 @cached_property
80 def stderr_text(self):
81 return force_text(self.stderr)
82
83 @cached_property
84 def stdout_text(self):
85 return force_text(self.stdout)
86
87
88 def run(
89 hostname,
90 command,
91 add_host_keys=False,
92 ignore_failure=False,
93 log_function=None,
94 wrapper_inner="{}",
95 wrapper_outer="{}",
96 ):
97 """
98 Runs a command on a remote system.
99 """
100 # LineBuffer objects take care of always printing complete lines
101 # which have been properly terminated by a newline. This is only
102 # relevant when using `bw run`.
103 # Does nothing when log_function is None.
104 stderr_lb = LineBuffer(log_function)
105 stdout_lb = LineBuffer(log_function)
106
107 # Create pipes which will be used by the SSH child process. We do
108 # not use subprocess.PIPE because we need to be able to continuously
109 # check those pipes for new output, so we can feed it to the
110 # LineBuffers during `bw run`.
111 stdout_fd_r, stdout_fd_w = pipe()
112 stderr_fd_r, stderr_fd_w = pipe()
113
114 # Launch OpenSSH. It's important that SSH gets a dummy stdin, i.e.
115 # it must *not* read from the terminal. Otherwise, it can steal user
116 # input.
117 ssh_command = [
118 "ssh",
119 "-o", "KbdInteractiveAuthentication=no",
120 "-o", "PasswordAuthentication=no",
121 "-o", "StrictHostKeyChecking=no" if add_host_keys else "StrictHostKeyChecking=yes",
122 ]
123 extra_args = environ.get("BW_SSH_ARGS", "").strip()
124 if extra_args:
125 ssh_command.extend(split(extra_args))
126 ssh_command.append(hostname)
127 ssh_command.append(wrapper_outer.format(quote(wrapper_inner.format(command))))
128 cmd_id = randstr(length=4).upper()
129 io.debug("running command with ID {}: {}".format(cmd_id, " ".join(ssh_command)))
130
131 ssh_process = Popen(
132 ssh_command,
133 preexec_fn=setpgrp,
134 stdin=PIPE,
135 stderr=stderr_fd_w,
136 stdout=stdout_fd_w,
137 )
138 io._ssh_pids.append(ssh_process.pid)
139
140 quit_event = Event()
141 stdout_thread = Thread(
142 args=(stdout_lb, stdout_fd_r, quit_event, True),
143 target=output_thread_body,
144 )
145 stderr_thread = Thread(
146 args=(stderr_lb, stderr_fd_r, quit_event, False),
147 target=output_thread_body,
148 )
149 stdout_thread.start()
150 stderr_thread.start()
151
152 try:
153 ssh_process.communicate()
154 finally:
155 # Once we end up here, the OpenSSH process has terminated.
156 #
157 # Now, the big question is: Why do we need an Event here?
158 #
159 # Problem is, a user could use SSH multiplexing with
160 # auto-forking (e.g., "ControlPersist 10m"). In this case,
161 # OpenSSH forks another process which holds the "master"
162 # connection. This forked process *inherits* our pipes (at least
163 # for stderr). Thus, only when that master process finally
164 # terminates (possibly after many minutes), we will be informed
165 # about EOF on our stderr pipe. That doesn't work. bw will hang.
166 #
167 # So, instead, we use a busy loop in output_thread_body() which
168 # checks for quit_event being set. Unfortunately there is no way
169 # to be absolutely sure that we received all output from stderr
170 # because we never get a proper EOF there. All we can do is hope
171 # that all output has arrived on the reading end of the pipe by
172 # the time the quit_event is checked in the thread.
173 #
174 # Luckily stdout is a somewhat simpler affair: we can just close
175 # the writing end of the pipe, causing the reader thread to
176 # shut down as it sees the EOF.
177 io._ssh_pids.remove(ssh_process.pid)
178 quit_event.set()
179 close(stdout_fd_w)
180 stdout_thread.join()
181 stderr_thread.join()
182 stdout_lb.close()
183 stderr_lb.close()
184 for fd in (stdout_fd_r, stderr_fd_r, stderr_fd_w):
185 close(fd)
186
187 io.debug("command with ID {} finished with return code {}".format(
188 cmd_id,
189 ssh_process.returncode,
190 ))
191
192 result = RunResult()
193 result.stdout = stdout_lb.record.getvalue()
194 result.stderr = stderr_lb.record.getvalue()
195 result.return_code = ssh_process.returncode
196
197 if result.return_code != 0:
198 error_msg = _(
199 "Non-zero return code ({rcode}) running '{command}' "
200 "with ID {id} on '{host}':\n\n{result}\n\n"
201 ).format(
202 command=command,
203 host=hostname,
204 id=cmd_id,
205 rcode=result.return_code,
206 result=force_text(result.stdout) + force_text(result.stderr),
207 )
208 io.debug(error_msg)
209 if not ignore_failure or result.return_code == 255:
210 raise RemoteException(error_msg)
211 return result
212
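Before the command reaches ssh, run() applies wrapper_inner, shell-quotes the result, and then applies wrapper_outer (useful for things like sudo). A sketch of how the nesting composes; the example wrappers are made up, and shlex.quote stands in for the pipes.quote import used above:

```python
from shlex import quote  # pipes.quote in the code above is its predecessor

def wrap(command, wrapper_inner="{}", wrapper_outer="{}"):
    # the inner wrapper is applied to the raw command, the result is
    # quoted for the remote shell, and the outer wrapper stays unquoted
    return wrapper_outer.format(quote(wrapper_inner.format(command)))

print(wrap("echo hi", wrapper_outer="sudo sh -c {}"))
# → sudo sh -c 'echo hi'
```

Quoting only the inner part lets the outer wrapper contribute shell syntax of its own while the user command arrives as a single, safely quoted argument.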
213
214 def upload(
215 hostname,
216 local_path,
217 remote_path,
218 add_host_keys=False,
219 group="",
220 mode=None,
221 owner="",
222 wrapper_inner="{}",
223 wrapper_outer="{}",
224 ):
225 """
226 Upload a file.
227 """
228 io.debug(_("uploading {path} -> {host}:{target}").format(
229 host=hostname, path=local_path, target=remote_path))
230 temp_filename = ".bundlewrap_tmp_" + randstr()
231
232 scp_process = Popen(
233 [
234 "scp",
235 "-o",
236 "StrictHostKeyChecking=no" if add_host_keys else "StrictHostKeyChecking=yes",
237 local_path,
238 "{}:{}".format(hostname, temp_filename),
239 ],
240 preexec_fn=setpgrp,
241 stdin=PIPE,
242 stdout=PIPE,
243 stderr=PIPE,
244 )
245 io._ssh_pids.append(scp_process.pid)
246 stdout, stderr = scp_process.communicate()
247 io._ssh_pids.remove(scp_process.pid)
248
249 if scp_process.returncode != 0:
250 raise RemoteException(_(
251 "Upload to {host} failed for {failed}:\n\n{result}\n\n"
252 ).format(
253 failed=remote_path,
254 host=hostname,
255 result=force_text(stdout) + force_text(stderr),
256 ))
257
258 if owner or group:
259 if group:
260 group = ":" + quote(group)
261 run(
262 hostname,
263 "chown {}{} {}".format(
264 quote(owner),
265 group,
266 quote(temp_filename),
267 ),
268 add_host_keys=add_host_keys,
269 wrapper_inner=wrapper_inner,
270 wrapper_outer=wrapper_outer,
271 )
272
273 if mode:
274 run(
275 hostname,
276 "chmod {} {}".format(
277 mode,
278 quote(temp_filename),
279 ),
280 add_host_keys=add_host_keys,
281 wrapper_inner=wrapper_inner,
282 wrapper_outer=wrapper_outer,
283 )
284
285 run(
286 hostname,
287 "mv -f {} {}".format(
288 quote(temp_filename),
289 quote(remote_path),
290 ),
291 add_host_keys=add_host_keys,
292 wrapper_inner=wrapper_inner,
293 wrapper_outer=wrapper_outer,
294 )
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from json import dumps, loads
4 from os import chmod, remove
5 from os.path import exists, join
6 from stat import S_IREAD, S_IRGRP, S_IROTH
7
8 from requests import get
9
10 from .exceptions import NoSuchPlugin, PluginError, PluginLocalConflict
11 from .utils import download, hash_local_file
12 from .utils.text import mark_for_translation as _
13 from .utils.ui import io
14
15
16 BASE_URL = "https://raw.githubusercontent.com/bundlewrap/plugins/master"
17
18
19 class PluginManager(object):
20 def __init__(self, path, base_url=BASE_URL):
21 self.base_url = base_url
22 self.path = path
23 if exists(join(self.path, "plugins.json")):
24 with open(join(self.path, "plugins.json")) as f:
25 self.plugin_db = loads(f.read())
26 else:
27 self.plugin_db = {}
28
29 @property
30 def index(self):
31 return get(
32 "{}/index.json".format(self.base_url)
33 ).json()
34
35 def install(self, plugin, force=False):
36 if plugin in self.plugin_db:
37 raise PluginError(_("plugin '{plugin}' is already installed").format(plugin=plugin))
38
39 manifest = self.manifest_for_plugin(plugin)
40
41 for file in manifest['provides']:
42 target_path = join(self.path, file)
43 if exists(target_path) and not force:
44 raise PluginLocalConflict(_(
45 "cannot install '{plugin}' because it provides "
46 "'{path}' which already exists"
47 ).format(path=target_path, plugin=plugin))
48
49 url = "{}/{}/{}".format(self.base_url, plugin, file)
50 download(url, target_path)
51
52 # make the file read-only to discourage users from editing it,
53 # which would block future updates of the plugin
54 chmod(target_path, S_IREAD | S_IRGRP | S_IROTH)
55
56 self.record_as_installed(plugin, manifest)
57
58 return manifest
59
60 def list(self):
61 for plugin, info in self.plugin_db.items():
62 yield (plugin, info['version'])
63
64 def local_modifications(self, plugin):
65 try:
66 plugin_data = self.plugin_db[plugin]
67 except KeyError:
68 raise NoSuchPlugin(_(
69 "The plugin '{plugin}' is not installed."
70 ).format(plugin=plugin))
71 local_changes = []
72 for filename, checksum in plugin_data['files'].items():
73 target_path = join(self.path, filename)
74 actual_checksum = hash_local_file(target_path)
75 if actual_checksum != checksum:
76 local_changes.append((
77 target_path,
78 actual_checksum,
79 checksum,
80 ))
81 return local_changes
82
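local_modifications() above flags any installed file whose current hash differs from the one recorded in plugins.json at install time. A standalone version of that check; hashlib's sha1 is only a stand-in here, since the actual algorithm of hash_local_file() is not shown in this file:

```python
from hashlib import sha1
from os.path import join
from tempfile import mkdtemp

def file_hash(path):
    # stand-in for hash_local_file(); only needs a stable digest
    with open(path, "rb") as f:
        return sha1(f.read()).hexdigest()

def local_modifications(base_path, recorded):
    changed = []
    for filename, checksum in recorded.items():
        path = join(base_path, filename)
        actual = file_hash(path)
        if actual != checksum:
            changed.append((path, actual, checksum))
    return changed

tmp = mkdtemp()
path = join(tmp, "bundle.py")
with open(path, "w") as f:
    f.write("original content")
recorded = {"bundle.py": file_hash(path)}  # as written at install time
assert local_modifications(tmp, recorded) == []
with open(path, "w") as f:
    f.write("edited locally")
```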
83 def manifest_for_plugin(self, plugin):
84 r = get(
85 "{}/{}/manifest.json".format(self.base_url, plugin)
86 )
87 if r.status_code == 404:
88 raise NoSuchPlugin(plugin)
89 else:
90 return r.json()
91
92 def record_as_installed(self, plugin, manifest):
93 file_hashes = {}
94
95 for file in manifest['provides']:
96 target_path = join(self.path, file)
97 file_hashes[file] = hash_local_file(target_path)
98
99 self.plugin_db[plugin] = {
100 'files': file_hashes,
101 'version': manifest['version'],
102 }
103 self.write_db()
104
105 def remove(self, plugin, force=False):
106 if plugin not in self.plugin_db:
107 raise NoSuchPlugin(_("plugin '{plugin}' is not installed").format(plugin=plugin))
108
109 for file, db_checksum in self.plugin_db[plugin]['files'].items():
110 file_path = join(self.path, file)
111 if not exists(file_path):
112 continue
113
114 current_checksum = hash_local_file(file_path)
115 if db_checksum != current_checksum and not force:
116 io.stderr(_(
117 "not removing '{path}' because it has been modified since installation"
118 ).format(path=file_path))
119 continue
120
121 remove(file_path)
122
123 del self.plugin_db[plugin]
124 self.write_db()
125
126 def search(self, term):
127 term = term.lower()
128 for plugin_name, plugin_data in self.index.items():
129 if term in plugin_name.lower() or term in plugin_data['desc'].lower():
130 yield (plugin_name, plugin_data['desc'])
131
132 def update(self, plugin, check_only=False, force=False):
133 if plugin not in self.plugin_db:
134 raise PluginError(_("plugin '{plugin}' is not installed").format(plugin=plugin))
135
136 # before updating anything, we need to check for local modifications
137 local_changes = self.local_modifications(plugin)
138 if local_changes and not force:
139 files = [path for path, c1, c2 in local_changes]
140 raise PluginLocalConflict(_(
141 "cannot update '{plugin}' because the following files have been modified locally:"
142 "\n{files}"
143 ).format(files="\n".join(files), plugin=plugin))
144
145 manifest = self.manifest_for_plugin(plugin)
146
147 for file in manifest['provides']:
148 file_path = join(self.path, file)
149 if exists(file_path) and file not in self.plugin_db[plugin]['files'] and not force:
150 # new version added a file that already existed locally
151 raise PluginLocalConflict(_(
152 "cannot update '{plugin}' because it would overwrite '{path}'"
153 ).format(path=file, plugin=plugin))
154
155 old_version = self.plugin_db[plugin]['version']
156 new_version = manifest['version']
157
158 if not check_only and old_version != new_version:
159 # actually install files
160 for file in manifest['provides']:
161 target_path = join(self.path, file)
162 url = "{}/{}/{}".format(self.base_url, plugin, file)
163 download(url, target_path)
164
165 # make the file read-only to discourage users from editing it,
166 # which would block future updates of the plugin
167 chmod(target_path, S_IREAD | S_IRGRP | S_IROTH)
168
169 # check for files that have been removed in the new version
170 for file, db_checksum in self.plugin_db[plugin]['files'].items():
171 if file not in manifest['provides']:
172 file_path = join(self.path, file)
173 current_checksum = hash_local_file(file_path)
174 if db_checksum != current_checksum and not force:
175 io.stderr(_(
176 "not removing '{path}' because it has been modified since installation"
177 ).format(path=file_path))
178 continue
179 remove(file_path)
180
181 self.record_as_installed(plugin, manifest)
182
183 return (old_version, new_version)
184
185 def write_db(self):
186 with open(join(self.path, "plugins.json"), 'w') as f:
187 f.write(dumps(self.plugin_db, indent=4, sort_keys=True))
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from imp import load_source
4 from os import listdir, mkdir
5 from os.path import isdir, isfile, join
6 from threading import Lock
7
8 from pkg_resources import DistributionNotFound, require, VersionConflict
9
10 from . import items, utils, VERSION_STRING
11 from .bundle import FILENAME_BUNDLE
12 from .exceptions import (
13 BundleError,
14 NoSuchGroup,
15 NoSuchNode,
16 NoSuchRepository,
17 MissingRepoDependency,
18 RepositoryError,
19 )
20 from .group import Group
21 from .metadata import deepcopy_metadata
22 from .node import _flatten_group_hierarchy, Node
23 from .secrets import FILENAME_SECRETS, generate_initial_secrets_cfg, SecretProxy
24 from .utils import cached_property, merge_dict, names
25 from .utils.scm import get_rev
26 from .utils.statedict import hash_statedict
27 from .utils.text import mark_for_translation as _, red, validate_name
28 from .utils.ui import io, QUIT_EVENT
29
30 DIRNAME_BUNDLES = "bundles"
31 DIRNAME_DATA = "data"
32 DIRNAME_HOOKS = "hooks"
33 DIRNAME_ITEM_TYPES = "items"
34 DIRNAME_LIBS = "libs"
35 FILENAME_GROUPS = "groups.py"
36 FILENAME_NODES = "nodes.py"
37 FILENAME_REQUIREMENTS = "requirements.txt"
38
39 HOOK_EVENTS = (
40 'action_run_start',
41 'action_run_end',
42 'apply_start',
43 'apply_end',
44 'item_apply_start',
45 'item_apply_end',
46 'node_apply_start',
47 'node_apply_end',
48 'node_run_start',
49 'node_run_end',
50 'run_start',
51 'run_end',
52 'test',
53 'test_node',
54 )
55
56 INITIAL_CONTENT = {
57 FILENAME_GROUPS: _("""
58 groups = {
59 #'group1': {
60 # 'bundles': (
61 # 'bundle1',
62 # ),
63 # 'members': (
64 # 'node1',
65 # ),
66 # 'subgroups': (
67 # 'group2',
68 # ),
69 #},
70 'all': {
71 'member_patterns': (
72 r".*",
73 ),
74 },
75 }
76 """),
77
78 FILENAME_NODES: _("""
79 nodes = {
80 'node1': {
81 'hostname': "localhost",
82 },
83 }
84 """),
85 FILENAME_REQUIREMENTS: "bundlewrap>={}\n".format(VERSION_STRING),
86 FILENAME_SECRETS: generate_initial_secrets_cfg,
87 }
88 META_PROC_MAX_ITER = 1000 # maximum iterations for metadata processors
89
90
91 def groups_from_file(filepath, libs, repo_path, vault):
92 """
93 Returns all groups as defined in the given groups.py.
94 """
95 try:
96 flat_group_dict = utils.getattr_from_file(
97 filepath,
98 'groups',
99 base_env={
100 'libs': libs,
101 'repo_path': repo_path,
102 'vault': vault,
103 },
104 )
105 except KeyError:
106 raise RepositoryError(_(
107 "{} must define a 'groups' variable"
108 ).format(filepath))
109 for groupname, infodict in flat_group_dict.items():
110 yield Group(groupname, infodict)
111
112
113 class HooksProxy(object):
114 def __init__(self, path):
115 self.__hook_cache = {}
116 self.__module_cache = {}
117 self.__path = path
118 self.__registered_hooks = None
119
120 def __getattr__(self, attrname):
121 if attrname not in HOOK_EVENTS:
122 raise AttributeError
123
124 if self.__registered_hooks is None:
125 self._register_hooks()
126
127 event = attrname
128
129 if event not in self.__hook_cache:
130 # build a list of files that define a hook for the event
131 files = []
132 for filename, events in self.__registered_hooks.items():
133 if event in events:
134 files.append(filename)
135
136 # define a function that calls all hook functions
137 def hook(*args, **kwargs):
138 for filename in files:
139 self.__module_cache[filename][event](*args, **kwargs)
140 self.__hook_cache[event] = hook
141
142 return self.__hook_cache[event]
143
144 def _register_hooks(self):
145 """
146 Builds an internal dictionary of defined hooks.
147
148 Priming __module_cache here is just a performance shortcut and
149 could be left out.
150 """
151 self.__registered_hooks = {}
152
153 if not isdir(self.__path):
154 return
155
156 for filename in listdir(self.__path):
157 filepath = join(self.__path, filename)
158 if not filename.endswith(".py") or \
159 not isfile(filepath) or \
160 filename.startswith("_"):
161 continue
162 self.__module_cache[filename] = {}
163 self.__registered_hooks[filename] = []
164 for name, obj in utils.get_all_attrs_from_file(filepath).items():
165 if name not in HOOK_EVENTS:
166 continue
167 self.__module_cache[filename][name] = obj
168 self.__registered_hooks[filename].append(name)
169
170
171 def items_from_path(path):
172 """
173 Looks for Item subclasses in the given path.
174
175 An alternative method would involve metaclasses (as Django
176 does it), but then it gets very hard to have two separate repos
177 in the same process, because both of them would register config
178 item classes globally.
179 """
180 if not isdir(path):
181 return  # not StopIteration(): PEP 479 turns that into a RuntimeError inside generators
182 for filename in listdir(path):
183 filepath = join(path, filename)
184 if not filename.endswith(".py") or \
185 not isfile(filepath) or \
186 filename.startswith("_"):
187 continue
188 for name, obj in \
189 utils.get_all_attrs_from_file(filepath).items():
190 if obj == items.Item or name.startswith("_"):
191 continue
192 try:
193 if issubclass(obj, items.Item):
194 yield obj
195 except TypeError:
196 pass
197
198
199 class LibsProxy(object):
200 def __init__(self, path):
201 self.__module_cache = {}
202 self.__path = path
203
204 def __getattr__(self, attrname):
205 if attrname.startswith("__") and attrname.endswith("__"):
206 raise AttributeError(attrname)
207 if attrname not in self.__module_cache:
208 filename = attrname + ".py"
209 filepath = join(self.__path, filename)
210 try:
211 m = load_source('bundlewrap.repo.libs_{}'.format(attrname), filepath)
212 except:
213 io.stderr(_("Exception while trying to load {}:").format(filepath))
214 raise
215 self.__module_cache[attrname] = m
216 return self.__module_cache[attrname]
217
218
219 def nodes_from_file(filepath, libs, repo_path, vault):
220 """
221 Returns a list of nodes as defined in the given nodes.py.
222 """
223 try:
224 flat_node_dict = utils.getattr_from_file(
225 filepath,
226 'nodes',
227 base_env={
228 'libs': libs,
229 'repo_path': repo_path,
230 'vault': vault,
231 },
232 )
233 except KeyError:
234 raise RepositoryError(
235 _("{} must define a 'nodes' variable").format(filepath)
236 )
237 for nodename, infodict in flat_node_dict.items():
238 yield Node(nodename, infodict)
239
240
241 class Repository(object):
242 def __init__(self, repo_path=None):
243 self.path = "/dev/null" if repo_path is None else repo_path
244
245 self._set_path(self.path)
246
247 self.bundle_names = []
248 self.group_dict = {}
249 self.node_dict = {}
250 self._node_metadata_complete = {}
251 self._node_metadata_partial = {}
252 self._node_metadata_static_complete = set()
253 self._node_metadata_lock = Lock()
254
255 if repo_path is not None:
256 self.populate_from_path(repo_path)
257 else:
258 self.item_classes = list(items_from_path(items.__path__[0]))
259
260 def __eq__(self, other):
261 if self.path == "/dev/null":
262 # in-memory repos are never equal
263 return False
264 return self.path == other.path
265
266 def __repr__(self):
267 return "<Repository at '{}'>".format(self.path)
268
269 @staticmethod
270 def is_repo(path):
271 """
272 Validates whether the given path is a bundlewrap repository.
273 """
274 try:
275 assert isdir(path)
276 assert isfile(join(path, "nodes.py"))
277 assert isfile(join(path, "groups.py"))
278 except AssertionError:
279 return False
280 return True
281
282 def add_group(self, group):
283 """
284 Adds the given group object to this repo.
285 """
286 if group.name in utils.names(self.nodes):
287 raise RepositoryError(_("you cannot have a node and a group "
288 "both named '{}'").format(group.name))
289 if group.name in utils.names(self.groups):
290 raise RepositoryError(_("you cannot have two groups "
291 "both named '{}'").format(group.name))
292 group.repo = self
293 self.group_dict[group.name] = group
294
295 def add_node(self, node):
296 """
297 Adds the given node object to this repo.
298 """
299 if node.name in utils.names(self.groups):
300 raise RepositoryError(_("you cannot have a node and a group "
301 "both named '{}'").format(node.name))
302 if node.name in utils.names(self.nodes):
303 raise RepositoryError(_("you cannot have two nodes "
304 "both named '{}'").format(node.name))
305
306 node.repo = self
307 self.node_dict[node.name] = node
308
309 @cached_property
310 def cdict(self):
311 repo_dict = {}
312 for node in self.nodes:
313 repo_dict[node.name] = node.hash()
314 return repo_dict
315
316 @classmethod
317 def create(cls, path):
318 """
319 Creates and returns a repository at path, which must exist and
320 be empty.
321 """
322 for filename, content in INITIAL_CONTENT.items():
323 if callable(content):
324 content = content()
325 with open(join(path, filename), 'w') as f:
326 f.write(content.strip() + "\n")
327
328 mkdir(join(path, DIRNAME_BUNDLES))
329 mkdir(join(path, DIRNAME_ITEM_TYPES))
330
331 return cls(path)
332
333 def create_bundle(self, bundle_name):
334 """
335 Creates an empty bundle.
336 """
337 if not validate_name(bundle_name):
338 raise ValueError(_("'{}' is not a valid bundle name").format(bundle_name))
339
340 bundle_dir = join(self.bundles_dir, bundle_name)
341
342 # deliberately not using makedirs() so this will raise an
343 # exception if the directory exists
344 mkdir(bundle_dir)
345 mkdir(join(bundle_dir, "files"))
346
347 open(join(bundle_dir, FILENAME_BUNDLE), 'a').close()
348
349 def create_node(self, node_name):
350 """
351 Creates an adhoc node with the given name.
352 """
353 node = Node(node_name)
354 self.add_node(node)
355 return node
356
357 def get_group(self, group_name):
358 try:
359 return self.group_dict[group_name]
360 except KeyError:
361 raise NoSuchGroup(group_name)
362
363 def get_node(self, node_name):
364 try:
365 return self.node_dict[node_name]
366 except KeyError:
367 raise NoSuchNode(node_name)
368
369 def group_membership_hash(self):
370 return hash_statedict(sorted(names(self.groups)))
371
372 @property
373 def groups(self):
374 return sorted(self.group_dict.values())
375
376 def _static_groups_for_node(self, node):
377 for group in self.groups:
378 if node in group._static_nodes:
379 yield group
380
381 def hash(self):
382 return hash_statedict(self.cdict)
383
384 @property
385 def nodes(self):
386 return sorted(self.node_dict.values())
387
388 def nodes_in_all_groups(self, group_names):
389 """
390 Returns a list of nodes where every node is a member of every
391 group given.
392 """
393 base_group = set(self.get_group(group_names[0]).nodes)
394 for group_name in group_names[1:]:
395 if not base_group:
396 # quit early if we have already eliminated every node
397 break
398 base_group.intersection_update(set(self.get_group(group_name).nodes))
399 result = list(base_group)
400 result.sort()
401 return result
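The intersection logic above (start from the first group's node set, narrow it with each further group, bail out early once empty) can be sketched with plain sets; the group names and node sets below are made-up stand-ins for the repository's group objects.

```python
def nodes_in_all_groups(group_nodes, group_names):
    """Return the sorted node names that appear in every named group.

    group_nodes maps group name -> set of node names (a stand-in for
    Repository.get_group(...).nodes).
    """
    result = set(group_nodes[group_names[0]])
    for group_name in group_names[1:]:
        if not result:
            break  # every node already eliminated, no point continuing
        result.intersection_update(group_nodes[group_name])
    return sorted(result)

groups = {
    "web": {"node1", "node2", "node3"},
    "prod": {"node2", "node3", "node4"},
    "dmz": {"node3", "node5"},
}
print(nodes_in_all_groups(groups, ["web", "prod", "dmz"]))  # → ['node3']
```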
402
403 def nodes_in_any_group(self, group_names):
404 """
405 Returns all nodes that are a member of at least one of the given
406 groups.
407 """
408 for node in self.nodes:
409 if node.in_any_group(group_names):
410 yield node
411
412 def nodes_in_group(self, group_name):
413 """
414 Returns a list of nodes in the given group.
415 """
416 return self.nodes_in_all_groups([group_name])
417
418 def _metadata_for_node(self, node_name, partial=False):
419 """
420 Returns full or partial metadata for this node.
421
422 Partial metadata may only be requested from inside a metadata
423 processor.
424
425 If necessary, this method will build complete metadata for this
426 node and all related nodes ("related" meaning nodes that this
427 node depends on in one of its metadata processors).
428 """
429 try:
430 return self._node_metadata_complete[node_name]
431 except KeyError:
432 pass
433
434 if partial:
435 self._node_metadata_partial.setdefault(node_name, {})
436 return self._node_metadata_partial[node_name]
437
438 with self._node_metadata_lock:
439 try:
440 # maybe our metadata got completed while waiting for the lock
441 return self._node_metadata_complete[node_name]
442 except KeyError:
443 pass
444
445 self._node_metadata_partial[node_name] = {}
446 self._build_node_metadata()
447
448 # now that we have completed all metadata for this
449 # node and all related nodes, copy that data over
450 # to the complete dict
451 self._node_metadata_complete.update(self._node_metadata_partial)
452
453 # reset temporary vars
454 self._node_metadata_partial = {}
455 self._node_metadata_static_complete = set()
456
457 return self._node_metadata_complete[node_name]
458
459 def _build_node_metadata(self):
460 """
461 Builds complete metadata for all nodes that appear in
462 self._node_metadata_partial.keys().
463 """
464 iterations = {}
465 while (
466 not iterations or max(iterations.values()) <= META_PROC_MAX_ITER
467 ) and not QUIT_EVENT.is_set():
468 # First, get the static metadata out of the way
469 for node_name in list(self._node_metadata_partial):
470 if QUIT_EVENT.is_set():
471 break
472 node = self.get_node(node_name)
473 # check if static metadata for this node is already done
474 if node_name in self._node_metadata_static_complete:
475 continue
476 else:
477 self._node_metadata_static_complete.add(node_name)
478
479 with io.job(_(" {node} building group metadata...").format(node=node.name)):
480 group_order = _flatten_group_hierarchy(node.groups)
481 for group_name in group_order:
482 self._node_metadata_partial[node.name] = merge_dict(
483 self._node_metadata_partial[node.name],
484 self.get_group(group_name).metadata,
485 )
486
487 with io.job(_(" {node} merging node metadata...").format(node=node.name)):
488 self._node_metadata_partial[node.name] = merge_dict(
489 self._node_metadata_partial[node.name],
490 node._node_metadata,
491 )
492
493 # Now for the interesting part: We run all metadata processors
494 # in sequence until none of them return changed metadata.
495 modified = False
496 for node_name in list(self._node_metadata_partial):
497 if QUIT_EVENT.is_set():
498 break
499 node = self.get_node(node_name)
500 with io.job(_(" {node} running metadata processors...").format(node=node.name)):
501 for metadata_processor_name, metadata_processor in node.metadata_processors:
502 iterations.setdefault((node.name, metadata_processor_name), 1)
503 io.debug(_(
504 "running metadata processor {metaproc} for node {node}, "
505 "iteration #{i}"
506 ).format(
507 metaproc=metadata_processor_name,
508 node=node.name,
509 i=iterations[(node.name, metadata_processor_name)],
510 ))
511 processed = metadata_processor(
512 deepcopy_metadata(self._node_metadata_partial[node.name]),
513 )
514 iterations[(node.name, metadata_processor_name)] += 1
515 if not isinstance(processed, dict):
516 raise ValueError(_(
517 "metadata processor {metaproc} for node {node} did not return "
518 "a dictionary"
519 ).format(
520 metaproc=metadata_processor_name,
521 node=node.name,
522 ))
523 if processed != self._node_metadata_partial[node.name]:
524 io.debug(_(
525 "metadata processor {metaproc} for node {node} changed metadata, "
526 "rerunning all metadata processors for this node"
527 ).format(
528 metaproc=metadata_processor_name,
529 node=node.name,
530 ))
531 self._node_metadata_partial[node.name] = processed
532 modified = True
533 if not modified:
534 if self._node_metadata_static_complete != set(self._node_metadata_partial.keys()):
535 # During metadata processor execution, partial metadata may
536 # have been requested for nodes we did not previously
537 # consider. Since partial metadata defaults to
538 # just an empty dict, we still need to make sure to
539 # generate static metadata for these new nodes, as
540 # that may trigger additional runs of metadata
541 # processors.
542 continue
543 else:
544 break
545
546 for culprit, number_of_iterations in iterations.items():
547 if number_of_iterations >= META_PROC_MAX_ITER:
548 node, metadata_processor = culprit
549 raise BundleError(_(
550 "Metadata processor '{proc}' stopped after too many iterations "
551 "({max_iter}) for node '{node}' to prevent infinite loop. "
552 "This usually means one of two things: "
553 "1) You have two metadata processors that keep overwriting each other's "
554 "data or 2) You have a single metadata processor that keeps changing its own "
555 "data. "
556 "To fix this, use `bw --debug metadata {node}` and look for repeated messages "
557 "indicating that the same metadata processor keeps changing metadata. Then "
558 "rewrite that metadata processor to eventually stop changing metadata.".format(
559 max_iter=META_PROC_MAX_ITER,
560 node=node,
561 proc=metadata_processor,
562 ),
563 ))
564
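The processor loop above is a fixed-point iteration: every metadata processor is rerun until a full pass changes nothing, with an iteration cap guarding against processors that keep overwriting each other. A stripped-down sketch of that control flow (the processor functions and the cap value are illustrative, not the repository's actual API):

```python
META_PROC_MAX_ITER = 100  # illustrative cap, mirrors the guard above

def run_until_stable(metadata, processors, max_iter=META_PROC_MAX_ITER):
    """Rerun all processors until a full pass changes nothing."""
    for _ in range(max_iter):
        modified = False
        for proc in processors:
            processed = proc(dict(metadata))  # processors get a copy
            if processed != metadata:
                metadata = processed
                modified = True
        if not modified:
            return metadata
    raise RuntimeError("processors did not converge, possible infinite loop")

def ensure_port(meta):
    meta.setdefault("port", 80)
    return meta

def derive_url(meta):
    if "port" in meta:
        meta["url"] = "http://host:{}".format(meta["port"])
    return meta

print(run_until_stable({}, [ensure_port, derive_url]))
# → {'port': 80, 'url': 'http://host:80'}
```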
565 def metadata_hash(self):
566 repo_dict = {}
567 for node in self.nodes:
568 repo_dict[node.name] = node.metadata_hash()
569 return hash_statedict(repo_dict)
570
571 def populate_from_path(self, path):
572 if not self.is_repo(path):
573 raise NoSuchRepository(
574 _("'{}' is not a bundlewrap repository").format(path)
575 )
576
577 if path != self.path:
578 self._set_path(path)
579
580 # check requirements.txt
581 try:
582 with open(join(path, FILENAME_REQUIREMENTS)) as f:
583 lines = f.readlines()
584 except:
585 pass
586 else:
587 try:
588 require(lines)
589 except DistributionNotFound as exc:
590 raise MissingRepoDependency(_(
591 "{x} Python package '{pkg}' is listed in {filename}, but wasn't found. "
592 "You probably have to install it with `pip install {pkg}`."
593 ).format(
594 filename=FILENAME_REQUIREMENTS,
595 pkg=exc.req,
596 x=red("!"),
597 ))
598 except VersionConflict as exc:
599 raise MissingRepoDependency(_(
600 "{x} Python package '{required}' is listed in {filename}, "
601 "but only '{existing}' was found. "
602 "You probably have to upgrade it with `pip install {required}`."
603 ).format(
604 existing=exc.dist,
605 filename=FILENAME_REQUIREMENTS,
606 required=exc.req,
607 x=red("!"),
608 ))
609
610 self.vault = SecretProxy(self)
611
612 # populate bundles
613 self.bundle_names = []
614 for dir_entry in listdir(self.bundles_dir):
615 if validate_name(dir_entry):
616 self.bundle_names.append(dir_entry)
617
618 # populate groups
619 self.group_dict = {}
620 for group in groups_from_file(self.groups_file, self.libs, self.path, self.vault):
621 self.add_group(group)
622
623 # populate items
624 self.item_classes = list(items_from_path(items.__path__[0]))
625 for item_class in items_from_path(self.items_dir):
626 self.item_classes.append(item_class)
627
628 # populate nodes
629 self.node_dict = {}
630 for node in nodes_from_file(self.nodes_file, self.libs, self.path, self.vault):
631 self.add_node(node)
632
633 @utils.cached_property
634 def revision(self):
635 return get_rev()
636
637 def _set_path(self, path):
638 self.path = path
639 self.bundles_dir = join(self.path, DIRNAME_BUNDLES)
640 self.data_dir = join(self.path, DIRNAME_DATA)
641 self.hooks_dir = join(self.path, DIRNAME_HOOKS)
642 self.items_dir = join(self.path, DIRNAME_ITEM_TYPES)
643 self.groups_file = join(self.path, FILENAME_GROUPS)
644 self.libs_dir = join(self.path, DIRNAME_LIBS)
645 self.nodes_file = join(self.path, FILENAME_NODES)
646
647 self.hooks = HooksProxy(self.hooks_dir)
648 self.libs = LibsProxy(self.libs_dir)
0 from base64 import b64encode, urlsafe_b64decode
1 try:
2 from configparser import SafeConfigParser
3 except ImportError: # Python 2
4 from ConfigParser import SafeConfigParser
5 import hashlib
6 import hmac
7 from os import environ
8 from os.path import join
9 from string import ascii_letters, punctuation, digits
10
11 from cryptography.fernet import Fernet
12
13 from .exceptions import FaultUnavailable
14 from .utils import Fault, get_file_contents
15 from .utils.text import mark_for_translation as _
16 from .utils.ui import io
17
18
19 FILENAME_SECRETS = ".secrets.cfg"
20
21
22 def generate_initial_secrets_cfg():
23 return (
24 "# DO NOT COMMIT THIS FILE\n"
25 "# share it with your team through a secure channel\n\n"
26 "[generate]\nkey = {}\n\n"
27 "[encrypt]\nkey = {}\n"
28 ).format(
29 SecretProxy.random_key(),
30 SecretProxy.random_key(),
31 )
32
33
34 def random(seed):
35 """
36 Provides a way to get repeatable random numbers from the given seed.
37 Unlike random.seed(), this approach provides consistent results
38 across platforms.
39 See also http://stackoverflow.com/a/18992474
40 """
41 while True:
42 seed = hashlib.sha512(seed).digest()
43 for character in seed:
44 try:
45 yield ord(character)
46 except TypeError: # Python 3
47 yield character
48
49
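random() above turns a seed into an endless, platform-independent byte stream by repeated hashing: each SHA-512 digest is yielded byte by byte and then fed back in as the next seed. A self-contained Python 3 sketch of the same idea:

```python
import hashlib

def byte_stream(seed):
    """Yield an endless, repeatable sequence of ints (0-255) from a seed."""
    while True:
        seed = hashlib.sha512(seed).digest()
        for byte in seed:  # iterating over bytes yields ints on Python 3
            yield byte

stream_a = byte_stream(b"example seed")
stream_b = byte_stream(b"example seed")
first_five = [next(stream_a) for _ in range(5)]
assert first_five == [next(stream_b) for _ in range(5)]  # same seed, same stream
```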
50 class SecretProxy(object):
51 @staticmethod
52 def random_key():
53 """
54 Provided as a helper to generate new keys from `bw debug`.
55 """
56 return Fernet.generate_key().decode('utf-8')
57
58 def __init__(self, repo):
59 self.repo = repo
60 self.keys = self._load_keys()
61
62 def _decrypt(self, cryptotext=None, key='encrypt'):
63 """
64 Decrypts a given encrypted password.
65 """
66 if environ.get("BW_VAULT_DUMMY_MODE", "0") != "0":
67 return "decrypted text"
68 try:
69 key = self.keys[key]
70 except KeyError:
71 raise FaultUnavailable(_(
72 "Key '{key}' not available for decryption of the following cryptotext, "
73 "check your {file}: {cryptotext}"
74 ).format(
75 cryptotext=cryptotext,
76 file=FILENAME_SECRETS,
77 key=key,
78 ))
79
80 return Fernet(key).decrypt(cryptotext.encode('utf-8')).decode('utf-8')
81
82 def _decrypt_file(self, source_path=None, key='encrypt'):
83 """
84 Decrypts the file at source_path (relative to data/) and
85 returns the plaintext as unicode.
86 """
87 if environ.get("BW_VAULT_DUMMY_MODE", "0") != "0":
88 return "decrypted file"
89 try:
90 key = self.keys[key]
91 except KeyError:
92 raise FaultUnavailable(_(
93 "Key '{key}' not available for decryption of the following file, "
94 "check your {file}: {source_path}"
95 ).format(
96 file=FILENAME_SECRETS,
97 key=key,
98 source_path=source_path,
99 ))
100
101 f = Fernet(key)
102 return f.decrypt(get_file_contents(join(self.repo.data_dir, source_path))).decode('utf-8')
103
104 def _decrypt_file_as_base64(self, source_path=None, key='encrypt'):
105 """
106 Decrypts the file at source_path (relative to data/) and
107 returns the plaintext as base64.
108 """
109 if environ.get("BW_VAULT_DUMMY_MODE", "0") != "0":
110 return b64encode(b"decrypted file as base64").decode('utf-8')
111 try:
112 key = self.keys[key]
113 except KeyError:
114 raise FaultUnavailable(_(
115 "Key '{key}' not available for decryption of the following file, "
116 "check your {file}: {source_path}"
117 ).format(
118 file=FILENAME_SECRETS,
119 key=key,
120 source_path=source_path,
121 ))
122
123 f = Fernet(key)
124 return b64encode(f.decrypt(get_file_contents(
125 join(self.repo.data_dir, source_path),
126 ))).decode('utf-8')
127
128 def _generate_password(self, identifier=None, key='generate', length=32, symbols=False):
129 """
130 Derives a password from the given identifier and the shared key
131 in the repository.
132
133 This is done by seeding a random generator with an SHA512 HMAC built
134 from the key and the given identifier.
135 One could just use the HMAC digest itself as a password, but the
136 PRNG allows for more control over password length and complexity.
137 """
138 if environ.get("BW_VAULT_DUMMY_MODE", "0") != "0":
139 return "generatedpassword"
140 try:
141 key_encoded = self.keys[key]
142 except KeyError:
143 raise FaultUnavailable(_(
144 "Key '{key}' not available to generate password '{password}', check your {file}"
145 ).format(
146 file=FILENAME_SECRETS,
147 key=key,
148 password=identifier,
149 ))
150
151 alphabet = ascii_letters + digits
152 if symbols:
153 alphabet += punctuation
154
155 h = hmac.new(urlsafe_b64decode(key_encoded), digestmod=hashlib.sha512)
156 h.update(identifier.encode('utf-8'))
157 prng = random(h.digest())
158 return "".join([alphabet[next(prng) % (len(alphabet) - 1)] for i in range(length)])
159
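The derivation above can be reproduced end to end with only the standard library: an HMAC-SHA512 of the identifier under the shared key seeds the repeatable byte stream, and each byte picks a character from the alphabet. The key below is a made-up example, and this sketch deliberately mirrors the code above, including its quirk of using `len(alphabet) - 1`, which means the last character of the alphabet is never picked.

```python
import hashlib
import hmac
from string import ascii_letters, digits

def derive_password(key, identifier, length=32):
    """Repeatably derive a password from a shared key and an identifier."""
    alphabet = ascii_letters + digits

    def prng(seed):
        while True:
            seed = hashlib.sha512(seed).digest()
            for byte in seed:
                yield byte

    mac = hmac.new(key, identifier.encode("utf-8"), digestmod=hashlib.sha512)
    stream = prng(mac.digest())
    return "".join(
        alphabet[next(stream) % (len(alphabet) - 1)] for _ in range(length)
    )

# Same inputs always give the same password:
assert derive_password(b"example key", "postgres root") == \
    derive_password(b"example key", "postgres root")
```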
160 def _load_keys(self):
161 config = SafeConfigParser()
162 secrets_file = join(self.repo.path, FILENAME_SECRETS)
163 try:
164 config.read(secrets_file)
165 except IOError:
166 io.debug(_("unable to read {}").format(secrets_file))
167 return {}
168 result = {}
169 for section in config.sections():
170 result[section] = config.get(section, 'key').encode('utf-8')
171 return result
172
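_load_keys reads one `key` value per section from .secrets.cfg. That parsing can be exercised in isolation with configparser (using `read_string` here instead of reading a file; the key values are dummies):

```python
from configparser import ConfigParser

SECRETS = """
[generate]
key = generate-key-dummy

[encrypt]
key = encrypt-key-dummy
"""

def load_keys(text):
    """Return {section: key-as-bytes}, like SecretProxy._load_keys."""
    config = ConfigParser()
    config.read_string(text)
    return {
        section: config.get(section, "key").encode("utf-8")
        for section in config.sections()
    }

print(load_keys(SECRETS))
# → {'generate': b'generate-key-dummy', 'encrypt': b'encrypt-key-dummy'}
```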
173 def decrypt(self, cryptotext, key='encrypt'):
174 return Fault(
175 self._decrypt,
176 cryptotext=cryptotext,
177 key=key,
178 )
179
180 def decrypt_file(self, source_path, key='encrypt'):
181 return Fault(
182 self._decrypt_file,
183 source_path=source_path,
184 key=key,
185 )
186
187 def decrypt_file_as_base64(self, source_path, key='encrypt'):
188 return Fault(
189 self._decrypt_file_as_base64,
190 source_path=source_path,
191 key=key,
192 )
193
194 def encrypt(self, plaintext, key='encrypt'):
195 """
196 Encrypts a given plaintext password and returns a string that can
197 be fed into decrypt() to get the password back.
198 """
199 try:
200 key = self.keys[key]
201 except KeyError:
202 raise KeyError(_(
203 "Key '{key}' not available for encryption, check your {file}"
204 ).format(
205 file=FILENAME_SECRETS,
206 key=key,
207 ))
208
209 return Fernet(key).encrypt(plaintext.encode('utf-8')).decode('utf-8')
210
211 def encrypt_file(self, source_path, target_path, key='encrypt'):
212 """
213 Encrypts the file at source_path and places the result at
214 target_path. The source_path is relative to CWD or absolute,
215 while target_path is relative to data/.
216 """
217 try:
218 key = self.keys[key]
219 except KeyError:
220 raise KeyError(_(
221 "Key '{key}' not available for file encryption, check your {file}"
222 ).format(
223 file=FILENAME_SECRETS,
224 key=key,
225 ))
226
227 plaintext = get_file_contents(source_path)
228 fernet = Fernet(key)
229 target_file = join(self.repo.data_dir, target_path)
230 with open(target_file, 'wb') as f:
231 f.write(fernet.encrypt(plaintext))
232 return target_file
233
234 def _format(self, format_str=None, faults=None):
235 return format_str.format(*[fault.value for fault in faults])
236
237 def format(self, format_str, *faults):
238 """
239 Returns a Fault for a string formatted with the given Faults,
240 e.g.:
241
242 vault.format("password: {}", vault.password_for("something"))
243
244 DEPRECATED, remove in 3.0, use Fault.format_into instead.
245 """
246 return Fault(
247 self._format,
248 format_str=format_str,
249 faults=faults,
250 )
251
252 def password_for(self, identifier, key='generate', length=32, symbols=False):
253 return Fault(
254 self._generate_password,
255 identifier=identifier,
256 key=key,
257 length=length,
258 symbols=symbols,
259 )
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from codecs import getwriter
4 from contextlib import contextmanager
5 import hashlib
6 from inspect import isgenerator
7 from os import chmod, close, makedirs, remove
8 from os.path import dirname, exists
9 import stat
10 from sys import stderr, stdout
11 from tempfile import mkstemp
12
13 from requests import get
14
15 from ..exceptions import DontCache, FaultUnavailable
16
17 __GETATTR_CACHE = {}
18 __GETATTR_NODEFAULT = "very_unlikely_default_value"
19
20
21 MODE644 = stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH
22
23 try:
24 STDERR_WRITER = getwriter('utf-8')(stderr.buffer)
25 STDOUT_WRITER = getwriter('utf-8')(stdout.buffer)
26 except AttributeError: # Python 2
27 STDERR_WRITER = getwriter('utf-8')(stderr)
28 STDOUT_WRITER = getwriter('utf-8')(stdout)
29
30
31 def cached_property(prop):
32 """
33 A replacement for the property decorator that will only compute the
34 attribute's value on the first call and serve a cached copy from
35 then on.
36 """
37 def cache_wrapper(self):
38 if not hasattr(self, "_cache"):
39 self._cache = {}
40 if prop.__name__ not in self._cache:
41 try:
42 return_value = prop(self)
43 if isgenerator(return_value):
44 return_value = tuple(return_value)
45 except DontCache as exc:
46 return exc.obj
47 else:
48 self._cache[prop.__name__] = return_value
49 return self._cache[prop.__name__]
50 return property(cache_wrapper)
51
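A quick demonstration of the decorator above: the wrapped method runs once, after which the per-instance `_cache` dict serves every later access. The sketch below is a simplified reimplementation (it omits the DontCache and generator handling), and the Config class is a made-up example.

```python
def cached_property(prop):
    """Compute on first access, then serve from the instance's _cache."""
    def cache_wrapper(self):
        if not hasattr(self, "_cache"):
            self._cache = {}
        if prop.__name__ not in self._cache:
            self._cache[prop.__name__] = prop(self)
        return self._cache[prop.__name__]
    return property(cache_wrapper)

class Config(object):
    loads = 0

    @cached_property
    def data(self):
        Config.loads += 1  # stand-in for an expensive computation
        return {"answer": 42}

cfg = Config()
assert cfg.data == cfg.data == {"answer": 42}
assert Config.loads == 1  # computed only once for this instance
```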
52
53 def download(url, path):
54 if not exists(dirname(path)):
55 makedirs(dirname(path))
56 if exists(path):
57 chmod(path, MODE644)
58 with open(path, 'wb') as f:
59 r = get(url, stream=True)
60 r.raise_for_status()
61 for block in r.iter_content(1024):
62 if not block:
63 break
64 else:
65 f.write(block)
66
67
68 class Fault(object):
69 """
70 A proxy object for lazy access to things that may not really be
71 available at the time of use.
72
73 This lets us gracefully skip items that require information that's
74 currently not available.
75 """
76 def __init__(self, callback, **kwargs):
77 self._available = None
78 self._exc = None
79 self._value = None
80 self.callback = callback
81 self.kwargs = kwargs
82
83 def _resolve(self):
84 if self._available is None:
85 try:
86 self._value = self.callback(**self.kwargs)
87 self._available = True
88 except FaultUnavailable as exc:
89 self._available = False
90 self._exc = exc
91
92 def __add__(self, other):
93 if isinstance(other, Fault):
94 def callback():
95 return self.value + other.value
96 return Fault(callback)
97 else:
98 def callback():
99 return self.value + other
100 return Fault(callback)
101
102 def __len__(self):
103 return len(self.value)
104
105 def __str__(self):
106 return str(self.value)
107
108 def format_into(self, format_string):
109 def callback():
110 return format_string.format(self.value)
111 return Fault(callback)
112
113 @property
114 def is_available(self):
115 self._resolve()
116 return self._available
117
118 @property
119 def value(self):
120 self._resolve()
121 if not self._available:
122 raise self._exc
123 return self._value
124
125
126 def _make_method_callback(method_name):
127 def method(self, *args, **kwargs):
128 def callback():
129 return getattr(self.value, method_name)(*args, **kwargs)
130 return Fault(callback)
131 return method
132
133
134 for method_name in (
135 'format',
136 'lower',
137 'lstrip',
138 'replace',
139 'rstrip',
140 'strip',
141 'upper',
142 'zfill',
143 ):
144 setattr(Fault, method_name, _make_method_callback(method_name))
145
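The setattr loop above bolts string methods onto Fault so that e.g. fault.upper() returns another lazy Fault instead of resolving immediately. The same pattern in miniature, with a stripped-down Lazy class standing in for Fault:

```python
class Lazy(object):
    """Minimal stand-in for Fault: defers a callback until .value is read."""
    def __init__(self, callback):
        self.callback = callback

    @property
    def value(self):
        return self.callback()

def _make_method_callback(method_name):
    def method(self, *args, **kwargs):
        # Return a new lazy proxy that applies the method upon resolution.
        return Lazy(lambda: getattr(self.value, method_name)(*args, **kwargs))
    return method

for name in ("upper", "strip", "zfill"):
    setattr(Lazy, name, _make_method_callback(name))

secret = Lazy(lambda: "  hunter2  ")
chained = secret.strip().upper()   # nothing has been resolved yet
print(chained.value)  # → HUNTER2
```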
146
147 def get_file_contents(path):
148 with open(path, 'rb') as f:
149 content = f.read()
150 return content
151
152
153 def get_all_attrs_from_file(path, cache=True, base_env=None):
154 """
155 Reads all 'attributes' (as if it were a module) from a source file.
156 """
157 if base_env is None:
158 base_env = {}
159 if base_env:
160 # do not allow caching when passing in a base env because that
161 # breaks repeated calls with different base envs for the same
162 # file
163 cache = False
164 if path not in __GETATTR_CACHE or not cache:
165 source = get_file_contents(path)
166 env = base_env.copy()
167 try:
168 exec(source, env)
169 except:
170 from .ui import io
171 io.stderr("Exception while executing {}".format(path))
172 raise
173 if cache:
174 __GETATTR_CACHE[path] = env
175 else:
176 env = __GETATTR_CACHE[path]
177 return env
178
179
180 def getattr_from_file(path, attrname, base_env=None, cache=True, default=__GETATTR_NODEFAULT):
181 """
182 Reads a specific 'attribute' (as if it were a module) from a source
183 file.
184 """
185 env = get_all_attrs_from_file(path, base_env=base_env, cache=cache)
186 if default == __GETATTR_NODEFAULT:
187 return env[attrname]
188 else:
189 return env.get(attrname, default)
190
191
192 def graph_for_items(
193 title,
194 items,
195 cluster=True,
196 concurrency=True,
197 static=True,
198 regular=True,
199 reverse=True,
200 auto=True,
201 ):
202 items = sorted(items)
203
204 yield "digraph bundlewrap"
205 yield "{"
206
207 # Print subgraphs *below* each other
208 yield "rankdir = LR"
209
210 # Global attributes
211 yield ("graph [color=\"#303030\"; "
212 "fontname=Helvetica; "
213 "penwidth=2; "
214 "shape=box; "
215 "style=\"rounded,dashed\"]")
216 yield ("node [color=\"#303030\"; "
217 "fillcolor=\"#303030\"; "
218 "fontcolor=white; "
219 "fontname=Helvetica; "
220 "shape=box; "
221 "style=\"rounded,filled\"]")
222 yield "edge [arrowhead=vee]"
223
224 item_ids = []
225 for item in items:
226 item_ids.append(item.id)
227
228 if cluster:
229 # Define which items belong to which bundle
230 bundle_number = 0
231 bundles_seen = []
232 for item in items:
233 if item.bundle is None or item.bundle.name in bundles_seen:
234 continue
235 yield "subgraph cluster_{}".format(bundle_number)
236 bundle_number += 1
237 yield "{"
238 yield "label = \"{}\"".format(item.bundle.name)
239 yield "\"bundle:{}\"".format(item.bundle.name)
240 for bitem in item.bundle.items:
241 if bitem.id in item_ids:
242 yield "\"{}\"".format(bitem.id)
243 yield "}"
244 bundles_seen.append(item.bundle.name)
245
246 # Define dependencies between items
247 for item in items:
248 if regular:
249 for dep in item.needs:
250 if dep in item_ids:
251 yield "\"{}\" -> \"{}\" [color=\"#C24948\",penwidth=2]".format(item.id, dep)
252
253 if auto:
254 for dep in sorted(item._deps):
255 if dep in item._concurrency_deps:
256 if concurrency:
257 yield "\"{}\" -> \"{}\" [color=\"#714D99\",penwidth=2]".format(item.id, dep)
258 elif dep in item._reverse_deps:
259 if reverse:
260 yield "\"{}\" -> \"{}\" [color=\"#D18C57\",penwidth=2]".format(item.id, dep)
261 elif dep not in item.needs:
262 if dep in item_ids:
263 yield "\"{}\" -> \"{}\" [color=\"#6BB753\",penwidth=2]".format(item.id, dep)
264
265 # Global graph title
266 yield "fontsize = 28"
267 yield "label = \"{}\"".format(title)
268 yield "labelloc = \"t\""
269 yield "}"
270
271
272 def hash_local_file(path):
273 """
274 Returns the sha1 hash of a file on the local machine.
275 """
276 return sha1(get_file_contents(path))
277
278
279 class _Atomic(object):
280 """
281 This and the following related classes are used to mark objects as
282 non-mergeable for the purposes of merge_dict().
283 """
284 pass
285
286
287 class _AtomicDict(dict, _Atomic):
288 pass
289
290
291 class _AtomicList(list, _Atomic):
292 pass
293
294
295 class _AtomicSet(set, _Atomic):
296 pass
297
298
299 class _AtomicTuple(tuple, _Atomic):
300 pass
301
302
303 ATOMIC_TYPES = {
304 dict: _AtomicDict,
305 list: _AtomicList,
306 set: _AtomicSet,
307 tuple: _AtomicTuple,
308 }
309
310
311 def merge_dict(base, update):
312 """
313 Recursively merges the update dict into the base dict.
314 """
315 if not isinstance(update, dict):
316 return update
317
318 merged = base.copy()
319
320 for key, value in update.items():
321 merge = key in base and not isinstance(value, _Atomic)
322 if merge and isinstance(base[key], dict):
323 merged[key] = merge_dict(base[key], value)
324 elif (
325 merge and
326 isinstance(base[key], list) and
327 (
328 isinstance(value, list) or
329 isinstance(value, set) or
330 isinstance(value, tuple)
331 )
332 ):
333 extended = base[key][:]
334 extended.extend(value)
335 merged[key] = extended
336 elif (
337 merge and
338 isinstance(base[key], tuple) and
339 (
340 isinstance(value, list) or
341 isinstance(value, set) or
342 isinstance(value, tuple)
343 )
344 ):
345 merged[key] = base[key] + tuple(value)
346 elif (
347 merge and
348 isinstance(base[key], set) and
349 (
350 isinstance(value, list) or
351 isinstance(value, set) or
352 isinstance(value, tuple)
353 )
354 ):
355 merged[key] = base[key].union(set(value))
356 else:
357 merged[key] = value
358
359 return merged
360
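In short: nested dicts merge recursively, list/tuple/set values of compatible types are combined, and anything wrapped in an _Atomic subclass (or any plain scalar) simply replaces the base value. A condensed, runnable version of that decision table, covering only dicts and lists:

```python
def merge(base, update):
    """Recursively apply update on top of base (update wins on conflict)."""
    merged = dict(base)
    for key, value in update.items():
        if key in base and isinstance(base[key], dict) and isinstance(value, dict):
            merged[key] = merge(base[key], value)  # recurse into nested dicts
        elif key in base and isinstance(base[key], list) and isinstance(value, list):
            merged[key] = base[key] + value  # lists are concatenated
        else:
            merged[key] = value  # scalars (and type mismatches) are replaced
    return merged

base = {"ports": [80], "ssl": {"enabled": False}, "name": "web"}
update = {"ports": [443], "ssl": {"cert": "/etc/ssl/web.pem"}}
print(merge(base, update))
# → {'ports': [80, 443], 'ssl': {'enabled': False, 'cert': '/etc/ssl/web.pem'}, 'name': 'web'}
```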
361
362 def names(obj_list):
363 """
364 Iterator over the name properties of a given list of objects.
365
366 repo.nodes will give you node objects
367 names(repo.nodes) will give you node names
368 """
369 for obj in obj_list:
370 yield obj.name
371
372
373 def sha1(data):
374 """
375 Returns hex SHA1 hash for input.
376 """
377 hasher = hashlib.sha1()
378 hasher.update(data)
379 return hasher.hexdigest()
380
381
382 @contextmanager
383 def tempfile():
384 handle, path = mkstemp()
385 close(handle)
386 yield path
387 remove(path)
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2 from sys import exit
3
4 from ..exceptions import NoSuchGroup, NoSuchItem, NoSuchNode
5 from . import names
6 from .text import mark_for_translation as _, red
7 from .ui import io
8
9
10 def get_group(repo, group_name):
11 try:
12 return repo.get_group(group_name)
13 except NoSuchGroup:
14 io.stderr(_("{x} No such group: {group}").format(
15 group=group_name,
16 x=red("!!!"),
17 ))
18 exit(1)
19
20
21 def get_item(node, item_id):
22 try:
23 return node.get_item(item_id)
24 except NoSuchItem:
25 io.stderr(_("{x} No such item on node '{node}': {item}").format(
26 item=item_id,
27 node=node.name,
28 x=red("!!!"),
29 ))
30 exit(1)
31
32
33 def get_node(repo, node_name, adhoc_nodes=False):
34 try:
35 return repo.get_node(node_name)
36 except NoSuchNode:
37 if adhoc_nodes:
38 return repo.create_node(node_name)
39 else:
40 io.stderr(_("{x} No such node: {node}").format(
41 node=node_name,
42 x=red("!!!"),
43 ))
44 exit(1)
45
46
47 def get_target_nodes(repo, target_string, adhoc_nodes=False):
48 """
49 Returns a list of nodes. The input is a string like this:
50
51 "node1,node2,group3,bundle:foo"
52
53 Meaning: Targets are 'node1', 'node2', all nodes in 'group3',
54 and all nodes with the bundle 'foo'.
55 """
56 targets = []
57 for name in target_string.split(","):
58 name = name.strip()
59 if name.startswith("bundle:"):
60 bundle_name = name.split(":", 1)[1]
61 for node in repo.nodes:
62 if bundle_name in names(node.bundles):
63 targets.append(node)
64 elif name.startswith("!bundle:"):
65 bundle_name = name.split(":", 1)[1]
66 for node in repo.nodes:
67 if bundle_name not in names(node.bundles):
68 targets.append(node)
69 elif name.startswith("!group:"):
70 group_name = name.split(":", 1)[1]
71 for node in repo.nodes:
72 if group_name not in names(node.groups):
73 targets.append(node)
74 else:
75 try:
76 targets.append(repo.get_node(name))
77 except NoSuchNode:
78 try:
79 targets += list(repo.get_group(name).nodes)
80 except NoSuchGroup:
81 if adhoc_nodes:
82 targets.append(repo.create_node(name))
83 else:
84 io.stderr(_("{x} No such node or group: {name}").format(
85 x=red("!!!"),
86 name=name,
87 ))
88 exit(1)
89 return sorted(set(targets))
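The selector grammar handled above — comma-separated names with `bundle:`, `!bundle:` and `!group:` prefixes — can be split out into a small classifier. A sketch of just the parsing (looking up the actual nodes is left to the repository):

```python
def parse_targets(target_string):
    """Split 'node1,group2,bundle:foo,!group:bar' into (kind, negate, name)."""
    parsed = []
    for raw in target_string.split(","):
        name = raw.strip()
        negate = name.startswith("!")
        if negate:
            name = name[1:]
        if ":" in name:
            kind, name = name.split(":", 1)
        else:
            kind = "name"  # a plain node or group name
        parsed.append((kind, negate, name))
    return parsed

print(parse_targets("node1, group2, bundle:foo, !group:bar"))
# → [('name', False, 'node1'), ('name', False, 'group2'),
#    ('bundle', False, 'foo'), ('group', True, 'bar')]
```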
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from pipes import quote
4
5 from . import cached_property
6 from .text import force_text, mark_for_translation as _
7 from .ui import io
8
9
10 def _parse_file_output(file_output):
11 if file_output.startswith("cannot open "):
12 # required for Mac OS X, OpenBSD, and CentOS/RHEL
13 return ('nonexistent', "")
14 elif file_output.endswith("directory"):
15 return ('directory', file_output)
16 elif file_output.startswith("block special") or \
17 file_output.startswith("character special"):
18 return ('other', file_output)
19 elif file_output.startswith("symbolic link to ") or \
20 file_output.startswith("broken symbolic link to "):
21 return ('symlink', file_output)
22 else:
23 return ('file', file_output)
24
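The prefix checks above map `file -bh` output onto five coarse path types. They can be verified against typical outputs; the sample strings below are illustrative, shaped like real `file` output.

```python
def parse_file_output(file_output):
    """Map output of `file -bh` to (TYPE, DESC), as in _parse_file_output."""
    if file_output.startswith("cannot open "):
        return ('nonexistent', "")
    elif file_output.endswith("directory"):
        return ('directory', file_output)
    elif file_output.startswith(("block special", "character special")):
        return ('other', file_output)
    elif file_output.startswith(("symbolic link to ", "broken symbolic link to ")):
        return ('symlink', file_output)
    else:
        return ('file', file_output)

assert parse_file_output("cannot open `/nope' (No such file)")[0] == 'nonexistent'
assert parse_file_output("directory")[0] == 'directory'
assert parse_file_output("symbolic link to /etc/motd")[0] == 'symlink'
assert parse_file_output("ASCII text")[0] == 'file'
```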
25
26 def get_path_type(node, path):
27 """
28 Returns (TYPE, DESC) where TYPE is one of:
29
30 'directory', 'file', 'nonexistent', 'other', 'symlink'
31
32 and DESC is the output of the 'file' command line utility.
33 """
34 result = node.run("file -bh -- {}".format(quote(path)), may_fail=True)
35 file_output = force_text(result.stdout.strip())
36 if (
37 result.return_code != 0 or
38 "No such file or directory" in file_output # thanks CentOS
39 ):
40 return ('nonexistent', "")
41
42 return _parse_file_output(file_output)
43
44
45 def stat(node, path):
46 if node.os in node.OS_FAMILY_BSD:
47 result = node.run("stat -f '%Su:%Sg:%p:%z' -- {}".format(quote(path)))
48 else:
49 result = node.run("stat -c '%U:%G:%a:%s' -- {}".format(quote(path)))
50 owner, group, mode, size = force_text(result.stdout).split(":")
51 mode = mode[-4:].zfill(4) # cut off BSD file type
52 file_stat = {
53 'owner': owner,
54 'group': group,
55 'mode': mode,
56 'size': int(size),
57 }
58 io.debug(_("stat for '{path}' on {node}: {result}".format(
59 node=node.name,
60 path=path,
61 result=repr(file_stat),
62 )))
63 return file_stat
64
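The colon-separated stat format parses the same way for both variants; on BSD, `%p` prepends file-type digits to the permission bits, which the `mode[-4:].zfill(4)` step trims and re-pads. A sketch of just that parsing (the sample strings are illustrative):

```python
def parse_stat_line(line):
    """Parse 'owner:group:mode:size' as produced by the stat commands above."""
    owner, group, mode, size = line.strip().split(":")
    mode = mode[-4:].zfill(4)  # drop BSD file-type digits, keep 4 mode digits
    return {'owner': owner, 'group': group, 'mode': mode, 'size': int(size)}

# GNU stat: mode comes bare and gets zero-padded to four digits
assert parse_stat_line("root:root:644:1024")['mode'] == "0644"
# BSD stat: '%p' includes the file type (e.g. 100644 for a regular file)
assert parse_stat_line("root:wheel:100644:1024")['mode'] == "0644"
```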
65
66 class PathInfo(object):
67 """
68 Serves as a proxy to get_path_type.
69 """
70 def __init__(self, node, path):
71 self.node = node
72 self.path = path
73 self.path_type, self.desc = get_path_type(node, path)
74 self.stat = stat(node, path) if self.path_type != 'nonexistent' else {}
75
76 def __repr__(self):
77 return "<PathInfo for {}:{}>".format(self.node.name, quote(self.path))
78
79 @property
80 def exists(self):
81 return self.path_type != 'nonexistent'
82
83 @property
84 def group(self):
85 return self.stat['group']
86
87 @property
88 def is_binary_file(self):
89 return self.is_file and not self.is_text_file
90
91 @property
92 def is_directory(self):
93 return self.path_type == 'directory'
94
95 @property
96 def is_file(self):
97 return self.path_type == 'file'
98
99 @property
100 def is_symlink(self):
101 return self.path_type == 'symlink'
102
103 @property
104 def is_text_file(self):
105 return self.is_file and (
106 "text" in self.desc or
107 self.desc in (
108 "empty",
109 "OpenSSH RSA public key",
110 "OpenSSH DSA public key",
111 )
112 )
113
114 @property
115 def mode(self):
116 return self.stat['mode']
117
118 @property
119 def owner(self):
120 return self.stat['owner']
121
122 @cached_property
123 def sha1(self):
124 if self.node.os == 'macos':
125 result = self.node.run("shasum -a 1 -- {}".format(quote(self.path)))
126 elif self.node.os in self.node.OS_FAMILY_BSD:
127 result = self.node.run("sha1 -q -- {}".format(quote(self.path)))
128 else:
129 result = self.node.run("sha1sum -- {}".format(quote(self.path)))
130 return force_text(result.stdout).strip().split()[0]
131
132 @property
133 def size(self):
134 return self.stat['size']
135
136 @property
137 def symlink_target(self):
138 if not self.is_symlink:
139 raise ValueError("{} is not a symlink".format(quote(self.path)))
140 if self.desc.startswith("symbolic link to `"):
141 return self.desc[18:-1]
142 elif self.desc.startswith("broken symbolic link to `"):
143 return self.desc[25:-1]
144 elif self.desc.startswith("symbolic link to "):
145 return self.desc[17:]
146 elif self.desc.startswith("broken symbolic link to "):
147 return self.desc[24:]
148 else:
149 raise ValueError("unable to find target for {}".format(quote(self.path)))
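Older `file` versions quote the symlink target in backticks while newer ones print it bare, hence the four prefix cases above. The slicing can be checked directly; the magic numbers in the property correspond to the prefix lengths ("symbolic link to `" is 18 characters, "broken symbolic link to `" is 25).

```python
def symlink_target(desc):
    """Extract the target path from a `file` description of a symlink."""
    # Quoted prefixes must be checked first, since the unquoted form
    # is a prefix of the quoted one.
    for prefix, quoted in (
        ("symbolic link to `", True),
        ("broken symbolic link to `", True),
        ("symbolic link to ", False),
        ("broken symbolic link to ", False),
    ):
        if desc.startswith(prefix):
            # the quoted form ends with a trailing ' that must go too
            return desc[len(prefix):-1] if quoted else desc[len(prefix):]
    raise ValueError("not a symlink description: {}".format(desc))

assert symlink_target("symbolic link to `/etc/motd'") == "/etc/motd"
assert symlink_target("symbolic link to /etc/motd") == "/etc/motd"
```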
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from subprocess import CalledProcessError, check_output, STDOUT
4
5
6 def get_bzr_rev():
7 try:
8 return check_output(
9 "bzr revno",
10 shell=True,
11 stderr=STDOUT,
12 ).strip()
13 except CalledProcessError:
14 return None
15
16
17 def get_git_rev():
18 try:
19 return check_output(
20 "git rev-parse HEAD",
21 shell=True,
22 stderr=STDOUT,
23 ).strip()
24 except CalledProcessError:
25 return None
26
27
28 def get_hg_rev():
29 try:
30 return check_output(
31 "hg --debug id -i",
32 shell=True,
33 stderr=STDOUT,
34 ).strip().rstrip("+")
35 except CalledProcessError:
36 return None
37
38
39 def get_rev():
40 for scm_rev in (get_git_rev, get_hg_rev, get_bzr_rev):
41 rev = scm_rev()
42 if rev is not None:
43 return rev
44 return None
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from difflib import unified_diff
4 from hashlib import sha1
5 from json import dumps, JSONEncoder
6
7 from . import Fault
8 from .text import bold, green, red
9 from .text import force_text, mark_for_translation as _
10
11
12 try:
13 text_type = unicode
14 byte_type = str
15 except NameError:
16 text_type = str
17 byte_type = bytes
18
19 DIFF_MAX_INLINE_LENGTH = 36
20 DIFF_MAX_LINE_LENGTH = 1024
21
22
23 def diff_keys(sdict1, sdict2):
24 """
25 Compares the keys of two statedicts and returns the keys with
26 differing values.
27
28 Note that only keys in the first statedict are considered. If a key
29 only exists in the second one, it is disregarded.
30 """
31 if sdict1 is None:
32 return []
33 if sdict2 is None:
34 return sdict1.keys()
35 differing_keys = []
36 for key, value in sdict1.items():
37 if value != sdict2[key]:
38 differing_keys.append(key)
39 return differing_keys
40
41
42 def diff_value_bool(title, value1, value2):
43 return diff_value_text(
44 title,
45 "yes" if value1 else "no",
46 "yes" if value2 else "no",
47 )
48
49
50 def diff_value_int(title, value1, value2):
51 return diff_value_text(
52 title,
53 "{}".format(value1),
54 "{}".format(value2),
55 )
56
57
58 def diff_value_list(title, value1, value2):
59 if isinstance(value1, set):
60 value1 = sorted(value1)
61 value2 = sorted(value2)
62 elif isinstance(value1, tuple):
63 value1 = list(value1)
64 value2 = list(value2)
65 # make sure that *if* we have lines, the last one will also end with
66 # a newline
67 if value1:
68 value1.append("")
69 if value2:
70 value2.append("")
71 return diff_value_text(
72 title,
73 "\n".join([str(i) for i in value1]),
74 "\n".join([str(i) for i in value2]),
75 )
76
77
78 def diff_value_text(title, value1, value2):
79     value1, value2 = force_text(value1), force_text(value2)
80     max_length = max(len(value1), len(value2))
81 if (
82 "\n" not in value1 and
83 "\n" not in value2
84 ):
85 if max_length < DIFF_MAX_INLINE_LENGTH:
86 return "{} {} → {}".format(
87 bold(title),
88 red(value1),
89 green(value2),
90 )
91 elif max_length < DIFF_MAX_LINE_LENGTH:
92 return "{} {}\n{}→ {}".format(
93 bold(title),
94 red(value1),
95 " " * (len(title) - 1),
96 green(value2),
97 )
98 output = bold(title) + "\n"
99 for line in unified_diff(
100 value1.splitlines(True),
101 value2.splitlines(True),
102 fromfile=_("<node>"),
103 tofile=_("<bundlewrap>"),
104 ):
105 suffix = ""
106 if len(line) > DIFF_MAX_LINE_LENGTH:
107 suffix += _(" (line truncated after {} characters)").format(DIFF_MAX_LINE_LENGTH)
108 if not line.endswith("\n"):
109 suffix += _(" (no newline at end of file)")
110 line = line[:DIFF_MAX_LINE_LENGTH].rstrip("\n")
111 if line.startswith("+"):
112 line = green(line)
113 elif line.startswith("-"):
114 line = red(line)
115 output += line + suffix + "\n"
116 return output
117
118
119 TYPE_DIFFS = {
120 bool: diff_value_bool,
121 byte_type: diff_value_text,
122 float: diff_value_int,
123 int: diff_value_int,
124 list: diff_value_list,
125 set: diff_value_list,
126 text_type: diff_value_text,
127 tuple: diff_value_list,
128 }
129
130
131 def diff_value(title, value1, value2):
132 value_type = type(value1)
133 assert value_type == type(value2)
134 diff_func = TYPE_DIFFS[value_type]
135 return diff_func(title, value1, value2)
136
137
138 class FaultResolvingJSONEncoder(JSONEncoder):
139 def default(self, obj):
140 if isinstance(obj, Fault):
141 return obj.value
142 else:
143             return JSONEncoder.default(self, obj)
144
145
146 def hash_statedict(sdict):
147 """
148 Returns a canonical SHA1 hash to describe this dict.
149 """
150 return sha1(statedict_to_json(sdict).encode('utf-8')).hexdigest()
151
152
153 def statedict_to_json(sdict, pretty=False):
154 """
155 Returns a canonical JSON representation of the given statedict.
156 """
157 if sdict is None:
158 return ""
159 else:
160 return dumps(
161 sdict,
162 cls=FaultResolvingJSONEncoder,
163 indent=4 if pretty else None,
164 sort_keys=True,
165 )
166
167
168 def validate_statedict(sdict):
169 """
170 Raises ValueError if the given statedict is invalid.
171 """
172 if sdict is None:
173 return
174 for key, value in sdict.items():
175 if not isinstance(force_text(key), text_type):
176 raise ValueError(_("non-text statedict key: {}").format(key))
177
178 if type(value) not in TYPE_DIFFS and value is not None:
179 raise ValueError(
180 _("invalid statedict value for key '{k}': {v}").format(k=key, v=value)
181 )
182
183 if type(value) in (list, tuple):
184 for index, element in enumerate(value):
185 if type(element) not in TYPE_DIFFS and element is not None:
186 raise ValueError(_(
187 "invalid element #{i} in statedict key '{k}': {e}"
188 ).format(
189 e=element,
190 i=index,
191 k=key,
192 ))
0 import platform
1 from subprocess import Popen, PIPE
2
3 from ..bundle import FILENAME_BUNDLE
4 from ..secrets import FILENAME_SECRETS
5
6
7 HOST_OS = {
8 "Darwin": 'macos',
9 "Linux": 'linux',
10 }
11
12
13 def host_os():
14 return HOST_OS[platform.system()]
15
16
17 def make_repo(tmpdir, bundles=None, groups=None, nodes=None):
18 bundles = {} if bundles is None else bundles
19 groups = {} if groups is None else groups
20 nodes = {} if nodes is None else nodes
21
22 bundles_dir = tmpdir.mkdir("bundles")
23 for bundle, items in bundles.items():
24 bundle_dir = bundles_dir.mkdir(bundle)
25 bundle_dir.mkdir("files")
26 bundlepy = bundle_dir.join(FILENAME_BUNDLE)
27 bundle_content = ""
28 for itemtype, itemconfig in items.items():
29 bundle_content += "{} = {}\n".format(itemtype, repr(itemconfig))
30 bundlepy.write(bundle_content)
31
32 tmpdir.mkdir("data")
33 tmpdir.mkdir("hooks")
34
35 groupspy = tmpdir.join("groups.py")
36 groupspy.write("groups = {}\n".format(repr(groups)))
37
38 nodespy = tmpdir.join("nodes.py")
39 nodespy.write("nodes = {}\n".format(repr(nodes)))
40
41 secrets = tmpdir.join(FILENAME_SECRETS)
42 secrets.write("[generate]\nkey = {}\n\n[encrypt]\nkey = {}\n".format(
43 "Fl53iG1czBcaAPOKhSiJE7RjFU9nIAGkiKDy0k_LoTc=",
44 "DbYiUu5VMfrdeSiKYiAH4rDOAUISipvLSBJI-T0SpeY=",
45 ))
46
47
48 def run(command, path=None):
49 process = Popen(command, cwd=path, shell=True, stderr=PIPE, stdout=PIPE)
50 stdout, stderr = process.communicate()
51 print(stdout.decode('utf-8'))
52 print(stderr.decode('utf-8'))
53 return (stdout, stderr, process.returncode)
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from io import BytesIO
4 from os import environ
5 from os.path import normpath
6 from random import choice
7 import re
8 from string import digits, ascii_letters
9
10 from . import Fault, STDERR_WRITER
11
12
13 ANSI_ESCAPE = re.compile(r'\x1b[^m]*m')
14 VALID_NAME_CHARS = digits + ascii_letters + "-_.+"
15
16
17 def ansi_wrapper(colorizer):
18 if environ.get("BW_COLORS", "1") != "0":
19 return colorizer
20 else:
21 return lambda s, **kwargs: s
22
23
24 @ansi_wrapper
25 def blue(text):
26 return "\033[34m{}\033[0m".format(text)
27
28
29 @ansi_wrapper
30 def bold(text):
31 return "\033[1m{}\033[0m".format(text)
32
33
34 @ansi_wrapper
35 def cyan(text):
36 return "\033[36m{}\033[0m".format(text)
37
38
39 @ansi_wrapper
40 def inverse(text):
41 return "\033[0m\033[7m{}\033[0m".format(text)
42
43
44 @ansi_wrapper
45 def green(text):
46 return "\033[32m{}\033[0m".format(text)
47
48
49 @ansi_wrapper
50 def red(text):
51 return "\033[31m{}\033[0m".format(text)
52
53
54 @ansi_wrapper
55 def yellow(text):
56 return "\033[33m{}\033[0m".format(text)
57
58
59 def error_summary(errors):
60 if not errors:
61 return
62
63 if len(errors) == 1:
64 STDERR_WRITER.write(_("\n{x} There was an error, repeated below.\n\n").format(
65 x=red("!!!"),
66 ))
67 STDERR_WRITER.flush()
68 else:
69 STDERR_WRITER.write(_("\n{x} There were {count} errors, repeated below.\n\n").format(
70 count=len(errors),
71 x=red("!!!"),
72 ))
73 STDERR_WRITER.flush()
74
75 for e in errors:
76 STDERR_WRITER.write(e)
77 STDERR_WRITER.write("\n")
78 STDERR_WRITER.flush()
79
80
81 def force_text(data):
82 """
83 Try to return a text aka unicode object from the given data.
84 Also has Python 2/3 compatibility baked in. Oh the humanity.
85 """
86 if isinstance(data, bytes):
87 return data.decode('utf-8', 'replace')
88 elif isinstance(data, Fault):
89 return data.value
90 return data
91
92
93 def is_subdirectory(parent, child):
94 """
95 Returns True if the given child is a subdirectory of the parent.
96 """
97 parent = normpath(parent)
98 child = normpath(child)
99
100 if not parent.startswith("/") or not child.startswith("/"):
101 raise ValueError(_("directory paths must be absolute"))
102
103 if parent == child:
104 return False
105
106 if parent == "/":
107 return True
108
109 return child.startswith(parent + "/")
110
111
112 def mark_for_translation(s):
113 return s
114 _ = mark_for_translation
115
116
117 def randstr(length=24):
118 """
119 Returns a random alphanumeric string of the given length.
120 """
121 return ''.join(choice(ascii_letters + digits) for c in range(length))
122
123
124 def validate_name(name):
125 """
126 Checks whether the given string is a valid name for a node, group,
127 or bundle.
128 """
129 try:
130 for char in name:
131 assert char in VALID_NAME_CHARS
132 assert not name.startswith(".")
133 except AssertionError:
134 return False
135 return True
136
137
138 def wrap_question(title, body, question, prefix=""):
139 output = ("{0}\n"
140 "{0} ╭─ {1}\n"
141 "{0} │\n".format(prefix, title))
142 for line in body.splitlines():
143 output += "{0} │ {1}\n".format(prefix, line)
144 output += ("{0} │\n"
145 "{0} ╰─ ".format(prefix) + question)
146 return output
147
148
149 class LineBuffer(object):
150 def __init__(self, target):
151 self.buffer = b""
152 self.record = BytesIO()
153 self.target = target if target else lambda s: None
154
155 def close(self):
156 self.flush()
157 if self.buffer:
158 self.record.write(self.buffer)
159 self.target(self.buffer)
160
161 def flush(self):
162 while b"\n" in self.buffer:
163 chunk, self.buffer = self.buffer.split(b"\n", 1)
164 self.record.write(chunk + b"\n")
165 self.target(chunk + b"\n")
166
167 def write(self, msg):
168 self.buffer += msg
169 self.flush()
0 from datetime import datetime, timedelta
1
2 from .text import mark_for_translation as _
3
4
5 def format_duration(duration):
6 """
7 Takes a timedelta and returns something like "1d 5h 4m 3s".
8 """
9 components = []
10 if duration.days > 0:
11 components.append(_("{}d").format(duration.days))
12 seconds = duration.seconds
13 if seconds >= 3600:
14 hours = int(seconds / 3600)
15 seconds -= hours * 3600
16 components.append(_("{}h").format(hours))
17 if seconds >= 60:
18 minutes = int(seconds / 60)
19 seconds -= minutes * 60
20 components.append(_("{}m").format(minutes))
21 if seconds > 0 or not components:
22 components.append(_("{}s").format(seconds))
23 return " ".join(components)
24
25
26 def format_timestamp(timestamp):
27 return datetime.fromtimestamp(timestamp).strftime("%Y-%m-%d %H:%M:%S")
28
29
30 def parse_duration(duration):
31 """
32 Parses a string like "1d 5h 4m 3s" into a timedelta.
33 """
34 days = 0
35 seconds = 0
36 for component in duration.strip().split(" "):
37 component = component.strip()
38 if component[-1] == "d":
39 days += int(component[:-1])
40 elif component[-1] == "h":
41 seconds += int(component[:-1]) * 3600
42 elif component[-1] == "m":
43 seconds += int(component[:-1]) * 60
44 elif component[-1] == "s":
45 seconds += int(component[:-1])
46 else:
47 raise ValueError(_("{} is not a valid duration string").format(repr(duration)))
48 return timedelta(days=days, seconds=seconds)
0 from contextlib import contextmanager
1 from datetime import datetime
2 from errno import EPIPE
3 import fcntl
4 from functools import wraps
5 from os import _exit, environ, getpid, kill
6 from os.path import join
7 from select import select
8 from signal import signal, SIG_DFL, SIGINT, SIGTERM
9 import struct
10 import sys
11 import termios
12 from threading import Event, Lock, Thread
13
14 from . import STDERR_WRITER, STDOUT_WRITER
15 from .text import ANSI_ESCAPE, blue, bold, inverse, mark_for_translation as _
16
17 QUIT_EVENT = Event()
18 SHUTDOWN_EVENT_HARD = Event()
19 SHUTDOWN_EVENT_SOFT = Event()
20 TTY = STDOUT_WRITER.isatty()
21
22
23 if sys.version_info >= (3, 0):
24 broken_pipe_exception = BrokenPipeError
25 else:
26 broken_pipe_exception = IOError
27
28
29 def add_debug_indicator(f):
30 @wraps(f)
31 def wrapped(self, msg, **kwargs):
32 return f(self, "[DEBUG] " + msg, **kwargs)
33 return wrapped
34
35
36 def add_debug_timestamp(f):
37 @wraps(f)
38 def wrapped(self, msg, **kwargs):
39 if self.debug_mode:
40 msg = datetime.now().strftime("[%Y-%m-%d %H:%M:%S.%f] ") + msg
41 return f(self, msg, **kwargs)
42 return wrapped
43
44
45 def capture_for_debug_logfile(f):
46 @wraps(f)
47 def wrapped(self, msg, **kwargs):
48 if self.debug_log_file:
49 self.debug_log_file.write(
50 datetime.now().strftime("[%Y-%m-%d %H:%M:%S.%f] ") +
51 ANSI_ESCAPE.sub("", msg).rstrip("\n") + "\n"
52 )
53 return f(self, msg, **kwargs)
54 return wrapped
55
56
57 def clear_formatting(f):
58 """
59     Makes sure formatting from cut-off lines can't bleed into the next one
60 """
61 @wraps(f)
62 def wrapped(self, msg, **kwargs):
63 if TTY and environ.get("BW_COLORS", "1") != "0":
64 msg = "\033[0m" + msg
65 return f(self, msg, **kwargs)
66 return wrapped
67
68
69 def sigint_handler(*args, **kwargs):
70 """
71 This handler is kept short since it interrupts execution of the
72 main thread. It's safer to handle these events in their own thread
73 because the main thread might be holding the IO lock while it is
74 interrupted.
75 """
76 if not SHUTDOWN_EVENT_SOFT.is_set():
77 SHUTDOWN_EVENT_SOFT.set()
78 else:
79 SHUTDOWN_EVENT_HARD.set()
80
81
82 def term_width():
83 if not TTY:
84 return 0
85
86 fd = sys.stdout.fileno()
87 _, width = struct.unpack('hh', fcntl.ioctl(fd, termios.TIOCGWINSZ, 'aaaa'))
88 return width
89
90
91 def write_to_stream(stream, msg):
92 try:
93 if TTY:
94 stream.write(msg)
95 else:
96 stream.write(ANSI_ESCAPE.sub("", msg))
97 stream.flush()
98 except broken_pipe_exception as e:
99 if broken_pipe_exception == IOError:
100 if e.errno != EPIPE:
101 raise
102
103
104 class DrainableStdin(object):
105 def get_input(self):
106 while True:
107 if QUIT_EVENT.is_set():
108 return None
109 if select([sys.stdin], [], [], 0.1)[0]:
110 return sys.stdin.readline().strip()
111
112 def drain(self):
113 if sys.stdin.isatty():
114 termios.tcflush(sys.stdin, termios.TCIFLUSH)
115
116
117 class IOManager(object):
118 """
119 Threadsafe singleton class that handles all IO.
120 """
121 def __init__(self):
122 self._active = False
123 self.debug_log_file = None
124 self.debug_mode = False
125 self.jobs = []
126 self.lock = Lock()
127 self._signal_handler_thread = Thread(
128 target=self._signal_handler_thread_body,
129 )
130 # daemon mode is required because we need to keep the thread
131 # around until the end of a soft shutdown to wait for a hard
132 # shutdown signal, but don't have a feasible way of stopping
133 # the thread once the soft shutdown has completed
134 self._signal_handler_thread.daemon = True
135 self._ssh_pids = []
136
137 def activate(self):
138 self._active = True
139 if 'BW_DEBUG_LOG_DIR' in environ:
140 self.debug_log_file = open(join(
141 environ['BW_DEBUG_LOG_DIR'],
142 "{}_{}.log".format(
143 datetime.now().strftime("%Y-%m-%d_%H-%M-%S"),
144 getpid(),
145 ),
146 ), 'a')
147 self._signal_handler_thread.start()
148 signal(SIGINT, sigint_handler)
149
150 def ask(self, question, default, epilogue=None, input_handler=DrainableStdin()):
151 assert self._active
152 answers = _("[Y/n]") if default else _("[y/N]")
153 question = question + " " + answers + " "
154 with self.lock:
155 if QUIT_EVENT.is_set():
156 sys.exit(0)
157 self._clear_last_job()
158 while True:
159 write_to_stream(STDOUT_WRITER, "\a" + question)
160
161 input_handler.drain()
162 answer = input_handler.get_input()
163 if answer is None:
164 if epilogue:
165 write_to_stream(STDOUT_WRITER, "\n" + epilogue + "\n")
166 QUIT_EVENT.set()
167 sys.exit(0)
168 elif answer.lower() in (_("y"), _("yes")) or (
169 not answer and default
170 ):
171 answer = True
172 break
173 elif answer.lower() in (_("n"), _("no")) or (
174 not answer and not default
175 ):
176 answer = False
177 break
178 write_to_stream(
179 STDOUT_WRITER,
180 _("Please answer with 'y(es)' or 'n(o)'.\n"),
181 )
182 if epilogue:
183 write_to_stream(STDOUT_WRITER, epilogue + "\n")
184 self._write_current_job()
185 return answer
186
187 def deactivate(self):
188 self._active = False
189 signal(SIGINT, SIG_DFL)
190 self._signal_handler_thread.join()
191 if self.debug_log_file:
192 self.debug_log_file.close()
193
194 @clear_formatting
195 @add_debug_indicator
196 @capture_for_debug_logfile
197 @add_debug_timestamp
198 def debug(self, msg, append_newline=True):
199 if self.debug_mode:
200 with self.lock:
201 self._write(msg, append_newline=append_newline)
202
203 def job_add(self, msg):
204 if not self._active:
205 return
206 with self.lock:
207 if TTY:
208 self._clear_last_job()
209 write_to_stream(STDOUT_WRITER, inverse("{} ".format(msg)[:term_width() - 1]))
210 self.jobs.append(msg)
211
212 def job_del(self, msg):
213 if not self._active:
214 return
215 with self.lock:
216 self._clear_last_job()
217 self.jobs.remove(msg)
218 self._write_current_job()
219
220 @clear_formatting
221 @capture_for_debug_logfile
222 @add_debug_timestamp
223 def stderr(self, msg, append_newline=True):
224 with self.lock:
225 self._write(msg, append_newline=append_newline, err=True)
226
227 @clear_formatting
228 @capture_for_debug_logfile
229 @add_debug_timestamp
230 def stdout(self, msg, append_newline=True):
231 with self.lock:
232 self._write(msg, append_newline=append_newline)
233
234 @contextmanager
235 def job(self, job_text):
236 self.job_add(job_text)
237 try:
238 yield
239 finally:
240 self.job_del(job_text)
241
242 def _clear_last_job(self):
243 if self.jobs and TTY:
244 write_to_stream(STDOUT_WRITER, "\r\033[K")
245
246 def _signal_handler_thread_body(self):
247 while self._active:
248 if QUIT_EVENT.is_set():
249 if SHUTDOWN_EVENT_HARD.wait(0.1):
250 self.stderr(_("{x} {signal} cleanup interrupted, exiting...").format(
251 signal=bold(_("SIGINT")),
252 x=blue("i"),
253 ))
254 for ssh_pid in self._ssh_pids:
255 self.debug(_("killing SSH session with PID {pid}").format(pid=ssh_pid))
256 try:
257 kill(ssh_pid, SIGTERM)
258 except ProcessLookupError:
259 pass
260 self._clear_last_job()
261 _exit(1)
262 else:
263 if SHUTDOWN_EVENT_SOFT.wait(0.1):
264 QUIT_EVENT.set()
265 self.stderr(_(
266 "{x} {signal} canceling pending tasks... "
267 "(hit CTRL+C again for immediate dirty exit)"
268 ).format(
269 signal=bold(_("SIGINT")),
270 x=blue("i"),
271 ))
272
273 def _write(self, msg, append_newline=True, err=False):
274 if not self._active:
275 return
276 if self.jobs and TTY:
277 write_to_stream(STDOUT_WRITER, "\r\033[K")
278 if msg is not None:
279 if append_newline:
280 msg += "\n"
281 write_to_stream(STDERR_WRITER if err else STDOUT_WRITER, msg)
282 self._write_current_job()
283
284 def _write_current_job(self):
285 if self.jobs and TTY:
286 write_to_stream(STDOUT_WRITER, inverse("{} ".format(self.jobs[-1])[:term_width() - 1]))
287
288 io = IOManager()
0 docs.bundlewrap.org
0 # API
1
2 While most users will interact with BundleWrap through the `bw` command line utility, you can also use it from your own code to extract data or further automate config management tasks.
3
4 Even within BundleWrap itself (e.g. templates, libs, and hooks) you are often given repo and/or node objects to work with. Their methods and attributes are documented below.
5
6 Some general notes on using BundleWrap's API:
7
8 * There can be an arbitrary number of `bundlewrap.repo.Repository` objects per process.
9 * Repositories are read as needed and not re-read when something changes. Modifying files in a repo during the lifetime of the matching Repository object may result in undefined behavior.
10
11 <br>
12
13 ## Example
14
15 Here's a short example of how to use BundleWrap to get the uptime for a node.
16
17 from bundlewrap.repo import Repository
18
19 repo = Repository("/path/to/my/repo")
20 node = repo.get_node("mynode")
21 uptime = node.run("uptime")
22 print(uptime.stdout)
23
24 <br>
25
26 ## Reference
27
28
29 ### bundlewrap.repo.Repository(path)
30
31 The starting point of any interaction with BundleWrap. An object of this class represents the repository at the given path.
32
33 <br>
34
35 **`.groups`**
36
37 A list of all groups in the repo (instances of `bundlewrap.group.Group`)
38
39 <br>
40
41 **`.group_names`**
42
43 A list of all group names in this repo.
44
45 <br>
46
47 **`.nodes`**
48
49 A list of all nodes in the repo (instances of `bundlewrap.node.Node`)
50
51 <br>
52
53 **`.node_names`**
54
55 A list of all node names in this repo
56
57 <br>
58
59 **`.revision`**
60
61 The current git, hg or bzr revision of this repo. `None` if no SCM was detected.
62
63 <br>
64
65 **`.get_group(group_name)`**
66
67 Returns the Group object for the given name.
68
69 <br>
70
71 **`.get_node(node_name)`**
72
73 Returns the Node object with the given name.
74
75 <br>
76
77 **`.nodes_in_all_groups(group_names)`**
78
79 Returns a list of Node objects where every node is a member of every group name given.
80
81 <br>
82
83 **`.nodes_in_any_group(group_names)`**
84
85 Returns all Node objects that are a member of at least one of the given group names.
86
87 <br>
88
89 **`.nodes_in_group(group_name)`**
90
91 Returns a list of Node objects in the named group.
92
93 <br>
94
95 ### bundlewrap.node.Node()
96
97 A system managed by BundleWrap.
98
99 <br>
100
101 **`.bundles`**
102
103 A list of all bundles associated with this node (instances of `bundlewrap.bundle.Bundle`)
104
105 <br>
106
107 **`.groups`**
108
109 A list of `bundlewrap.group.Group` objects this node belongs to
110
111 <br>
112
113 **`.hostname`**
114
115 The DNS name BundleWrap uses to connect to this node
116
117 <br>
118
119 **`.items`**
120
121 A list of items on this node (instances of subclasses of `bundlewrap.items.Item`)
122
123 <br>
124
125 **`.metadata`**
126
127 A dictionary of custom metadata, merged from information in [nodes.py](../repo/nodes.py.md) and [groups.py](../repo/groups.py.md)
128
129 <br>
130
131 **`.name`**
132
133 The internal identifier for this node
134
135 <br>
136
137 **`.download(remote_path, local_path)`**
138
139 Downloads a file from the node.
140
141 `remote_path` Which file to get from the node
142 `local_path` Where to put the file
143
144 <br>
145
146 **`.get_item(item_id)`**
147
148 Get the Item object with the given ID (e.g. "file:/etc/motd").
149
150 <br>
151
152 **`.has_bundle(bundle_name)`**
153
154 `True` if the node has a bundle with the given name.
155
156 <br>
157
158 **`.has_any_bundle(bundle_names)`**
159
160 `True` if the node has a bundle with any of the given names.
161
162 <br>
163
164 **`.in_group(group_name)`**
165
166 `True` if the node is in a group with the given name.
167
168 <br>
169
170 **`.in_any_group(group_names)`**
171
172 `True` if the node is in a group with any of the given names.
173
174 <br>
175
176 **`.run(command, may_fail=False)`**
177
178 Runs a command on the node. Returns an instance of `bundlewrap.operations.RunResult`.
179
180 `command` What should be executed on the node
181 `may_fail` If `False`, `bundlewrap.exceptions.RemoteException` will be raised if the command does not return 0.
182
183 <br>
184
185 **`.upload(local_path, remote_path, mode=None, owner="", group="")`**
186
187 Uploads a file to the node.
188
189 `local_path` Which file to upload
190 `remote_path` Where to put the file on the target node
191 `mode` File mode, e.g. "0644"
192 `owner` Username of the file owner
193 `group` Group name of the file group
194
195 <br>
196
197 ### bundlewrap.group.Group
198
199 A user-defined group of nodes.
200
201 <br>
202
203 **`.name`**
204
205 The name of this group
206
207 <br>
208
209 **`.nodes`**
210
211 A list of all nodes in this group (instances of `bundlewrap.node.Node`, includes subgroup members)
212
213 <br>
214
215 ### bundlewrap.utils.Fault
216
217 A Fault acts as a lazy stand-in object for the result of a given callback function. These objects are returned from the "vault" attached to `Repository` objects:
218
219 >>> repo.vault.password_for("demo")
220 <bundlewrap.utils.Fault object at 0x10782b208>
221
222 The callback function is only executed when the `value` property of a Fault is accessed or when the Fault is converted to a string. In the example above, this means that the password is only generated when it is actually required (e.g. when used in a template). This is particularly useful in metadata in connection with [secrets](secrets.md): users can generate metadata containing Faults even if they lack the keys required for the decryption operation a Fault represents. The key will only be needed for files etc. that actually use it. If a Fault cannot be resolved (e.g. for lack of the required key), BundleWrap can skip the item using that Fault while still applying the other items on the same node.
223
224 Faults also support some rudimentary string operations such as appending a string or another Fault, as well as some string methods:
225
226 >>> f = repo.vault.password_for("1") + ":" + repo.vault.password_for("2")
227 >>> f
228 <bundlewrap.utils.Fault object at 0x10782b208>
229 >>> f.value
230 'VOd5PC:JUgYUb'
231 >>> f += " "
232 >>> f.value
233 'VOd5PC:JUgYUb '
234 >>> f.strip().value
235 'VOd5PC:JUgYUb'
236 >>> repo.vault.password_for("1").format_into("Password: {}").value
237 'Password: VOd5PC'
238
239 These string methods are supported on Faults: `format`, `lower`, `lstrip`, `replace`, `rstrip`, `strip`, `upper`, `zfill`
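The lazy-evaluation behavior described above can be sketched with a minimal stand-in class. This is an illustration only: `LazyValue` and `make_password` are hypothetical names, not BundleWrap's actual implementation.

```python
class LazyValue:
    """Minimal sketch of a Fault-like lazy wrapper (hypothetical, not the real class)."""
    def __init__(self, callback):
        self._callback = callback

    @property
    def value(self):
        # the callback only runs when .value is accessed
        return self._callback()

    def __str__(self):
        return str(self.value)

    def __add__(self, other):
        # appending a string (or another LazyValue) yields a new, still-lazy wrapper
        return LazyValue(
            lambda: self.value + (other.value if isinstance(other, LazyValue) else other)
        )


calls = []

def make_password():
    calls.append(1)  # record that the callback actually ran
    return "s3cret"

f = LazyValue(make_password) + "!"
assert calls == []           # nothing has been computed yet
assert f.value == "s3cret!"  # accessing .value triggers the callback
```

The key point is that building and combining the wrappers is free; the (potentially expensive or key-requiring) callback only fires on access.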
0 # Command Line Interface
1
2 The `bw` utility is BundleWrap's command line interface.
3
4 <div class="alert">This page is not meant as a complete reference. It provides a starting point to explore the various subcommands. If you're looking for details, <code>--help</code> is your friend.</div>
5
6 ## bw apply
7
8 <pre><code class="nohighlight">bw apply -i mynode</code></pre>
9
10 The most important and most used part of BundleWrap, `bw apply` will apply your configuration to a set of [nodes](../repo/nodes.py.md). By default, it operates in a non-interactive mode. When you're trying something new or are otherwise unsure of some changes, use the `-i` switch to have BundleWrap interactively ask before each change is made.
11
12 <br>
13
14 ## bw run
15
16 <pre><code class="nohighlight">$ bw run mygroup "uname -a"</code></pre>
17
18 Unsurprisingly, the `run` subcommand is used to run commands on nodes.
19
20 As with most commands that accept node names, you can also give a `group` name or any combination of node and group names, separated by commas (without spaces, e.g. `node1,group2,node3`). A third option is to use a bundle selector like `bundle:my_bundle`. It will select all nodes with the named `bundle`. You can freely mix and match node names, group names, and bundle selectors.
21
22 Negation is also possible for bundles and groups. `!bundle:foo` will add all nodes without the foo bundle, while `!group:foo` will add all nodes that aren't in the foo group.
23
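As an illustration of the selector semantics above, a resolver could work along these lines. `resolve_targets` and the sample data are hypothetical; this is a simplified sketch, not BundleWrap's actual code.

```python
def resolve_targets(selector, nodes):
    """
    Resolve a comma-separated target selector against a toy inventory.
    `nodes` maps node name -> {"groups": set, "bundles": set}.
    """
    selected = set()
    for token in selector.split(","):
        negate = token.startswith("!")
        name = token.lstrip("!")
        if name.startswith("bundle:"):
            matches = {n for n, a in nodes.items() if name[7:] in a["bundles"]}
        elif name.startswith("group:"):
            matches = {n for n, a in nodes.items() if name[6:] in a["groups"]}
        elif name in nodes:
            matches = {name}
        else:  # fall back to treating the token as a group name
            matches = {n for n, a in nodes.items() if name in a["groups"]}
        if negate:
            matches = set(nodes) - matches
        selected |= matches
    return selected


nodes = {
    "node1": {"groups": {"web"}, "bundles": {"nginx"}},
    "node2": {"groups": {"db"}, "bundles": {"postgres"}},
    "node3": {"groups": {"web"}, "bundles": {"nginx", "postgres"}},
}
print(sorted(resolve_targets("node2,web", nodes)))         # ['node1', 'node2', 'node3']
print(sorted(resolve_targets("!bundle:postgres", nodes)))  # ['node1']
```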
24 <br>
25
26 ## bw nodes and bw groups
27
28 <pre><code class="nohighlight">$ bw nodes --hostnames | xargs -n 1 ping -c 1</code></pre>
29
30 With these commands you can quickly get a list of all nodes and groups in your [repository](../repo/layout.md). The example above uses `--hostnames` to get a list of all DNS names for your nodes and send a ping to each one.
31
32 <br>
33
34 ## bw debug
35
36 $ bw debug
37 bundlewrap X.Y.Z interactive repository inspector
38 > You can access the current repository as 'repo'.
39 >>> len(repo.nodes)
40 121
41
42 This command will drop you into a Python shell with direct access to BundleWrap's [API](api.md). Once you're familiar with it, it can be a very powerful tool.
43
44 <br>
45
46 ## bw plot
47
48 <div class="alert alert-info">You'll need <a href="http://www.graphviz.org">Graphviz</a> installed on your machine for this to be useful.</div>
49
50 <pre><code class="nohighlight">$ bw plot node mynode | dot -Tsvg -omynode.svg</code></pre>
51
52 You won't be using this every day, but it's pretty cool. The above command will create an SVG file (you can open these in your browser) that shows the item dependency graph for the given node. You will see bundles as dashed rectangles, static dependencies (defined in BundleWrap itself) in green, auto-generated dependencies (calculated dynamically each time you run `bw apply`) in blue and dependencies you defined yourself in red.
53
54 It offers an interesting view into the internal complexities BundleWrap has to deal with when figuring out the order in which your items can be applied to your node.
55
56 <br>
57
58 ## bw test
59
60 <pre><code class="nohighlight">$ bw test
61 ✓ node1 samba pkg_apt:samba
62 ✘ node1 samba file:/etc/samba/smb.conf
63
64 [...]
65
66 +----- traceback from worker ------
67 |
68 | Traceback (most recent call last):
69 | File "bundlewrap/concurrency.py", line 78, in _worker_process
70 | return_value = target(*msg['args'], **msg['kwargs'])
71 | File "&lt;string&gt;", line 378, in test
72 | BundleError: file:/etc/samba/smb.conf from bundle 'samba' refers to missing file '/path/to/bundlewrap/repo/bundles/samba/files/smb.conf'
73 |
74 +----------------------------------
75 </code></pre>
76 This command is meant to be run automatically like a test suite after every commit. It will try to catch any errors in your bundles and file templates by initializing every item for every node (but without touching the network).
0 # Custom item types
1
2
3 ## Step 0: Understand statedicts
4
5 To represent supposed vs. actual state, BundleWrap uses statedicts. These are
6 normal Python dictionaries with some restrictions:
7
8 * keys must be Unicode text
9 * every value must be of one of these simple data types:
10 * bool
11 * float
12 * int
13 * Unicode text
14 * None
15 * ...or a list/tuple containing only instances of one of the types above
16
17 Additional information can be stored in statedicts by using keys that start with an underscore. You may only use this for caching purposes (e.g. storing rendered file template content while the "real" sdict information only contains a hash of this content). BundleWrap will ignore these keys and hide them from the user. The type restrictions noted above do not apply.
18
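The canonicalization idea behind comparing and hashing statedicts can be sketched with the standard library. This mirrors what `bundlewrap.utils.statedict.hash_statedict` does (sorting keys makes the hash independent of insertion order), but is only an illustrative sketch.

```python
from hashlib import sha1
from json import dumps

def hash_statedict(sdict):
    """Canonical SHA-1 of a statedict: keys are sorted so two equal dicts hash equally."""
    return sha1(dumps(sdict, sort_keys=True).encode('utf-8')).hexdigest()

a = {"owner": "root", "mode": "0644"}
b = {"mode": "0644", "owner": "root"}
print(hash_statedict(a) == hash_statedict(b))  # True: insertion order is irrelevant
```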
19
20 ## Step 1: Create an item module
21
22 Create a new file called `/your/bundlewrap/repo/items/foo.py`. You can use this as a template:
23
24 from bundlewrap.items import Item
25
26
27 class Foo(Item):
28 """
29 A foo.
30 """
31 BLOCK_CONCURRENT = []
32 BUNDLE_ATTRIBUTE_NAME = "foo"
33 ITEM_ATTRIBUTES = {
34 'attribute': "default value",
35 }
36 ITEM_TYPE_NAME = "foo"
37 REQUIRED_ATTRIBUTES = ['attribute']
38
39 def __repr__(self):
40 return "<Foo attribute:{}>".format(self.attributes['attribute'])
41
42 def cdict(self):
43 """
44 Return a statedict that describes the target state of this item
45 as configured in the repo. An empty dict means that the item
46 should not exist.
47
48 Implementing this method is optional. The default implementation
49 uses the attributes as defined in the bundle.
50 """
51 raise NotImplementedError
52
53 def sdict(self):
54 """
55 Return a statedict that describes the actual state of this item
56 on the node. An empty dict means that the item does not exist
57 on the node.
58
59 For the item to validate as correct, the values for all keys in
60 self.cdict() have to match this statedict.
61 """
62 raise NotImplementedError
63
64 def display_dicts(self, cdict, sdict, keys):
65 """
66 Given cdict and sdict as implemented above, modify them to better
67 suit interactive presentation. The keys parameter is the return
68 value of display_keys (see below) and provided for reference only.
69
70 Implementing this method is optional.
71 """
72 return (cdict, sdict)
73
74 def display_keys(self, cdict, sdict, keys):
75 """
76 Given a list of keys whose values differ between cdict and sdict,
77 modify them to better suit presentation to the user.
78
79 Implementing this method is optional.
80 """
81 return keys
82
83 def fix(self, status):
84 """
85 Do whatever is necessary to correct this item. The given ItemStatus
86 object has the following useful information:
87
88 status.keys list of cdict keys that need fixing
89 status.cdict cached copy of self.cdict()
90 status.sdict cached copy of self.sdict()
91 """
92 raise NotImplementedError
93
94 <br>
95
96 ## Step 2: Define attributes
97
98 `BUNDLE_ATTRIBUTE_NAME` is the name of the variable defined in a bundle module that holds the items of this type. If your bundle looks like this:
99
100 foo = { [...] }
101
102 ...then you should put `BUNDLE_ATTRIBUTE_NAME = "foo"` here.
103
104
105 `ITEM_ATTRIBUTES` is a dictionary of the attributes users will be able to configure for your item. For files, that would be stuff like owner, group, and permissions. Every attribute (even a mandatory one) needs a default value; `None` is perfectly acceptable:
106
107 ITEM_ATTRIBUTES = {'attr1': "default1"}
108
109
110 `ITEM_TYPE_NAME` sets the first part of an item's ID. For file items, this is "file". Therefore, file IDs look like this: `file:/path`. The second part is the name a user assigns to your item in a bundle. Example:
111
112 ITEM_TYPE_NAME = "foo"
113
114
115 `BLOCK_CONCURRENT` is a list of item types (e.g. `pkg_apt`) that cannot be applied in parallel with this type of item. It may include this very item type itself. For most items, parallel application is not an issue (e.g. creating multiple files at the same time), but some types of items have to be applied sequentially (e.g. package managers usually employ locks to ensure only one package is installed at a time):
116
117 BLOCK_CONCURRENT = ["pkg_apt"]
118
119
120 `REQUIRED_ATTRIBUTES` is a list of attribute names that must be set on each item of this type. If BundleWrap encounters an item without all these attributes during bundle inspection, an exception will be raised. Example:
121
122 REQUIRED_ATTRIBUTES = ['attr1', 'attr2']
123
124 <br>
125
126 ## Step 3: Implement methods
127
128
129 You should probably start with `sdict()`. Use `self.node.run("command")` to run shell commands on the current node and check the `stdout` property of the returned object.
130
131 The only other method you have to implement is `fix()`. It doesn't have to return anything; it simply uses `self.node.run()` to correct the item. To do this efficiently, it can use `status.keys`, which lists the keys that differ between the should-be cdict and the actual sdict. Cached copies of both dicts are also available as `status.cdict` and `status.sdict` in case you need their values.
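The comparison BundleWrap performs between the two dicts can be sketched like this (a simplified illustration, not the actual logic from `bundlewrap.items`):

```python
def differing_keys(cdict, sdict):
    """Return the cdict keys whose values don't match the actual state."""
    return sorted(key for key in cdict if sdict.get(key) != cdict[key])

# An item needs fixing if any key differs:
cdict = {'attribute': "desired value"}  # target state from the bundle
sdict = {'attribute': "actual value"}   # actual state on the node
print(differing_keys(cdict, sdict))  # ['attribute']
```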
132
133 If you're having trouble, try looking at the [source code for the items that come with BundleWrap](https://github.com/bundlewrap/bundlewrap/tree/master/bundlewrap/items). The `pkg_*` items are pretty simple and easy to understand while `files` is the most complex to date. Or just drop by on [IRC](irc://chat.freenode.net/bundlewrap), we're glad to help.
0 # Writing your own plugins
1
2 [Plugins](../repo/plugins.md) can provide almost any file in a BundleWrap repository: bundles, custom items, hooks, libs, etc.
3
4 Notable exceptions are `nodes.py` and `groups.py`. If your plugin wants to extend those, use a [lib](../repo/libs.md) instead and ask users to add the result of a function call in your lib to their nodes or groups dicts.
5
6 <div class="alert alert-warning">If your plugin depends on other libraries, make sure that it catches ImportErrors in a way that makes it obvious for the user what's missing. Keep in mind that people will often just <code>git pull</code> their repo and not install your plugin themselves.</div>
7
8 <br>
9
10 ## Starting a new plugin
11
12 ### Step 1: Clone the plugins repo
13
14 Create a clone of the [official plugins repo](https://github.com/bundlewrap/plugins) on GitHub.
15
16 ### Step 2: Create a branch
17
18 You should work on a branch specific to your plugin.
19
20 ### Step 3: Copy your plugin files
21
22 Now take the files that make up your plugin and move them into a subfolder of the plugins repo. The subfolder must be named like your plugin.
23
24 ### Step 4: Create required files
25
26 In your plugin subfolder, create a file called `manifest.json` from this template:
27
28 {
29 "desc": "Concise description (keep it somewhere around 80 characters)",
30 "help": "Optional verbose help text to be displayed after installing. May\ninclude\nnewlines.",
31 "provides": [
32 "bundles/example/items.py",
33 "hooks/example.py"
34 ],
35 "version": 1
36 }
37
38 The `provides` section must contain a list of all files provided by your plugin.
39
40 You also have to create an `AUTHORS` file containing your name and email address.
41
42 Last but not least we require a `LICENSE` file with an OSI-approved Free Software license.
43
44 ### Step 5: Update the plugin index
45
46 Run the `update_index.py` script at the root of the plugins repo.
47
48 ### Step 6: Run tests
49
50 Run the `test.py` script at the root of the plugins repo. It will tell you if there is anything wrong with your plugin.
51
52 ### Step 7: Commit
53
54 Commit all changes to your branch.
55
56 ### Step 8: Create pull request
57
58 Create a pull request on GitHub to request inclusion of your new plugin in the official repo. Only then will your plugin become available to be installed by `bw repo plugin install yourplugin`.
59
60 <br>
61
62 ## Updating an existing plugin
63
64 To release a new version of your plugin:
65
66 * Increase the version number in `manifest.json`
67 * Update the list of provided files in `manifest.json`
68 * If you're updating someone else's plugin, you should get their consent and add your name to `AUTHORS`
69
70 Then just follow the instructions above from step 5 onward.
0 # Environment Variables
1
2 ## `BW_ADD_HOST_KEYS`
3
4 As BundleWrap uses OpenSSH to connect to hosts, host key checking is involved. By default, strict host key checking is activated. This might not be suitable for your setup. You can set this variable to `1` to cause BundleWrap to set the OpenSSH option `StrictHostKeyChecking=no`.
5
6 You can also use `bw -a ...` to achieve the same effect.
7
8
9 ## `BW_COLORS`
10
11 Colors are enabled by default. Setting this variable to `0` tells BundleWrap to never use any ANSI color escape sequences.
12
13
14 ## `BW_DEBUG_LOG_DIR`
15
16 Set this to an existing directory path to have BundleWrap write debug logs there (even when you're running `bw` without `--debug`).
17
18 <div class="alert alert-info">Debug logs are verbose and BundleWrap does not rotate them for you. Putting them on a tmpfs or ramdisk will save your SSD and get rid of old logs every time you reboot your machine.</div>
19
20
21 ## `BW_HARDLOCK_EXPIRY`
22
23 [Hard locks](locks.md) are automatically ignored after some time. By default, it's `"8h"`. You can use this variable to override that default.
24
25
26 ## `BW_IDENTITY`
27
28 When BundleWrap [locks](locks.md) a node, it stores a short description about "you". By default, this is the string `$USER@$HOSTNAME`, e.g. `john@mymachine`. You can use `BW_IDENTITY` to specify a custom string. (No variables will be evaluated in user-supplied strings.)
29
30
31 ## `BW_ITEM_WORKERS` and `BW_NODE_WORKERS`
32
33 BundleWrap attempts to parallelize work. These two options specify the number of items and nodes, respectively, that will be handled concurrently. To be more precise, when setting `BW_NODE_WORKERS=8` and `BW_ITEM_WORKERS=2`, BundleWrap will work on eight nodes in parallel, each handling two items at a time.
34
35 You can also use the command line options `-p` and `-P`, e.g. `bw apply -p ... -P ... ...`, to achieve the same effect. Command line arguments override environment variables.
36
37 There is no single default for these values. For example, when running `bw apply`, four nodes are being handled by default. However, when running `bw test`, only one node will be tested by default. `BW_NODE_WORKERS` and `BW_ITEM_WORKERS` apply to *all* these operations.
38
39 Note that you should not set these variables to very high values. First, it can cause high memory consumption on your machine. Second, not all SSH servers can handle massive parallelism. Please refer to your OpenSSH documentation on how to tune your servers for these situations.
40
41
42 ## `BW_SOFTLOCK_EXPIRY`
43
44 [Soft locks](locks.md) are automatically removed from nodes after some time. By default, it's `"8h"`. You can use this variable to override that default.
45
46
47 ## `BW_SSH_ARGS`
48
49 Extra arguments to include in every call to `ssh` that BundleWrap makes. For example, set this to `-F ~/.ssh/otherconf` to use a different SSH config with BundleWrap.
50
51
52 ## `BW_VAULT_DUMMY_MODE`
53
54 Setting this to `1` will make `repo.vault` return dummy values for every [secret](secrets.md). This is useful for running `bw test` on a CI server that you don't want to trust with your `.secrets.cfg`.
0 # Installation
1
2 <div class="alert alert-info">You may need to install <strong>pip</strong> first. This can be accomplished through your distribution's package manager, e.g.:
3
4 <pre><code class="nohighlight">aptitude install python-pip</code></pre>
5
6 or the <a href="http://www.pip-installer.org/en/latest/installing.html">manual instructions</a>.</div>
7
8 ## Using pip
9
10 It's as simple as:
11
12 <pre><code class="nohighlight">pip install bundlewrap</code></pre>
13
14 Note that you need at least Python 2.7 to run BundleWrap. Python 3 is supported as long as it's >= 3.3.
15
16 <br>
17
18 ## From git
19
20 <div class="alert alert-warning">This type of install will give you the very latest (and thus possibly broken) bleeding edge version of BundleWrap.
21 You should only use this if you know what you're doing.</div>
22
23 <div class="alert alert-info">The instructions below are for installing on Ubuntu Server 12.10 (Quantal), but should also work for other versions of Ubuntu/Debian. If you're on some other distro, you will obviously have to adjust the package install commands.</div>
24
25 <div class="alert alert-info">The instructions assume you have root privileges.</div>
26
27 Install basic requirements:
28
29 <pre><code class="nohighlight">aptitude install build-essential git python-dev python-pip</code></pre>
30
31 Clone the GitHub repository:
32
33 <pre><code class="nohighlight">cd /opt
34 git clone https://github.com/bundlewrap/bundlewrap.git</code></pre>
35
36 Use `pip install -e` to install in "development mode":
37
38 <pre><code class="nohighlight">pip install -e /opt/bundlewrap</code></pre>
39
40 You can now try running the `bw` command line utility:
41
42 <pre><code class="nohighlight">bw --help</code></pre>
43
44 That's it.
45
46 To update your install, just pull the git repository and have `setup.py` check for new dependencies:
47
48 <pre><code class="nohighlight">cd /opt/bundlewrap
49 git pull
50 python setup.py develop</code></pre>
51
52 <br>
53
54 # Requirements for managed systems
55
56 While the following list might appear long, even very minimal systems should provide everything that's needed.
57
58 * `apt-get` (only used with [pkg_apt](../items/pkg_apt.md) items)
59 * `cat`
60 * `chmod`
61 * `chown`
62 * `dpkg` (only used with [pkg_apt](../items/pkg_apt.md) items)
63 * `echo`
64 * `file`
65 * `find` (only used with [directory purging](../items/directory.md#purge))
66 * `grep`
67 * `groupadd`
68 * `groupmod`
69 * `id`
70 * `initctl` (only used with [svc_upstart](../items/svc_upstart.md) items)
71 * `mkdir`
72 * `mv`
73 * `pacman` (only used with [pkg_pacman](../items/pkg_pacman.md) items)
74 * `rm`
75 * sftp-enabled SSH server (your home directory must be writable)
76 * `sha1sum`
77 * `stat`
78 * `sudo`
79 * `systemctl` (only used with [svc_systemd](../items/svc_systemd.md) items)
80 * `useradd`
81 * `usermod`
82
83 Additionally, you need to pre-configure your SSH client so that it can connect to your nodes without having to type a password (including `sudo` on the node, which also must *not* have the `requiretty` option set).
0 # Writing file templates
1
2 BundleWrap can use [Mako](http://www.makotemplates.org) or [Jinja2](http://jinja.pocoo.org) for file templating. This enables you to dynamically construct your config files. Templates reside in the `files` subdirectory of a bundle and are bound to a file item using the `source` [attribute](../items/file.md#source). This page explains how to get started with Mako.
3
4 The most basic example would be:
5
6 <pre><code class="nohighlight">Hello, this is ${node.name}!</code></pre>
7
8 After template rendering, it would look like this:
9
10 <pre><code class="nohighlight">Hello, this is myexamplenodename!</code></pre>
11
12 As you can see, `${...}` can be used to insert the value of a context variable into the rendered file. By default, you have access to two variables in every template: `node` and `repo`. They are `bundlewrap.node.Node` and `bundlewrap.repo.Repository` objects, respectively. You can learn more about the attributes and methods of these objects in the [API docs](api.md), but here are a few examples:
13
14 <br>
15
16 ## Examples
17
18 inserts the DNS hostname of the current node
19
20 ${node.hostname}
21
22 <br>
23
24 a list of all nodes in your repo
25
26 % for node in repo.nodes:
27 ${node.name}
28 % endfor
29
30 <br>
31
32 make exceptions for certain nodes
33
34 % if node.name == "node1":
35 option = foo
36 % elif node.name in ("node2", "node3"):
37 option = bar
38 % else:
39 option = baz
40 % endif
41
42 <br>
43
44 check for group membership
45
46 % if node.in_group("sparkle"):
47 enable_sparkles = 1
48 % endif
49
50 <br>
51
52 check for membership in any of several groups
53
54 % if node.in_any_group(("sparkle", "shiny")):
55 enable_fancy = 1
56 % endif
57
58 <br>
59
60 check for bundle
61
62 % if node.has_bundle("sparkle"):
63 enable_sparkles = 1
64 % endif
65
66 <br>
67
68 check for any of several bundles
69
70 % if node.has_any_bundle(("sparkle", "shiny")):
71 enable_fancy = 1
72 % endif
73
74 <br>
75
76 list all nodes in a group
77
78 % for gnode in repo.get_group("mygroup").nodes:
79 ${gnode.name}
80 % endfor
81
82 <br>
83
84 ## Working with node metadata
85
86 Quite often you will attach custom metadata to your nodes in `nodes.py`, e.g.:
87
88 nodes = {
89 "node1": {
90 "metadata": {
91 "interfaces": {
92 "eth0": "10.1.1.47",
93 "eth1": "10.1.2.47",
94 },
95 },
96 },
97 }
98
99 You can easily access this information in templates:
100
101 % for interface, ip in sorted(node.metadata["interfaces"].items()):
102 interface ${interface}
103 ip = ${ip}
104 % endfor
105
106 This template will render to:
107
108 interface eth0
109 ip = 10.1.1.47
110 interface eth1
111 ip = 10.1.2.47
112
0 # Locking
1
2 BundleWrap's decentralized nature makes it necessary to coordinate actions between users of a shared repository. Locking is an important part of collaborating using BundleWrap.
3
4 ## Hard locks
5
6 Since very early in the history of BundleWrap, what we call "hard locks" were used to prevent multiple users from using `bw apply` on the same node at the same time. When BundleWrap finds a hard lock on a node in interactive mode, it will display information about who acquired the lock (and when) and will ask whether to ignore the lock or abort the process. In non-interactive mode, the operation is always cancelled for the node in question unless `--force` is used.
7
8 ## Soft locks
9
10 Many teams these days are using a workflow based on pull requests. A common problem here is that changes from a feature branch might already have been applied to a set of nodes, while the master branch is still lacking these changes. While the pull request is open and waiting for review, other users might rightly use the master branch to apply to all nodes, reverting changes made by the feature branch. This can be a major nuisance.
11
12 As of version 2.6.0, BundleWrap provides "soft locks" to prevent this. The author of a feature branch can now lock the node so only he or she can use `bw apply` on it:
13
14 <pre><code class="nohighlight">$ bw lock add node1
15 ✓ node1 locked with ID B9JS (expires in 8h)</code></pre>
16
17 This will prevent all other users from changing any items on the node for the next 8 hours. BundleWrap will tell users apart by their [BW_IDENTITY](env.md#BW_IDENTITY). Now say someone else is reviewing the pull request and wants to use `bw apply`, while still keeping others out and the original author in. This can be done by simply locking the node *again* as the reviewer. Nodes can have many soft locks. Soft locks act as an exemption from a general ban on changing items that goes into effect as soon as one or more soft locks are present on the node. Of course, if no soft locks are present, anyone can change any item.
18
19 You can list all soft locks on a node with:
20
21 <pre><code class="nohighlight">$ bw lock show node1
22 i node1 ID Created Expires User Items Comment
23 › node1 Y1KD 2016-05-25 21:30:25 2016-05-26 05:30:25 alice * locks are awesome
24 › node1 B9JS 2016-05-24 13:10:11 2016-05-27 08:10:11 bob * me too</code></pre>
25
26 Note that each lock is identified by a case-insensitive 4-character ID that can be used to remove the lock:
27
28 <pre><code class="nohighlight">$ bw lock remove node1 y1kd
29 ✓ node1 lock Y1KD removed</code></pre>
30
31 Expired locks are automatically and silently purged whenever BundleWrap has the opportunity. Be sure to check out `bw lock add --help` to learn how to customize the expiration time, add a short comment explaining the reason for the lock, or lock only certain items. Using `bw apply` on a soft-locked node is not an error; affected items will simply be skipped.
0 # Migrating from BundleWrap 1.x to 2.x
1
2 As per [semver](http://semver.org), BundleWrap 2.0 breaks compatibility with repositories created for BundleWrap 1.x. This document provides a guide on how to upgrade your repositories to BundleWrap 2.x. Please read the entire document before proceeding. To aid with the transition, BundleWrap 1.6.0 has been released along with 2.0.0. It contains no new features over 1.5.x, but has builtin helpers to aid your migration to 2.0.
3
4 <br>
5
6 ## items.py
7
8 In every bundle, rename `bundle.py` to `items.py`. BundleWrap 1.6.0 can do this for you by running `bw migrate`.
9
10 <br>
11
12 ## Default file content type
13
14 The default `content_type` for [file items](../items/file.md) has changed from "mako" to "text". This means that you need to check all file items that do not define an explicit content type of "mako". Some of them might be fine because you didn't really need templating, while others may need to have their `content_type` set to "mako" explicitly.
15
16 BundleWrap 1.6.0 will print warnings for every file item affected when running `bw test`.
17
18 <br>
19
20 ## Metadata merging
21
22 The merging behavior for node and group metadata has changed. Instead of a simple `dict.update()`, metadata dicts are now merged recursively. See [the docs](../repo/groups.py.md#metadata) for details.
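The difference can be illustrated with a simplified recursive merge (this is not BundleWrap's actual implementation; a plain `dict.update()` would have replaced the entire nested dict):

```python
def merge_recursively(base, update):
    """Merge two dicts, descending into nested dicts instead of replacing them."""
    merged = dict(base)
    for key, value in update.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_recursively(merged[key], value)
        else:
            merged[key] = value
    return merged

group_metadata = {'ports': {'http': 80}}
node_metadata = {'ports': {'https': 443}}
print(merge_recursively(group_metadata, node_metadata))
# {'ports': {'http': 80, 'https': 443}}
```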
23
24 <br>
25
26 ## Metadata processors and item generators
27
28 These two advanced features have been replaced by a single new mechanism: [metadata.py](../repo/bundles.md#metadatapy) You will need to rethink and rewrite them.
29
30 BundleWrap 1.6.0 will print warnings for every group that uses metadata processors and any item generators when running `bw test`.
31
32 <br>
33
34 ## Custom item types
35
36 The API for defining your own items has changed. Generally, you should be able to upgrade your items with relatively little effort. Refer to [the docs](dev_item.md) for details.
37
38 <br>
39
40 ## Deterministic templates
41
42 While not a strict requirement, it is highly recommended to ensure your entire configuration can be created deterministically (i.e. remains exactly the same no matter how often you generate it). Otherwise, you won't be able to take advantage of the new functionality provided by `bw hash`.
43
44 A common pitfall here is iteration over dictionaries in templates:
45
46 % for key, value in my_dict.items():
47 ${value}
48 % endfor
49
50 Standard dictionaries in Python have no defined order. This may result in lines occasionally changing their position. To solve this, you can simply use `sorted()`:
51
52 % for key, value in sorted(my_dict.items()):
53 ${value}
54 % endfor
55
56 <br>
57
58 ## Hook arguments
59
60 Some [hooks](../repo/hooks.md) had their arguments adjusted slightly.
0 # OS compatibility
1
2 BundleWrap by necessity takes a pragmatic approach to supporting different operating systems and distributions. Our main target is Linux, but support for other UNIXes is also evolving. We cannot guarantee to be compatible with every distribution and BSD flavor under the sun, but we try to cover the common ones.
3
4 <br>
5
6 ## node.os and node.os_version
7
8 You should set these attributes for every node. Giving BundleWrap this information allows us to adapt some built-in behavior.
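A hypothetical `nodes.py` entry setting both attributes might look like this (the node name and values shown are assumptions; use what matches your systems):

```python
nodes = {
    'node-1': {
        'hostname': "node-1.example.com",
        'os': "debian",      # e.g. "debian", "ubuntu", "centos", ...
        'os_version': (8,),  # a tuple of integers, e.g. (8,) for Debian 8
    },
}
```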
9
10 <br>
11
12 ## Other node attributes
13
14 In some cases (e.g. when not using sudo) you will need to manually adjust some things. Check the docs [on node-level OS overrides](../repo/nodes.py.md#os-compatibility-overrides).
0 Quickstart
1 ==========
2
3 This is the 10 minute intro into BundleWrap. Fasten your seatbelt.
4
5
6 Installation
7 ------------
8
9 First, open a terminal and install BundleWrap:
10
11 <pre><code class="nohighlight">pip install bundlewrap</code></pre>
12
13
14 Create a repository
15 -------------------
16
17 Now you'll need to create your [repository](../repo/layout.md):
18
19 <pre><code class="nohighlight">mkdir my_bundlewrap_repo
20 cd my_bundlewrap_repo
21 bw repo create
22 </code></pre>
23
24 You will note that some files have been created. Let's check them out:
25
26 <pre><code class="nohighlight">cat nodes.py
27 cat groups.py
28 </code></pre>
29
30 The contents should be fairly self-explanatory, but you can always check the [docs](../repo/layout.md) on these files if you want to go deeper.
31
32 <div class="alert">It is highly recommended to use git or a similar tool to keep track of your repository. You may want to start doing that right away.</div>
33
34 At this point you will want to edit `nodes.py` and maybe change "localhost" to the hostname of a system you have passwordless (including sudo) SSH access to.
35
36 <div class="alert">BundleWrap will honor your <code>~/.ssh/config</code>, so if <code>ssh mynode.example.com sudo id</code> works without any password prompts in your terminal, you're good to go.</div>
37
38
39 Run a command
40 -------------
41
42 The first thing you can do is run a command on your army of one node:
43
44 <pre><code class="nohighlight">bw -a run node-1 "uptime"</code></pre>
45
46 <div class="alert">The <code>-a</code> switch tells bw to automatically trust unknown SSH host keys (when you're connecting to a new node). By default, only known host keys will be accepted.</div>
47
48 You should see something like this:
49
50 <pre><code class="nohighlight">› node-1 20:16:26 up 34 days, 4:10, 0 users, load average: 0.00, 0.01, 0.05
51 ✓ node-1 completed successfully after 3.499531s</code></pre>
52
53 Instead of a node name ("node-1" in this case) you can also use a group name (such as "all") from your `groups.py`.
54
55
56 Create a bundle
57 ---------------
58
59 BundleWrap stores node configuration in [bundles](../repo/bundles.md). A bundle is a collection of *items* such as files, system packages or users. To create your first bundle, type:
60
61 <pre><code class="nohighlight">bw repo bundle create mybundle</code></pre>
62
63 Now that you have created your bundle, it's important to tell BundleWrap which nodes will have this bundle. You can assign bundles to nodes using either <code>groups.py</code> or <code>nodes.py</code>, here we'll use the latter:
64
65 nodes = {
66 'node-1': {
67 'bundles': (
68 "mybundle",
69 ),
70 'hostname': "mynode-1.local",
71 },
72 }
73
74
75 Create a file template
76 ----------------------
77
78 To manage a file, you need two things:
79
80 1. a file item in your bundle
81 2. a template for the file contents
82
83 Add this to your `bundles/mybundle/items.py`:
84
85 files = {
86 '/etc/motd': {
87 'source': "etc/motd",
88 },
89 }
90
91 Then write the file template:
92
93 <pre><code class="nohighlight">mkdir bundles/mybundle/files/etc
94 vim bundles/mybundle/files/etc/motd</code></pre>
95
96 You can use this for example content:
97
98 <pre><code class="nohighlight">Welcome to ${node.name}!</code></pre>
99
100 Note that the `source` attribute in `items.py` contains a path relative to the `files` directory of your bundle.
101
102
103 Apply configuration
104 -------------------
105
106 Now all that's left is to run `bw apply`:
107
108 <pre><code class="nohighlight">bw apply -i node-1</code></pre>
109
110 BundleWrap will ask to replace your previous MOTD:
111
112 <pre><code class="nohighlight">i node-1 run started at 2016-02-13 21:25:45
113 ? node-1
114 ? node-1 ╭─ file:/etc/motd
115 ? node-1 │
116 ? node-1 │ content
117 ? node-1 │ --- &lt;node&gt;
118 ? node-1 │ +++ &lt;bundlewrap&gt;
119 ? node-1 │ @@ -1 +1 @@
120 ? node-1 │ -your old motd
121 ? node-1 │ +Welcome to node-1!
122 ? node-1 │
123 ? node-1 ╰─ Fix file:/etc/motd? [Y/n]
124 </code></pre>
125
126 That completes the quickstart tutorial!
127
128
129 Further reading
130 ---------------
131
132 Here are some suggestions on what to do next:
133
134 * set up [SSH multiplexing](https://en.wikibooks.org/wiki/OpenSSH/Cookbook/Multiplexing) for significantly better performance
135 * take a moment to think about what groups and bundles you will create
136 * read up on how a [BundleWrap repository](../repo/layout.md) is laid out
137 * ...especially what [types of items](../repo/bundles.md#item-types) you can add to your bundles
138 * familiarize yourself with [the Mako template language](http://www.makotemplates.org/)
139 * explore the [command line interface](cli.md)
140 * follow [@bundlewrap](https://twitter.com/bundlewrap) on Twitter
141
142 Have fun! If you have any questions, feel free to drop by [on IRC](irc://chat.freenode.net/bundlewrap).
0 # Handling secrets
1
2 We strongly recommend **not** putting any sensitive information such as passwords or private keys into your repository. This page describes the helpers available in BundleWrap to manage those secrets without checking them into version control.
3
4 <div class="alert alert-info">Most of the functions described here return lazy <a href="../api/#bundlewraputilsfault">Fault objects</a>.</div>
5
6 <br>
7
8 ## .secrets.cfg
9
10 When you initially ran `bw repo create`, a file called `.secrets.cfg` was put into the root level of your repo. It's an INI-style file that by default contains two random keys BundleWrap uses to protect your secrets.
11
12 <div class="alert alert-danger">You should never commit <code>.secrets.cfg</code>. Immediately add it to your <code>.gitignore</code> or equivalent.</div>
13
14 <br>
15
16 ## Derived passwords
17
18 In some cases, you can control (i.e. manage with BundleWrap) both ends of the authentication process. A common example is a config file for a web application that holds credentials for a database also managed by BundleWrap. In this case, you don't really care what the password is, you just want it to be the same on both sides.
19
20 To accomplish that, just write this in your template (Mako syntax shown here):
21
22 <pre><code class="nohighlight">database_user = "foo"
23 database_password = "${repo.vault.password_for("my database")}"
24 </code></pre>
25
26 In your bundle, you can then configure your database user like this:
27
28 postgres_roles = {
29 "foo": {
30 'password': repo.vault.password_for("my database"),
31 },
32 }
33
34 It doesn't really matter what string you call `password_for()` with, it just has to be the same on both ends. BundleWrap will then use that string, combine it with the default key called `generate` in your `.secrets.cfg` and derive a random password from that.
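Conceptually, this works like a keyed hash: the same identifier combined with the same key always yields the same password. The following sketch is *not* BundleWrap's actual algorithm; it only illustrates the principle:

```python
import base64
import hashlib
import hmac

def derive_password(identifier, key, length=32):
    # Deterministic: same identifier + same key -> same password.
    digest = hmac.new(key, identifier.encode('utf-8'), hashlib.sha256).digest()
    return base64.b64encode(digest).decode('ascii')[:length]

# Hypothetical key material standing in for the 'generate' key:
key = b"value of the 'generate' key from .secrets.cfg"
print(derive_password("my database", key))
```

Rotating the key changes every derived password at once, which is exactly what makes bulk rotation easy.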
35
36 This makes it easy to change all your passwords at once (e.g. when an employee leaves or when required for compliance reasons) by rotating keys.
37
38 <div class="alert alert-warning">However, it also means you have to guard your <code>.secrets.cfg</code> very closely. If it is compromised, so are <strong>all</strong> your passwords. Use your own judgement.</div>
39
40 <br>
41
42 ## Static passwords
43
44 When you need to store a specific password, you can encrypt it symmetrically:
45
46 <pre><code class="nohighlight">$ bw debug -c "print(repo.vault.encrypt('my password'))"
47 gAAAA[...]mrVMA==
48 </code></pre>
49
50 You can then use this encrypted password in a template like this:
51
52 <pre><code class="nohighlight">database_user = "foo"
53 database_password = "${repo.vault.decrypt("gAAAA[...]mrVMA==")}"
54 </code></pre>
55
56 <br>
57
58 ## Files
59
60 You can also encrypt entire files:
61
62 <pre><code class="nohighlight">$ bw debug -c "repo.vault.encrypt_file('/my/secret.file', 'encrypted.file')"</code></pre>
63
64 <div class="alert alert-info">Encrypted files are always read and written relative to the <code>data/</code> subdirectory of your repo.</div>
65
66 If the source file was encoded using UTF-8, you can then simply pass the decrypted content into a file item:
67
68 files = {
69 "/secret": {
70 'content': repo.vault.decrypt_file("encrypted.file"),
71 },
72 }
73
74 If the source file is binary however (or any encoding other than UTF-8), you must use base64:
75
76 files = {
77 "/secret": {
78 'content': repo.vault.decrypt_file_as_base64("encrypted.file"),
79 'content_type': 'base64',
80 },
81 }
82
83 <br>
84
85 ## Key management
86
87 ### Multiple keys
88
89 You can always add more keys to your `.secrets.cfg`, but you should keep the defaults around. Adding more keys makes it possible to give different keys to different teams. **By default, BundleWrap will skip items it can't find the required keys for**.
90
91 When using `.password_for()`, `.decrypt()` etc., you can provide a `key` argument to select the key:
92
93 repo.vault.password_for("some database", key="devops")
94
95 <br>
96
97 ### Rotating keys
98
99 <div class="alert alert-info">This is applicable mostly to <code>.password_for()</code>. The other methods use symmetric encryption and require manually updating the encrypted text after the key has changed.</div>
100
101 You can generate a new key by running `bw debug -c "print(repo.vault.random_key())"`. Place the result in your `.secrets.cfg`. Then you need to distribute the new key to your team and run `bw apply` for all your nodes.
0 <style>.bs-sidebar { display: none; }</style>
1
2 BundleWrap documentation
3 ========================
4
5 Check out the [quickstart tutorial](guide/quickstart.md) to get started.
6
7 If you run into a problem that is not answered in these docs, please
8 find us on [IRC](irc://chat.freenode.net/bundlewrap) or [Twitter](https://twitter.com/bundlewrap). We’re happy to help!
9
10 <br>
11
12 Is BundleWrap the right tool for you?
13 -------------------------------------
14
15 We think you will enjoy BundleWrap a lot if you:
16
17 - know some Python
18 - like to write your configuration from scratch and control every bit
19 of it
20 - have lots of unique nodes
21 - are trying to get a lot of existing systems under management
22 - are NOT trying to handle a massive amount of nodes (let’s say more
23 than 300)
24 - like to start small
25 - don’t want yet more stuff to run on your nodes (or mess with
26 appliances as little as possible)
27 - prefer a simple tool to a fancy one
28 - want as much as possible in git/hg/bzr
29 - have strongly segmented internal networks
30
31 You might be better served with a different config management system if
32 you:
33
34 - are already using a config management system and don’t have any
35 major issues
36 - hate Python and/or JSON
37 - like to use community-maintained configuration templates
38 - need unattended bootstrapping of nodes
39 - need to manage non-Linux systems
40 - don’t trust your coworkers
41
42 We have also prepared a [comparison with other popular config management systems](misc/alternatives.md).
0 # Actions
1
2 Actions will be run on every `bw apply`. They differ from regular items in that they cannot be "correct" in the first place. They can only succeed or fail.
3
4 actions = {
5 'check_if_its_still_linux': {
6 'command': "uname",
7 'expected_return_code': 0,
8 'expected_stdout': "Linux\n",
9 },
10 }
11
12 <br>
13
14 ## Attribute reference
15
16 See also: [The list of generic builtin item attributes](../repo/bundles.md#builtin-item-attributes)
17
18 <br>
19
20 ### command
21
22 The only required attribute. This is the command that will be run on the node with root privileges.
23
24 <br>
25
26 ### expected_return_code
27
28 Defaults to `0`. If the return code of your command is anything else, the action is considered failed. You can also set this to `None` and any return code will be accepted.
29
30 <br>
31
32 ### expected_stdout
33
34 If this is given, the stdout output of the command must match the given string or the action is considered failed.
35
36 <br>
37
38 ### expected_stderr
39
40 Same as `expected_stdout`, but with stderr.
41
42 <br>
43
44 ### interactive
45
46 If set to `True`, this action will be skipped in non-interactive mode. If set to `False`, this action will always be executed without asking (even in interactive mode). Defaults to `None`.
47
48 <div class="alert alert-warning">Think hard before setting this to <code>False</code>. People might assume that interactive mode won't do anything without their consent.</div>
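As an illustration, a hypothetical action that installs pending package upgrades only after an operator confirms it in interactive mode (the item name and command are made up for this sketch):

```python
# Skipped entirely during non-interactive runs because of interactive=True
actions = {
    'update_packages': {
        'command': "apt-get -y dist-upgrade",
        'interactive': True,
    },
}
```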
0 # Directory items
1
2 directories = {
3 "/path/to/directory": {
4 "mode": "0644",
5 "owner": "root",
6 "group": "root",
7 },
8 }
9
10 ## Attribute reference
11
12 See also: [The list of generic builtin item attributes](../repo/bundles.md#builtin-item-attributes)
13
14 <br>
15
16 ### group
17
18 Name of the group this directory belongs to. Defaults to `None` (don't care about group).
19
20 <br>
21
22 ### mode
23
24 Directory mode as returned by `stat -c %a <directory>`. Defaults to `None` (don't care about mode).
25
26 <br>
27
28 ### owner
29
30 Username of the directory's owner. Defaults to `None` (don't care about owner).
31
32 <br>
33
34 ### purge
35
36 Set this to `True` to remove everything from this directory that is not managed by BundleWrap. Defaults to `False`.
0 # File items
1
2 Manage regular files.
3
4 files = {
5 "/path/to/file": {
6 "mode": "0644",
7 "owner": "root",
8 "group": "root",
9 "content_type": "mako",
10 "encoding": "utf-8",
11 "source": "my_template",
12 },
13 }
14
15 <br>
16
## Attribute reference
19
20 See also: [The list of generic builtin item attributes](../repo/bundles.md#builtin-item-attributes)
21
22 <br>
23
24 ### content
25
26 May be used instead of `source` to provide file content without a template file.
27
28 <br>
29
30 ### content_type
31
32 How the file pointed to by `source` or the string given to `content` should be interpreted.
33
34 <table>
35 <tr><th>Value</th><th>Effect</th></tr>
36 <tr><td><code>any</code></td><td>only cares about file owner, group, and mode</td></tr>
37 <tr><td><code>base64</code></td><td>content is decoded from base64</td></tr>
38 <tr><td><code>binary</code></td><td>file is uploaded verbatim, no content processing occurs</td></tr>
39 <tr><td><code>jinja2</code></td><td>content is interpreted by the Jinja2 template engine</td></tr>
40 <tr><td><code>mako</code></td><td>content is interpreted by the Mako template engine</td></tr>
41 <tr><td><code>text</code> (default)</td><td>like <code>binary</code>, but will be diffed in interactive mode</td></tr>
42 </table>
43
44 <br>
45
46 ### context
47
48 Only used with Mako and Jinja2 templates. The values of this dictionary will be available from within the template as variables named after the respective keys.
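A minimal sketch (file path and variable name are illustrative):

```python
# The keys of "context" become template variables
files = {
    "/etc/motd": {
        "content_type": "mako",
        "context": {
            "greeting": "Hello",
        },
    },
}
```

Inside the Mako template, `${greeting}` would then render as `Hello`.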
49
50 <br>
51
52 ### delete
53
When set to `True`, the path of this file will be removed, even if it currently holds a directory or something other than a regular file. When using `delete`, no other attributes are allowed.
55
56 <br>
57
58 ### encoding
59
60 Encoding of the target file. Note that this applies to the remote file only, your template is still conveniently written in UTF-8 and will be converted by BundleWrap. Defaults to "utf-8". Other possible values (e.g. "latin-1") can be found [here](http://docs.python.org/2/library/codecs.html#standard-encodings).
61
62 <br>
63
64 ### group
65
66 Name of the group this file belongs to. Defaults to `None` (don't care about group).
67
68 <br>
69
70 ### mode
71
72 File mode as returned by `stat -c %a <file>`. Defaults to `None` (don't care about mode).
73
74 <br>
75
76 ### owner
77
78 Username of the file's owner. Defaults to `None` (don't care about owner).
79
80 <br>
81
82 ### source
83
84 File name of the file template. If this says `my_template`, BundleWrap will look in `data/my_bundle/files/my_template` and then `bundles/my_bundle/files/my_template`. Most of the time, you will want to put config templates into the latter directory. The `data/` subdirectory is meant for files that are very specific to your infrastructure (e.g. DNS zone files). This separation allows you to write your bundles in a generic way so that they could be open-sourced and shared with other people. Defaults to the filename of this item (e.g. `foo.conf` when this item is `/etc/foo.conf`).
85
86 See also: [Writing file templates](../guide/item_file_templates.md)
87
88 <br>
89
90 ### verify_with
91
92 This can be used to run external validation commands on a file before it is applied to a node. The file is verified locally on the machine running BundleWrap. Verification is considered successful when the exit code of the verification command is 0. Use `{}` as a placeholder for the shell-quoted path to the temporary file. Here is an example for verifying sudoers files:
93
94 <pre><code class="nohighlight">visudo -cf {}</code></pre>
95
96 Keep in mind that all team members will have to have the verification command installed on their machines.
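Putting it together, a sketch of a sudoers snippet item that is checked with `visudo` before deployment (the path and source name are illustrative):

```python
# The temporary local copy is substituted for {} before the item is applied
files = {
    "/etc/sudoers.d/example": {
        "source": "sudoers_example",
        "verify_with": "visudo -cf {}",
    },
}
```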
0 # Group items
1
2 Manages system groups. Group members are managed through the [user item](user.md).
3
4 groups = {
5 "acme": {
6 "gid": 2342,
7 },
8 }
9
10 <br>
11
12 ## Attribute reference
13
14 See also: [The list of generic builtin item attributes](../repo/bundles.md#builtin-item-attributes)
15
16 <br>
17
18 ### delete
19
20 When set to `True`, this group will be removed from the system. When using `delete`, no other attributes are allowed.
21
22 <br>
23
24 ### gid
25
26 Numerical ID of the group.
0 # APT package items
1
2 Handles packages installed by `apt-get` on Debian-based systems.
3
4 pkg_apt = {
5 "foopkg": {
6 "installed": True, # default
7 },
8 "bar": {
9 "installed": False,
10 },
11 }
12
13 <br>
14
15 ## Attribute reference
16
17 See also: [The list of generic builtin item attributes](../repo/bundles.md#builtin-item-attributes)
18
19 <br>
20
21 ### installed
22
23 `True` when the package is expected to be present on the system; `False` if it should be purged.
0 # dnf package items
1
2 Handles packages installed by `dnf` on RPM-based systems.
3
4 pkg_dnf = {
5 "foopkg": {
6 "installed": True, # default
7 },
8 "bar": {
9 "installed": False,
10 },
11 }
12
13 <br>
14
15 ## Attribute reference
16
17 See also: [The list of generic builtin item attributes](../repo/bundles.md#builtin-item-attributes)
18
19 <br>
20
21 ### installed
22
23 `True` when the package is expected to be present on the system; `False` if it should be removed.
0 # OpenBSD package items
1
2 Handles packages installed by `pkg_add` on OpenBSD systems.
3
4 pkg_openbsd = {
5 "foo": {
6 "installed": True, # default
7 },
8 "bar": {
9 "installed": True,
10 "version": "1.0",
11 },
12 "baz": {
13 "installed": False,
14 },
15 }
16
17 <br>
18
19 ## Attribute reference
20
21 See also: [The list of generic builtin item attributes](../repo/bundles.md#builtin-item-attributes)
22
23 <br>
24
25 ### installed
26
27 `True` when the package is expected to be present on the system; `False` if it should be purged.
28
29 <br>
30
31 ### version
32
33 Optional version string. Required for packages that offer multiple variants (like nginx or sudo). Ignored when `installed` is `False`.
0 # Pacman package items
1
2 Handles packages installed by `pacman` (e.g. Arch Linux).
3
4 pkg_pacman = {
5 "foopkg": {
6 "installed": True, # default
7 },
8 "bar": {
9 "installed": False,
10 },
11 "somethingelse": {
12 "tarball": "something-1.0.pkg.tar.gz",
13 }
14 }
15
16 <div class="alert alert-warning">System updates on Arch Linux should <strong>always</strong> be performed manually and with great care. Thus, this item type installs packages with a simple <code>pacman -S $pkgname</code> instead of the commonly recommended <code>pacman -Syu $pkgname</code>. You should <strong>manually</strong> do a full system update before installing new packages via BundleWrap!</div>
17
18 <br>
19
20 ## Attribute reference
21
22 See also: [The list of generic builtin item attributes](../repo/bundles.md#builtin-item-attributes)
23
24 <br>
25
26 ### installed
27
28 `True` when the package is expected to be present on the system; `False` if this package and all dependencies that are no longer needed should be removed.
29
30 <br>
31
32 ### tarball
33
34 Upload a local file to the node and install it using `pacman -U`. The value of `tarball` must point to a file relative to the `pkg_pacman` subdirectory of the current bundle.
0 # pip package items
1
2 Handles Python packages installed by `pip`.
3
4 pkg_pip = {
5 "foo": {
6 "installed": True, # default
7 "version": "1.0", # optional
8 },
9 "bar": {
10 "installed": False,
11 },
12 "/path/to/virtualenv/foo": {
13 # will install foo in the virtualenv at /path/to/virtualenv
14 },
15 }
16
17 <br>
18
19 ## Attribute reference
20
21 See also: [The list of generic builtin item attributes](../repo/bundles.md#builtin-item-attributes)
22
23 <br>
24
25 ### installed
26
27 `True` when the package is expected to be present on the system; `False` if it should be removed.
28
29 <br>
30
31 ### version
32
33 Force the given exact version to be installed. You can only specify a single version here, selectors like `>=1.0` are NOT supported.
34
35 If it's not given, the latest version will be installed initially, but (like the other package items) upgrades will NOT be installed.
0 # yum package items
1
2 Handles packages installed by `yum` on RPM-based systems.
3
4 pkg_yum = {
5 "foopkg": {
6 "installed": True, # default
7 },
8 "bar": {
9 "installed": False,
10 },
11 }
12
13 <br>
14
15 ## Attribute reference
16
17 See also: [The list of generic builtin item attributes](../repo/bundles.md#builtin-item-attributes)
18
19 <br>
20
21 ### installed
22
23 `True` when the package is expected to be present on the system; `False` if it should be removed.
0 # zypper package items
1
2 Handles packages installed by `zypper` on SUSE-based systems.
3
4 pkg_zypper = {
5 "foopkg": {
6 "installed": True, # default
7 },
8 "bar": {
9 "installed": False,
10 },
11 }
12
13 <br>
14
15 ## Attribute reference
16
17 See also: [The list of generic builtin item attributes](../repo/bundles.md#builtin-item-attributes)
18
19 <br>
20
21 ### installed
22
23 `True` when the package is expected to be present on the system; `False` if it should be removed.
0 # Postgres database items
1
2 Manages Postgres databases.
3
4 postgres_dbs = {
5 "mydatabase": {
6 "owner": "me",
7 },
8 }
9
10 <br>
11
12 ## Attribute reference
13
14 See also: [The list of generic builtin item attributes](../repo/bundles.md#builtin-item-attributes)
15
16 <br>
17
18 ### owner
19
20 Name of the role which owns this database (defaults to `"postgres"`).
0 # Postgres role items
1
2 Manages Postgres roles.
3
4 postgres_roles = {
5 "me": {
6 "superuser": True,
7 "password": "itsamemario",
8 },
9 }
10
11 <br>
12
13 ## Attribute reference
14
15 See also: [The list of generic builtin item attributes](../repo/bundles.md#builtin-item-attributes)
16
17 <br>
18
19 ### superuser
20
21 `True` if the role should be given superuser privileges (defaults to `False`).
22
23 <br>
24
25 ### password
26
27 Plaintext password to set for this role (will be hashed using MD5).
28
29 <div class="alert alert-warning">Please do not write any passwords into your bundles. This attribute is intended to be used with an external source of passwords and filled dynamically. If you don't have or want such an elaborate setup, specify passwords using the <code>password_hash</code> attribute instead.</div>
30
31 <br>
32
33 ### password_hash
34
35 As an alternative to `password`, this allows setting the raw hash as it will be stored in Postgres' internal database. Should start with "md5".
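Postgres computes this value as `md5` followed by the MD5 digest of the password concatenated with the role name. A sketch for generating the hash outside the database (the helper function name is our own):

```python
import hashlib

def postgres_md5_hash(password, role):
    # "md5" + md5(password || rolename), as stored in pg_authid
    return "md5" + hashlib.md5((password + role).encode("utf-8")).hexdigest()

postgres_roles = {
    "me": {
        "password_hash": postgres_md5_hash("itsamemario", "me"),
    },
}
```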
0 # OpenBSD service items
1
2 Handles services on OpenBSD.
3
4 svc_openbsd = {
5 "bgpd": {
6 "enabled": True, # default
7 "running": True, # default
8 },
9 "supervisord": {
10 "running": False,
11 },
12 }
13
14 <br>
15
16 ## Attribute reference
17
18 See also: [The list of generic builtin item attributes](../repo/bundles.md#builtin-item-attributes)
19
20 <br>
21
22 ### enabled
23
`True` if the service shall be automatically started during system bootup; `False` otherwise. Defaults to `True`, since starting a disabled service fails on OpenBSD.
25
26 <br>
27
28 ### running
29
30 `True` if the service is expected to be running on the system; `False` if it should be stopped.
31
32 <br>
33
34 ## Canned actions
35
36 See also: [Explanation of how canned actions work](../repo/bundles.md#canned-actions)
37
38 ### restart
39
40 Restarts the service.
41
42 <br>
43
44 ### stopstart
45
46 Stops and starts the service.
0 # systemd service items
1
2 Handles services managed by systemd.
3
4 svc_systemd = {
5 "fcron.service": {
6 "enabled": True,
7 "running": True, # default
8 },
9 "sgopherd.socket": {
10 "running": False,
11 },
12 }
13
14 <br>
15
16 ## Attribute reference
17
18 See also: [The list of generic builtin item attributes](../repo/bundles.md#builtin-item-attributes)
19
20 <br>
21
22 ### enabled
23
24 `True` if the service shall be automatically started during system bootup; `False` otherwise. `None`, the default value, makes BundleWrap ignore this setting.
25
26 <br>
27
28 ### running
29
30 `True` if the service is expected to be running on the system; `False` if it should be stopped.
31
32 <br>
33
34 ## Canned actions
35
36 See also: [Explanation of how canned actions work](../repo/bundles.md#canned-actions)
37
38 ### reload
39
40 Reloads the service.
41
42 <br>
43
44 ### restart
45
46 Restarts the service.
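To have a config change restart the service, a file item can invoke the canned action through the generic `triggers` attribute (a sketch; the file path and source name are illustrative):

```python
# Changing this file triggers the canned restart action of the service item
files = {
    "/etc/fcron/fcron.conf": {
        "source": "fcron.conf",
        "triggers": ["svc_systemd:fcron.service:restart"],
    },
}
```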
0 # System V service items
1
2 Handles services managed by traditional System V init scripts.
3
4 svc_systemv = {
5 "apache2": {
6 "running": True, # default
7 },
8 "mysql": {
9 "running": False,
10 },
11 }
12
13 <br>
14
15 ## Attribute reference
16
17 See also: [The list of generic builtin item attributes](../repo/bundles.md#builtin-item-attributes)
18
19 <br>
20
21 ### running
22
23 `True` if the service is expected to be running on the system; `False` if it should be stopped.
24
25 <br>
26
27 ## Canned actions
28
29 See also: [Explanation of how canned actions work](../repo/bundles.md#canned-actions)
30
31 ### reload
32
33 Reloads the service.
34
35 <br>
36
37 ### restart
38
39 Restarts the service.
0 # Upstart service items
1
2 Handles services managed by Upstart.
3
4 svc_upstart = {
5 "gunicorn": {
6 "running": True, # default
7 },
8 "celery": {
9 "running": False,
10 },
11 }
12
13 <br>
14
15 ## Attribute reference
16
17 See also: [The list of generic builtin item attributes](../repo/bundles.md#builtin-item-attributes)
18
19 <br>
20
21 ### running
22
23 `True` if the service is expected to be running on the system; `False` if it should be stopped.
24
25 <br>
26
27 ## Canned actions
28
29 See also: [Explanation of how canned actions work](../repo/bundles.md#canned-actions)
30
31 ### reload
32
33 Reloads the service.
34
35 <br>
36
37 ### restart
38
39 Restarts the service.
40
41 <br>
42
43 ### stopstart
44
45 Stops and then starts the service. This is different from `restart` in that Upstart will pick up changes to the `/etc/init/SERVICENAME.conf` file, while `restart` will continue to use the version of that file that the service was originally started with. See [http://askubuntu.com/a/238069](http://askubuntu.com/a/238069).
0 # Symlink items
1
2 symlinks = {
3 "/some/symlink": {
4 "group": "root",
5 "owner": "root",
6 "target": "/target/file",
7 },
8 }
9
10 <br>
11
12 ## Attribute reference
13
14 See also: [The list of generic builtin item attributes](../repo/bundles.md#builtin-item-attributes)
15
16 <br>
17
18 ### target
19
20 File or directory this symlink points to. **This attribute is required.**
21
22 <br>
23
24 ### group
25
Name of the group this symlink belongs to. Defaults to `None` (don't care about group).
27
28 <br>
29
30 ### owner
31
Username of the symlink's owner. Defaults to `None` (don't care about owner).
0 # User items
1
2 Manages system user accounts.
3
4 users = {
5 "jdoe": {
6 "full_name": "Jane Doe",
7 "gid": 2342,
8 "groups": ["admins", "users", "wheel"],
9 "home": "/home/jdoe",
10 "password_hash": "$6$abcdef$ghijklmnopqrstuvwxyz",
11 "shell": "/bin/zsh",
12 "uid": 4747,
13 },
14 }
15
16 <br>
17
18 ## Attribute reference
19
20 See also: [The list of generic builtin item attributes](../repo/bundles.md#builtin-item-attributes)
21
22 All attributes are optional.
23
24 <br>
25
26 ### delete
27
28 When set to `True`, this user will be removed from the system. Note that because of how `userdel` works, the primary group of the user will be removed if it contains no other users. When using `delete`, no other attributes are allowed.
29
30 <br>
31
32 ### full_name
33
34 Full name of the user.
35
36 <br>
37
38 ### gid
39
40 Primary group of the user as numerical ID or group name.
41
42 <div class="alert alert-info">Due to how <code>useradd</code> works, this attribute is required whenever you <strong>don't</strong> want the default behavior of <code>useradd</code> (usually that means automatically creating a group with the same name as the user). If you want to use an unmanaged group already on the node, you need this attribute. If you want to use a group managed by BundleWrap, you need this attribute. This is true even if the groups mentioned are in fact named like the user.</div>
43
44 <br>
45
46 ### groups
47
48 List of groups (names, not GIDs) the user should belong to. Must NOT include the group referenced by `gid`.
49
50 <br>
51
52 ### hash_method
53
54 One of:
55
56 * `md5`
57 * `sha256`
58 * `sha512`
59
60 Defaults to `sha512`.
61
62 <br>
63
64 ### home
65
66 Path to home directory. Defaults to `/home/USERNAME`.
67
68 <br>
69
70 ### password
71
72 The user's password in plaintext.
73
74 <div class="alert alert-danger">Please do not write any passwords into your bundles. This attribute is intended to be used with an external source of passwords and filled dynamically. If you don't have or want such an elaborate setup, specify passwords using the <code>password_hash</code> attribute instead.</div>
75
76 <div class="alert alert-info">If you don't specify a <code>salt</code> along with the password, BundleWrap will use a static salt. Be aware that this is basically the same as using no salt at all.</div>
77
78 <br>
79
80 ### password_hash
81
82 Hashed password as it would be returned by `crypt()` and written to `/etc/shadow`.
83
84 <br>
85
86 ### salt
87
88 Recommended for use with the `password` attribute. BundleWrap will use 5000 rounds of SHA-512 on this salt and the provided password.
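A sketch of combining `password` and `salt` with the vault from the secrets guide, so no plaintext ends up in the bundle (the secret identifiers are made up; `repo` is available inside `items.py`):

```python
users = {
    "jdoe": {
        "password": repo.vault.password_for("jdoe login"),
        "salt": repo.vault.password_for("jdoe salt"),
    },
}
```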
89
90 <br>
91
92 ### shell
93
94 Path to login shell executable.
95
96 <br>
97
98 ### uid
99
100 Numerical user ID. It's your job to make sure it's unique.
0 <style>.bs-sidebar { display: none; }</style>
1
2 # About
3
4 Development on BundleWrap started in July 2012, borrowing some ideas from [Bcfg2](http://bcfg2.org/). Some key features that are meant to set BundleWrap apart from other config management systems are:
5
6 * decentralized architecture
7 * pythonic and easily extendable
8 * easy to get started with
9 * true item-level parallelism (in addition to working on multiple nodes simultaneously, BundleWrap will continue to fix config files while installing a package on the same node)
10 * very customizable item dependencies
11 * collaboration features like [node locking](../guide/locks.md) (to prevent simultaneous applies to the same node) and hooks for chat notifications
12 * built-in testing facility (`bw test`)
13 * can be used as a library
14
BundleWrap is a "pure" free software project licensed under the terms of the [GPLv3](http://www.gnu.org/licenses/gpl.html), with no *Enterprise Edition* or commercial support.
0 # Alternatives
1
<div class="alert alert-info">This page is an effort to compare BundleWrap to other config management systems. It is very hard to keep this information complete and up to date, so please feel free to raise issues or create pull requests if something is amiss.</div>
3
4 BundleWrap has the following properties that are unique to it or at least not common among other solutions:
5
6 * server- and agent-less architecture
7 * item-level parallelism to speed up convergence of complex nodes
* interactive mode to review configuration as it is being applied
* [Mako file templates](../guide/item_file_templates.md)
10 * verifies that each action taken actually fixed the item in question
11 * verify mode to assess the state of your configuration without mutating it
12 * useful and actionable error messages
13 * can apply actions (and other items) prior to fixing an item (and only then)
14 * built-in visualization of node configuration
15 * nice [Python API](../guide/api.md)
16 * designed to be mastered quickly and easily remembered
17 * for better or worse: no commercial agenda/support
18 * no support for non-Linux target nodes (BundleWrap itself can be run from Mac OS as well)
19
20
21 ## Ansible
22
[Ansible](http://ansible.com) is very similar to BundleWrap in how it communicates with nodes. Both systems do not use server or agent processes, but SSH. Ansible can optionally use OpenSSH instead of a Python SSH implementation to speed up performance. On the other hand, BundleWrap will always use the Python implementation, but with multiple connections to each node. This should give BundleWrap a performance advantage on very complex systems with many items, since each connection can work on a different item simultaneously.
24
25 To apply configuration, Ansible uploads pieces of code called modules to each node and runs them there. Many Ansible modules depend on the node having a Python 2.x interpreter installed. In some cases, third-party Python libraries are needed as well, increasing the footprint on the node. BundleWrap runs commands on the target node just as you would in an interactive SSH session. Most of the [commands needed](../guide/installation.md#requirements-for-managed-systems) by BundleWrap are provided by coreutils and should be present on all standard Linux systems.
26
27 Ansible ships with loads of modules while BundleWrap will only give you the most needed primitives to work with. For example, we will not add an item type for remote downloads because you can easily build that yourself using an [action](../items/action.md) with `wget`.
28
29 Ansible's playbooks roughly correspond to BundleWrap's bundles, but are written in YAML using a special playbook language. BundleWrap uses Python for this purpose, so if you know some basic Python you only need to learn the schema of the dictionaries you're building. This also means that you will never run into a problem the playbook language cannot solve. Anything you can do in Python, you can do in BundleWrap.
30
31 While you can automate application deployments in BundleWrap, Ansible is much more capable in that regard as it combines config management and sophisticated deployment mechanisms (multi-stage, rolling updates).
32
File templates in Ansible are [Jinja2](http://jinja2.pocoo.org), while BundleWrap offers both [Mako](http://makotemplates.org) and Jinja2.
34
35 Ansible, Inc. offers paid support for Ansible and an optional web-based addon called [Ansible Tower](http://ansible.com/tower). No such offerings are available for BundleWrap.
36
37
## BCFG2
40
41 BCFG2's bundles obviously were an inspiration for BundleWrap. One important difference is that BundleWrap's bundles are usually completely isolated and self-contained within their directory while BCFG2 bundles may need resources (e.g. file templates) from elsewhere in the repository.
42
43 On a practical level BundleWrap prefers pure Python and Mako over the XML- and text-variants of Genshi used for bundle and file templating in BCFG2.
44
And of course BCFG2 has a very traditional client/server model while BundleWrap runs only on the operator's computer.
46
47
## Chef
50
51 [Chef](http://www.getchef.com) has basically two modes of operation: The most widely used one involves a server component and the `chef-client` agent. The second option is `chef-solo`, which will apply configuration from a local repository to the node the repository is located on. BundleWrap supports neither of these modes and always applies configuration over SSH.
52
53 Overall, Chef is harder to get into, but will scale to thousands of nodes.
54
55 The community around Chef is quite large and probably the largest of all config management systems. This means lots of community-maintained cookbooks to choose from. BundleWrap does have a [plugin system](../repo/plugins.md) to provide almost anything in a repository, but there aren't many plugins to choose from yet.
56
57 Chef is written in Ruby and uses the popular [ERB](http://www.kuwata-lab.com/erubis/) template language. BundleWrap is heavily invested in Python and offers support for Mako and Jinja2 templates.
58
59 OpsCode offers paid support for Chef and SaaS hosting for the server component. [AWS OpsWorks](http://aws.amazon.com/opsworks/) also integrates Chef cookbooks.
0 # Contributing
1
2 We welcome all input and contributions to BundleWrap. If you've never done this sort of thing before, maybe check out [contribution-guide.org](http://www.contribution-guide.org). But don't be afraid to make mistakes, nobody expects your first contribution to be perfect. We'll gladly help you out.
3
4 <br>
5
6 ## Submitting bug reports
7
8 Please use the [GitHub issue tracker](https://github.com/bundlewrap/bundlewrap/issues) and take a few minutes to look for existing reports of the same problem (open or closed!).
9
10 <div class="alert alert-danger">If you've found a security issue or are not at all sure, just contact <a href="mailto:trehn@bundlewrap.org">trehn@bundlewrap.org</a>.</div>
11
12 <br>
13
14 ## Contributing code
15
16 <div class="alert alert-info">Before working on new features, try reaching out to one of the core authors first. We are very concerned with keeping BundleWrap lean and not introducing bloat. If your idea is not a good fit for all or most BundleWrap users, it can still be included <a href="../dev_plugins">as a plugin</a>.</div>
17
18 Here are the steps:
19
20 1. Write your code. Awesome!
21 2. If you haven't already done so, please consider writing tests. Otherwise, someone else will have to do it for you.
22 3. Same goes for documentation.
23 4. Set up a [virtualenv](http://virtualenv.readthedocs.org/en/latest/) and run `pip install -r requirements.txt`.
24 5. Make sure you can connect to your localhost via `ssh` without using a password and that you are able to run `sudo`.
25 6. Run `tox`.
26 7. Review and sign the Copyright Assignment Agreement (CAA) by adding your name and email to the `AUTHORS` file. (This step can be skipped if your contribution is too small to be considered intellectual property, e.g. spelling fixes)
27 8. Open a pull request on [GitHub](https://github.com/bundlewrap/bundlewrap).
28 9. Feel great. Thank you.
29
30 <br>
31
32 ## Contributing documentation
33
34 The process is essentially the same as detailed above for code contributions. You will find the docs in `docs/content/` and can preview them using `cd docs && mkdocs serve`.
35
36 <br>
37
38 ## Help
39
40 If at any point you need help or are not sure what to do, just drop by in [#bundlewrap on Freenode](irc://chat.freenode.net/bundlewrap) or poke [@bundlewrap on Twitter](https://twitter.com/bundlewrap).
0 # FAQ
1
2 ## Technical
3
4 ### BundleWrap says an item failed to apply, what do I do now?
5
6 Try running `bw apply -i nodename` to see which attribute of the item could not be fixed. If that doesn't tell you enough, try `bw --debug apply -i nodename` and look for the command BundleWrap is using to fix the item in question. Then try running that command yourself and check for any errors.
7
8 <br>
9
10 ### What happens when two people start applying configuration to the same node?
11
12 BundleWrap uses a [locking mechanism](../guide/locks.md) to prevent collisions like this.
13
14 <br>
15
16 ### How can I have BundleWrap reload my services after config changes?
17
18 See [canned actions](../repo/bundles.md#canned_actions) and [triggers](../repo/bundles.md#triggers).
19
20 <br>
21
22 ### Will BundleWrap keep track of package updates?
23
24 No. BundleWrap will only care about whether a package is installed or not. Updates will have to be installed through a separate mechanism (I like to create an [action](../items/action.md) with the `interactive` attribute set to `True`). Selecting specific versions should be done through your package manager.
25
26 <br>
27
28 ### Is there a probing mechanism like Ohai?
29
30 No. BundleWrap is meant to be very push-focused. The node should not have any say in what configuration it will receive.
31
32 <br>
33
34 ### Is BundleWrap secure?
35
BundleWrap is more concerned with safety than security. Due to its design, it is possible for your coworkers to introduce malicious code into a BundleWrap repository that could compromise your machine. You should only use trusted repositories and plugins. We also recommend keeping an eye on the commit logs of your repos.
37
38 <br>
39
40 ## The BundleWrap Project
41
42 ### Why doesn't BundleWrap provide pre-built community bundles?
43
44 In our experience, bundles for even the most common pieces of software always contain some opinionated bits specific to local infrastructure. Making bundles truly universal (e.g. in terms of supported Linux distributions) would mean a lot of bloat. And since local modifications are hard to reconcile with an upstream community repository, bundles would have to be very feature-complete to be useful to the majority of users, increasing bloat even more.
45
46 Maintaining bundles and thus configuration for different pieces of software is therefore out of scope for the BundleWrap project. While it might seem tedious when you're getting started, with some practice, writing your own bundles will become both easy and precise in terms of infrastructure fit.
47
48 <br>
49
50 ### Why do contributors have to sign a Copyright Assignment Agreement?
51
While it sounds scary, copyright assignment is used to improve the enforceability of the GPL. Even the FSF does it, [read their explanation why](http://www.gnu.org/licenses/why-assign.html). The agreement used by BundleWrap is from [harmonyagreements.org](http://harmonyagreements.org).
53
54 If you're still concerned, please do not hesitate to contact [@trehn](https://twitter.com/trehn).
55
56 <br>
57
58 ### Isn't this all very similar to Ansible?
59
60 Some parts are, but there are significant differences as well. Check out the [alternatives page](alternatives.md#ansible) for a writeup of the details.
61
62 <br>
0 # Glossary
1
2 ## action
3
4 Actions are a special kind of item used for running shell commands during each `bw apply`. They allow you to do things that aren't persistent in nature.
5
6 <br>
7
8 ## apply
9
10 An "apply" is what we call the process of what's otherwise known as "converging" the state described by your repository and the actual status quo on the node.
11
12 <br>
13
14 ## bundle
15
16 A collection of items. Most of the time, you will create one bundle per application. For example, an Apache bundle will include the httpd service, the virtual host definitions and the apache2 package.
17
18 <br>
19
20 ## group
21
22 Used for organizing your nodes.
23
24 <br>
25
26 ## hook
27
28 [Hooks](../repo/hooks.md) can be used to run your own code automatically during various stages of BundleWrap operations.
29
30 <br>
31
32 ## item
33
34 A single piece of configuration on a node, e.g. a file or an installed package.
35
36 You might be interested in [this overview of item types](../repo/bundles.md#item_types).
37
38 <br>
39
40 ## lib
41
42 [Libs](../repo/libs.md) are a way to store Python modules in your repository and make them accessible to your bundles and templates.
43
44 <br>
45
46 ## node
47
48 A managed system, whether physical or virtual.
49
50 <br>
51
52 ## repo
53
54 A repository is a directory with [some stuff](../repo/layout.md) in it that tells BundleWrap everything it needs to know about your infrastructure.
0 <h1>Bundles</h1>
1
2 Bundles are subdirectories of the `bundles/` directory of your BundleWrap repository.
3
4 # items.py
5
6 Within each bundle, there may be a file called `items.py`. It defines any number of magic attributes that are automatically processed by BundleWrap. Each attribute is a dictionary mapping an item name (such as a file name) to a dictionary of attributes (e.g. file ownership information).
7
8 A typical bundle might look like this:
9
10 files = {
11 '/etc/hosts': {
12 'owner': "root",
13 'group': "root",
14 'mode': "0664",
15 [...]
16 },
17 }
18
19 users = {
20 'janedoe': {
21 'home': "/home/janedoe",
22 'shell': "/bin/zsh",
23 [...]
24 },
25 'johndoe': {
26 'home': "/home/johndoe",
27 'shell': "/bin/bash",
28 [...]
29 },
30 }
31
32 This bundle defines the attributes `files` and `users`. Within the `users` attribute, there are two `user` items. Each item maps its name to a dictionary that is understood by the specific kind of item. Below you will find a reference of all builtin item types and the attributes they understand. You can also [define your own item types](../guide/dev_item.md).
33
34 <br>
35
36 ## Item types
37
38 This table lists all item types included in BundleWrap along with the bundle attributes they understand.
39
40 <table>
41 <tr><th>Type</th><th>Bundle attribute</th><th>Description</th></tr>
42 <tr><td><a href="../../items/action">action</a></td><td><code>actions</code></td><td>Actions allow you to run commands on every <code>bw apply</code></td></tr>
43 <tr><td><a href="../../items/directory">directory</a></td><td><code>directories</code></td><td>Manages permissions and ownership for directories</td></tr>
44 <tr><td><a href="../../items/file">file</a></td><td><code>files</code></td><td>Manages contents, permissions, and ownership for files</td></tr>
45 <tr><td><a href="../../items/group">group</a></td><td><code>groups</code></td><td>Manages groups by wrapping <code>groupadd</code>, <code>groupmod</code> and <code>groupdel</code></td></tr>
46 <tr><td><a href="../../items/pkg_apt">pkg_apt</a></td><td><code>pkg_apt</code></td><td>Installs and removes packages with APT</td></tr>
47 <tr><td><a href="../../items/pkg_dnf">pkg_dnf</a></td><td><code>pkg_dnf</code></td><td>Installs and removes packages with dnf</td></tr>
48 <tr><td><a href="../../items/pkg_pacman">pkg_pacman</a></td><td><code>pkg_pacman</code></td><td>Installs and removes packages with pacman</td></tr>
49 <tr><td><a href="../../items/pkg_pip">pkg_pip</a></td><td><code>pkg_pip</code></td><td>Installs and removes Python packages with pip</td></tr>
50 <tr><td><a href="../../items/pkg_yum">pkg_yum</a></td><td><code>pkg_yum</code></td><td>Installs and removes packages with yum</td></tr>
51 <tr><td><a href="../../items/pkg_zypper">pkg_zypper</a></td><td><code>pkg_zypper</code></td><td>Installs and removes packages with zypper</td></tr>
52 <tr><td><a href="../../items/postgres_db">postgres_db</a></td><td><code>postgres_dbs</code></td><td>Manages Postgres databases</td></tr>
53 <tr><td><a href="../../items/postgres_role">postgres_role</a></td><td><code>postgres_roles</code></td><td>Manages Postgres roles</td></tr>
54 <tr><td><a href="../../items/pkg_pip">pkg_pip</a></td><td><code>pkg_pip</code></td><td>Installs and removes Python packages with pip</td></tr>
55 <tr><td><a href="../../items/pkg_openbsd">pkg_openbsd</a></td><td><code>pkg_openbsd</code></td><td>Installs and removes OpenBSD packages with pkg_add/pkg_delete</td></tr>
56 <tr><td><a href="../../items/svc_openbsd">svc_openbsd</a></td><td><code>svc_openbsd</code></td><td>Starts and stops services with OpenBSD's rc</td></tr>
57 <tr><td><a href="../../items/svc_systemd">svc_systemd</a></td><td><code>svc_systemd</code></td><td>Starts and stops services with systemd</td></tr>
58 <tr><td><a href="../../items/svc_systemv">svc_systemv</a></td><td><code>svc_systemv</code></td><td>Starts and stops services with traditional System V init scripts</td></tr>
59 <tr><td><a href="../../items/svc_upstart">svc_upstart</a></td><td><code>svc_upstart</code></td><td>Starts and stops services with Upstart</td></tr>
60 <tr><td><a href="../../items/symlink">symlink</a></td><td><code>symlinks</code></td><td>Manages symbolic links and their ownership</td></tr>
61 <tr><td><a href="../../items/user">user</a></td><td><code>users</code></td><td>Manages users by wrapping <code>useradd</code>, <code>usermod</code> and <code>userdel</code></td></tr>
62 </table>
63
64 <br>
65
66 ## Builtin item attributes
67
68 There are also attributes that can be applied to any kind of item.
69
70 <br>
71
72 ### error_on_missing_fault
73
74 This will simply skip an item instead of raising an error when a Fault used for an attribute on the item is unavailable. Faults are special objects used by `repo.vault` to [handle secrets](../guide/secrets.md). A Fault being unavailable can mean you're missing the secret key required to decrypt a secret you're trying to use as an item attribute value.
75
76 Defaults to `False`.
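
As a sketch (the file path and vault key below are made up), an item that should be skipped rather than fail when a secret cannot be decrypted might look like this:

```python
files = {
    "/etc/app/secret.conf": {
        # in a real repo, 'content' would be a Fault from repo.vault, e.g.:
        # 'content': repo.vault.password_for("app secret"),
        'content': "placeholder",
        # skip this item instead of raising an error when the Fault
        # cannot be resolved (e.g. the secret key is missing)
        'error_on_missing_fault': True,
    },
}
```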
77
78 <br>
79
80 ### needs
81
82 One such attribute is `needs`. It allows for setting up dependencies between items. This is not something you will have to do very often, because there are already implicit dependencies between item types (e.g. all files depend on the users owning them). Here are two examples:
83
84 my_items = {
85 'item1': {
86 [...]
87 'needs': [
88 'file:/etc/foo.conf',
89 ],
90 },
91 'item2': {
92 [...]
93 'needs': [
94 'pkg_apt:',
95 'bundle:foo',
96 ],
97 }
98 }
99
100 The first item (`item1`, specific attributes have been omitted) depends on a file called `/etc/foo.conf`, while `item2` depends on all APT packages being installed and every item in the foo bundle.
101
102 <br>
103
104 ### needed_by
105
106 This attribute is an alternative way of defining dependencies. It works just like `needs`, but in the other direction. There are only three scenarios where you should use `needed_by` over `needs`:
107
108 * if you need all items of a certain type to depend on something or
109 * if you need all items in a bundle to depend on something or
110 * if you need an item in a bundle you can't edit (e.g. because it's provided by a community-maintained [plugin](plugins.md)) to depend on something in your bundles
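
A minimal sketch of the third scenario (item and service names are made up): instead of editing a third-party bundle to add a `needs` entry, you push the dependency from your own bundle:

```python
files = {
    '/etc/myapp.conf': {
        'content': "port = 8080\n",
        # make the (hypothetical) myapp service from another bundle
        # wait for this file, without editing that other bundle
        'needed_by': [
            'svc_systemd:myapp',
        ],
    },
}
```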
111
112 <br>
113
114 ### tags
115
116 A list of strings to tag an item with. Tagging has no immediate effect in itself, but can be useful in a number of places. For example, you can add dependencies on all items with a given tag:
117
118 pkg_apt = {
119 "mysql-server-{}".format(node.metadata.get('mysql_version', "5.5")): {
120 'tags': ["provides-mysqld"],
121 },
122 }
123
124 svc_systemd = {
125 "myapp": {
126 'needs': ["tag:provides-mysqld"],
127 },
128 }
129
130 In this simplified example we save ourselves from duplicating the logic that gets the current MySQL version from metadata (which is probably overkill here, but you might encounter more complex situations).
131
132 <br>
133
134 ### triggers and triggered
135
136 In some scenarios, you may want to execute an [action](../items/action.md) only when an item is fixed (e.g. restart a daemon after a config file has changed or run `postmap` after updating an alias file). To do this, BundleWrap has the builtin attribute `triggers`. You can use it to point to any item that has its `triggered` attribute set to `True`. Such items will only be checked (or in the case of actions: run) if the triggering item is fixed (or a triggering action completes successfully).
137
138 files = {
139 '/etc/daemon.conf': {
140 [...]
141 'triggers': [
142 'action:restart_daemon',
143 ],
144 },
145 }
146
147 actions = {
148 'restart_daemon': {
149 'command': "service daemon restart",
150 'triggered': True,
151 },
152 }
153
154 The above example will run `service daemon restart` every time BundleWrap successfully applies a change to `/etc/daemon.conf`. If an action is triggered multiple times, it will only be run once.
155
156 Similar to `needed_by`, `triggered_by` can be used to define a `triggers` relationship from the opposite direction.
157
158 <br>
159
160 ### preceded_by
161
162 Operates like `triggers`, but will apply the triggered item *before* the triggering item. Let's look at an example:
163
164 files = {
165 '/etc/example.conf': {
166 [...]
167 'preceded_by': [
168 'action:backup_example',
169 ],
170 },
171 }
172
173 actions = {
174 'backup_example': {
175 'command': "cp /etc/example.conf /etc/example.conf.bak",
176 'triggered': True,
177 },
178 }
179
180 In this configuration, `/etc/example.conf` will be backed up right before (and only if) it is changed. You would probably also want to set `cascade_skip` to `False` on the action so you can skip it in interactive mode when you're sure you don't need the backup copy.
181
182 Similar to `needed_by`, `precedes` can be used to define a `preceded_by` relationship from the opposite direction.
183
184 <br>
185
186 ### unless
187
188 Another builtin item attribute is `unless`. For example, it can be used to construct a one-off file item where BundleWrap will only create the file once, but won't check or modify its contents once it exists.
189
190 files = {
191 "/path/to/file": {
192 [...]
193 "unless": "test -x /path/to/file",
194 },
195 }
196
197 This will run `test -x /path/to/file` before doing anything with the item. If the command returns 0, no action will be taken to "correct" the item.
198
199 Another common use for `unless` is with actions that perform some sort of install operation. In this case, the `unless` condition makes sure the install operation is only performed when it is needed instead of every time you run `bw apply`. In scenarios like this you will probably want to set `cascade_skip` to `False` so that skipping the installation (because the thing is already installed) will not cause every item that depends on the installed thing to be skipped. Example:
200
201 actions = {
202 'download_thing': {
203 'command': "wget http://example.com/thing.bin -O /opt/thing.bin && chmod +x /opt/thing.bin",
204 'unless': "test -x /opt/thing.bin",
205 'cascade_skip': False,
206 },
207 'run_thing': {
208 'command': "/opt/thing.bin",
209 'needs': ["action:download_thing"],
210 },
211 }
212
213 If `action:download_thing` would not set `cascade_skip` to `False`, `action:run_thing` would only be executed once: directly after the thing has been downloaded. On subsequent runs, `action:download_thing` will fail the `unless` condition and be skipped. This would also cause all items that depend on it to be skipped, including `action:run_thing`.
214
215 <br>
216
217 ### cascade_skip
218
219 There are some situations where you don't want the default behavior of skipping everything that depends on a skipped item. That's where `cascade_skip` comes in. Set it to `False` and skipping an item won't skip those that depend on it. Note that items can be skipped
220
221 * interactively or
222 * because they haven't been triggered or
223 * because one of their dependencies failed or
224 * because they failed their `unless` condition or
225 * because an [action](../items/action.md) had its `interactive` attribute set to `True` during a non-interactive run
226
227 The following example will offer to run an `apt-get update` before installing a package, but continue to install the package even if the update is declined interactively.
228
229 actions = {
230 'apt_update': {
231 'cascade_skip': False,
232 'command': "apt-get update",
233 },
234 }
235
236 pkg_apt = {
237 'somepkg': {
238 'needs': ["action:apt_update"],
239 },
240 }
241
242 `cascade_skip` defaults to `True`. However, if the item uses the `unless` attribute or is triggered, the default changes to `False`. Most of the time, this is what you'll want.
243
244 <br>
245
246 ## Canned actions
247
248 Some item types have what we call "canned actions". Those are pre-defined actions attached directly to an item. Take a look at this example:
249
250 svc_upstart = {'mysql': {'running': True}}
251
252 files = {
253 "/etc/mysql/my.cnf": {
254 'source': "my.cnf",
255 'triggers': [
256 "svc_upstart:mysql:reload", # this triggers the canned action
257 ],
258 },
259 }
260
261 Canned actions always have to be triggered in order to run. In the example above, a change in the file `/etc/mysql/my.cnf` will trigger the `reload` action defined by the [svc_upstart item type](../items/svc_upstart.md) for the mysql service.
262
263 <br>
264
265 # metadata.py
266
267 Alongside `items.py` you may create another file called `metadata.py`. It can be used to do advanced processing of the metadata you configured for your nodes and groups. Specifically, it allows each bundle to modify metadata before `items.py` is evaluated. To do that, you simply write any number of functions whose name doesn't start with an underscore and put them into `metadata.py`.
268
269 <div class="alert alert-warning">Understand that <strong>any</strong> function will be used as a metadata processor, unless its name starts with an underscore. This is also true for imported functions, so you'll need to import them like this: <code>from module import func as _func</code>.</div>
270
271 These functions take the metadata dictionary generated so far as their single argument. You must then return the same dictionary with any modifications you need to make. These functions are called metadata processors. Every metadata processor from every bundle is called *repeatedly* with the latest metadata dictionary until no more changes are made to the metadata. Here's an example of what a `metadata.py` might look like (note that you have access to `repo` and `node` just like in `items.py`):
272
273 def my_metadata_processor(metadata):
274 metadata["foo"] = node.name
275 return metadata
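
Since processors run repeatedly, they should be written so that reapplying them makes no further changes. A common pattern is to only fill in defaults (the metadata keys below are made up):

```python
def add_defaults(metadata):
    # only set values that aren't already there, so this processor
    # converges instead of fighting other processors (or nodes.py)
    metadata.setdefault("nginx", {})
    metadata["nginx"].setdefault("worker_processes", 2)
    return metadata
```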
276
277 <div class="alert alert-danger">To avoid deadlocks when accessing <strong>other</strong> nodes' metadata from within a metadata processor, use <code>other_node.partial_metadata</code> instead of <code>other_node.metadata</code>. For the same reason, always use the <code>metadata</code> parameter to access the current node's metadata, never <code>node.metadata</code>.</div>
0 # groups.py
1
2 This file lets you specify or dynamically build groups of [nodes](nodes.py.md) in your environment.
3
4 As with `nodes.py`, you define your groups as a dictionary:
5
6 groups = {
7 'all': {
8 'member_patterns': (
9 r".*",
10 ),
11 },
12 'group1': {
13 'members': (
14 'node1',
15 ),
16 },
17 }
18
19 All group attributes are optional.
20
21 <br>
22
23 # Group attribute reference
24
25 This section is a reference for all possible attributes you can define for a group:
26
27 groups = {
28 'group1': {
29 # THIS PART IS EXPLAINED HERE
30 'bundles': ["bundle1", "bundle2"],
31 'members': ["node1"],
32 'members_add': lambda node: node.os == 'debian',
33 'members_remove': lambda node: node.os == 'ubuntu',
34 'member_patterns': [r"^cluster1\."],
35 'metadata': {'foo': "bar"},
36 'os': 'linux',
37 'subgroups': ["group2", "group3"],
38 'subgroup_patterns': [r"^group.*pattern$"],
39 },
40 }
41
42 Note that many attributes from [nodes.py](nodes.py.md) (e.g. `bundles`) may also be set at group level, but aren't explicitly documented here again.
43
44 <br>
45
46 ## member_patterns
47
48 A list of regular expressions. Node names matching these expressions will be added to the group members.
49
50 Matches are determined using [the search() method](http://docs.python.org/2/library/re.html#re.RegexObject.search).
51
52 <br>
53
54 ## members
55
56 A tuple or list of node names that belong to this group.
57
58 <br>
59
60 ## members_add and members_remove
61
62 For these attributes you can provide a function that takes a node object as its only argument and returns a boolean. The function will be called once for every node in the repo. If it returns `True`, the node will be added to (`members_add`) or removed from (`members_remove`) this group.
63
64 <div class="alert alert-warning">Inside your function you may query node attributes and groups, but you will not see groups or attributes added as a result of a different <code>members_add</code> / <code>members_remove</code> function. Only attributes and groups that have been set statically will be available. You can, however, remove a node with <code>members_remove</code> that you added with <code>members_add</code> (but not vice-versa).<br>You should also avoid using <code>node.metadata</code> here. Since metadata ultimately depends on group memberships, only metadata set in <code>nodes.py</code> will be returned here.</div>
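
For anything longer than a lambda, a named function keeps `groups.py` readable (the group name below is made up; `node.os` is a regular node attribute):

```python
def is_debian(node):
    # membership test: called once per node in the repo
    return node.os == 'debian'

groups = {
    'debian-hosts': {
        'members_add': is_debian,
    },
}
```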
65
66 <br>
67
68 ## metadata
69
70 A dictionary that will be accessible from each node's `node.metadata`. For each node, BundleWrap will merge the metadata of all of the node's groups first, then merge in the metadata from the node itself.
71
72 Metadata is merged recursively by default, meaning nested dicts will overlay each other. Lists will be appended to each other, but not recursed into. In some cases, you want to overwrite instead of merge a piece of metadata. This is accomplished through the use of `bundlewrap.metadata.atomic()` and best illustrated as an example:
73
74 from bundlewrap.metadata import atomic
75
76 groups = {
77 'all': {
78 'metadata': {
79 'interfaces': {
80 'eth0': {},
81 },
82 'nameservers': ["8.8.8.8", "8.8.4.4"],
83 'ntp_servers': ["pool.ntp.org"],
84 },
85 },
86 'internal': {
87 'metadata': {
88 'interfaces': {
89 'eth1': {},
90 },
91 'nameservers': atomic(["10.0.0.1", "10.0.0.2"]),
92 'ntp_servers': ["10.0.0.1", "10.0.0.2"],
93 },
94 },
95 }
96
97 A node in both groups will end up with `eth0` *and* `eth1`.
98
99 The nameservers, however, are overwritten, so nodes that are in both the "all" *and* the "internal" group will only have the `10.0.0.x` ones, while nodes just in the "all" group will have the `8.8.x.x` nameservers.
100
101 The NTP servers are appended: a node in both groups will have all three NTP servers.
102
103 <div class="alert alert-warning">BundleWrap will consider group hierarchy when merging metadata. For example, it is possible to define a default nameserver for the "eu" group and then override it for the "eu.frankfurt" subgroup. The catch is that this only works for groups that are connected through a subgroup hierarchy. Independent groups will have their metadata merged in an undefined order. <code>bw test</code> will report conflicting metadata in independent groups as a metadata collision.</div>
104
105 <div class="alert alert-info">Also see the <a href="../nodes.py#metadata">documentation for node.metadata</a> for more information.</div>
106
107 <br>
108
109 ## subgroups
110
111 A tuple or list of group names whose members should be recursively included in this group.
112
113 <br>
114
115 ## subgroup_patterns
116
117 A list of regular expressions. Groups with names matching these expressions will be added as subgroups.
118
119 Matches are determined using [the search() method](http://docs.python.org/2/library/re.html#re.RegexObject.search).
120
121 <br>
122
123 ## use_shadow_passwords
124
125 See [node attribute documentation](nodes.py.md#use_shadow_passwords). May be overridden by subgroups or individual nodes.
126
127 <br>
0 # Hooks
1
2 Hooks enable you to execute custom code at certain points during a BundleWrap run. This is useful for integrating with other systems e.g. for team notifications, logging or statistics.
3
4 To use hooks, you need to create a subdirectory in your repo called `hooks`. In that directory you can place an arbitrary number of Python source files. If those source files define certain functions, these functions will be called at the appropriate time.
5
6
7 ## Example
8
9 `hooks/my_awesome_notification.py`:
10
11 from my_awesome_notification_system import post_message
12
13 def node_apply_start(repo, node, interactive=False, **kwargs):
14 post_message("Starting apply on {}, everything is gonna be OK!".format(node.name))
15
16 <div class="alert">Always define your hooks with <code>**kwargs</code> so we can pass in more information in future updates without breaking your hook.</div>
17
18 <br>
19
20 ## Functions
21
22 This is a list of all functions a hook file may implement.
23
24 ---
25
26 **`action_run_start(repo, node, action, **kwargs)`**
27
28 Called each time a `bw apply` command reaches a new action.
29
30 `repo` The current repository (instance of `bundlewrap.repo.Repository`).
31
32 `node` The current node (instance of `bundlewrap.node.Node`).
33
34 `action` The current action.
35
36 ---
37
38 **`action_run_end(repo, node, action, duration=None, status=None, **kwargs)`**
39
40 Called each time a `bw apply` command completes processing an action.
41
42 `repo` The current repository (instance of `bundlewrap.repo.Repository`).
43
44 `node` The current node (instance of `bundlewrap.node.Node`).
45
46 `action` The current action.
47
48 `duration` How long the action was running (timedelta).
49
50 `status` One of `bundlewrap.items.Item.STATUS_FAILED`, `bundlewrap.items.Item.STATUS_SKIPPED`, or `bundlewrap.items.Item.STATUS_ACTION_SUCCEEDED`.
51
52 ---
53
54 **`apply_start(repo, target, nodes, interactive=False, **kwargs)`**
55
56 Called when you start a `bw apply` command.
57
58 `repo` The current repository (instance of `bundlewrap.repo.Repository`).
59
60 `target` The group or node name you gave on the command line.
61
62 `nodes` A list of node objects affected (list of `bundlewrap.node.Node` instances).
63
64 `interactive` Indicates whether the apply is interactive or not.
65
66 ---
67
68 **`apply_end(repo, target, nodes, duration=None, **kwargs)`**
69
70 Called when a `bw apply` command completes.
71
72 `repo` The current repository (instance of `bundlewrap.repo.Repository`).
73
74 `target` The group or node name you gave on the command line.
75
76 `nodes` A list of node objects affected (list of `bundlewrap.node.Node` instances).
77
78 `duration` How long the apply took (timedelta).
79
80 ---
81
82 **`item_apply_start(repo, node, item, **kwargs)`**
83
84 Called each time a `bw apply` command reaches a new item.
85
86 `repo` The current repository (instance of `bundlewrap.repo.Repository`).
87
88 `node` The current node (instance of `bundlewrap.node.Node`).
89
90 `item` The current item.
91
92 ---
93
94 **`item_apply_end(repo, node, item, duration=None, status_code=None, status_before=None, status_after=None, **kwargs)`**
95
96 Called each time a `bw apply` command completes processing an item.
97
98 `repo` The current repository (instance of `bundlewrap.repo.Repository`).
99
100 `node` The current node (instance of `bundlewrap.node.Node`).
101
102 `item` The current item.
103
104 `duration` How long the apply took (timedelta).
105
106 `status_code` One of `bundlewrap.items.Item.STATUS_FAILED`, `bundlewrap.items.Item.STATUS_SKIPPED`, `bundlewrap.items.Item.STATUS_OK`, or `bundlewrap.items.Item.STATUS_FIXED`.
107
108 `status_before` An instance of `bundlewrap.items.ItemStatus`.
109
110 `status_after` See `status_before`.
111
112 ---
113
114 **`node_apply_start(repo, node, interactive=False, **kwargs)`**
115
116 Called each time a `bw apply` command reaches a new node.
117
118 `repo` The current repository (instance of `bundlewrap.repo.Repository`).
119
120 `node` The current node (instance of `bundlewrap.node.Node`).
121
122 `interactive` `True` if this is an interactive apply run.
123
124 ---
125
126 **`node_apply_end(repo, node, duration=None, interactive=False, result=None, **kwargs)`**
127
128 Called each time a `bw apply` command finishes processing a node.
129
130 `repo` The current repository (instance of `bundlewrap.repo.Repository`).
131
132 `node` The current node (instance of `bundlewrap.node.Node`).
133
134 `duration` How long the apply took (timedelta).
135
136 `interactive` `True` if this was an interactive apply run.
137
138 `result` An instance of `bundlewrap.node.ApplyResult`.
139
140 ---
141
142 **`node_run_start(repo, node, command, **kwargs)`**
143
144 Called each time a `bw run` command reaches a new node.
145
146 `repo` The current repository (instance of `bundlewrap.repo.Repository`).
147
148 `node` The current node (instance of `bundlewrap.node.Node`).
149
150 `command` The command that will be run on the node.
151
152 ---
153
154 **`node_run_end(repo, node, command, duration=None, return_code=None, stdout="", stderr="", **kwargs)`**
155
156 Called each time a `bw run` command finishes on a node.
157
158 `repo` The current repository (instance of `bundlewrap.repo.Repository`).
159
160 `node` The current node (instance of `bundlewrap.node.Node`).
161
162 `command` The command that was run on the node.
163
164 `duration` How long it took to run the command (timedelta).
165
166 `return_code` Return code of the remote command.
167
168 `stdout` The captured stdout stream of the remote command.
169
170 `stderr` The captured stderr stream of the remote command.
171
172 ---
173
174 **`run_start(repo, target, nodes, command, **kwargs)`**
175
176 Called each time a `bw run` command starts.
177
178 `repo` The current repository (instance of `bundlewrap.repo.Repository`).
179
180 `target` The group or node name you gave on the command line.
181
182 `nodes` A list of node objects affected (list of `bundlewrap.node.Node` instances).
183
184 `command` The command that will be run on the node.
185
186 ---
187
188 **`run_end(repo, target, nodes, command, duration=None, **kwargs)`**
189
190 Called each time a `bw run` command finishes.
191
192 `repo` The current repository (instance of `bundlewrap.repo.Repository`).
193
194 `target` The group or node name you gave on the command line.
195
196 `nodes` A list of node objects affected (list of `bundlewrap.node.Node` instances).
197
198 `command` The command that was run.
199
200 `duration` How long it took to run the command on all nodes (timedelta).
201
202 ---
203
204 **`test(repo, **kwargs)`**
205
206 Called at the end of a full `bw test`.
207
208 `repo` The current repository (instance of `bundlewrap.repo.Repository`).
209
210 ---
211
212 **`test_node(repo, node, **kwargs)`**
213
214 Called during `bw test` for each node.
215
216 `repo` The current repository (instance of `bundlewrap.repo.Repository`).
217
218 `node` The current node (instance of `bundlewrap.node.Node`).
0 <style>.bs-sidebar { display: none; }</style>
1
2 Repository layout
3 =================
4
5 A BundleWrap repository contains everything you need to construct the configuration for your systems.
6
7 This page describes the various subdirectories and files that can exist inside a repo.
8
9 <br>
10
11 <table>
12 <tr>
13 <td><a href="/repo/nodes.py">nodes.py</a></td>
14 <td>This file tells BundleWrap what nodes (servers, VMs, ...) there are in your environment and lets you configure options such as hostnames.</td>
15 </tr>
16 <tr>
17 <td><a href="/repo/groups.py">groups.py</a></td>
18 <td>This file allows you to organize your nodes into groups.</td>
19 </tr>
20 <tr>
21 <td><a href="/repo/bundles">bundles/</a></td>
22 <td>This required subdirectory contains the bulk of your configuration, organized into bundles of related items.</td>
23 </tr>
24 <tr>
25 <td>data/</td>
26 <td>This optional subdirectory contains data files that are not generic enough to be included in bundles (which are meant to be shareable).</td>
27 </tr>
28 <tr>
29 <td><a href="/repo/hooks">hooks/</a></td>
30 <td>This optional subdirectory contains hooks you can use to act on certain events when using BundleWrap.</td>
31 </tr>
32 <tr>
33 <td><a href="/repo/dev_item">items/</a></td>
34 <td>This optional subdirectory contains the code for your custom item types.</td>
35 </tr>
36 <tr>
37 <td><a href="/repo/libs">libs/</a></td>
38 <td>This optional subdirectory contains reusable custom code for your bundles.</td>
39 </tr>
40 </table>
0 <style>.bs-sidebar { display: none; }</style>
1
2 # Custom code
3
4 The `libs/` subdirectory of your repository provides a convenient place to put reusable code used throughout your bundles and hooks.
5
6 A Python module called `example.py` placed in this directory will be available as `repo.libs.example` wherever you have access to a `bundlewrap.repo.Repository` object. In `nodes.py` and `groups.py`, you can do the same thing with just `libs.example`.
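
For example, a hypothetical `libs/example.py` might contain a small helper for use in templates:

```python
# libs/example.py (hypothetical) -- available in bundles as
# repo.libs.example, e.g. repo.libs.example.comma_list(...)
def comma_list(items):
    # render any iterable as a sorted, comma-separated string
    return ", ".join(sorted(items))
```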
7
8 <div class="alert alert-warning">Only single files, no subdirectories or packages, are supported at the moment.</div>
0 # nodes.py
1
2 This file lets you specify or dynamically build a list of nodes in your environment.
3
4 All you have to do here is define a Python dictionary called `nodes`. It should look something like this:
5
6 nodes = {
7 "node-1": {
8 'hostname': "node-1.example.com",
9 },
10 }
11
12
13
14 With BundleWrap, the DNS name and the internal identifier for a node ("node-1" in this case) are two separate things.
15
16 All fields for a node (including `hostname`) are optional. If you don't give one, BundleWrap will attempt to use the internal identifier to connect to a node:
17
18 nodes = {
19 "node-1.example.com": {},
20 }
21
22 <br>
23
24 # Dynamic node list
25
26 You are not confined to the static way of defining a node list as shown above. You can also assemble the `nodes` dictionary dynamically:
27
28 def get_my_nodes_from_ldap():
29 [...]
30 return ldap_nodes
31
32 nodes = get_my_nodes_from_ldap()
33
34 <br>
35
36 # Node attribute reference
37
38 This section is a reference for all possible attributes you can define for a node:
39
40 nodes = {
41 'node-1': {
42 # THIS PART IS EXPLAINED HERE
43 },
44 }
45
46 All attributes can also be set at the group level, unless noted otherwise.
47
48 <br>
49
50 ## Regular attributes
51
52 ### bundles
53
54 A list of [bundle names](bundles.md) to be assigned to this node. Bundles set at [group level](groups.py.md) will be added.
55
56 <br>
57
58 ### dummy
59
60 Set this to `True` to prevent BundleWrap from creating items for and connecting to this node. This is useful for unmanaged nodes because you can still assign them bundles and metadata like regular nodes and access that from managed nodes (e.g. for monitoring).
61
62 <br>
63
64 ### hostname
65
66 A string used as a DNS name when connecting to this node. May also be an IP address.
67
68 <div class="alert">The username and SSH private key for connecting to the node cannot be configured in BundleWrap. If you need to customize those, BundleWrap will honor your <code>~/.ssh/config</code>.</div>
69
70 Cannot be set at group level.
71
72 <br>

73 ### metadata
74
75 This can be a dictionary of arbitrary data (some type restrictions apply). You can access it from your templates as `node.metadata`. Use this to attach custom data (such as a list of IP addresses that should be configured on the target node) to the node. Note that you can also define metadata at the [group level](groups.py.md#metadata), but node metadata has higher priority.
76
77 You are restricted to using only the following types in metadata:
78
79 * `dict`
80 * `list`
81 * `tuple`
82 * `set`
83 * `bool`
84 * `text` / `unicode`
85 * `bytes` / `str` (only if decodable into text using UTF-8)
86 * `int`
87 * `None`
88 * `bundlewrap.utils.Fault`
89
90 <div class="alert alert-info">Also see the <a href="../groups.py#metadata">documentation for group.metadata</a> for more information.</div>
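To illustrate (all keys and values here are invented), a metadata dict that stays within the allowed types might look like this:

```python
nodes = {
    "node-1": {
        'metadata': {
            'interfaces': {                        # dict
                'eth0': ["10.0.0.1", "10.0.0.2"],  # list of str
            },
            'backup': True,                        # bool
            'retention_days': 14,                  # int
            'comment': None,                       # None
        },
    },
}
```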
91
92 <br>
93
94 ### os
95
96 Defaults to `"linux"`.
97
98 A list of supported OSes can be obtained with `bw debug -n ANY_NODE_NAME -c "print(node.OS_KNOWN)"`.
99
100 <br>
101
102 ### os_version
103
104 Set this to your OS version. Note that it must be a tuple of integers, e.g. if you're running Ubuntu 16.04 LTS, it should be `(16, 4)`.
105
106 Tuples of integers can be used for easy comparison of versions: `(12, 4) < (16, 4)`
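Because Python compares tuples element by element, version checks in bundles and templates read naturally; a quick sketch:

```python
node_version = (16, 4)  # e.g. Ubuntu 16.04 LTS

# element-wise comparison: 16 == 16, then 4 >= 0
is_at_least_xenial = node_version >= (16, 0)
# 16 < 18, so the comparison is decided by the first element
is_older_than_bionic = node_version < (18, 4)

print(is_at_least_xenial, is_older_than_bionic)  # True True
```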
107
108 <br>
109
110 ## OS compatibility overrides
111
112 ### cmd_wrapper_outer
113
114 Used whenever a command needs to be run on a node. Defaults to `"sudo sh -c {}"`. `{}` will be replaced by the quoted command to be run (after `cmd_wrapper_inner` has been applied).
115
116 You will need to override this if you use something other than `sudo` (e.g. `doas`) to gain root privileges on the node.
117
118 <br>
119
120 ### cmd_wrapper_inner
121
122 Used whenever a command needs to be run on a node. Defaults to `"export LANG=C; {}"`. `{}` will be replaced by the command to be run.
123
124 You will need to override this if the shell on your node sets environment variables differently.
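As an illustration (the exact wrapper strings below are assumptions about a particular target system, not BundleWrap defaults), a node using `doas` and a csh-style shell could override both wrappers:

```python
nodes = {
    "node-1": {
        # assumption: root privileges via doas instead of the default sudo
        'cmd_wrapper_outer': "doas sh -c {}",
        # assumption: csh-style environment assignment instead of `export`
        'cmd_wrapper_inner': "setenv LANG C; {}",
    },
}
```

Note that both wrappers must keep the `{}` placeholder, since that is where BundleWrap substitutes the actual command.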
125
126 <br>
127
128 ### use_shadow_passwords
129
130 <div class="alert alert-warning">Changing this setting will affect the security of the target system. Only do this for legacy systems that don't support shadow passwords.</div>
131
132 This setting affects how the [user item](../items/user.md) operates. If set to `False`, password hashes will be written directly to `/etc/passwd` and thus be readable by any user on the system. If the OS of the node is set to "openbsd", this setting has no effect, as `master.shadow` is always used.
0 # Plugins
1
2 The plugin system in BundleWrap is an easy way of integrating third-party code into your repository.
3
4 <div class="alert alert-warning">While plugins are subject to some superficial code review by BundleWrap developers before being accepted, we cannot make any guarantees as to the quality and trustworthiness of plugins. Always do your due diligence before running third-party code.</div>
5
6 <br>
7
8 ## Finding plugins
9
10 It's as easy as `bw repo plugin search <term>`. Or you can browse [plugins.bundlewrap.org](http://plugins.bundlewrap.org).
11
12 <br>
13
14 ## Installing plugins
15
16 You probably guessed it: `bw repo plugin install <plugin>`
17
18 Installing the first plugin in your repo will create a file called `plugins.json`. You should commit this file (and any files installed by the plugin of course) to version control.
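A typical first install might look like this (the plugin name is made up for illustration; substitute whatever the search turns up):

<pre><code class="nohighlight">$ bw repo plugin search template
$ bw repo plugin install some_plugin
$ git add plugins.json bundles/some_plugin
$ git commit -m "install some_plugin"</code></pre>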
19
20 <div class="alert alert-info">Avoid editing files provided by plugins at all costs. Local modifications will prevent future updates to the plugin.</div>
21
22 <br>
23
24 ## Updating plugins
25
26 You can update all installed plugins with this command: `bw repo plugin update`
27
28 <br>
29
30 ## Removing a plugin
31
32 `bw repo plugin remove <plugin>`
33
34 <br>
35
36 ## Writing your own
37
38 See the [guide on publishing your own plugins](../guide/dev_plugin.md).
0 <style>.bs-sidebar { display: none; }</style>
1
2 # requirements.txt
3
4 This optional file can be used to ensure minimum required versions of BundleWrap and other Python packages on every machine that uses a repository.
5
6 `bw repo create` will initially add your current version of BundleWrap:
7
8 <pre><code class="nohighlight">bundlewrap>=2.4.0</code></pre>
9
10 You can add more packages as you like (you do not have to specify a version for each one); just put each package on a separate line. When someone then tries to use your repo without one of those packages installed, BundleWrap will exit early with a friendly error message:
11
12 <pre><code class="nohighlight">! Python package 'foo' is listed in requirements.txt, but wasn't found. You probably have to install it with `pip install foo`.</code></pre>
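A slightly larger file might look like this (the extra package names are arbitrary examples):

<pre><code class="nohighlight">bundlewrap>=2.4.0
requests>=2.0.0
Mako</code></pre>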
0 site_name: BundleWrap
1 docs_dir: content
2 site_dir: build
3 theme: cinder
4 repo_url: "https://github.com/bundlewrap/bundlewrap"
5 remote_name: github
6 copyright: "BundleWrap is published under the <a href='https://github.com/bundlewrap/bundlewrap/blob/master/LICENSE'>GPL license</a>"
7 google_analytics: ['UA-33891245-2', 'docs.bundlewrap.org']
8 pages:
9 - <i class="fa fa-home"></i>: index.md
10 - Guides:
11 - Quickstart: guide/quickstart.md
12 - Installation: guide/installation.md
13 - CLI: guide/cli.md
14 - Environment Variables: guide/env.md
15 - File templates: guide/item_file_templates.md
16 - Handling secrets: guide/secrets.md
17 - Locking: guide/locks.md
18 - Custom items: guide/dev_item.md
19 - Writing plugins: guide/dev_plugin.md
20 - Python API: guide/api.md
21 - OS compatibility: guide/os_compatibility.md
22 - Migrating to 2.0: guide/migrate_12.md
23 - Repository:
24 - Overview: repo/layout.md
25 - nodes.py: repo/nodes.py.md
26 - groups.py: repo/groups.py.md
27 - requirements.txt: repo/requirements.txt.md
28 - bundles/: repo/bundles.md
29 - hooks/: repo/hooks.md
30 - libs/: repo/libs.md
31 - Plugins: repo/plugins.md
32 - Items:
33 - action: items/action.md
34 - directory: items/directory.md
35 - file: items/file.md
36 - group: items/group.md
37 - pkg_apt: items/pkg_apt.md
38 - pkg_dnf: items/pkg_dnf.md
39 - pkg_pacman: items/pkg_pacman.md
40 - pkg_pip: items/pkg_pip.md
41 - pkg_yum: items/pkg_yum.md
42 - pkg_zypper: items/pkg_zypper.md
43 - postgres_db: items/postgres_db.md
44 - postgres_role: items/postgres_role.md
45 - svc_openbsd: items/svc_openbsd.md
46 - svc_systemd: items/svc_systemd.md
47 - svc_systemv: items/svc_systemv.md
48 - svc_upstart: items/svc_upstart.md
49 - symlink: items/symlink.md
50 - user: items/user.md
51 - Misc:
52 - About: misc/about.md
53 - Glossary: misc/glossary.md
54 - FAQ: misc/faq.md
55 - Alternatives: misc/alternatives.md
56 - Contributing: misc/contributing.md
0 # deps in this file are for local dev purposes only
1 mkdocs
2 mkdocs-cinder
3 pytest
4 wheel
0 [flake8]
1 max-line-length = 100
2 max-complexity = 10
3
4 [tool:pytest]
5 python_files=*.py
6 python_classes=Test
7 python_functions=test_*
8
9 [bdist_wheel]
10 universal = 1
0 from sys import version_info
1
2 from setuptools import find_packages, setup
3
4
5 dependencies = [
6 "cryptography",
7 "Jinja2",
8 "Mako",
9 "passlib",
10 "requests >= 1.0.0",
11 ]
12 if version_info < (3, 2, 0):
13 dependencies.append("futures")
14
15 setup(
16 name="bundlewrap",
17 version="2.12.2",
18 description="Config management with Python",
19 long_description=(
20 "By allowing for easy and low-overhead config management, BundleWrap fills the gap between complex deployments using Chef or Puppet and old school system administration over SSH.\n"
21 "While most other config management systems rely on a client-server architecture, BundleWrap works off a repository cloned to your local machine. It then automates the process of SSHing into your servers and making sure everything is configured the way it's supposed to be. You won't have to install anything on managed servers."
22 ),
23 author="Torsten Rehn",
24 author_email="torsten@rehn.email",
25 license="GPLv3",
26 url="http://bundlewrap.org",
27 packages=find_packages(),
28 entry_points={
29 'console_scripts': [
30 "bw=bundlewrap.cmdline:main",
31 ],
32 },
33 keywords=["configuration", "config", "management"],
34 classifiers=[
35 "Development Status :: 5 - Production/Stable",
36 "Environment :: Console",
37 "Intended Audience :: System Administrators",
38 "License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
39 "Natural Language :: English",
40 "Operating System :: POSIX :: Linux",
41 "Programming Language :: Python",
42 "Programming Language :: Python :: 2.7",
43 "Programming Language :: Python :: 3.3",
44 "Programming Language :: Python :: 3.4",
45 "Programming Language :: Python :: 3.5",
46 "Programming Language :: Python :: 3.6",
47 "Topic :: System :: Installation/Setup",
48 "Topic :: System :: Systems Administration",
49 ],
50 install_requires=dependencies,
51 extras_require={ # used for wheels
52 ':python_version=="2.7"': ["futures"],
53 },
54 zip_safe=False,
55 )
0 from os.path import exists, join
1
2 from bundlewrap.utils.testing import host_os, make_repo, run
3
4
5 def test_apply(tmpdir):
6 make_repo(
7 tmpdir,
8 bundles={
9 "bundle1": {
10 'files': {
11 join(str(tmpdir), "test"): {
12 'content': "test",
13 },
14 },
15 },
16 },
17 groups={
18 "adhoc-localhost": {
19 'bundles': ["bundle1"],
20 'member_patterns': ["localhost"],
21 'os': host_os(),
22 },
23 },
24 )
25
26 assert not exists(join(str(tmpdir), "test"))
27 stdout, stderr, rcode = run("bw -A apply localhost", path=str(tmpdir))
28 assert rcode == 0
29 assert exists(join(str(tmpdir), "test"))
30
31
32 def test_apply_fail(tmpdir):
33 make_repo(
34 tmpdir,
35 bundles={
36 "bundle1": {
37 'files': {
38 join(str(tmpdir), "test"): {
39 'content': "test",
40 },
41 },
42 },
43 },
44 groups={
45 "adhoc-localhost": {
46 'bundles': ["bundle1"],
47 'member_patterns': ["localhost"],
48 'os': host_os(),
49 },
50 },
51 )
52
53 assert not exists(join(str(tmpdir), "test"))
54 stdout, stderr, rcode = run("bw apply localhost", path=str(tmpdir))
55 assert rcode == 1
56 assert not exists(join(str(tmpdir), "test"))
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from bundlewrap.utils.testing import host_os, make_repo, run
4
5
6 def test_action_success(tmpdir):
7 make_repo(
8 tmpdir,
9 bundles={
10 "test": {
11 'actions': {
12 "success": {
13 'command': "true",
14 },
15 },
16 },
17 },
18 nodes={
19 "localhost": {
20 'bundles': ["test"],
21 'os': host_os(),
22 },
23 },
24 )
25 run("bw apply localhost", path=str(tmpdir))
26
27
28 def test_action_fail(tmpdir):
29 make_repo(
30 tmpdir,
31 bundles={
32 "test": {
33 'actions': {
34 "failure": {
35 'command': "false",
36 },
37 },
38 },
39 },
40 nodes={
41 "localhost": {
42 'bundles': ["test"],
43 'os': host_os(),
44 },
45 },
46 )
47 run("bw apply localhost", path=str(tmpdir))
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from os.path import exists, join
4
5 from bundlewrap.utils.testing import host_os, make_repo, run
6
7
8 def test_skip_bundle(tmpdir):
9 make_repo(
10 tmpdir,
11 bundles={
12 "test": {
13 'files': {
14 join(str(tmpdir), "foo"): {
15 'content': "nope",
16 },
17 },
18 },
19 },
20 nodes={
21 "localhost": {
22 'bundles': ["test"],
23 'os': host_os(),
24 },
25 },
26 )
27 run("bw apply --skip bundle:test localhost", path=str(tmpdir))
28 assert not exists(join(str(tmpdir), "foo"))
29
30
31 def test_skip_group(tmpdir):
32 make_repo(
33 tmpdir,
34 bundles={
35 "test": {
36 'files': {
37 join(str(tmpdir), "foo"): {
38 'content': "nope",
39 },
40 },
41 },
42 },
43 nodes={
44 "localhost": {
45 'bundles': ["test"],
46 'os': host_os(),
47 },
48 },
49 groups={
50 "foo": {'members': ["localhost"]},
51 },
52 )
53 run("bw apply --skip group:foo localhost", path=str(tmpdir))
54 assert not exists(join(str(tmpdir), "foo"))
55
56
57 def test_skip_id(tmpdir):
58 make_repo(
59 tmpdir,
60 bundles={
61 "test": {
62 'files': {
63 join(str(tmpdir), "foo"): {
64 'content': "nope",
65 },
66 },
67 },
68 },
69 nodes={
70 "localhost": {
71 'bundles': ["test"],
72 'os': host_os(),
73 },
74 },
75 )
76 run("bw apply --skip file:{} localhost".format(join(str(tmpdir), "foo")), path=str(tmpdir))
77 assert not exists(join(str(tmpdir), "foo"))
78
79
80 def test_skip_node(tmpdir):
81 make_repo(
82 tmpdir,
83 bundles={
84 "test": {
85 'files': {
86 join(str(tmpdir), "foo"): {
87 'content': "nope",
88 },
89 },
90 },
91 },
92 nodes={
93 "localhost": {
94 'bundles': ["test"],
95 'os': host_os(),
96 },
97 },
98 )
99 run("bw apply --skip node:localhost localhost", path=str(tmpdir))
100 assert not exists(join(str(tmpdir), "foo"))
101
102
103 def test_skip_tag(tmpdir):
104 make_repo(
105 tmpdir,
106 bundles={
107 "test": {
108 'files': {
109 join(str(tmpdir), "foo"): {
110 'content': "nope",
111 'tags': ["nope"],
112 },
113 },
114 },
115 },
116 nodes={
117 "localhost": {
118 'bundles': ["test"],
119 'os': host_os(),
120 },
121 },
122 )
123 run("bw apply --skip tag:nope localhost", path=str(tmpdir))
124 assert not exists(join(str(tmpdir), "foo"))
125
126
127 def test_skip_type(tmpdir):
128 make_repo(
129 tmpdir,
130 bundles={
131 "test": {
132 'files': {
133 join(str(tmpdir), "foo"): {
134 'content': "nope",
135 },
136 },
137 },
138 },
139 nodes={
140 "localhost": {
141 'bundles': ["test"],
142 'os': host_os(),
143 },
144 },
145 )
146 run("bw apply --skip file: localhost", path=str(tmpdir))
147 assert not exists(join(str(tmpdir), "foo"))
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from base64 import b64encode
4 from os import mkdir
5 from os.path import exists, join
6
7 from bundlewrap.utils.testing import host_os, make_repo, run
8
9
10 def test_purge(tmpdir):
11 make_repo(
12 tmpdir,
13 bundles={
14 "test": {
15 'files': {
16 join(str(tmpdir), "purgedir", "managed_file"): {
17 'content': "content",
18 },
19 join(str(tmpdir), "purgedir", "subdir1", "managed_file"): {
20 'content': "content",
21 },
22 },
23 'directories': {
24 join(str(tmpdir), "purgedir"): {
25 'purge': True,
26 },
27 },
28 },
29 },
30 nodes={
31 "localhost": {
32 'bundles': ["test"],
33 'os': host_os(),
34 },
35 },
36 )
37
38 mkdir(join(str(tmpdir), "purgedir"))
39 mkdir(join(str(tmpdir), "purgedir", "subdir2"))
40 mkdir(join(str(tmpdir), "purgedir", "subdir3"))
41
42 with open(join(str(tmpdir), "purgedir", "unmanaged_file"), 'w') as f:
43 f.write("content")
44 with open(join(str(tmpdir), "purgedir", "subdir3", "unmanaged_file"), 'w') as f:
45 f.write("content")
46
47 run("bw apply localhost", path=str(tmpdir))
48
49 assert not exists(join(str(tmpdir), "purgedir", "unmanaged_file"))
50 assert not exists(join(str(tmpdir), "purgedir", "subdir3", "unmanaged_file"))
51 assert not exists(join(str(tmpdir), "purgedir", "subdir2"))
52 assert exists(join(str(tmpdir), "purgedir", "subdir1", "managed_file"))
53 assert exists(join(str(tmpdir), "purgedir", "managed_file"))
54
55
56 def test_purge_special_chars(tmpdir):
57 make_repo(
58 tmpdir,
59 bundles={
60 "test": {
61 'files': {
62 join(str(tmpdir), "purgedir", "mänäged_file"): {
63 'content': "content",
64 },
65 join(str(tmpdir), "purgedir", "managed_`id`_file"): {
66 'content': "content",
67 },
68 },
69 'directories': {
70 join(str(tmpdir), "purgedir"): {
71 'purge': True,
72 },
73 },
74 },
75 },
76 nodes={
77 "localhost": {
78 'bundles': ["test"],
79 'os': host_os(),
80 },
81 },
82 )
83
84 mkdir(join(str(tmpdir), "purgedir"))
85
86 with open(join(str(tmpdir), "purgedir", "unmänäged_file"), 'w') as f:
87 f.write("content")
88 with open(join(str(tmpdir), "purgedir", "unmanaged_`uname`_file"), 'w') as f:
89 f.write("content")
90 with open(join(str(tmpdir), "purgedir", "unmanaged_:'_file"), 'w') as f:
91 f.write("content")
92
93 run("bw apply localhost", path=str(tmpdir))
94
95 assert not exists(join(str(tmpdir), "purgedir", "unmänäged_file"))
96 assert not exists(join(str(tmpdir), "purgedir", "unmanaged_`uname`_file"))
97 assert not exists(join(str(tmpdir), "purgedir", "unmanaged_:'_file"))
98 assert exists(join(str(tmpdir), "purgedir", "mänäged_file"))
99 assert exists(join(str(tmpdir), "purgedir", "managed_`id`_file"))
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from base64 import b64encode
4 from os.path import exists, join
5
6 from bundlewrap.utils.testing import host_os, make_repo, run
7
8
9 def test_any_content_create(tmpdir):
10 make_repo(
11 tmpdir,
12 bundles={
13 "test": {
14 'files': {
15 join(str(tmpdir), "foo"): {
16 'content_type': 'any',
17 },
18 },
19 },
20 },
21 nodes={
22 "localhost": {
23 'bundles': ["test"],
24 'os': host_os(),
25 },
26 },
27 )
28
29 run("bw apply localhost", path=str(tmpdir))
30 with open(join(str(tmpdir), "foo"), 'rb') as f:
31 content = f.read()
32 assert content == b""
33
34
35 def test_any_content_exists(tmpdir):
36 make_repo(
37 tmpdir,
38 bundles={
39 "test": {
40 'files': {
41 join(str(tmpdir), "foo"): {
42 'content_type': 'any',
43 },
44 },
45 },
46 },
47 nodes={
48 "localhost": {
49 'bundles': ["test"],
50 'os': host_os(),
51 },
52 },
53 )
54 with open(join(str(tmpdir), "foo"), 'wb') as f:
55 f.write(b"existing content")
56
57 run("bw apply localhost", path=str(tmpdir))
58 with open(join(str(tmpdir), "foo"), 'rb') as f:
59 content = f.read()
60 assert content == b"existing content"
61
62
63 def test_binary_inline_content(tmpdir):
64 make_repo(
65 tmpdir,
66 bundles={
67 "test": {
68 'files': {
69 join(str(tmpdir), "foo.bin"): {
70 'content_type': 'base64',
71 'content': b64encode("ö".encode('latin-1')),
72 },
73 },
74 },
75 },
76 nodes={
77 "localhost": {
78 'bundles': ["test"],
79 'os': host_os(),
80 },
81 },
82 )
83 run("bw apply localhost", path=str(tmpdir))
84 with open(join(str(tmpdir), "foo.bin"), 'rb') as f:
85 content = f.read()
86 assert content.decode('latin-1') == "ö"
87
88
89 def test_binary_template_content(tmpdir):
90 make_repo(
91 tmpdir,
92 bundles={
93 "test": {
94 'files': {
95 join(str(tmpdir), "foo.bin"): {
96 'encoding': 'latin-1',
97 },
98 },
99 },
100 },
101 nodes={
102 "localhost": {
103 'bundles': ["test"],
104 'os': host_os(),
105 },
106 },
107 )
108 with open(join(str(tmpdir), "bundles", "test", "files", "foo.bin"), 'wb') as f:
109 f.write("ö".encode('utf-8'))
110
111 run("bw apply localhost", path=str(tmpdir))
112 with open(join(str(tmpdir), "foo.bin"), 'rb') as f:
113 content = f.read()
114 assert content.decode('latin-1') == "ö"
115
116
117 def test_delete(tmpdir):
118 with open(join(str(tmpdir), "foo"), 'w') as f:
119 f.write("foo")
120 make_repo(
121 tmpdir,
122 bundles={
123 "test": {
124 'files': {
125 join(str(tmpdir), "foo"): {
126 'delete': True,
127 },
128 },
129 },
130 },
131 nodes={
132 "localhost": {
133 'bundles': ["test"],
134 'os': host_os(),
135 },
136 },
137 )
138 run("bw apply localhost", path=str(tmpdir))
139 assert not exists(join(str(tmpdir), "foo"))
140
141
142 def test_mako_template_content(tmpdir):
143 make_repo(
144 tmpdir,
145 bundles={
146 "test": {
147 'files': {
148 join(str(tmpdir), "foo"): {
149 'content_type': 'mako',
150 'content': "${node.name}",
151 },
152 },
153 },
154 },
155 nodes={
156 "localhost": {
157 'bundles': ["test"],
158 'os': host_os(),
159 },
160 },
161 )
162 run("bw apply localhost", path=str(tmpdir))
163 with open(join(str(tmpdir), "foo"), 'rb') as f:
164 content = f.read()
165 assert content == b"localhost"
166
167
168 def test_mako_template_content_with_secret(tmpdir):
169 make_repo(
170 tmpdir,
171 bundles={
172 "test": {
173 'files': {
174 join(str(tmpdir), "foo"): {
175 'content_type': 'mako',
176 'content': "${repo.vault.password_for('testing')}",
177 },
178 },
179 },
180 },
181 nodes={
182 "localhost": {
183 'bundles': ["test"],
184 'os': host_os(),
185 },
186 },
187 )
188 run("bw apply localhost", path=str(tmpdir))
189 with open(join(str(tmpdir), "foo"), 'rb') as f:
190 content = f.read()
191 assert content == b"faCTT76kagtDuZE5wnoiD1CxhGKmbgiX"
192
193
194 def test_text_template_content(tmpdir):
195 make_repo(
196 tmpdir,
197 bundles={
198 "test": {
199 'files': {
200 join(str(tmpdir), "foo"): {
201 'content_type': 'text',
202 'content': "${node.name}",
203 },
204 },
205 },
206 },
207 nodes={
208 "localhost": {
209 'bundles': ["test"],
210 'os': host_os(),
211 },
212 },
213 )
214 run("bw apply localhost", path=str(tmpdir))
215 with open(join(str(tmpdir), "foo"), 'rb') as f:
216 content = f.read()
217 assert content == b"${node.name}"
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2 from os.path import join
3
4 from bundlewrap.utils.testing import host_os, make_repo, run
5
6
7 def test_precedes(tmpdir):
8 make_repo(
9 tmpdir,
10 bundles={
11 "test": {
12 'files': {
13 join(str(tmpdir), "file"): {
14 'content': "1\n",
15 'triggered': True,
16 'precedes': ["tag:tag1"],
17 },
18 },
19 'actions': {
20 "action2": {
21 'command': "echo 2 >> {}".format(join(str(tmpdir), "file")),
22 'tags': ["tag1"],
23 },
24 "action3": {
25 'command': "echo 3 >> {}".format(join(str(tmpdir), "file")),
26 'tags': ["tag1"],
27 'needs': ["action:action2"],
28 },
29 },
30 },
31 },
32 nodes={
33 "localhost": {
34 'bundles': ["test"],
35 'os': host_os(),
36 },
37 },
38 )
39 run("bw apply localhost", path=str(tmpdir))
40 with open(join(str(tmpdir), "file")) as f:
41 content = f.read()
42 assert content == "1\n2\n3\n"
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from os.path import exists, join
4
5 from bundlewrap.utils.testing import host_os, make_repo, run
6
7
8 def test_fault_content(tmpdir):
9 make_repo(
10 tmpdir,
11 bundles={
12 "test": {},
13 },
14 nodes={
15 "localhost": {
16 'bundles': ["test"],
17 'os': host_os(),
18 },
19 },
20 )
21
22 with open(join(str(tmpdir), "bundles", "test", "items.py"), 'w') as f:
23 f.write("""
24 files = {{
25 "{}": {{
26 'content': repo.vault.password_for("test"),
27 }},
28 }}
29 """.format(join(str(tmpdir), "secret")))
30
31 run("bw apply localhost", path=str(tmpdir))
32 with open(join(str(tmpdir), "secret")) as f:
33 content = f.read()
34 assert content == "sQDdTXu5OmCki8gdGgYdfTxooevckXcB"
35
36
37 def test_fault_content_mako(tmpdir):
38 make_repo(
39 tmpdir,
40 bundles={
41 "test": {
42 'files': {
43 join(str(tmpdir), "secret"): {
44 'content': "${repo.vault.password_for('test')}",
45 'content_type': 'mako',
46 },
47 },
48 },
49 },
50 nodes={
51 "localhost": {
52 'bundles': ["test"],
53 'os': host_os(),
54 },
55 },
56 )
57
58 run("bw apply localhost", path=str(tmpdir))
59 with open(join(str(tmpdir), "secret")) as f:
60 content = f.read()
61 assert content == "sQDdTXu5OmCki8gdGgYdfTxooevckXcB"
62
63
64 def test_fault_content_mako_metadata(tmpdir):
65 make_repo(
66 tmpdir,
67 bundles={
68 "test": {
69 'files': {
70 join(str(tmpdir), "secret"): {
71 'content': "${node.metadata['secret']}",
72 'content_type': 'mako',
73 },
74 },
75 },
76 },
77 )
78
79 with open(join(str(tmpdir), "nodes.py"), 'w') as f:
80 f.write("""
81 nodes = {{
82 "localhost": {{
83 'bundles': ["test"],
84 'metadata': {{'secret': vault.password_for("test")}},
85 'os': "{}",
86 }},
87 }}
88 """.format(host_os()))
89
90 run("bw apply localhost", path=str(tmpdir))
91 with open(join(str(tmpdir), "secret")) as f:
92 content = f.read()
93 assert content == "sQDdTXu5OmCki8gdGgYdfTxooevckXcB"
94
95
96 def test_fault_content_jinja2(tmpdir):
97 make_repo(
98 tmpdir,
99 bundles={
100 "test": {
101 'files': {
102 join(str(tmpdir), "secret"): {
103 'content': "{{ repo.vault.password_for('test') }}",
104 'content_type': 'jinja2',
105 },
106 },
107 },
108 },
109 nodes={
110 "localhost": {
111 'bundles': ["test"],
112 'os': host_os(),
113 },
114 },
115 )
116
117 run("bw apply localhost", path=str(tmpdir))
118 with open(join(str(tmpdir), "secret")) as f:
119 content = f.read()
120 assert content == "sQDdTXu5OmCki8gdGgYdfTxooevckXcB"
121
122
123 def test_fault_content_skipped(tmpdir):
124 make_repo(
125 tmpdir,
126 bundles={
127 "test": {},
128 },
129 nodes={
130 "localhost": {
131 'bundles': ["test"],
132 'os': host_os(),
133 },
134 },
135 )
136
137 with open(join(str(tmpdir), "bundles", "test", "items.py"), 'w') as f:
138 f.write("""
139 files = {{
140 "{}": {{
141 'content': repo.vault.password_for("test", key='unavailable'),
142 }},
143 }}
144 """.format(join(str(tmpdir), "secret")))
145
146 stdout, stderr, rcode = run("bw apply localhost", path=str(tmpdir))
147 assert rcode == 0
148 assert not exists(join(str(tmpdir), "secret"))
149
150
151 def test_fault_content_skipped_mako(tmpdir):
152 make_repo(
153 tmpdir,
154 bundles={
155 "test": {
156 'files': {
157 join(str(tmpdir), "secret"): {
158 'content': "${repo.vault.password_for('test', key='unavailable')}",
159 'content_type': 'mako',
160 },
161 },
162 },
163 },
164 nodes={
165 "localhost": {
166 'bundles': ["test"],
167 'os': host_os(),
168 },
169 },
170 )
171
172 stdout, stderr, rcode = run("bw apply localhost", path=str(tmpdir))
173 assert rcode == 0
174 assert not exists(join(str(tmpdir), "secret"))
175
176
177 def test_fault_content_skipped_jinja2(tmpdir):
178 make_repo(
179 tmpdir,
180 bundles={
181 "test": {
182 'files': {
183 join(str(tmpdir), "secret"): {
184 'content': "{{ repo.vault.password_for('test', key='unavailable') }}",
185 'content_type': 'jinja2',
186 },
187 },
188 },
189 },
190 nodes={
191 "localhost": {
192 'bundles': ["test"],
193 'os': host_os(),
194 },
195 },
196 )
197
198 stdout, stderr, rcode = run("bw apply localhost", path=str(tmpdir))
199 assert rcode == 0
200 assert not exists(join(str(tmpdir), "secret"))
201
202
199 def test_fault_content_error(tmpdir):
200 make_repo(
201 tmpdir,
202 bundles={
203 "test": {},
204 },
205 nodes={
206 "localhost": {
207 'bundles': ["test"],
208 'os': host_os(),
209 },
210 },
211 )
212
213 with open(join(str(tmpdir), "bundles", "test", "items.py"), 'w') as f:
214 f.write("""
215 files = {{
216 "{}": {{
217 'content': repo.vault.password_for("test", key='unavailable'),
218 'error_on_missing_fault': True,
219 }},
220 }}
221 """.format(join(str(tmpdir), "secret")))
222
223 stdout, stderr, rcode = run("bw -d apply localhost", path=str(tmpdir))
224 print(stdout)
225 assert rcode == 1
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from os import mkdir, readlink, symlink
4 from os.path import join
5
6 from bundlewrap.utils.testing import host_os, make_repo, run
7
8
9 def test_create(tmpdir):
10 make_repo(
11 tmpdir,
12 bundles={
13 "test": {
14 'symlinks': {
15 join(str(tmpdir), "foo"): {
16 'target': "/dev/null",
17 },
18 },
19 },
20 },
21 nodes={
22 "localhost": {
23 'bundles': ["test"],
24 'os': host_os(),
25 },
26 },
27 )
28 stdout, stderr, rcode = run("bw apply localhost", path=str(tmpdir))
29 assert rcode == 0
30 assert readlink(join(str(tmpdir), "foo")) == "/dev/null"
31
32
33 def test_fix(tmpdir):
34 symlink(join(str(tmpdir), "bar"), join(str(tmpdir), "foo"))
35 make_repo(
36 tmpdir,
37 bundles={
38 "test": {
39 'symlinks': {
40 join(str(tmpdir), "foo"): {
41 'target': "/dev/null",
42 },
43 },
44 },
45 },
46 nodes={
47 "localhost": {
48 'bundles': ["test"],
49 'os': host_os(),
50 },
51 },
52 )
53 stdout, stderr, rcode = run("bw apply localhost", path=str(tmpdir))
54 assert rcode == 0
55 assert readlink(join(str(tmpdir), "foo")) == "/dev/null"
56
57
58 def test_fix_dir(tmpdir):
59 mkdir(join(str(tmpdir), "foo"))
60 make_repo(
61 tmpdir,
62 bundles={
63 "test": {
64 'symlinks': {
65 join(str(tmpdir), "foo"): {
66 'target': "/dev/null",
67 },
68 },
69 },
70 },
71 nodes={
72 "localhost": {
73 'bundles': ["test"],
74 'os': host_os(),
75 },
76 },
77 )
78 stdout, stderr, rcode = run("bw apply localhost", path=str(tmpdir))
79 assert rcode == 0
80 assert readlink(join(str(tmpdir), "foo")) == "/dev/null"
0 from os.path import join
1
2 from bundlewrap.utils.testing import make_repo, run
3
4
5 def test_empty(tmpdir):
6 make_repo(tmpdir)
7 stdout, stderr, rcode = run("bw hash", path=str(tmpdir))
8 assert stdout == b"bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f\n"
9 assert stderr == b""
10
11
12 def test_nondeterministic(tmpdir):
13 make_repo(
14 tmpdir,
15 nodes={
16 "node1": {
17 'bundles': ["bundle1"],
18 },
19 },
20 bundles={
21 "bundle1": {
22 'files': {
23 "/test": {
24 'content_type': 'mako',
25 'content': "<% import random %>${random.randint(1, 9999)}",
26 },
27 },
28 },
29 },
30 )
31
32 hashes = set()
33
34 for i in range(3):
35 stdout, stderr, rcode = run("bw hash", path=str(tmpdir))
36 hashes.add(stdout.strip())
37
38 assert len(hashes) > 1
39
40
41 def test_deterministic(tmpdir):
42 make_repo(
43 tmpdir,
44 nodes={
45 "node1": {
46 'bundles': ["bundle1"],
47 },
48 },
49 bundles={
50 "bundle1": {
51 'files': {
52 "/test": {
53 'content': "${node.name}",
54 },
55 },
56 },
57 },
58 )
59
60 hashes = set()
61
62 for i in range(3):
63 stdout, stderr, rcode = run("bw hash", path=str(tmpdir))
64 hashes.add(stdout.strip())
65
66 assert len(hashes) == 1
67 assert hashes.pop() == b"8c155b4e7056463eb2c8a8345f4f316f6d7359f6"
68
69
70 def test_dict(tmpdir):
71 make_repo(
72 tmpdir,
73 nodes={
74 "node1": {
75 'bundles': ["bundle1"],
76 },
77 },
78 bundles={
79 "bundle1": {
80 'files': {
81 "/test": {
82 'content': "yes please",
83 },
84 },
85 },
86 },
87 )
88
89 stdout, stderr, rcode = run("bw hash -d", path=str(tmpdir))
90 assert rcode == 0
91 assert stdout == b"8ab35c696b63a853ccf568b27a50e24a69964487 node1\n"
92
93 stdout, stderr, rcode = run("bw hash -d node1", path=str(tmpdir))
94 assert rcode == 0
95 assert stdout == b"503583964eadabacb18fda32cc9fb1e9f66e424b file:/test\n"
96
97 stdout, stderr, rcode = run("bw hash -d node1 file:/test", path=str(tmpdir))
98 assert rcode == 0
99 assert stdout == (
100 b"content_hash\tc05a36d547e2b1682472f76985018038d1feebc5\n"
101 b"type\tfile\n"
102 )
103
104
105 def test_metadata_empty(tmpdir):
106 make_repo(
107 tmpdir,
108 nodes={
109 "node1": {
110 'metadata': {},
111 },
112 },
113 )
114
115 stdout, stderr, rcode = run("bw hash -m node1", path=str(tmpdir))
116 assert rcode == 0
117 assert stdout == b"bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f\n"
118
119
120 def test_metadata_fault(tmpdir):
121 make_repo(tmpdir)
122 with open(join(str(tmpdir), "nodes.py"), 'w') as f:
123 f.write("""
124 nodes = {
125 'node1': {
126 'metadata': {'foo': vault.password_for("testing")},
127 },
128 'node2': {
129 'metadata': {'foo': vault.password_for("testing").value},
130 },
131 'node3': {
132 'metadata': {'foo': "faCTT76kagtDuZE5wnoiD1CxhGKmbgiX"},
133 },
134 'node4': {
135 'metadata': {'foo': "something else entirely"},
136 },
137 }
138 """)
139 print(run("bw debug -c 'print(repo.vault.password_for(\"testing\"))'", path=str(tmpdir)))
140 stdout1, stderr, rcode = run("bw hash -m node1", path=str(tmpdir))
141 assert stdout1 == b"d0c998fd17a68322a03345954bb0a75301d3a127\n"
142 assert stderr == b""
143 assert rcode == 0
144 stdout2, stderr, rcode = run("bw hash -m node2", path=str(tmpdir))
145 assert stdout2 == stdout1
146 assert stderr == b""
147 assert rcode == 0
148 stdout3, stderr, rcode = run("bw hash -m node3", path=str(tmpdir))
149 assert stdout3 == stdout1
150 assert stderr == b""
151 assert rcode == 0
152 stdout4, stderr, rcode = run("bw hash -m node4", path=str(tmpdir))
153 assert stdout4 != stdout1
154 assert stderr == b""
155 assert rcode == 0
156
157
158 def test_metadata_nested_sort(tmpdir):
159 make_repo(
160 tmpdir,
161 nodes={
162 "node1": {
163 'metadata': {
164 'nested': {
165 'one': True,
166 'two': False,
167 'three': 3,
168 'four': "four",
169 'five': None,
170 },
171 },
172 },
173 "node2": {
174 'metadata': {
175 'nested': {
176 'five': None,
177 'four': "four",
178 'one': True,
179 'three': 3,
180 'two': False,
181 },
182 },
183 },
184 },
185 )
186
187 stdout1, stderr, rcode = run("bw hash -m node1", path=str(tmpdir))
188 assert rcode == 0
189 assert stdout1 == b"bc403a093ca3399cd3efa7a64ec420e0afef5e70\n"
190
191 stdout2, stderr, rcode = run("bw hash -m node2", path=str(tmpdir))
192 assert rcode == 0
193 assert stdout1 == stdout2
194
195
196 def test_metadata_repo(tmpdir):
197 make_repo(
198 tmpdir,
199 nodes={
200 "node1": {
201 'metadata': {
202 'foo': 47,
203 },
204 },
205 },
206 )
207
208 stdout, stderr, rcode = run("bw hash -m", path=str(tmpdir))
209 assert rcode == 0
210 assert stdout == b"c0cc160ab1b6e71155cd4f65139bc7f66304d7f3\n"
211
212
213 def test_metadata_repo_dict(tmpdir):
214 make_repo(
215 tmpdir,
216 nodes={
217 "node1": {
218 'metadata': {
219 'foo': 47,
220 },
221 },
222 },
223 )
224
225 stdout, stderr, rcode = run("bw hash -md", path=str(tmpdir))
226 assert rcode == 0
227 assert stdout == b"node1\t013b3a8199695eb45c603ea4e0a910148d80e7ed\n"
228
229
230 def test_groups_repo(tmpdir):
231 make_repo(
232 tmpdir,
233 groups={
234 "group1": {},
235 "group2": {},
236 },
237 )
238
239 stdout, stderr, rcode = run("bw hash -g", path=str(tmpdir))
240 assert rcode == 0
241 assert stdout == b"479c737e191339e5fae20ac8a8903a75f6b91f4d\n"
242
243
244 def test_groups_repo_dict(tmpdir):
245 make_repo(
246 tmpdir,
247 groups={
248 "group1": {},
249 "group2": {},
250 },
251 )
252
253 stdout, stderr, rcode = run("bw hash -dg", path=str(tmpdir))
254 assert rcode == 0
255 assert stdout == b"group1\ngroup2\n"
256
257
258 def test_groups(tmpdir):
259 make_repo(
260 tmpdir,
261 groups={
262 "group1": {'members': ["node1", "node2"]},
263 "group2": {'members': ["node3"]},
264 },
265 nodes={
266 "node1": {},
267 "node2": {},
268 "node3": {},
269 },
270 )
271
272 stdout, stderr, rcode = run("bw hash -g group1", path=str(tmpdir))
273 assert rcode == 0
274 assert stdout == b"59f5a812acd22592b046b20e9afedc1cfcd37c77\n"
275
276
277 def test_groups_dict(tmpdir):
278 make_repo(
279 tmpdir,
280 groups={
281 "group1": {'members': ["node1", "node2"]},
282 "group2": {'members': ["node3"]},
283 },
284 nodes={
285 "node1": {},
286 "node2": {},
287 "node3": {},
288 },
289 )
290
291 stdout, stderr, rcode = run("bw hash -dg group1", path=str(tmpdir))
292 assert rcode == 0
293 assert stdout == b"node1\nnode2\n"
294
295
296 def test_groups_node(tmpdir):
297 make_repo(
298 tmpdir,
299 groups={
300 "group1": {'members': ["node1", "node2"]},
301 "group2": {'members': ["node3"]},
302 },
303 nodes={
304 "node1": {},
305 "node2": {},
306 "node3": {},
307 },
308 )
309
310 stdout, stderr, rcode = run("bw hash -g node1", path=str(tmpdir))
311 assert rcode == 0
312 assert stdout == b"6f4615dc71426549e22df7961bd2b88ba95ad1fc\n"
313
314
315 def test_groups_node_dict(tmpdir):
316 make_repo(
317 tmpdir,
318 groups={
319 "group1": {'members': ["node1", "node2"]},
320 "group2": {'members': ["node3"]},
321 },
322 nodes={
323 "node1": {},
324 "node2": {},
325 "node3": {},
326 },
327 )
328
329 stdout, stderr, rcode = run("bw hash -dg node1", path=str(tmpdir))
330 assert rcode == 0
331 assert stdout == b"group1\n"
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from bundlewrap.utils.testing import make_repo, run
4
5
6 def test_file_preview(tmpdir):
7 make_repo(
8 tmpdir,
9 nodes={
10 "node1": {
11 'bundles': ["bundle1"],
12 },
13 },
14 bundles={
15 "bundle1": {
16 'files': {
17 "/test": {
18 'content': "föö",
19 'encoding': 'latin-1',
20 },
21 },
22 },
23 },
24 )
25
26 stdout, stderr, rcode = run("bw items -f /test node1", path=str(tmpdir))
27 assert stdout == "föö".encode('utf-8') # our output is always utf-8
0 from json import loads
1 from os.path import join
2
3 from bundlewrap.utils.testing import make_repo, run
4
5
6 def test_empty(tmpdir):
7 make_repo(
8 tmpdir,
9 nodes={
10 "node1": {},
11 },
12 )
13 stdout, stderr, rcode = run("bw metadata node1", path=str(tmpdir))
14 assert stdout == b"{}\n"
15 assert stderr == b""
16 assert rcode == 0
17
18
19 def test_simple(tmpdir):
20 make_repo(
21 tmpdir,
22 nodes={
23 "node1": {'metadata': {"foo": "bar"}},
24 },
25 )
26 stdout, stderr, rcode = run("bw metadata node1", path=str(tmpdir))
27 assert loads(stdout.decode()) == {"foo": "bar"}
28 assert stderr == b""
29 assert rcode == 0
30
31
32 def test_object(tmpdir):
33 make_repo(tmpdir)
34 with open(join(str(tmpdir), "nodes.py"), 'w') as f:
35 f.write("nodes = {'node1': {'metadata': {'foo': object}}}")
36 stdout, stderr, rcode = run("bw metadata node1", path=str(tmpdir))
37 assert rcode == 1
38
39
40 def test_merge(tmpdir):
41 make_repo(
42 tmpdir,
43 nodes={
44 "node1": {
45 'metadata': {
46 "foo": {
47 "bar": "baz",
48 },
49 },
50 },
51 },
52 groups={
53 "group1": {
54 'members': ["node1"],
55 'metadata': {
56 "ding": 5,
57 "foo": {
58 "bar": "ZAB",
59 "baz": "bar",
60 },
61 },
62 },
63 },
64 )
65 stdout, stderr, rcode = run("bw metadata node1", path=str(tmpdir))
66 assert loads(stdout.decode()) == {
67 "ding": 5,
68 "foo": {
69 "bar": "baz",
70 "baz": "bar",
71 },
72 }
73 assert stderr == b""
74 assert rcode == 0
75
76
77 def test_metadatapy(tmpdir):
78 make_repo(
79 tmpdir,
80 bundles={"test": {}},
81 nodes={
82 "node1": {
83 'bundles': ["test"],
84 'metadata': {"foo": "bar"},
85 },
86 },
87 )
88 with open(join(str(tmpdir), "bundles", "test", "metadata.py"), 'w') as f:
89 f.write(
90 """def foo(metadata):
91 metadata["baz"] = node.name
92 return metadata
93 """)
94 stdout, stderr, rcode = run("bw metadata node1", path=str(tmpdir))
95 assert loads(stdout.decode()) == {
96 "baz": "node1",
97 "foo": "bar",
98 }
99 assert stderr == b""
100 assert rcode == 0
101
102
103 def test_metadatapy_loop(tmpdir):
104 make_repo(
105 tmpdir,
106 bundles={"test": {}},
107 nodes={
108 "node1": {
109 'bundles': ["test"],
110 'metadata': {"foo": 1},
111 },
112 },
113 )
114 with open(join(str(tmpdir), "bundles", "test", "metadata.py"), 'w') as f:
115 f.write(
116 """def foo(metadata):
117 metadata["foo"] += 1
118 return metadata
119 """)
120 stdout, stderr, rcode = run("bw metadata node1", path=str(tmpdir))
121 assert rcode == 1
0 from json import loads
1 from os.path import join
2
3 from bundlewrap.utils.testing import make_repo, run
4
5
6 def test_empty(tmpdir):
7 make_repo(tmpdir)
8 stdout, stderr, rcode = run("bw nodes", path=str(tmpdir))
9 assert stdout == b""
10 assert stderr == b""
11 assert rcode == 0
12
13
14 def test_single(tmpdir):
15 make_repo(tmpdir, nodes={"node1": {}})
16 stdout, stderr, rcode = run("bw nodes", path=str(tmpdir))
17 assert stdout == b"node1\n"
18 assert stderr == b""
19 assert rcode == 0
20
21
22 def test_hostname(tmpdir):
23 make_repo(tmpdir, nodes={"node1": {'hostname': "node1.example.com"}})
24 stdout, stderr, rcode = run("bw nodes --attrs | grep '\thostname' | cut -f 3", path=str(tmpdir))
25 assert stdout == b"node1.example.com\n"
26 assert stderr == b""
27 assert rcode == 0
28
29
30 def test_inline(tmpdir):
31 make_repo(
32 tmpdir,
33 nodes={
34 "node1": {
35 'bundles': ["bundle1", "bundle2"],
36 },
37 "node2": {
38 'bundles': ["bundle1"],
39 },
40 },
41 bundles={
42 "bundle1": {},
43 "bundle2": {},
44 },
45 )
46 stdout, stderr, rcode = run("bw nodes -ai | grep '\tbundle' | grep bundle2 | cut -f 1", path=str(tmpdir))
47 assert stdout == b"node1\n"
48 assert stderr == b""
49 assert rcode == 0
50
51 stdout, stderr, rcode = run("bw nodes -ai | grep '\tbundle' | grep -v bundle2 | cut -f 1", path=str(tmpdir))
52 assert stdout == b"node2\n"
53 assert stderr == b""
54 assert rcode == 0
55
56
57 def test_in_group(tmpdir):
58 make_repo(
59 tmpdir,
60 groups={
61 "group1": {
62 'members': ["node2"],
63 },
64 },
65 nodes={
66 "node1": {},
67 "node2": {},
68 },
69 )
70 stdout, stderr, rcode = run("bw nodes -g group1", path=str(tmpdir))
71 assert stdout == b"node2\n"
72 assert stderr == b""
73 assert rcode == 0
74
75
76 def test_bundles(tmpdir):
77 make_repo(
78 tmpdir,
79 bundles={
80 "bundle1": {},
81 "bundle2": {},
82 },
83 nodes={
84 "node1": {'bundles': ["bundle1", "bundle2"]},
85 "node2": {'bundles': ["bundle2"]},
86 },
87 )
88 stdout, stderr, rcode = run("bw nodes --bundles", path=str(tmpdir))
89 assert stdout.decode().strip().split("\n") == [
90 "node1: bundle1, bundle2",
91 "node2: bundle2",
92 ]
93 assert stderr == b""
94 assert rcode == 0
95
96
97 def test_groups(tmpdir):
98 make_repo(
99 tmpdir,
100 groups={
101 "group1": {
102 'members': ["node2"],
103 },
104 "group2": {
105 'members': ["node1"],
106 },
107 "group3": {
108 'subgroup_patterns': ["p2"],
109 },
110 "group4": {
111 'subgroups': ["group1"],
112 },
113 },
114 nodes={
115 "node1": {},
116 "node2": {},
117 },
118 )
119 stdout, stderr, rcode = run("bw nodes --groups", path=str(tmpdir))
120 assert stdout.decode().strip().split("\n") == [
121 "node1: group2, group3",
122 "node2: group1, group4",
123 ]
124 assert stderr == b""
125 assert rcode == 0
126
127
128 def test_group_members_add(tmpdir):
129 make_repo(
130 tmpdir,
131 nodes={
132 "node1": {'os': 'centos'},
133 "node2": {'os': 'debian'},
134 "node3": {'os': 'ubuntu'},
135 },
136 )
137 with open(join(str(tmpdir), "groups.py"), 'w') as f:
138 f.write("""
139 groups = {
140 "group1": {
141 'members_add': lambda node: node.os == 'centos',
142 },
143 "group2": {
144 'members': ["node2"],
145 'members_add': lambda node: node.os != 'centos',
146 },
147 "group3": {
148 'members_add': lambda node: not node.in_group("group2"),
149 },
150 "group4": {
151 'members': ["node3"],
152 },
153 }
154 """)
155 stdout, stderr, rcode = run("bw nodes -a node1 | grep \tgroup | cut -f 3", path=str(tmpdir))
156 assert stdout == b"group1\ngroup3\n"
157 assert stderr == b""
158 assert rcode == 0
159
160 stdout, stderr, rcode = run("bw nodes -a node2 | grep \tgroup | cut -f 3", path=str(tmpdir))
161 assert stdout == b"group2\n"
162 assert stderr == b""
163 assert rcode == 0
164
165 stdout, stderr, rcode = run("bw nodes -a node3 | grep \tgroup | cut -f 3", path=str(tmpdir))
166 assert stdout == b"group2\ngroup3\ngroup4\n"
167 assert stderr == b""
168 assert rcode == 0
169
170
171 def test_group_members_remove(tmpdir):
172 make_repo(
173 tmpdir,
174 nodes={
175 "node1": {'os': 'centos'},
176 "node2": {'os': 'debian'},
177 "node3": {'os': 'ubuntu'},
178 "node4": {'os': 'ubuntu'},
179 },
180 )
181 with open(join(str(tmpdir), "groups.py"), 'w') as f:
182 f.write("""
183 groups = {
184 "group1": {
185 'members_add': lambda node: node.os == 'ubuntu',
186 },
187 "group2": {
188 'members_add': lambda node: node.os == 'ubuntu',
189 'members_remove': lambda node: node.name == "node3",
190 },
191 "group3": {
192 'members_add': lambda node: not node.in_group("group3"),
193 },
194 "group4": {
195 'subgroups': ["group3"],
196 'members_remove': lambda node: node.os == 'debian',
197 },
198 }
199 """)
200 stdout, stderr, rcode = run("bw nodes -a node1 | grep \tgroup | cut -f 3", path=str(tmpdir))
201 assert stdout == b"group3\ngroup4\n"
202 assert stderr == b""
203 assert rcode == 0
204
205 stdout, stderr, rcode = run("bw nodes -a node2 | grep \tgroup | cut -f 3", path=str(tmpdir))
206 assert stdout == b"group3\n"
207 assert stderr == b""
208 assert rcode == 0
209
210 stdout, stderr, rcode = run("bw nodes -a node3 | grep \tgroup | cut -f 3", path=str(tmpdir))
211 assert stdout == b"group1\ngroup3\ngroup4\n"
212 assert stderr == b""
213 assert rcode == 0
214
215 stdout, stderr, rcode = run("bw nodes -a node4 | grep \tgroup | cut -f 3", path=str(tmpdir))
216 assert stdout == b"group1\ngroup2\ngroup3\ngroup4\n"
217 assert stderr == b""
218 assert rcode == 0
219
220
221 def test_group_members_remove_bundle(tmpdir):
222 make_repo(
223 tmpdir,
224 bundles={
225 "bundle1": {},
226 "bundle2": {},
227 },
228 nodes={
229 "node1": {},
230 "node2": {},
231 },
232 )
233 with open(join(str(tmpdir), "groups.py"), 'w') as f:
234 f.write("""
235 groups = {
236 "group1": {
237 'bundles': ["bundle1"],
238 'members': ["node1", "node2"],
239 },
240 "group2": {
241 'bundles': ["bundle1", "bundle2"],
242 'members': ["node1", "node2"],
243 'members_remove': lambda node: node.name == "node2",
244 },
245 }
246 """)
247 stdout, stderr, rcode = run("bw nodes -a node1 | grep \tbundle | cut -f 3", path=str(tmpdir))
248 assert stdout == b"bundle1\nbundle2\n"
249 assert stderr == b""
250 assert rcode == 0
251
252 stdout, stderr, rcode = run("bw nodes -a node2 | grep \tbundle | cut -f 3", path=str(tmpdir))
253 assert stdout == b"bundle1\n"
254 assert stderr == b""
255 assert rcode == 0
256
257
258 def test_group_members_partial_metadata(tmpdir):
259 make_repo(
260 tmpdir,
261 nodes={
262 "node1": {
263 'metadata': {'foo': 1},
264 },
265 "node2": {},
266 },
267 )
268 with open(join(str(tmpdir), "groups.py"), 'w') as f:
269 f.write("""
270 groups = {
271 "group1": {
272 'members_add': lambda node: node.metadata.get('foo') == 1,
273 },
274 "group2": {
275 'members': ["node2"],
276 'metadata': {'foo': 1},
277 },
278 }
279 """)
280 stdout, stderr, rcode = run("bw nodes -a node1 | grep \tgroup | cut -f 3", path=str(tmpdir))
281 assert stdout == b"group1\n"
282 assert stderr == b""
283 assert rcode == 0
284
285 stdout, stderr, rcode = run("bw nodes -a node2 | grep \tgroup | cut -f 3", path=str(tmpdir))
286 assert stdout == b"group2\n"
287 assert stderr == b""
288 assert rcode == 0
289
290
291 def test_group_members_remove_based_on_metadata(tmpdir):
292 make_repo(
293 tmpdir,
294 nodes={
295 "node1": {
296 'metadata': {'remove': False},
297 },
298 "node2": {},
299 },
300 )
301 with open(join(str(tmpdir), "groups.py"), 'w') as f:
302 f.write("""
303 groups = {
304 "group1": {
305 'members_add': lambda node: not node.metadata.get('remove', False),
306 'members_remove': lambda node: node.metadata.get('remove', False),
307 },
308 "group2": {
309 'members': ["node2"],
310 'metadata': {'remove': True},
311 },
312 "group3": {
313 'subgroups': ["group1"],
314 'members_remove': lambda node: node.name.endswith("1") and node.metadata.get('redherring', True),
315 },
316 }
317 """)
318 stdout, stderr, rcode = run("bw nodes -a node1 | grep \tgroup | cut -f 3", path=str(tmpdir))
319 assert stdout == b"group1\n"
320 assert stderr == b""
321 assert rcode == 0
322
323 stdout, stderr, rcode = run("bw nodes -a node2 | grep \tgroup | cut -f 3", path=str(tmpdir))
324 assert stdout == b"group1\ngroup2\ngroup3\n"
325 assert stderr == b""
326 assert rcode == 0
327
328 # make sure there is no metadata deadlock
329 stdout, stderr, rcode = run("bw metadata node1", path=str(tmpdir))
330 assert loads(stdout.decode('utf-8')) == {'remove': False}
331 assert stderr == b""
332 assert rcode == 0
0 from os.path import join
1
2 from bundlewrap.utils.testing import make_repo, run
3
4
5 def test_groups_for_node(tmpdir):
6 make_repo(
7 tmpdir,
8 nodes={
9 "node-foo": {},
10 "node-bar": {},
11 "node-baz": {},
12 "node-pop": {},
13 },
14 )
15 with open(join(str(tmpdir), "groups.py"), 'w') as f:
16 f.write("""
17 groups = {
18 "group-foo": {
19 'members': ["node-foo"],
20 'member_patterns': [r".*-bar"],
21 },
22 "group-bar": {
23 'subgroups': ["group-foo"],
24 },
25 "group-baz": {
26 'members': ["node-pop"],
27 'members_add': lambda node: node.name == "node-pop",
28 },
29 "group-pop": {
30 'subgroup_patterns': [r"ba"],
31 },
32 }
33 """)
34 stdout, stderr, rcode = run("bw plot groups-for-node node-foo", path=str(tmpdir))
35 assert stdout == b"""digraph bundlewrap
36 {
37 rankdir = LR
38 node [color="#303030"; fillcolor="#303030"; fontname=Helvetica]
39 edge [arrowhead=vee]
40 "group-bar" [fontcolor=white,style=filled];
41 "group-foo" [fontcolor=white,style=filled];
42 "group-pop" [fontcolor=white,style=filled];
43 "node-foo" [fontcolor="#303030",shape=box,style=rounded];
44 "group-bar" -> "group-foo" [color="#6BB753",penwidth=2]
45 "group-pop" -> "group-bar" [color="#6BB753",penwidth=2]
46 "group-foo" -> "node-foo" [color="#D18C57",penwidth=2]
47 }
48 """
49 assert stderr == b""
50 assert rcode == 0
51
52 stdout, stderr, rcode = run("bw plot groups-for-node node-pop", path=str(tmpdir))
53 assert stdout == b"""digraph bundlewrap
54 {
55 rankdir = LR
56 node [color="#303030"; fillcolor="#303030"; fontname=Helvetica]
57 edge [arrowhead=vee]
58 "group-baz" [fontcolor=white,style=filled];
59 "group-pop" [fontcolor=white,style=filled];
60 "node-pop" [fontcolor="#303030",shape=box,style=rounded];
61 "group-pop" -> "group-baz" [color="#6BB753",penwidth=2]
62 "group-baz" -> "node-pop" [color="#D18C57",penwidth=2]
63 }
64 """
65 assert stderr == b""
66 assert rcode == 0
0 from os.path import join
1
2 from bundlewrap.utils.testing import make_repo, run
3
4
5 def test_not_a_repo_test(tmpdir):
6 assert run("bw nodes", path=str(tmpdir))[2] == 1
7
8
9 def test_subdir_invocation(tmpdir):
10 make_repo(tmpdir, nodes={"node1": {}})
11 stdout, stderr, rcode = run("bw nodes", path=join(str(tmpdir), "bundles"))
12 assert stdout == b"node1\n"
13 assert stderr == b""
14 assert rcode == 0
0 from bundlewrap.utils.testing import make_repo, run
1
2
3 def test_nondeterministic(tmpdir):
4 make_repo(
5 tmpdir,
6 nodes={
7 "node1": {
8 'bundles': ["bundle1"],
9 },
10 },
11 bundles={
12 "bundle1": {
13 'files': {
14 "/test": {
15 'content': "foo",
16 },
17 "/test2": {
18 'content': "foo",
19 },
20 },
21 },
22 },
23 )
24
25 stdout, stderr, rcode = run("bw stats", path=str(tmpdir))
26 assert stdout == b"""1 nodes
27 0 groups
28 2 items
29 2 file
30 """
0 from os.path import join
1
2 from bundlewrap.utils.testing import make_repo, run
3
4
5 def test_empty(tmpdir):
6 make_repo(tmpdir)
7 stdout, stderr, rcode = run("bw test", path=str(tmpdir))
8 assert stdout == b""
9
10
11 def test_bundle_not_found(tmpdir):
12 make_repo(
13 tmpdir,
14 nodes={
15 "node1": {
16 'bundles': ["bundle1"],
17 },
18 },
19 )
20 assert run("bw test", path=str(tmpdir))[2] == 1
21
22
23 def test_hooks(tmpdir):
24 make_repo(
25 tmpdir,
26 nodes={
27 "node1": {},
28 "node2": {},
29 },
30 )
31 with open(join(str(tmpdir), "hooks", "test.py"), 'w') as f:
32 f.write("""from bundlewrap.utils.ui import io
33 def test(repo, **kwargs):
34 io.stdout("AAA")
35
36 def test_node(repo, node, **kwargs):
37 io.stdout("BBB")
38 """)
39 assert b"AAA" in run("bw test", path=str(tmpdir))[0]
40 assert b"BBB" in run("bw test", path=str(tmpdir))[0]
41
42
43 def test_circular_dep_direct(tmpdir):
44 make_repo(
45 tmpdir,
46 nodes={
47 "node1": {
48 'bundles': ["bundle1"],
49 },
50 },
51 bundles={
52 "bundle1": {
53 "pkg_apt": {
54 "foo": {
55 'needs': ["pkg_apt:bar"],
56 },
57 "bar": {
58 'needs': ["pkg_apt:foo"],
59 },
60 },
61 },
62 },
63 )
64 assert run("bw test", path=str(tmpdir))[2] == 1
65
66
67 def test_circular_dep_indirect(tmpdir):
68 make_repo(
69 tmpdir,
70 nodes={
71 "node1": {
72 'bundles': ["bundle1"],
73 },
74 },
75 bundles={
76 "bundle1": {
77 "pkg_apt": {
78 "foo": {
79 'needs': ["pkg_apt:bar"],
80 },
81 "bar": {
82 'needs': ["pkg_apt:baz"],
83 },
84 "baz": {
85 'needs': ["pkg_apt:foo"],
86 },
87 },
88 },
89 },
90 )
91 assert run("bw test", path=str(tmpdir))[2] == 1
92
93
94 def test_circular_dep_self(tmpdir):
95 make_repo(
96 tmpdir,
97 nodes={
98 "node1": {
99 'bundles': ["bundle1"],
100 },
101 },
102 bundles={
103 "bundle1": {
104 "pkg_apt": {
105 "foo": {
106 'needs': ["pkg_apt:foo"],
107 },
108 },
109 },
110 },
111 )
112 assert run("bw test", path=str(tmpdir))[2] == 1
113
114
115 def test_circular_trigger_self(tmpdir):
116 make_repo(
117 tmpdir,
118 nodes={
119 "node1": {
120 'bundles': ["bundle1"],
121 },
122 },
123 bundles={
124 "bundle1": {
125 "pkg_apt": {
126 "foo": {
127 'triggers': ["pkg_apt:foo"],
128 },
129 },
130 },
131 },
132 )
133 assert run("bw test", path=str(tmpdir))[2] == 1
134
135
136 def test_file_invalid_attribute(tmpdir):
137 make_repo(
138 tmpdir,
139 nodes={
140 "node1": {
141 'bundles': ["bundle1"],
142 },
143 },
144 bundles={
145 "bundle1": {
146 "files": {
147 "/foo": {
148 "potato": "yes",
149 },
150 },
151 },
152 },
153 )
154 assert run("bw test", path=str(tmpdir))[2] == 1
155
156
157 def test_file_template_error(tmpdir):
158 make_repo(
159 tmpdir,
160 nodes={
161 "node1": {
162 'bundles': ["bundle1"],
163 },
164 },
165 bundles={
166 "bundle1": {
167 "files": {
168 "/foo": {
169 'content_type': 'mako',
170 'content': "${broken",
171 },
172 },
173 },
174 },
175 )
176 assert run("bw test", path=str(tmpdir))[2] == 1
177
178
179 def test_group_loop(tmpdir):
180 make_repo(
181 tmpdir,
182 groups={
183 "group1": {
184 'subgroups': ["group2"],
185 },
186 "group2": {
187 'subgroups': ["group3"],
188 },
189 "group3": {
190 'subgroups': ["group1"],
191 },
192 },
193 )
194 assert run("bw test", path=str(tmpdir))[2] == 1
195
196
197 def test_group_metadata_collision(tmpdir):
198 make_repo(
199 tmpdir,
200 nodes={"node1": {}},
201 groups={
202 "group1": {
203 'members': ["node1"],
204 'metadata': {
205 'foo': {
206 'baz': 1,
207 },
208 'bar': 2,
209 },
210 },
211 "group2": {
212 'metadata': {
213 'foo': {
214 'baz': 3,
215 },
216 'snap': 4,
217 },
218 'subgroups': ["group3"],
219 },
220 "group3": {
221 'members': ["node1"],
222 },
223 },
224 )
225 assert run("bw test", path=str(tmpdir))[2] == 1
226
227
228 def test_group_metadata_collision_subgroups(tmpdir):
229 make_repo(
230 tmpdir,
231 nodes={"node1": {}},
232 groups={
233 "group1": {
234 'members': ["node1"],
235 'metadata': {
236 'foo': {
237 'baz': 1,
238 },
239 'bar': 2,
240 },
241 },
242 "group2": {
243 'metadata': {
244 'foo': {
245 'baz': 3,
246 },
247 'snap': 4,
248 },
249 'subgroups': ["group1", "group3"],
250 },
251 "group3": {
252 'members': ["node1"],
253 },
254 },
255 )
256 assert run("bw test", path=str(tmpdir))[2] == 0
257
258
259 def test_group_metadata_collision_list(tmpdir):
260 make_repo(
261 tmpdir,
262 nodes={"node1": {}},
263 groups={
264 "group1": {
265 'members': ["node1"],
266 'metadata': {
267 'foo': [1],
268 },
269 },
270 "group2": {
271 'members': ["node1"],
272 'metadata': {
273 'foo': [2],
274 },
275 },
276 },
277 )
278 assert run("bw test", path=str(tmpdir))[2] == 1
279
280
281 def test_group_metadata_collision_dict(tmpdir):
282 make_repo(
283 tmpdir,
284 nodes={"node1": {}},
285 groups={
286 "group1": {
287 'members': ["node1"],
288 'metadata': {
289 'foo': {'bar': 1},
290 },
291 },
292 "group2": {
293 'members': ["node1"],
294 'metadata': {
295 'foo': 2,
296 },
297 },
298 },
299 )
300 assert run("bw test", path=str(tmpdir))[2] == 1
301
302
303 def test_group_metadata_collision_dict_ok(tmpdir):
304 make_repo(
305 tmpdir,
306 nodes={"node1": {}},
307 groups={
308 "group1": {
309 'members': ["node1"],
310 'metadata': {
311 'foo': {'bar': 1},
312 },
313 },
314 "group2": {
315 'members': ["node1"],
316 'metadata': {
317 'foo': {'baz': 2},
318 },
319 },
320 },
321 )
322 assert run("bw test", path=str(tmpdir))[2] == 0
323
324
325 def test_group_metadata_collision_set(tmpdir):
326 make_repo(
327 tmpdir,
328 nodes={"node1": {}},
329 groups={
330 "group1": {
331 'members': ["node1"],
332 'metadata': {
333 'foo': set([1]),
334 },
335 },
336 "group2": {
337 'members': ["node1"],
338 'metadata': {
339 'foo': 2,
340 },
341 },
342 },
343 )
344 assert run("bw test", path=str(tmpdir))[2] == 1
345
346
347 def test_group_metadata_collision_set_ok(tmpdir):
348 make_repo(
349 tmpdir,
350 nodes={"node1": {}},
351 groups={
352 "group1": {
353 'members': ["node1"],
354 'metadata': {
355 'foo': set([1]),
356 },
357 },
358 "group2": {
359 'members': ["node1"],
360 'metadata': {
361 'foo': set([2]),
362 },
363 },
364 },
365 )
366 assert run("bw test", path=str(tmpdir))[2] == 0
367
368
369 def test_fault_missing(tmpdir):
370 make_repo(
371 tmpdir,
372 nodes={
373 "node1": {
374 'bundles': ["bundle1"],
375 },
376 },
377 bundles={
378 "bundle1": {
379 "files": {
380 "/foo": {
381 'content_type': 'mako',
382 'content': "${repo.vault.decrypt('bzzt', key='unavailable')}",
383 },
384 },
385 },
386 },
387 )
388 assert run("bw test", path=str(tmpdir))[2] == 1
389 assert run("bw test --ignore-missing-faults", path=str(tmpdir))[2] == 0
390
391
392 def test_metadata_determinism_ok(tmpdir):
393 make_repo(
394 tmpdir,
395 nodes={
396 "node1": {
397 'bundles': ["bundle1"],
398 },
399 },
400 bundles={
401 "bundle1": {},
402 },
403 )
404 with open(join(str(tmpdir), "bundles", "bundle1", "metadata.py"), 'w') as f:
405 f.write("""
406 def test(metadata):
407 metadata['test'] = 1
408 return metadata
409 """)
410 assert run("bw test -m 3", path=str(tmpdir))[2] == 0
411
412
413 def test_metadata_determinism_broken(tmpdir):
414 make_repo(
415 tmpdir,
416 nodes={
417 "node1": {
418 'bundles': ["bundle1"],
419 },
420 },
421 bundles={
422 "bundle1": {},
423 },
424 )
425 with open(join(str(tmpdir), "bundles", "bundle1", "metadata.py"), 'w') as f:
426 f.write("""from random import randint as _randint
427
428 def test(metadata):
429 metadata.setdefault('test', _randint(1, 99999))
430 return metadata
431 """)
432 assert run("bw test -m 3", path=str(tmpdir))[2] == 1
433
434
435 def test_config_determinism_ok(tmpdir):
436 make_repo(
437 tmpdir,
438 nodes={
439 "node1": {
440 'bundles': ["bundle1"],
441 },
442 },
443 bundles={
444 "bundle1": {
445 "files": {
446 "/test": {
447 'content': "1",
448 'content_type': 'mako',
449 },
450 },
451 },
452 },
453 )
454 assert run("bw test -d 3", path=str(tmpdir))[2] == 0
455
456
457 def test_config_determinism_broken(tmpdir):
458 make_repo(
459 tmpdir,
460 nodes={
461 "node1": {
462 'bundles': ["bundle1"],
463 },
464 },
465 bundles={
466 "bundle1": {
467 "files": {
468 "/test": {
469 'content': "<% from random import randint %>\n${randint(1, 99999)\n}",
470 'content_type': 'mako',
471 },
472 },
473 },
474 },
475 )
476 assert run("bw test -d 3", path=str(tmpdir))[2] == 1
477
478
479 def test_unknown_subgroup(tmpdir):
480 make_repo(
481 tmpdir,
482 nodes={
483 "node1": {},
484 },
485 groups={
486 "group1": {'subgroups': ["missing-group"]},
487 "group2": {'members': ["node1"]},
488 },
489 )
490 assert run("bw test", path=str(tmpdir))[2] == 1
491 assert run("bw test group1", path=str(tmpdir))[2] == 1
492 assert run("bw test group2", path=str(tmpdir))[2] == 1
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from os.path import join
4
5 from bundlewrap.utils.testing import host_os, make_repo, run
6
7
8 def test_empty_verify(tmpdir):
9 make_repo(
10 tmpdir,
11 bundles={
12 "test": {
13 'files': {
14 join(str(tmpdir), "foo"): {
15 'content_type': 'any',
16 },
17 },
18 },
19 },
20 nodes={
21 "localhost": {
22 'bundles': ["test"],
23 'os': host_os(),
24 },
25 },
26 )
27
28 with open(join(str(tmpdir), "foo"), 'w') as f:
29 f.write("test")
30
31 stdout, stderr, rcode = run("bw verify localhost", path=str(tmpdir))
32 assert rcode == 0
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from base64 import b64decode
4 from os.path import join
5
6 from bundlewrap.utils.testing import make_repo, run
7
8
9 def test_encrypt(tmpdir):
10 make_repo(tmpdir)
11
12 stdout, stderr, rcode = run("bw debug -c 'print(repo.vault.encrypt(\"test\"))'", path=str(tmpdir))
13 assert stderr == b""
14 assert rcode == 0
15
16 stdout, stderr, rcode = run("bw debug -c 'print(repo.vault.decrypt(\"{}\"))'".format(stdout.decode('utf-8').strip()), path=str(tmpdir))
17 assert stdout == b"test\n"
18 assert stderr == b""
19 assert rcode == 0
20
21
22 def test_encrypt_file(tmpdir):
23 make_repo(tmpdir)
24
25 source_file = join(str(tmpdir), "data", "source")
26 with open(source_file, 'w') as f:
27 f.write("ohai")
28
29 stdout, stderr, rcode = run(
30 "bw debug -c 'repo.vault.encrypt_file(\"{}\", \"{}\")'".format(
31 source_file,
32 "encrypted",
33 ),
34 path=str(tmpdir),
35 )
36 assert stderr == b""
37 assert rcode == 0
38
39 stdout, stderr, rcode = run(
40 "bw debug -c 'print(repo.vault.decrypt_file(\"{}\"))'".format(
41 "encrypted",
42 ),
43 path=str(tmpdir),
44 )
45 assert stdout == b"ohai\n"
46 assert stderr == b""
47 assert rcode == 0
48
49
50 def test_encrypt_file_base64(tmpdir):
51 make_repo(tmpdir)
52
53 source_file = join(str(tmpdir), "data", "source")
54 with open(source_file, 'wb') as f:
55 f.write("öhai".encode('latin-1'))
56
57 stdout, stderr, rcode = run(
58 "bw debug -c 'repo.vault.encrypt_file(\"{}\", \"{}\")'".format(
59 source_file,
60 "encrypted",
61 ),
62 path=str(tmpdir),
63 )
64 assert stderr == b""
65 assert rcode == 0
66
67 stdout, stderr, rcode = run(
68 "bw debug -c 'print(repo.vault.decrypt_file_as_base64(\"{}\"))'".format(
69 "encrypted",
70 ),
71 path=str(tmpdir),
72 )
73 assert b64decode(stdout.decode('utf-8')) == "öhai".encode('latin-1')
74 assert stderr == b""
75 assert rcode == 0
76
77
78 def test_format_password(tmpdir):
79 make_repo(tmpdir)
80
81 stdout, stderr, rcode = run("bw debug -c 'print(repo.vault.format(\"format: {}\", repo.vault.password_for(\"testing\")))'", path=str(tmpdir))
82 assert stdout == b"format: faCTT76kagtDuZE5wnoiD1CxhGKmbgiX\n"
83 assert stderr == b""
84 assert rcode == 0
0 from bundlewrap.metadata import atomic, dictionary_key_map
1
2
3 def test_dictmap():
4 assert set(dictionary_key_map({
5 'key1': 1,
6 'key2': {
7 'key3': [3, 3, 3],
8 'key4': atomic([4, 4, 4]),
9 'key5': {
10 'key6': "6",
11 },
12 'key7': set((7, 7, 7)),
13 },
14 })) == set([
15 ("key1",),
16 ("key2",),
17 ("key2", "key3"),
18 ("key2", "key4"),
19 ("key2", "key5"),
20 ("key2", "key5", "key6"),
21 ("key2", "key7"),
22 ])
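The test above expects `dictionary_key_map` to yield the tuple path of every key, recursing into plain nested dicts while treating `atomic()` containers and other non-dict values as leaves. A self-contained sketch of that traversal (a hypothetical reimplementation for illustration; the real one lives in `bundlewrap.metadata`):

```python
def key_paths(d, path=()):
    # Yield the tuple path of every key; recurse only into plain dicts,
    # so lists, sets, and atomic() wrappers are treated as leaf values.
    for key, value in d.items():
        yield path + (key,)
        if isinstance(value, dict):
            yield from key_paths(value, path + (key,))


nested = {"key1": 1, "key2": {"key3": [3], "key5": {"key6": "6"}}}
paths = set(key_paths(nested))
```

For the dict above this produces `("key1",)`, `("key2",)`, `("key2", "key3")`, `("key2", "key5")`, and `("key2", "key5", "key6")`, mirroring the shape asserted in `test_dictmap`.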
0 # -*- coding: utf-8 -*-
1 from __future__ import unicode_literals
2
3 from datetime import timedelta
4
5 from bundlewrap.utils.time import format_duration, parse_duration
6
7
8 def test_format_duration():
9 assert format_duration(timedelta()) == "0s"
10 assert format_duration(timedelta(seconds=10)) == "10s"
11 assert format_duration(timedelta(minutes=10)) == "10m"
12 assert format_duration(timedelta(hours=10)) == "10h"
13 assert format_duration(timedelta(days=10)) == "10d"
14 assert format_duration(timedelta(days=1, hours=2, minutes=3, seconds=4)) == "1d 2h 3m 4s"
15
16
17 def test_parse_duration():
18 assert parse_duration("0s") == timedelta()
19 assert parse_duration("10s") == timedelta(seconds=10)
20 assert parse_duration("10m") == timedelta(minutes=10)
21 assert parse_duration("10h") == timedelta(hours=10)
22 assert parse_duration("10d") == timedelta(days=10)
23 assert parse_duration("1d 2h 3m 4s") == timedelta(days=1, hours=2, minutes=3, seconds=4)
24
25
26 def test_parse_format_inverse():
27 for duration in (
28 "0s",
29 "1s",
30 "1m",
31 "1h",
32 "1d",
33 "1d 4h",
34 "1d 4h 7s",
35 ):
36 assert format_duration(parse_duration(duration)) == duration
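The tests above pin down `format_duration` and `parse_duration` as inverses over the `"1d 2h 3m 4s"` grammar. A minimal sketch of such a pair, assuming whole-second durations and greedy largest-unit-first formatting (an illustration, not the shipped implementation in `bundlewrap.utils.time`):

```python
from datetime import timedelta

UNITS = (("d", 86400), ("h", 3600), ("m", 60), ("s", 1))


def format_duration(delta):
    # Greedily emit the largest units first; the zero duration is "0s".
    seconds = int(delta.total_seconds())
    if seconds == 0:
        return "0s"
    parts = []
    for suffix, size in UNITS:
        amount, seconds = divmod(seconds, size)
        if amount:
            parts.append("{}{}".format(amount, suffix))
    return " ".join(parts)


def parse_duration(text):
    # Each whitespace-separated token is <number><unit suffix>.
    sizes = dict(UNITS)
    seconds = sum(int(token[:-1]) * sizes[token[-1]] for token in text.split())
    return timedelta(seconds=seconds)


assert format_duration(parse_duration("1d 2h 3m 4s")) == "1d 2h 3m 4s"
```

Note the round-trip property only holds for canonical input (units in descending order, no zero components), which is exactly the set of strings `test_parse_format_inverse` iterates over.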