Imported Upstream version 3.5.1
James Page
8 years ago
365 | 365 | cp $$manpage $(MAN_DIR)/man$$section; \ |
366 | 366 | done; \ |
367 | 367 | done |
368 | cp $(DOCS_DIR)/rabbitmq.config.example $(DOC_INSTALL_DIR)/rabbitmq.config.example | |
368 | if test "$(DOC_INSTALL_DIR)"; then \ | |
369 | cp $(DOCS_DIR)/rabbitmq.config.example $(DOC_INSTALL_DIR)/rabbitmq.config.example; \ | |
370 | fi | |
369 | 371 | |
370 | 372 | install_dirs: |
371 | 373 | @ OK=true && \ |
372 | 374 | { [ -n "$(TARGET_DIR)" ] || { echo "Please set TARGET_DIR."; OK=false; }; } && \ |
373 | 375 | { [ -n "$(SBIN_DIR)" ] || { echo "Please set SBIN_DIR."; OK=false; }; } && \ |
374 | { [ -n "$(MAN_DIR)" ] || { echo "Please set MAN_DIR."; OK=false; }; } && \ | |
375 | { [ -n "$(DOC_INSTALL_DIR)" ] || { echo "Please set DOC_INSTALL_DIR."; OK=false; }; } && $$OK | |
376 | { [ -n "$(MAN_DIR)" ] || { echo "Please set MAN_DIR."; OK=false; }; } && $$OK | |
376 | 377 | |
377 | 378 | mkdir -p $(TARGET_DIR)/sbin |
378 | 379 | mkdir -p $(SBIN_DIR) |
379 | 380 | mkdir -p $(MAN_DIR) |
380 | mkdir -p $(DOC_INSTALL_DIR) | |
381 | if test "$(DOC_INSTALL_DIR)"; then \ | |
382 | mkdir -p $(DOC_INSTALL_DIR); \ | |
383 | fi | |
381 | 384 | |
382 | 385 | $(foreach XML,$(USAGES_XML),$(eval $(call usage_dep, $(XML)))) |
383 | 386 |
0 | Please see http://www.rabbitmq.com/build-server.html for build instructions.
0 | ## Overview | |
1 | ||
2 | RabbitMQ projects use pull requests to discuss, collaborate on and accept code contributions. | |
3 | Pull requests are the primary place for discussing code changes. | |
4 | ||
5 | ## How to Contribute | |
6 | ||
7 | The process is fairly standard: | |
8 | ||
9 | * Fork the repository or repositories you plan on contributing to | |
10 | * Clone [RabbitMQ umbrella repository](https://github.com/rabbitmq/rabbitmq-public-umbrella) | |
11 | * `cd umbrella`, `make co` | |
12 | * Create a branch with a descriptive name in the relevant repositories | |
13 | * Make your changes, run tests, commit with a [descriptive message](http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html), push to your fork | |
14 | * Submit pull requests with an explanation of what has been changed and **why** | |
15 | * Submit a filled out and signed [Contributor Agreement](https://github.com/rabbitmq/ca#how-to-submit) if needed (see below) | |
16 | * Be patient. We will get to your pull request eventually | |
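
The branch-and-commit steps above can be sketched locally. This is a minimal illustration, not the project's required tooling; it assumes `git` is installed, and the branch name, file, and identity are made up:

```shell
# Minimal local sketch of the "create a descriptive branch, commit, push" flow.
set -e
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.name  "Example Contributor"
git config user.email "contributor@example.com"
# A branch with a descriptive name (hypothetical issue reference)
git checkout -q -b rabbitmq-users-123-fix-doc-typo
echo "fix" > fix.txt
git add fix.txt
# Short summary line first, then a separate paragraph explaining the "why"
git commit -q -m "Fix typo in usage docs" -m "Explain what changed and why here."
git rev-parse --abbrev-ref HEAD   # prints rabbitmq-users-123-fix-doc-typo
```

In a real contribution you would push this branch to your fork (`git push origin rabbitmq-users-123-fix-doc-typo`) and open the pull request from there.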
17 | ||
18 | If what you are going to work on is a substantial change, please first ask the core team | |
19 | for their opinion on the [RabbitMQ mailing list](https://groups.google.com/forum/#!forum/rabbitmq-users). | |
20 | ||
21 | ||
22 | ## (Brief) Code of Conduct | |
23 | ||
24 | In one line: don't be a dick. | |
25 | ||
26 | Be respectful to the maintainers and other contributors. Open source | |
27 | contributors put long hours into developing projects and doing user | |
28 | support. Those projects and user support are available for free. We | |
29 | believe this deserves some respect. | |
30 | ||
31 | Be respectful to people of all races, genders, religious beliefs and | |
32 | political views. Regardless of how brilliant a pull request is | |
33 | technically, we will not tolerate disrespectful or aggressive | |
34 | behaviour. | |
35 | ||
36 | Contributors who violate this straightforward Code of Conduct will see | |
37 | their pull requests closed and locked. | |
38 | ||
39 | ||
40 | ## Contributor Agreement | |
41 | ||
42 | If you want to contribute a non-trivial change, please submit a signed copy of our | |
43 | [Contributor Agreement](https://github.com/rabbitmq/ca#how-to-submit) around the time | |
44 | you submit your pull request. This will make it much easier (in some cases, possible) | |
45 | for the RabbitMQ team at Pivotal to merge your contribution. | |
46 | ||
47 | ||
48 | ## Where to Ask Questions | |
49 | ||
50 | If something isn't clear, feel free to ask on our [mailing list](https://groups.google.com/forum/#!forum/rabbitmq-users). |
32 | 32 | %% {handshake_timeout, 10000}, |
33 | 33 | |
34 | 34 | %% Log levels (currently just used for connection logging). |
35 | %% One of 'info', 'warning', 'error' or 'none', in decreasing order | |
36 | %% of verbosity. Defaults to 'info'. | |
37 | %% | |
38 | %% {log_levels, [{connection, info}]}, | |
35 | %% One of 'debug', 'info', 'warning', 'error' or 'none', in decreasing | |
36 | %% order of verbosity. Defaults to 'info'. | |
37 | %% | |
38 | %% {log_levels, [{connection, info}, {channel, info}]}, | |
39 | 39 | |
40 | 40 | %% Set to 'true' to perform reverse DNS lookups when accepting a |
41 | 41 | %% connection. Hostnames will then be shown instead of IP addresses |
107 | 107 | |
108 | 108 | %% This pertains to both the rabbitmq_auth_mechanism_ssl plugin and |
109 | 109 | %% STOMP ssl_cert_login configurations. See the rabbitmq_stomp |
110 | %% configuration section later in this fail and the README in | |
110 | %% configuration section later in this file and the README in | |
111 | 111 | %% https://github.com/rabbitmq/rabbitmq-auth-mechanism-ssl for further |
112 | 112 | %% details. |
113 | 113 | %% |
219 | 219 | %% |
220 | 220 | %% {cluster_nodes, {['rabbit@my.host.com'], disc}}, |
221 | 221 | |
222 | %% Interval (in milliseconds) at which we send keepalive messages | |
223 | %% to other cluster members. Note that this is not the same thing | |
224 | %% as net_ticktime; missed keepalive messages will not cause nodes | |
225 | %% to be considered down. | |
226 | %% | |
227 | %% {cluster_keepalive_interval, 10000}, | |
228 | ||
222 | 229 | %% Set (internal) statistics collection granularity. |
223 | 230 | %% |
224 | 231 | %% {collect_statistics, none}, |
234 | 241 | %% Timeout used when waiting for Mnesia tables in a cluster to |
235 | 242 | %% become available. |
236 | 243 | %% |
237 | %% {mnesia_table_loading_timeout, 30000} | |
244 | %% {mnesia_table_loading_timeout, 30000}, | |
245 | ||
246 | %% Size in bytes below which to embed messages in the queue index. See | |
247 | %% http://www.rabbitmq.com/persistence-conf.html | |
248 | %% | |
249 | %% {queue_index_embed_msgs_below, 4096} | |
238 | 250 | |
239 | 251 | ]}, |
240 | 252 | |
405 | 417 | %% ---------------------------------------------------------------------------- |
406 | 418 | %% RabbitMQ MQTT Adapter |
407 | 419 | %% |
408 | %% See http://hg.rabbitmq.com/rabbitmq-mqtt/file/stable/README.md for details | |
420 | %% See https://github.com/rabbitmq/rabbitmq-mqtt/blob/stable/README.md | |
421 | %% for details | |
409 | 422 | %% ---------------------------------------------------------------------------- |
410 | 423 | |
411 | 424 | {rabbitmq_mqtt, |
459 | 472 | %% ---------------------------------------------------------------------------- |
460 | 473 | %% RabbitMQ AMQP 1.0 Support |
461 | 474 | %% |
462 | %% See http://hg.rabbitmq.com/rabbitmq-amqp1.0/file/default/README.md | |
475 | %% See https://github.com/rabbitmq/rabbitmq-amqp1.0/blob/stable/README.md | |
463 | 476 | %% for details |
464 | 477 | %% ---------------------------------------------------------------------------- |
465 | 478 |
425 | 425 | </listitem> |
426 | 426 | </varlistentry> |
427 | 427 | <varlistentry> |
428 | <term><cmdsynopsis><command>rename_cluster_node</command> <arg choice="req">oldnode1</arg> <arg choice="req">newnode1</arg> <arg choice="opt">oldnode2</arg> <arg choice="opt">newnode2 ...</arg></cmdsynopsis></term> | |
429 | <listitem> | |
430 | <para> | |
431 | Supports renaming of cluster nodes in the local database. | |
432 | </para> | |
433 | <para> | |
434 | This subcommand causes rabbitmqctl to temporarily become | |
435 | the node in order to make the change. The local cluster | |
436 | node must therefore be completely stopped; other nodes | |
437 | can be online or offline. | |
438 | </para> | |
439 | <para> | |
440 | This subcommand takes an even number of arguments, in | |
441 | pairs representing the old and new names for nodes. You | |
442 | must specify the old and new names for this node and for | |
443 | any other nodes that are stopped and being renamed at | |
444 | the same time. | |
445 | </para> | |
446 | <para> | |
447 | It is possible to stop all nodes and rename them all | |
448 | simultaneously (in which case old and new names for all | |
449 | nodes must be given to every node) or stop and rename | |
450 | nodes one at a time (in which case each node only needs | |
451 | to be told how its own name is changing). | |
452 | </para> | |
453 | <para role="example-prefix">For example:</para> | |
454 | <screen role="example">rabbitmqctl rename_cluster_node rabbit@misshelpful rabbit@cordelia</screen> | |
455 | <para role="example"> | |
456 | This command will rename the node | |
457 | <command>rabbit@misshelpful</command> to the node | |
458 | <command>rabbit@cordelia</command>. | |
459 | </para> | |
460 | </listitem> | |
461 | </varlistentry> | |
462 | <varlistentry> | |
428 | 463 | <term><cmdsynopsis><command>update_cluster_nodes</command> <arg choice="req">clusternode</arg></cmdsynopsis> |
429 | 464 | </term> |
430 | 465 | <listitem> |
1225 | 1260 | <listitem><para>Like <command>message_bytes</command> but counting only those messages which are persistent.</para></listitem> |
1226 | 1261 | </varlistentry> |
1227 | 1262 | <varlistentry> |
1263 | <term>disk_reads</term> | |
1264 | <listitem><para>Total number of times messages have been read from disk by this queue since it started.</para></listitem> | |
1265 | </varlistentry> | |
1266 | <varlistentry> | |
1267 | <term>disk_writes</term> | |
1268 | <listitem><para>Total number of times messages have been written to disk by this queue since it started.</para></listitem> | |
1269 | </varlistentry> | |
1270 | <varlistentry> | |
1228 | 1271 | <term>consumers</term> |
1229 | 1272 | <listitem><para>Number of consumers.</para></listitem> |
1230 | 1273 | </varlistentry> |
1812 | 1855 | </varlistentry> |
1813 | 1856 | </variablelist> |
1814 | 1857 | <para> |
1815 | Starts tracing. | |
1858 | Starts tracing. Note that the trace state is not | |
1859 | persistent; it will revert to being off if the server is | |
1860 | restarted. | |
1816 | 1861 | </para> |
1817 | 1862 | </listitem> |
1818 | 1863 | </varlistentry> |
0 | 0 | {application, rabbit, %% -*- erlang -*- |
1 | 1 | [{description, "RabbitMQ"}, |
2 | 2 | {id, "RabbitMQ"}, |
3 | {vsn, "3.4.3"}, | |
3 | {vsn, "3.5.1"}, | |
4 | 4 | {modules, []}, |
5 | 5 | {registered, [rabbit_amqqueue_sup, |
6 | 6 | rabbit_log, |
28 | 28 | {heartbeat, 580}, |
29 | 29 | {msg_store_file_size_limit, 16777216}, |
30 | 30 | {queue_index_max_journal_entries, 65536}, |
31 | {queue_index_embed_msgs_below, 4096}, | |
31 | 32 | {default_user, <<"guest">>}, |
32 | 33 | {default_pass, <<"guest">>}, |
33 | 34 | {default_user_tags, [administrator]}, |
13 | 13 | %% Copyright (c) 2007-2014 GoPivotal, Inc. All rights reserved. |
14 | 14 | %% |
15 | 15 | |
16 | %% Passed around most places | |
16 | 17 | -record(user, {username, |
17 | 18 | tags, |
18 | auth_backend, %% Module this user came from | |
19 | impl %% Scratch space for that module | |
20 | }). | |
19 | authz_backends}). %% List of {Module, AuthUserImpl} pairs | |
21 | 20 | |
21 | %% Passed to auth backends | |
22 | -record(auth_user, {username, | |
23 | tags, | |
24 | impl}). | |
25 | ||
26 | %% Implementation for the internal auth backend | |
22 | 27 | -record(internal_user, {username, password_hash, tags}). |
23 | 28 | -record(permission, {configure, write, read}). |
24 | 29 | -record(user_vhost, {username, virtual_host}). |
51 | 56 | arguments, %% immutable |
52 | 57 | pid, %% durable (just so we know home node) |
53 | 58 | slave_pids, sync_slave_pids, %% transient |
54 | down_slave_nodes, %% durable | |
59 | recoverable_slaves, %% durable | |
55 | 60 | policy, %% durable, implicit update as above |
56 | 61 | gm_pids, %% transient |
57 | 62 | decorators, %% transient, recalculated as above |
82 | 87 | is_persistent}). |
83 | 88 | |
84 | 89 | -record(ssl_socket, {tcp, ssl}). |
85 | -record(delivery, {mandatory, confirm, sender, message, msg_seq_no}). | |
90 | -record(delivery, {mandatory, confirm, sender, message, msg_seq_no, flow}). | |
86 | 91 | -record(amqp_error, {name, explanation = "", method = none}). |
87 | 92 | |
88 | 93 | -record(event, {type, props, reference = undefined, timestamp}). |
34 | 34 | toke \ |
35 | 35 | webmachine-wrapper |
36 | 36 | |
37 | BRANCH:=default | |
38 | ||
39 | HG_CORE_REPOBASE:=$(shell dirname `hg paths default 2>/dev/null` 2>/dev/null) | |
40 | ifndef HG_CORE_REPOBASE | |
41 | HG_CORE_REPOBASE:=http://hg.rabbitmq.com/ | |
37 | BRANCH:=master | |
38 | ||
39 | UMBRELLA_REPO_FETCH:=$(shell git remote -v 2>/dev/null | awk '/^origin\t.+ \(fetch\)$$/ { print $$2; }') | |
40 | ifdef UMBRELLA_REPO_FETCH | |
41 | GIT_CORE_REPOBASE_FETCH:=$(shell dirname $(UMBRELLA_REPO_FETCH)) | |
42 | GIT_CORE_SUFFIX_FETCH:=$(suffix $(UMBRELLA_REPO_FETCH)) | |
43 | else | |
44 | GIT_CORE_REPOBASE_FETCH:=https://github.com/rabbitmq | |
45 | GIT_CORE_SUFFIX_FETCH:=.git | |
46 | endif | |
47 | ||
48 | UMBRELLA_REPO_PUSH:=$(shell git remote -v 2>/dev/null | awk '/^origin\t.+ \(push\)$$/ { print $$2; }') | |
49 | ifdef UMBRELLA_REPO_PUSH | |
50 | GIT_CORE_REPOBASE_PUSH:=$(shell dirname $(UMBRELLA_REPO_PUSH)) | |
51 | GIT_CORE_SUFFIX_PUSH:=$(suffix $(UMBRELLA_REPO_PUSH)) | |
52 | else | |
53 | GIT_CORE_REPOBASE_PUSH:=git@github.com:rabbitmq | |
54 | GIT_CORE_SUFFIX_PUSH:=.git | |
42 | 55 | endif |
43 | 56 | |
44 | 57 | VERSION:=0.0.0 |
69 | 82 | rm -rf $(PLUGINS_SRC_DIST_DIR) |
70 | 83 | mkdir -p $(PLUGINS_SRC_DIST_DIR)/licensing |
71 | 84 | |
72 | rsync -a --exclude '.hg*' rabbitmq-erlang-client $(PLUGINS_SRC_DIST_DIR)/ | |
85 | rsync -a --exclude '.git*' rabbitmq-erlang-client $(PLUGINS_SRC_DIST_DIR)/ | |
73 | 86 | touch $(PLUGINS_SRC_DIST_DIR)/rabbitmq-erlang-client/.srcdist_done |
74 | 87 | |
75 | rsync -a --exclude '.hg*' rabbitmq-server $(PLUGINS_SRC_DIST_DIR)/ | |
88 | rsync -a --exclude '.git*' rabbitmq-server $(PLUGINS_SRC_DIST_DIR)/ | |
76 | 89 | touch $(PLUGINS_SRC_DIST_DIR)/rabbitmq-server/.srcdist_done |
77 | 90 | |
78 | 91 | $(MAKE) -f all-packages.mk copy-srcdist VERSION=$(VERSION) PLUGINS_SRC_DIST_DIR=$(PLUGINS_SRC_DIST_DIR) |
79 | 92 | cp Makefile *.mk generate* $(PLUGINS_SRC_DIST_DIR)/ |
80 | 93 | echo "This is the released version of rabbitmq-public-umbrella. \ |
81 | You can clone the full version with: hg clone http://hg.rabbitmq.com/rabbitmq-public-umbrella" > $(PLUGINS_SRC_DIST_DIR)/README | |
82 | ||
83 | PRESERVE_CLONE_DIR=1 make -C $(PLUGINS_SRC_DIST_DIR) clean | |
94 | You can clone the full version with: git clone https://github.com/rabbitmq/rabbitmq-public-umbrella.git" > $(PLUGINS_SRC_DIST_DIR)/README | |
95 | ||
96 | PRESERVE_CLONE_DIR=1 $(MAKE) -C $(PLUGINS_SRC_DIST_DIR) clean | |
84 | 97 | rm -rf $(PLUGINS_SRC_DIST_DIR)/rabbitmq-server |
85 | 98 | |
86 | 99 | #---------------------------------- |
104 | 117 | #---------------------------------- |
105 | 118 | |
106 | 119 | $(REPOS): |
107 | hg clone $(HG_CORE_REPOBASE)/$@ | |
120 | retries=5; \ | |
121 | while ! git clone $(GIT_CORE_REPOBASE_FETCH)/$@$(GIT_CORE_SUFFIX_FETCH); do \ | |
122 | retries=$$((retries - 1)); \ | |
123 | if test "$$retries" = 0; then break; fi; \ | |
124 | sleep 1; \ | |
125 | done | |
126 | test -d $@ | |
127 | global_user_name="$$(git config --global user.name)"; \ | |
128 | global_user_email="$$(git config --global user.email)"; \ | |
129 | user_name="$$(git config user.name)"; \ | |
130 | user_email="$$(git config user.email)"; \ | |
131 | cd $@ && \ | |
132 | git remote set-url --push origin $(GIT_CORE_REPOBASE_PUSH)/$@$(GIT_CORE_SUFFIX_PUSH) && \ | |
133 | if test "$$global_user_name" != "$$user_name"; then git config user.name "$$user_name"; fi && \ | |
134 | if test "$$global_user_email" != "$$user_email"; then git config user.email "$$user_email"; fi | |
135 | ||
108 | 136 | |
109 | 137 | .PHONY: checkout |
110 | 138 | checkout: $(REPOS) |
139 | ||
140 | .PHONY: list-repos | |
141 | list-repos: | |
142 | @for repo in $(REPOS); do echo $$repo; done | |
143 | ||
144 | .PHONY: sync-gituser | |
145 | sync-gituser: | |
146 | @global_user_name="$$(git config --global user.name)"; \ | |
147 | global_user_email="$$(git config --global user.email)"; \ | |
148 | user_name="$$(git config user.name)"; \ | |
149 | user_email="$$(git config user.email)"; \ | |
150 | for repo in $(REPOS); do \ | |
151 | cd $$repo && \ | |
152 | git config --unset user.name && \ | |
153 | git config --unset user.email && \ | |
154 | if test "$$global_user_name" != "$$user_name"; then git config user.name "$$user_name"; fi && \ | |
155 | if test "$$global_user_email" != "$$user_email"; then git config user.email "$$user_email"; fi && \ | |
156 | cd ..; done | |
157 | ||
158 | .PHONY: sync-gitremote | |
159 | sync-gitremote: | |
160 | @for repo in $(REPOS); do \ | |
161 | cd $$repo && \ | |
162 | git remote set-url --fetch origin $(GIT_CORE_REPOBASE_FETCH)/$$repo$(GIT_CORE_SUFFIX_FETCH) && \ | |
163 | git remote set-url --push origin $(GIT_CORE_REPOBASE_PUSH)/$$repo$(GIT_CORE_SUFFIX_PUSH) && \ | |
164 | cd ..; done | |
111 | 165 | |
112 | 166 | #---------------------------------- |
113 | 167 | # Subrepository management |
136 | 190 | # Do not allow status to fork with -j otherwise output will be garbled |
137 | 191 | .PHONY: status |
138 | 192 | status: checkout |
139 | $(foreach DIR,. $(REPOS), \ | |
140 | (cd $(DIR); OUT=$$(hg st -mad); \ | |
141 | if \[ ! -z "$$OUT" \]; then echo "\n$(DIR):\n$$OUT"; fi) &&) true | |
193 | @for repo in . $(REPOS); do \ | |
194 | echo "$$repo:"; \ | |
195 | cd "$$repo" && git status -s && cd - >/dev/null; \ | |
196 | done | |
142 | 197 | |
143 | 198 | .PHONY: pull |
144 | 199 | pull: $(foreach DIR,. $(REPOS),$(DIR)+pull) |
145 | 200 | |
146 | $(eval $(call repo_targets,. $(REPOS),pull,| %,(cd % && hg pull))) | |
201 | $(eval $(call repo_targets,. $(REPOS),pull,| %,\ | |
202 | (cd % && git pull --ff-only))) | |
147 | 203 | |
148 | 204 | .PHONY: update |
149 | update: $(foreach DIR,. $(REPOS),$(DIR)+update) | |
150 | ||
151 | $(eval $(call repo_targets,. $(REPOS),update,%+pull,(cd % && hg up))) | |
205 | update: pull | |
152 | 206 | |
153 | 207 | .PHONY: named_update |
154 | 208 | named_update: $(foreach DIR,. $(REPOS),$(DIR)+named_update) |
155 | 209 | |
156 | $(eval $(call repo_targets,. $(REPOS),named_update,%+pull,\ | |
157 | (cd % && hg up -C $(BRANCH)))) | |
210 | $(eval $(call repo_targets,. $(REPOS),named_update,| %,\ | |
211 | (cd % && git fetch -p && git checkout $(BRANCH) && \ | |
212 | (test "$$$$(git branch | grep '^*')" = "* (detached from $(BRANCH))" || \ | |
213 | git pull --ff-only)))) | |
158 | 214 | |
159 | 215 | .PHONY: tag |
160 | 216 | tag: $(foreach DIR,. $(REPOS),$(DIR)+tag) |
161 | 217 | |
162 | $(eval $(call repo_targets,. $(REPOS),tag,| %,(cd % && hg tag $(TAG)))) | |
218 | $(eval $(call repo_targets,. $(REPOS),tag,| %,\ | |
219 | (cd % && git tag $(TAG)))) | |
163 | 220 | |
164 | 221 | .PHONY: push |
165 | 222 | push: $(foreach DIR,. $(REPOS),$(DIR)+push) |
166 | 223 | |
167 | # "|| true" since hg push fails if there are no changes | |
168 | $(eval $(call repo_targets,. $(REPOS),push,| %,(cd % && hg push -f || true))) | |
224 | $(eval $(call repo_targets,. $(REPOS),push,| %,\ | |
225 | (cd % && git push && git push --tags))) | |
169 | 226 | |
170 | 227 | .PHONY: checkin |
171 | 228 | checkin: $(foreach DIR,. $(REPOS),$(DIR)+checkin) |
172 | 229 | |
173 | $(eval $(call repo_targets,. $(REPOS),checkin,| %,(cd % && hg ci))) | |
230 | $(eval $(call repo_targets,. $(REPOS),checkin,| %,\ | |
231 | (cd % && (test -z "$$$$(git status -s -uno)" || git commit -a)))) |
0 | This is the released version of rabbitmq-public-umbrella. You can clone the full version with: hg clone http://hg.rabbitmq.com/rabbitmq-public-umbrella | |
0 | This is the released version of rabbitmq-public-umbrella. You can clone the full version with: git clone https://github.com/rabbitmq/rabbitmq-public-umbrella.git |
236 | 236 | # Work around weird github breakage (bug 25264) |
237 | 237 | cd $(CLONE_DIR) && git pull |
238 | 238 | $(if $(UPSTREAM_REVISION),cd $(CLONE_DIR) && git checkout $(UPSTREAM_REVISION)) |
239 | $(if $(WRAPPER_PATCHES),$(foreach F,$(WRAPPER_PATCHES),patch --no-backup-if-mismatch -d $(CLONE_DIR) -p1 <$(PACKAGE_DIR)/$(F) &&) :) | |
239 | $(if $(WRAPPER_PATCHES),$(foreach F,$(WRAPPER_PATCHES),patch -E -z .umbrella-orig -d $(CLONE_DIR) -p1 <$(PACKAGE_DIR)/$(F) &&) :) | |
240 | find $(CLONE_DIR) -name "*.umbrella-orig" -delete | |
240 | 241 | touch $$@ |
241 | 242 | endif # UPSTREAM_GIT |
242 | 243 | |
244 | 245 | $(CLONE_DIR)/.done: |
245 | 246 | rm -rf $(CLONE_DIR) |
246 | 247 | hg clone -r $(or $(UPSTREAM_REVISION),default) $(UPSTREAM_HG) $(CLONE_DIR) |
247 | $(if $(WRAPPER_PATCHES),$(foreach F,$(WRAPPER_PATCHES),patch --no-backup-if-mismatch -d $(CLONE_DIR) -p1 <$(PACKAGE_DIR)/$(F) &&) :) | |
248 | $(if $(WRAPPER_PATCHES),$(foreach F,$(WRAPPER_PATCHES),patch -E -z .umbrella-orig -d $(CLONE_DIR) -p1 <$(PACKAGE_DIR)/$(F) &&) :) | |
249 | find $(CLONE_DIR) -name "*.umbrella-orig" -delete | |
248 | 250 | touch $$@ |
249 | 251 | endif # UPSTREAM_HG |
250 | 252 |
3 | 3 | |
4 | 4 | # Status |
5 | 5 | |
6 | This is a prototype. You can send and receive messages between 0-9-1 | |
7 | or 0-8 clients and 1.0 clients with broadly the same semantics as you | |
8 | would get with 0-9-1. | |
6 | This is mostly a prototype, but it is supported. We describe it as a | |
7 | prototype because it has seen less real-world use, and thus less | |
8 | battle-testing, than the STOMP or MQTT plugins. However, bugs do get | |
9 | fixed as they are reported. | |
10 | ||
11 | You can send and receive messages between 0-9-1 or 0-8 clients and 1.0 | |
12 | clients with broadly the same semantics as you would get with 0-9-1. | |
9 | 13 | |
10 | 14 | # Building and configuring |
11 | 15 | |
156 | 160 | | "/topic/" RK Publish to amq.topic with routing key RK |
157 | 161 | | "/amq/queue/" Q Publish to default exchange with routing key Q |
158 | 162 | | "/queue/" Q Publish to default exchange with routing key Q |
163 | | Q (no leading slash) Publish to default exchange with routing key Q | |
159 | 164 | | "/queue" Publish to default exchange with message subj as routing key |
160 | 165 | |
161 | 166 | For sources, addresses are: |
164 | 169 | | "/topic/" RK Consume from temp queue bound to amq.topic with routing key RK |
165 | 170 | | "/amq/queue/" Q Consume from Q |
166 | 171 | | "/queue/" Q Consume from Q |
172 | | Q (no leading slash) Consume from Q | |
173 | ||
174 | The intent is that the source and destination address formats should be | |
175 | mostly the same as those supported by the STOMP plugin, to the extent | |
176 | permitted by AMQP 1.0 semantics. | |
167 | 177 | |
168 | 178 | ## Virtual Hosts |
169 | 179 |
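The target-address mappings in the tables above can be sketched as a small parser. This is an illustrative Python sketch of just the rows shown (the plugin's actual resolution code is Erlang; the function name and return shape here are assumptions):

```python
def parse_target(address, subject=None):
    """Map an AMQP 1.0 target address to (exchange, routing_key),
    following the table above. "" denotes the default exchange."""
    if address.startswith("/topic/"):
        return ("amq.topic", address[len("/topic/"):])
    if address.startswith("/amq/queue/"):
        return ("", address[len("/amq/queue/"):])
    if address == "/queue":
        # routing key is taken from the message subject
        return ("", subject or "")
    if address.startswith("/queue/"):
        return ("", address[len("/queue/"):])
    if not address.startswith("/"):
        # bare queue name, no leading slash
        return ("", address)
    raise ValueError("unrecognised address: %r" % address)
```

The source-address table follows the same prefixes, differing only in whether a queue is consumed from directly or bound to amq.topic first.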
86 | 86 | def print_define(opt, source): |
87 | 87 | (name, value) = opt |
88 | 88 | if source == 'symbol': |
89 | quoted = '"%s"' % value | |
89 | quoted = '<<"%s">>' % value | |
90 | 90 | else: |
91 | 91 | quoted = value |
92 | 92 | print """-define(V_1_0_%s, {%s, %s}).""" % (name, source, quoted) |
123 | 123 | <<16#a3>>. |
124 | 124 | |
125 | 125 | generate(symbol, Value) -> |
126 | [<<(length(Value)):8>>, list_to_binary(Value)]. | |
126 | [<<(size(Value)):8>>, Value]. |
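For context on the change above: an AMQP 1.0 sym8 value is encoded as the 0xA3 constructor byte, a one-byte length, then the symbol's bytes, which is why the generator now takes the `size` of a binary rather than the `length` of a list. A minimal Python sketch of that wire format (illustrative only, not the plugin's code):

```python
def encode_sym8(value: bytes) -> bytes:
    """Encode an AMQP 1.0 sym8: 0xA3 constructor, one-byte
    length, then the raw symbol bytes."""
    if len(value) > 255:
        # longer symbols need the sym32 (0xB3) encoding instead
        raise ValueError("sym8 symbols are limited to 255 bytes")
    return b"\xa3" + bytes([len(value)]) + value
```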
63 | 63 | [{symbol, symbolify(K)} || K <- rabbit_amqp1_0_framing0:fields(Record)]. |
64 | 64 | |
65 | 65 | symbolify(FieldName) when is_atom(FieldName) -> |
66 | re:replace(atom_to_list(FieldName), "_", "-", [{return,list}, global]). | |
66 | re:replace(atom_to_list(FieldName), "_", "-", [{return,binary}, global]). | |
67 | 67 | |
68 | 68 | %% TODO: in fields of composite types with multiple=true, "a null |
69 | 69 | %% value and a zero-length array (with a correct type for its |
104 | 104 | decode_map(Fields) -> |
105 | 105 | [{decode(K), decode(V)} || {K, V} <- Fields]. |
106 | 106 | |
107 | encode_described(list, ListOrNumber, Frame) -> | |
108 | Desc = descriptor(ListOrNumber), | |
109 | {described, Desc, | |
107 | encode_described(list, CodeNumber, Frame) -> | |
108 | {described, {ulong, CodeNumber}, | |
110 | 109 | {list, lists:map(fun encode/1, tl(tuple_to_list(Frame)))}}; |
111 | encode_described(map, ListOrNumber, Frame) -> | |
112 | Desc = descriptor(ListOrNumber), | |
113 | {described, Desc, | |
110 | encode_described(map, CodeNumber, Frame) -> | |
111 | {described, {ulong, CodeNumber}, | |
114 | 112 | {map, lists:zip(keys(Frame), |
115 | 113 | lists:map(fun encode/1, tl(tuple_to_list(Frame))))}}; |
116 | encode_described(binary, ListOrNumber, #'v1_0.data'{content = Content}) -> | |
117 | Desc = descriptor(ListOrNumber), | |
118 | {described, Desc, {binary, Content}}; | |
119 | encode_described('*', ListOrNumber, #'v1_0.amqp_value'{content = Content}) -> | |
120 | Desc = descriptor(ListOrNumber), | |
121 | {described, Desc, Content}; | |
122 | encode_described(annotations, ListOrNumber, Frame) -> | |
123 | encode_described(map, ListOrNumber, Frame). | |
114 | encode_described(binary, CodeNumber, #'v1_0.data'{content = Content}) -> | |
115 | {described, {ulong, CodeNumber}, {binary, Content}}; | |
116 | encode_described('*', CodeNumber, #'v1_0.amqp_value'{content = Content}) -> | |
117 | {described, {ulong, CodeNumber}, Content}; | |
118 | encode_described(annotations, CodeNumber, Frame) -> | |
119 | encode_described(map, CodeNumber, Frame). | |
124 | 120 | |
125 | 121 | encode(X) -> |
126 | 122 | rabbit_amqp1_0_framing0:encode(X). |
139 | 135 | number_for(X) -> |
140 | 136 | rabbit_amqp1_0_framing0:number_for(X). |
141 | 137 | |
142 | descriptor(Symbol) when is_list(Symbol) -> | |
143 | {symbol, Symbol}; | |
144 | descriptor(Number) when is_number(Number) -> | |
145 | {ulong, Number}. | |
146 | ||
147 | ||
148 | 138 | pprint(Thing) when is_tuple(Thing) -> |
149 | 139 | case rabbit_amqp1_0_framing0:fields(Thing) of |
150 | 140 | unknown -> Thing; |
46 | 46 | case ensure_target(Target, |
47 | 47 | #incoming_link{ |
48 | 48 | name = Name, |
49 | route_state = rabbit_routing_util:init_state() }, | |
49 | route_state = rabbit_routing_util:init_state(), | |
50 | delivery_count = InitTransfer }, | |
50 | 51 | DCh) of |
51 | {ok, ServerTarget, | |
52 | IncomingLink = #incoming_link{ delivery_count = InitTransfer }} -> | |
52 | {ok, ServerTarget, IncomingLink} -> | |
53 | 53 | {_, _Outcomes} = rabbit_amqp1_0_link_util:outcomes(Source), |
54 | 54 | %% Default is mixed |
55 | 55 | Confirm = |
80 | 80 | IncomingLink#incoming_link{recv_settle_mode = RcvSettleMode}, |
81 | 81 | {ok, [Attach, Flow], IncomingLink1, Confirm}; |
82 | 82 | {error, Reason} -> |
83 | rabbit_log:warning("AMQP 1.0 attach rejected ~p~n", [Reason]), | |
84 | 83 | %% TODO proper link establishment protocol here? |
85 | 84 | protocol_error(?V_1_0_AMQP_ERROR_INVALID_FIELD, |
86 | 85 | "Attach rejected: ~p", [Reason]) |
193 | 192 | timeout = _Timeout}, |
194 | 193 | Link = #incoming_link{ route_state = RouteState }, DCh) -> |
195 | 194 | DeclareParams = [{durable, rabbit_amqp1_0_link_util:durable(Durable)}, |
196 | {check_exchange, true}], | |
195 | {check_exchange, true}, | |
196 | {nowait, false}], | |
197 | 197 | case Dynamic of |
198 | 198 | true -> |
199 | 199 | protocol_error(?V_1_0_AMQP_ERROR_NOT_IMPLEMENTED, |
225 | 225 | E |
226 | 226 | end; |
227 | 227 | _Else -> |
228 | {error, {unknown_address, Address}} | |
228 | {error, {address_not_utf8_string, Address}} | |
229 | 229 | end. |
230 | 230 | |
231 | 231 | incoming_flow(#incoming_link{ delivery_count = Count }, Handle) -> |
90 | 90 | protocol_error(?V_1_0_AMQP_ERROR_INTERNAL_ERROR, |
91 | 91 | "Consume failed: ~p", [Fail]) |
92 | 92 | end; |
93 | {error, _Reason} -> | |
94 | %% TODO Deal with this properly -- detach and what have you | |
95 | {ok, [#'v1_0.attach'{source = undefined}]} | |
93 | {error, Reason} -> | |
94 | %% TODO proper link establishment protocol here? | |
95 | protocol_error(?V_1_0_AMQP_ERROR_INVALID_FIELD, | |
96 | "Attach rejected: ~p", [Reason]) | |
96 | 97 | end. |
97 | 98 | |
98 | 99 | credit_drained(#'basic.credit_drained'{credit_drained = CreditDrained}, |
155 | 156 | timeout = _Timeout}, |
156 | 157 | Link = #outgoing_link{ route_state = RouteState }, DCh) -> |
157 | 158 | DeclareParams = [{durable, rabbit_amqp1_0_link_util:durable(Durable)}, |
158 | {check_exchange, true}], | |
159 | {check_exchange, true}, | |
160 | {nowait, false}], | |
159 | 161 | case Dynamic of |
160 | 162 | true -> protocol_error(?V_1_0_AMQP_ERROR_NOT_IMPLEMENTED, |
161 | 163 | "Dynamic sources not supported", []); |
175 | 177 | ER = rabbit_routing_util:parse_routing(Dest), |
176 | 178 | ok = rabbit_routing_util:ensure_binding(Queue, ER, DCh), |
177 | 179 | {ok, Source, Link#outgoing_link{route_state = RouteState1, |
178 | queue = Queue}} | |
180 | queue = Queue}}; | |
181 | {error, _} = E -> | |
182 | E | |
179 | 183 | end; |
180 | 184 | _ -> |
181 | {error, {unknown_address, Address}} | |
185 | {error, {address_not_utf8_string, Address}} | |
182 | 186 | end. |
183 | 187 | |
184 | 188 | delivery(Deliver = #'basic.deliver'{delivery_tag = DeliveryTag, |
517 | 517 | end, |
518 | 518 | case Size of |
519 | 519 | 8 -> % length inclusive |
520 | {State, {frame_header_1_0, Mode}, 8}; %% heartbeat | |
520 | State; %% heartbeat | |
521 | 521 | _ -> |
522 | 522 | switch_callback(State, {frame_payload_1_0, Mode, DOff, Channel}, Size - 8) |
523 | 523 | end; |
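The hunk above relies on the AMQP 1.0 framing rule that a frame whose size field is exactly 8 (the header alone; the size is inclusive) has no body and serves as a heartbeat. A sketch of the 8-byte frame header layout in Python (field names are mine, not the plugin's):

```python
import struct

def parse_frame_header(header: bytes):
    """Unpack an AMQP 1.0 frame header: 4-byte size (inclusive of the
    header itself), 1-byte doff, 1-byte frame type, 2-byte channel."""
    size, doff, ftype, channel = struct.unpack(">IBBH", header)
    return {"size": size, "doff": doff, "type": ftype,
            "channel": channel, "heartbeat": size == 8}
```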
544 | 544 | Ms = {array, symbol, |
545 | 545 | case application:get_env(rabbitmq_amqp1_0, default_user) of |
546 | 546 | {ok, none} -> []; |
547 | {ok, _} -> ["ANONYMOUS"] | |
548 | end ++ [ atom_to_list(M) || M <- auth_mechanisms(Sock)]}, | |
547 | {ok, _} -> [<<"ANONYMOUS">>] | |
548 | end ++ | |
549 | [list_to_binary(atom_to_list(M)) || M <- auth_mechanisms(Sock)]}, | |
549 | 550 | Mechanisms = #'v1_0.sasl_mechanisms'{sasl_server_mechanisms = Ms}, |
550 | 551 | ok = send_on_channel0(Sock, Mechanisms, rabbit_amqp1_0_sasl), |
551 | 552 | start_1_0_connection0(sasl, State); |
146 | 146 | catch exit:Reason = #'v1_0.error'{} -> |
147 | 147 | %% TODO shut down nicely like rabbit_channel |
148 | 148 | End = #'v1_0.end'{ error = Reason }, |
149 | rabbit_log:warning("Closing session for connection ~p: ~p~n", | |
149 | rabbit_log:warning("Closing session for connection ~p:~n~p~n", | |
150 | 150 | [ReaderPid, Reason]), |
151 | 151 | ok = rabbit_amqp1_0_writer:send_command_sync(Sock, End), |
152 | 152 | {stop, normal, State}; |
241 | 241 | requeue = false}; |
242 | 242 | #'v1_0.released'{} -> |
243 | 243 | #'basic.reject'{delivery_tag = DeliveryTag, |
244 | requeue = true} | |
244 | requeue = true}; | |
245 | _ -> | |
246 | protocol_error( | |
247 | ?V_1_0_AMQP_ERROR_INVALID_FIELD, | |
248 | "Unrecognised state: ~p~n" | |
249 | "Disposition was: ~p~n", [Outcome, Disp]) | |
245 | 250 | end) |
246 | 251 | end, |
247 | 252 | case rabbit_amqp1_0_session:settle(Disp, session(State), AckFun) of |
12 | 12 | mv build/tmp/$(CLIENT_DIR)/jars/*.jar build/lib |
13 | 13 | rm -rf build/tmp |
14 | 14 | cp ../lib-java/*.jar build/lib |
15 | (cd ../../../rabbitmq-java-client && ant dist) | |
16 | cp ../../../rabbitmq-java-client/build/dist/rabbitmq-client.jar build/lib | |
15 | 17 | |
16 | 18 | $(CLIENT_PKG): |
17 | 19 | @echo |

0 | 0 | package com.rabbitmq.amqp1_0.tests.swiftmq; |
1 | 1 | |
2 | import com.rabbitmq.client.*; | |
2 | 3 | import com.swiftmq.amqp.AMQPContext; |
3 | 4 | import com.swiftmq.amqp.v100.client.*; |
5 | import com.swiftmq.amqp.v100.client.Connection; | |
6 | import com.swiftmq.amqp.v100.client.Consumer; | |
4 | 7 | import com.swiftmq.amqp.v100.generated.messaging.message_format.*; |
5 | 8 | import com.swiftmq.amqp.v100.generated.messaging.message_format.Properties; |
6 | 9 | import com.swiftmq.amqp.v100.messaging.AMQPMessage; |
212 | 215 | route(QUEUE, "test", "", true); |
213 | 216 | route("test", "test", "", true); |
214 | 217 | |
215 | try { | |
216 | route(QUEUE, "/exchange/missing", "", false); | |
217 | fail("Missing exchange should fail"); | |
218 | } catch (Exception e) { } | |
219 | ||
220 | try { | |
221 | route("/exchange/missing/", QUEUE, "", false); | |
222 | fail("Missing exchange should fail"); | |
223 | } catch (Exception e) { } | |
224 | ||
225 | 218 | route("/topic/#.c.*", "/topic/a.b.c.d", "", true); |
226 | 219 | route("/topic/#.c.*", "/exchange/amq.topic", "a.b.c.d", true); |
227 | 220 | route("/exchange/amq.topic/#.y.*", "/topic/w.x.y.z", "", true); |
239 | 232 | route(QUEUE, "/exchange/amq.fanout", "", false); |
240 | 233 | route(QUEUE, "/exchange/amq.headers", "", false); |
241 | 234 | emptyQueue(QUEUE); |
235 | } | |
236 | ||
237 | public void testRoutingInvalidRoutes() throws Exception { | |
238 | ConnectionFactory factory = new ConnectionFactory(); | |
239 | com.rabbitmq.client.Connection connection = factory.newConnection(); | |
240 | Channel channel = connection.createChannel(); | |
241 | channel.queueDeclare("transient", false, false, false, null); | |
242 | connection.close(); | |
243 | ||
244 | for (String dest : Arrays.asList("/exchange/missing", "/queue/transient", "/fruit/orange")) { | |
245 | routeInvalidSource(dest); | |
246 | routeInvalidTarget(dest); | |
247 | } | |
242 | 248 | } |
243 | 249 | |
244 | 250 | private void emptyQueue(String q) throws Exception { |
290 | 296 | conn.close(); |
291 | 297 | } |
292 | 298 | |
299 | private void routeInvalidSource(String consumerSource) throws Exception { | |
300 | AMQPContext ctx = new AMQPContext(AMQPContext.CLIENT); | |
301 | Connection conn = new Connection(ctx, host, port, false); | |
302 | conn.connect(); | |
303 | Session s = conn.createSession(INBOUND_WINDOW, OUTBOUND_WINDOW); | |
304 | try { | |
305 | Consumer c = s.createConsumer(consumerSource, CONSUMER_LINK_CREDIT, QoS.AT_LEAST_ONCE, false, null); | |
306 | c.close(); | |
307 | fail("Source '" + consumerSource + "' should fail"); | |
308 | } | |
309 | catch (Exception e) { | |
310 | // no-op | |
311 | } | |
312 | finally { | |
313 | conn.close(); | |
314 | } | |
315 | } | |
316 | ||
317 | private void routeInvalidTarget(String producerTarget) throws Exception { | |
318 | AMQPContext ctx = new AMQPContext(AMQPContext.CLIENT); | |
319 | Connection conn = new Connection(ctx, host, port, false); | |
320 | conn.connect(); | |
321 | Session s = conn.createSession(INBOUND_WINDOW, OUTBOUND_WINDOW); | |
322 | try { | |
323 | Producer p = s.createProducer(producerTarget, QoS.AT_LEAST_ONCE); | |
324 | p.close(); | |
325 | fail("Target '" + producerTarget + "' should fail"); | |
326 | } | |
327 | catch (Exception e) { | |
328 | // no-op | |
329 | } | |
330 | finally { | |
331 | conn.close(); | |
332 | } | |
333 | } | |
334 | ||
293 | 335 | // TODO: generalise to a comparison of all immutable parts of messages |
294 | 336 | private boolean compareMessageData(AMQPMessage m1, AMQPMessage m2) throws IOException { |
295 | 337 | ByteArrayOutputStream b1 = new ByteArrayOutputStream(); |
5 | 5 | |
6 | 6 | sudo apt-get --yes purge slapd |
7 | 7 | sudo rm -rf /var/lib/ldap |
8 | sudo apt-get --yes install slapd | |
8 | sudo apt-get --yes install slapd ldap-utils | |
9 | 9 | sleep 1 |
10 | 10 | |
11 | 11 | DIR=$(dirname $0) |
1 | 1 | DEPS:=rabbitmq-server rabbitmq-erlang-client eldap-wrapper |
2 | 2 | |
3 | 3 | ifeq ($(shell nc -z localhost 389 && echo true),true) |
4 | WITH_BROKER_TEST_COMMANDS:=eunit:test(rabbit_auth_backend_ldap_test,[verbose]) | |
4 | WITH_BROKER_TEST_COMMANDS:=eunit:test([rabbit_auth_backend_ldap_unit_test,rabbit_auth_backend_ldap_test],[verbose]) | |
5 | 5 | WITH_BROKER_TEST_CONFIG:=$(PACKAGE_DIR)/etc/rabbit-test |
6 | else | |
7 | $(warning Not running LDAP tests; no LDAP server found on localhost) | |
6 | 8 | endif |
20 | 20 | -include_lib("eldap/include/eldap.hrl"). |
21 | 21 | -include_lib("rabbit_common/include/rabbit.hrl"). |
22 | 22 | |
23 | -behaviour(rabbit_auth_backend). | |
24 | ||
25 | -export([description/0]). | |
26 | -export([check_user_login/2, check_vhost_access/2, check_resource_access/3]). | |
23 | -behaviour(rabbit_authn_backend). | |
24 | -behaviour(rabbit_authz_backend). | |
25 | ||
26 | -export([user_login_authentication/2, user_login_authorization/1, | |
27 | check_vhost_access/3, check_resource_access/3]). | |
27 | 28 | |
28 | 29 | -define(L(F, A), log("LDAP " ++ F, A)). |
29 | 30 | -define(L1(F, A), log(" LDAP " ++ F, A)). |
35 | 36 | |
36 | 37 | %%-------------------------------------------------------------------- |
37 | 38 | |
38 | description() -> | |
39 | [{name, <<"LDAP">>}, | |
40 | {description, <<"LDAP authentication / authorisation">>}]. | |
41 | ||
42 | %%-------------------------------------------------------------------- | |
43 | ||
44 | check_user_login(Username, []) -> | |
39 | user_login_authentication(Username, []) -> | |
45 | 40 | %% Without password, e.g. EXTERNAL |
46 | 41 | ?L("CHECK: passwordless login for ~s", [Username]), |
47 | 42 | R = with_ldap(creds(none), |
50 | 45 | [Username, log_result(R)]), |
51 | 46 | R; |
52 | 47 | |
53 | check_user_login(Username, [{password, <<>>}]) -> | |
48 | user_login_authentication(Username, [{password, <<>>}]) -> | |
54 | 49 | %% Password "" is special in LDAP, see |
55 | 50 | %% https://tools.ietf.org/html/rfc4513#section-5.1.2 |
56 | 51 | ?L("CHECK: unauthenticated login for ~s", [Username]), |
57 | 52 | ?L("DECISION: unauthenticated login for ~s: denied", [Username]), |
58 | 53 | {refused, "user '~s' - unauthenticated bind not allowed", [Username]}; |
59 | 54 | |
60 | check_user_login(User, [{password, PW}]) -> | |
55 | user_login_authentication(User, [{password, PW}]) -> | |
61 | 56 | ?L("CHECK: login for ~s", [User]), |
62 | 57 | R = case dn_lookup_when() of |
63 | 58 | prebind -> UserDN = username_to_dn_prebind(User), |
69 | 64 | ?L("DECISION: login for ~s: ~p", [User, log_result(R)]), |
70 | 65 | R; |
71 | 66 | |
72 | check_user_login(Username, AuthProps) -> | |
67 | user_login_authentication(Username, AuthProps) -> | |
73 | 68 | exit({unknown_auth_props, Username, AuthProps}). |
74 | 69 | |
75 | check_vhost_access(User = #user{username = Username, | |
76 | impl = #impl{user_dn = UserDN}}, VHost) -> | |
70 | user_login_authorization(Username) -> | |
71 | case user_login_authentication(Username, []) of | |
72 | {ok, #auth_user{impl = Impl}} -> {ok, Impl}; | |
73 | Else -> Else | |
74 | end. | |
75 | ||
76 | check_vhost_access(User = #auth_user{username = Username, | |
77 | impl = #impl{user_dn = UserDN}}, | |
78 | VHost, _Sock) -> | |
77 | 79 | Args = [{username, Username}, |
78 | 80 | {user_dn, UserDN}, |
79 | 81 | {vhost, VHost}], |
83 | 85 | [log_vhost(Args), log_user(User), log_result(R)]), |
84 | 86 | R. |
85 | 87 | |
86 | check_resource_access(User = #user{username = Username, | |
87 | impl = #impl{user_dn = UserDN}}, | |
88 | check_resource_access(User = #auth_user{username = Username, | |
89 | impl = #impl{user_dn = UserDN}}, | |
88 | 90 | #resource{virtual_host = VHost, kind = Type, name = Name}, |
89 | 91 | Permission) -> |
90 | 92 | Args = [{username, Username}, |
132 | 134 | evaluate({in_group, DNPattern, "member"}, Args, User, LDAP); |
133 | 135 | |
134 | 136 | evaluate0({in_group, DNPattern, Desc}, Args, |
135 | #user{impl = #impl{user_dn = UserDN}}, LDAP) -> | |
137 | #auth_user{impl = #impl{user_dn = UserDN}}, LDAP) -> | |
136 | 138 | Filter = eldap:equalityMatch(Desc, UserDN), |
137 | 139 | DN = fill(DNPattern, Args), |
138 | 140 | R = object_exists(DN, Filter, LDAP), |
336 | 338 | unknown -> username_to_dn(Username, LDAP, dn_lookup_when()); |
337 | 339 | _ -> PrebindUserDN |
338 | 340 | end, |
339 | User = #user{username = Username, | |
340 | auth_backend = ?MODULE, | |
341 | impl = #impl{user_dn = UserDN, | |
342 | password = Password}}, | |
343 | TagRes = [begin | |
344 | ?L1("CHECK: does ~s have tag ~s?", [Username, Tag]), | |
345 | R = evaluate(Q, [{username, Username}, | |
346 | {user_dn, UserDN}], User, LDAP), | |
347 | ?L1("DECISION: does ~s have tag ~s? ~p", | |
348 | [Username, Tag, R]), | |
349 | {Tag, R} | |
350 | end || {Tag, Q} <- env(tag_queries)], | |
351 | case [E || {_, E = {error, _}} <- TagRes] of | |
352 | [] -> {ok, User#user{tags = [Tag || {Tag, true} <- TagRes]}}; | |
353 | [E | _] -> E | |
354 | end. | |
341 | User = #auth_user{username = Username, | |
342 | impl = #impl{user_dn = UserDN, | |
343 | password = Password}}, | |
344 | DTQ = fun (LDAPn) -> do_tag_queries(Username, UserDN, User, LDAPn) end, | |
345 | TagRes = case env(other_bind) of | |
346 | as_user -> DTQ(LDAP); | |
347 | _ -> with_ldap(creds(User), DTQ) | |
348 | end, | |
349 | case TagRes of | |
350 | {ok, L} -> case [E || {_, E = {error, _}} <- L] of | |
351 | [] -> Tags = [Tag || {Tag, true} <- L], | |
352 | {ok, User#auth_user{tags = Tags}}; | |
353 | [E | _] -> E | |
354 | end; | |
355 | E -> E | |
356 | end. | |
357 | ||
358 | do_tag_queries(Username, UserDN, User, LDAP) -> | |
359 | {ok, [begin | |
360 | ?L1("CHECK: does ~s have tag ~s?", [Username, Tag]), | |
361 | R = evaluate(Q, [{username, Username}, | |
362 | {user_dn, UserDN}], User, LDAP), | |
363 | ?L1("DECISION: does ~s have tag ~s? ~p", | |
364 | [Username, Tag, R]), | |
365 | {Tag, R} | |
366 | end || {Tag, Q} <- env(tag_queries)]}. | |
355 | 367 | |
356 | 368 | dn_lookup_when() -> case {env(dn_lookup_attribute), env(dn_lookup_bind)} of |
357 | 369 | {none, _} -> never; |
391 | 403 | |
392 | 404 | creds(none, as_user) -> |
393 | 405 | {error, "'other_bind' set to 'as_user' but no password supplied"}; |
394 | creds(#user{impl = #impl{user_dn = UserDN, password = Password}}, as_user) -> | |
395 | {ok, {UserDN, Password}}; | |
406 | creds(#auth_user{impl = #impl{user_dn = UserDN, password = PW}}, as_user) -> | |
407 | {ok, {UserDN, PW}}; | |
396 | 408 | creds(_, Creds) -> |
397 | 409 | {ok, Creds}. |
398 | 410 | |
407 | 419 | ?L2("template result: \"~s\"", [R]), |
408 | 420 | R. |
409 | 421 | |
410 | log_result({ok, #user{}}) -> ok; | |
411 | log_result(true) -> ok; | |
412 | log_result(false) -> denied; | |
413 | log_result({refused, _, _}) -> denied; | |
414 | log_result(E) -> E. | |
415 | ||
416 | log_user(#user{username = U}) -> rabbit_misc:format("\"~s\"", [U]). | |
422 | log_result({ok, #auth_user{}}) -> ok; | |
423 | log_result(true) -> ok; | |
424 | log_result(false) -> denied; | |
425 | log_result({refused, _, _}) -> denied; | |
426 | log_result(E) -> E. | |
427 | ||
428 | log_user(#auth_user{username = U}) -> rabbit_misc:format("\"~s\"", [U]). | |
417 | 429 | |
418 | 430 | log_vhost(Args) -> |
419 | 431 | rabbit_misc:format("access to vhost \"~s\"", [pget(vhost, Args)]). |
24 | 24 | Var = [[$\\, $$, ${] ++ atom_to_list(K) ++ [$}]], |
25 | 25 | fill(re:replace(Fmt, Var, [to_repl(V)], [global]), T). |
26 | 26 | |
27 | to_repl(V) when is_atom(V) -> | |
28 | atom_to_list(V); | |
29 | to_repl(V) -> | |
30 | V. | |
27 | to_repl(V) when is_atom(V) -> to_repl(atom_to_list(V)); | |
28 | to_repl(V) when is_binary(V) -> to_repl(binary_to_list(V)); | |
29 | to_repl([]) -> []; | |
30 | to_repl([$\\ | T]) -> [$\\, $\\ | to_repl(T)]; | |
31 | to_repl([$& | T]) -> [$\\, $& | to_repl(T)]; | |
32 | to_repl([H | T]) -> [H | to_repl(T)]. | |
33 |
0 | %% The contents of this file are subject to the Mozilla Public License | |
1 | %% Version 1.1 (the "License"); you may not use this file except in | |
2 | %% compliance with the License. You may obtain a copy of the License | |
3 | %% at http://www.mozilla.org/MPL/ | |
4 | %% | |
5 | %% Software distributed under the License is distributed on an "AS IS" | |
6 | %% basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See | |
7 | %% the License for the specific language governing rights and | |
8 | %% limitations under the License. | |
9 | %% | |
10 | %% The Original Code is RabbitMQ | |
11 | %% | |
12 | %% The Initial Developer of the Original Code is GoPivotal, Inc. | |
13 | %% Copyright (c) 2007-2014 GoPivotal, Inc. All rights reserved. | |
14 | %% | |
15 | ||
16 | -module(rabbit_auth_backend_ldap_unit_test). | |
17 | ||
18 | -include_lib("eunit/include/eunit.hrl"). | |
19 | ||
20 | fill_test() -> | |
21 | F = fun(Fmt, Args, Res) -> | |
22 | ?assertEqual(Res, rabbit_auth_backend_ldap_util:fill(Fmt, Args)) | |
23 | end, | |
24 | F("x${username}x", [{username, "ab"}], "xabx"), | |
25 | F("x${username}x", [{username, ab}], "xabx"), | |
26 | F("x${username}x", [{username, <<"ab">>}], "xabx"), | |
27 | F("x${username}x", [{username, ""}], "xx"), | |
28 | F("x${username}x", [{fusername, "ab"}], "x${username}x"), | |
29 | F("x${usernamex", [{username, "ab"}], "x${usernamex"), | |
30 | F("x${username}x", [{username, "a\\b"}], "xa\\bx"), | |
31 | F("x${username}x", [{username, "a&b"}], "xa&bx"), | |
32 | ok. |
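The unit test above exercises `fill/2` together with the escaping added to `to_repl`: Erlang's `re:replace` treats `\` and `&` specially in replacement strings, so they must be doubled. A rough Python analogue of the same `${name}` substitution (a callable replacement sidesteps the escaping issue; the names here are assumptions, not the plugin's API):

```python
import re

def fill(template, args):
    """Substitute ${name} placeholders from args; unknown or
    unterminated placeholders are left untouched."""
    def repl(match):
        value = args.get(match.group(1))
        if value is None:
            return match.group(0)  # leave ${unknown} as-is
        if isinstance(value, bytes):
            value = value.decode()
        # a callable's return value is used literally, so no
        # escaping of backslashes or backreferences is needed
        return str(value)
    return re.sub(r"\$\{(\w+)\}", repl, template)
```

This mirrors the eunit cases above, including the backslash and ampersand inputs.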
56 | 56 | Username = case rabbit_net:peercert(Sock) of |
57 | 57 | {ok, C} -> |
58 | 58 | case rabbit_ssl:peer_cert_auth_name(C) of |
59 | unsafe -> {refused, "configuration unsafe", []}; | |
60 | not_found -> {refused, "no name found", []}; | |
59 | unsafe -> {refused, none, | |
60 | "configuration unsafe", []}; | |
61 | not_found -> {refused, none, "no name found", []}; | |
61 | 62 | Name -> Name |
62 | 63 | end; |
63 | 64 | {error, no_peercert} -> |
64 | {refused, "no peer certificate", []}; | |
65 | {refused, none, "no peer certificate", []}; | |
65 | 66 | nossl -> |
66 | {refused, "not SSL connection", []} | |
67 | {refused, none, "not SSL connection", []} | |
67 | 68 | end, |
68 | 69 | #state{username = Username}. |
69 | 70 | |
70 | 71 | handle_response(_Response, #state{username = Username}) -> |
71 | 72 | case Username of |
72 | {refused, _, _} = E -> | |
73 | {refused, _, _, _} = E -> | |
73 | 74 | E; |
74 | 75 | _ -> |
75 | 76 | rabbit_access_control:check_user_login(Username, []) |
18 | 18 | rabbit_command_assembler, |
19 | 19 | rabbit_exchange_type, |
20 | 20 | rabbit_exchange_decorator, |
21 | rabbit_auth_backend, | |
21 | rabbit_authn_backend, | |
22 | rabbit_authz_backend, | |
22 | 23 | rabbit_auth_mechanism, |
23 | 24 | rabbit_framing_amqp_0_8, |
24 | 25 | rabbit_framing_amqp_0_9_1, |
73 | 73 | {ok, State}; |
74 | 74 | handle_message(closing_timeout, State = #state{closing_reason = Reason}) -> |
75 | 75 | {stop, {closing_timeout, Reason}, State}; |
76 | handle_message({'DOWN', _MRef, process, _ConnSup, Reason}, | |
77 | State = #state{node = Node}) -> | |
76 | handle_message({'DOWN', _MRef, process, _ConnSup, Reason}, State) -> | |
78 | 77 | {stop, {remote_node_down, Reason}, State}; |
79 | 78 | handle_message(Msg, State) -> |
80 | 79 | {stop, {unexpected_msg, Msg}, State}. |
105 | 105 | #'queue.declare'{queue = Queue, |
106 | 106 | nowait = true}, |
107 | 107 | queue, Params1), |
108 | amqp_channel:cast(Channel, Method), | |
108 | case Method#'queue.declare'.nowait of | |
109 | true -> amqp_channel:cast(Channel, Method); | |
110 | false -> amqp_channel:call(Channel, Method) | |
111 | end, | |
109 | 112 | sets:add_element(Queue, State) |
110 | 113 | end, |
111 | 114 | {ok, Queue, State1}; |
184 | 187 | Val -> Method#'queue.declare'{auto_delete = Val} |
185 | 188 | end. |
186 | 189 | |
190 | update_queue_declare_nowait(Method, Params) -> | |
191 | case proplists:get_value(nowait, Params) of | |
192 | undefined -> Method; | |
193 | Val -> Method#'queue.declare'{nowait = Val} | |
194 | end. | |
195 | ||
187 | 196 | queue_declare_method(#'queue.declare'{} = Method, Type, Params) -> |
188 | 197 | %% defaults |
189 | 198 | Method1 = case proplists:get_value(durable, Params, false) of |
195 | 204 | Method2 = lists:foldl(fun (F, Acc) -> F(Acc, Params) end, |
196 | 205 | Method1, [fun update_queue_declare_arguments/2, |
197 | 206 | fun update_queue_declare_exclusive/2, |
198 | fun update_queue_declare_auto_delete/2]), | |
207 | fun update_queue_declare_auto_delete/2, | |
208 | fun update_queue_declare_nowait/2]), | |
199 | 209 | case {Type, proplists:get_value(subscription_queue_name_gen, Params)} of |
200 | 210 | {topic, SQNG} when is_function(SQNG) -> |
201 | 211 | Method2#'queue.declare'{queue = SQNG()}; |
73 | 73 | </td> |
74 | 74 | <td class="r"> |
75 | 75 | <% if (link.local_channel) { %> |
76 | <%= fmt_rate(link.local_channel.message_stats, 'confirm') %> | |
76 | <%= fmt_detail_rate(link.local_channel.message_stats, 'confirm') %> | |
77 | 77 | <% } %> |
78 | 78 | </td> |
79 | 79 | <td><%= link.timestamp %></td> |
15 | 15 | # Copyright (c) 2010-2014 GoPivotal, Inc. All rights reserved. |
16 | 16 | |
17 | 17 | import sys |
18 | if sys.version_info[0] < 2 or sys.version_info[1] < 6: | |
19 | print "Sorry, rabbitmqadmin requires at least Python 2.6." | |
18 | if sys.version_info[0] < 2 or (sys.version_info[0] == 2 and sys.version_info[1] < 6): | |
19 | print("Sorry, rabbitmqadmin requires at least Python 2.6.") | |
20 | 20 | sys.exit(1) |
21 | 21 | |
22 | from ConfigParser import ConfigParser, NoSectionError | |
23 | 22 | from optparse import OptionParser, TitledHelpFormatter |
24 | import httplib | |
25 | 23 | import urllib |
26 | import urlparse | |
27 | 24 | import base64 |
28 | 25 | import json |
29 | 26 | import os |
30 | 27 | import socket |
28 | ||
29 | if sys.version_info[0] == 2: | |
30 | from ConfigParser import ConfigParser, NoSectionError | |
31 | import httplib | |
32 | import urlparse | |
33 | from urllib import quote_plus | |
34 | def b64(s): | |
35 | return base64.b64encode(s) | |
36 | else: | |
37 | from configparser import ConfigParser, NoSectionError | |
38 | import http.client as httplib | |
39 | import urllib.parse as urlparse | |
40 | from urllib.parse import quote_plus | |
41 | def b64(s): | |
42 | return base64.b64encode(s.encode('utf-8')).decode('utf-8') | |
31 | 43 | |
32 | 44 | VERSION = '%%VSN%%' |
33 | 45 | |
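The Python 2/3 compatibility shim introduced above differs mainly in base64 handling; on Python 3, `str` must be encoded before base64-encoding and the result decoded back. A minimal sketch of the Python 3 branch (the credentials here are illustrative):

```python
import base64

def b64(s):
    # Python 3: encode str to bytes before base64-encoding, then decode
    # the result back to str so it can go into an HTTP header.
    return base64.b64encode(s.encode('utf-8')).decode('utf-8')

# Typical use: building a Basic auth header from "user:password".
auth_header = "Basic " + b64("guest:guest")
```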
345 | 357 | try: |
346 | 358 | config.read(options.config) |
347 | 359 | new_conf = dict(config.items(options.node)) |
348 | except NoSectionError, error: | |
360 | except NoSectionError as error: | |
349 | 361 | if options.node == "default": |
350 | 362 | pass |
351 | 363 | else: |
388 | 400 | method() |
389 | 401 | |
390 | 402 | def output(s): |
391 | print maybe_utf8(s, sys.stdout) | |
403 | print(maybe_utf8(s, sys.stdout)) | |
392 | 404 | |
393 | 405 | def die(s): |
394 | 406 | sys.stderr.write(maybe_utf8("*** {0}\n".format(s), sys.stderr)) |
395 | 407 | exit(1) |
396 | 408 | |
397 | 409 | def maybe_utf8(s, stream): |
398 | if stream.isatty(): | |
410 | if sys.version_info[0] == 3 or stream.isatty(): | |
399 | 411 | # It will have an encoding, which Python will respect |
400 | 412 | return s |
401 | 413 | else: |
428 | 440 | else: |
429 | 441 | conn = httplib.HTTPConnection(self.options.hostname, |
430 | 442 | self.options.port) |
431 | headers = {"Authorization": | |
432 | "Basic " + base64.b64encode(self.options.username + ":" + | |
433 | self.options.password)} | |
443 | auth = (self.options.username + ":" + self.options.password) | |
444 | ||
445 | headers = {"Authorization": "Basic " + b64(auth)} | |
434 | 446 | if body != "": |
435 | 447 | headers["Content-Type"] = "application/json" |
436 | 448 | try: |
437 | 449 | conn.request(method, path, body, headers) |
438 | except socket.error, e: | |
450 | except socket.error as e: | |
439 | 451 | die("Could not connect: {0}".format(e)) |
440 | 452 | resp = conn.getresponse() |
441 | 453 | if resp.status == 400: |
453 | 465 | if resp.status < 200 or resp.status > 400: |
454 | 466 | raise Exception("Received %d %s for path %s\n%s" |
455 | 467 | % (resp.status, resp.reason, path, resp.read())) |
456 | return resp.read() | |
468 | return resp.read().decode('utf-8') | |
457 | 469 | |
458 | 470 | def verbose(self, string): |
459 | 471 | if self.options.verbose: |
481 | 493 | assert_usage(False, """help topic must be one of: |
482 | 494 | subcommands |
483 | 495 | config""") |
484 | print usage | |
496 | print(usage) | |
485 | 497 | exit(0) |
486 | 498 | |
487 | 499 | def invoke_publish(self): |
488 | 500 | (uri, upload) = self.parse_args(self.args, EXTRA_VERBS['publish']) |
489 | 501 | if not 'payload' in upload: |
490 | 502 | data = sys.stdin.read() |
491 | upload['payload'] = base64.b64encode(data) | |
503 | upload['payload'] = b64(data) | |
492 | 504 | upload['payload_encoding'] = 'base64' |
493 | 505 | resp = json.loads(self.post(uri, json.dumps(upload))) |
494 | 506 | if resp['routed']: |
544 | 556 | uri = "/%s" % obj_type |
545 | 557 | query = [] |
546 | 558 | if obj_info['vhost'] and self.options.vhost: |
547 | uri += "/%s" % urllib.quote_plus(self.options.vhost) | |
559 | uri += "/%s" % quote_plus(self.options.vhost) | |
548 | 560 | cols = self.args[1:] |
549 | 561 | if cols == [] and 'cols' in obj_info and self.use_cols(): |
550 | 562 | cols = obj_info['cols'] |
618 | 630 | uri_args = {} |
619 | 631 | for k in upload: |
620 | 632 | v = upload[k] |
621 | if v and isinstance(v, basestring): | |
622 | uri_args[k] = urllib.quote_plus(v) | |
633 | if v and isinstance(v, (str, bytes)): | |
634 | uri_args[k] = quote_plus(v) | |
623 | 635 | if k == 'destination_type': |
624 | 636 | uri_args['destination_char'] = v[0] |
625 | 637 | uri = uri_template.format(**uri_args) |
629 | 641 | try: |
630 | 642 | return json.loads(text) |
631 | 643 | except ValueError: |
632 | print "Could not parse JSON:\n {0}".format(text) | |
644 | print("Could not parse JSON:\n {0}".format(text)) | |
633 | 645 | sys.exit(1) |
634 | 646 | |
635 | 647 | def format_list(json_list, columns, args, options): |
655 | 667 | output(string) |
656 | 668 | |
657 | 669 | def display(self, json_list): |
658 | depth = sys.maxint | |
670 | depth = sys.maxsize | |
659 | 671 | if len(self.columns) == 0: |
660 | 672 | depth = int(self.options.depth) |
661 | 673 | (columns, table) = self.list_to_table(json.loads(json_list), depth) |
675 | 687 | column = prefix == '' and key or (prefix + '.' + key) |
676 | 688 | subitem = item[key] |
677 | 689 | if type(subitem) == dict: |
678 | if self.obj_info.has_key('json') and key in self.obj_info['json']: | |
690 | if 'json' in self.obj_info and key in self.obj_info['json']: | |
679 | 691 | fun(column, json.dumps(subitem)) |
680 | 692 | else: |
681 | 693 | if depth < max_depth: |
685 | 697 | # mind (which come out looking decent); the second |
686 | 698 | # one has applications in nodes (which look less |
687 | 699 | # so, but what would look good?). |
688 | if [x for x in subitem if type(x) != unicode] == []: | |
700 | if [x for x in subitem if type(x) != str] == []: | |
689 | 701 | serialised = " ".join(subitem) |
690 | 702 | else: |
691 | 703 | serialised = json.dumps(subitem) |
698 | 710 | |
699 | 711 | def add_to_row(col, val): |
700 | 712 | if col in column_ix: |
701 | row[column_ix[col]] = unicode(val) | |
713 | row[column_ix[col]] = str(val) | |
702 | 714 | |
703 | 715 | if len(self.columns) == 0: |
704 | 716 | for item in items: |
705 | 717 | add('', 1, item, add_to_columns) |
706 | columns = columns.keys() | |
718 | columns = list(columns.keys()) | |
707 | 719 | columns.sort(key=column_sort_key) |
708 | 720 | else: |
709 | 721 | columns = self.columns |
710 | 722 | |
711 | for i in xrange(0, len(columns)): | |
723 | for i in range(0, len(columns)): | |
712 | 724 | column_ix[columns[i]] = i |
713 | 725 | for item in items: |
714 | 726 | row = len(columns) * [''] |
742 | 754 | max_width = 0 |
743 | 755 | for col in columns: |
744 | 756 | max_width = max(max_width, len(col)) |
745 | fmt = "{0:>" + unicode(max_width) + "}: {1}" | |
757 | fmt = "{0:>" + str(max_width) + "}: {1}" | |
746 | 758 | output(sep) |
747 | for i in xrange(0, len(table)): | |
748 | for j in xrange(0, len(columns)): | |
759 | for i in range(0, len(table)): | |
760 | for j in range(0, len(columns)): | |
749 | 761 | output(fmt.format(columns[j], table[i][j])) |
750 | 762 | output(sep) |
751 | 763 | |
763 | 775 | def ascii_table(self, rows): |
764 | 776 | table = "" |
765 | 777 | col_widths = [0] * len(rows[0]) |
766 | for i in xrange(0, len(rows[0])): | |
767 | for j in xrange(0, len(rows)): | |
778 | for i in range(0, len(rows[0])): | |
779 | for j in range(0, len(rows)): | |
768 | 780 | col_widths[i] = max(col_widths[i], len(rows[j][i])) |
769 | 781 | self.ascii_bar(col_widths) |
770 | 782 | self.ascii_row(col_widths, rows[0], "^") |
775 | 787 | |
776 | 788 | def ascii_row(self, col_widths, row, align): |
777 | 789 | txt = "|" |
778 | for i in xrange(0, len(col_widths)): | |
779 | fmt = " {0:" + align + unicode(col_widths[i]) + "} " | |
790 | for i in range(0, len(col_widths)): | |
791 | fmt = " {0:" + align + str(col_widths[i]) + "} " | |
780 | 792 | txt += fmt.format(row[i]) + "|" |
781 | 793 | output(txt) |
782 | 794 | |
793 | 805 | self.options = options |
794 | 806 | |
795 | 807 | def display_list(self, columns, table): |
796 | for i in xrange(0, len(table)): | |
808 | for i in range(0, len(table)): | |
797 | 809 | row = [] |
798 | for j in xrange(0, len(columns)): | |
810 | for j in range(0, len(columns)): | |
799 | 811 | row.append("{0}=\"{1}\"".format(columns[j], table[i][j])) |
800 | 812 | output(" ".join(row)) |
801 | 813 | |
808 | 820 | |
809 | 821 | def display_list(self, columns, table): |
810 | 822 | ix = None |
811 | for i in xrange(0, len(columns)): | |
823 | for i in range(0, len(columns)): | |
812 | 824 | if columns[i] == 'name': |
813 | 825 | ix = i |
814 | 826 | if ix is not None: |
4 | 4 | WITH_BROKER_TEST_COMMANDS:=rabbit_test_runner:run_in_broker(\"$(PACKAGE_DIR)/test/ebin\",\"$(FILTER)\") |
5 | 5 | WITH_BROKER_TEST_CONFIG:=$(PACKAGE_DIR)/etc/rabbit-test |
6 | 6 | STANDALONE_TEST_COMMANDS:=rabbit_test_runner:run_multi(\"$(UMBRELLA_BASE_DIR)/rabbitmq-server\",\"$(PACKAGE_DIR)/test/ebin\",\"$(FILTER)\",$(COVER),\"/tmp/rabbitmq-multi-node/plugins\") |
7 | WITH_BROKER_TEST_SCRIPTS:=$(PACKAGE_DIR)/test/src/rabbitmqadmin-test.py | |
7 | WITH_BROKER_TEST_SCRIPTS:=$(PACKAGE_DIR)/test/src/rabbitmqadmin-test-wrapper.sh | |
8 | 8 | |
9 | 9 | CONSTRUCT_APP_PREREQS:=$(shell find $(PACKAGE_DIR)/priv -type f) $(PACKAGE_DIR)/bin/rabbitmqadmin |
10 | 10 | define construct_app_commands |
201 | 201 | <td class="path">/api/nodes/<i>name</i></td> |
202 | 202 | <td> |
203 | 203 | An individual node in the RabbitMQ cluster. Add |
204 | "?memory=true" to get memory statistics. | |
204 | "?memory=true" to get memory statistics, and "?binary=true" | |
205 | to get a breakdown of binary memory use (may be expensive if | |
206 | there are many small binaries in the system). | |
205 | 207 | </td> |
206 | 208 | </tr> |
207 | 209 | <tr> |
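The `?memory=true` and `?binary=true` flags described above can be combined in one request. A minimal sketch of building the request path (the helper name and node name are illustrative):

```python
def node_path(name, memory=False, binary=False):
    # Build the /api/nodes/<name> path with optional statistics flags.
    path = "/api/nodes/" + name
    flags = []
    if memory:
        flags.append("memory=true")
    if binary:
        flags.append("binary=true")
    if flags:
        path += "?" + "&".join(flags)
    return path
```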
226 | 228 | messages. POST to upload an existing set of definitions. Note |
227 | 229 | that: |
228 | 230 | <ul> |
229 | <li>The definitions are merged. Anything already existing is | |
230 | untouched.</li> | |
231 | <li>Conflicts will cause an error.</li> | |
232 | <li>In the event of an error you will be left with a | |
233 | part-applied set of definitions.</li> | |
231 | <li> | |
232 | The definitions are merged. Anything already existing on | |
233 | the server but not in the uploaded definitions is | |
234 | untouched. | |
235 | </li> | |
236 | <li> | |
237 | Conflicting definitions on immutable objects (exchanges, | |
238 | queues and bindings) will cause an error. | |
239 | </li> | |
240 | <li> | |
241 | Conflicting definitions on mutable objects will cause | |
242 | the object in the server to be overwritten with the | |
243 | object from the definitions. | |
244 | </li> | |
245 | <li> | |
246 | In the event of an error you will be left with a | |
247 | part-applied set of definitions. | |
248 | </li> | |
234 | 249 | </ul> |
235 | 250 | For convenience you may upload a file from a browser to this |
236 | 251 | URI (i.e. you can use <code>multipart/form-data</code> as |
323 | 338 | <td>X</td> |
324 | 339 | <td></td> |
325 | 340 | <td class="path">/api/exchanges/<i>vhost</i>/<i>name</i></td> |
326 | <td>An individual exchange. To PUT an exchange, you will need a body looking something like this: | |
327 | <pre>{"type":"direct","auto_delete":false,"durable":true,"internal":false,"arguments":[]}</pre> | |
328 | The <code>type</code> key is mandatory; other keys are optional.</td> | |
341 | <td> | |
342 | An individual exchange. To PUT an exchange, you will need a body looking something like this: | |
343 | <pre>{"type":"direct","auto_delete":false,"durable":true,"internal":false,"arguments":{}}</pre> | |
344 | The <code>type</code> key is mandatory; other keys are optional. | |
345 | <p> | |
346 | When DELETEing an exchange you can add the query string | |
347 | parameter <code>if-unused=true</code>. This prevents the | |
348 | delete from succeeding if the exchange is bound to a queue | |
349 | or as a source to another exchange. | |
350 | </p> | |
351 | </td> | |
329 | 352 | </tr> |
330 | 353 | <tr> |
331 | 354 | <td>X</td> |
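The PUT body shown above can be assembled client-side; note that the vhost component of the path must be URL-encoded (the default vhost `/` becomes `%2F`). A minimal sketch (the helper name and defaults are illustrative, not part of the API):

```python
import json
from urllib.parse import quote_plus

def exchange_put(vhost, name, exchange_type, durable=True,
                 auto_delete=False, internal=False, arguments=None):
    # Path and JSON body for PUT /api/exchanges/<vhost>/<name>;
    # "type" is the only mandatory body key per the table above.
    path = "/api/exchanges/{0}/{1}".format(quote_plus(vhost), quote_plus(name))
    body = json.dumps({"type": exchange_type, "durable": durable,
                       "auto_delete": auto_delete, "internal": internal,
                       "arguments": arguments or {}})
    return path, body
```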
363 | 386 | <pre>{"routed": true}</pre> |
364 | 387 | <code>routed</code> will be true if the message was sent to |
365 | 388 | at least one queue. |
366 | <p>Please note that the publish / get paths in the HTTP API are | |
367 | intended for injecting test messages, diagnostics etc - they do not | |
368 | implement reliable delivery and so should be treated as a sysadmin's | |
369 | tool rather than a general API for messaging.</p> | |
389 | <p> | |
390 | Please note that the HTTP API is not ideal for high | |
391 | performance publishing; the need to create a new TCP | |
392 | connection for each message published can limit message | |
393 | throughput compared to AMQP or other protocols using | |
394 | long-lived connections. | |
395 | </p> | |
370 | 396 | </td> |
371 | 397 | </tr> |
372 | 398 | <tr> |
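Publishing through this endpoint can base64-encode the payload, as rabbitmqadmin's `invoke_publish` does earlier in this diff. A minimal sketch of building the request body (the helper name is illustrative):

```python
import base64
import json

def publish_body(routing_key, payload, properties=None):
    # The HTTP API accepts a base64-encoded payload when
    # payload_encoding is set to "base64" (cf. rabbitmqadmin's publish verb).
    return json.dumps({
        "routing_key": routing_key,
        "payload": base64.b64encode(payload.encode("utf-8")).decode("utf-8"),
        "payload_encoding": "base64",
        "properties": properties or {},
    })
```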
391 | 417 | <td>X</td> |
392 | 418 | <td></td> |
393 | 419 | <td class="path">/api/queues/<i>vhost</i>/<i>name</i></td> |
394 | <td>An individual queue. To PUT a queue, you will need a body looking something like this: | |
395 | <pre>{"auto_delete":false,"durable":true,"arguments":[],"node":"rabbit@smacmullen"}</pre> | |
396 | All keys are optional.</td> | |
420 | <td> | |
421 | An individual queue. To PUT a queue, you will need a body looking something like this: | |
422 | <pre>{"auto_delete":false,"durable":true,"arguments":{},"node":"rabbit@smacmullen"}</pre> | |
423 | All keys are optional. | |
424 | <p> | |
425 | When DELETEing a queue you can add the query string | |
426 | parameters <code>if-empty=true</code> and / | |
427 | or <code>if-unused=true</code>. These prevent the delete | |
428 | from succeeding if the queue contains messages, or has | |
429 | consumers, respectively. | |
430 | </p> | |
431 | </td> | |
397 | 432 | </tr> |
398 | 433 | <tr> |
399 | 434 | <td>X</td> |
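The `if-empty` / `if-unused` DELETE guards described above go in the query string. A minimal sketch of building such a request path (the helper name is illustrative):

```python
from urllib.parse import quote_plus

def queue_delete_path(vhost, name, if_empty=False, if_unused=False):
    # DELETE /api/queues/<vhost>/<name>, optionally guarded by the
    # if-empty / if-unused query string parameters described above.
    path = "/api/queues/{0}/{1}".format(quote_plus(vhost), quote_plus(name))
    params = [p for p, on in (("if-empty=true", if_empty),
                              ("if-unused=true", if_unused)) if on]
    if params:
        path += "?" + "&".join(params)
    return path
```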
450 | 485 | message payload if it is larger than the size given (in bytes).</li> |
451 | 486 | </ul> |
452 | 487 | <p><code>truncate</code> is optional; all other keys are mandatory.</p> |
453 | <p>Please note that the publish / get paths in the HTTP API are | |
454 | intended for injecting test messages, diagnostics etc - they do not | |
455 | implement reliable delivery and so should be treated as a sysadmin's | |
456 | tool rather than a general API for messaging.</p> | |
488 | <p> | |
489 | Please note that the get path in the HTTP API is intended | |
490 | for diagnostics etc - it does not implement reliable | |
491 | delivery and so should be treated as a sysadmin's tool | |
492 | rather than a general API for messaging. | |
493 | </p> | |
457 | 494 | </td> |
458 | 495 | </tr> |
459 | 496 | <tr> |
482 | 519 | queue. Remember, an exchange and a queue can be bound |
483 | 520 | together many times! To create a new binding, POST to this |
484 | 521 | URI. You will need a body looking something like this: |
485 | <pre>{"routing_key":"my_routing_key","arguments":[]}</pre> | |
522 | <pre>{"routing_key":"my_routing_key","arguments":{}}</pre> | |
486 | 523 | All keys are optional. |
487 | 524 | The response will contain a <code>Location</code> header |
488 | 525 | telling you the URI of your new binding. |
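Creating a binding by POST, as described above, takes a small JSON body and returns the new binding's URI in a `Location` header. A minimal sketch of building the request (the helper name is illustrative; the path shape assumes the exchange-to-queue binding endpoint):

```python
import json
from urllib.parse import quote_plus

def binding_post(vhost, exchange, queue, routing_key="", arguments=None):
    # POST /api/bindings/<vhost>/e/<exchange>/q/<queue>; both body keys
    # are optional, and the server replies with a Location header
    # naming the new binding.
    path = "/api/bindings/{0}/e/{1}/q/{2}".format(
        quote_plus(vhost), quote_plus(exchange), quote_plus(queue))
    body = json.dumps({"routing_key": routing_key,
                       "arguments": arguments or {}})
    return path, body
```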
71 | 71 | |
72 | 72 | table { border-collapse: collapse; } |
73 | 73 | table th { font-weight: normal; color: black; } |
74 | table th, table td { font: 12px/17px Verdana,sans-serif; padding: 4px; } | |
74 | table th, table td { font: 12px Verdana,sans-serif; padding: 5px 4px; } | |
75 | 75 | table.list th, table.list td { vertical-align: top; min-width: 5em; width: auto; } |
76 | 76 | |
77 | 77 | table.list { border-width: 1px; margin-bottom: 1em; } |
83 | 83 | table.list th a.sort .arrow { color: #888; } |
84 | 84 | table.list td p { margin: 0; padding: 1px 0 0 0; } |
85 | 85 | table.list td p.warning { margin: 0; padding: 5px; } |
86 | ||
87 | table.list td.plain, table.list td.plain td, table.list td.plain th { border: none; background: none; } | |
88 | table.list th.plain { border-left: none; border-top: none; border-right: none; background: none; } | |
89 | table.list th.plain h3 { margin: 0; border: 0; } | |
86 | 90 | |
87 | 91 | #main .internal-purpose, #main .internal-purpose * { color: #aaa; } |
88 | 92 |
396 | 396 | </td> |
397 | 397 | </tr> |
398 | 398 | <tr> |
399 | <td><code>statistics_db_event_queue</code></td> | |
400 | <td> | |
401 | Number of outstanding statistics events yet to be processed | |
402 | by the database. | |
403 | </td> | |
404 | </tr> | |
405 | <tr> | |
399 | 406 | <td><code>statistics_db_node</code></td> |
400 | 407 | <td> |
401 | 408 | Name of the cluster node hosting the management statistics database. |
423 | 430 | </td> |
424 | 431 | </tr> |
425 | 432 | <tr> |
433 | <td><code>cluster_links</code></td> | |
434 | <td> | |
435 | A list of the other nodes in the cluster. For each node, | |
436 | there are details of the TCP connection used to connect to | |
437 | it and statistics on data that has been transferred. | |
438 | </td> | |
439 | </tr> | |
440 | <tr> | |
426 | 441 | <td><code>config_files</code></td> |
427 | 442 | <td> |
428 | 443 | List of config files read by the node. |
483 | 498 | </td> |
484 | 499 | </tr> |
485 | 500 | <tr> |
501 | <td><code>io_read_avg_time</code></td> | |
502 | <td> | |
503 | Average wall time (milliseconds) for each disk read operation in | |
504 | the last statistics interval. | |
505 | </td> | |
506 | </tr> | |
507 | <tr> | |
508 | <td><code>io_read_bytes</code></td> | |
509 | <td> | |
510 | Total number of bytes read from disk by the persister. | |
511 | </td> | |
512 | </tr> | |
513 | <tr> | |
514 | <td><code>io_read_count</code></td> | |
515 | <td> | |
516 | Total number of read operations by the persister. | |
517 | </td> | |
518 | </tr> | |
519 | <tr> | |
520 | <td><code>io_reopen_count</code></td> | |
521 | <td> | |
522 | Total number of times the persister has needed to recycle | |
523 | file handles between queues. In an ideal world this number | |
524 | will be zero; if the number is large, performance might be | |
525 | improved by increasing the number of file handles available | |
526 | to RabbitMQ. | |
527 | </td> | |
528 | </tr> | |
529 | <tr> | |
530 | <td><code>io_seek_avg_time</code></td> | |
531 | <td> | |
532 | Average wall time (milliseconds) for each seek operation in | |
533 | the last statistics interval. | |
534 | </td> | |
535 | </tr> | |
537 | <tr> | |
538 | <td><code>io_seek_count</code></td> | |
539 | <td> | |
540 | Total number of seek operations by the persister. | |
541 | </td> | |
542 | </tr> | |
543 | <tr> | |
544 | <td><code>io_sync_avg_time</code></td> | |
545 | <td> | |
546 | Average wall time (milliseconds) for each fsync() operation in | |
547 | the last statistics interval. | |
548 | </td> | |
549 | </tr> | 
551 | <tr> | |
552 | <td><code>io_sync_count</code></td> | |
553 | <td> | |
554 | Total number of fsync() operations by the persister. | |
555 | </td> | |
556 | </tr> | |
557 | <tr> | |
558 | <td><code>io_write_avg_time</code></td> | |
559 | <td> | |
560 | Average wall time (milliseconds) for each disk write operation in | |
561 | the last statistics interval. | |
562 | </td> | |
563 | </tr> | |
564 | <tr> | |
565 | <td><code>io_write_bytes</code></td> | |
566 | <td> | |
567 | Total number of bytes written to disk by the persister. | |
568 | </td> | |
569 | </tr> | |
570 | <tr> | |
571 | <td><code>io_write_count</code></td> | |
572 | <td> | |
573 | Total number of write operations by the persister. | |
574 | </td> | |
575 | </tr> | |
576 | <tr> | |
486 | 577 | <td><code>log_file</code></td> |
487 | 578 | <td> |
488 | 579 | Location of main log file. |
504 | 595 | <td><code>mem_limit</code></td> |
505 | 596 | <td> |
506 | 597 | Point at which the memory alarm will go off. |
598 | </td> | |
599 | </tr> | |
600 | <tr> | |
601 | <td><code>mnesia_disk_tx_count</code></td> | |
602 | <td> | |
603 | Number of Mnesia transactions which have been performed that | |
604 | required writes to disk (e.g. creating a durable | 
605 | queue). Only transactions which originated on this node are | |
606 | included. | |
607 | </td> | |
608 | </tr> | |
609 | <tr> | |
610 | <td><code>mnesia_ram_tx_count</code></td> | |
611 | <td> | |
612 | Number of Mnesia transactions which have been performed that | |
613 | did not require writes to disk (e.g. creating a transient | 
614 | queue). Only transactions which originated on this node are | |
615 | included. | |
616 | </td> | |
617 | </tr> | |
618 | <tr> | |
619 | <td><code>msg_store_read_count</code></td> | |
620 | <td> | |
621 | Number of messages which have been read from the message store. | |
622 | </td> | |
623 | </tr> | |
624 | <tr> | |
625 | <td><code>msg_store_write_count</code></td> | |
626 | <td> | |
627 | Number of messages which have been written to the message store. | |
507 | 628 | </td> |
508 | 629 | </tr> |
509 | 630 | <tr> |
547 | 668 | <td><code>processors</code></td> |
548 | 669 | <td> |
549 | 670 | Number of cores detected and usable by Erlang. |
671 | </td> | |
672 | </tr> | |
673 | <tr> | |
674 | <td><code>queue_index_journal_write_count</code></td> | |
675 | <td> | |
676 | Number of records written to the queue index journal. Each | |
677 | record represents a message being published to a queue, | |
678 | being delivered from a queue, or being acknowledged in a | 
679 | queue. | |
680 | </td> | |
681 | </tr> | |
682 | <tr> | |
683 | <td><code>queue_index_read_count</code></td> | |
684 | <td> | |
685 | Number of records read from the queue index. | |
686 | </td> | |
687 | </tr> | |
688 | <tr> | |
689 | <td><code>queue_index_write_count</code></td> | |
690 | <td> | |
691 | Number of records written to the queue index. | |
550 | 692 | </td> |
551 | 693 | </tr> |
552 | 694 | <tr> |
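The io_* fields documented in the table above are plain monotonic counters on the node object returned by the management HTTP API (`GET /api/nodes/<name>`); a per-interval rate can be derived by sampling the counter twice. A minimal sketch of that derivation — the sample values and the `io_rate` helper are invented for illustration:

```javascript
// Derive an operations-per-second figure from two samples of a
// monotonic io_* counter taken interval_ms apart. The objects below
// only mimic the node-stats shape; the numbers are made up.
function io_rate(prev, next, key, interval_ms) {
    return (next[key] - prev[key]) / (interval_ms / 1000);
}

var t0 = {io_read_count: 1200, io_write_count: 5400};
var t1 = {io_read_count: 1250, io_write_count: 5700};

console.log(io_rate(t0, t1, 'io_write_count', 5000)); // 60 (writes/s)
console.log(io_rate(t0, t1, 'io_read_count', 5000));  // 10 (reads/s)
```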
10 | 10 | ['Acknowledge', 'ack'], |
11 | 11 | ['Get', 'get'], ['Deliver (noack)', 'deliver_no_ack'], |
12 | 12 | ['Get (noack)', 'get_no_ack'], |
13 | ['Return', 'return_unroutable']]; | |
14 | return rates_chart_or_text(id, stats, items, fmt_rate, fmt_rate_large, fmt_rate_axis, true, 'Message rates', 'message-rates'); | |
13 | ['Return', 'return_unroutable'], | |
14 | ['Disk read', 'disk_reads'], | |
15 | ['Disk write', 'disk_writes']]; | |
16 | return rates_chart_or_text(id, stats, items, fmt_rate, fmt_rate_axis, true, 'Message rates', 'message-rates'); | |
15 | 17 | } |
16 | 18 | |
17 | 19 | function queue_lengths(id, stats) { |
18 | 20 | var items = [['Ready', 'messages_ready'], |
19 | 21 | ['Unacked', 'messages_unacknowledged'], |
20 | 22 | ['Total', 'messages']]; |
21 | return rates_chart_or_text(id, stats, items, fmt_msgs, fmt_msgs_large, fmt_num_axis, false, 'Queued messages', 'queued-messages'); | |
23 | return rates_chart_or_text(id, stats, items, fmt_num_thousands, fmt_plain_axis, false, 'Queued messages', 'queued-messages'); | |
22 | 24 | } |
23 | 25 | |
24 | 26 | function data_rates(id, stats) { |
25 | 27 | var items = [['From client', 'recv_oct'], ['To client', 'send_oct']]; |
26 | return rates_chart_or_text(id, stats, items, fmt_rate_bytes, fmt_rate_bytes_large, fmt_rate_bytes_axis, true, 'Data rates'); | |
27 | } | |
28 | ||
29 | function rates_chart_or_text(id, stats, items, chart_fmt, text_fmt, axis_fmt, chart_rates, | |
28 | return rates_chart_or_text(id, stats, items, fmt_rate_bytes, fmt_rate_bytes_axis, true, 'Data rates'); | |
29 | } | |
30 | ||
31 | function rates_chart_or_text(id, stats, items, fmt, axis_fmt, chart_rates, | |
30 | 32 | heading, heading_help) { |
31 | var mode = get_pref('rate-mode-' + id); | |
33 | var prefix = chart_h3(id, heading, heading_help); | |
34 | ||
35 | return prefix + rates_chart_or_text_no_heading( | |
36 | id, id, stats, items, fmt, axis_fmt, chart_rates); | |
37 | } | |
38 | ||
39 | function rates_chart_or_text_no_heading(type_id, id, stats, items, | |
40 | fmt, axis_fmt, chart_rates) { | |
41 | var mode = get_pref('rate-mode-' + type_id); | |
32 | 42 | var range = get_pref('chart-range'); |
33 | var prefix = chart_h3(id, heading, heading_help); | |
34 | 43 | var res; |
35 | ||
36 | 44 | if (keys(stats).length > 0) { |
37 | 45 | if (mode == 'chart') { |
38 | 46 | res = rates_chart( |
39 | id, id, items, stats, chart_fmt, axis_fmt, 'full', chart_rates); | |
47 | type_id, id, items, stats, fmt, axis_fmt, 'full', chart_rates); | |
40 | 48 | } |
41 | 49 | else { |
42 | res = rates_text(items, stats, mode, text_fmt); | |
50 | res = rates_text(items, stats, mode, fmt, chart_rates); | |
43 | 51 | } |
44 | 52 | if (res == "") res = '<p>Waiting for data...</p>'; |
45 | 53 | } |
46 | 54 | else { |
47 | 55 | res = '<p>Currently idle</p>'; |
48 | 56 | } |
49 | return prefix + '<div class="updatable">' + res + '</div>'; | |
57 | return res; | |
50 | 58 | } |
51 | 59 | |
52 | 60 | function chart_h3(id, heading, heading_help) { |
53 | 61 | var mode = get_pref('rate-mode-' + id); |
54 | 62 | var range = get_pref('chart-range'); |
55 | 63 | return '<h3>' + heading + |
56 | ' <span class="popup-options-link updatable" title="Click to change" ' + | |
64 | ' <span class="popup-options-link" title="Click to change" ' + | |
57 | 65 | 'type="rate" for="' + id + '">(' + prefix_title(mode, range) + |
58 | 66 | ')</span>' + (heading_help == undefined ? '' : |
59 | 67 | ' <span class="help" id="' + heading_help + '"></span>') + |
78 | 86 | var limit = stats[limit_key]; |
79 | 87 | if (typeof used == 'number') { |
80 | 88 | return node_stat(used_key, 'Used', limit_key, 'available', stats, |
81 | fmt_num_obj, fmt_num_axis, | |
89 | fmt_plain, fmt_plain_axis, | |
82 | 90 | fmt_color(used / limit, thresholds)); |
83 | 91 | } else { |
84 | 92 | return used; |
90 | 98 | var limit = stats[limit_key]; |
91 | 99 | if (typeof used == 'number') { |
92 | 100 | return node_stat_bar(used_key, limit_key, 'available', stats, |
93 | fmt_num_axis, fmt_color(used / limit, thresholds)); | |
101 | fmt_plain_axis, | |
102 | fmt_color(used / limit, thresholds)); | |
94 | 103 | } else { |
95 | 104 | return used; |
96 | 105 | } |
97 | 106 | } |
98 | 107 | |
99 | function node_stat(used_key, used_name, limit_key, suffix, stats, rate_fmt, | |
108 | function node_stat(used_key, used_name, limit_key, suffix, stats, fmt, | |
100 | 109 | axis_fmt, colour, help, invert) { |
101 | 110 | if (get_pref('rate-mode-node-stats') == 'chart') { |
102 | 111 | var items = [[used_name, used_key], ['Limit', limit_key]]; |
103 | 112 | add_fake_limit_details(used_key, limit_key, stats); |
104 | 113 | return rates_chart('node-stats', 'node-stats-' + used_key, items, stats, |
105 | rate_fmt, axis_fmt, 'node', false); | |
114 | fmt, axis_fmt, 'node', false); | |
106 | 115 | } else { |
107 | 116 | return node_stat_bar(used_key, limit_key, suffix, stats, axis_fmt, |
108 | 117 | colour, help, invert); |
155 | 164 | return chart_h3('node-stats', 'Node statistics'); |
156 | 165 | } |
157 | 166 | |
158 | function rates_chart(type_id, id, items, stats, rate_fmt, axis_fmt, type, | |
167 | function rates_chart(type_id, id, items, stats, fmt, axis_fmt, type, | |
159 | 168 | chart_rates) { |
160 | 169 | function show(key) { |
161 | 170 | return get_pref('chart-line-' + id + key) === 'true'; |
176 | 185 | chart_data[id]['data'][name] = stats[key_details]; |
177 | 186 | chart_data[id]['data'][name].ix = ix; |
178 | 187 | } |
188 | var value = chart_rates ? pick_rate(fmt, stats, key) : | |
189 | pick_abs(fmt, stats, key); | |
179 | 190 | legend.push({name: name, |
180 | 191 | key: key, |
181 | value: rate_fmt(stats, key), | |
192 | value: value, | |
182 | 193 | show: show(key)}); |
183 | 194 | ix++; |
184 | 195 | } |
188 | 199 | (chart_rates ? ' chart-rates' : '') + '"></div>'; |
189 | 200 | html += '<table class="legend">'; |
190 | 201 | for (var i = 0; i < legend.length; i++) { |
202 | if (i % 3 == 0 && i < legend.length - 1) { | |
203 | html += '</table><table class="legend">'; | |
204 | } | |
205 | ||
191 | 206 | html += '<tr><th><span title="Click to toggle line" '; |
192 | 207 | html += 'class="rate-visibility-option'; |
193 | 208 | html += legend[i].show ? '' : ' rate-visibility-option-hidden'; |
200 | 215 | return legend.length > 0 ? html : ''; |
201 | 216 | } |
202 | 217 | |
203 | function rates_text(items, stats, mode, rate_fmt) { | |
218 | function rates_text(items, stats, mode, fmt, chart_rates) { | |
204 | 219 | var res = ''; |
205 | 220 | for (var i in items) { |
206 | 221 | var name = items[i][0]; |
208 | 223 | var key_details = key + '_details'; |
209 | 224 | if (key_details in stats) { |
210 | 225 | var details = stats[key_details]; |
211 | res += '<div class="highlight">' + name; | |
212 | res += rate_fmt(stats, key, mode); | |
213 | res += '</div>'; | |
226 | res += '<div class="highlight">' + name + '<strong>'; | |
227 | res += chart_rates ? pick_rate(fmt, stats, key, mode) : | |
228 | pick_abs(fmt, stats, key, mode); | |
229 | res += '</strong></div>'; | |
214 | 230 | } |
215 | 231 | } |
216 | 232 | return res == '' ? '' : '<div class="box">' + res + '</div>'; |
9 | 9 | if (unknown == undefined) unknown = UNKNOWN_REPR; |
10 | 10 | if (str == undefined) return unknown; |
11 | 11 | return fmt_escape_html("" + str); |
12 | } | |
13 | ||
14 | function fmt_bytes(bytes) { | |
15 | if (bytes == undefined) return UNKNOWN_REPR; | |
16 | return fmt_si_prefix(bytes, bytes, 1024, false) + 'B'; | |
17 | 12 | } |
18 | 13 | |
19 | 14 | function fmt_si_prefix(num0, max0, thousand, allow_fractions) { |
227 | 222 | } |
228 | 223 | } |
229 | 224 | |
230 | function fmt_rate(obj, name, mode) { | |
231 | var raw = fmt_rate0(obj, name, mode, fmt_rate_num); | |
232 | return raw == '' ? '' : (raw + '/s'); | |
233 | } | |
234 | ||
235 | function fmt_rate_bytes(obj, name, mode) { | |
236 | var raw = fmt_rate0(obj, name, mode, fmt_bytes); | |
237 | return raw == '' ? '' : (raw + '/s' + | |
238 | '<sub>(' + fmt_bytes(obj[name]) + ' total)</sub>'); | |
239 | } | |
240 | ||
241 | function fmt_bytes_obj(obj, name, mode) { | |
242 | return fmt_bytes(obj[name]); | |
243 | } | |
244 | ||
245 | function fmt_num_obj(obj, name, mode) { | |
246 | return obj[name]; | |
247 | } | |
248 | ||
249 | function fmt_rate_large(obj, name, mode) { | |
250 | return '<strong>' + fmt_rate0(obj, name, mode, fmt_rate_num) + | |
251 | '</strong>msg/s'; | |
252 | } | |
253 | ||
254 | function fmt_rate_bytes_large(obj, name, mode) { | |
255 | return '<strong>' + fmt_rate0(obj, name, mode, fmt_bytes) + '/s</strong>' + | |
256 | '(' + fmt_bytes(obj[name]) + ' total)'; | |
257 | } | |
258 | ||
259 | function fmt_rate0(obj, name, mode, fmt) { | |
225 | function pick_rate(fmt, obj, name, mode) { | |
260 | 226 | if (obj == undefined || obj[name] == undefined || |
261 | 227 | obj[name + '_details'] == undefined) return ''; |
262 | 228 | var details = obj[name + '_details']; |
263 | 229 | return fmt(mode == 'avg' ? details.avg_rate : details.rate); |
264 | 230 | } |
265 | 231 | |
266 | function fmt_msgs(obj, name, mode) { | |
267 | return fmt_msgs0(obj, name, mode) + ' msg'; | |
268 | } | |
269 | ||
270 | function fmt_msgs_large(obj, name, mode) { | |
271 | return '<strong>' + fmt_msgs0(obj, name, mode) + '</strong>' + | |
272 | fmt_rate0(obj, name, mode, fmt_msgs_rate); | |
273 | } | |
274 | ||
275 | function fmt_msgs0(obj, name, mode) { | |
232 | function pick_abs(fmt, obj, name, mode) { | |
276 | 233 | if (obj == undefined || obj[name] == undefined || |
277 | 234 | obj[name + '_details'] == undefined) return ''; |
278 | 235 | var details = obj[name + '_details']; |
279 | return mode == 'avg' ? fmt_rate_num(details.avg) : | |
280 | fmt_num_thousands(obj[name]); | |
281 | } | |
282 | ||
283 | function fmt_msgs_rate(num) { | |
284 | if (num > 0) return '+' + fmt_rate_num(num) + ' msg/s'; | |
285 | else if (num < 0) return '-' + fmt_rate_num(-num) + ' msg/s'; | |
286 | else return ' '; | |
236 | return fmt(mode == 'avg' ? details.avg : obj[name]); | |
237 | } | |
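The rewrite above collapses the old family of `fmt_rate0`/`fmt_msgs0` helpers into two combinators: `pick_rate` and `pick_abs` choose which figure to show (the rate from the `<key>_details` record, or the absolute counter), while a plain number formatter passed in as `fmt` renders it. A self-contained sketch of the composition — the stats shape follows the API's `<key>_details` convention, but the sample values and the simplified `fmt_rate` are invented:

```javascript
// pick_rate / pick_abs: extract a figure from a stats object, then hand
// it to the supplied formatter. Bodies mirror the diff above.
function pick_rate(fmt, obj, name, mode) {
    if (obj == undefined || obj[name] == undefined ||
        obj[name + '_details'] == undefined) return '';
    var details = obj[name + '_details'];
    return fmt(mode == 'avg' ? details.avg_rate : details.rate);
}

function pick_abs(fmt, obj, name, mode) {
    if (obj == undefined || obj[name] == undefined ||
        obj[name + '_details'] == undefined) return '';
    return fmt(mode == 'avg' ? obj[name + '_details'].avg : obj[name]);
}

function fmt_rate(num) { return num + '/s'; } // simplified formatter

var stats = {publish: 1000,
             publish_details: {rate: 12.5, avg_rate: 10, avg: 950}};

console.log(pick_rate(fmt_rate, stats, 'publish')); // '12.5/s'
console.log(pick_abs(String, stats, 'publish'));    // '1000'
```

The same `fmt` can therefore serve both the rate and absolute displays; `chart_rates` at the call sites decides which extractor runs.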
238 | ||
239 | function fmt_detail_rate(obj, name, mode) { | |
240 | return pick_rate(fmt_rate, obj, name, mode); | |
241 | } | |
242 | ||
243 | function fmt_detail_rate_bytes(obj, name, mode) { | |
244 | return pick_rate(fmt_rate_bytes, obj, name, mode); | |
245 | } | |
246 | ||
247 | // --------------------------------------------------------------------- | |
248 | ||
249 | // These are pluggable for charts etc | |
250 | ||
251 | function fmt_plain(num) { | |
252 | return num; | |
253 | } | |
254 | ||
255 | function fmt_plain_axis(num, max) { | |
256 | return fmt_si_prefix(num, max, 1000, true); | |
257 | } | |
258 | ||
259 | function fmt_rate(num) { | |
260 | return fmt_rate_num(num) + '/s'; | |
287 | 261 | } |
288 | 262 | |
289 | 263 | function fmt_rate_axis(num, max) { |
290 | return fmt_si_prefix(num, max, 1000, true) + '/s'; | |
291 | } | |
292 | ||
293 | function fmt_num_axis(num, max) { | |
294 | return fmt_si_prefix(num, max, 1000, true); | |
264 | return fmt_plain_axis(num, max) + '/s'; | |
265 | } | |
266 | ||
267 | function fmt_bytes(bytes) { | |
268 | if (bytes == undefined) return UNKNOWN_REPR; | |
269 | return fmt_si_prefix(bytes, bytes, 1024, false) + 'B'; | |
295 | 270 | } |
296 | 271 | |
297 | 272 | function fmt_bytes_axis(num, max) { |
299 | 274 | return fmt_bytes(isNaN(num) ? 0 : num); |
300 | 275 | } |
301 | 276 | |
277 | function fmt_rate_bytes(num) { | |
278 | return fmt_bytes(num) + '/s'; | |
279 | } | |
302 | 280 | |
303 | 281 | function fmt_rate_bytes_axis(num, max) { |
304 | 282 | return fmt_bytes_axis(num, max) + '/s'; |
305 | 283 | } |
284 | ||
285 | function fmt_ms(num) { | |
286 | return fmt_rate_num(num) + 'ms'; | |
287 | } | |
288 | ||
289 | // --------------------------------------------------------------------- | |
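The pluggable formatters above follow a simple contract: value formatters take a single number, axis formatters take `(num, max)` so the whole axis can share one SI prefix. The sketch below illustrates that contract; this `fmt_si_prefix` is a deliberately simplified stand-in (k/M/G only, one decimal place), not the real implementation:

```javascript
// Simplified stand-in for fmt_si_prefix: scale num by the prefix chosen
// for max, so every tick on an axis uses the same unit.
function fmt_si_prefix(num, max, thousand, allow_fractions) {
    var prefixes = ['', 'k', 'M', 'G'];
    var i = 0;
    while (max >= thousand && i < prefixes.length - 1) {
        num /= thousand; max /= thousand; i++;
    }
    var shown = allow_fractions ? Math.round(num * 10) / 10 : Math.round(num);
    return shown + prefixes[i];
}

function fmt_plain_axis(num, max) { return fmt_si_prefix(num, max, 1000, true); }
function fmt_rate_axis(num, max)  { return fmt_plain_axis(num, max) + '/s'; }

console.log(fmt_rate_axis(1500, 2000)); // '1.5k/s'
```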
306 | 290 | |
307 | 291 | function fmt_maybe_vhost(name) { |
308 | 292 | return vhosts_interesting ? |
421 | 405 | var plugins = []; |
422 | 406 | for (var i = 0; i < node.applications.length; i++) { |
423 | 407 | var application = node.applications[i]; |
424 | if (node.enabled_plugins.indexOf(application.name) != -1) { | |
408 | if (jQuery.inArray(application.name, node.enabled_plugins) != -1 ) { | |
425 | 409 | plugins.push(application.name); |
426 | 410 | } |
427 | 411 | } |
433 | 417 | var result = []; |
434 | 418 | for (var i = 0; i < node.applications.length; i++) { |
435 | 419 | var application = node.applications[i]; |
436 | if (node.enabled_plugins.indexOf(application.name) != -1) { | |
420 | if (jQuery.inArray(application.name, node.enabled_plugins) != -1 ) { | |
437 | 421 | result.push(application); |
438 | 422 | } |
439 | 423 | } |
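Both hunks above replace `Array.prototype.indexOf` with `jQuery.inArray`, which has the same contract (the index of the value, or -1) but does not rely on `indexOf` existing on arrays — older IE lacks it. A dependency-free stand-in with the same contract, for illustration only:

```javascript
// Same contract as jQuery.inArray(value, array): return the first index
// at which value appears, or -1 if it is absent.
function inArray(value, array) {
    for (var i = 0; i < array.length; i++) {
        if (array[i] === value) return i;
    }
    return -1;
}

var enabled_plugins = ['rabbitmq_management', 'rabbitmq_shovel'];
console.log(inArray('rabbitmq_shovel', enabled_plugins));     // 1
console.log(inArray('rabbitmq_federation', enabled_plugins)); // -1
```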
19 | 19 | 'x-max-length': {'short': 'Lim', 'type': 'int'}, |
20 | 20 | 'x-max-length-bytes': {'short': 'Lim B', 'type': 'int'}, |
21 | 21 | 'x-dead-letter-exchange': {'short': 'DLX', 'type': 'string'}, |
22 | 'x-dead-letter-routing-key': {'short': 'DLK', 'type': 'string'}}; | |
22 | 'x-dead-letter-routing-key': {'short': 'DLK', 'type': 'string'}, | |
23 | 'x-max-priority': {'short': 'Pri', 'type': 'int'}}; | |
23 | 24 | |
24 | 25 | // Things that are like arguments that we format the same way in listings. |
25 | 26 | var IMPLICIT_ARGS = {'durable': {'short': 'D', 'type': 'boolean'}, |
136 | 137 | // All these are to do with hiding UI elements if |
137 | 138 | var rates_mode; // ...there are no fine stats |
138 | 139 | var user_administrator; // ...user is not an admin |
140 | var user_policymaker; // ...user is not a policymaker | |
139 | 141 | var user_monitor; // ...user cannot monitor |
140 | 142 | var nodes_interesting; // ...we are not in a cluster |
141 | 143 | var vhosts_interesting; // ...there is only one vhost |
163 | 165 | rates_mode = overview.rates_mode; |
164 | 166 | user_tags = expand_user_tags(user.tags.split(",")); |
165 | 167 | user_administrator = jQuery.inArray("administrator", user_tags) != -1; |
168 | user_policymaker = jQuery.inArray("policymaker", user_tags) != -1; | |
166 | 169 | user_monitor = jQuery.inArray("monitoring", user_tags) != -1; |
167 | 170 | replace_content('login-details', |
168 | 171 | '<p>User: <b>' + fmt_escape_html(user.name) + '</b></p>' + |
27 | 27 | |
28 | 28 | 'queue-dead-letter-routing-key': |
29 | 29 | 'Optional replacement routing key to use when a message is dead-lettered. If this is not set, the message\'s original routing key will be used.<br/>(Sets the "<a target="_blank" href="http://rabbitmq.com/dlx.html">x-dead-letter-routing-key</a>" argument.)', |
30 | ||
31 | 'queue-max-priority': | |
32 | 'Maximum number of priority levels for the queue to support; if not set, the queue will not support message priorities.<br/>(Sets the "<a target="_blank" href="http://rabbitmq.com/priority.html">x-max-priority</a>" argument.)', | |
30 | 33 | |
31 | 34 | 'queue-messages': |
32 | 35 | '<p>Message counts.</p><p>Note that "in memory" and "persistent" are not mutually exclusive; persistent messages can be in memory as well as on disc, and transient messages can be paged out if memory is tight. Non-durable queues will consider all messages to be transient.</p>', |
216 | 219 | <dd>Rate at which messages with the \'redelivered\' flag set are being delivered. Note that these messages will <b>also</b> be counted in one of the delivery rates above.</dd>\ |
217 | 220 | <dt>Return</dt>\ |
218 | 221 | <dd>Rate at which basic.return is sent to publishers for unroutable messages published with the \'mandatory\' flag set.</dd>\ |
219 | </dl>', | |
222 | <dt>Disk read</dt>\ | |
223 | <dd>Rate at which queues read messages from disk.</dd>\ | |
224 | <dt>Disk write</dt>\ | |
225 | <dd>Rate at which queues write messages to disk.</dd>\ | |
226 | </dl>\ | |
227 | <p>\ | |
228 | Note that the last two items originate in queues rather than \ | 
229 | channels; they may therefore be slightly out of sync with other \ | |
230 | statistics.\ | |
231 | </p>', | |
220 | 232 | |
221 | 233 | 'disk-monitoring-no-watermark' : 'There is no <a target="_blank" href="http://www.rabbitmq.com/memory.html#diskfreesup">disk space low watermark</a> set. RabbitMQ will not take any action to avoid running out of disk space.', |
222 | 234 | |
250 | 262 | 'plugins' : |
251 | 263 | 'Note that only plugins which are both explicitly enabled and running are shown here.', |
252 | 264 | |
265 | 'io-operations': | |
266 | 'Rate of I/O operations. Only operations performed by the message \ | |
267 | persister are shown here (e.g. metadata changes in Mnesia or writes \ | |
268 | to the log files are not shown).\ | |
269 | <dl>\ | |
270 | <dt>Read</dt>\ | |
271 | <dd>Rate at which data is read from the disk.</dd>\ | |
272 | <dt>Write</dt>\ | |
273 | <dd>Rate at which data is written to the disk.</dd>\ | |
274 | <dt>Seek</dt>\ | |
275 | <dd>Rate at which the broker switches position while reading or \ | |
276 | writing to disk.</dd>\ | |
277 | <dt>Sync</dt>\ | |
278 | <dd>Rate at which the broker invokes <code>fsync()</code> to ensure \ | |
279 | data is flushed to disk.</dd>\ | |
280 | <dt>Reopen</dt>\ | |
281 | <dd>Rate at which the broker recycles file handles in order to support \ | |
282 | more queues than it has file handles. If this operation is occurring \ | |
283 | frequently you may get a performance boost from increasing the number \ | |
284 | of file handles available.</dd>\ | |
285 | </dl>', | |
286 | ||
287 | 'mnesia-transactions': | |
288 | 'Rate at which Mnesia transactions are initiated on this node (this node \ | |
289 | will also take part in Mnesia transactions initiated on other nodes).\ | |
290 | <dl>\ | |
291 | <dt>RAM only</dt>\ | |
292 | <dd>Rate at which RAM-only transactions take place (e.g. creation / \ | |
293 | deletion of transient queues).</dd>\ | |
294 | <dt>Disk</dt>\ | |
295 | <dd>Rate at which disk (and RAM) transactions take place (e.g. \ | 
296 | creation / deletion of durable queues).</dd>\ | |
297 | </dl>', | |
298 | ||
299 | 'persister-operations-msg': | |
300 | 'Rate at which per-message persister operations take place on this node. See \ | |
301 | <a href="http://www.rabbitmq.com/persistence-conf.html" target="_blank">here</a> \ | |
302 | for more information on the persister. \ | |
303 | <dl>\ | |
304 | <dt>QI Journal</dt>\ | |
305 | <dd>Rate at which message information (publishes, deliveries and \ | |
306 | acknowledgements) is written to queue index journals.</dd>\ | |
307 | <dt>Store Read</dt>\ | |
308 | <dd>Rate at which messages are read from the message store.</dd>\ | |
309 | <dt>Store Write</dt>\ | |
310 | <dd>Rate at which messages are written to the message store.</dd>\ | |
311 | </dl>', | |
312 | ||
313 | 'persister-operations-bulk': | |
314 | 'Rate at which whole-file persister operations take place on this node. See \ | |
315 | <a href="http://www.rabbitmq.com/persistence-conf.html" target="_blank">here</a> \ | |
316 | for more information on the persister. \ | |
317 | <dl>\ | |
318 | <dt>QI Read</dt>\ | |
319 | <dd>Rate at which queue index segment files are read.</dd>\ | |
320 | <dt>QI Write</dt>\ | |
321 | <dd>Rate at which queue index segment files are written.</dd>\ | 
322 | </dl>', | |
323 | ||
253 | 324 | 'foo': 'foo' // No comma. |
254 | 325 | }; |
255 | 326 |
462 | 462 | return confirm("Are you sure? This object cannot be recovered " + |
463 | 463 | "after deletion."); |
464 | 464 | }); |
465 | $('div.section h2, div.section-hidden h2').click(function() { | |
465 | $('div.section h2, div.section-hidden h2').die().live('click', function() { | |
466 | 466 | toggle_visibility($(this)); |
467 | 467 | }); |
468 | 468 | $('label').map(function() { |
506 | 506 | } |
507 | 507 | } |
508 | 508 | }); |
509 | setup_visibility(); | |
510 | 509 | $('.help').die().live('click', function() { |
511 | 510 | help($(this).attr('id')) |
512 | 511 | }); |
560 | 559 | } |
561 | 560 | |
562 | 561 | function postprocess_partial() { |
562 | setup_visibility(); | |
563 | 563 | $('.sort').click(function() { |
564 | 564 | var sort = $(this).attr('sort'); |
565 | 565 | if (current_sort == sort) { |
1 | 1 | |
2 | 2 | <div class="section"> |
3 | 3 | <h2>Overview</h2> |
4 | <div class="hider"> | |
4 | <div class="hider updatable"> | |
5 | 5 | <% if (rates_mode != 'none') { %> |
6 | 6 | <%= message_rates('msg-rates-ch', channel.message_stats) %> |
7 | 7 | <% } %> |
8 | 8 | |
9 | <div class="updatable"> | |
10 | 9 | <h3>Details</h3> |
11 | 10 | <table class="facts facts-l"> |
12 | 11 | <tr> |
62 | 61 | <td><%= channel.acks_uncommitted %></td> |
63 | 62 | </tr> |
64 | 63 | </table> |
65 | </div> | |
66 | 64 | |
67 | 65 | </div> |
68 | 66 | </div> |
166 | 166 | <% } %> |
167 | 167 | <% if (rates_mode != 'none') { %> |
168 | 168 | <% if (show_column('channels', 'rate-publish')) { %> |
169 | <td class="r"><%= fmt_rate(channel.message_stats, 'publish') %></td> | |
169 | <td class="r"><%= fmt_detail_rate(channel.message_stats, 'publish') %></td> | |
170 | 170 | <% } %> |
171 | 171 | <% if (show_column('channels', 'rate-confirm')) { %> |
172 | <td class="r"><%= fmt_rate(channel.message_stats, 'confirm') %></td> | |
172 | <td class="r"><%= fmt_detail_rate(channel.message_stats, 'confirm') %></td> | |
173 | 173 | <% } %> |
174 | 174 | <% if (show_column('channels', 'rate-return')) { %> |
175 | <td class="r"><%= fmt_rate(channel.message_stats, 'return_unroutable') %></td> | |
175 | <td class="r"><%= fmt_detail_rate(channel.message_stats, 'return_unroutable') %></td> | |
176 | 176 | <% } %> |
177 | 177 | <% if (show_column('channels', 'rate-deliver')) { %> |
178 | <td class="r"><%= fmt_rate(channel.message_stats, 'deliver_get') %></td> | |
178 | <td class="r"><%= fmt_detail_rate(channel.message_stats, 'deliver_get') %></td> | |
179 | 179 | <% } %> |
180 | 180 | <% if (show_column('channels', 'rate-redeliver')) { %> |
181 | <td class="r"><%= fmt_rate(channel.message_stats, 'redeliver') %></td> | |
181 | <td class="r"><%= fmt_detail_rate(channel.message_stats, 'redeliver') %></td> | |
182 | 182 | <% } %> |
183 | 183 | <% if (show_column('channels', 'rate-ack')) { %> |
184 | <td class="r"><%= fmt_rate(channel.message_stats, 'ack') %></td> | |
184 | <td class="r"><%= fmt_detail_rate(channel.message_stats, 'ack') %></td> | |
185 | 185 | <% } %> |
186 | 186 | <% } %> |
187 | 187 | </tr> |
1 | 1 | |
2 | 2 | <div class="section"> |
3 | 3 | <h2>Overview</h2> |
4 | <div class="hider"> | |
4 | <div class="hider updatable"> | |
5 | 5 | <%= data_rates('data-rates-conn', connection, 'Data rates') %> |
6 | 6 | |
7 | <div class="updatable"> | |
8 | 7 | <h3>Details</h3> |
9 | 8 | <table class="facts facts-l"> |
10 | 9 | <% if (nodes_interesting) { %> |
61 | 60 | </tr> |
62 | 61 | </table> |
63 | 62 | <% } %> |
64 | </div> | |
65 | 63 | |
66 | 64 | </div> |
67 | 65 | </div> |
114 | 114 | <td><%= fmt_client_name(connection.client_properties) %></td> |
115 | 115 | <% } %> |
116 | 116 | <% if (show_column('connections', 'from_client')) { %> |
117 | <td><%= fmt_rate_bytes(connection, 'recv_oct') %></td> | |
117 | <td><%= fmt_detail_rate_bytes(connection, 'recv_oct') %></td> | |
118 | 118 | <% } %> |
119 | 119 | <% if (show_column('connections', 'to_client')) { %> |
120 | <td><%= fmt_rate_bytes(connection, 'send_oct') %></td> | |
120 | <td><%= fmt_detail_rate_bytes(connection, 'send_oct') %></td> | |
121 | 121 | <% } %> |
122 | 122 | <% if (show_column('connections', 'heartbeat')) { %> |
123 | 123 | <td class="r"><%= fmt_time(connection.timeout, 's') %></td> |
1 | 1 | |
2 | 2 | <div class="section"> |
3 | 3 | <h2>Overview</h2> |
4 | <div class="hider"> | |
4 | <div class="hider updatable"> | |
5 | 5 | <% if (rates_mode != 'none') { %> |
6 | 6 | <%= message_rates('msg-rates-x', exchange.message_stats) %> |
7 | 7 | <% } %> |
8 | ||
9 | <div class="updatable"> | |
10 | 8 | <h3>Details</h3> |
11 | 9 | <table class="facts"> |
12 | 10 | <tr> |
22 | 20 | <td><%= fmt_string(exchange.policy, '') %></td> |
23 | 21 | </tr> |
24 | 22 | </table> |
25 | </div> | |
26 | 23 | </div> |
27 | 24 | </div> |
28 | 25 |
63 | 63 | <% } %> |
64 | 64 | <% if (rates_mode != 'none') { %> |
65 | 65 | <% if (show_column('exchanges', 'rate-in')) { %> |
66 | <td class="r"><%= fmt_rate(exchange.message_stats, 'publish_in') %></td> | |
66 | <td class="r"><%= fmt_detail_rate(exchange.message_stats, 'publish_in') %></td> | |
67 | 67 | <% } %> |
68 | 68 | <% if (show_column('exchanges', 'rate-out')) { %> |
69 | <td class="r"><%= fmt_rate(exchange.message_stats, 'publish_out') %></td> | |
69 | <td class="r"><%= fmt_detail_rate(exchange.message_stats, 'publish_out') %></td> | |
70 | 70 | <% } %> |
71 | 71 | <% } %> |
72 | 72 | </tr> |
19 | 19 | <% } else { %> |
20 | 20 | <td><%= link_queue(del.queue.vhost, del.queue.name) %></td> |
21 | 21 | <% } %> |
22 | <td class="r"><%= fmt_rate(del.stats, 'deliver_get') %></td> | |
23 | <td class="r"><%= fmt_rate(del.stats, 'ack') %></td> | |
22 | <td class="r"><%= fmt_detail_rate(del.stats, 'deliver_get') %></td> | |
23 | <td class="r"><%= fmt_detail_rate(del.stats, 'ack') %></td> | |
24 | 24 | </tr> |
25 | 25 | <% } %> |
26 | 26 | </table> |
33 | 33 | <% } else { %> |
34 | 34 | <td><%= link_exchange(pub.exchange.vhost, pub.exchange.name) %></td> |
35 | 35 | <% } %> |
36 | <td class="r"><%= fmt_rate(pub.stats, 'publish') %></td> | |
36 | <td class="r"><%= fmt_detail_rate(pub.stats, 'publish') %></td> | |
37 | 37 | <% if (col_confirm) { %> |
38 | <td class="r"><%= fmt_rate(pub.stats, 'confirm') %></td> | |
38 | <td class="r"><%= fmt_detail_rate(pub.stats, 'confirm') %></td> | |
39 | 39 | <% } %> |
40 | 40 | </tr> |
41 | 41 | <% } %> |
0 | 0 | <h1>Node <b><%= node.name %></b></h1> |
1 | ||
2 | <div class="section"> | |
3 | <h2>Overview</h2> | |
4 | <div class="hider updatable"> | |
1 | <div class="updatable"> | |
2 | ||
5 | 3 | <% if (!node.running) { %> |
6 | 4 | <p class="warning">Node not running</p> |
7 | 5 | <% } else if (node.os_pid == undefined) { %> |
8 | 6 | <p class="warning">Node statistics not available</p> |
9 | 7 | <% } else { %> |
10 | 8 | |
9 | <div class="section"> | |
10 | <h2>Overview</h2> | |
11 | <div class="hider"> | |
11 | 12 | <div class="box"> |
12 | 13 | <table class="facts facts-l"> |
13 | 14 | <tr> |
54 | 55 | </tr> |
55 | 56 | <% } %> |
56 | 57 | </table> |
57 | <% } %> | |
58 | 58 | </div> |
59 | 59 | </div> |
60 | 60 | |
61 | 61 | <div class="section"> |
62 | <h2>Statistics</h2> | |
63 | <div class="hider"> | |
64 | <% if (!node.running) { %> | |
65 | <p class="warning">Node not running</p> | |
66 | <% } else if (node.os_pid == undefined) { %> | |
67 | <p class="warning">Node statistics not available</p> | |
68 | <% } else { %> | |
62 | <h2>Process statistics</h2> | |
63 | <div class="hider"> | |
69 | 64 | <%= node_stats_prefs() %> |
70 | <div class="updatable"> | |
71 | 65 | <table class="facts"> |
72 | 66 | <tr> |
73 | 67 | <th> |
105 | 99 | <td> |
106 | 100 | <% if (node.mem_limit != 'memory_monitoring_disabled') { %> |
107 | 101 | <%= node_stat('mem_used', 'Used', 'mem_limit', 'high watermark', node, |
108 | fmt_bytes_obj, fmt_bytes_axis, | |
102 | fmt_bytes, fmt_bytes_axis, | |
109 | 103 | node.mem_alarm ? 'red' : 'green', |
110 | 104 | node.mem_alarm ? 'memory-alarm' : null) %> |
111 | 105 | <% } else { %> |
120 | 114 | <td> |
121 | 115 | <% if (node.disk_free_limit != 'disk_free_monitoring_disabled') { %> |
122 | 116 | <%= node_stat('disk_free', 'Free', 'disk_free_limit', 'low watermark', node, |
123 | fmt_bytes_obj, fmt_bytes_axis, | |
117 | fmt_bytes, fmt_bytes_axis, | |
124 | 118 | node.disk_free_alarm ? 'red' : 'green', |
125 | 119 | node.disk_free_alarm ? 'disk_free-alarm' : null, |
126 | 120 | true) %> |
130 | 124 | </td> |
131 | 125 | </tr> |
132 | 126 | </table> |
133 | ||
134 | </div> | |
135 | <% } %> | |
136 | </div> | |
137 | </div> | |
127 | </div> | |
128 | </div> | |
129 | ||
130 | <div class="section-hidden"> | |
131 | <h2>Persistence statistics</h2> | |
132 | <div class="hider"> | |
133 | <%= rates_chart_or_text('mnesia-stats-count', node, | |
134 | [['RAM only', 'mnesia_ram_tx_count'], | |
135 | ['Disk', 'mnesia_disk_tx_count']], | |
136 | fmt_rate, fmt_rate_axis, true, 'Mnesia transactions', 'mnesia-transactions') %> | |
137 | ||
138 | <%= rates_chart_or_text('persister-msg-stats-count', node, | |
139 | [['QI Journal', 'queue_index_journal_write_count'], | |
140 | ['Store Read', 'msg_store_read_count'], | |
141 | ['Store Write', 'msg_store_write_count']], | |
142 | fmt_rate, fmt_rate_axis, true, 'Persistence operations (messages)', 'persister-operations-msg') %> | |
143 | ||
144 | <%= rates_chart_or_text('persister-bulk-stats-count', node, | |
145 | [['QI Read', 'queue_index_read_count'], | |
146 | ['QI Write', 'queue_index_write_count']], | |
147 | fmt_rate, fmt_rate_axis, true, 'Persistence operations (bulk)', 'persister-operations-bulk') %> | |
148 | </div> | |
149 | </div> | |
150 | ||
151 | <div class="section-hidden"> | |
152 | <h2>I/O statistics</h2> | |
153 | <div class="hider"> | |
154 | <%= rates_chart_or_text('persister-io-stats-count', node, | |
155 | [['Read', 'io_read_count'], | |
156 | ['Write', 'io_write_count'], | |
157 | ['Seek', 'io_seek_count'], | |
158 | ['Sync', 'io_sync_count'], | |
159 | ['Reopen', 'io_reopen_count']], | |
160 | fmt_rate, fmt_rate_axis, true, 'I/O operations', 'io-operations') %> | |
161 | ||
162 | <%= rates_chart_or_text('persister-io-stats-bytes', node, | |
163 | [['Read', 'io_read_bytes'], | |
164 | ['Write', 'io_write_bytes']], | |
165 | fmt_rate_bytes, fmt_rate_bytes_axis, true, 'I/O data rates') %> | |
166 | ||
167 | <%= rates_chart_or_text('persister-io-stats-time', node, | |
168 | [['Read', 'io_read_avg_time'], | |
169 | ['Write', 'io_write_avg_time'], | |
170 | ['Seek', 'io_seek_avg_time'], | |
171 | ['Sync', 'io_sync_avg_time']], | |
172 | fmt_ms, fmt_ms, false, 'I/O average time per operation') %> | |
173 | </div> | |
174 | </div> | |
175 | ||
176 | <div class="section-hidden"> | |
177 | <h2>Cluster links</h2> | |
178 | <div class="hider"> | |
179 | <% if (node.cluster_links.length > 0) { %> | |
180 | <table class="list"> | |
181 | <tr> | |
182 | <th>Remote node</th> | |
183 | <th>Local address</th> | |
184 | <th>Local port</th> | |
185 | <th>Remote address</th> | |
186 | <th>Remote port</th> | |
187 | <th class="plain"> | |
188 | <%= chart_h3('cluster-link-data-rates', 'Data rates') %> | |
189 | </th> | |
190 | </tr> | |
191 | <% | |
192 | for (var i = 0; i < node.cluster_links.length; i++) { | |
193 | var link = node.cluster_links[i]; | |
194 | %> | |
195 | <tr<%= alt_rows(i)%>> | |
196 | <td><%= link_node(link.name) %></td> | |
197 | <td><%= fmt_string(link.sock_addr) %></td> | |
198 | <td><%= fmt_string(link.sock_port) %></td> | |
199 | <td><%= fmt_string(link.peer_addr) %></td> | |
200 | <td><%= fmt_string(link.peer_port) %></td> | |
201 | <td class="plain"> | |
202 | <%= rates_chart_or_text_no_heading( | |
203 | 'cluster-link-data-rates', 'cluster-link-data-rates' + link.name, | |
204 | link.stats, | |
205 | [['Recv', 'recv_bytes'], | |
206 | ['Send', 'send_bytes']], | |
207 | fmt_rate_bytes, fmt_rate_bytes_axis, true) %> | |
208 | </td> | |
209 | </tr> | |
210 | <% } %> | |
211 | </table> | |
212 | <% } else { %> | |
213 | <p>... no cluster links ...</p> | |
214 | <% } %> | |
215 | </div> | |
216 | </div> | |
217 | ||
218 | <% } %> | |
219 | ||
220 | </div> | |
221 | ||
222 | <!-- | |
223 | The next two need to be non-updatable or we will wipe the memory details | |
224 | as soon as we have drawn them. | 
225 | --> | |
226 | ||
227 | <% if (node.running && node.os_pid != undefined) { %> | |
138 | 228 | |
139 | 229 | <div class="section"> |
140 | 230 | <h2>Memory details</h2> |
156 | 246 | </div> |
157 | 247 | </div> |
158 | 248 | |
249 | <% } %> | |
250 | ||
251 | <div class="updatable"> | |
252 | <% if (node.running && node.os_pid != undefined) { %> | |
253 | ||
159 | 254 | <div class="section-hidden"> |
160 | 255 | <h2>Advanced</h2> |
161 | <div class="hider updatable"> | |
162 | <% if (!node.running) { %> | |
163 | <p class="warning">Node not running</p> | |
164 | <% } else if (node.os_pid == undefined) { %> | |
165 | <p class="warning">Node statistics not available</p> | |
166 | <% } else { %> | |
256 | <div class="hider"> | |
167 | 257 | <div class="box"> |
168 | 258 | <h3>VM</h3> |
169 | 259 | <table class="facts"> |
217 | 307 | <h3>Authentication mechanisms</h3> |
218 | 308 | <%= format('registry', {'list': node.auth_mechanisms, 'node': node, 'show_enabled': true} ) %> |
219 | 309 | |
220 | <% } %> | |
221 | </div> | |
222 | </div> | |
310 | </div> | |
311 | </div> | |
312 | ||
313 | <% } %> | |
314 | ||
315 | </div> |
1 | 1 | <% if (user_monitor) { %> |
2 | 2 | <%= format('partition', {'nodes': nodes}) %> |
3 | 3 | <% } %> |
4 | ||
5 | <div class="updatable"> | |
6 | <% if (overview.statistics_db_event_queue > 1000) { %> | |
7 | <p class="warning"> | |
8 | The management statistics database currently has a queue | |
9 | of <b><%= overview.statistics_db_event_queue %></b> events to | |
10 | process. If this number keeps increasing, so will the memory used by | |
11 | the management plugin. | |
12 | ||
13 | <% if (overview.rates_mode != 'none') { %> | |
14 | You may find it useful to set the <code>rates_mode</code> config item | |
15 | to <code>none</code>. | |
16 | <% } %> | |
17 | </p> | |
18 | <% } %> | |
19 | </div> | |
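
The warning above points the operator at the `rates_mode` setting. As a hedged illustration (this fragment is not part of this commit), a `rabbitmq.config` entry that disables message rate collection for the management plugin might look like:

```erlang
%% Illustrative rabbitmq.config fragment (an assumption, not shipped in
%% this commit): turn off per-object message rates so the statistics
%% database has far fewer events to aggregate.
[
  {rabbitmq_management, [
    {rates_mode, none}   %% other values: basic (the default), detailed
  ]}
].
```

With `rates_mode` set to `none`, the template above renders the "Message rates disabled" section instead of rate charts.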
20 | ||
4 | 21 | <div class="section"> |
5 | 22 | <h2>Totals</h2> |
6 | <div class="hider"> | |
23 | <div class="hider updatable"> | |
7 | 24 | <% if (overview.statistics_db_node != 'not_running') { %> |
8 | 25 | <%= queue_lengths('lengths-over', overview.queue_totals) %> |
9 | 26 | <% if (rates_mode != 'none') { %> |
13 | 30 | Totals not available |
14 | 31 | <% } %> |
15 | 32 | |
16 | <div class="updatable"> | |
17 | 33 | <% if (overview.object_totals) { %> |
18 | 34 | <h3>Global counts <span class="help" id="resource-counts"></span></h3> |
19 | 35 | |
45 | 61 | <% } %> |
46 | 62 | </div> |
47 | 63 | <% } %> |
48 | </div> | |
49 | 64 | |
50 | 65 | </div> |
51 | 66 | </div> |
306 | 321 | |
307 | 322 | <% if (overview.rates_mode == 'none') { %> |
308 | 323 | <div class="section-hidden"> |
309 | <h2>Message Rates Disabled</h2> | |
324 | <h2>Message rates disabled</h2> | |
310 | 325 | <div class="hider"> |
311 | 326 | <p> |
312 | 327 | Message rates are currently disabled. |
1 | 1 | |
2 | 2 | <div class="section"> |
3 | 3 | <h2>Overview</h2> |
4 | <div class="hider"> | |
4 | <div class="hider updatable"> | |
5 | 5 | <%= queue_lengths('lengths-q', queue) %> |
6 | 6 | <% if (rates_mode != 'none') { %> |
7 | 7 | <%= message_rates('msg-rates-q', queue.message_stats) %> |
8 | 8 | <% } %> |
9 | 9 | |
10 | <div class="updatable"> | |
11 | 10 | <h3>Details</h3> |
12 | 11 | <table class="facts facts-l"> |
13 | 12 | <tr> |
152 | 151 | <td class="r"><%= fmt_bytes(queue.memory) %></td> |
153 | 152 | </tr> |
154 | 153 | </table> |
155 | </div> | |
156 | 154 | </div> |
157 | 155 | </div> |
158 | 156 | |
244 | 242 | </div> |
245 | 243 | </div> |
246 | 244 | |
245 | <% if (user_policymaker) { %> | |
246 | <div class="section-hidden"> | |
247 | <h2>Move messages</h2> | |
248 | <div class="hider"> | |
249 | <% if (NAVIGATION['Admin'][0]['Shovel Management'] == undefined) { %> | |
250 | <p>To move messages, the shovel plugin must be enabled; try:</p> | 
251 | <pre>$ rabbitmq-plugins enable rabbitmq_shovel rabbitmq_shovel_management</pre> | |
252 | <% } else { %> | |
253 | <p> | |
254 | The shovel plugin can be used to move messages from this queue | |
255 | to another one. The form below will create a temporary shovel to | |
256 | move messages to another queue on the same virtual host, with | |
257 | default settings. | |
258 | </p> | |
259 | <p> | |
260 | For more options <a href="#/dynamic-shovels">see the shovel | |
261 | interface</a>. | |
262 | </p> | |
263 | <form action="#/shovel-parameters" method="put"> | |
264 | <input type="hidden" name="component" value="shovel"/> | |
265 | <input type="hidden" name="vhost" value="<%= fmt_string(queue.vhost) %>"/> | |
266 | <input type="hidden" name="name" value="Move from <%= fmt_string(queue.name) %>"/> | |
267 | <input type="hidden" name="src-uri" value="amqp:///<%= esc(queue.vhost) %>"/> | |
268 | <input type="hidden" name="src-queue" value="<%= fmt_string(queue.name) %>"/> | |
269 | ||
270 | <input type="hidden" name="dest-uri" value="amqp:///<%= esc(queue.vhost) %>"/> | |
271 | <input type="hidden" name="prefetch-count" value="1000"/> | |
272 | <input type="hidden" name="add-forward-headers" value="false"/> | |
273 | <input type="hidden" name="ack-mode" value="on-confirm"/> | |
274 | <input type="hidden" name="delete-after" value="queue-length"/> | |
275 | <input type="hidden" name="redirect" value="#/queues"/> | |
276 | ||
277 | <table class="form"> | |
278 | <tr> | |
279 | <th>Destination queue:</th> | |
280 | <td><input type="text" name="dest-queue"/></td> | |
281 | </tr> | |
282 | </table> | |
283 | <input type="submit" value="Move messages"/> | |
284 | </form> | |
285 | <% } %> | |
286 | </div> | |
287 | </div> | |
288 | <% } %> | |
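
The hidden form above PUTs a `shovel` parameter with fixed defaults. For reference, roughly the same temporary shovel could be created from the command line; this sketch assumes the default vhost `/` and queue names `src` and `dest`, none of which appear in this commit:

```shell
# Hedged equivalent of the "Move messages" form above (assumed names).
rabbitmqctl set_parameter shovel "Move from src" \
  '{"src-uri": "amqp:///%2f",  "src-queue":  "src",
    "dest-uri": "amqp:///%2f", "dest-queue": "dest",
    "prefetch-count": 1000, "ack-mode": "on-confirm",
    "add-forward-headers": false, "delete-after": "queue-length"}'
```

`"delete-after": "queue-length"` is what makes the shovel temporary: it stops after moving the messages present when it started.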
289 | ||
247 | 290 | <div class="section-hidden"> |
248 | 291 | <h2>Delete / purge</h2> |
249 | 292 | <div class="hider"> |
159 | 159 | <% } %> |
160 | 160 | <% if (rates_mode != 'none') { %> |
161 | 161 | <% if (show_column('queues', 'rate-incoming')) { %> |
162 | <td class="r"><%= fmt_rate(queue.message_stats, 'publish') %></td> | |
162 | <td class="r"><%= fmt_detail_rate(queue.message_stats, 'publish') %></td> | |
163 | 163 | <% } %> |
164 | 164 | <% if (show_column('queues', 'rate-deliver')) { %> |
165 | <td class="r"><%= fmt_rate(queue.message_stats, 'deliver_get') %></td> | |
165 | <td class="r"><%= fmt_detail_rate(queue.message_stats, 'deliver_get') %></td> | |
166 | 166 | <% } %> |
167 | 167 | <% if (show_column('queues', 'rate-redeliver')) { %> |
168 | <td class="r"><%= fmt_rate(queue.message_stats, 'redeliver') %></td> | |
168 | <td class="r"><%= fmt_detail_rate(queue.message_stats, 'redeliver') %></td> | |
169 | 169 | <% } %> |
170 | 170 | <% if (show_column('queues', 'rate-ack')) { %> |
171 | <td class="r"><%= fmt_rate(queue.message_stats, 'ack') %></td> | |
171 | <td class="r"><%= fmt_detail_rate(queue.message_stats, 'ack') %></td> | |
172 | 172 | <% } %> |
173 | 173 | <% } %> |
174 | 174 | </tr> |
251 | 251 | <span class="argument-link" field="arguments" key="x-max-length" type="number">Max length</span> <span class="help" id="queue-max-length"></span> | |
252 | 252 | <span class="argument-link" field="arguments" key="x-max-length-bytes" type="number">Max length bytes</span> <span class="help" id="queue-max-length-bytes"></span><br/> |
253 | 253 | <span class="argument-link" field="arguments" key="x-dead-letter-exchange" type="string">Dead letter exchange</span> <span class="help" id="queue-dead-letter-exchange"></span> | |
254 | <span class="argument-link" field="arguments" key="x-dead-letter-routing-key" type="string">Dead letter routing key</span> <span class="help" id="queue-dead-letter-routing-key"></span> | |
254 | <span class="argument-link" field="arguments" key="x-dead-letter-routing-key" type="string">Dead letter routing key</span> <span class="help" id="queue-dead-letter-routing-key"></span> | | |
255 | <span class="argument-link" field="arguments" key="x-max-priority" type="number">Maximum priority</span> <span class="help" id="queue-max-priority"></span> | |
255 | 256 | </td> |
256 | 257 | </tr> |
257 | 258 | </table> |
8 | 8 | |
9 | 9 | <div class="section"> |
10 | 10 | <h2>Overview</h2> |
11 | <div class="hider"> | |
11 | <div class="hider updatable"> | |
12 | 12 | <%= queue_lengths('lengths-vhost', vhost) %> |
13 | 13 | <% if (rates_mode != 'none') { %> |
14 | 14 | <%= message_rates('msg-rates-vhost', vhost.message_stats) %> |
15 | 15 | <% } %> |
16 | 16 | <%= data_rates('data-rates-vhost', vhost, 'Data rates') %> |
17 | <div class="updatable"> | |
18 | 17 | <h3>Details</h3> |
19 | 18 | <table class="facts"> |
20 | 19 | <tr> |
22 | 21 | <td><%= fmt_boolean(vhost.tracing) %></td> |
23 | 22 | </tr> |
24 | 23 | </table> |
25 | </div> | |
26 | 24 | </div> |
27 | 25 | </div> |
28 | 26 |
63 | 63 | <td class="r"><%= fmt_num_thousands(vhost.messages) %></td> |
64 | 64 | <% } %> |
65 | 65 | <% if (show_column('vhosts', 'from_client')) { %> |
66 | <td><%= fmt_rate_bytes(vhost, 'recv_oct') %></td> | |
66 | <td><%= fmt_detail_rate_bytes(vhost, 'recv_oct') %></td> | |
67 | 67 | <% } %> |
68 | 68 | <% if (show_column('vhosts', 'to_client')) { %> |
69 | <td><%= fmt_rate_bytes(vhost, 'send_oct') %></td> | |
69 | <td><%= fmt_detail_rate_bytes(vhost, 'send_oct') %></td> | |
70 | 70 | <% } %> |
71 | 71 | <% if (rates_mode != 'none') { %> |
72 | 72 | <% if (show_column('vhosts', 'rate-publish')) { %> |
73 | <td class="r"><%= fmt_rate(vhost.message_stats, 'publish') %></td> | |
73 | <td class="r"><%= fmt_detail_rate(vhost.message_stats, 'publish') %></td> | |
74 | 74 | <% } %> |
75 | 75 | <% if (show_column('vhosts', 'rate-deliver')) { %> |
76 | <td class="r"><%= fmt_rate(vhost.message_stats, 'deliver_get') %></td> | |
76 | <td class="r"><%= fmt_detail_rate(vhost.message_stats, 'deliver_get') %></td> | |
77 | 77 | <% } %> |
78 | 78 | <% } %> |
79 | 79 | </tr> |
147 | 147 | channel_queue_exchange_stats]). |
148 | 148 | -define(TABLES, [queue_stats, connection_stats, channel_stats, |
149 | 149 | consumers_by_queue, consumers_by_channel, |
150 | node_stats]). | |
150 | node_stats, node_node_stats]). | |
151 | 151 | |
152 | 152 | -define(DELIVER_GET, [deliver, deliver_no_ack, get, get_no_ack]). |
153 | 153 | -define(FINE_STATS, [publish, publish_in, publish_out, |
154 | 154 | ack, deliver_get, confirm, return_unroutable, redeliver] ++ |
155 | 155 | ?DELIVER_GET). |
156 | 156 | |
157 | -define(COARSE_QUEUE_STATS, | |
158 | [messages, messages_ready, messages_unacknowledged]). | |
157 | %% Most come from channels as fine stats, but queues emit these directly. | |
158 | -define(QUEUE_MSG_RATES, [disk_reads, disk_writes]). | |
159 | ||
160 | -define(MSG_RATES, ?FINE_STATS ++ ?QUEUE_MSG_RATES). | |
161 | ||
162 | -define(QUEUE_MSG_COUNTS, [messages, messages_ready, messages_unacknowledged]). | |
159 | 163 | |
160 | 164 | -define(COARSE_NODE_STATS, |
161 | [mem_used, fd_used, sockets_used, proc_used, disk_free]). | |
165 | [mem_used, fd_used, sockets_used, proc_used, disk_free, | |
166 | io_read_count, io_read_bytes, io_read_avg_time, | |
167 | io_write_count, io_write_bytes, io_write_avg_time, | |
168 | io_sync_count, io_sync_avg_time, | |
169 | io_seek_count, io_seek_avg_time, | |
170 | io_reopen_count, mnesia_ram_tx_count, mnesia_disk_tx_count, | |
171 | msg_store_read_count, msg_store_write_count, | |
172 | queue_index_journal_write_count, | |
173 | queue_index_write_count, queue_index_read_count]). | |
174 | ||
175 | -define(COARSE_NODE_NODE_STATS, [send_bytes, recv_bytes]). | |
176 | ||
177 | %% Normally 0 and no history means "has never happened, don't | |
178 | %% report". But for these things we do want to report even at 0 with | |
179 | %% no history. | |
180 | -define(ALWAYS_REPORT_STATS, | |
181 | [io_read_avg_time, io_write_avg_time, | |
182 | io_sync_avg_time | ?QUEUE_MSG_COUNTS]). | |
162 | 183 | |
163 | 184 | -define(COARSE_CONN_STATS, [recv_oct, send_oct]). |
164 | 185 | |
179 | 200 | prioritise_cast(_Msg, _Len, _State) -> |
180 | 201 | 0. |
181 | 202 | |
182 | %% We want timely replies to queries even when overloaded! | |
183 | prioritise_call(_Msg, _From, _Len, _State) -> 5. | |
203 | %% We want timely replies to queries even when overloaded, so return 5 | |
204 | %% as priority. Also we only have access to the queue length here, not | |
205 | %% in handle_call/3, so stash it in the dictionary. This is a bit ugly | |
206 | %% but better than fiddling with gen_server2 even more. | |
207 | prioritise_call(_Msg, _From, Len, _State) -> | |
208 | put(last_queue_length, Len), | |
209 | 5. | |
184 | 210 | |
185 | 211 | %%---------------------------------------------------------------------------- |
186 | 212 | %% API |
320 | 346 | %% recv_oct now! |
321 | 347 | VStats = [read_simple_stats(vhost_stats, VHost, State) || |
322 | 348 | VHost <- VHosts], |
323 | MessageStats = [overview_sum(Type, VStats) || Type <- ?FINE_STATS], | |
324 | QueueStats = [overview_sum(Type, VStats) || Type <- ?COARSE_QUEUE_STATS], | |
349 | MessageStats = [overview_sum(Type, VStats) || Type <- ?MSG_RATES], | |
350 | QueueStats = [overview_sum(Type, VStats) || Type <- ?QUEUE_MSG_COUNTS], | |
325 | 351 | F = case User of |
326 | 352 | all -> fun (L) -> length(L) end; |
327 | 353 | _ -> fun (L) -> length(rabbit_mgmt_util:filter_user(L, User)) end |
342 | 368 | {channels, F(created_events(channel_stats, Tables))}], |
343 | 369 | reply([{message_stats, format_samples(Ranges, MessageStats, State)}, |
344 | 370 | {queue_totals, format_samples(Ranges, QueueStats, State)}, |
345 | {object_totals, ObjectTotals}], State); | |
371 | {object_totals, ObjectTotals}, | |
372 | {statistics_db_event_queue, get(last_queue_length)}], State); | |
346 | 373 | |
347 | 374 | handle_call({override_lookups, Lookups}, _From, State) -> |
348 | 375 | reply(ok, State#state{lookups = Lookups}); |
433 | 460 | %% passed a queue proplist that will already have been formatted - |
434 | 461 | %% i.e. it will have name and vhost keys. |
435 | 462 | id_name(node_stats) -> name; |
463 | id_name(node_node_stats) -> route; | |
436 | 464 | id_name(vhost_stats) -> name; |
437 | 465 | id_name(queue_stats) -> name; |
438 | 466 | id_name(exchange_stats) -> name; |
475 | 503 | [{fun rabbit_mgmt_format:properties/1,[backing_queue_status]}, |
476 | 504 | {fun rabbit_mgmt_format:now_to_str/1, [idle_since]}, |
477 | 505 | {fun rabbit_mgmt_format:queue_state/1, [state]}], |
478 | ?COARSE_QUEUE_STATS, State); | |
506 | ?QUEUE_MSG_COUNTS, ?QUEUE_MSG_RATES, State); | |
479 | 507 | |
480 | 508 | handle_event(Event = #event{type = queue_deleted, |
481 | 509 | props = [{name, Name}], |
490 | 518 | %% This ceil must correspond to the ceil in append_samples/5 |
491 | 519 | TS = ceil(Timestamp, State), |
492 | 520 | OldStats = lookup_element(OldTable, Id), |
493 | [record_sample(Id, {Key, -pget(Key, OldStats, 0), TS, State}, State) | |
494 | || Key <- ?COARSE_QUEUE_STATS], | |
521 | [record_sample(Id, {Key, -pget(Key, OldStats, 0), TS, State}, true, State) | |
522 | || Key <- ?QUEUE_MSG_COUNTS], | |
495 | 523 | delete_samples(channel_queue_stats, {'_', Name}, State), |
496 | 524 | delete_samples(queue_exchange_stats, {Name, '_'}, State), |
497 | 525 | delete_samples(queue_stats, Name, State), |
506 | 534 | |
507 | 535 | handle_event(#event{type = vhost_deleted, |
508 | 536 | props = [{name, Name}]}, State) -> |
509 | delete_samples(vhost_stats, Name, State), | |
510 | {ok, State}; | |
537 | delete_samples(vhost_stats, Name, State); | |
511 | 538 | |
512 | 539 | handle_event(#event{type = connection_created, props = Stats}, State) -> |
513 | 540 | handle_created( |
542 | 569 | ets:match_delete(OldTable, {{fine, {ChPid, '_'}}, '_'}), |
543 | 570 | ets:match_delete(OldTable, {{fine, {ChPid, '_', '_'}}, '_'}), |
544 | 571 | [handle_fine_stats(Timestamp, AllStatsElem, State) |
545 | || AllStatsElem <- AllStats], | |
546 | {ok, State}; | |
572 | || AllStatsElem <- AllStats]; | |
547 | 573 | |
548 | 574 | handle_event(Event = #event{type = channel_closed, |
549 | 575 | props = [{pid, Pid}]}, |
571 | 597 | %% TODO: we don't clear up after dead nodes here - this is a very tiny |
572 | 598 | %% leak every time a node is permanently removed from the cluster. Do |
573 | 599 | %% we care? |
574 | handle_event(#event{type = node_stats, props = Stats, timestamp = Timestamp}, | |
600 | handle_event(#event{type = node_stats, props = Stats0, timestamp = Timestamp}, | |
575 | 601 | State) -> |
602 | Stats = proplists:delete(persister_stats, Stats0) ++ | |
603 | pget(persister_stats, Stats0), | |
576 | 604 | handle_stats(node_stats, Stats, Timestamp, [], ?COARSE_NODE_STATS, State); |
577 | 605 | |
578 | handle_event(_Event, State) -> | |
579 | {ok, State}. | |
606 | handle_event(#event{type = node_node_stats, props = Stats, | |
607 | timestamp = Timestamp}, State) -> | |
608 | handle_stats(node_node_stats, Stats, Timestamp, [], ?COARSE_NODE_NODE_STATS, | |
609 | State); | |
610 | ||
611 | handle_event(Event = #event{type = node_node_deleted, | |
612 | props = [{route, Route}]}, State) -> | |
613 | delete_samples(node_node_stats, Route, State), | |
614 | handle_deleted(node_node_stats, Event, State); | |
615 | ||
616 | handle_event(_Event, _State) -> | |
617 | ok. | |
580 | 618 | |
581 | 619 | handle_created(TName, Stats, Funs, State = #state{tables = Tables}) -> |
582 | 620 | Formatted = rabbit_mgmt_format:format(Stats, Funs), |
585 | 623 | pget(name, Stats)}), |
586 | 624 | {ok, State}. |
587 | 625 | |
588 | handle_stats(TName, Stats, Timestamp, Funs, RatesKeys, | |
626 | handle_stats(TName, Stats, Timestamp, Funs, RatesKeys, State) -> | |
627 | handle_stats(TName, Stats, Timestamp, Funs, RatesKeys, [], State). | |
628 | ||
629 | handle_stats(TName, Stats, Timestamp, Funs, RatesKeys, NoAggRatesKeys, | |
589 | 630 | State = #state{tables = Tables, old_stats = OldTable}) -> |
590 | 631 | Id = id(TName, Stats), |
591 | 632 | IdSamples = {coarse, {TName, Id}}, |
592 | 633 | OldStats = lookup_element(OldTable, IdSamples), |
593 | append_samples(Stats, Timestamp, OldStats, IdSamples, RatesKeys, State), | |
634 | append_samples( | |
635 | Stats, Timestamp, OldStats, IdSamples, RatesKeys, true, State), | |
636 | append_samples( | |
637 | Stats, Timestamp, OldStats, IdSamples, NoAggRatesKeys, false, State), | |
594 | 638 | StripKeys = [id_name(TName)] ++ RatesKeys ++ ?FINE_STATS_TYPES, |
595 | 639 | Stats1 = [{K, V} || {K, V} <- Stats, not lists:member(K, StripKeys)], |
596 | 640 | Stats2 = rabbit_mgmt_format:format(Stats1, Funs), |
654 | 698 | 0 -> Stats; |
655 | 699 | _ -> [{deliver_get, Total}|Stats] |
656 | 700 | end, |
657 | append_samples(Stats1, Timestamp, OldStats, {fine, Id}, all, State). | |
701 | append_samples(Stats1, Timestamp, OldStats, {fine, Id}, all, true, State). | |
658 | 702 | |
659 | 703 | delete_samples(Type, {Id, '_'}, State) -> |
660 | 704 | delete_samples_with_index(Type, Id, fun forward/2, State); |
678 | 722 | |
679 | 723 | delete_match(Type, Id) -> {{{Type, Id}, '_'}, '_'}. |
680 | 724 | |
681 | append_samples(Stats, TS, OldStats, Id, Keys, | |
725 | append_samples(Stats, TS, OldStats, Id, Keys, Agg, | |
682 | 726 | State = #state{old_stats = OldTable}) -> |
683 | 727 | case ignore_coarse_sample(Id, State) of |
684 | 728 | false -> |
686 | 730 | %% queue_deleted |
687 | 731 | NewMS = ceil(TS, State), |
688 | 732 | case Keys of |
689 | all -> [append_sample(Key, Value, NewMS, OldStats, Id, State) | |
690 | || {Key, Value} <- Stats]; | |
691 | _ -> [append_sample( | |
692 | Key, pget(Key, Stats), NewMS, OldStats, Id, State) | |
693 | || Key <- Keys] | |
733 | all -> [append_sample(K, V, NewMS, OldStats, Id, Agg, State) | |
734 | || {K, V} <- Stats]; | |
735 | _ -> [append_sample(K, V, NewMS, OldStats, Id, Agg, State) | |
736 | || K <- Keys, | |
737 | V <- [pget(K, Stats)], | |
738 | V =/= 0 orelse lists:member(K, ?ALWAYS_REPORT_STATS)] | |
694 | 739 | end, |
695 | 740 | ets:insert(OldTable, {Id, Stats}); |
696 | 741 | true -> |
697 | 742 | ok |
698 | 743 | end. |
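
The revised `append_samples` above drops zero-valued samples unless the key is listed in `?ALWAYS_REPORT_STATS`. A minimal Python sketch of that filtering rule (helper names are invented for illustration; the real implementation is the Erlang above):

```python
# Sketch of the sample filter in append_samples above: a zero counter is
# not recorded unless its key must always be reported (e.g. queue
# message counts, which are meaningful at zero).
ALWAYS_REPORT = {
    "io_read_avg_time", "io_write_avg_time", "io_sync_avg_time",
    "messages", "messages_ready", "messages_unacknowledged",
}

def samples_to_record(stats, keys):
    """Return the (key, value) pairs that would actually be recorded."""
    return [(k, stats.get(k, 0))
            for k in keys
            if stats.get(k, 0) != 0 or k in ALWAYS_REPORT]
```

So a queue reporting `messages = 0` still produces a sample (the UI shows an explicit zero), while a node that has never reopened a file produces no `io_reopen_count` history at all.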
699 | 744 | |
700 | append_sample(Key, Value, NewMS, OldStats, Id, State) when is_number(Value) -> | |
745 | append_sample(Key, Val, NewMS, OldStats, Id, Agg, State) when is_number(Val) -> | |
701 | 746 | record_sample( |
702 | Id, {Key, Value - pget(Key, OldStats, 0), NewMS, State}, State); | |
703 | ||
704 | append_sample(_Key, _Value, _NewMS, _OldStats, _Id, _State) -> | |
747 | Id, {Key, Val - pget(Key, OldStats, 0), NewMS, State}, Agg, State); | |
748 | ||
749 | append_sample(_Key, _Value, _NewMS, _OldStats, _Id, _Agg, _State) -> | |
705 | 750 | ok. |
706 | 751 | |
707 | 752 | ignore_coarse_sample({coarse, {queue_stats, Q}}, State) -> |
710 | 755 | false. |
711 | 756 | |
712 | 757 | %% Node stats do not have a vhost of course |
713 | record_sample({coarse, {node_stats, _Node} = Id}, Args, _State) -> | |
758 | record_sample({coarse, {node_stats, _Node} = Id}, Args, true, _State) -> | |
714 | 759 | record_sample0(Id, Args); |
715 | 760 | |
716 | record_sample({coarse, Id}, Args, State) -> | |
761 | record_sample({coarse, {node_node_stats, _Names} = Id}, Args, true, _State) -> | |
762 | record_sample0(Id, Args); | |
763 | ||
764 | record_sample({coarse, Id}, Args, false, _State) -> | |
765 | record_sample0(Id, Args); | |
766 | ||
767 | record_sample({coarse, Id}, Args, true, State) -> | |
717 | 768 | record_sample0(Id, Args), |
718 | 769 | record_sample0({vhost_stats, vhost(Id, State)}, Args); |
719 | 770 | |
720 | 771 | %% Deliveries / acks (Q -> Ch) |
721 | record_sample({fine, {Ch, Q = #resource{kind = queue}}}, Args, State) -> | |
772 | record_sample({fine, {Ch, Q = #resource{kind = queue}}}, Args, true, State) -> | |
722 | 773 | case object_exists(Q, State) of |
723 | 774 | true -> record_sample0({channel_queue_stats, {Ch, Q}}, Args), |
724 | 775 | record_sample0({queue_stats, Q}, Args); |
728 | 779 | record_sample0({vhost_stats, vhost(Q)}, Args); |
729 | 780 | |
730 | 781 | %% Publishes / confirms (Ch -> X) |
731 | record_sample({fine, {Ch, X = #resource{kind = exchange}}}, Args, State) -> | |
782 | record_sample({fine, {Ch, X = #resource{kind = exchange}}}, Args, true,State) -> | |
732 | 783 | case object_exists(X, State) of |
733 | 784 | true -> record_sample0({channel_exchange_stats, {Ch, X}}, Args), |
734 | 785 | record_sampleX(publish_in, X, Args); |
740 | 791 | %% Publishes (but not confirms) (Ch -> X -> Q) |
741 | 792 | record_sample({fine, {_Ch, |
742 | 793 | Q = #resource{kind = queue}, |
743 | X = #resource{kind = exchange}}}, Args, State) -> | |
794 | X = #resource{kind = exchange}}}, Args, true, State) -> | |
744 | 795 | %% TODO This one logically feels like it should be here. It would |
745 | 796 | %% correspond to "publishing channel message rates to queue" - |
746 | 797 | %% which would be nice to handle - except we don't. And just |
799 | 850 | |
800 | 851 | %% Ignore case where ID1 and ID2 are in a tuple, i.e. detailed stats, |
801 | 852 | %% when in basic mode |
802 | record_sample0({_, {_ID1, _ID2}}, {_, _, _, #state{rates_mode = basic}}) -> | |
853 | record_sample0({Type, {_ID1, _ID2}}, {_, _, _, #state{rates_mode = basic}}) | |
854 | when Type =/= node_node_stats -> | |
803 | 855 | ok; |
804 | 856 | record_sample0(Id0, {Key, Diff, TS, #state{aggregated_stats = ETS, |
805 | 857 | aggregated_stats_index = ETSi}}) -> |
833 | 885 | {channel_stats, [{publishes, channel_exchange_stats, fun first/1}, |
834 | 886 | {deliveries, channel_queue_stats, fun first/1}]}). |
835 | 887 | |
888 | -define(NODE_DETAILS, | |
889 | {node_stats, [{cluster_links, node_node_stats, fun first/1}]}). | |
890 | ||
836 | 891 | first(Id) -> {Id, '$1'}. |
837 | 892 | second(Id) -> {'$1', Id}. |
838 | 893 | |
886 | 941 | |
887 | 942 | node_stats(Ranges, Objs, State) -> |
888 | 943 | merge_stats(Objs, [basic_stats_fun(node_stats, State), |
889 | simple_stats_fun(Ranges, node_stats, State)]). | |
944 | simple_stats_fun(Ranges, node_stats, State), | |
945 | detail_and_basic_stats_fun( | |
946 | node_node_stats, Ranges, ?NODE_DETAILS, State)]). | |
890 | 947 | |
891 | 948 | merge_stats(Objs, Funs) -> |
892 | 949 | [lists:foldl(fun (Fun, Props) -> combine(Fun(Props), Props) end, Obj, Funs) |
921 | 978 | Id = id_lookup(IdType, Props), |
922 | 979 | [detail_stats(Ranges, Name, AggregatedStatsType, IdFun(Id), State) |
923 | 980 | || {Name, AggregatedStatsType, IdFun} <- FineSpecs] |
981 | end. | |
982 | ||
983 | %% This does not quite do the same as detail_stats_fun + | |
984 | %% basic_stats_fun; the basic part here assumes compound keys (like | |
985 | %% detail stats) but non-calculated (like basic stats). Currently the | |
986 | %% only user of that is node-node stats. | |
987 | %% | |
988 | %% We also assume that FineSpecs is single length here (at [1]). | |
989 | detail_and_basic_stats_fun(Type, Ranges, {IdType, FineSpecs}, | |
990 | State = #state{tables = Tables}) -> | |
991 | Table = orddict:fetch(Type, Tables), | |
992 | F = detail_stats_fun(Ranges, {IdType, FineSpecs}, State), | |
993 | fun (Props) -> | |
994 | Id = id_lookup(IdType, Props), | |
995 | BasicStatsRaw = ets:match(Table, {{{Id, '$1'}, stats}, '$2', '_'}), | |
996 | BasicStatsDict = dict:from_list([{K, V} || [K,V] <- BasicStatsRaw]), | |
997 | [{K, Items}] = F(Props), %% [1] | |
998 | Items2 = [case dict:find(id_lookup(IdType, Item), BasicStatsDict) of | |
999 | {ok, BasicStats} -> BasicStats ++ Item; | |
1000 | error -> Item | |
1001 | end || Item <- Items], | |
1002 | [{K, Items2}] | |
924 | 1003 | end. |
925 | 1004 | |
926 | 1005 | read_simple_stats(Type, Id, #state{aggregated_stats = ETS}) -> |
942 | 1021 | end, [], FromETS). |
943 | 1022 | |
944 | 1023 | extract_msg_stats(Stats) -> |
945 | FineStats = lists:append([[K, details_key(K)] || K <- ?FINE_STATS]), | |
1024 | FineStats = lists:append([[K, details_key(K)] || K <- ?MSG_RATES]), | |
946 | 1025 | {MsgStats, Other} = |
947 | 1026 | lists:partition(fun({K, _}) -> lists:member(K, FineStats) end, Stats), |
948 | 1027 | case MsgStats of |
959 | 1038 | augment_msg_stats([{channel, ChPid}], State); |
960 | 1039 | format_detail_id(#resource{name = Name, virtual_host = Vhost, kind = Kind}, |
961 | 1040 | _State) -> |
962 | [{Kind, [{name, Name}, {vhost, Vhost}]}]. | |
1041 | [{Kind, [{name, Name}, {vhost, Vhost}]}]; | |
1042 | format_detail_id(Node, _State) when is_atom(Node) -> | |
1043 | [{name, Node}]. | |
963 | 1044 | |
964 | 1045 | format_samples(Ranges, ManyStats, #state{interval = Interval}) -> |
965 | 1046 | lists:append( |
966 | 1047 | [case rabbit_mgmt_stats:is_blank(Stats) andalso |
967 | not lists:member(K, ?COARSE_QUEUE_STATS) of | |
1048 | not lists:member(K, ?ALWAYS_REPORT_STATS) of | |
968 | 1049 | true -> []; |
969 | 1050 | false -> {Details, Counter} = rabbit_mgmt_stats:format( |
970 | 1051 | pick_range(K, Ranges), |
974 | 1055 | end || {K, Stats} <- ManyStats]). |
975 | 1056 | |
976 | 1057 | pick_range(K, {RangeL, RangeM, RangeD, RangeN}) -> |
977 | case {lists:member(K, ?COARSE_QUEUE_STATS), | |
978 | lists:member(K, ?FINE_STATS), | |
1058 | case {lists:member(K, ?QUEUE_MSG_COUNTS), | |
1059 | lists:member(K, ?MSG_RATES), | |
979 | 1060 | lists:member(K, ?COARSE_CONN_STATS), |
980 | lists:member(K, ?COARSE_NODE_STATS)} of | |
1061 | lists:member(K, ?COARSE_NODE_STATS) | |
1062 | orelse lists:member(K, ?COARSE_NODE_NODE_STATS)} of | |
981 | 1063 | {true, false, false, false} -> RangeL; |
982 | 1064 | {false, true, false, false} -> RangeM; |
983 | 1065 | {false, false, true, false} -> RangeD; |
1094 | 1176 | gc_batch(0, _Policies, State) -> |
1095 | 1177 | State; |
1096 | 1178 | gc_batch(Rows, Policies, State = #state{aggregated_stats = ETS, |
1097 | gc_next_key = Key0}) -> | |
1179 | gc_next_key = Key0}) -> | |
1098 | 1180 | Key = case Key0 of |
1099 | 1181 | undefined -> ets:first(ETS); |
1100 | 1182 | _ -> ets:next(ETS, Key0) |
1116 | 1198 | end. |
1117 | 1199 | |
1118 | 1200 | retention_policy(node_stats) -> global; |
1201 | retention_policy(node_node_stats) -> global; | |
1119 | 1202 | retention_policy(vhost_stats) -> global; |
1120 | 1203 | retention_policy(queue_stats) -> basic; |
1121 | 1204 | retention_policy(exchange_stats) -> basic; |
156 | 156 | {tags, tags(User#internal_user.tags)}]. |
157 | 157 | |
158 | 158 | user(User) -> |
159 | [{name, User#user.username}, | |
160 | {tags, tags(User#user.tags)}, | |
161 | {auth_backend, User#user.auth_backend}]. | |
159 | [{name, User#user.username}, | |
160 | {tags, tags(User#user.tags)}]. | |
162 | 161 | |
163 | 162 | tags(Tags) -> |
164 | 163 | list_to_binary(string:join([atom_to_list(T) || T <- Tags], ",")). |
27 | 27 | -export([with_channel/4, with_channel/5]). |
28 | 28 | -export([props_to_method/2, props_to_method/4]). |
29 | 29 | -export([all_or_one_vhost/2, http_to_amqp/5, reply/3, filter_vhost/3]). |
30 | -export([filter_conn_ch_list/3, filter_user/2, list_login_vhosts/1]). | |
30 | -export([filter_conn_ch_list/3, filter_user/2, list_login_vhosts/2]). | |
31 | 31 | -export([with_decode/5, decode/1, decode/2, redirect/2, set_resp_header/3, |
32 | 32 | args/1]). |
33 | 33 | -export([reply_list/3, reply_list/4, sort_list/2, destination_type/1]). |
76 | 76 | case vhost(ReqData) of |
77 | 77 | not_found -> true; |
78 | 78 | none -> true; |
79 | V -> lists:member(V, list_login_vhosts(User)) | |
79 | V -> lists:member(V, list_login_vhosts(User, peersock(ReqData))) | |
80 | 80 | end. |
81 | 81 | |
82 | 82 | %% Used for connections / channels. A normal user can only see / delete |
136 | 136 | not_allowed -> |
137 | 137 | ErrFun(<<"User can only log in via localhost">>) |
138 | 138 | end; |
139 | {refused, Msg, Args} -> | |
139 | {refused, _Username, Msg, Args} -> | |
140 | 140 | rabbit_log:warning("HTTP access denied: ~s~n", |
141 | 141 | [rabbit_misc:format(Msg, Args)]), |
142 | 142 | not_authorised(<<"Login failed">>, ReqData, Context) |
143 | 143 | end. |
144 | 144 | |
145 | peer(ReqData) -> | |
146 | {ok, {IP,_Port}} = peername(peersock(ReqData)), | |
147 | IP. | |
148 | ||
145 | 149 | %% We can't use wrq:peer/1 because that trusts X-Forwarded-For. |
146 | peer(ReqData) -> | |
150 | peersock(ReqData) -> | |
147 | 151 | WMState = ReqData#wm_reqdata.wm_state, |
148 | {ok, {IP,_Port}} = peername(WMState#wm_reqstate.socket), | |
149 | IP. | |
152 | WMState#wm_reqstate.socket. | |
150 | 153 | |
151 | 154 | %% Like the one in rabbit_net, but we and webmachine have a different |
152 | 155 | %% way of wrapping |
451 | 454 | end; |
452 | 455 | {error, {auth_failure, Msg}} -> |
453 | 456 | not_authorised(Msg, ReqData, Context); |
457 | {error, access_refused} -> | |
458 | not_authorised(<<"Access refused.">>, ReqData, Context); | |
454 | 459 | {error, {nodedown, N}} -> |
455 | 460 | bad_request( |
456 | 461 | list_to_binary( |
469 | 474 | VHost -> Fun(VHost) |
470 | 475 | end. |
471 | 476 | |
472 | filter_vhost(List, _ReqData, Context) -> | |
473 | VHosts = list_login_vhosts(Context#context.user), | |
477 | filter_vhost(List, ReqData, Context) -> | |
478 | VHosts = list_login_vhosts(Context#context.user, peersock(ReqData)), | |
474 | 479 | [I || I <- List, lists:member(pget(vhost, I), VHosts)]. |
475 | 480 | |
476 | 481 | filter_user(List, _ReqData, #context{user = User}) -> |
510 | 515 | {{halt, Code}, ReqData, Context}; |
511 | 516 | post_respond({JSON, ReqData, Context}) -> |
512 | 517 | {true, set_resp_header( |
513 | "content-type", "application/json", | |
518 | "Content-Type", "application/json", | |
514 | 519 | wrq:append_to_response_body(JSON, ReqData)), Context}. |
515 | 520 | |
516 | 521 | is_admin(T) -> intersects(T, [administrator]). |
532 | 537 | list_visible_vhosts(User = #user{tags = Tags}) -> |
533 | 538 | case is_monitor(Tags) of |
534 | 539 | true -> rabbit_vhost:list(); |
535 | false -> list_login_vhosts(User) | |
536 | end. | |
537 | ||
538 | list_login_vhosts(User) -> | |
540 | false -> list_login_vhosts(User, undefined) | |
541 | end. | |
542 | ||
543 | list_login_vhosts(User, Sock) -> | |
539 | 544 | [V || V <- rabbit_vhost:list(), |
540 | case catch rabbit_access_control:check_vhost_access(User, V) of | |
545 | case catch rabbit_access_control:check_vhost_access(User, V, Sock) of | |
541 | 546 | ok -> true; |
542 | 547 | _ -> false |
543 | 548 | end]. |
54 | 54 | [{exchange, rabbit_mgmt_util:id(exchange, ReqData)}]). |
55 | 55 | |
56 | 56 | delete_resource(ReqData, Context) -> |
57 | IfUnused = "true" =:= wrq:get_qs_value("if-unused", ReqData), | |
57 | 58 | rabbit_mgmt_util:amqp_request( |
58 | 59 | rabbit_mgmt_util:vhost(ReqData), ReqData, Context, |
59 | #'exchange.delete'{ exchange = id(ReqData) }). | |
60 | #'exchange.delete'{exchange = id(ReqData), | |
61 | if_unused = IfUnused}). | |
60 | 62 | |
61 | 63 | is_authorized(ReqData, Context) -> |
62 | 64 | rabbit_mgmt_util:is_authorized_vhost(ReqData, Context). |
16 | 16 | -module(rabbit_mgmt_wm_exchange_publish). |
17 | 17 | |
18 | 18 | -export([init/1, resource_exists/2, post_is_create/2, is_authorized/2, |
19 | allowed_methods/2, process_post/2]). | |
19 | allowed_methods/2, content_types_provided/2, process_post/2]). | |
20 | 20 | |
21 | 21 | -include("rabbit_mgmt.hrl"). |
22 | 22 | -include_lib("webmachine/include/webmachine.hrl"). |
27 | 27 | |
28 | 28 | allowed_methods(ReqData, Context) -> |
29 | 29 | {['POST'], ReqData, Context}. |
30 | ||
31 | content_types_provided(ReqData, Context) -> | |
32 | {[{"application/json", to_json}], ReqData, Context}. | |
30 | 33 | |
31 | 34 | resource_exists(ReqData, Context) -> |
32 | 35 | {case rabbit_mgmt_wm_exchange:exchange(ReqData) of |
47 | 47 | case rabbit_mgmt_util:is_monitor(Tags) of |
48 | 48 | true -> |
49 | 49 | Overview0 ++ |
50 | [{K, {struct, V}} || | |
51 | {K, V} <- rabbit_mgmt_db:get_overview(Range)] ++ | |
50 | [{K, maybe_struct(V)} || | |
51 | {K,V} <- rabbit_mgmt_db:get_overview(Range)] ++ | |
52 | 52 | [{node, node()}, |
53 | 53 | {statistics_db_node, stats_db_node()}, |
54 | 54 | {listeners, listeners()}, |
55 | 55 | {contexts, web_contexts(ReqData)}]; |
56 | 56 | _ -> |
57 | 57 | Overview0 ++ |
58 | [{K, {struct, V}} || | |
58 | [{K, maybe_struct(V)} || | |
59 | 59 | {K, V} <- rabbit_mgmt_db:get_overview(User, Range)] |
60 | 60 | end, |
61 | 61 | rabbit_mgmt_util:reply(Overview, ReqData, Context). |
81 | 81 | || L <- rabbit_networking:active_listeners()], |
82 | 82 | ["protocol", "port", "node"] ). |
83 | 83 | |
84 | maybe_struct(L) when is_list(L) -> {struct, L}; | |
85 | maybe_struct(V) -> V. | |
86 | ||
84 | 87 | %%-------------------------------------------------------------------- |
85 | 88 | |
86 | 89 | web_contexts(ReqData) -> |
57 | 57 | rabbit_mgmt_util:amqp_request( |
58 | 58 | rabbit_mgmt_util:vhost(ReqData), |
59 | 59 | ReqData, Context, |
60 | #'queue.delete'{ queue = rabbit_mgmt_util:id(queue, ReqData) }). | |
60 | #'queue.delete'{ queue = rabbit_mgmt_util:id(queue, ReqData), | |
61 | if_empty = qs_true("if-empty", ReqData), | |
62 | if_unused = qs_true("if-unused", ReqData) }). | |
61 | 63 | |
62 | 64 | is_authorized(ReqData, Context) -> |
63 | 65 | rabbit_mgmt_util:is_authorized_vhost(ReqData, Context). |
77 | 79 | {ok, Q} -> rabbit_mgmt_format:queue(Q); |
78 | 80 | {error, not_found} -> not_found |
79 | 81 | end. |
82 | ||
83 | qs_true(Key, ReqData) -> "true" =:= wrq:get_qs_value(Key, ReqData). |
8 | 8 | {load_definitions, none}, |
9 | 9 | {rates_mode, basic}, |
10 | 10 | {sample_retention_policies, |
11 | %% List of {MaxAgeSecs, IfTimestampDivisibleBySecs} | |
11 | %% List of {MaxAgeInSeconds, SampleEveryNSeconds} | |
12 | 12 | [{global, [{605, 5}, {3660, 60}, {29400, 600}, {86400, 1800}]}, |
13 | 13 | {basic, [{605, 5}, {3600, 60}]}, |
14 | 14 | {detailed, [{10, 5}]}]} |
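The retention tuples above pair a maximum sample age with a sampling interval (the old comment spelled it out as `{MaxAgeSecs, IfTimestampDivisibleBySecs}`): a data point survives at a given granularity while it is younger than the max age and its timestamp is divisible by the interval. A minimal Python sketch of that filter; the function name `retained` and the list-of-tuples representation are illustrative, not part of the plugin:

```python
def retained(timestamp, now, policy):
    """Return True if a sample taken at `timestamp` (seconds) should
    still be held under `policy`, a list of (max_age, interval) tuples."""
    age = now - timestamp
    return any(age <= max_age and timestamp % interval == 0
               for max_age, interval in policy)

# "basic" policy from the config above: 5-second samples for ~10 minutes,
# then one sample per minute for an hour.
basic = [(605, 5), (3600, 60)]
print(retained(1000, 1300, basic))  # 300s old, on a 5s boundary -> True
print(retained(1020, 4000, basic))  # too old for the 5s band, kept at 60s granularity
```

Coarser bands keep fewer, older points, which is why the `global` policy above trades a 1800-second interval for a full day of history.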
1006 | 1006 | ?assertEqual([{routed, false}], |
1007 | 1007 | http_post("/exchanges/%2f/amq.default/publish", Msg, ?OK)). |
1008 | 1008 | |
1009 | if_empty_unused_test() -> | |
1010 | http_put("/exchanges/%2f/test", [], ?NO_CONTENT), | |
1011 | http_put("/queues/%2f/test", [], ?NO_CONTENT), | |
1012 | http_post("/bindings/%2f/e/test/q/test", [], ?CREATED), | |
1013 | http_post("/exchanges/%2f/amq.default/publish", | |
1014 | msg(<<"test">>, [], <<"Hello world">>), ?OK), | |
1015 | http_delete("/queues/%2f/test?if-empty=true", ?BAD_REQUEST), | |
1016 | http_delete("/exchanges/%2f/test?if-unused=true", ?BAD_REQUEST), | |
1017 | http_delete("/queues/%2f/test/contents", ?NO_CONTENT), | |
1018 | ||
1019 | {Conn, _ConnPath, _ChPath, _ConnChPath} = get_conn("guest", "guest"), | |
1020 | {ok, Ch} = amqp_connection:open_channel(Conn), | |
1021 | amqp_channel:subscribe(Ch, #'basic.consume'{queue = <<"test">> }, self()), | |
1022 | http_delete("/queues/%2f/test?if-unused=true", ?BAD_REQUEST), | |
1023 | amqp_connection:close(Conn), | |
1024 | ||
1025 | http_delete("/queues/%2f/test?if-empty=true", ?NO_CONTENT), | |
1026 | http_delete("/exchanges/%2f/test?if-unused=true", ?NO_CONTENT), | |
1027 | passed. | |
1028 | ||
1009 | 1029 | parameters_test() -> |
1010 | 1030 | rabbit_runtime_parameters_test:register(), |
1011 | 1031 |
0 | #!/bin/sh -e | |
1 | TWO=$(python2 -c 'import sys;print(sys.version_info[0])') | |
2 | THREE=$(python3 -c 'import sys;print(sys.version_info[0])') | |
3 | ||
4 | if [ "$TWO" != 2 ] ; then | 
5 | echo Python 2 not found! | |
6 | exit 1 | |
7 | fi | |
8 | ||
9 | if [ "$THREE" != 3 ] ; then | 
10 | echo Python 3 not found! | |
11 | exit 1 | |
12 | fi | |
13 | ||
14 | echo | |
15 | echo ---------------------- | |
16 | echo Testing under Python 2 | |
17 | echo ---------------------- | |
18 | ||
19 | python2 $(dirname $0)/rabbitmqadmin-test.py | |
20 | ||
21 | echo | |
22 | echo ---------------------- | |
23 | echo Testing under Python 3 | |
24 | echo ---------------------- | |
25 | ||
26 | python3 $(dirname $0)/rabbitmqadmin-test.py |
155 | 155 | self.run_success(['declare', 'queue', 'name=test']) |
156 | 156 | self.run_success(['publish', 'routing_key=test', 'payload=test_1']) |
157 | 157 | self.run_success(['publish', 'routing_key=test', 'payload=test_2']) |
158 | self.run_success(['publish', 'routing_key=test'], stdin='test_3') | |
158 | self.run_success(['publish', 'routing_key=test'], stdin=b'test_3') | |
159 | 159 | self.assert_table([exp_msg('test', 2, False, 'test_1')], ['get', 'queue=test', 'requeue=false']) |
160 | 160 | self.assert_table([exp_msg('test', 1, False, 'test_2')], ['get', 'queue=test', 'requeue=true']) |
161 | 161 | self.assert_table([exp_msg('test', 1, True, 'test_2')], ['get', 'queue=test', 'requeue=false']) |
162 | 162 | self.assert_table([exp_msg('test', 0, False, 'test_3')], ['get', 'queue=test', 'requeue=false']) |
163 | self.run_success(['publish', 'routing_key=test'], stdin='test_4') | |
163 | self.run_success(['publish', 'routing_key=test'], stdin=b'test_4') | |
164 | 164 | filename = '/tmp/rabbitmq-test/get.txt' |
165 | 165 | self.run_success(['get', 'queue=test', 'requeue=false', 'payload_file=' + filename]) |
166 | 166 | with open(filename) as f: |
211 | 211 | args.extend(args0) |
212 | 212 | self.assertEqual(expected, [l.split('\t') for l in self.admin(args)[0].splitlines()]) |
213 | 213 | |
214 | def admin(self, args, stdin=None): | |
215 | return run('../../../bin/rabbitmqadmin', args, stdin) | |
214 | def admin(self, args0, stdin=None): | |
215 | args = ['python{0}'.format(sys.version_info[0]), | |
216 | norm('../../../bin/rabbitmqadmin')] | |
217 | args.extend(args0) | |
218 | return run(args, stdin) | |
216 | 219 | |
217 | 220 | def ctl(self, args0, stdin=None): |
218 | args = ['-n', 'rabbit-test'] | |
219 | args.extend(args0) | |
220 | (stdout, ret) = run('../../../../rabbitmq-server/scripts/rabbitmqctl', args, stdin) | |
221 | args = [norm('../../../../rabbitmq-server/scripts/rabbitmqctl'), '-n', 'rabbit-test'] | |
222 | args.extend(args0) | |
223 | (stdout, ret) = run(args, stdin) | |
221 | 224 | if ret != 0: |
222 | 225 | self.fail(stdout) |
223 | 226 | |
224 | def run(cmd, args, stdin): | |
225 | path = os.path.normpath(os.path.join(os.getcwd(), sys.argv[0], cmd)) | |
226 | cmdline = [path] | |
227 | cmdline.extend(args) | |
228 | proc = subprocess.Popen(cmdline, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) | |
227 | def norm(cmd): | |
228 | return os.path.normpath(os.path.join(os.getcwd(), sys.argv[0], cmd)) | |
229 | ||
230 | def run(args, stdin): | |
231 | proc = subprocess.Popen(args, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) | |
229 | 232 | (stdout, stderr) = proc.communicate(stdin) |
230 | 233 | returncode = proc.returncode |
231 | return (stdout + stderr, returncode) | |
234 | res = stdout.decode('utf-8') + stderr.decode('utf-8') | |
235 | return (res, returncode) | |
232 | 236 | |
233 | 237 | def l(thing): |
234 | 238 | return ['list', thing, 'name'] |
238 | 242 | return [key, '', str(count), payload, str(len(payload)), 'string', '', str(redelivered)] |
239 | 243 | |
240 | 244 | if __name__ == '__main__': |
241 | print "\nrabbitmqadmin tests\n===================\n" | |
245 | print("\nrabbitmqadmin tests\n===================\n") | |
242 | 246 | suite = unittest.TestLoader().loadTestsFromTestCase(TestRabbitMQAdmin) |
243 | 247 | results = unittest.TextTestRunner(verbosity=2).run(suite) |
244 | 248 | if not results.wasSuccessful(): |
0 | ## Overview | |
1 | ||
2 | RabbitMQ projects use pull requests to discuss, collaborate on and accept code contributions. | |
3 | Pull requests are the primary place for discussing code changes. | 
4 | ||
5 | ## How to Contribute | |
6 | ||
7 | The process is fairly standard: | |
8 | ||
9 | * Fork the repository or repositories you plan on contributing to | |
10 | * Clone [RabbitMQ umbrella repository](https://github.com/rabbitmq/rabbitmq-public-umbrella) | |
11 | * `cd umbrella`, `make co` | |
12 | * Create a branch with a descriptive name in the relevant repositories | |
13 | * Make your changes, run tests, commit with a [descriptive message](http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html), push to your fork | |
14 | * Submit pull requests with an explanation of what has been changed and **why** | 
15 | * Submit a filled out and signed [Contributor Agreement](https://github.com/rabbitmq/ca#how-to-submit) if needed (see below) | |
16 | * Be patient. We will get to your pull request eventually | |
17 | ||
18 | If what you are going to work on is a substantial change, please first ask the core team | 
19 | for their opinion on the [RabbitMQ mailing list](https://groups.google.com/forum/#!forum/rabbitmq-users). | 
20 | ||
21 | ||
22 | ## (Brief) Code of Conduct | |
23 | ||
24 | In one line: don't be a dick. | |
25 | ||
26 | Be respectful to the maintainers and other contributors. Open source | |
27 | contributors put long hours into developing projects and doing user | |
28 | support. Those projects and user support are available for free. We | |
29 | believe this deserves some respect. | |
30 | ||
31 | Be respectful to people of all races, genders, religious beliefs and | |
32 | political views. Regardless of how brilliant a pull request is | |
33 | technically, we will not tolerate disrespectful or aggressive | |
34 | behaviour. | |
35 | ||
36 | Contributors who violate this straightforward Code of Conduct will see | |
37 | their pull requests closed and locked. | |
38 | ||
39 | ||
40 | ## Contributor Agreement | |
41 | ||
42 | If you want to contribute a non-trivial change, please submit a signed copy of our | |
43 | [Contributor Agreement](https://github.com/rabbitmq/ca#how-to-submit) around the time | |
44 | you submit your pull request. This will make it much easier (in some cases, possible) | |
45 | for the RabbitMQ team at Pivotal to merge your contribution. | |
46 | ||
47 | ||
48 | ## Where to Ask Questions | |
49 | ||
50 | If something isn't clear, feel free to ask on our [mailing list](https://groups.google.com/forum/#!forum/rabbitmq-users). |
22 | 22 | code_change/3]). |
23 | 23 | |
24 | 24 | -export([list_registry_plugins/1]). |
25 | ||
26 | -import(rabbit_misc, [pget/2]). | |
25 | 27 | |
26 | 28 | -include_lib("rabbit_common/include/rabbit.hrl"). |
27 | 29 | |
33 | 35 | uptime, run_queue, processors, exchange_types, |
34 | 36 | auth_mechanisms, applications, contexts, |
35 | 37 | log_file, sasl_log_file, db_dir, config_files, net_ticktime, |
36 | enabled_plugins]). | |
37 | ||
38 | %%-------------------------------------------------------------------- | |
39 | ||
40 | -record(state, {fd_total}). | |
38 | enabled_plugins, persister_stats]). | |
39 | ||
40 | %%-------------------------------------------------------------------- | |
41 | ||
42 | -record(state, {fd_total, fhc_stats, fhc_stats_derived, node_owners}). | |
41 | 43 | |
42 | 44 | %%-------------------------------------------------------------------- |
43 | 45 | |
182 | 184 | i(db_dir, _State) -> list_to_binary(rabbit_mnesia:dir()); |
183 | 185 | i(config_files, _State) -> [list_to_binary(F) || F <- rabbit:config_files()]; |
184 | 186 | i(net_ticktime, _State) -> net_kernel:get_net_ticktime(); |
187 | i(persister_stats, State) -> persister_stats(State); | |
185 | 188 | i(enabled_plugins, _State) -> {ok, Dir} = application:get_env( |
186 | 189 | rabbit, enabled_plugins_file), |
187 | 190 | rabbit_plugins:read_enabled(Dir); |
222 | 225 | set_plugin_name(Name, Module) -> |
223 | 226 | [{name, list_to_binary(atom_to_list(Name))} | |
224 | 227 | proplists:delete(name, Module:description())]. |
228 | ||
229 | persister_stats(#state{fhc_stats = FHC, | |
230 | fhc_stats_derived = FHCD}) -> | |
231 | [{flatten_key(K), V} || {{_Op, Type} = K, V} <- FHC, | |
232 | Type =/= time] ++ | |
233 | [{flatten_key(K), V} || {K, V} <- FHCD]. | |
234 | ||
235 | flatten_key({A, B}) -> | |
236 | list_to_atom(atom_to_list(A) ++ "_" ++ atom_to_list(B)). | |
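The `persister_stats/1` and `flatten_key/1` helpers above drop raw cumulative `time` entries (kept only to derive averages) and collapse each `{Op, Type}` tuple key into a single atom such as `read_count`. A rough Python analogue, with dicts standing in for the Erlang proplists (the data shapes here are illustrative):

```python
def flatten_key(key):
    """Join an (op, type) pair into one snake_case name,
    mirroring flatten_key/1 in the hunk above."""
    op, typ = key
    return "{0}_{1}".format(op, typ)

def persister_stats(fhc, derived):
    """Filter out raw cumulative timings, then flatten the
    remaining counter keys plus the derived averages."""
    stats = {flatten_key(k): v for k, v in fhc.items() if k[1] != "time"}
    stats.update({flatten_key(k): v for k, v in derived.items()})
    return stats

fhc = {("read", "count"): 10, ("read", "time"): 12000}
derived = {("read", "avg_time"): 1.2}
print(persister_stats(fhc, derived))  # {'read_count': 10, 'read_avg_time': 1.2}
```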
237 | ||
238 | cluster_links() -> | |
239 | {ok, Items} = net_kernel:nodes_info(), | |
240 | [Link || Item <- Items, | |
241 | Link <- [format_nodes_info(Item)], Link =/= undefined]. | |
242 | ||
243 | format_nodes_info({Node, Info}) -> | |
244 | Owner = proplists:get_value(owner, Info), | |
245 | case catch process_info(Owner, links) of | |
246 | {links, Links} -> | |
247 | case [Link || Link <- Links, is_port(Link)] of | |
248 | [Port] -> | |
249 | {Node, Owner, format_nodes_info1(Port)}; | |
250 | _ -> | |
251 | undefined | |
252 | end; | |
253 | _ -> | |
254 | undefined | |
255 | end. | |
256 | ||
257 | format_nodes_info1(Port) -> | |
258 | case {rabbit_net:socket_ends(Port, inbound), | |
259 | rabbit_net:getstat(Port, [recv_oct, send_oct])} of | |
260 | {{ok, {PeerAddr, PeerPort, SockAddr, SockPort}}, {ok, Stats}} -> | |
261 | [{peer_addr, maybe_ntoab(PeerAddr)}, | |
262 | {peer_port, PeerPort}, | |
263 | {sock_addr, maybe_ntoab(SockAddr)}, | |
264 | {sock_port, SockPort}, | |
265 | {recv_bytes, pget(recv_oct, Stats)}, | |
266 | {send_bytes, pget(send_oct, Stats)}]; | |
267 | _ -> | |
268 | [] | |
269 | end. | |
270 | ||
271 | maybe_ntoab(A) when is_tuple(A) -> list_to_binary(rabbit_misc:ntoab(A)); | |
272 | maybe_ntoab(H) -> H. | |
225 | 273 | |
226 | 274 | %%-------------------------------------------------------------------- |
227 | 275 | |
255 | 303 | |
256 | 304 | format_mochiweb_option(ssl_opts, V) -> |
257 | 305 | format_mochiweb_option_list(V); |
258 | format_mochiweb_option(ciphers, V) -> | |
259 | list_to_binary(rabbit_misc:format("~w", [V])); | |
260 | format_mochiweb_option(_K, V) when is_list(V) -> | |
261 | list_to_binary(V); | |
262 | 306 | format_mochiweb_option(_K, V) -> |
263 | V. | |
307 | case io_lib:printable_list(V) of | |
308 | true -> list_to_binary(V); | |
309 | false -> list_to_binary(rabbit_misc:format("~w", [V])) | |
310 | end. | |
264 | 311 | |
265 | 312 | %%-------------------------------------------------------------------- |
266 | 313 | |
267 | 314 | init([]) -> |
268 | State = #state{fd_total = file_handle_cache:ulimit()}, | |
315 | State = #state{fd_total = file_handle_cache:ulimit(), | |
316 | fhc_stats = file_handle_cache_stats:get(), | |
317 | node_owners = sets:new()}, | |
269 | 318 | %% If we emit an update straight away we will do so just before |
270 | 319 | %% the mgmt db starts up - and then have to wait ?REFRESH_RATIO |
271 | 320 | %% until we send another. So let's have a shorter wait in the hope |
293 | 342 | |
294 | 343 | %%-------------------------------------------------------------------- |
295 | 344 | |
296 | emit_update(State) -> | |
345 | emit_update(State0) -> | |
346 | State = update_state(State0), | |
297 | 347 | rabbit_event:notify(node_stats, infos(?KEYS, State)), |
298 | 348 | erlang:send_after(?REFRESH_RATIO, self(), emit_update), |
299 | State. | |
349 | emit_node_node_stats(State). | |
350 | ||
351 | emit_node_node_stats(State = #state{node_owners = Owners}) -> | |
352 | Links = cluster_links(), | |
353 | NewOwners = sets:from_list([{Node, Owner} || {Node, Owner, _} <- Links]), | |
354 | Dead = sets:to_list(sets:subtract(Owners, NewOwners)), | |
355 | [rabbit_event:notify( | |
356 | node_node_deleted, [{route, Route}]) || {Node, _Owner} <- Dead, | |
357 | Route <- [{node(), Node}, | |
358 | {Node, node()}]], | |
359 | [rabbit_event:notify( | |
360 | node_node_stats, [{route, {node(), Node}} | Stats]) || | |
361 | {Node, _Owner, Stats} <- Links], | |
362 | State#state{node_owners = NewOwners}. | |
363 | ||
364 | update_state(State0 = #state{fhc_stats = FHC0}) -> | |
365 | FHC = file_handle_cache_stats:get(), | |
366 | Avgs = [{{Op, avg_time}, avg_op_time(Op, V, FHC, FHC0)} | |
367 | || {{Op, time}, V} <- FHC], | |
368 | State0#state{fhc_stats = FHC, | |
369 | fhc_stats_derived = Avgs}. | |
370 | ||
371 | -define(MICRO_TO_MILLI, 1000). | |
372 | ||
373 | avg_op_time(Op, Time, FHC, FHC0) -> | |
374 | Time0 = pget({Op, time}, FHC0), | |
375 | TimeDelta = Time - Time0, | |
376 | OpDelta = pget({Op, count}, FHC) - pget({Op, count}, FHC0), | |
377 | case OpDelta of | |
378 | 0 -> 0; | |
379 | _ -> (TimeDelta / OpDelta) / ?MICRO_TO_MILLI | |
380 | end. |
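`update_state/1` and `avg_op_time/4` above derive per-operation average latencies by diffing the cumulative time and count between two snapshots, guarding against a zero count delta, and converting microseconds to milliseconds. A hedged Python sketch of the same delta arithmetic; the snapshot dicts keyed by `(op, field)` tuples are an assumption for illustration:

```python
MICRO_TO_MILLI = 1000

def avg_op_time(op, snap, prev):
    """Average time per operation (ms) between two cumulative
    snapshots: microseconds of elapsed time divided by the number
    of operations performed, with a zero-delta guard."""
    time_delta = snap[(op, "time")] - prev[(op, "time")]
    op_delta = snap[(op, "count")] - prev[(op, "count")]
    if op_delta == 0:
        return 0
    return (time_delta / op_delta) / MICRO_TO_MILLI

prev = {("read", "time"): 0, ("read", "count"): 0}
snap = {("read", "time"): 50000, ("read", "count"): 25}
print(avg_op_time("read", snap, prev))  # 50000us / 25 ops -> 2.0 ms
```

Diffing cumulative counters this way keeps the emitter stateless apart from the previous snapshot, which is exactly what the `fhc_stats` field in the record above stores.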
0 | 0 | RELEASABLE:=true |
1 | DEPS:=rabbitmq-erlang-client | |
1 | DEPS:=rabbitmq-server rabbitmq-erlang-client rabbitmq-test | |
2 | WITH_BROKER_TEST_SCRIPTS:=$(PACKAGE_DIR)/test/test.sh | |
3 | WITH_BROKER_TEST_CONFIG:=$(PACKAGE_DIR)/test/ebin/test | |
4 | WITH_BROKER_SETUP_SCRIPTS:=$(PACKAGE_DIR)/test/setup-rabbit-test.sh | |
2 | 5 | |
3 | RABBITMQ_TEST_PATH=$(PACKAGE_DIR)/../../rabbitmq-test | |
4 | WITH_BROKER_TEST_SCRIPTS:=$(PACKAGE_DIR)/test/test.sh | |
6 | define package_rules | |
7 | ||
8 | $(PACKAGE_DIR)+pre-test:: | |
9 | rm -rf $(PACKAGE_DIR)/test/certs | |
10 | mkdir $(PACKAGE_DIR)/test/certs | |
11 | mkdir -p $(PACKAGE_DIR)/test/ebin | |
12 | sed -e "s|%%CERTS_DIR%%|$(abspath $(PACKAGE_DIR))/test/certs|g" < $(PACKAGE_DIR)/test/src/test.config > $(PACKAGE_DIR)/test/ebin/test.config | |
13 | make -C $(PACKAGE_DIR)/../rabbitmq-test/certs all PASSWORD=bunnychow DIR=$(abspath $(PACKAGE_DIR))/test/certs | |
14 | ||
15 | $(PACKAGE_DIR)+clean:: | |
16 | rm -rf $(PACKAGE_DIR)/test/certs | |
17 | ||
18 | endef |
15 | 15 | |
16 | 16 | -module(rabbit_mqtt_processor). |
17 | 17 | |
18 | -export([info/2, initial_state/1, | |
18 | -export([info/2, initial_state/2, | |
19 | 19 | process_frame/2, amqp_pub/2, amqp_callback/2, send_will/1, |
20 | 20 | close_connection/1]). |
21 | 21 | |
23 | 23 | -include("rabbit_mqtt_frame.hrl"). |
24 | 24 | -include("rabbit_mqtt.hrl"). |
25 | 25 | |
26 | -define(APP, rabbitmq_mqtt). | |
26 | 27 | -define(FRAME_TYPE(Frame, Type), |
27 | 28 | Frame = #mqtt_frame{ fixed = #mqtt_frame_fixed{ type = Type }}). |
28 | 29 | |
29 | initial_state(Socket) -> | |
30 | initial_state(Socket,SSLLoginName) -> | |
30 | 31 | #proc_state{ unacked_pubs = gb_trees:empty(), |
31 | 32 | awaiting_ack = gb_trees:empty(), |
32 | 33 | message_id = 1, |
34 | 35 | consumer_tags = {undefined, undefined}, |
35 | 36 | channels = {undefined, undefined}, |
36 | 37 | exchange = rabbit_mqtt_util:env(exchange), |
37 | socket = Socket }. | |
38 | socket = Socket, | |
39 | ssl_login_name = SSLLoginName }. | |
38 | 40 | |
39 | 41 | info(client_id, #proc_state{ client_id = ClientId }) -> ClientId. |
40 | 42 | |
53 | 55 | proto_ver = ProtoVersion, |
54 | 56 | clean_sess = CleanSess, |
55 | 57 | client_id = ClientId0, |
56 | keep_alive = Keepalive} = Var}, PState) -> | |
58 | keep_alive = Keepalive} = Var}, | |
59 | PState = #proc_state{ ssl_login_name = SSLLoginName }) -> | |
57 | 60 | ClientId = case ClientId0 of |
58 | 61 | [] -> rabbit_mqtt_util:gen_client_id(); |
59 | 62 | [_|_] -> ClientId0 |
66 | 69 | {_, true} -> |
67 | 70 | {?CONNACK_INVALID_ID, PState}; |
68 | 71 | _ -> |
69 | case creds(Username, Password) of | |
72 | case creds(Username, Password, SSLLoginName) of | |
70 | 73 | nocreds -> |
71 | 74 | rabbit_log:error("MQTT login failed - no credentials~n"), |
72 | 75 | {?CONNACK_CREDENTIALS, PState}; |
75 | 78 | {?CONNACK_ACCEPT, Conn} -> |
76 | 79 | link(Conn), |
77 | 80 | {ok, Ch} = amqp_connection:open_channel(Conn), |
81 | link(Ch), | |
78 | 82 | amqp_channel:enable_delivery_flow_control(Ch), |
79 | 83 | ok = rabbit_mqtt_collector:register( |
80 | 84 | ClientId, self()), |
363 | 367 | [UserBin] -> {rabbit_mqtt_util:env(vhost), UserBin} |
364 | 368 | end. |
365 | 369 | |
366 | creds(User, Pass) -> | |
367 | DefaultUser = rabbit_mqtt_util:env(default_user), | |
368 | DefaultPass = rabbit_mqtt_util:env(default_pass), | |
369 | Anon = rabbit_mqtt_util:env(allow_anonymous), | |
370 | U = case {User =/= undefined, is_binary(DefaultUser), Anon =:= true} of | |
371 | {true, _, _ } -> list_to_binary(User); | |
372 | {false, true, true} -> DefaultUser; | |
373 | _ -> nocreds | |
370 | creds(User, Pass, SSLLoginName) -> | |
371 | DefaultUser = rabbit_mqtt_util:env(default_user), | |
372 | DefaultPass = rabbit_mqtt_util:env(default_pass), | |
373 | {ok, Anon} = application:get_env(?APP, allow_anonymous), | |
374 | {ok, TLSAuth} = application:get_env(?APP, ssl_cert_login), | |
375 | U = case {User =/= undefined, is_binary(DefaultUser), | |
376 | Anon =:= true, (TLSAuth andalso SSLLoginName =/= none)} of | |
377 | {true, _, _, _} -> list_to_binary(User); | |
378 | {false, _, _, true} -> SSLLoginName; | |
379 | {false, true, true, false} -> DefaultUser; | |
380 | _ -> nocreds | |
374 | 381 | end, |
375 | 382 | case U of |
376 | 383 | nocreds -> |
377 | 384 | nocreds; |
378 | 385 | _ -> |
379 | case {Pass =/= undefined, is_binary(DefaultPass), Anon =:= true} of | |
380 | {true, _, _ } -> {U, list_to_binary(Pass)}; | |
381 | {false, true, true} -> {U, DefaultPass}; | |
382 | _ -> {U, none} | |
386 | case {Pass =/= undefined, is_binary(DefaultPass), Anon =:= true, SSLLoginName == U} of | |
387 | {true, _, _, _} -> {U, list_to_binary(Pass)}; | |
388 | {false, _, _, _} -> {U, none}; | |
389 | {false, true, true, _} -> {U, DefaultPass}; | |
390 | _ -> {U, none} | |
383 | 391 | end |
384 | 392 | end. |
385 | 393 | |
512 | 520 | catch amqp_connection:close(Connection), |
513 | 521 | PState #proc_state{ channels = {undefined, undefined}, |
514 | 522 | connection = undefined }. |
515 |
51 | 51 | {ok, Sock} -> |
52 | 52 | rabbit_alarm:register( |
53 | 53 | self(), {?MODULE, conserve_resources, []}), |
54 | ProcessorState = rabbit_mqtt_processor:initial_state(Sock), | |
54 | ProcessorState = rabbit_mqtt_processor:initial_state(Sock,ssl_login_name(Sock)), | |
55 | 55 | {noreply, |
56 | 56 | control_throttle( |
57 | 57 | #state{socket = Sock, |
137 | 137 | KeepaliveSup, Sock, 0, SendFun, Keepalive, ReceiveFun), |
138 | 138 | {noreply, State #state { keepalive = Heartbeater }}; |
139 | 139 | |
140 | handle_info(keepalive_timeout, State = #state { conn_name = ConnStr }) -> | |
140 | handle_info(keepalive_timeout, State = #state {conn_name = ConnStr, | |
141 | proc_state = PState}) -> | |
141 | 142 | log(error, "closing MQTT connection ~p (keepalive timeout)~n", [ConnStr]), |
142 | {stop, {shutdown, keepalive_timeout}, State}; | |
143 | send_will_and_terminate(PState, {shutdown, keepalive_timeout}, State); | |
143 | 144 | |
144 | 145 | handle_info(Msg, State) -> |
145 | 146 | {stop, {mqtt_unexpected_msg, Msg}, State}. |
189 | 190 | |
190 | 191 | code_change(_OldVsn, State, _Extra) -> |
191 | 192 | {ok, State}. |
193 | ||
194 | ssl_login_name(Sock) -> | |
195 | case rabbit_net:peercert(Sock) of | |
196 | {ok, C} -> case rabbit_ssl:peer_cert_auth_name(C) of | |
197 | unsafe -> none; | |
198 | not_found -> none; | |
199 | Name -> Name | |
200 | end; | |
201 | {error, no_peercert} -> none; | |
202 | nossl -> none | |
203 | end. | |
192 | 204 | |
193 | 205 | %%---------------------------------------------------------------------------- |
194 | 206 | |
244 | 256 | log(Level, Fmt, Args) -> rabbit_log:log(connection, Level, Fmt, Args). |
245 | 257 | |
246 | 258 | send_will_and_terminate(PState, State) -> |
259 | send_will_and_terminate(PState, {shutdown, conn_closed}, State). | |
260 | ||
261 | send_will_and_terminate(PState, Reason, State) -> | |
247 | 262 | rabbit_mqtt_processor:send_will(PState), |
248 | 263 | % todo: flush channel after publish |
249 | {stop, {shutdown, conn_closed}, State}. | |
264 | {stop, Reason, State}. | |
250 | 265 | |
251 | 266 | network_error(closed, |
252 | 267 | State = #state{ conn_name = ConnStr, |
5 | 5 | {mod, {rabbit_mqtt, []}}, |
6 | 6 | {env, [{default_user, <<"guest">>}, |
7 | 7 | {default_pass, <<"guest">>}, |
8 | {ssl_cert_login,false}, | |
8 | 9 | {allow_anonymous, true}, |
9 | 10 | {vhost, <<"/">>}, |
10 | 11 | {exchange, <<"amq.topic">>}, |
10 | 10 | JAVA_AMQP_DIR=../../rabbitmq-java-client/ |
11 | 11 | JAVA_AMQP_CLASSES=$(JAVA_AMQP_DIR)build/classes/ |
12 | 12 | |
13 | TEST_SRCS:=$(shell find $(TEST_SRC) -name '*.java') | |
14 | 13 | ALL_CLASSES:=$(foreach f,$(shell find src -name '*.class'),'$(f)') |
15 | TEST_CLASSES:=$(TEST_SRCS:.java=.class) | |
16 | 14 | CP:=$(PAHO_JAR):$(JUNIT_JAR):$(TEST_SRC):$(JAVA_AMQP_CLASSES) |
15 | ||
16 | HOSTNAME:=$(shell hostname) | |
17 | 17 | |
18 | 18 | define class_from_path |
19 | 19 | $(subst .class,,$(subst src.,,$(subst /,.,$(1)))) |
20 | 20 | endef |
21 | 21 | |
22 | 22 | .PHONY: test |
23 | test: $(TEST_CLASSES) build_java_amqp | |
24 | $(foreach test,$(TEST_CLASSES),CLASSPATH=$(CP) java junit.textui.TestRunner -text $(call class_from_path,$(test))) | |
23 | test: build_java_amqp | |
24 | ant test -Dhostname=$(HOSTNAME) | |
25 | 25 | |
26 | 26 | clean: |
27 | rm -rf $(PAHO_JAR) $(ALL_CLASSES) | |
27 | ant clean | |
28 | rm -rf test_client | |
29 | ||
28 | 30 | |
29 | 31 | distclean: clean |
30 | 32 | rm -rf $(CHECKOUT_DIR) |
33 | 35 | git clone $(UPSTREAM_GIT) $@ |
34 | 36 | (cd $@ && git checkout $(REVISION)) || rm -rf $@ |
35 | 37 | |
36 | $(PAHO_JAR): $(CHECKOUT_DIR) | |
37 | ant -buildfile $</org.eclipse.paho.client.mqttv3/build.xml \ | |
38 | -Dship.folder=. -Dmqttv3-client-jar=$(PAHO_JAR_NAME) full | |
39 | ||
40 | %.class: %.java $(PAHO_JAR) $(JUNIT_JAR) | |
41 | $(JC) -cp $(CP) $< | |
42 | 38 | |
43 | 39 | .PHONY: build_java_amqp |
44 | build_java_amqp: | |
45 | make -C $(JAVA_AMQP_DIR) | |
40 | build_java_amqp: $(CHECKOUT_DIR) | |
41 | make -C $(JAVA_AMQP_DIR) jar | |
42 |
0 | build.out=build | |
1 | test.resources=${build.out}/test/resources | |
2 | javac.debug=true | |
3 | test.javac.out=${build.out}/test/classes | |
5 | test.src.home=src | |
6 | certs.dir=certs | |
7 | certs.password=test | |
8 | server.keystore=${test.resources}/server.jks | |
9 | server.cert=${certs.dir}/server/cert.pem | |
10 | ca.cert=${certs.dir}/testca/cacert.pem | |
11 | server.keystore.phrase=bunnyhop | |
12 | ||
13 | client.keystore=${test.resources}/client.jks | |
14 | client.keystore.phrase=bunnychow | |
15 | client.srckeystore=${certs.dir}/client/keycert.p12 | |
16 | client.srckeystore.password=bunnychow |
0 | <?xml version="1.0"?> | |
1 | <project name="MQTT Java Test client" default="test"> | |
2 | ||
3 | <property name="output.folder" value="./target/work" /> | |
4 | <property name="ship.folder" value="./" /> | |
5 | ||
6 | <property file="build.properties"/> | |
7 | ||
8 | <property name="java-amqp-client-path" location="../../rabbitmq-java-client" /> | |
9 | ||
10 | <path id="test.javac.classpath"> | |
11 | <!-- cf dist target, infra --> | |
12 | <fileset dir="lib"> | |
13 | <include name="**/*.jar"/> | |
14 | </fileset> | |
15 | <fileset dir="test_client"> | |
16 | <include name="**/*.jar"/> | |
17 | </fileset> | |
18 | <fileset dir="${java-amqp-client-path}"> | |
19 | <include name="**/rabbitmq-client.jar" /> | |
20 | </fileset> | |
21 | </path> | |
22 | ||
23 | <target name="clean-paho" description="Clean compiled Eclipse Paho Test Client jars" > | |
24 | <ant antfile="test_client/org.eclipse.paho.client.mqttv3/build.xml" useNativeBasedir="true" target="clean"/> | |
25 | </target> | |
26 | ||
27 | <target name="clean" > | |
28 | <delete dir="${build.out}"/> | |
29 | </target> | |
30 | ||
31 | <target name="build-paho" depends="clean-paho" description="Build the Eclipse Paho Test Client"> | |
32 | <ant antfile="test_client/org.eclipse.paho.client.mqttv3/build.xml" useNativeBasedir="true" /> | |
33 | </target> | |
34 | ||
35 | <target name="detect-ssl"> | |
36 | <available property="SSL_AVAILABLE" file="${certs.dir}/client"/> | |
37 | <property name="CLIENT_KEYSTORE_PHRASE" value="bunnies"/> | |
38 | <property name="SSL_P12_PASSWORD" value="${certs.password}"/> | |
39 | </target> | |
40 | ||
41 | <target name="detect-tmpdir"> | |
42 | <property environment="env"/> | |
43 | <condition property="TMPDIR" value="${env.TMPDIR}" else="/tmp"> | |
44 | <available file="${env.TMPDIR}" type="dir"/> | |
45 | </condition> | |
46 | </target> | |
47 | ||
48 | <target name="make-server-keystore" if="SSL_AVAILABLE" depends="detect-ssl, detect-tmpdir"> | |
49 | <mkdir dir="${test.resources}"/> | |
50 | <exec executable="keytool" failonerror="true" osfamily="unix"> | |
51 | <arg line="-import"/> | |
52 | <arg value="-alias"/> | |
53 | <arg value="server1"/> | |
54 | <arg value="-file"/> | |
55 | <arg value="${server.cert}"/> | |
56 | <arg value="-keystore"/> | |
57 | <arg value="${server.keystore}"/> | |
58 | <arg value="-noprompt"/> | |
59 | <arg value="-storepass"/> | |
60 | <arg value="${server.keystore.phrase}"/> | |
61 | </exec> | |
62 | <exec executable="keytool" failonerror="true" osfamily="unix"> | |
63 | <arg line="-import"/> | |
64 | <arg value="-alias"/> | |
65 | <arg value="testca"/> | |
66 | <arg value="-trustcacerts"/> | |
67 | <arg value="-file"/> | |
68 | <arg value="${ca.cert}"/> | |
69 | <arg value="-keystore"/> | |
70 | <arg value="${server.keystore}"/> | |
71 | <arg value="-noprompt"/> | |
72 | <arg value="-storepass"/> | |
73 | <arg value="${server.keystore.phrase}"/> | |
74 | </exec> | |
75 | </target> | |
76 | ||
77 | <target name="make-client-keystore" if="SSL_AVAILABLE" depends="detect-ssl, detect-tmpdir"> | |
78 | <mkdir dir="${test.resources}"/> | |
79 | <exec executable="keytool" failonerror="true" osfamily="unix"> | |
80 | <arg line="-importkeystore"/> | |
81 | <arg line="-srckeystore" /> | |
82 | <arg line="${client.srckeystore}" /> | |
83 | <arg value="-srcstoretype"/> | |
84 | <arg value="PKCS12"/> | |
85 | <arg value="-srcstorepass"/> | |
86 | <arg value="${client.srckeystore.password}"/> | |
87 | <arg value="-destkeystore"/> | |
88 | <arg value="${client.keystore}"/> | |
89 | <arg value="-deststoretype"/> | |
90 | <arg value="JKS"/> | |
91 | <arg value="-noprompt"/> | |
92 | <arg value="-storepass"/> | |
93 | <arg value="${client.keystore.phrase}"/> | |
94 | </exec> | |
95 | </target> | |
96 | ||
97 | <target name="test-build" depends="clean,build-paho"> | |
98 | <mkdir dir="${test.javac.out}"/> | |
99 | ||
100 | <javac srcdir="./src" | |
101 | destdir="${test.javac.out}" | |
102 | debug="on" | |
103 | includeantruntime="false" > | |
104 | <classpath> | |
105 | <path refid="test.javac.classpath"/> | |
106 | </classpath> | |
107 | </javac> | |
108 | </target> | |
109 | ||
110 | <target name="test-ssl" depends="test-build, make-server-keystore, make-client-keystore" if="SSL_AVAILABLE"> | |
111 | <junit printSummary="withOutAndErr" | |
112 | haltOnFailure="true" | |
113 | failureproperty="test.failure" | |
114 | fork="yes"> | |
115 | <classpath> | |
116 | <path refid="test.javac.classpath"/> | |
117 | <pathelement path="${test.javac.out}"/> | |
118 | <pathelement path="${test.resources}"/> | |
119 | </classpath> | |
120 | <jvmarg value="-Dhostname=${hostname}"/> | |
121 | <jvmarg value="-Dserver.keystore.passwd=${server.keystore.phrase}"/> | |
122 | <jvmarg value="-Dclient.keystore.passwd=${client.keystore.phrase}"/> | |
123 | <formatter type="plain"/> | |
124 | <formatter type="xml"/> | |
125 | <test todir="${build.out}" name="com.rabbitmq.mqtt.test.tls.MqttSSLTest"/> | |
126 | </junit> | |
127 | </target> | |
128 | ||
129 | <target name="test-server" depends="test-build"> | |
130 | <junit printSummary="withOutAndErr" | |
131 | haltOnFailure="true" | |
132 | failureproperty="test.failure" | |
133 | fork="yes"> | |
134 | <classpath> | |
135 | <path refid="test.javac.classpath"/> | |
136 | <pathelement path="${test.javac.out}"/> | |
137 | </classpath> | |
138 | ||
139 | <formatter type="plain"/> | |
140 | <formatter type="xml"/> | |
141 | <test todir="${build.out}" name="com.rabbitmq.mqtt.test.MqttTest"/> | |
142 | </junit> | |
143 | </target> | |
144 | ||
145 | <target name="test" depends="test-server, test-ssl" description="Run the MQTT Java test suites."> | |
146 | ||
147 | </target> | |
148 | ||
149 | </project> |
Binary diff not shown
0 | #!/bin/sh | |
1 | CTL=$1 | |
2 | USER="O=client,CN=$(hostname)" | |
3 | ||
4 | $CTL add_user "$USER" '' | |
5 | $CTL set_permissions -p / "$USER" ".*" ".*" ".*" |
0 | #!/bin/sh -e | |
1 | sh -e `dirname $0`/rabbit-test.sh "`dirname $0`/../../rabbitmq-server/scripts/rabbitmqctl -n rabbit-test" |
0 | #!/bin/sh | |
1 | CTL=$1 | |
2 | USER="O=client,CN=$(hostname)" | |
3 | ||
4 | # Test direct connections | |
5 | $CTL add_user "$USER" '' | |
6 | $CTL set_permissions -p / "$USER" ".*" ".*" ".*" |
0 | #!/bin/sh -e | |
1 | sh -e `dirname $0`/rabbit-test.sh "`dirname $0`/../../rabbitmq-server/scripts/rabbitmqctl -n rabbit-test" |
0 | // The contents of this file are subject to the Mozilla Public License | |
1 | // Version 1.1 (the "License"); you may not use this file except in | |
2 | // compliance with the License. You may obtain a copy of the License | |
3 | // at http://www.mozilla.org/MPL/ | |
4 | // | |
5 | // Software distributed under the License is distributed on an "AS IS" | |
6 | // basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See | |
7 | // the License for the specific language governing rights and | |
8 | // limitations under the License. | |
9 | // | |
10 | // The Original Code is RabbitMQ. | |
11 | // | |
12 | // The Initial Developer of the Original Code is GoPivotal, Inc. | |
13 | // Copyright (c) 2007-2014 GoPivotal, Inc. All rights reserved. | |
14 | // | |
15 | ||
16 | package com.rabbitmq.mqtt.test.tls; | |
17 | ||
18 | import junit.framework.Assert; | |
19 | import junit.framework.TestCase; | |
20 | import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken; | |
21 | import org.eclipse.paho.client.mqttv3.MqttCallback; | |
22 | import org.eclipse.paho.client.mqttv3.MqttClient; | |
23 | import org.eclipse.paho.client.mqttv3.MqttConnectOptions; | |
24 | import org.eclipse.paho.client.mqttv3.MqttException; | |
25 | import org.eclipse.paho.client.mqttv3.MqttMessage; | |
26 | ||
27 | import java.io.IOException; | |
28 | import java.util.ArrayList; | |
29 | ||
30 | ||
31 | /** | |
32 | * MQTT v3.1 tests | |
33 | * TODO: synchronise access to variables | |
34 | */ | |
35 | ||
36 | public class MqttSSLTest extends TestCase implements MqttCallback { | |
37 | ||
38 | private final int port = 8883; | |
39 | private final String brokerUrl = "ssl://" + getHost() + ":" + port; | |
40 | private String clientId; | |
41 | private String clientId2; | |
42 | private MqttClient client; | |
43 | private MqttClient client2; | |
44 | private MqttConnectOptions conOpt; | |
45 | private ArrayList<MqttMessage> receivedMessages; | |
46 | ||
47 | private long lastReceipt; | |
48 | private boolean expectConnectionFailure; | |
49 | ||
50 | ||
51 | private static final String getHost() { | |
52 | Object host = System.getProperty("hostname"); | |
53 | assertNotNull(host); | |
54 | return host.toString(); | |
55 | } | |
56 | ||
57 | // override 10s limit | |
58 | private class MyConnOpts extends MqttConnectOptions { | |
59 | private int keepAliveInterval = 60; | |
60 | ||
61 | @Override | |
62 | public void setKeepAliveInterval(int keepAliveInterval) { | |
63 | this.keepAliveInterval = keepAliveInterval; | |
64 | } | |
65 | ||
66 | @Override | |
67 | public int getKeepAliveInterval() { | |
68 | return keepAliveInterval; | |
69 | } | |
70 | } | |
71 | ||
72 | ||
73 | @Override | |
74 | public void setUp() throws MqttException, IOException { | |
75 | clientId = getClass().getSimpleName() + ((int) (10000 * Math.random())); | |
76 | clientId2 = clientId + "-2"; | |
77 | client = new MqttClient(brokerUrl, clientId, null); | |
78 | client2 = new MqttClient(brokerUrl, clientId2, null); | |
79 | conOpt = new MyConnOpts(); | |
80 | conOpt.setSocketFactory(MutualAuth.getSSLContextWithoutCert().getSocketFactory()); | |
81 | setConOpts(conOpt); | |
82 | receivedMessages = new ArrayList<MqttMessage>(); | |
83 | expectConnectionFailure = false; | |
84 | } | |
85 | ||
86 | @Override | |
87 | public void tearDown() throws MqttException { | |
88 | // clean any sticky sessions | |
89 | setConOpts(conOpt); | |
90 | client = new MqttClient(brokerUrl, clientId, null); | |
91 | try { | |
92 | client.connect(conOpt); | |
93 | client.disconnect(); | |
94 | } catch (Exception _) { | |
95 | } | |
96 | ||
97 | client2 = new MqttClient(brokerUrl, clientId2, null); | |
98 | try { | |
99 | client2.connect(conOpt); | |
100 | client2.disconnect(); | |
101 | } catch (Exception _) { | |
102 | } | |
103 | } | |
104 | ||
105 | ||
106 | private void setConOpts(MqttConnectOptions conOpts) { | |
107 | // provide authentication if the broker needs it | |
108 | // conOpts.setUserName("guest"); | |
109 | // conOpts.setPassword("guest".toCharArray()); | |
110 | conOpts.setCleanSession(true); | |
111 | conOpts.setKeepAliveInterval(60); | |
112 | } | |
113 | ||
114 | public void testCertLogin() throws MqttException { | |
115 | try { | |
116 | conOpt.setSocketFactory(MutualAuth.getSSLContextWithClientCert().getSocketFactory()); | |
117 | client.connect(conOpt); | |
118 | } catch (Exception e) { | |
119 | e.printStackTrace(); | |
120 | fail("Exception: " + e.getMessage()); | |
121 | } | |
122 | } | |
123 | ||
124 | ||
125 | public void testInvalidUser() throws MqttException { | |
126 | conOpt.setUserName("invalid-user"); | |
127 | try { | |
128 | client.connect(conOpt); | |
129 | fail("Authentication failure expected"); | |
130 | } catch (MqttException ex) { | |
131 | Assert.assertEquals(MqttException.REASON_CODE_FAILED_AUTHENTICATION, ex.getReasonCode()); | |
132 | } catch (Exception e) { | |
133 | e.printStackTrace(); | |
134 | fail("Exception: " + e.getMessage()); | |
135 | } | |
136 | } | |
137 | ||
138 | public void testInvalidPassword() throws MqttException { | |
139 | conOpt.setUserName("invalid-user"); | |
140 | conOpt.setPassword("invalid-password".toCharArray()); | |
141 | try { | |
142 | client.connect(conOpt); | |
143 | fail("Authentication failure expected"); | |
144 | } catch (MqttException ex) { | |
145 | Assert.assertEquals(MqttException.REASON_CODE_FAILED_AUTHENTICATION, ex.getReasonCode()); | |
146 | } catch (Exception e) { | |
147 | e.printStackTrace(); | |
148 | fail("Exception: " + e.getMessage()); | |
149 | } | |
150 | } | |
151 | ||
152 | ||
153 | public void connectionLost(Throwable cause) { | |
154 | if (!expectConnectionFailure) | |
155 | fail("Connection unexpectedly lost"); | |
156 | } | |
157 | ||
158 | public void messageArrived(String topic, MqttMessage message) throws Exception { | |
159 | lastReceipt = System.currentTimeMillis(); | |
160 | receivedMessages.add(message); | |
161 | } | |
162 | ||
163 | public void deliveryComplete(IMqttDeliveryToken token) { | |
164 | } | |
165 | } |
0 | package com.rabbitmq.mqtt.test.tls; | |
1 | ||
2 | import javax.net.ssl.KeyManagerFactory; | |
3 | import javax.net.ssl.SSLContext; | |
4 | import javax.net.ssl.TrustManagerFactory; | |
5 | import java.io.IOException; | |
6 | import java.security.KeyStore; | |
7 | import java.security.KeyStoreException; | |
8 | import java.security.NoSuchAlgorithmException; | |
9 | import java.security.cert.CertificateException; | |
10 | import java.util.Arrays; | |
11 | import java.util.List; | |
12 | ||
13 | public class MutualAuth { | |
14 | ||
15 | private MutualAuth() { | |
16 | ||
17 | } | |
18 | ||
19 | private static String getStringProperty(String propertyName) throws IllegalArgumentException { | |
20 | Object value = System.getProperty(propertyName); | |
21 | if (value == null) throw new IllegalArgumentException("Property: " + propertyName + " not found"); | |
22 | return value.toString(); | |
23 | } | |
24 | ||
25 | private static TrustManagerFactory getServerTrustManagerFactory() throws NoSuchAlgorithmException, CertificateException, IOException, KeyStoreException { | |
26 | char[] trustPhrase = getStringProperty("server.keystore.passwd").toCharArray(); | |
27 | MutualAuth dummy = new MutualAuth(); | |
28 | ||
29 | // Server TrustStore | |
30 | KeyStore tks = KeyStore.getInstance("JKS"); | |
31 | tks.load(dummy.getClass().getResourceAsStream("/server.jks"), trustPhrase); | |
32 | ||
33 | TrustManagerFactory tmf = TrustManagerFactory.getInstance("X509"); | |
34 | tmf.init(tks); | |
35 | ||
36 | return tmf; | |
37 | } | |
38 | ||
39 | public static SSLContext getSSLContextWithClientCert() throws IOException { | |
40 | ||
41 | char[] clientPhrase = getStringProperty("client.keystore.passwd").toCharArray(); | |
42 | ||
43 | MutualAuth dummy = new MutualAuth(); | |
44 | try { | |
45 | SSLContext sslContext = getVanillaSSLContext(); | |
46 | // Client Keystore | |
47 | KeyStore ks = KeyStore.getInstance("JKS"); | |
48 | ks.load(dummy.getClass().getResourceAsStream("/client.jks"), clientPhrase); | |
49 | KeyManagerFactory kmf = KeyManagerFactory.getInstance("SunX509"); | |
50 | kmf.init(ks, clientPhrase); | |
51 | ||
52 | sslContext.init(kmf.getKeyManagers(), getServerTrustManagerFactory().getTrustManagers(), null); | |
53 | return sslContext; | |
54 | } catch (Exception e) { | |
55 | throw new IOException(e); | |
56 | } | |
57 | ||
58 | } | |
59 | ||
60 | private static SSLContext getVanillaSSLContext() throws NoSuchAlgorithmException { | |
61 | SSLContext result = null; | |
62 | List<String> xs = Arrays.asList("TLSv1.2", "TLSv1.1", "TLSv1"); | |
63 | for(String x : xs) { | |
64 | try { | |
65 | return SSLContext.getInstance(x); | |
66 | } catch (NoSuchAlgorithmException nae) { | |
67 | // keep trying | |
68 | } | |
69 | } | |
70 | throw new NoSuchAlgorithmException("Could not obtain an SSLContext for TLS 1.0-1.2"); | |
71 | } | |
72 | ||
73 | public static SSLContext getSSLContextWithoutCert() throws IOException { | |
74 | try { | |
75 | SSLContext sslContext = getVanillaSSLContext(); | |
76 | sslContext.init(null, getServerTrustManagerFactory().getTrustManagers(), null); | |
77 | return sslContext; | |
78 | } catch (Exception e) { | |
79 | throw new IOException(e); | |
80 | } | |
81 | } | |
82 | ||
83 | } |
0 | [{rabbitmq_mqtt, [ | |
1 | {ssl_cert_login, true}, | |
2 | {allow_anonymous, true}, | |
3 | {tcp_listeners, [1883]}, | |
4 | {ssl_listeners, [8883]} | |
5 | ]}, | |
6 | {rabbit, [{ssl_options, [{cacertfile,"%%CERTS_DIR%%/testca/cacert.pem"}, | |
7 | {certfile,"%%CERTS_DIR%%/server/cert.pem"}, | |
8 | {keyfile,"%%CERTS_DIR%%/server/key.pem"}, | |
9 | {verify,verify_peer}, | |
10 | {fail_if_no_peer_cert,false} | |
11 | ]} | |
12 | ]} | |
13 | ]. |
0 | ## Overview | |
1 | ||
2 | RabbitMQ projects use pull requests to discuss, collaborate on and accept code contributions. | |
3 | Pull requests are the primary place for discussing code changes. | |
4 | ||
5 | ## How to Contribute | |
6 | ||
7 | The process is fairly standard: | |
8 | ||
9 | * Fork the repository or repositories you plan on contributing to | |
10 | * Clone [RabbitMQ umbrella repository](https://github.com/rabbitmq/rabbitmq-public-umbrella) | |
11 | * `cd umbrella`, `make co` | |
12 | * Create a branch with a descriptive name in the relevant repositories | |
13 | * Make your changes, run tests, commit with a [descriptive message](http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html), push to your fork | |
14 | * Submit pull requests with an explanation of what has been changed and **why** | |
15 | * Submit a filled out and signed [Contributor Agreement](https://github.com/rabbitmq/ca#how-to-submit) if needed (see below) | |
16 | * Be patient. We will get to your pull request eventually | |
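The branch-and-commit steps in the list above can be sketched as a minimal local shell session. `demo-repo` and `descriptive-branch-name` are illustrative placeholders; forking, cloning the umbrella, and pushing to your fork additionally require network access and a GitHub account.

```shell
# Sketch only: names below are placeholders, not real RabbitMQ repositories.
set -e
git init -q demo-repo
# Work on a branch with a descriptive name, not on master
git -C demo-repo checkout -q -b descriptive-branch-name
echo "change" > demo-repo/file.txt
git -C demo-repo add file.txt
# Commit with a descriptive message explaining what changed and why
git -C demo-repo -c user.name="Test" -c user.email="test@example.com" \
    commit -q -m "Explain what has been changed and why"
git -C demo-repo branch --show-current   # prints: descriptive-branch-name
```

From here, `git push <your-fork> descriptive-branch-name` and opening a pull request completes the workflow.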
17 | ||
18 | If what you are going to work on is a substantial change, please first ask the core team | |
19 | for their opinion on the [RabbitMQ mailing list](https://groups.google.com/forum/#!forum/rabbitmq-users). | |
20 | ||
21 | ||
22 | ## (Brief) Code of Conduct | |
23 | ||
24 | In one line: don't be a dick. | |
25 | ||
26 | Be respectful to the maintainers and other contributors. Open source | |
27 | contributors put long hours into developing projects and doing user | |
28 | support. Those projects and user support are available for free. We | |
29 | believe this deserves some respect. | |
30 | ||
31 | Be respectful to people of all races, genders, religious beliefs and | |
32 | political views. Regardless of how brilliant a pull request is | |
33 | technically, we will not tolerate disrespectful or aggressive | |
34 | behaviour. | |
35 | ||
36 | Contributors who violate this straightforward Code of Conduct will see | |
37 | their pull requests closed and locked. | |
38 | ||
39 | ||
40 | ## Contributor Agreement | |
41 | ||
42 | If you want to contribute a non-trivial change, please submit a signed copy of our | |
43 | [Contributor Agreement](https://github.com/rabbitmq/ca#how-to-submit) around the time | |
44 | you submit your pull request. This will make it much easier (and in some cases, possible) | |
45 | for the RabbitMQ team at Pivotal to merge your contribution. | |
46 | ||
47 | ||
48 | ## Where to Ask Questions | |
49 | ||
50 | If something isn't clear, feel free to ask on our [mailing list](https://groups.google.com/forum/#!forum/rabbitmq-users). |
116 | 116 | validate_params_user(#amqp_params_direct{}, none) -> |
117 | 117 | ok; |
118 | 118 | validate_params_user(#amqp_params_direct{virtual_host = VHost}, |
119 | User = #user{username = Username, | |
120 | auth_backend = M}) -> | |
121 | case rabbit_vhost:exists(VHost) andalso M:check_vhost_access(User, VHost) of | |
122 | true -> ok; | |
123 | false -> {error, "user \"~s\" may not connect to vhost \"~s\"", | |
119 | User = #user{username = Username}) -> | |
120 | case rabbit_vhost:exists(VHost) andalso | |
121 | (catch rabbit_access_control:check_vhost_access( | |
122 | User, VHost, undefined)) of | |
123 | ok -> ok; | |
124 | _ -> {error, "user \"~s\" may not connect to vhost \"~s\"", | |
124 | 125 | [Username, VHost]} |
125 | 126 | end; |
126 | 127 | validate_params_user(#amqp_params_network{}, _User) -> |
256 | 256 | valid_param(Value) -> valid_param(Value, none). |
257 | 257 | |
258 | 258 | lookup_user(Name) -> |
259 | {ok, User} = rabbit_auth_backend_internal:check_user_login(Name, []), | |
259 | {ok, User} = rabbit_access_control:check_user_login(Name, []), | |
260 | 260 | User. |
261 | 261 | |
262 | 262 | clear_param(Name) -> |
0 | ## Overview | |
1 | ||
2 | RabbitMQ projects use pull requests to discuss, collaborate on and accept code contributions. | |
3 | Pull requests are the primary place for discussing code changes. | |
4 | ||
5 | ## How to Contribute | |
6 | ||
7 | The process is fairly standard: | |
8 | ||
9 | * Fork the repository or repositories you plan on contributing to | |
10 | * Clone [RabbitMQ umbrella repository](https://github.com/rabbitmq/rabbitmq-public-umbrella) | |
11 | * `cd umbrella`, `make co` | |
12 | * Create a branch with a descriptive name in the relevant repositories | |
13 | * Make your changes, run tests, commit with a [descriptive message](http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html), push to your fork | |
14 | * Submit pull requests with an explanation of what has been changed and **why** | |
15 | * Submit a filled out and signed [Contributor Agreement](https://github.com/rabbitmq/ca#how-to-submit) if needed (see below) | |
16 | * Be patient. We will get to your pull request eventually | |
17 | ||
18 | If what you are going to work on is a substantial change, please first ask the core team | |
19 | for their opinion on the [RabbitMQ mailing list](https://groups.google.com/forum/#!forum/rabbitmq-users). | |
20 | ||
21 | ||
22 | ## (Brief) Code of Conduct | |
23 | ||
24 | In one line: don't be a dick. | |
25 | ||
26 | Be respectful to the maintainers and other contributors. Open source | |
27 | contributors put long hours into developing projects and doing user | |
28 | support. Those projects and user support are available for free. We | |
29 | believe this deserves some respect. | |
30 | ||
31 | Be respectful to people of all races, genders, religious beliefs and | |
32 | political views. Regardless of how brilliant a pull request is | |
33 | technically, we will not tolerate disrespectful or aggressive | |
34 | behaviour. | |
35 | ||
36 | Contributors who violate this straightforward Code of Conduct will see | |
37 | their pull requests closed and locked. | |
38 | ||
39 | ||
40 | ## Contributor Agreement | |
41 | ||
42 | If you want to contribute a non-trivial change, please submit a signed copy of our | |
43 | [Contributor Agreement](https://github.com/rabbitmq/ca#how-to-submit) around the time | |
44 | you submit your pull request. This will make it much easier (and in some cases, possible) | |
45 | for the RabbitMQ team at Pivotal to merge your contribution. | |
46 | ||
47 | ||
48 | ## Where to Ask Questions | |
49 | ||
50 | If something isn't clear, feel free to ask on our [mailing list](https://groups.google.com/forum/#!forum/rabbitmq-users). |
17 | 17 | var num_keys = ['prefetch-count', 'reconnect-delay']; |
18 | 18 | var bool_keys = ['add-forward-headers']; |
19 | 19 | var arrayable_keys = ['src-uri', 'dest-uri']; |
20 | var redirect = this.params['redirect']; | |
21 | if (redirect != undefined) { | |
22 | delete this.params['redirect']; | |
23 | } | |
20 | 24 | put_parameter(this, [], num_keys, bool_keys, arrayable_keys); |
25 | if (redirect != undefined) { | |
26 | go_to(redirect); | |
27 | } | |
21 | 28 | return false; |
22 | 29 | }); |
23 | 30 | sammy.del('#/shovel-parameters', function() { |
64 | 64 | %% static shovels do not have a vhost, so only allow admins (not |
65 | 65 | %% monitors) to see them. |
66 | 66 | filter_vhost_user(List, _ReqData, #context{user = User = #user{tags = Tags}}) -> |
67 | VHosts = rabbit_mgmt_util:list_login_vhosts(User), | |
67 | VHosts = rabbit_mgmt_util:list_login_vhosts(User, undefined), | |
68 | 68 | [I || I <- List, case pget(vhost, I) of |
69 | 69 | undefined -> lists:member(administrator, Tags); |
70 | 70 | VHost -> lists:member(VHost, VHosts) |
0 | ## Overview | |
1 | ||
2 | RabbitMQ projects use pull requests to discuss, collaborate on and accept code contributions. | |
3 | Pull requests are the primary place for discussing code changes. | |
4 | ||
5 | ## How to Contribute | |
6 | ||
7 | The process is fairly standard: | |
8 | ||
9 | * Fork the repository or repositories you plan on contributing to | |
10 | * Clone [RabbitMQ umbrella repository](https://github.com/rabbitmq/rabbitmq-public-umbrella) | |
11 | * `cd umbrella`, `make co` | |
12 | * Create a branch with a descriptive name in the relevant repositories | |
13 | * Make your changes, run tests, commit with a [descriptive message](http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html), push to your fork | |
14 | * Submit pull requests with an explanation of what has been changed and **why** | |
15 | * Submit a filled out and signed [Contributor Agreement](https://github.com/rabbitmq/ca#how-to-submit) if needed (see below) | |
16 | * Be patient. We will get to your pull request eventually | |
17 | ||
18 | If what you are going to work on is a substantial change, please first ask the core team | |
19 | for their opinion on the [RabbitMQ mailing list](https://groups.google.com/forum/#!forum/rabbitmq-users). | |
20 | ||
21 | ||
22 | ## (Brief) Code of Conduct | |
23 | ||
24 | In one line: don't be a dick. | |
25 | ||
26 | Be respectful to the maintainers and other contributors. Open source | |
27 | contributors put long hours into developing projects and doing user | |
28 | support. Those projects and user support are available for free. We | |
29 | believe this deserves some respect. | |
30 | ||
31 | Be respectful to people of all races, genders, religious beliefs and | |
32 | political views. Regardless of how brilliant a pull request is | |
33 | technically, we will not tolerate disrespectful or aggressive | |
34 | behaviour. | |
35 | ||
36 | Contributors who violate this straightforward Code of Conduct will see | |
37 | their pull requests closed and locked. | |
38 | ||
39 | ||
40 | ## Contributor Agreement | |
41 | ||
42 | If you want to contribute a non-trivial change, please submit a signed copy of our | |
43 | [Contributor Agreement](https://github.com/rabbitmq/ca#how-to-submit) around the time | |
44 | you submit your pull request. This will make it much easier (and in some cases, possible) | |
45 | for the RabbitMQ team at Pivotal to merge your contribution. | |
46 | ||
47 | ||
48 | ## Where to Ask Questions | |
49 | ||
50 | If something isn't clear, feel free to ask on our [mailing list](https://groups.google.com/forum/#!forum/rabbitmq-users). |
0 | 0 | RELEASABLE:=true |
1 | DEPS:=rabbitmq-server rabbitmq-erlang-client | |
1 | DEPS:=rabbitmq-server rabbitmq-erlang-client rabbitmq-test | |
2 | 2 | STANDALONE_TEST_COMMANDS:=eunit:test([rabbit_stomp_test_util,rabbit_stomp_test_frame],[verbose]) |
3 | WITH_BROKER_TEST_SCRIPTS:=$(PACKAGE_DIR)/test/src/test.py $(PACKAGE_DIR)/test/src/test_connect_options.py | |
3 | WITH_BROKER_TEST_SCRIPTS:=$(PACKAGE_DIR)/test/src/test.py $(PACKAGE_DIR)/test/src/test_connect_options.py $(PACKAGE_DIR)/test/src/test_ssl.py | |
4 | 4 | WITH_BROKER_TEST_COMMANDS:=rabbit_stomp_test:all_tests() rabbit_stomp_amqqueue_test:all_tests() |
5 | ||
6 | RABBITMQ_TEST_PATH=$(PACKAGE_DIR)/../rabbitmq-test | |
7 | ABS_PACKAGE_DIR:=$(abspath $(PACKAGE_DIR)) | |
8 | ||
9 | CERTS_DIR:=$(ABS_PACKAGE_DIR)/test/certs | |
10 | CAN_RUN_SSL:=$(shell if [ -d $(RABBITMQ_TEST_PATH) ]; then echo "true"; else echo "false"; fi) | |
11 | ||
12 | TEST_CONFIG_PATH=$(TEST_EBIN_DIR)/test.config | |
13 | WITH_BROKER_TEST_CONFIG:=$(TEST_EBIN_DIR)/test | |
14 | ||
15 | .PHONY: $(TEST_CONFIG_PATH) | |
16 | ||
17 | ifeq ($(CAN_RUN_SSL),true) | |
18 | ||
19 | WITH_BROKER_TEST_SCRIPTS += $(PACKAGE_DIR)/test/src/test_ssl.py | |
20 | ||
21 | $(TEST_CONFIG_PATH): $(CERTS_DIR) $(ABS_PACKAGE_DIR)/test/src/ssl.config | |
22 | sed -e "s|%%CERTS_DIR%%|$(CERTS_DIR)|g" < $(ABS_PACKAGE_DIR)/test/src/ssl.config > $@ | |
23 | @echo "\nRunning SSL tests\n" | |
24 | ||
25 | $(CERTS_DIR): | |
26 | mkdir -p $(CERTS_DIR) | |
27 | make -C $(RABBITMQ_TEST_PATH)/certs all PASSWORD=test DIR=$(CERTS_DIR) | |
28 | ||
29 | else | |
30 | $(TEST_CONFIG_PATH): $(ABS_PACKAGE_DIR)/test/src/non_ssl.config | |
31 | cp $(ABS_PACKAGE_DIR)/test/src/non_ssl.config $@ | |
32 | @echo "\nNOT running SSL tests - looked in $(RABBITMQ_TEST_PATH) \n" | |
33 | ||
34 | endif | |
5 | WITH_BROKER_TEST_CONFIG:=$(PACKAGE_DIR)/test/ebin/test | |
35 | 6 | |
36 | 7 | define package_rules |
37 | 8 | |
38 | $(PACKAGE_DIR)+pre-test:: $(TEST_CONFIG_PATH) | |
9 | $(PACKAGE_DIR)+pre-test:: | |
10 | rm -rf $(PACKAGE_DIR)/test/certs | |
11 | mkdir $(PACKAGE_DIR)/test/certs | |
12 | mkdir -p $(PACKAGE_DIR)/test/ebin | |
13 | sed -e "s|%%CERTS_DIR%%|$(abspath $(PACKAGE_DIR))/test/certs|g" < $(PACKAGE_DIR)/test/src/test.config > $(PACKAGE_DIR)/test/ebin/test.config | |
14 | make -C $(PACKAGE_DIR)/../rabbitmq-test/certs all PASSWORD=test DIR=$(abspath $(PACKAGE_DIR))/test/certs | |
39 | 15 | make -C $(PACKAGE_DIR)/deps/stomppy |
40 | 16 | |
41 | 17 | $(PACKAGE_DIR)+clean:: |
42 | rm -rf $(CERTS_DIR) | |
18 | rm -rf $(PACKAGE_DIR)/test/certs | |
43 | 19 | |
44 | 20 | $(PACKAGE_DIR)+clean-with-deps:: |
45 | 21 | make -C $(PACKAGE_DIR)/deps/stomppy distclean |
157 | 157 | {shutdown, {server_initiated_close, Code, Explanation}}}, |
158 | 158 | State = #state{connection = Conn}) -> |
159 | 159 | amqp_death(Code, Explanation, State); |
160 | handle_info({'EXIT', Conn, | |
161 | {shutdown, {connection_closing, | |
162 | {server_initiated_close, Code, Explanation}}}}, | |
163 | State = #state{connection = Conn}) -> | |
164 | amqp_death(Code, Explanation, State); | |
160 | 165 | handle_info({'EXIT', Conn, Reason}, State = #state{connection = Conn}) -> |
161 | 166 | send_error("AMQP connection died", "Reason: ~p", [Reason], State), |
162 | 167 | {stop, {conn_died, Reason}, State}; |
168 | ||
169 | handle_info({'EXIT', Ch, Reason}, State = #state{channel = Ch}) -> | |
170 | send_error("AMQP channel died", "Reason: ~p", [Reason], State), | |
171 | {stop, {channel_died, Reason}, State}; | |
172 | handle_info({'EXIT', Ch, | |
173 | {shutdown, {server_initiated_close, Code, Explanation}}}, | |
174 | State = #state{channel = Ch}) -> | |
175 | amqp_death(Code, Explanation, State); | |
176 | ||
177 | ||
163 | 178 | handle_info({inet_reply, _, ok}, State) -> |
164 | 179 | {noreply, State, hibernate}; |
165 | 180 | handle_info({bump_credit, Msg}, State) -> |
510 | 525 | {ok, Connection} -> |
511 | 526 | link(Connection), |
512 | 527 | {ok, Channel} = amqp_connection:open_channel(Connection), |
528 | link(Channel), | |
513 | 529 | amqp_channel:enable_delivery_flow_control(Channel), |
514 | 530 | SessionId = rabbit_guid:string(rabbit_guid:gen_secure(), "session"), |
515 | 531 | {{SendTimeout, ReceiveTimeout}, State1} = |
16 | 16 | -module(rabbit_stomp_reader). |
17 | 17 | |
18 | 18 | -export([start_link/3]). |
19 | -export([init/3]). | |
19 | -export([init/3, mainloop/2]). | |
20 | -export([system_continue/3, system_terminate/4, system_code_change/4]). | |
20 | 21 | -export([conserve_resources/3]). |
21 | 22 | |
22 | 23 | -include("rabbit_stomp.hrl"). |
24 | 25 | -include_lib("amqp_client/include/amqp_client.hrl"). |
25 | 26 | |
26 | 27 | -record(reader_state, {socket, parse_state, processor, state, |
27 | conserve_resources, recv_outstanding}). | |
28 | conserve_resources, recv_outstanding, | |
29 | parent}). | |
28 | 30 | |
29 | 31 | %%---------------------------------------------------------------------------- |
30 | 32 | |
47 | 49 | {ok, ConnStr} -> |
48 | 50 | case SockTransform(Sock0) of |
49 | 51 | {ok, Sock} -> |
50 | ||
52 | DebugOpts = sys:debug_options([]), | |
51 | 53 | ProcInitArgs = processor_args(SupHelperPid, |
52 | 54 | Configuration, |
53 | 55 | Sock), |
58 | 60 | |
59 | 61 | ParseState = rabbit_stomp_frame:initial_state(), |
60 | 62 | try |
61 | mainloop( | |
63 | mainloop(DebugOpts, | |
62 | 64 | register_resource_alarm( |
63 | 65 | #reader_state{socket = Sock, |
64 | 66 | parse_state = ParseState, |
85 | 87 | end |
86 | 88 | end. |
87 | 89 | |
88 | mainloop(State0 = #reader_state{socket = Sock}) -> | |
90 | mainloop(DebugOpts, State0 = #reader_state{socket = Sock}) -> | |
89 | 91 | State = run_socket(control_throttle(State0)), |
90 | 92 | receive |
91 | 93 | {inet_async, Sock, _Ref, {ok, Data}} -> |
92 | mainloop(process_received_bytes( | |
94 | mainloop(DebugOpts, process_received_bytes( | |
93 | 95 | Data, State#reader_state{recv_outstanding = false})); |
94 | 96 | {inet_async, _Sock, _Ref, {error, closed}} -> |
95 | 97 | ok; |
98 | 100 | {inet_reply, _Sock, {error, closed}} -> |
99 | 101 | ok; |
100 | 102 | {conserve_resources, Conserve} -> |
101 | mainloop(State#reader_state{conserve_resources = Conserve}); | |
103 | mainloop(DebugOpts, State#reader_state{conserve_resources = Conserve}); | |
102 | 104 | {bump_credit, Msg} -> |
103 | 105 | credit_flow:handle_bump_msg(Msg), |
104 | mainloop(State); | |
106 | mainloop(DebugOpts, State); | |
107 | {system, From, Request} -> | |
108 | sys:handle_system_msg(Request, From, State#reader_state.parent, | |
109 | ?MODULE, DebugOpts, State); | |
105 | 110 | {'EXIT', _From, shutdown} -> |
106 | 111 | ok; |
107 | 112 | Other -> |
159 | 164 | |
160 | 165 | %%---------------------------------------------------------------------------- |
161 | 166 | |
167 | system_continue(Parent, DebugOpts, State) -> | |
168 | mainloop(DebugOpts, State#reader_state{parent = Parent}). | |
169 | ||
170 | system_terminate(Reason, _Parent, _OldVsn, _Extra) -> | |
171 | exit(Reason). | |
172 | ||
173 | system_code_change(Misc, _Module, _OldVsn, _Extra) -> | |
174 | {ok, Misc}. | |
175 | ||
176 | %%---------------------------------------------------------------------------- | |
177 | ||
162 | 178 | processor_args(SupPid, Configuration, Sock) -> |
163 | 179 | SendFun = fun (sync, IoData) -> |
164 | 180 | %% no messages emitted |
0 | [{rabbitmq_stomp, [{default_user, [{login, "guest"}, | |
1 | {passcode, "guest"} | |
2 | ]}, | |
3 | {implicit_connect, true} | |
4 | ]} | |
5 | ]. |
0 | [{rabbitmq_stomp, [{default_user, []}, | |
1 | {ssl_cert_login, true}, | |
2 | {ssl_listeners, [61614]} | |
3 | ]}, | |
4 | {rabbit, [{ssl_options, [{cacertfile,"%%CERTS_DIR%%/testca/cacert.pem"}, | |
5 | {certfile,"%%CERTS_DIR%%/server/cert.pem"}, | |
6 | {keyfile,"%%CERTS_DIR%%/server/key.pem"}, | |
7 | {verify,verify_peer}, | |
8 | {fail_if_no_peer_cert,true} | |
9 | ]} | |
10 | ]} | |
11 | ]. |
0 | 0 | import unittest |
1 | 1 | import os |
2 | import os.path | |
3 | import sys | |
2 | 4 | |
3 | 5 | import stomp |
4 | 6 | import base |
5 | 7 | |
6 | ssl_key_file = os.path.abspath("test/certs/client/key.pem") | |
7 | ssl_cert_file = os.path.abspath("test/certs/client/cert.pem") | |
8 | ssl_ca_certs = os.path.abspath("test/certs/testca/cacert.pem") | |
8 | ||
9 | base_path = os.path.dirname(sys.argv[0]) | |
10 | ||
11 | ssl_key_file = os.path.abspath(base_path + "/../certs/client/key.pem") | |
12 | ssl_cert_file = os.path.abspath(base_path + "/../certs/client/cert.pem") | |
13 | ssl_ca_certs = os.path.abspath(base_path + "/../certs/testca/cacert.pem") | |
9 | 14 | |
10 | 15 | class TestSslClient(unittest.TestCase): |
11 | 16 | |
15 | 20 | use_ssl = True, ssl_key_file = ssl_key_file, |
16 | 21 | ssl_cert_file = ssl_cert_file, |
17 | 22 | ssl_ca_certs = ssl_ca_certs) |
18 | ||
23 | print "FILE: ", ssl_cert_file | |
19 | 24 | conn.start() |
20 | 25 | conn.connect() |
21 | 26 | return conn |
0 | [{rabbitmq_stomp, [{default_user, []}, | |
1 | {ssl_cert_login, true}, | |
2 | {ssl_listeners, [61614]} | |
3 | ]}, | |
4 | {rabbit, [{ssl_options, [{cacertfile,"%%CERTS_DIR%%/testca/cacert.pem"}, | |
5 | {certfile,"%%CERTS_DIR%%/server/cert.pem"}, | |
6 | {keyfile,"%%CERTS_DIR%%/server/key.pem"}, | |
7 | {verify,verify_peer}, | |
8 | {fail_if_no_peer_cert,true} | |
9 | ]} | |
10 | ]} | |
11 | ]. |
0 | ## Overview | |
1 | ||
2 | RabbitMQ projects use pull requests to discuss, collaborate on and accept code contributions. | |
3 | Pull requests are the primary place for discussing code changes. | |
4 | ||
5 | ## How to Contribute | |
6 | ||
7 | The process is fairly standard: | |
8 | ||
9 | * Fork the repository or repositories you plan on contributing to | |
10 | * Clone [RabbitMQ umbrella repository](https://github.com/rabbitmq/rabbitmq-public-umbrella) | |
11 | * `cd umbrella`, `make co` | |
12 | * Create a branch with a descriptive name in the relevant repositories | |
13 | * Make your changes, run tests, commit with a [descriptive message](http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html), push to your fork | |
14 | * Submit pull requests with an explanation of what has been changed and **why** | |
15 | * Submit a filled out and signed [Contributor Agreement](https://github.com/rabbitmq/ca#how-to-submit) if needed (see below) | |
16 | * Be patient. We will get to your pull request eventually | |
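The branch-and-commit steps above can be sketched in a throwaway repository. This is a minimal illustration only: the branch name and commit message below are hypothetical examples, not RabbitMQ conventions.

```shell
# Sketch of the "create a branch, commit with a descriptive message" step,
# run in a disposable repository. Branch name and message are hypothetical.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email you@example.com
git config user.name "Contributor"
git checkout -q -b stomp-reader-sys-messages   # descriptive branch name
echo demo > change.txt
git add change.txt
# Short summary line, blank line, then a body explaining the "why":
git commit -q -m "Support sys messages in the STOMP reader loop

Without this, the reader process cannot be suspended, resumed or
code-changed by its supervisor during upgrades."
git log --oneline -1
```

Pushing the branch to your fork and opening the pull request then proceeds as in the list above.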
17 | ||
18 | If what you are going to work on is a substantial change, please first ask the core team | |
19 | for their opinion on the [RabbitMQ mailing list](https://groups.google.com/forum/#!forum/rabbitmq-users). | |
20 | ||
21 | ||
22 | ## (Brief) Code of Conduct | |
23 | ||
24 | In one line: don't be a dick. | |
25 | ||
26 | Be respectful to the maintainers and other contributors. Open source | |
27 | contributors put long hours into developing projects and supporting | |
28 | users, and both the projects and that support are available for | |
29 | free. We believe this deserves some respect. | |
30 | ||
31 | Be respectful to people of all races, genders, religious beliefs and | |
32 | political views. Regardless of how brilliant a pull request is | |
33 | technically, we will not tolerate disrespectful or aggressive | |
34 | behaviour. | |
35 | ||
36 | Contributors who violate this straightforward Code of Conduct will see | |
37 | their pull requests closed and locked. | |
38 | ||
39 | ||
40 | ## Contributor Agreement | |
41 | ||
42 | If you want to contribute a non-trivial change, please submit a signed copy of our | |
43 | [Contributor Agreement](https://github.com/rabbitmq/ca#how-to-submit) around the time | |
44 | you submit your pull request. This will make it much easier (in some cases, possible) | |
45 | for the RabbitMQ team at Pivotal to merge your contribution. | |
46 | ||
47 | ||
48 | ## Where to Ask Questions | |
49 | ||
50 | If something isn't clear, feel free to ask on our [mailing list](https://groups.google.com/forum/#!forum/rabbitmq-users). |
0 | 0 | DEPS:=rabbitmq-erlang-client |
1 | 1 | FILTER:=all |
2 | 2 | COVER:=false |
3 | WITH_BROKER_TEST_COMMANDS:=rabbit_test_runner:run_in_broker(\"$(PACKAGE_DIR)/test/ebin\",\"$(FILTER)\") | |
3 | 4 | STANDALONE_TEST_COMMANDS:=rabbit_test_runner:run_multi(\"$(UMBRELLA_BASE_DIR)/rabbitmq-server\",\"$(PACKAGE_DIR)/test/ebin\",\"$(FILTER)\",$(COVER),none) |
4 | 5 | |
5 | 6 | ## Require R15B to compile inet_proxy_dist since it requires includes |
65 | 65 | %% Modification START |
66 | 66 | ProxyPort = case TcpPort >= 25672 andalso TcpPort < 25700 |
67 | 67 | andalso inet_tcp_proxy:is_enabled() of |
68 | true -> TcpPort + 10000; | |
68 | true -> TcpPort + 5000; | |
69 | 69 | false -> TcpPort |
70 | 70 | end, |
71 | 71 | case inet_tcp:connect(Ip, ProxyPort, |
60 | 60 | go() -> |
61 | 61 | ets:new(?TABLE, [public, named_table]), |
62 | 62 | {ok, Port} = application:get_env(kernel, inet_dist_listen_min), |
63 | ProxyPort = Port + 10000, | |
63 | ProxyPort = Port + 5000, | |
64 | 64 | {ok, Sock} = gen_tcp:listen(ProxyPort, [inet, |
65 | 65 | {reuseaddr, true}]), |
66 | 66 | accept_loop(Sock, Port). |
30 | 30 | -import(rabbit_misc, [pget/2, pget/3]). |
31 | 31 | |
32 | 32 | -define(INITIAL_KEYS, [cover, base, server, plugins]). |
33 | -define(NON_RUNNING_KEYS, ?INITIAL_KEYS ++ [nodename, port]). | |
33 | -define(NON_RUNNING_KEYS, ?INITIAL_KEYS ++ [nodename, port, mnesia_dir]). | |
34 | 34 | |
35 | 35 | cluster_ab(InitialCfg) -> cluster(InitialCfg, [a, b]). |
36 | 36 | cluster_abc(InitialCfg) -> cluster(InitialCfg, [a, b, c]). |
52 | 52 | [{_, _}|_] -> [InitialCfg0 || _ <- NodeNames]; |
53 | 53 | _ -> InitialCfg0 |
54 | 54 | end, |
55 | Nodes = [[{nodename, N}, {port, P} | strip_non_initial(Cfg)] | |
55 | Nodes = [[{nodename, N}, {port, P}, | |
56 | {mnesia_dir, rabbit_misc:format("rabbitmq-~s-mnesia", [N])} | | |
57 | strip_non_initial(Cfg)] | |
56 | 58 | || {N, P, Cfg} <- lists:zip3(NodeNames, Ports, InitialCfgs)], |
57 | 59 | [start_node(Node) || Node <- Nodes]. |
58 | 60 | |
160 | 162 | |
161 | 163 | kill_node(Cfg) -> |
162 | 164 | maybe_flush_cover(Cfg), |
163 | catch execute(Cfg, {"kill -9 ~s", [pget(os_pid, Cfg)]}), | |
165 | OSPid = pget(os_pid, Cfg), | |
166 | catch execute(Cfg, {"kill -9 ~s", [OSPid]}), | |
167 | await_os_pid_death(OSPid), | |
164 | 168 | strip_running(Cfg). |
169 | ||
170 | await_os_pid_death(OSPid) -> | |
171 | case rabbit_misc:is_os_process_alive(OSPid) of | |
172 | true -> timer:sleep(100), | |
173 | await_os_pid_death(OSPid); | |
174 | false -> ok | |
175 | end. | |
165 | 176 | |
166 | 177 | restart_node(Cfg) -> |
167 | 178 | start_node(stop_node(Cfg)). |
192 | 203 | execute(Env0, Cmd0, AcceptableExitCodes) -> |
193 | 204 | Env = [{"RABBITMQ_" ++ K, fmt(V)} || {K, V} <- Env0], |
194 | 205 | Cmd = fmt(Cmd0), |
206 | error_logger:info_msg("Invoking '~s'~n", [Cmd]), | |
195 | 207 | Port = erlang:open_port( |
196 | 208 | {spawn, "/usr/bin/env sh -c \"" ++ Cmd ++ "\""}, |
197 | 209 | [{env, Env}, exit_status, |
208 | 220 | Port = pget(port, Cfg), |
209 | 221 | Base = pget(base, Cfg), |
210 | 222 | Server = pget(server, Cfg), |
211 | [{"MNESIA_BASE", {"~s/rabbitmq-~s-mnesia", [Base, Nodename]}}, | |
212 | {"LOG_BASE", {"~s", [Base]}}, | |
213 | {"NODENAME", {"~s", [Nodename]}}, | |
214 | {"NODE_PORT", {"~B", [Port]}}, | |
215 | {"PID_FILE", pid_file(Cfg)}, | |
216 | {"CONFIG_FILE", "/some/path/which/does/not/exist"}, | |
217 | {"ALLOW_INPUT", "1"}, %% Needed to make it close on exit | |
223 | [{"MNESIA_DIR", {"~s/~s", [Base, pget(mnesia_dir, Cfg)]}}, | |
224 | {"PLUGINS_EXPAND_DIR", {"~s/~s-plugins-expand", [Base, Nodename]}}, | |
225 | {"LOG_BASE", {"~s", [Base]}}, | |
226 | {"NODENAME", {"~s", [Nodename]}}, | |
227 | {"NODE_PORT", {"~B", [Port]}}, | |
228 | {"PID_FILE", pid_file(Cfg)}, | |
229 | {"CONFIG_FILE", "/some/path/which/does/not/exist"}, | |
230 | {"ALLOW_INPUT", "1"}, %% Needed to make it close on exit | |
218 | 231 | %% Bit of a hack - only needed for mgmt tests. |
219 | 232 | {"SERVER_START_ARGS", |
220 | 233 | {"-rabbitmq_management listener [{port,1~B}]", [Port]}}, |
241 | 254 | port_receive_loop(Port, Stdout, AcceptableExitCodes) -> |
242 | 255 | receive |
243 | 256 | {Port, {exit_status, X}} -> |
257 | Fmt = "Command exited with code ~p~nStdout: ~s~n", | |
258 | Args = [X, Stdout], | |
244 | 259 | case lists:member(X, AcceptableExitCodes) of |
245 | true -> Stdout; | |
246 | false -> exit({exit_status, X, AcceptableExitCodes, Stdout}) | |
260 | true -> error_logger:info_msg(Fmt, Args), | |
261 | Stdout; | |
262 | false -> error_logger:error_msg(Fmt, Args), | |
263 | exit({exit_status, X, AcceptableExitCodes, Stdout}) | |
247 | 264 | end; |
248 | 265 | {Port, {data, Out}} -> |
249 | %%io:format(user, "~s", [Out]), | |
250 | 266 | port_receive_loop(Port, Stdout ++ Out, AcceptableExitCodes) |
251 | 267 | end. |
252 | 268 |
0 | %% The contents of this file are subject to the Mozilla Public License | |
1 | %% Version 1.1 (the "License"); you may not use this file except in | |
2 | %% compliance with the License. You may obtain a copy of the License | |
3 | %% at http://www.mozilla.org/MPL/ | |
4 | %% | |
5 | %% Software distributed under the License is distributed on an "AS IS" | |
6 | %% basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See | |
7 | %% the License for the specific language governing rights and | |
8 | %% limitations under the License. | |
9 | %% | |
10 | %% The Original Code is RabbitMQ. | |
11 | %% | |
12 | %% The Initial Developer of the Original Code is GoPivotal, Inc. | |
13 | %% Copyright (c) 2007-2014 GoPivotal, Inc. All rights reserved. | |
14 | %% | |
15 | -module(cluster_rename). | |
16 | ||
17 | -compile(export_all). | |
18 | -include_lib("eunit/include/eunit.hrl"). | |
19 | -include_lib("amqp_client/include/amqp_client.hrl"). | |
20 | ||
21 | -import(rabbit_misc, [pget/2]). | |
22 | ||
23 | -define(CLUSTER2, | |
24 | fun(C) -> rabbit_test_configs:cluster(C, [bugs, bigwig]) end). | |
25 | ||
26 | -define(CLUSTER3, | |
27 | fun(C) -> rabbit_test_configs:cluster(C, [bugs, bigwig, peter]) end). | |
28 | ||
29 | %% Rolling rename of a cluster, each node should do a secondary rename. | |
30 | rename_cluster_one_by_one_with() -> ?CLUSTER3. | |
31 | rename_cluster_one_by_one([Bugs, Bigwig, Peter]) -> | |
32 | publish_all([{Bugs, <<"1">>}, {Bigwig, <<"2">>}, {Peter, <<"3">>}]), | |
33 | ||
34 | Jessica = stop_rename_start(Bugs, jessica, [bugs, jessica]), | |
35 | Hazel = stop_rename_start(Bigwig, hazel, [bigwig, hazel]), | |
36 | Flopsy = stop_rename_start(Peter, flopsy, [peter, flopsy]), | |
37 | ||
38 | consume_all([{Jessica, <<"1">>}, {Hazel, <<"2">>}, {Flopsy, <<"3">>}]), | |
39 | stop_all([Jessica, Hazel, Flopsy]), | |
40 | ok. | |
41 | ||
42 | %% Big bang rename of a cluster, bugs should do a primary rename. | |
43 | rename_cluster_big_bang_with() -> ?CLUSTER3. | |
44 | rename_cluster_big_bang([Bugs, Bigwig, Peter]) -> | |
45 | publish_all([{Bugs, <<"1">>}, {Bigwig, <<"2">>}, {Peter, <<"3">>}]), | |
46 | ||
47 | Peter1 = rabbit_test_configs:stop_node(Peter), | |
48 | Bigwig1 = rabbit_test_configs:stop_node(Bigwig), | |
49 | Bugs1 = rabbit_test_configs:stop_node(Bugs), | |
50 | ||
51 | Map = [bugs, jessica, bigwig, hazel, peter, flopsy], | |
52 | Jessica0 = rename_node(Bugs1, jessica, Map), | |
53 | Hazel0 = rename_node(Bigwig1, hazel, Map), | |
54 | Flopsy0 = rename_node(Peter1, flopsy, Map), | |
55 | ||
56 | Jessica = rabbit_test_configs:start_node(Jessica0), | |
57 | Hazel = rabbit_test_configs:start_node(Hazel0), | |
58 | Flopsy = rabbit_test_configs:start_node(Flopsy0), | |
59 | ||
60 | consume_all([{Jessica, <<"1">>}, {Hazel, <<"2">>}, {Flopsy, <<"3">>}]), | |
61 | stop_all([Jessica, Hazel, Flopsy]), | |
62 | ok. | |
63 | ||
64 | %% Here we test that bugs copes with things being renamed around it. | |
65 | partial_one_by_one_with() -> ?CLUSTER3. | |
66 | partial_one_by_one([Bugs, Bigwig, Peter]) -> | |
67 | publish_all([{Bugs, <<"1">>}, {Bigwig, <<"2">>}, {Peter, <<"3">>}]), | |
68 | ||
69 | Jessica = stop_rename_start(Bugs, jessica, [bugs, jessica]), | |
70 | Hazel = stop_rename_start(Bigwig, hazel, [bigwig, hazel]), | |
71 | ||
72 | consume_all([{Jessica, <<"1">>}, {Hazel, <<"2">>}, {Peter, <<"3">>}]), | |
73 | stop_all([Jessica, Hazel, Peter]), | |
74 | ok. | |
75 | ||
76 | %% Here we test that bugs copes with things being renamed around it. | |
77 | partial_big_bang_with() -> ?CLUSTER3. | |
78 | partial_big_bang([Bugs, Bigwig, Peter]) -> | |
79 | publish_all([{Bugs, <<"1">>}, {Bigwig, <<"2">>}, {Peter, <<"3">>}]), | |
80 | ||
81 | Peter1 = rabbit_test_configs:stop_node(Peter), | |
82 | Bigwig1 = rabbit_test_configs:stop_node(Bigwig), | |
83 | Bugs1 = rabbit_test_configs:stop_node(Bugs), | |
84 | ||
85 | Map = [bigwig, hazel, peter, flopsy], | |
86 | Hazel0 = rename_node(Bigwig1, hazel, Map), | |
87 | Flopsy0 = rename_node(Peter1, flopsy, Map), | |
88 | ||
89 | Bugs2 = rabbit_test_configs:start_node(Bugs1), | |
90 | Hazel = rabbit_test_configs:start_node(Hazel0), | |
91 | Flopsy = rabbit_test_configs:start_node(Flopsy0), | |
92 | ||
93 | consume_all([{Bugs2, <<"1">>}, {Hazel, <<"2">>}, {Flopsy, <<"3">>}]), | |
94 | stop_all([Bugs2, Hazel, Flopsy]), | |
95 | ok. | |
96 | ||
97 | %% We should be able to specify the -n parameter on ctl with either | |
98 | %% the before or after name for the local node (since in real cases | |
99 | %% one might want to invoke the command before or after the hostname | |
100 | %% has changed) - usually we test before so here we test after. | |
101 | post_change_nodename_with() -> ?CLUSTER2. | |
102 | post_change_nodename([Bugs, _Bigwig]) -> | |
103 | publish(Bugs, <<"bugs">>), | |
104 | ||
105 | Bugs1 = rabbit_test_configs:stop_node(Bugs), | |
106 | Bugs2 = [{nodename, jessica} | proplists:delete(nodename, Bugs1)], | |
107 | Jessica0 = rename_node(Bugs2, jessica, [bugs, jessica]), | |
108 | Jessica = rabbit_test_configs:start_node(Jessica0), | |
109 | ||
110 | consume(Jessica, <<"bugs">>), | |
111 | stop_all([Jessica]), | |
112 | ok. | |
113 | ||
114 | %% If we invoke rename but the node name does not actually change, we | |
115 | %% should roll back. | |
116 | abortive_rename_with() -> ?CLUSTER2. | |
117 | abortive_rename([Bugs, _Bigwig]) -> | |
118 | publish(Bugs, <<"bugs">>), | |
119 | ||
120 | Bugs1 = rabbit_test_configs:stop_node(Bugs), | |
121 | _Jessica = rename_node(Bugs1, jessica, [bugs, jessica]), | |
122 | Bugs2 = rabbit_test_configs:start_node(Bugs1), | |
123 | ||
124 | consume(Bugs2, <<"bugs">>), | |
125 | ok. | |
126 | ||
127 | %% And test some ways the command can fail. | |
128 | rename_fail_with() -> ?CLUSTER2. | |
129 | rename_fail([Bugs, _Bigwig]) -> | |
130 | Bugs1 = rabbit_test_configs:stop_node(Bugs), | |
131 | %% Rename from a node that does not exist | |
132 | rename_node_fail(Bugs1, [bugzilla, jessica]), | |
133 | %% Rename to a node which does | |
134 | rename_node_fail(Bugs1, [bugs, bigwig]), | |
135 | %% Rename two nodes to the same thing | |
136 | rename_node_fail(Bugs1, [bugs, jessica, bigwig, jessica]), | |
137 | %% Rename while impersonating a node not in the cluster | |
138 | rename_node_fail(set_node(rabbit, Bugs1), [bugs, jessica]), | |
139 | ok. | |
140 | ||
141 | rename_twice_fail_with() -> ?CLUSTER2. | |
142 | rename_twice_fail([Bugs, _Bigwig]) -> | |
143 | Bugs1 = rabbit_test_configs:stop_node(Bugs), | |
144 | Indecisive = rename_node(Bugs1, indecisive, [bugs, indecisive]), | |
145 | rename_node_fail(Indecisive, [indecisive, jessica]), | |
146 | ok. | |
147 | ||
148 | %% ---------------------------------------------------------------------------- | |
149 | ||
150 | %% Normal post-test stop does not work since names have changed... | |
151 | stop_all(Cfgs) -> | |
152 | [rabbit_test_configs:stop_node(Cfg) || Cfg <- Cfgs]. | |
153 | ||
154 | stop_rename_start(Cfg, Nodename, Map) -> | |
155 | rabbit_test_configs:start_node( | |
156 | rename_node(rabbit_test_configs:stop_node(Cfg), Nodename, Map)). | |
157 | ||
158 | rename_node(Cfg, Nodename, Map) -> | |
159 | rename_node(Cfg, Nodename, Map, fun rabbit_test_configs:rabbitmqctl/2). | |
160 | ||
161 | rename_node_fail(Cfg, Map) -> | |
162 | rename_node(Cfg, ignored, Map, fun rabbit_test_configs:rabbitmqctl_fail/2). | |
163 | ||
164 | rename_node(Cfg, Nodename, Map, Ctl) -> | |
165 | MapS = string:join( | |
166 | [atom_to_list(rabbit_nodes:make(N)) || N <- Map], " "), | |
167 | Ctl(Cfg, {"rename_cluster_node ~s", [MapS]}), | |
168 | set_node(Nodename, Cfg). | |
169 | ||
170 | publish(Cfg, Q) -> | |
171 | Ch = pget(channel, Cfg), | |
172 | amqp_channel:call(Ch, #'confirm.select'{}), | |
173 | amqp_channel:call(Ch, #'queue.declare'{queue = Q, durable = true}), | |
174 | amqp_channel:cast(Ch, #'basic.publish'{routing_key = Q}, | |
175 | #amqp_msg{props = #'P_basic'{delivery_mode = 2}, | |
176 | payload = Q}), | |
177 | amqp_channel:wait_for_confirms(Ch). | |
178 | ||
179 | consume(Cfg, Q) -> | |
180 | {_Conn, Ch} = rabbit_test_util:connect(Cfg), | |
181 | amqp_channel:call(Ch, #'queue.declare'{queue = Q, durable = true}), | |
182 | {#'basic.get_ok'{}, #amqp_msg{payload = Q}} = | |
183 | amqp_channel:call(Ch, #'basic.get'{queue = Q}). | |
184 | ||
185 | ||
186 | publish_all(CfgsKeys) -> | |
187 | [publish(Cfg, Key) || {Cfg, Key} <- CfgsKeys]. | |
188 | ||
189 | consume_all(CfgsKeys) -> | |
190 | [consume(Cfg, Key) || {Cfg, Key} <- CfgsKeys]. | |
191 | ||
192 | set_node(Nodename, Cfg) -> | |
193 | [{nodename, Nodename} | proplists:delete(nodename, Cfg)]. |
216 | 216 | passive = true})), |
217 | 217 | ok. |
218 | 218 | |
219 | forget_offline_promotes_slave_with() -> [cluster_ab, ha_policy_all]. | |
220 | forget_offline_promotes_slave([Rabbit, Hare]) -> | |
221 | RabbitCh = pget(channel, Rabbit), | |
222 | Mirrored = <<"mirrored-queue">>, | |
223 | declare(RabbitCh, Mirrored), | |
224 | amqp_channel:call(RabbitCh, #'confirm.select'{}), | |
225 | amqp_channel:cast(RabbitCh, #'basic.publish'{routing_key = Mirrored}, | |
219 | forget_promotes_offline_slave_with() -> | |
220 | fun (Cfgs) -> | |
221 | rabbit_test_configs:cluster(Cfgs, [a, b, c, d]) | |
222 | end. | |
223 | ||
224 | forget_promotes_offline_slave([A, B, C, D]) -> | |
225 | ACh = pget(channel, A), | |
226 | ANode = pget(node, A), | |
227 | Q = <<"mirrored-queue">>, | |
228 | declare(ACh, Q), | |
229 | set_ha_policy(Q, A, [B, C]), | |
230 | set_ha_policy(Q, A, [C, D]), %% Test add and remove from recoverable_slaves | |
231 | ||
232 | %% Publish and confirm | |
233 | amqp_channel:call(ACh, #'confirm.select'{}), | |
234 | amqp_channel:cast(ACh, #'basic.publish'{routing_key = Q}, | |
226 | 235 | #amqp_msg{props = #'P_basic'{delivery_mode = 2}}), |
227 | amqp_channel:wait_for_confirms(RabbitCh), | |
228 | ||
229 | %% We should have a down slave on hare and a down master on rabbit. | |
230 | Hare2 = rabbit_test_configs:stop_node(Hare), | |
231 | _Rabbit2 = rabbit_test_configs:stop_node(Rabbit), | |
232 | ||
233 | rabbit_test_configs:rabbitmqctl( | |
234 | Hare2, {"forget_cluster_node --offline ~s", [pget(node, Rabbit)]}), | |
235 | ||
236 | Hare3 = rabbit_test_configs:start_node(Hare2), | |
237 | ||
238 | {_HConn2, HareCh2} = rabbit_test_util:connect(Hare3), | |
239 | #'queue.declare_ok'{message_count = 1} = declare(HareCh2, Mirrored), | |
240 | ||
236 | amqp_channel:wait_for_confirms(ACh), | |
237 | ||
238 | %% We kill nodes rather than stop them in order to make sure | |
239 | %% that we aren't dependent on anything that happens as they shut | |
240 | %% down (see bug 26467). | |
241 | D2 = rabbit_test_configs:kill_node(D), | |
242 | C2 = rabbit_test_configs:kill_node(C), | |
243 | _B2 = rabbit_test_configs:kill_node(B), | |
244 | _A2 = rabbit_test_configs:kill_node(A), | |
245 | ||
246 | rabbit_test_configs:rabbitmqctl(C2, "force_boot"), | |
247 | ||
248 | C3 = rabbit_test_configs:start_node(C2), | |
249 | ||
250 | %% We should now have the following dramatis personae: | |
251 | %% A - down, master | |
252 | %% B - down, used to be slave, no longer is, never had the message | |
253 | %% C - running, should be slave, but has wiped the message on restart | |
254 | %% D - down, recoverable slave, contains message | |
255 | %% | |
256 | %% So forgetting A should offline-promote the queue to D, keeping | |
257 | %% the message. | |
258 | ||
259 | rabbit_test_configs:rabbitmqctl(C3, {"forget_cluster_node ~s", [ANode]}), | |
260 | ||
261 | D3 = rabbit_test_configs:start_node(D2), | |
262 | {_DConn2, DCh2} = rabbit_test_util:connect(D3), | |
263 | #'queue.declare_ok'{message_count = 1} = declare(DCh2, Q), | |
241 | 264 | ok. |
265 | ||
266 | set_ha_policy(Q, MasterCfg, SlaveCfgs) -> | |
267 | Nodes = [list_to_binary(atom_to_list(pget(node, N))) || | |
268 | N <- [MasterCfg | SlaveCfgs]], | |
269 | rabbit_test_util:set_ha_policy(MasterCfg, Q, {<<"nodes">>, Nodes}), | |
270 | await_slaves(Q, pget(node, MasterCfg), [pget(node, C) || C <- SlaveCfgs]). | |
271 | ||
272 | await_slaves(Q, MNode, SNodes) -> | |
273 | {ok, #amqqueue{pid = MPid, | |
274 | slave_pids = SPids}} = | |
275 | rpc:call(MNode, rabbit_amqqueue, lookup, | |
276 | [rabbit_misc:r(<<"/">>, queue, Q)]), | |
277 | ActMNode = node(MPid), | |
278 | ActSNodes = lists:usort([node(P) || P <- SPids]), | |
279 | case {MNode, lists:usort(SNodes)} of | |
280 | {ActMNode, ActSNodes} -> ok; | |
281 | _ -> timer:sleep(100), | |
282 | await_slaves(Q, MNode, SNodes) | |
283 | end. | |
242 | 284 | |
243 | 285 | force_boot_with() -> cluster_ab. |
244 | 286 | force_boot([Rabbit, Hare]) -> |
330 | 372 | [Rabbit, Hare]), |
331 | 373 | assert_not_clustered(Bunny). |
332 | 374 | |
333 | update_cluster_nodes_test_with() -> start_abc. | |
334 | update_cluster_nodes_test(Config) -> | |
375 | update_cluster_nodes_with() -> start_abc. | |
376 | update_cluster_nodes(Config) -> | |
335 | 377 | [Rabbit, Hare, Bunny] = cluster_members(Config), |
336 | 378 | |
337 | 379 | %% Mnesia is running... |
392 | 434 | assert_not_clustered(Hare), |
393 | 435 | assert_not_clustered(Rabbit), |
394 | 436 | |
395 | %% If we use a legacy config file, it still works (and a warning is emitted) | |
437 | %% If we use a legacy config file, the node fails to start. | |
396 | 438 | ok = stop_app(Hare), |
397 | 439 | ok = reset(Hare), |
398 | 440 | ok = rpc:call(Hare, application, set_env, |
399 | 441 | [rabbit, cluster_nodes, [Rabbit]]), |
400 | ok = start_app(Hare), | |
401 | assert_cluster_status({[Rabbit, Hare], [Rabbit], [Rabbit, Hare]}, | |
402 | [Rabbit, Hare]). | |
403 | ||
404 | force_reset_test_with() -> start_abc. | |
405 | force_reset_test(Config) -> | |
442 | assert_failure(fun () -> start_app(Hare) end), | |
443 | assert_not_clustered(Rabbit), | |
444 | ||
445 | %% If we use an invalid node name, the node fails to start. | |
446 | ok = stop_app(Hare), | |
447 | ok = reset(Hare), | |
448 | ok = rpc:call(Hare, application, set_env, | |
449 | [rabbit, cluster_nodes, {["Mike's computer"], disc}]), | |
450 | assert_failure(fun () -> start_app(Hare) end), | |
451 | assert_not_clustered(Rabbit), | |
452 | ||
453 | %% If we use an invalid node type, the node fails to start. | |
454 | ok = stop_app(Hare), | |
455 | ok = reset(Hare), | |
456 | ok = rpc:call(Hare, application, set_env, | |
457 | [rabbit, cluster_nodes, {[Rabbit], blue}]), | |
458 | assert_failure(fun () -> start_app(Hare) end), | |
459 | assert_not_clustered(Rabbit), | |
460 | ||
461 | %% If we use an invalid cluster_nodes conf, the node fails to start. | |
462 | ok = stop_app(Hare), | |
463 | ok = reset(Hare), | |
464 | ok = rpc:call(Hare, application, set_env, | |
465 | [rabbit, cluster_nodes, true]), | |
466 | assert_failure(fun () -> start_app(Hare) end), | |
467 | assert_not_clustered(Rabbit), | |
468 | ||
469 | ok = stop_app(Hare), | |
470 | ok = reset(Hare), | |
471 | ok = rpc:call(Hare, application, set_env, | |
472 | [rabbit, cluster_nodes, "Yes, please"]), | |
473 | assert_failure(fun () -> start_app(Hare) end), | |
474 | assert_not_clustered(Rabbit). | |
475 | ||
476 | force_reset_node_with() -> start_abc. | |
477 | force_reset_node(Config) -> | |
406 | 478 | [Rabbit, Hare, _Bunny] = cluster_members(Config), |
407 | 479 | |
408 | 480 | stop_join_start(Rabbit, Hare), |
86 | 86 | %% Add D and E, D joins in |
87 | 87 | [CfgD, CfgE] = CfgsDE = rabbit_test_configs:start_nodes(CfgA, [d, e], 5675), |
88 | 88 | D = pget(node, CfgD), |
89 | E = pget(node, CfgE), | |
89 | 90 | rabbit_test_configs:add_to_cluster(CfgsABC, CfgsDE), |
90 | 91 | assert_slaves(A, ?QNAME, {A, [B, C, D]}), |
91 | 92 | |
92 | %% Remove D, E does not join in | |
93 | %% Remove D, E joins in | |
93 | 94 | rabbit_test_configs:stop_node(CfgD), |
94 | assert_slaves(A, ?QNAME, {A, [B, C]}), | |
95 | assert_slaves(A, ?QNAME, {A, [B, C, E]}), | |
95 | 96 | |
96 | 97 | %% Clean up since we started this by hand |
97 | 98 | rabbit_test_configs:stop_node(CfgE), |
36 | 36 | [A] = partitions(C), |
37 | 37 | ok. |
38 | 38 | |
39 | pause_on_down_with() -> ?CONFIG. | |
40 | pause_on_down([CfgA, CfgB, CfgC] = Cfgs) -> | |
39 | pause_minority_on_down_with() -> ?CONFIG. | |
40 | pause_minority_on_down([CfgA, CfgB, CfgC] = Cfgs) -> | |
41 | 41 | A = pget(node, CfgA), |
42 | 42 | set_mode(Cfgs, pause_minority), |
43 | 43 | true = is_running(A), |
50 | 50 | await_running(A, false), |
51 | 51 | ok. |
52 | 52 | |
53 | pause_on_blocked_with() -> ?CONFIG. | |
54 | pause_on_blocked(Cfgs) -> | |
53 | pause_minority_on_blocked_with() -> ?CONFIG. | |
54 | pause_minority_on_blocked(Cfgs) -> | |
55 | 55 | [A, B, C] = [pget(node, Cfg) || Cfg <- Cfgs], |
56 | 56 | set_mode(Cfgs, pause_minority), |
57 | pause_on_blocked(A, B, C). | |
58 | ||
59 | pause_if_all_down_on_down_with() -> ?CONFIG. | |
60 | pause_if_all_down_on_down([_, CfgB, CfgC] = Cfgs) -> | |
61 | [A, B, C] = [pget(node, Cfg) || Cfg <- Cfgs], | |
62 | set_mode(Cfgs, {pause_if_all_down, [C], ignore}), | |
63 | [(true = is_running(N)) || N <- [A, B, C]], | |
64 | ||
65 | rabbit_test_util:kill(CfgB, sigkill), | |
66 | timer:sleep(?DELAY), | |
67 | [(true = is_running(N)) || N <- [A, C]], | |
68 | ||
69 | rabbit_test_util:kill(CfgC, sigkill), | |
70 | timer:sleep(?DELAY), | |
71 | await_running(A, false), | |
72 | ok. | |
73 | ||
74 | pause_if_all_down_on_blocked_with() -> ?CONFIG. | |
75 | pause_if_all_down_on_blocked(Cfgs) -> | |
76 | [A, B, C] = [pget(node, Cfg) || Cfg <- Cfgs], | |
77 | set_mode(Cfgs, {pause_if_all_down, [C], ignore}), | |
78 | pause_on_blocked(A, B, C). | |
79 | ||
80 | pause_on_blocked(A, B, C) -> | |
57 | 81 | [(true = is_running(N)) || N <- [A, B, C]], |
58 | 82 | block([{A, B}, {A, C}]), |
59 | 83 | await_running(A, false), |
76 | 100 | %% test to pass since there are a lot of things in the broker that can |
77 | 101 | %% suddenly take several seconds to time out when TCP connections |
78 | 102 | %% won't establish. |
79 | pause_false_promises_mirrored_with() -> | |
103 | pause_minority_false_promises_mirrored_with() -> | |
80 | 104 | [start_ab, fun enable_dist_proxy/1, |
81 | 105 | build_cluster, short_ticktime(10), start_connections, ha_policy_all]. |
82 | 106 | |
83 | pause_false_promises_mirrored(Cfgs) -> | |
84 | pause_false_promises(Cfgs). | |
85 | ||
86 | pause_false_promises_unmirrored_with() -> | |
107 | pause_minority_false_promises_mirrored(Cfgs) -> | |
108 | pause_false_promises(Cfgs, pause_minority). | |
109 | ||
110 | pause_minority_false_promises_unmirrored_with() -> | |
87 | 111 | [start_ab, fun enable_dist_proxy/1, |
88 | 112 | build_cluster, short_ticktime(10), start_connections]. |
89 | 113 | |
90 | pause_false_promises_unmirrored(Cfgs) -> | |
91 | pause_false_promises(Cfgs). | |
92 | ||
93 | pause_false_promises([CfgA, CfgB | _] = Cfgs) -> | |
114 | pause_minority_false_promises_unmirrored(Cfgs) -> | |
115 | pause_false_promises(Cfgs, pause_minority). | |
116 | ||
117 | pause_if_all_down_false_promises_mirrored_with() -> | |
118 | [start_ab, fun enable_dist_proxy/1, | |
119 | build_cluster, short_ticktime(10), start_connections, ha_policy_all]. | |
120 | ||
121 | pause_if_all_down_false_promises_mirrored([_, CfgB | _] = Cfgs) -> | |
122 | B = pget(node, CfgB), | |
123 | pause_false_promises(Cfgs, {pause_if_all_down, [B], ignore}). | |
124 | ||
125 | pause_if_all_down_false_promises_unmirrored_with() -> | |
126 | [start_ab, fun enable_dist_proxy/1, | |
127 | build_cluster, short_ticktime(10), start_connections]. | |
128 | ||
129 | pause_if_all_down_false_promises_unmirrored([_, CfgB | _] = Cfgs) -> | |
130 | B = pget(node, CfgB), | |
131 | pause_false_promises(Cfgs, {pause_if_all_down, [B], ignore}). | |
132 | ||
133 | pause_false_promises([CfgA, CfgB | _] = Cfgs, ClusterPartitionHandling) -> | |
94 | 134 | [A, B] = [pget(node, Cfg) || Cfg <- Cfgs], |
95 | set_mode([CfgA], pause_minority), | |
135 | set_mode([CfgA], ClusterPartitionHandling), | |
96 | 136 | ChA = pget(channel, CfgA), |
97 | 137 | ChB = pget(channel, CfgB), |
98 | 138 | amqp_channel:call(ChB, #'queue.declare'{queue = <<"test">>, |
172 | 212 | %% NB: we test full and partial partitions here. |
173 | 213 | autoheal_with() -> ?CONFIG. |
174 | 214 | autoheal(Cfgs) -> |
175 | [A, B, C] = [pget(node, Cfg) || Cfg <- Cfgs], | |
176 | 215 | set_mode(Cfgs, autoheal), |
216 | do_autoheal(Cfgs). | |
217 | ||
218 | autoheal_after_pause_if_all_down_with() -> ?CONFIG. | |
219 | autoheal_after_pause_if_all_down([_, CfgB, CfgC | _] = Cfgs) -> | |
220 | B = pget(node, CfgB), | |
221 | C = pget(node, CfgC), | |
222 | set_mode(Cfgs, {pause_if_all_down, [B, C], autoheal}), | |
223 | do_autoheal(Cfgs). | |
224 | ||
225 | do_autoheal(Cfgs) -> | |
226 | [A, B, C] = [pget(node, Cfg) || Cfg <- Cfgs], | |
177 | 227 | Test = fun (Pairs) -> |
178 | 228 | block_unblock(Pairs), |
179 | 229 | %% Sleep to make sure all the partitions are noticed |
180 | 230 | %% ?DELAY for the net_tick timeout |
181 | 231 | timer:sleep(?DELAY), |
182 | 232 | [await_listening(N, true) || N <- [A, B, C]], |
183 | [] = partitions(A), | |
184 | [] = partitions(B), | |
185 | [] = partitions(C) | |
233 | [await_partitions(N, []) || N <- [A, B, C]] | |
186 | 234 | end, |
187 | 235 | Test([{B, C}]), |
188 | 236 | Test([{A, C}, {B, C}]), |
224 | 272 | Partitions -> exit({partitions, Partitions}) |
225 | 273 | end. |
226 | 274 | |
227 | partial_pause_with() -> ?CONFIG. | |
228 | partial_pause(Cfgs) -> | |
275 | partial_pause_minority_with() -> ?CONFIG. | |
276 | partial_pause_minority(Cfgs) -> | |
229 | 277 | [A, B, C] = [pget(node, Cfg) || Cfg <- Cfgs], |
230 | 278 | set_mode(Cfgs, pause_minority), |
231 | 279 | block([{A, B}]), |
233 | 281 | await_running(C, true), |
234 | 282 | unblock([{A, B}]), |
235 | 283 | [await_listening(N, true) || N <- [A, B, C]], |
236 | [] = partitions(A), | |
237 | [] = partitions(B), | |
238 | [] = partitions(C), | |
284 | [await_partitions(N, []) || N <- [A, B, C]], | |
285 | ok. | |
286 | ||
287 | partial_pause_if_all_down_with() -> ?CONFIG. | |
288 | partial_pause_if_all_down(Cfgs) -> | |
289 | [A, B, C] = [pget(node, Cfg) || Cfg <- Cfgs], | |
290 | set_mode(Cfgs, {pause_if_all_down, [B], ignore}), | |
291 | block([{A, B}]), | |
292 | await_running(A, false), | |
293 | [await_running(N, true) || N <- [B, C]], | |
294 | unblock([{A, B}]), | |
295 | [await_listening(N, true) || N <- [A, B, C]], | |
296 | [await_partitions(N, []) || N <- [A, B, C]], | |
239 | 297 | ok. |
240 | 298 | |
241 | 299 | set_mode(Cfgs, Mode) -> |
270 | 328 | rpc:call(X, inet_tcp_proxy, allow, [Y]), |
271 | 329 | rpc:call(Y, inet_tcp_proxy, allow, [X]). |
272 | 330 | |
273 | await_running (Node, Bool) -> await(Node, Bool, fun is_running/1). | |
274 | await_listening(Node, Bool) -> await(Node, Bool, fun is_listening/1). | |
275 | ||
276 | await(Node, Bool, Fun) -> | |
331 | await_running (Node, Bool) -> await(Node, Bool, fun is_running/1). | |
332 | await_listening (Node, Bool) -> await(Node, Bool, fun is_listening/1). | |
333 | await_partitions(Node, Parts) -> await(Node, Parts, fun partitions/1). | |
334 | ||
335 | await(Node, Res, Fun) -> | |
277 | 336 | case Fun(Node) of |
278 | Bool -> ok; | |
279 | _ -> timer:sleep(100), | |
280 | await(Node, Bool, Fun) | |
337 | Res -> ok; | |
338 | _ -> timer:sleep(100), | |
339 | await(Node, Res, Fun) | |
281 | 340 | end. |
282 | 341 | |
283 | 342 | is_running(Node) -> rpc:call(Node, rabbit, is_running, []). |
0 | %% The contents of this file are subject to the Mozilla Public License | |
1 | %% Version 1.1 (the "License"); you may not use this file except in | |
2 | %% compliance with the License. You may obtain a copy of the License | |
3 | %% at http://www.mozilla.org/MPL/ | |
4 | %% | |
5 | %% Software distributed under the License is distributed on an "AS IS" | |
6 | %% basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See | |
7 | %% the License for the specific language governing rights and | |
8 | %% limitations under the License. | |
9 | %% | |
10 | %% The Original Code is RabbitMQ. | |
11 | %% | |
12 | %% The Initial Developer of the Original Code is GoPivotal, Inc. | |
13 | %% Copyright (c) 2007-2014 GoPivotal, Inc. All rights reserved. | |
14 | %% | |
15 | ||
16 | -module(rabbit_priority_queue_test). | |
17 | ||
18 | -compile(export_all). | |
19 | -include_lib("eunit/include/eunit.hrl"). | |
20 | -include_lib("amqp_client/include/amqp_client.hrl"). | |
21 | ||
22 | -import(rabbit_misc, [pget/2]). | |
23 | ||
24 | %% The BQ API is used in all sorts of places in all sorts of | |
25 | %% ways. Therefore we have to jump through a few different hoops | |
26 | %% in order to integration-test it. | |
27 | %% | |
28 | %% * start/1, stop/0, init/3, terminate/2, delete_and_terminate/2 | |
29 | %% - starting and stopping rabbit. durable queues / persistent msgs needed | |
30 | %% to test recovery | |
31 | %% | |
32 | %% * publish/5, drain_confirmed/1, fetch/2, ack/2, is_duplicate/2, msg_rates/1, | |
33 | %% needs_timeout/1, timeout/1, invoke/3, resume/1 [0] | |
34 | %% - regular publishing and consuming, with confirms and acks and durability | |
35 | %% | |
36 | %% * publish_delivered/4 - publish with acks straight through | |
37 | %% * discard/3 - publish without acks straight through | |
38 | %% * dropwhile/2 - expire messages without DLX | |
39 | %% * fetchwhile/4 - expire messages with DLX | |
40 | %% * ackfold/4 - reject messages with DLX | |
41 | %% * requeue/2 - reject messages without DLX | |
42 | %% * drop/2 - maxlen messages without DLX | |
43 | %% * purge/1 - issue AMQP queue.purge | |
44 | %% * purge_acks/1 - mirror queue explicit sync with unacked msgs | |
45 | %% * fold/3 - mirror queue explicit sync | |
46 | %% * depth/1 - mirror queue implicit sync detection | |
47 | %% * len/1, is_empty/1 - info items | |
48 | %% * handle_pre_hibernate/1 - hibernation | |
49 | %% | |
50 | %% * set_ram_duration_target/2, ram_duration/1, status/1 | |
51 | %% - maybe need unit testing? | |
52 | %% | |
53 | %% [0] publish enough to get credit flow from msg store | |
54 | ||
55 | recovery_test() -> | |
56 | {Conn, Ch} = open(), | |
57 | Q = <<"test">>, | |
58 | declare(Ch, Q, 3), | |
59 | publish(Ch, Q, [1, 2, 3, 1, 2, 3, 1, 2, 3]), | |
60 | amqp_connection:close(Conn), | |
61 | ||
62 | %% TODO these break coverage | |
63 | rabbit:stop(), | |
64 | rabbit:start(), | |
65 | ||
66 | {Conn2, Ch2} = open(), | |
67 | get_all(Ch2, Q, do_ack, [3, 3, 3, 2, 2, 2, 1, 1, 1]), | |
68 | delete(Ch2, Q), | |
69 | amqp_connection:close(Conn2), | |
70 | passed. | |
71 | ||
72 | simple_order_test() -> | |
73 | {Conn, Ch} = open(), | |
74 | Q = <<"test">>, | |
75 | declare(Ch, Q, 3), | |
76 | publish(Ch, Q, [1, 2, 3, 1, 2, 3, 1, 2, 3]), | |
77 | get_all(Ch, Q, do_ack, [3, 3, 3, 2, 2, 2, 1, 1, 1]), | |
78 | publish(Ch, Q, [2, 3, 1, 2, 3, 1, 2, 3, 1]), | |
79 | get_all(Ch, Q, no_ack, [3, 3, 3, 2, 2, 2, 1, 1, 1]), | |
80 | publish(Ch, Q, [3, 1, 2, 3, 1, 2, 3, 1, 2]), | |
81 | get_all(Ch, Q, do_ack, [3, 3, 3, 2, 2, 2, 1, 1, 1]), | |
82 | delete(Ch, Q), | |
83 | amqp_connection:close(Conn), | |
84 | passed. | |
85 | ||
86 | matching_test() -> | |
87 | {Conn, Ch} = open(), | |
88 | Q = <<"test">>, | |
89 | declare(Ch, Q, 5), | |
90 | %% We round priority down, and 0 is the default | |
91 | publish(Ch, Q, [undefined, 0, 5, 10, undefined]), | |
92 | get_all(Ch, Q, do_ack, [5, 10, undefined, 0, undefined]), | |
93 | delete(Ch, Q), | |
94 | amqp_connection:close(Conn), | |
95 | passed. | |
96 | ||
97 | resume_test() -> | |
98 | {Conn, Ch} = open(), | |
99 | Q = <<"test">>, | |
100 | declare(Ch, Q, 5), | |
101 | amqp_channel:call(Ch, #'confirm.select'{}), | |
102 | publish_many(Ch, Q, 10000), | |
103 | amqp_channel:wait_for_confirms(Ch), | |
104 | amqp_channel:call(Ch, #'queue.purge'{queue = Q}), %% Assert it exists | |
105 | delete(Ch, Q), | |
106 | amqp_connection:close(Conn), | |
107 | passed. | |
108 | ||
109 | straight_through_test() -> | |
110 | {Conn, Ch} = open(), | |
111 | Q = <<"test">>, | |
112 | declare(Ch, Q, 3), | |
113 | [begin | |
114 | consume(Ch, Q, Ack), | |
115 | [begin | |
116 | publish1(Ch, Q, P), | |
117 | assert_delivered(Ch, Ack, P) | |
118 | end || P <- [1, 2, 3]], | |
119 | cancel(Ch) | |
120 | end || Ack <- [do_ack, no_ack]], | |
121 | get_empty(Ch, Q), | |
122 | delete(Ch, Q), | |
123 | amqp_connection:close(Conn), | |
124 | passed. | |
125 | ||
126 | dropwhile_fetchwhile_test() -> | |
127 | {Conn, Ch} = open(), | |
128 | Q = <<"test">>, | |
129 | [begin | |
130 | declare(Ch, Q, Args ++ arguments(3)), | |
131 | publish(Ch, Q, [1, 2, 3, 1, 2, 3, 1, 2, 3]), | |
132 | timer:sleep(10), | |
133 | get_empty(Ch, Q), | |
134 | delete(Ch, Q) | |
135 | end || | |
136 | Args <- [[{<<"x-message-ttl">>, long, 1}], | |
137 | [{<<"x-message-ttl">>, long, 1}, | |
138 | {<<"x-dead-letter-exchange">>, longstr, <<"amq.fanout">>}] | |
139 | ]], | |
140 | amqp_connection:close(Conn), | |
141 | passed. | |
142 | ||
143 | ackfold_test() -> | |
144 | {Conn, Ch} = open(), | |
145 | Q = <<"test">>, | |
146 | Q2 = <<"test2">>, | |
147 | declare(Ch, Q, | |
148 | [{<<"x-dead-letter-exchange">>, longstr, <<>>}, | |
149 | {<<"x-dead-letter-routing-key">>, longstr, Q2} | |
150 | | arguments(3)]), | |
151 | declare(Ch, Q2, none), | |
152 | publish(Ch, Q, [1, 2, 3]), | |
153 | [_, _, DTag] = get_all(Ch, Q, manual_ack, [3, 2, 1]), | |
154 | amqp_channel:cast(Ch, #'basic.nack'{delivery_tag = DTag, | |
155 | multiple = true, | |
156 | requeue = false}), | |
157 | timer:sleep(100), | |
158 | get_all(Ch, Q2, do_ack, [3, 2, 1]), | |
159 | delete(Ch, Q), | |
160 | delete(Ch, Q2), | |
161 | amqp_connection:close(Conn), | |
162 | passed. | |
163 | ||
164 | requeue_test() -> | |
165 | {Conn, Ch} = open(), | |
166 | Q = <<"test">>, | |
167 | declare(Ch, Q, 3), | |
168 | publish(Ch, Q, [1, 2, 3]), | |
169 | [_, _, DTag] = get_all(Ch, Q, manual_ack, [3, 2, 1]), | |
170 | amqp_channel:cast(Ch, #'basic.nack'{delivery_tag = DTag, | |
171 | multiple = true, | |
172 | requeue = true}), | |
173 | get_all(Ch, Q, do_ack, [3, 2, 1]), | |
174 | delete(Ch, Q), | |
175 | amqp_connection:close(Conn), | |
176 | passed. | |
177 | ||
178 | drop_test() -> | |
179 | {Conn, Ch} = open(), | |
180 | Q = <<"test">>, | |
181 | declare(Ch, Q, [{<<"x-max-length">>, long, 4} | arguments(3)]), | |
182 | publish(Ch, Q, [1, 2, 3, 1, 2, 3, 1, 2, 3]), | |
183 | %% We drop from the head, so this is according to the "spec" even | |
184 | %% if not likely to be what the user wants. | |
185 | get_all(Ch, Q, do_ack, [2, 1, 1, 1]), | |
186 | delete(Ch, Q), | |
187 | amqp_connection:close(Conn), | |
188 | passed. | |
189 | ||
190 | purge_test() -> | |
191 | {Conn, Ch} = open(), | |
192 | Q = <<"test">>, | |
193 | declare(Ch, Q, 3), | |
194 | publish(Ch, Q, [1, 2, 3]), | |
195 | amqp_channel:call(Ch, #'queue.purge'{queue = Q}), | |
196 | get_empty(Ch, Q), | |
197 | delete(Ch, Q), | |
198 | amqp_connection:close(Conn), | |
199 | passed. | |
200 | ||
201 | ram_duration_test() -> | |
202 | QName = rabbit_misc:r(<<"/">>, queue, <<"pseudo">>), | |
203 | Q0 = rabbit_amqqueue:pseudo_queue(QName, self()), | |
204 | Q = Q0#amqqueue{arguments = [{<<"x-max-priority">>, long, 5}]}, | |
205 | PQ = rabbit_priority_queue, | |
206 | BQS1 = PQ:init(Q, new, fun(_, _) -> ok end), | |
207 | {Duration1, BQS2} = PQ:ram_duration(BQS1), | |
208 | BQS3 = PQ:set_ram_duration_target(infinity, BQS2), | |
209 | BQS4 = PQ:set_ram_duration_target(1, BQS3), | |
210 | {Duration2, BQS5} = PQ:ram_duration(BQS4), | |
211 | PQ:delete_and_terminate(a_whim, BQS5), | |
212 | passed. | |
213 | ||
214 | mirror_queue_sync_with() -> cluster_ab. | |
215 | mirror_queue_sync([CfgA, _CfgB]) -> | |
216 | Ch = pget(channel, CfgA), | |
217 | Q = <<"test">>, | |
218 | declare(Ch, Q, 3), | |
219 | publish(Ch, Q, [1, 2, 3]), | |
220 | ok = rabbit_test_util:set_ha_policy(CfgA, <<".*">>, <<"all">>), | |
221 | publish(Ch, Q, [1, 2, 3, 1, 2, 3]), | |
222 | %% master now has 9, slave 6. | |
223 | get_partial(Ch, Q, manual_ack, [3, 3, 3, 2, 2, 2]), | |
224 | %% So some but not all are unacked at the slave | |
225 | rabbit_test_util:control_action(sync_queue, CfgA, [binary_to_list(Q)], | |
226 | [{"-p", "/"}]), | |
227 | wait_for_sync(CfgA, rabbit_misc:r(<<"/">>, queue, Q)), | |
228 | passed. | |
229 | ||
230 | %%---------------------------------------------------------------------------- | |
231 | ||
232 | open() -> | |
233 | {ok, Conn} = amqp_connection:start(#amqp_params_network{}), | |
234 | {ok, Ch} = amqp_connection:open_channel(Conn), | |
235 | {Conn, Ch}. | |
236 | ||
237 | declare(Ch, Q, Args) when is_list(Args) -> | |
238 | amqp_channel:call(Ch, #'queue.declare'{queue = Q, | |
239 | durable = true, | |
240 | arguments = Args}); | |
241 | declare(Ch, Q, Max) -> | |
242 | declare(Ch, Q, arguments(Max)). | |
243 | ||
244 | delete(Ch, Q) -> | |
245 | amqp_channel:call(Ch, #'queue.delete'{queue = Q}). | |
246 | ||
247 | publish(Ch, Q, Ps) -> | |
248 | amqp_channel:call(Ch, #'confirm.select'{}), | |
249 | [publish1(Ch, Q, P) || P <- Ps], | |
250 | amqp_channel:wait_for_confirms(Ch). | |
251 | ||
252 | publish_many(_Ch, _Q, 0) -> ok; | |
253 | publish_many( Ch, Q, N) -> publish1(Ch, Q, random:uniform(5)), | |
254 | publish_many(Ch, Q, N - 1). | |
255 | ||
256 | publish1(Ch, Q, P) -> | |
257 | amqp_channel:cast(Ch, #'basic.publish'{routing_key = Q}, | |
258 | #amqp_msg{props = props(P), | |
259 | payload = priority2bin(P)}). | |
260 | ||
261 | props(undefined) -> #'P_basic'{delivery_mode = 2}; | |
262 | props(P) -> #'P_basic'{priority = P, | |
263 | delivery_mode = 2}. | |
264 | ||
265 | consume(Ch, Q, Ack) -> | |
266 | amqp_channel:subscribe(Ch, #'basic.consume'{queue = Q, | |
267 | no_ack = Ack =:= no_ack, | |
268 | consumer_tag = <<"ctag">>}, | |
269 | self()), | |
270 | receive | |
271 | #'basic.consume_ok'{consumer_tag = <<"ctag">>} -> | |
272 | ok | |
273 | end. | |
274 | ||
275 | cancel(Ch) -> | |
276 | amqp_channel:call(Ch, #'basic.cancel'{consumer_tag = <<"ctag">>}). | |
277 | ||
278 | assert_delivered(Ch, Ack, P) -> | |
279 | PBin = priority2bin(P), | |
280 | receive | |
281 | {#'basic.deliver'{delivery_tag = DTag}, #amqp_msg{payload = PBin2}} -> | |
282 | ?assertEqual(PBin, PBin2), | |
283 | maybe_ack(Ch, Ack, DTag) | |
284 | end. | |
285 | ||
286 | get_all(Ch, Q, Ack, Ps) -> | |
287 | DTags = get_partial(Ch, Q, Ack, Ps), | |
288 | get_empty(Ch, Q), | |
289 | DTags. | |
290 | ||
291 | get_partial(Ch, Q, Ack, Ps) -> | |
292 | [get_ok(Ch, Q, Ack, P) || P <- Ps]. | |
293 | ||
294 | get_empty(Ch, Q) -> | |
295 | #'basic.get_empty'{} = amqp_channel:call(Ch, #'basic.get'{queue = Q}). | |
296 | ||
297 | get_ok(Ch, Q, Ack, P) -> | |
298 | PBin = priority2bin(P), | |
299 | {#'basic.get_ok'{delivery_tag = DTag}, #amqp_msg{payload = PBin2}} = | |
300 | amqp_channel:call(Ch, #'basic.get'{queue = Q, | |
301 | no_ack = Ack =:= no_ack}), | |
302 | ?assertEqual(PBin, PBin2), | |
303 | maybe_ack(Ch, Ack, DTag). | |
304 | ||
305 | maybe_ack(Ch, do_ack, DTag) -> | |
306 | amqp_channel:cast(Ch, #'basic.ack'{delivery_tag = DTag}), | |
307 | DTag; | |
308 | maybe_ack(_Ch, _, DTag) -> | |
309 | DTag. | |
310 | ||
311 | arguments(none) -> []; | |
312 | arguments(Max) -> [{<<"x-max-priority">>, byte, Max}]. | |
313 | ||
314 | priority2bin(undefined) -> <<"undefined">>; | |
315 | priority2bin(Int) -> list_to_binary(integer_to_list(Int)). | |
316 | ||
317 | %%---------------------------------------------------------------------------- | |
318 | ||
319 | wait_for_sync(Cfg, Q) -> | |
320 | case synced(Cfg, Q) of | |
321 | true -> ok; | |
322 | false -> timer:sleep(100), | |
323 | wait_for_sync(Cfg, Q) | |
324 | end. | |
325 | ||
326 | synced(Cfg, Q) -> | |
327 | Info = rpc:call(pget(node, Cfg), | |
328 | rabbit_amqqueue, info_all, | |
329 | [<<"/">>, [name, synchronised_slave_pids]]), | |
330 | [SSPids] = [Pids || [{name, Q1}, {synchronised_slave_pids, Pids}] <- Info, | |
331 | Q =:= Q1], | |
332 | length(SSPids) =:= 1. | |
333 | ||
334 | %%---------------------------------------------------------------------------- |
33 | 33 | amqp_channel:call(Ch, #'queue.delete'{queue = Queue}) |
34 | 34 | end || _I <- lists:seq(1, 20)], |
35 | 35 | ok. |
36 | ||
37 | %% Check that by the time we get a declare-ok back, the slaves are up | |
38 | %% and in Mnesia. | |
39 | declare_synchrony_with() -> [cluster_ab, ha_policy_all]. | |
40 | declare_synchrony([Rabbit, Hare]) -> | |
41 | RabbitCh = pget(channel, Rabbit), | |
42 | HareCh = pget(channel, Hare), | |
43 | Q = <<"mirrored-queue">>, | |
44 | declare(RabbitCh, Q), | |
45 | amqp_channel:call(RabbitCh, #'confirm.select'{}), | |
46 | amqp_channel:cast(RabbitCh, #'basic.publish'{routing_key = Q}, | |
47 | #amqp_msg{props = #'P_basic'{delivery_mode = 2}}), | |
48 | amqp_channel:wait_for_confirms(RabbitCh), | |
49 | _Rabbit2 = rabbit_test_configs:kill_node(Rabbit), | |
50 | ||
51 | #'queue.declare_ok'{message_count = 1} = declare(HareCh, Q), | |
52 | ok. | |
53 | ||
54 | declare(Ch, Name) -> | |
55 | amqp_channel:call(Ch, #'queue.declare'{durable = true, queue = Name}). | |
36 | 56 | |
37 | 57 | consume_survives_stop_with() -> ?CONFIG. |
38 | 58 | consume_survives_sigkill_with() -> ?CONFIG. |
0 | ## Overview | |
1 | ||
2 | RabbitMQ projects use pull requests to discuss, collaborate on and accept code contributions. | |
3 | Pull requests are the primary place for discussing code changes. | 
4 | ||
5 | ## How to Contribute | |
6 | ||
7 | The process is fairly standard: | |
8 | ||
9 | * Fork the repository or repositories you plan on contributing to | |
10 | * Clone [RabbitMQ umbrella repository](https://github.com/rabbitmq/rabbitmq-public-umbrella) | |
11 | * `cd umbrella`, `make co` | |
12 | * Create a branch with a descriptive name in the relevant repositories | |
13 | * Make your changes, run tests, commit with a [descriptive message](http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html), push to your fork | |
14 | * Submit pull requests with an explanation of what has been changed and **why** | 
15 | * Submit a filled out and signed [Contributor Agreement](https://github.com/rabbitmq/ca#how-to-submit) if needed (see below) | |
16 | * Be patient. We will get to your pull request eventually | |
17 | ||
18 | If what you are going to work on is a substantial change, please first ask the core team | |
19 | for their opinion on the [RabbitMQ mailing list](https://groups.google.com/forum/#!forum/rabbitmq-users). | 
20 | ||
21 | ||
22 | ## (Brief) Code of Conduct | |
23 | ||
24 | In one line: don't be a dick. | |
25 | ||
26 | Be respectful to the maintainers and other contributors. Open source | |
27 | contributors put long hours into developing projects and doing user | |
28 | support. Those projects and user support are available for free. We | |
29 | believe this deserves some respect. | |
30 | ||
31 | Be respectful to people of all races, genders, religious beliefs and | |
32 | political views. Regardless of how brilliant a pull request is | |
33 | technically, we will not tolerate disrespectful or aggressive | |
34 | behaviour. | |
35 | ||
36 | Contributors who violate this straightforward Code of Conduct will see | |
37 | their pull requests closed and locked. | |
38 | ||
39 | ||
40 | ## Contributor Agreement | |
41 | ||
42 | If you want to contribute a non-trivial change, please submit a signed copy of our | |
43 | [Contributor Agreement](https://github.com/rabbitmq/ca#how-to-submit) around the time | |
44 | you submit your pull request. This will make it much easier (in some cases, possible) | |
45 | for the RabbitMQ team at Pivotal to merge your contribution. | |
46 | ||
47 | ||
48 | ## Where to Ask Questions | |
49 | ||
50 | If something isn't clear, feel free to ask on our [mailing list](https://groups.google.com/forum/#!forum/rabbitmq-users). |
35 | 35 | |
36 | 36 | $ curl -i -u guest:guest -H "content-type:application/json" -XPUT \ |
37 | 37 | http://localhost:55672/api/traces/%2f/my-trace \ |
38 | -d'{"format":"text","pattern":"#"}' | |
38 | -d'{"format":"text","pattern":"#", "max_payload_bytes":1000}' | |
39 | 39 | |
40 | max_payload_bytes is optional (omit it to prevent payload truncation);
41 | format and pattern are mandatory.⏎ |
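The request-body rules above (format and pattern mandatory, max_payload_bytes optional and simply omitted when no truncation is wanted) can be sketched in Python. This is an illustrative helper, not part of RabbitMQ; the name `trace_body` is hypothetical.

```python
import json

def trace_body(fmt, pattern, max_payload_bytes=None):
    """Build the JSON body for PUT /api/traces/<vhost>/<name>.

    "format" and "pattern" are always present; "max_payload_bytes" is
    included only when a limit is given, mirroring the documented API:
    leaving it out disables payload truncation entirely.
    """
    body = {"format": fmt, "pattern": pattern}
    if max_payload_bytes is not None:
        body["max_payload_bytes"] = int(max_payload_bytes)
    return json.dumps(body, sort_keys=True)
```

The management UI takes the same approach: an empty "Max payload bytes" field is deleted from the parameters before the PUT rather than sent as an empty string.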
15 | 15 | <th>Name</th> |
16 | 16 | <th>Pattern</th> |
17 | 17 | <th>Format</th> |
18 | <th>Payload limit</th> | |
18 | 19 | <th>Rate</th> |
19 | 20 | <th>Queued</th> |
20 | 21 | <th></th> |
32 | 33 | <td><%= fmt_string(trace.name) %></td> |
33 | 34 | <td><%= fmt_string(trace.pattern) %></td> |
34 | 35 | <td><%= fmt_string(trace.format) %></td> |
36 | <td class="c"><%= fmt_string(trace.max_payload_bytes, 'Unlimited') %></td> | |
35 | 37 | <% if (trace.queue) { %> |
36 | 38 | <td class="r"> |
37 | <%= fmt_rate(trace.queue.message_stats, 'ack', false) %> | |
39 | <%= fmt_detail_rate(trace.queue.message_stats, 'deliver_no_ack') %> | |
38 | 40 | </td> |
39 | 41 | <td class="r"> |
40 | 42 | <%= trace.queue.messages %> |
131 | 133 | </td> |
132 | 134 | </tr> |
133 | 135 | <tr> |
136 | <th><label>Max payload bytes: <span class="help" id="tracing-max-payload"></span></label></th> | |
137 | <td> | |
138 | <input type="text" name="max_payload_bytes" value=""/> | |
139 | </td> | |
140 | </tr> | |
141 | <tr> | |
134 | 142 | <th><label>Pattern:</label></th> |
135 | 143 | <td> |
136 | 144 | <input type="text" name="pattern" value="#"/> |
10 | 10 | 'trace', '#/traces'); |
11 | 11 | }); |
12 | 12 | sammy.put('#/traces', function() { |
13 | if (this.params['max_payload_bytes'] === '') { | |
14 | delete this.params['max_payload_bytes']; | |
15 | } | |
16 | else { | |
17 | this.params['max_payload_bytes'] = | |
18 | parseInt(this.params['max_payload_bytes']); | |
19 | } | |
13 | 20 | if (sync_put(this, '/traces/:vhost/:name')) |
14 | 21 | update(); |
15 | 22 | return false; |
28 | 35 | |
29 | 36 | NAVIGATION['Admin'][0]['Tracing'] = ['#/traces', 'administrator']; |
30 | 37 | |
38 | HELP['tracing-max-payload'] = | |
39 | 'Maximum size of payload to log, in bytes. Payloads larger than this limit will be truncated. Leave blank to prevent truncation. Set to 0 to prevent logging of payload altogether.'; | |
40 | ||
31 | 41 | function link_trace(name) { |
32 | 42 | return _link_to(name, 'api/trace-files/' + esc(name)); |
33 | 43 | } |
21 | 21 | |
22 | 22 | -import(rabbit_misc, [pget/2, pget/3, table_lookup/2]). |
23 | 23 | |
24 | -record(state, {conn, ch, vhost, queue, file, filename, format}). | |
24 | -record(state, {conn, ch, vhost, queue, file, filename, format, buf, buf_cnt, | |
25 | max_payload}). | |
25 | 26 | -record(log_record, {timestamp, type, exchange, queue, node, connection, |
26 | vhost, username, channel, routing_keys, | |
27 | vhost, username, channel, routing_keys, routed_queues, | |
27 | 28 | properties, payload}). |
28 | 29 | |
29 | 30 | -define(X, <<"amq.rabbitmq.trace">>). |
31 | -define(MAX_BUF, 100). | |
30 | 32 | |
31 | 33 | -export([start_link/1, info_all/1]). |
32 | 34 | -export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2, |
44 | 46 | process_flag(trap_exit, true), |
45 | 47 | Name = pget(name, Args), |
46 | 48 | VHost = pget(vhost, Args), |
49 | MaxPayload = pget(max_payload_bytes, Args, unlimited), | |
47 | 50 | {ok, Conn} = amqp_connection:start( |
48 | 51 | #amqp_params_direct{virtual_host = VHost}), |
49 | 52 | link(Conn), |
56 | 59 | amqp_channel:call( |
57 | 60 | Ch, #'queue.bind'{exchange = ?X, queue = Q, |
58 | 61 | routing_key = pget(pattern, Args)}), |
59 | #'basic.qos_ok'{} = | |
60 | amqp_channel:call(Ch, #'basic.qos'{prefetch_count = 10}), | |
62 | amqp_channel:enable_delivery_flow_control(Ch), | |
61 | 63 | #'basic.consume_ok'{} = |
62 | 64 | amqp_channel:subscribe(Ch, #'basic.consume'{queue = Q, |
63 | no_ack = false}, self()), | |
65 | no_ack = true}, self()), | |
64 | 66 | {ok, Dir} = application:get_env(directory), |
65 | 67 | Filename = Dir ++ "/" ++ binary_to_list(Name) ++ ".log", |
66 | 68 | case filelib:ensure_dir(Filename) of |
67 | 69 | ok -> |
68 | case file:open(Filename, [append]) of | |
70 | case prim_file:open(Filename, [append]) of | |
69 | 71 | {ok, F} -> |
70 | 72 | rabbit_tracing_traces:announce(VHost, Name, self()), |
71 | 73 | Format = list_to_atom(binary_to_list(pget(format, Args))), |
73 | 75 | "format ~p~n", [Filename, Format]), |
74 | 76 | {ok, #state{conn = Conn, ch = Ch, vhost = VHost, queue = Q, |
75 | 77 | file = F, filename = Filename, |
76 | format = Format}}; | |
78 | format = Format, buf = [], buf_cnt = 0, | |
79 | max_payload = MaxPayload}}; | |
77 | 80 | {error, E} -> |
78 | 81 | {stop, {could_not_open, Filename, E}} |
79 | 82 | end; |
93 | 96 | handle_cast(_C, State) -> |
94 | 97 | {noreply, State}. |
95 | 98 | |
96 | handle_info(Delivery = {#'basic.deliver'{delivery_tag = Seq}, #amqp_msg{}}, | |
97 | State = #state{ch = Ch, file = F, format = Format}) -> | |
98 | Print = fun(Fmt, Args) -> io:format(F, Fmt, Args) end, | |
99 | log(Format, Print, delivery_to_log_record(Delivery)), | |
100 | amqp_channel:cast(Ch, #'basic.ack'{delivery_tag = Seq}), | |
101 | {noreply, State}; | |
99 | handle_info({BasicDeliver, Msg, DeliveryCtx}, | |
100 | State = #state{format = Format}) -> | |
101 | amqp_channel:notify_received(DeliveryCtx), | |
102 | {noreply, log(Format, delivery_to_log_record({BasicDeliver, Msg}, State), | |
103 | State), | |
104 | 0}; | |
105 | ||
106 | handle_info(timeout, State) -> | |
107 | {noreply, flush(State)}; | |
102 | 108 | |
103 | 109 | handle_info(_I, State) -> |
104 | 110 | {noreply, State}. |
105 | 111 | |
106 | terminate(shutdown, #state{conn = Conn, ch = Ch, | |
107 | file = F, filename = Filename}) -> | |
112 | terminate(shutdown, State = #state{conn = Conn, ch = Ch, | |
113 | file = F, filename = Filename}) -> | |
114 | flush(State), | |
108 | 115 | catch amqp_channel:close(Ch), |
109 | 116 | catch amqp_connection:close(Conn), |
110 | catch file:close(F), | |
117 | catch prim_file:close(F), | |
111 | 118 | rabbit_log:info("Tracer closed log file ~p~n", [Filename]), |
112 | 119 | ok; |
113 | 120 | |
120 | 127 | |
121 | 128 | delivery_to_log_record({#'basic.deliver'{routing_key = Key}, |
122 | 129 | #amqp_msg{props = #'P_basic'{headers = H}, |
123 | payload = Payload}}) -> | |
124 | {Type, Q} = case Key of | |
125 | <<"publish.", _Rest/binary>> -> {published, none}; | |
126 | <<"deliver.", Rest/binary>> -> {received, Rest} | |
127 | end, | |
130 | payload = Payload}}, State) -> | |
131 | {Type, Q, RQs} = case Key of | |
132 | <<"publish.", _Rest/binary>> -> | |
133 | {array, Qs} = table_lookup(H, <<"routed_queues">>), | |
134 | {published, none, [Q || {_, Q} <- Qs]}; | |
135 | <<"deliver.", Rest/binary>> -> | |
136 | {received, Rest, none} | |
137 | end, | |
128 | 138 | {longstr, Node} = table_lookup(H, <<"node">>), |
129 | 139 | {longstr, X} = table_lookup(H, <<"exchange_name">>), |
130 | 140 | {array, Keys} = table_lookup(H, <<"routing_keys">>), |
143 | 153 | username = User, |
144 | 154 | channel = Chan, |
145 | 155 | routing_keys = [K || {_, K} <- Keys], |
156 | routed_queues= RQs, | |
146 | 157 | properties = Props, |
147 | payload = Payload}. | |
148 | ||
149 | log(text, P, Record) -> | |
150 | P("~n~s~n", [string:copies("=", 80)]), | |
151 | P("~s: ", [Record#log_record.timestamp]), | |
152 | case Record#log_record.type of | |
153 | published -> P("Message published~n~n", []); | |
154 | received -> P("Message received~n~n", []) | |
155 | end, | |
156 | P("Node: ~s~n", [Record#log_record.node]), | |
157 | P("Connection: ~s~n", [Record#log_record.connection]), | |
158 | P("Virtual host: ~s~n", [Record#log_record.vhost]), | |
159 | P("User: ~s~n", [Record#log_record.username]), | |
160 | P("Channel: ~p~n", [Record#log_record.channel]), | |
161 | P("Exchange: ~s~n", [Record#log_record.exchange]), | |
162 | case Record#log_record.queue of | |
163 | none -> ok; | |
164 | Q -> P("Queue: ~s~n", [Q]) | |
165 | end, | |
166 | P("Routing keys: ~p~n", [Record#log_record.routing_keys]), | |
167 | P("Properties: ~p~n", [Record#log_record.properties]), | |
168 | P("Payload: ~n~s~n", [Record#log_record.payload]); | |
169 | ||
170 | log(json, P, Record) -> | |
171 | P("~s~n", [mochijson2:encode( | |
172 | [{timestamp, Record#log_record.timestamp}, | |
173 | {type, Record#log_record.type}, | |
174 | {node, Record#log_record.node}, | |
175 | {connection, Record#log_record.connection}, | |
176 | {vhost, Record#log_record.vhost}, | |
177 | {user, Record#log_record.username}, | |
178 | {channel, Record#log_record.channel}, | |
179 | {exchange, Record#log_record.exchange}, | |
180 | {queue, Record#log_record.queue}, | |
181 | {routing_keys, Record#log_record.routing_keys}, | |
182 | {properties, rabbit_mgmt_format:amqp_table( | |
158 | payload = truncate(Payload, State)}. | |
159 | ||
160 | log(text, Record, State) -> | |
161 | Fmt = "~n========================================" | |
162 | "========================================~n~s: Message ~s~n~n" | |
163 | "Node: ~s~nConnection: ~s~n" | |
164 | "Virtual host: ~s~nUser: ~s~n" | |
165 | "Channel: ~p~nExchange: ~s~n" | |
166 | "Routing keys: ~p~n" ++ | |
167 | case Record#log_record.queue of | |
168 | none -> ""; | |
169 | _ -> "Queue: ~s~n" | |
170 | end ++ | |
171 | case Record#log_record.routed_queues of | |
172 | none -> ""; | |
173 | _ -> "Routed queues: ~p~n" | |
174 | end ++ | |
175 | "Properties: ~p~nPayload: ~n~s~n", | |
176 | Args = | |
177 | [Record#log_record.timestamp, | |
178 | Record#log_record.type, | |
179 | Record#log_record.node, Record#log_record.connection, | |
180 | Record#log_record.vhost, Record#log_record.username, | |
181 | Record#log_record.channel, Record#log_record.exchange, | |
182 | Record#log_record.routing_keys] ++ | |
183 | case Record#log_record.queue of | |
184 | none -> []; | |
185 | Q -> [Q] | |
186 | end ++ | |
187 | case Record#log_record.routed_queues of | |
188 | none -> []; | |
189 | RQs -> [RQs] | |
190 | end ++ | |
191 | [Record#log_record.properties, Record#log_record.payload], | |
192 | print_log(io_lib:format(Fmt, Args), State); | |
193 | ||
194 | log(json, Record, State) -> | |
195 | print_log(mochijson2:encode( | |
196 | [{timestamp, Record#log_record.timestamp}, | |
197 | {type, Record#log_record.type}, | |
198 | {node, Record#log_record.node}, | |
199 | {connection, Record#log_record.connection}, | |
200 | {vhost, Record#log_record.vhost}, | |
201 | {user, Record#log_record.username}, | |
202 | {channel, Record#log_record.channel}, | |
203 | {exchange, Record#log_record.exchange}, | |
204 | {queue, Record#log_record.queue}, | |
205 | {routed_queues, Record#log_record.routed_queues}, | |
206 | {routing_keys, Record#log_record.routing_keys}, | |
207 | {properties, rabbit_mgmt_format:amqp_table( | |
183 | 208 | Record#log_record.properties)}, |
184 | {payload, base64:encode(Record#log_record.payload)}])]). | |
209 | {payload, base64:encode(Record#log_record.payload)}]) | |
210 | ++ "\n", | |
211 | State). | |
212 | ||
213 | print_log(LogMsg, State = #state{buf = Buf, buf_cnt = BufCnt}) -> | |
214 | maybe_flush(State#state{buf = [LogMsg | Buf], buf_cnt = BufCnt + 1}). | |
215 | ||
216 | maybe_flush(State = #state{buf_cnt = ?MAX_BUF}) -> | |
217 | flush(State); | |
218 | maybe_flush(State) -> | |
219 | State. | |
220 | ||
221 | flush(State = #state{file = F, buf = Buf}) -> | |
222 | prim_file:write(F, lists:reverse(Buf)), | |
223 | State#state{buf = [], buf_cnt = 0}. | |
224 | ||
225 | truncate(Payload, #state{max_payload = Max}) -> | |
226 | case Max =:= unlimited orelse size(Payload) =< Max of | |
227 | true -> Payload; | |
228 | false -> <<Trunc:Max/binary, _/binary>> = Payload, | |
229 | Trunc | |
230 | end. |
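The `print_log`/`maybe_flush`/`flush` and `truncate` clauses above implement two small ideas: batch log messages in memory and write them out once a count threshold is hit, and cap payloads at a configurable byte limit (with `unlimited` disabling the cap). A minimal Python sketch of the same pattern — names and the threshold value are illustrative, not part of the plugin:

```python
MAX_BUF = 100  # flush threshold, standing in for ?MAX_BUF (illustrative value)

class LogBuffer:
    """Accumulate formatted log messages; flush to a sink in batches."""
    def __init__(self, sink, max_buf=MAX_BUF):
        self.sink = sink          # any callable accepting a string
        self.max_buf = max_buf
        self.buf = []

    def print_log(self, msg):
        self.buf.append(msg)
        if len(self.buf) >= self.max_buf:
            self.flush()

    def flush(self):
        # The Erlang code prepends then lists:reverse/1-s on flush;
        # appending here keeps the same chronological order.
        self.sink("".join(self.buf))
        self.buf = []

def truncate(payload: bytes, max_payload) -> bytes:
    """Keep at most max_payload bytes; 'unlimited' disables truncation."""
    if max_payload == "unlimited" or len(payload) <= max_payload:
        return payload
    return payload[:max_payload]
```

Batching amortises the cost of each file write across many trace messages, at the price of losing up to `max_buf - 1` messages if the node stops before a flush.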
22 | 22 | -define(ERR, <<"Something went wrong trying to start the trace - check the " |
23 | 23 | "logs.">>). |
24 | 24 | |
25 | -import(rabbit_misc, [pget/2, pget/3]). | |
26 | ||
25 | 27 | -include_lib("rabbitmq_management/include/rabbit_mgmt.hrl"). |
26 | 28 | -include_lib("webmachine/include/webmachine.hrl"). |
27 | 29 | |
46 | 48 | to_json(ReqData, Context) -> |
47 | 49 | rabbit_mgmt_util:reply(trace(ReqData), ReqData, Context). |
48 | 50 | |
49 | accept_content(ReqData, Context) -> | |
50 | case rabbit_mgmt_util:vhost(ReqData) of | |
51 | not_found -> not_found; | |
52 | VHost -> Name = rabbit_mgmt_util:id(name, ReqData), | |
53 | rabbit_mgmt_util:with_decode( | |
54 | [format], ReqData, Context, | |
55 | fun([_], Trace) -> | |
56 | case rabbit_tracing_traces:create( | |
57 | VHost, Name, Trace) of | |
58 | {ok, _} -> {true, ReqData, Context}; | |
59 | _ -> rabbit_mgmt_util:bad_request( | |
60 | ?ERR, ReqData, Context) | |
61 | end | |
62 | end) | |
51 | accept_content(RD, Ctx) -> | |
52 | case rabbit_mgmt_util:vhost(RD) of | |
53 | not_found -> | |
54 | not_found; | |
55 | VHost -> | |
56 | Name = rabbit_mgmt_util:id(name, RD), | |
57 | rabbit_mgmt_util:with_decode( | |
58 | [format, pattern], RD, Ctx, | |
59 | fun([_, _], Trace) -> | |
60 | Fs = [fun val_payload_bytes/3, fun val_format/3, | |
61 | fun val_create/3], | |
62 | case lists:foldl(fun (F, ok) -> F(VHost, Name, Trace); | |
63 | (_F, Err) -> Err | |
64 | end, ok, Fs) of | |
65 | ok -> {true, RD, Ctx}; | |
66 | Err -> rabbit_mgmt_util:bad_request(Err, RD, Ctx) | |
67 | end | |
68 | end) | |
63 | 69 | end. |
64 | 70 | |
65 | 71 | delete_resource(ReqData, Context) -> |
79 | 85 | VHost -> rabbit_tracing_traces:lookup( |
80 | 86 | VHost, rabbit_mgmt_util:id(name, ReqData)) |
81 | 87 | end. |
88 | ||
89 | val_payload_bytes(_VHost, _Name, Trace) -> | |
90 | case is_integer(pget(max_payload_bytes, Trace, 0)) of | |
91 | false -> <<"max_payload_bytes not integer">>; | |
92 | true -> ok | |
93 | end. | |
94 | ||
95 | val_format(_VHost, _Name, Trace) -> | |
96 | case lists:member(pget(format, Trace), [<<"json">>, <<"text">>]) of | |
97 | false -> <<"format not json or text">>; | |
98 | true -> ok | |
99 | end. | |
100 | ||
101 | val_create(VHost, Name, Trace) -> | |
102 | case rabbit_tracing_traces:create(VHost, Name, Trace) of | |
103 | {ok, _} -> ok; | |
104 | _ -> ?ERR | |
105 | end. |
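The `lists:foldl/3` over `Fs` above chains validators so that the first one to return an error short-circuits the rest, and only if all return `ok` does the trace get created. A rough Python equivalent of that fold, with illustrative validator names mirroring the Erlang ones:

```python
def val_payload_bytes(trace):
    """Reject a non-integer max_payload_bytes (default 0 passes)."""
    if isinstance(trace.get("max_payload_bytes", 0), int):
        return None
    return "max_payload_bytes not integer"

def val_format(trace):
    """Only json and text formats are accepted."""
    if trace.get("format") in ("json", "text"):
        return None
    return "format not json or text"

def validate(trace, validators):
    """Run validators in order; return the first error, or None if all pass."""
    for v in validators:
        err = v(trace)
        if err is not None:
            return err
    return None
```

Ordering matters: cheap shape checks run before the validator that actually creates the trace, so a malformed request never touches tracing state.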
68 | 68 | http_delete("/trace-files/test.log", ?NO_CONTENT), |
69 | 69 | ok. |
70 | 70 | |
71 | tracing_validation_test() -> | |
72 | Path = "/traces/%2f/test", | |
73 | http_put(Path, [{pattern, <<"#">>}], ?BAD_REQUEST), | |
74 | http_put(Path, [{format, <<"json">>}], ?BAD_REQUEST), | |
75 | http_put(Path, [{format, <<"ebcdic">>}, | |
76 | {pattern, <<"#">>}], ?BAD_REQUEST), | |
77 | http_put(Path, [{format, <<"text">>}, | |
78 | {pattern, <<"#">>}, | |
79 | {max_payload_bytes, <<"abc">>}], ?BAD_REQUEST), | |
80 | http_put(Path, [{format, <<"json">>}, | |
81 | {pattern, <<"#">>}, | |
82 | {max_payload_bytes, 1000}], ?NO_CONTENT), | |
83 | http_delete(Path, ?NO_CONTENT), | |
84 | ok. | |
85 | ||
71 | 86 | %%--------------------------------------------------------------------------- |
72 | 87 | %% Below is copypasta from rabbit_mgmt_test_http, it's not obvious how |
73 | 88 | %% to share that given the build system. |
0 | ## Overview | |
1 | ||
2 | RabbitMQ projects use pull requests to discuss, collaborate on and accept code contributions. | |
3 | Pull requests are the primary place for discussing code changes. | 
4 | ||
5 | ## How to Contribute | |
6 | ||
7 | The process is fairly standard: | |
8 | ||
9 | * Fork the repository or repositories you plan on contributing to | |
10 | * Clone [RabbitMQ umbrella repository](https://github.com/rabbitmq/rabbitmq-public-umbrella) | |
11 | * `cd umbrella`, `make co` | |
12 | * Create a branch with a descriptive name in the relevant repositories | |
13 | * Make your changes, run tests, commit with a [descriptive message](http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html), push to your fork | |
14 | * Submit pull requests with an explanation of what has been changed and **why** | 
15 | * Submit a filled out and signed [Contributor Agreement](https://github.com/rabbitmq/ca#how-to-submit) if needed (see below) | |
16 | * Be patient. We will get to your pull request eventually | |
17 | ||
18 | If what you are going to work on is a substantial change, please first ask the core team | 
19 | for their opinion on the [RabbitMQ mailing list](https://groups.google.com/forum/#!forum/rabbitmq-users). | 
20 | ||
21 | ||
22 | ## (Brief) Code of Conduct | |
23 | ||
24 | In one line: don't be a dick. | |
25 | ||
26 | Be respectful to the maintainers and other contributors. Open source | |
27 | contributors put long hours into developing projects and doing user | |
28 | support. Those projects and user support are available for free. We | |
29 | believe this deserves some respect. | |
30 | ||
31 | Be respectful to people of all races, genders, religious beliefs and | |
32 | political views. Regardless of how brilliant a pull request is | |
33 | technically, we will not tolerate disrespectful or aggressive | |
34 | behaviour. | |
35 | ||
36 | Contributors who violate this straightforward Code of Conduct will see | |
37 | their pull requests closed and locked. | |
38 | ||
39 | ||
40 | ## Contributor Agreement | |
41 | ||
42 | If you want to contribute a non-trivial change, please submit a signed copy of our | |
43 | [Contributor Agreement](https://github.com/rabbitmq/ca#how-to-submit) around the time | |
44 | you submit your pull request. This will make it much easier (in some cases, possible) | |
45 | for the RabbitMQ team at Pivotal to merge your contribution. | |
46 | ||
47 | ||
48 | ## Where to Ask Questions | |
49 | ||
50 | If something isn't clear, feel free to ask on our [mailing list](https://groups.google.com/forum/#!forum/rabbitmq-users). |
31 | 31 | <<"/stomp">>, fun service_stomp/3, {}, SockjsOpts), |
32 | 32 | VhostRoutes = [{[<<"stomp">>, '...'], sockjs_cowboy_handler, SockjsState}], |
33 | 33 | Routes = [{'_', VhostRoutes}], % any vhost |
34 | cowboy:start_listener(http, 100, | |
34 | NbAcceptors = get_env(nb_acceptors, 100), | |
35 | cowboy:start_listener(http, NbAcceptors, | |
35 | 36 | cowboy_tcp_transport, [{port, Port}], |
36 | 37 | cowboy_http_protocol, [{dispatch, Routes}]), |
37 | 38 | rabbit_log:info("rabbit_web_stomp: listening for HTTP connections on ~s:~w~n", |
42 | 43 | Conf -> |
43 | 44 | rabbit_networking:ensure_ssl(), |
44 | 45 | TLSPort = proplists:get_value(port, Conf), |
45 | cowboy:start_listener(https, 100, | |
46 | cowboy:start_listener(https, NbAcceptors, | |
46 | 47 | cowboy_ssl_transport, Conf, |
47 | 48 | cowboy_http_protocol, [{dispatch, Routes}]), |
48 | 49 | rabbit_log:info("rabbit_web_stomp: listening for HTTPS connections on ~s:~w~n", |
9 | 9 | SIGNING_USER_EMAIL=info@rabbitmq.com |
10 | 10 | SIGNING_USER_ID=RabbitMQ Release Signing Key <info@rabbitmq.com> |
11 | 11 | |
12 | # Misc options to pass to hg commands | |
13 | HG_OPTS= | |
12 | # Misc options to pass to git commands | |
13 | GIT_OPTS= | |
14 | 14 | |
15 | 15 | # Misc options to pass to ssh commands |
16 | 16 | SSH_OPTS= |
34 | 34 | |
35 | 35 | REPOS:=rabbitmq-codegen rabbitmq-server rabbitmq-java-client rabbitmq-dotnet-client rabbitmq-test |
36 | 36 | |
37 | HGREPOBASE:=$(shell dirname `hg paths default 2>/dev/null` 2>/dev/null) | |
38 | ||
39 | ifeq ($(HGREPOBASE),) | |
40 | HGREPOBASE=ssh://hg@hg.rabbitmq.com | |
37 | GITREPOBASE:=$(shell dirname `git remote -v 2>/dev/null | awk '/^origin\t.+ \(fetch\)$$/ { print $$2; }'` 2>/dev/null) | |
38 | ||
39 | ifeq ($(GITREPOBASE),) | |
40 | GITREPOBASE=https://github.com/rabbitmq | |
41 | 41 | endif |
42 | 42 | |
43 | 43 | .PHONY: all |
129 | 129 | |
130 | 130 | .PHONY: rabbitmq-server-windows-exe-packaging |
131 | 131 | rabbitmq-server-windows-exe-packaging: rabbitmq-server-windows-packaging |
132 | $(MAKE) -C rabbitmq-server/packaging/windows-exe clean | |
132 | 133 | $(MAKE) -C rabbitmq-server/packaging/windows-exe dist VERSION=$(VERSION) |
133 | 134 | cp rabbitmq-server/packaging/windows-exe/rabbitmq-server-*.exe $(SERVER_PACKAGES_DIR) |
134 | 135 |
59 | 59 | [ "x" = "x$RABBITMQ_USE_LONGNAME" ] && RABBITMQ_USE_LONGNAME=${USE_LONGNAME} |
60 | 60 | if [ "xtrue" = "x$RABBITMQ_USE_LONGNAME" ] ; then |
61 | 61 | RABBITMQ_NAME_TYPE=-name |
62 | [ "x" = "x$HOSTNAME" ] && HOSTNAME=`env hostname --fqdn` | |
62 | [ "x" = "x$HOSTNAME" ] && HOSTNAME=`env hostname -f` | |
63 | 63 | [ "x" = "x$NODENAME" ] && NODENAME=rabbit@${HOSTNAME} |
64 | 64 | else |
65 | 65 | RABBITMQ_NAME_TYPE=-sname |
18 | 18 | # Non-empty defaults should be set in rabbitmq-env |
19 | 19 | . `dirname $0`/rabbitmq-env |
20 | 20 | |
21 | RABBITMQ_USE_LONGNAME=${RABBITMQ_USE_LONGNAME} \ | |
21 | 22 | exec ${ERL_DIR}erl \ |
22 | 23 | -pa "${RABBITMQ_HOME}/ebin" \ |
23 | 24 | -noinput \ |
24 | 25 | -hidden \ |
25 | ${RABBITMQ_NAME_TYPE} rabbitmq-plugins$$ \ | |
26 | ${RABBITMQ_PLUGINS_ERL_ARGS} \ | |
26 | 27 | -boot "${CLEAN_BOOT_FILE}" \ |
27 | 28 | -s rabbit_plugins_main \ |
28 | 29 | -enabled_plugins_file "$RABBITMQ_ENABLED_PLUGINS_FILE" \ |
22 | 22 | set STAR=%* |
23 | 23 | setlocal enabledelayedexpansion |
24 | 24 | |
25 | if "!RABBITMQ_USE_LONGNAME!"=="" ( | |
26 | set RABBITMQ_NAME_TYPE="-sname" | |
27 | ) | |
28 | ||
29 | if "!RABBITMQ_USE_LONGNAME!"=="true" ( | |
30 | set RABBITMQ_NAME_TYPE="-name" | |
31 | ) | |
32 | ||
33 | 25 | if "!RABBITMQ_SERVICENAME!"=="" ( |
34 | 26 | set RABBITMQ_SERVICENAME=RabbitMQ |
35 | 27 | ) |
51 | 43 | echo Please either set ERLANG_HOME to point to your Erlang installation or place the |
52 | 44 | echo RabbitMQ server distribution in the Erlang lib folder. |
53 | 45 | echo. |
54 | exit /B | |
46 | exit /B 1 | |
55 | 47 | ) |
56 | 48 | |
57 | 49 | if "!RABBITMQ_ENABLED_PLUGINS_FILE!"=="" ( |
62 | 54 | set RABBITMQ_PLUGINS_DIR=!TDP0!..\plugins |
63 | 55 | ) |
64 | 56 | |
65 | "!ERLANG_HOME!\bin\erl.exe" -pa "!TDP0!..\ebin" -noinput -hidden !RABBITMQ_NAME_TYPE! rabbitmq-plugins!RANDOM!!TIME:~9! -s rabbit_plugins_main -enabled_plugins_file "!RABBITMQ_ENABLED_PLUGINS_FILE!" -plugins_dist_dir "!RABBITMQ_PLUGINS_DIR:\=/!" -nodename !RABBITMQ_NODENAME! -extra !STAR! | |
57 | "!ERLANG_HOME!\bin\erl.exe" ^ | |
58 | -pa "!TDP0!..\ebin" ^ | |
59 | -noinput ^ | |
60 | -hidden ^ | |
61 | !RABBITMQ_CTL_ERL_ARGS! ^ | |
62 | -s rabbit_plugins_main ^ | |
63 | -enabled_plugins_file "!RABBITMQ_ENABLED_PLUGINS_FILE!" ^ | |
64 | -plugins_dist_dir "!RABBITMQ_PLUGINS_DIR:\=/!" ^ | |
65 | -nodename !RABBITMQ_NODENAME! ^ | |
66 | -extra !STAR! | |
66 | 67 | |
67 | 68 | endlocal |
68 | 69 | endlocal |
69 | 69 | echo Please either set ERLANG_HOME to point to your Erlang installation or place the |
70 | 70 | echo RabbitMQ server distribution in the Erlang lib folder. |
71 | 71 | echo. |
72 | exit /B | |
72 | exit /B 1 | |
73 | 73 | ) |
74 | 74 | |
75 | 75 | if "!RABBITMQ_MNESIA_BASE!"=="" ( |
141 | 141 | ) |
142 | 142 | ) |
143 | 143 | |
144 | set RABBITMQ_START_RABBIT= | |
145 | if "!RABBITMQ_NODE_ONLY!"=="" ( | |
146 | set RABBITMQ_START_RABBIT=-s rabbit boot | |
147 | ) | |
148 | ||
144 | 149 | "!ERLANG_HOME!\bin\erl.exe" ^ |
145 | 150 | -pa "!RABBITMQ_EBIN_ROOT!" ^ |
146 | 151 | -noinput ^ |
147 | 152 | -boot start_sasl ^ |
148 | -s rabbit boot ^ | |
153 | !RABBITMQ_START_RABBIT! ^ | |
149 | 154 | !RABBITMQ_CONFIG_ARG! ^ |
150 | 155 | !RABBITMQ_NAME_TYPE! !RABBITMQ_NODENAME! ^ |
151 | 156 | +W w ^ |
218 | 218 | ) |
219 | 219 | ) |
220 | 220 | |
221 | set RABBITMQ_START_RABBIT= | |
222 | if "!RABBITMQ_NODE_ONLY!"=="" ( | |
223 | set RABBITMQ_START_RABBIT=-s rabbit boot | |
224 | ) | |
225 | ||
221 | 226 | set ERLANG_SERVICE_ARGUMENTS= ^ |
222 | 227 | -pa "!RABBITMQ_EBIN_ROOT!" ^ |
223 | 228 | -boot start_sasl ^ | 
224 | -s rabbit boot ^ | 
229 | !RABBITMQ_START_RABBIT! ^ | 
225 | 230 | !RABBITMQ_CONFIG_ARG! ^ |
226 | 231 | +W w ^ |
18 | 18 | # Non-empty defaults should be set in rabbitmq-env |
19 | 19 | . `dirname $0`/rabbitmq-env |
20 | 20 | |
21 | # rabbitmqctl starts distribution itself, so we need to make sure epmd | |
22 | # is running. | |
23 | ${ERL_DIR}erl ${RABBITMQ_NAME_TYPE} rabbitmqctl-prelaunch-$$ -noinput \ | |
24 | -eval 'erlang:halt().' -boot "${CLEAN_BOOT_FILE}" | |
25 | ||
26 | 21 | # We specify Mnesia dir and sasl error logger since some actions |
27 | 22 | # (e.g. forget_cluster_node --offline) require us to impersonate the |
28 | 23 | # real node. |
26 | 26 | set RABBITMQ_BASE=!APPDATA!\RabbitMQ |
27 | 27 | ) |
28 | 28 | |
29 | if "!RABBITMQ_USE_LONGNAME!"=="" ( | |
30 | set RABBITMQ_NAME_TYPE="-sname" | |
31 | ) | |
32 | ||
33 | if "!RABBITMQ_USE_LONGNAME!"=="true" ( | |
34 | set RABBITMQ_NAME_TYPE="-name" | |
35 | ) | |
36 | ||
37 | 29 | if "!COMPUTERNAME!"=="" ( |
38 | 30 | set COMPUTERNAME=localhost |
39 | 31 | ) |
59 | 51 | echo Please either set ERLANG_HOME to point to your Erlang installation or place the |
60 | 52 | echo RabbitMQ server distribution in the Erlang lib folder. |
61 | 53 | echo. |
62 | exit /B | |
54 | exit /B 1 | |
63 | 55 | ) |
64 | ||
65 | rem rabbitmqctl starts distribution itself, so we need to make sure epmd | |
66 | rem is running. | |
67 | "!ERLANG_HOME!\bin\erl.exe" !RABBITMQ_NAME_TYPE! rabbitmqctl-prelaunch-!RANDOM!!TIME:~9! -noinput -eval "erlang:halt()." | |
68 | 56 | |
69 | 57 | "!ERLANG_HOME!\bin\erl.exe" ^ |
70 | 58 | -pa "!TDP0!..\ebin" ^ |
61 | 61 | |
62 | 62 | stop_applications(Apps, ErrorHandler) -> |
63 | 63 | manage_applications(fun lists:foldr/3, |
64 | %% Mitigation for bug 26467. TODO remove when we fix it. | |
65 | fun (mnesia) -> | |
66 | timer:sleep(1000), | |
67 | application:stop(mnesia); | |
68 | (App) -> | |
69 | application:stop(App) | |
70 | end, | |
64 | fun application:stop/1, | |
71 | 65 | fun application:start/1, |
72 | 66 | not_started, |
73 | 67 | ErrorHandler, |
14 | 14 | %% |
15 | 15 | |
16 | 16 | -module(delegate). |
17 | ||
18 | %% delegate is an alternative way of doing remote calls. Compared to | |
19 | %% the rpc module, it reduces inter-node communication. For example, | |
20 | %% if a message is routed to 1,000 queues on node A and needs to be | |
21 | %% propagated to nodes B and C, it would be nice to avoid doing 2,000 | |
22 | %% remote casts to queue processes. | |
23 | %% | |
24 | %% An important issue here is preserving order - we need to make sure | |
25 | %% that messages from a certain channel to a certain queue take a | |
26 | %% consistent route, to prevent them being reordered. In fact all | |
27 | %% AMQP-ish things (such as queue declaration results and basic.get) | |
28 | %% must take the same route as well, to ensure that clients see causal | |
29 | %% ordering correctly. Therefore we have a rather generic mechanism | |
30 | %% here rather than just a message-reflector. That's also why we pick | |
31 | %% the delegate process to use based on a hash of the source pid. | |
32 | %% | |
33 | %% When a function is invoked using delegate:invoke/2, delegate:call/2 | |
34 | %% or delegate:cast/2 on a group of pids, the pids are first split | |
35 | %% into local and remote ones. Remote processes are then grouped by | |
36 | %% node. The function is then invoked locally and on every node (using | |
37 | %% gen_server2:multi/4) as many times as there are processes on that | |
38 | %% node, sequentially. | |
39 | %% | |
40 | %% Errors returned when executing functions on remote nodes are re-raised | |
41 | %% in the caller. | |
42 | %% | |
43 | %% RabbitMQ starts a pool of delegate processes on boot. The size of | |
44 | %% the pool is configurable; the aim is to avoid having too few | 
45 | %% delegates, which would limit performance on many-CPU machines. | 
17 | 46 | |
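The comment block above describes the two core moves: split the target pids into local and per-remote-node groups, and pick a delegate deterministically by hashing the source pid so all traffic from one channel takes the same route. A schematic Python sketch of those two steps (pids are modelled as `(node, id)` tuples; this is not the Erlang implementation):

```python
from collections import defaultdict

def group_by_node(pids, local_node):
    """Split pids into local ones and a per-node map of remote ones."""
    local, remote = [], defaultdict(list)
    for node, pid_id in pids:
        if node == local_node:
            local.append((node, pid_id))
        else:
            remote[node].append((node, pid_id))
    return local, dict(remote)

def pick_delegate(source_pid, pool_size):
    """Hash the source pid so a given channel always uses the same
    delegate, preserving per-channel message ordering."""
    return hash(source_pid) % pool_size
```

With the grouping in hand, one cast per remote node (rather than one per remote pid) is enough, which is where the 2,000-casts-to-a-few-casts saving in the comment comes from.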
18 | 47 | -behaviour(gen_server2). |
19 | 48 |
29 | 29 | %% may happen, especially for writes. |
30 | 30 | %% 3) Writes are all appends. You cannot write to the middle of a |
31 | 31 | %% file, although you can truncate and then append if you want. |
32 | %% 4) Although there is a write buffer, there is no read buffer. Feel | |
33 | %% free to use the read_ahead mode, but beware of the interaction | |
34 | %% between that buffer and the write buffer. | |
32 | %% 4) There are read and write buffers. Feel free to use the read_ahead | |
33 | %% mode, but beware of the interaction between that buffer and the write | |
34 | %% buffer. | |
35 | 35 | %% |
36 | 36 | %% Some benefits |
37 | 37 | %% 1) You do not have to remember to call sync before close |
177 | 177 | write_buffer_size, |
178 | 178 | write_buffer_size_limit, |
179 | 179 | write_buffer, |
180 | read_buffer, | |
181 | read_buffer_pos, | |
182 | read_buffer_rem, %% Num of bytes from pos to end | |
183 | read_buffer_size, %% Next size of read buffer to use | |
184 | read_buffer_size_limit, %% Max size of read buffer to use | |
185 | read_buffer_usage, %% Bytes we have read from it, for tuning | |
180 | 186 | at_eof, |
181 | 187 | path, |
182 | 188 | mode, |
236 | 242 | -spec(register_callback/3 :: (atom(), atom(), [any()]) -> 'ok'). |
237 | 243 | -spec(open/3 :: |
238 | 244 | (file:filename(), [any()], |
239 | [{'write_buffer', (non_neg_integer() | 'infinity' | 'unbuffered')}]) | |
245 | [{'write_buffer', (non_neg_integer() | 'infinity' | 'unbuffered')} | | |
246 | {'read_buffer', (non_neg_integer() | 'unbuffered')}]) | |
240 | 247 | -> val_or_error(ref())). |
241 | 248 | -spec(close/1 :: (ref()) -> ok_or_error()). |
242 | 249 | -spec(read/2 :: (ref(), non_neg_integer()) -> |
330 | 337 | |
331 | 338 | read(Ref, Count) -> |
332 | 339 | with_flushed_handles( |
333 | [Ref], | |
340 | [Ref], keep, | |
334 | 341 | fun ([#handle { is_read = false }]) -> |
335 | 342 | {error, not_open_for_reading}; |
336 | ([Handle = #handle { hdl = Hdl, offset = Offset }]) -> | |
337 | case prim_file:read(Hdl, Count) of | |
338 | {ok, Data} = Obj -> Offset1 = Offset + iolist_size(Data), | |
339 | {Obj, | |
340 | [Handle #handle { offset = Offset1 }]}; | |
341 | eof -> {eof, [Handle #handle { at_eof = true }]}; | |
342 | Error -> {Error, [Handle]} | |
343 | ([Handle = #handle{read_buffer = Buf, | |
344 | read_buffer_pos = BufPos, | |
345 | read_buffer_rem = BufRem, | |
346 | read_buffer_usage = BufUsg, | |
347 | offset = Offset}]) | |
348 | when BufRem >= Count -> | |
349 | <<_:BufPos/binary, Res:Count/binary, _/binary>> = Buf, | |
350 | {{ok, Res}, [Handle#handle{offset = Offset + Count, | |
351 | read_buffer_pos = BufPos + Count, | |
352 | read_buffer_rem = BufRem - Count, | |
353 | read_buffer_usage = BufUsg + Count }]}; | |
354 | ([Handle0]) -> | |
355 | Handle = #handle{read_buffer = Buf, | |
356 | read_buffer_pos = BufPos, | |
357 | read_buffer_rem = BufRem, | |
358 | read_buffer_size = BufSz, | |
359 | hdl = Hdl, | |
360 | offset = Offset} | |
361 | = tune_read_buffer_limit(Handle0, Count), | |
362 | WantedCount = Count - BufRem, | |
363 | case prim_file_read(Hdl, lists:max([BufSz, WantedCount])) of | |
364 | {ok, Data} -> | |
365 | <<_:BufPos/binary, BufTl/binary>> = Buf, | |
366 | ReadCount = size(Data), | |
367 | case ReadCount < WantedCount of | |
368 | true -> | |
369 | OffSet1 = Offset + BufRem + ReadCount, | |
370 | {{ok, <<BufTl/binary, Data/binary>>}, | |
371 | [reset_read_buffer( | |
372 | Handle#handle{offset = OffSet1})]}; | |
373 | false -> | |
374 | <<Hd:WantedCount/binary, _/binary>> = Data, | |
375 | OffSet1 = Offset + BufRem + WantedCount, | |
376 | BufRem1 = ReadCount - WantedCount, | |
377 | {{ok, <<BufTl/binary, Hd/binary>>}, | |
378 | [Handle#handle{offset = OffSet1, | |
379 | read_buffer = Data, | |
380 | read_buffer_pos = WantedCount, | |
381 | read_buffer_rem = BufRem1, | |
382 | read_buffer_usage = WantedCount}]} | |
383 | end; | |
384 | eof -> | |
385 | {eof, [Handle #handle { at_eof = true }]}; | |
386 | Error -> | |
387 | {Error, [reset_read_buffer(Handle)]} | |
343 | 388 | end |
344 | 389 | end). |
345 | 390 | |
354 | 399 | write_buffer_size_limit = 0, |
355 | 400 | at_eof = true } = Handle1} -> |
356 | 401 | Offset1 = Offset + iolist_size(Data), |
357 | {prim_file:write(Hdl, Data), | |
402 | {prim_file_write(Hdl, Data), | |
358 | 403 | [Handle1 #handle { is_dirty = true, offset = Offset1 }]}; |
359 | 404 | {{ok, _Offset}, #handle { write_buffer = WriteBuffer, |
360 | 405 | write_buffer_size = Size, |
376 | 421 | |
377 | 422 | sync(Ref) -> |
378 | 423 | with_flushed_handles( |
379 | [Ref], | |
424 | [Ref], keep, | |
380 | 425 | fun ([#handle { is_dirty = false, write_buffer = [] }]) -> |
381 | 426 | ok; |
382 | 427 | ([Handle = #handle { hdl = Hdl, |
383 | 428 | is_dirty = true, write_buffer = [] }]) -> |
384 | case prim_file:sync(Hdl) of | |
429 | case prim_file_sync(Hdl) of | |
385 | 430 | ok -> {ok, [Handle #handle { is_dirty = false }]}; |
386 | 431 | Error -> {Error, [Handle]} |
387 | 432 | end |
396 | 441 | |
397 | 442 | position(Ref, NewOffset) -> |
398 | 443 | with_flushed_handles( |
399 | [Ref], | |
444 | [Ref], keep, | |
400 | 445 | fun ([Handle]) -> {Result, Handle1} = maybe_seek(NewOffset, Handle), |
401 | 446 | {Result, [Handle1]} |
402 | 447 | end). |
464 | 509 | fun ([#handle { at_eof = true, write_buffer_size = 0, offset = 0 }]) -> |
465 | 510 | ok; |
466 | 511 | ([Handle]) -> |
467 | case maybe_seek(bof, Handle #handle { write_buffer = [], | |
468 | write_buffer_size = 0 }) of | |
512 | case maybe_seek(bof, Handle#handle{write_buffer = [], | |
513 | write_buffer_size = 0}) of | |
469 | 514 | {{ok, 0}, Handle1 = #handle { hdl = Hdl }} -> |
470 | 515 | case prim_file:truncate(Hdl) of |
471 | 516 | ok -> {ok, [Handle1 #handle { at_eof = true }]}; |
538 | 583 | %% Internal functions |
539 | 584 | %%---------------------------------------------------------------------------- |
540 | 585 | |
586 | prim_file_read(Hdl, Size) -> | |
587 | file_handle_cache_stats:update( | |
588 | io_read, Size, fun() -> prim_file:read(Hdl, Size) end). | |
589 | ||
590 | prim_file_write(Hdl, Bytes) -> | |
591 | file_handle_cache_stats:update( | |
592 | io_write, iolist_size(Bytes), fun() -> prim_file:write(Hdl, Bytes) end). | |
593 | ||
594 | prim_file_sync(Hdl) -> | |
595 | file_handle_cache_stats:update(io_sync, fun() -> prim_file:sync(Hdl) end). | |
596 | ||
597 | prim_file_position(Hdl, NewOffset) -> | |
598 | file_handle_cache_stats:update( | |
599 | io_seek, fun() -> prim_file:position(Hdl, NewOffset) end). | |
600 | ||
541 | 601 | is_reader(Mode) -> lists:member(read, Mode). |
542 | 602 | |
543 | 603 | is_writer(Mode) -> lists:member(write, Mode). |
549 | 609 | end. |
550 | 610 | |
551 | 611 | with_handles(Refs, Fun) -> |
612 | with_handles(Refs, reset, Fun). | |
613 | ||
614 | with_handles(Refs, ReadBuffer, Fun) -> | |
552 | 615 | case get_or_reopen([{Ref, reopen} || Ref <- Refs]) of |
553 | {ok, Handles} -> | |
616 | {ok, Handles0} -> | |
617 | Handles = case ReadBuffer of | |
618 | reset -> [reset_read_buffer(H) || H <- Handles0]; | |
619 | keep -> Handles0 | |
620 | end, | |
554 | 621 | case Fun(Handles) of |
555 | 622 | {Result, Handles1} when is_list(Handles1) -> |
556 | 623 | lists:zipwith(fun put_handle/2, Refs, Handles1), |
563 | 630 | end. |
564 | 631 | |
565 | 632 | with_flushed_handles(Refs, Fun) -> |
633 | with_flushed_handles(Refs, reset, Fun). | |
634 | ||
635 | with_flushed_handles(Refs, ReadBuffer, Fun) -> | |
566 | 636 | with_handles( |
567 | Refs, | |
637 | Refs, ReadBuffer, | |
568 | 638 | fun (Handles) -> |
569 | 639 | case lists:foldl( |
570 | 640 | fun (Handle, {ok, HandlesAcc}) -> |
610 | 680 | {ok, lists:reverse(RefHdls)}; |
611 | 681 | reopen([{Ref, NewOrReopen, Handle = #handle { hdl = closed, |
612 | 682 | path = Path, |
613 | mode = Mode, | |
683 | mode = Mode0, | |
614 | 684 | offset = Offset, |
615 | 685 | last_used_at = undefined }} | |
616 | 686 | RefNewOrReopenHdls] = ToOpen, Tree, RefHdls) -> |
617 | case prim_file:open(Path, case NewOrReopen of | |
618 | new -> Mode; | |
619 | reopen -> [read | Mode] | |
620 | end) of | |
687 | Mode = case NewOrReopen of | |
688 | new -> Mode0; | |
689 | reopen -> file_handle_cache_stats:update(io_reopen), | |
690 | [read | Mode0] | |
691 | end, | |
692 | case prim_file:open(Path, Mode) of | |
621 | 693 | {ok, Hdl} -> |
622 | 694 | Now = now(), |
623 | 695 | {{ok, _Offset}, Handle1} = |
624 | maybe_seek(Offset, Handle #handle { hdl = Hdl, | |
625 | offset = 0, | |
626 | last_used_at = Now }), | |
696 | maybe_seek(Offset, reset_read_buffer( | |
697 | Handle#handle{hdl = Hdl, | |
698 | offset = 0, | |
699 | last_used_at = Now})), | |
627 | 700 | put({Ref, fhc_handle}, Handle1), |
628 | 701 | reopen(RefNewOrReopenHdls, gb_trees:insert(Now, Ref, Tree), |
629 | 702 | [{Ref, Handle1} | RefHdls]); |
708 | 781 | infinity -> infinity; |
709 | 782 | N when is_integer(N) -> N |
710 | 783 | end, |
784 | ReadBufferSize = | |
785 | case proplists:get_value(read_buffer, Options, unbuffered) of | |
786 | unbuffered -> 0; | |
787 | N2 when is_integer(N2) -> N2 | |
788 | end, | |
711 | 789 | Ref = make_ref(), |
712 | 790 | put({Ref, fhc_handle}, #handle { hdl = closed, |
713 | 791 | offset = 0, |
715 | 793 | write_buffer_size = 0, |
716 | 794 | write_buffer_size_limit = WriteBufferSize, |
717 | 795 | write_buffer = [], |
796 | read_buffer = <<>>, | |
797 | read_buffer_pos = 0, | |
798 | read_buffer_rem = 0, | |
799 | read_buffer_size = ReadBufferSize, | |
800 | read_buffer_size_limit = ReadBufferSize, | |
801 | read_buffer_usage = 0, | |
718 | 802 | at_eof = false, |
719 | 803 | path = Path, |
720 | 804 | mode = Mode, |
741 | 825 | is_dirty = IsDirty, |
742 | 826 | last_used_at = Then } = Handle1 } -> |
743 | 827 | ok = case IsDirty of |
744 | true -> prim_file:sync(Hdl); | |
828 | true -> prim_file_sync(Hdl); | |
745 | 829 | false -> ok |
746 | 830 | end, |
747 | 831 | ok = prim_file:close(Hdl), |
775 | 859 | Result |
776 | 860 | end. |
777 | 861 | |
778 | maybe_seek(NewOffset, Handle = #handle { hdl = Hdl, offset = Offset, | |
779 | at_eof = AtEoF }) -> | |
780 | {AtEoF1, NeedsSeek} = needs_seek(AtEoF, Offset, NewOffset), | |
781 | case (case NeedsSeek of | |
782 | true -> prim_file:position(Hdl, NewOffset); | |
783 | false -> {ok, Offset} | |
784 | end) of | |
785 | {ok, Offset1} = Result -> | |
786 | {Result, Handle #handle { offset = Offset1, at_eof = AtEoF1 }}; | |
787 | {error, _} = Error -> | |
788 | {Error, Handle} | |
862 | maybe_seek(New, Handle = #handle{hdl = Hdl, | |
863 | offset = Old, | |
864 | read_buffer_pos = BufPos, | |
865 | read_buffer_rem = BufRem, | |
866 | at_eof = AtEoF}) -> | |
867 | {AtEoF1, NeedsSeek} = needs_seek(AtEoF, Old, New), | |
868 | case NeedsSeek of | |
869 | true when is_number(New) andalso | |
870 | ((New >= Old andalso New =< BufRem + Old) | |
871 | orelse (New < Old andalso Old - New =< BufPos)) -> | |
872 | Diff = New - Old, | |
873 | {{ok, New}, Handle#handle{offset = New, | |
874 | at_eof = AtEoF1, | |
875 | read_buffer_pos = BufPos + Diff, | |
876 | read_buffer_rem = BufRem - Diff}}; | |
877 | true -> | |
878 | case prim_file_position(Hdl, New) of | |
879 | {ok, Offset1} = Result -> | |
880 | {Result, reset_read_buffer(Handle#handle{offset = Offset1, | |
881 | at_eof = AtEoF1})}; | |
882 | {error, _} = Error -> | |
883 | {Error, Handle} | |
884 | end; | |
885 | false -> | |
886 | {{ok, Old}, Handle} | |
789 | 887 | end. |
790 | 888 | |
791 | 889 | needs_seek( AtEoF, _CurOffset, cur ) -> {AtEoF, false}; |
816 | 914 | write_buffer = WriteBuffer, |
817 | 915 | write_buffer_size = DataSize, |
818 | 916 | at_eof = true }) -> |
819 | case prim_file:write(Hdl, lists:reverse(WriteBuffer)) of | |
917 | case prim_file_write(Hdl, lists:reverse(WriteBuffer)) of | |
820 | 918 | ok -> |
821 | 919 | Offset1 = Offset + DataSize, |
822 | 920 | {ok, Handle #handle { offset = Offset1, is_dirty = true, |
825 | 923 | {Error, Handle} |
826 | 924 | end. |
827 | 925 | |
926 | reset_read_buffer(Handle) -> | |
927 | Handle#handle{read_buffer = <<>>, | |
928 | read_buffer_pos = 0, | |
929 | read_buffer_rem = 0}. | |
930 | ||
931 | %% We come into this function whenever there has been a miss while | 
932 | %% reading from the buffer. Note that when we first start with a | 
933 | %% new handle the usage will be 0, so in that case don't take it | 
934 | %% to mean the buffer was useless; we just haven't done anything | 
935 | %% yet! | 
936 | tune_read_buffer_limit(Handle = #handle{read_buffer_usage = 0}, _Count) -> | |
937 | Handle; | |
938 | %% In this head we have been using the buffer but now tried to read | |
939 | %% outside it. So how did we do? If we used less than the size of the | |
940 | %% buffer, make the new buffer the size of what we used before, but | |
941 | %% add one byte (so that next time we can distinguish between getting | |
942 | %% the buffer size exactly right and actually wanting more). If we | |
943 | %% read 100% of what we had, then double it for next time, up to the | |
944 | %% limit that was set when we were created. | |
945 | tune_read_buffer_limit(Handle = #handle{read_buffer = Buf, | |
946 | read_buffer_usage = Usg, | |
947 | read_buffer_size = Sz, | |
948 | read_buffer_size_limit = Lim}, Count) -> | |
949 | %% If the buffer is <<>> then we are in the first read after a | 
950 | %% reset, and read_buffer_usage is the total usage from before the | 
951 | %% reset. Otherwise we are in a read which ran off the end of | 
952 | %% the buffer, so really the size of this read should be included | 
953 | %% in the usage. | 
954 | TotalUsg = case Buf of | |
955 | <<>> -> Usg; | |
956 | _ -> Usg + Count | |
957 | end, | |
958 | Handle#handle{read_buffer_usage = 0, | |
959 | read_buffer_size = erlang:min(case TotalUsg < Sz of | |
960 | true -> Usg + 1; | |
961 | false -> Usg * 2 | |
962 | end, Lim)}. | |
963 | ||
828 | 964 | infos(Items, State) -> [{Item, i(Item, State)} || Item <- Items]. |
829 | 965 | |
830 | 966 | i(total_limit, #fhc_state{limit = Limit}) -> Limit; |
842 | 978 | %%---------------------------------------------------------------------------- |
843 | 979 | |
844 | 980 | init([AlarmSet, AlarmClear]) -> |
981 | file_handle_cache_stats:init(), | |
845 | 982 | Limit = case application:get_env(file_handles_high_watermark) of |
846 | 983 | {ok, Watermark} when (is_integer(Watermark) andalso |
847 | 984 | Watermark > 0) -> |
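The read-buffer changes above grow or shrink the buffer adaptively in `tune_read_buffer_limit`: if the whole buffer was consumed, double it (up to the configured limit); otherwise shrink it to what was actually used, plus one byte so the next read can distinguish "exactly right" from "wanted more". A minimal Python model of just that resize rule (names are illustrative, not part of the RabbitMQ API):

```python
def tune_read_buffer_limit(size, usage, total_usage, limit):
    """Model of the resize rule in tune_read_buffer_limit/2:
    shrink to usage + 1 when we used less than the buffer,
    double on full use, always capped at `limit`."""
    if total_usage < size:
        new_size = usage + 1   # distinguish "exact fit" from "wanted more"
    else:
        new_size = usage * 2   # buffer fully used: grow for next time
    return min(new_size, limit)
```

Called once per buffer miss, the size converges on roughly the read pattern's working set without ever exceeding the limit set at handle creation.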
0 | %% The contents of this file are subject to the Mozilla Public License | |
1 | %% Version 1.1 (the "License"); you may not use this file except in | |
2 | %% compliance with the License. You may obtain a copy of the License | |
3 | %% at http://www.mozilla.org/MPL/ | |
4 | %% | |
5 | %% Software distributed under the License is distributed on an "AS IS" | |
6 | %% basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See | |
7 | %% the License for the specific language governing rights and | |
8 | %% limitations under the License. | |
9 | %% | |
10 | %% The Original Code is RabbitMQ. | |
11 | %% | |
12 | %% The Initial Developer of the Original Code is GoPivotal, Inc. | |
13 | %% Copyright (c) 2007-2014 GoPivotal, Inc. All rights reserved. | |
14 | %% | |
15 | ||
16 | -module(file_handle_cache_stats). | |
17 | ||
18 | %% Stats about read/write operations that go through the file handle cache. | 
19 | ||
20 | -export([init/0, update/3, update/2, update/1, get/0]). | |
21 | ||
22 | -define(TABLE, ?MODULE). | |
23 | ||
24 | -define(COUNT, | |
25 | [io_reopen, mnesia_ram_tx, mnesia_disk_tx, | |
26 | msg_store_read, msg_store_write, | |
27 | queue_index_journal_write, queue_index_write, queue_index_read]). | |
28 | -define(COUNT_TIME, [io_sync, io_seek]). | |
29 | -define(COUNT_TIME_BYTES, [io_read, io_write]). | |
30 | ||
31 | init() -> | |
32 | ets:new(?TABLE, [public, named_table]), | |
33 | [ets:insert(?TABLE, {{Op, Counter}, 0}) || Op <- ?COUNT_TIME_BYTES, | |
34 | Counter <- [count, bytes, time]], | |
35 | [ets:insert(?TABLE, {{Op, Counter}, 0}) || Op <- ?COUNT_TIME, | |
36 | Counter <- [count, time]], | |
37 | [ets:insert(?TABLE, {{Op, Counter}, 0}) || Op <- ?COUNT, | |
38 | Counter <- [count]]. | |
39 | ||
40 | update(Op, Bytes, Thunk) -> | |
41 | {Time, Res} = timer_tc(Thunk), | |
42 | ets:update_counter(?TABLE, {Op, count}, 1), | |
43 | ets:update_counter(?TABLE, {Op, bytes}, Bytes), | |
44 | ets:update_counter(?TABLE, {Op, time}, Time), | |
45 | Res. | |
46 | ||
47 | update(Op, Thunk) -> | |
48 | {Time, Res} = timer_tc(Thunk), | |
49 | ets:update_counter(?TABLE, {Op, count}, 1), | |
50 | ets:update_counter(?TABLE, {Op, time}, Time), | |
51 | Res. | |
52 | ||
53 | update(Op) -> | |
54 | ets:update_counter(?TABLE, {Op, count}, 1), | |
55 | ok. | |
56 | ||
57 | get() -> | |
58 | lists:sort(ets:tab2list(?TABLE)). | |
59 | ||
60 | %% TODO timer:tc/1 was introduced in R14B03; use that function once we | |
61 | %% require that version. | |
62 | timer_tc(Thunk) -> | |
63 | T1 = os:timestamp(), | |
64 | Res = Thunk(), | |
65 | T2 = os:timestamp(), | |
66 | {timer:now_diff(T2, T1), Res}. |
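The new `file_handle_cache_stats` module keeps flat `{Op, Counter} -> integer` rows in an ETS table and wraps each I/O call in a timing thunk. A rough Python analogue of that bookkeeping, using a dict in place of ETS (class and method names are illustrative):

```python
import time

class FhcStats:
    """Dict-backed model of the {(op, counter): value} ETS rows."""
    def __init__(self, ops_with_bytes):
        self.table = {}
        for op in ops_with_bytes:
            for counter in ("count", "bytes", "time"):
                self.table[(op, counter)] = 0

    def update(self, op, nbytes, thunk):
        # Time the operation and bump count/bytes/time, like update/3.
        t1 = time.monotonic()
        res = thunk()
        elapsed = time.monotonic() - t1
        self.table[(op, "count")] += 1
        self.table[(op, "bytes")] += nbytes
        self.table[(op, "time")] += elapsed
        return res

stats = FhcStats(["io_read", "io_write"])
data = stats.update("io_read", 5, lambda: b"hello")
```

As in the Erlang module, the caller's result passes through unchanged, so the wrapper can be dropped around any existing `prim_file` call site.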
14 | 14 | %% |
15 | 15 | |
16 | 16 | -module(gatherer). |
17 | ||
18 | %% Gatherer is a queue with producer and consumer processes. Before producers | 
19 | %% push items onto the queue with gatherer:in/2, they must declare their intent | 
20 | %% to do so with gatherer:fork/1. When a producer's work is done, it says so | 
21 | %% using gatherer:finish/1. | 
22 | %% | |
23 | %% Consumers pop messages off the queue with gatherer:out/1. If the queue is | 
24 | %% empty and there are producers that haven't finished working, the caller is | 
25 | %% blocked until an item is available. If there are no active producers, | 
26 | %% gatherer:out/1 immediately returns 'empty'. | 
27 | %% | |
28 | %% This module is primarily used to collect results from asynchronous tasks | |
29 | %% running in a worker pool, e.g. when recovering bindings or rebuilding | |
30 | %% message store indices. | |
17 | 31 | |
18 | 32 | -behaviour(gen_server2). |
19 | 33 |
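The fork/finish/in/out contract described in the new gatherer comment can be sketched as a small state machine. This Python model is synchronous, so `out` returns `"blocked"` where the real gatherer would park the caller; all names are illustrative:

```python
from collections import deque

class Gatherer:
    """Model of gatherer's contract: out() yields an item when one is
    queued, 'empty' when no producers remain, and otherwise blocks."""
    def __init__(self):
        self.forks = 0          # producers that declared intent via fork
        self.values = deque()

    def fork(self):
        self.forks += 1

    def finish(self):
        self.forks -= 1

    def put(self, value):       # stands in for gatherer:in/2
        self.values.append(value)

    def out(self):
        if self.values:
            return ("value", self.values.popleft())
        if self.forks == 0:
            return "empty"      # no producers left: terminate cleanly
        return "blocked"        # real gatherer parks the caller here
```

The key property is that `out` only reports `empty` once every forked producer has called `finish`, which is what lets consumers drain asynchronous work without polling.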
14 | 14 | %% |
15 | 15 | |
16 | 16 | -module(lqueue). |
17 | ||
18 | %% lqueue implements a subset of Erlang's queue module. lqueues maintain | 
19 | %% their own length, so lqueue:len/1 is an O(1) operation, in contrast | 
20 | %% with queue:len/1, which is O(n). | 
17 | 21 | |
18 | 22 | -export([new/0, is_empty/1, len/1, in/2, in_r/2, out/1, out_r/1, join/2, |
19 | 23 | foldl/3, foldr/3, from_list/1, to_list/1, peek/1, peek_r/1]). |
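The point of the lqueue comment — caching the length alongside the queue so `len/1` never traverses it — can be shown with a minimal wrapper (Python sketch; method names mirror the Erlang exports but are illustrative):

```python
from collections import deque

class LQueue:
    """Queue that tracks its own length, so len() is O(1)
    regardless of how many items are enqueued."""
    def __init__(self):
        self.items = deque()
        self.length = 0

    def put(self, x):            # stands in for lqueue:in/2
        self.items.append(x)
        self.length += 1

    def out(self):               # stands in for lqueue:out/1
        if self.length == 0:
            return "empty"
        self.length -= 1
        return ("value", self.items.popleft())

    def len(self):               # O(1): read the cached counter
        return self.length
```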
14 | 14 | %% |
15 | 15 | |
16 | 16 | -module(pmon). |
17 | ||
18 | %% Process Monitor | |
19 | %% ================ | |
20 | %% | |
21 | %% This module monitors processes so that every process has at most | 
22 | %% one monitor. | 
23 | %% Monitored processes can be added and removed dynamically. | 
24 | %% | |
25 | %% Unlike erlang:[de]monitor* functions, this module | |
26 | %% provides basic querying capability and avoids contacting down nodes. | |
27 | %% | |
28 | %% It is used to monitor nodes, queue mirrors, and by | |
29 | %% the queue collector, among other things. | |
17 | 30 | |
18 | 31 | -export([new/0, new/1, monitor/2, monitor_all/2, demonitor/2, |
19 | 32 | is_monitored/2, erase/2, monitored/1, is_empty/1]). |
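The "at most one monitor per process" guarantee from the pmon comment amounts to keeping a process-to-monitor-ref map and making `monitor` idempotent. A Python model, with monitor refs faked as integers and all names illustrative:

```python
import itertools

class PMon:
    """process -> monitor ref map; monitor() is idempotent per pid."""
    _refs = itertools.count(1)   # fake ref generator

    def __init__(self):
        self.monitors = {}

    def monitor(self, pid):
        # Only create a monitor if one does not already exist,
        # so a process never accumulates duplicate monitors.
        if pid not in self.monitors:
            self.monitors[pid] = next(self._refs)
        return self.monitors[pid]

    def is_monitored(self, pid):
        return pid in self.monitors

    def demonitor(self, pid):
        self.monitors.pop(pid, None)
```

With `erlang:monitor/2` alone, monitoring the same pid twice yields two refs and two DOWN messages; the map is what collapses that to one.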
115 | 115 | {mfa, {rabbit_sup, start_restartable_child, |
116 | 116 | [rabbit_node_monitor]}}, |
117 | 117 | {requires, [rabbit_alarm, guid_generator]}, |
118 | {enables, core_initialized}]}). | |
119 | ||
120 | -rabbit_boot_step({rabbit_epmd_monitor, | |
121 | [{description, "epmd monitor"}, | |
122 | {mfa, {rabbit_sup, start_restartable_child, | |
123 | [rabbit_epmd_monitor]}}, | |
124 | {requires, kernel_ready}, | |
118 | 125 | {enables, core_initialized}]}). |
119 | 126 | |
120 | 127 | -rabbit_boot_step({core_initialized, |
242 | 249 | {ok, Want} = application:get_env(rabbit, hipe_compile), |
243 | 250 | Can = code:which(hipe) =/= non_existing, |
244 | 251 | case {Want, Can} of |
245 | {true, true} -> hipe_compile(), | |
246 | true; | |
252 | {true, true} -> hipe_compile(); | |
247 | 253 | {true, false} -> false; |
248 | {false, _} -> true | |
249 | end. | |
250 | ||
251 | warn_if_hipe_compilation_failed(true) -> | |
254 | {false, _} -> {ok, disabled} | |
255 | end. | |
256 | ||
257 | log_hipe_result({ok, disabled}) -> | |
252 | 258 | ok; |
253 | warn_if_hipe_compilation_failed(false) -> | |
259 | log_hipe_result({ok, Count, Duration}) -> | |
260 | rabbit_log:info( | |
261 | "HiPE in use: compiled ~B modules in ~Bs.~n", [Count, Duration]); | |
262 | log_hipe_result(false) -> | |
263 | io:format( | |
264 | "~nNot HiPE compiling: HiPE not found in this Erlang installation.~n"), | |
254 | 265 | rabbit_log:warning( |
255 | 266 | "Not HiPE compiling: HiPE not found in this Erlang installation.~n"). |
256 | 267 | |
275 | 286 | {'DOWN', MRef, process, _, Reason} -> exit(Reason) |
276 | 287 | end || {_Pid, MRef} <- PidMRefs], |
277 | 288 | T2 = erlang:now(), |
278 | io:format("|~n~nCompiled ~B modules in ~Bs~n", | |
279 | [Count, timer:now_diff(T2, T1) div 1000000]). | |
289 | Duration = timer:now_diff(T2, T1) div 1000000, | |
290 | io:format("|~n~nCompiled ~B modules in ~Bs~n", [Count, Duration]), | |
291 | {ok, Count, Duration}. | |
280 | 292 | |
281 | 293 | split(L, N) -> split0(L, [[] || _ <- lists:seq(1, N)]). |
282 | 294 | |
306 | 318 | boot() -> |
307 | 319 | start_it(fun() -> |
308 | 320 | ok = ensure_application_loaded(), |
309 | Success = maybe_hipe_compile(), | |
321 | HipeResult = maybe_hipe_compile(), | |
310 | 322 | ok = ensure_working_log_handlers(), |
311 | warn_if_hipe_compilation_failed(Success), | |
323 | log_hipe_result(HipeResult), | |
312 | 324 | rabbit_node_monitor:prepare_cluster_status_files(), |
313 | 325 | ok = rabbit_upgrade:maybe_upgrade_mnesia(), |
314 | 326 | %% It's important that the consistency check happens after |
322 | 334 | Plugins = rabbit_plugins:setup(), |
323 | 335 | ToBeLoaded = Plugins ++ ?APPS, |
324 | 336 | start_apps(ToBeLoaded), |
337 | case code:load_file(sd_notify) of | |
338 | {module, sd_notify} -> SDNotify = sd_notify, | |
339 | SDNotify:sd_notify(0, "READY=1"); | |
340 | {error, _} -> ok | |
341 | end, | |
325 | 342 | ok = log_broker_started(rabbit_plugins:active()). |
326 | 343 | |
327 | 344 | start_it(StartFun) -> |
333 | 350 | false -> StartFun() |
334 | 351 | end |
335 | 352 | catch |
336 | throw:{could_not_start, _App, _Reason}=Err -> | |
353 | throw:{could_not_start, _App, _Reason} = Err -> | |
337 | 354 | boot_error(Err, not_available); |
338 | 355 | _:Reason -> |
339 | 356 | boot_error(Reason, erlang:get_stacktrace()) |
386 | 403 | ok. |
387 | 404 | |
388 | 405 | handle_app_error(Term) -> |
389 | fun(App, {bad_return, {_MFA, {'EXIT', {ExitReason, _}}}}) -> | |
406 | fun(App, {bad_return, {_MFA, {'EXIT', ExitReason}}}) -> | |
390 | 407 | throw({Term, App, ExitReason}); |
391 | 408 | (App, Reason) -> |
392 | 409 | throw({Term, App, Reason}) |
393 | 410 | end. |
394 | 411 | |
395 | 412 | run_cleanup_steps(Apps) -> |
396 | [run_step(Name, Attrs, cleanup) || {_, Name, Attrs} <- find_steps(Apps)], | |
413 | [run_step(Attrs, cleanup) || Attrs <- find_steps(Apps)], | |
397 | 414 | ok. |
398 | 415 | |
399 | 416 | await_startup() -> |
521 | 538 | run_boot_steps([App || {App, _, _} <- application:loaded_applications()]). |
522 | 539 | |
523 | 540 | run_boot_steps(Apps) -> |
524 | [ok = run_step(Step, Attrs, mfa) || {_, Step, Attrs} <- find_steps(Apps)], | |
541 | [ok = run_step(Attrs, mfa) || Attrs <- find_steps(Apps)], | |
525 | 542 | ok. |
526 | 543 | |
527 | 544 | find_steps(Apps) -> |
528 | 545 | All = sort_boot_steps(rabbit_misc:all_module_attributes(rabbit_boot_step)), |
529 | [Step || {App, _, _} = Step <- All, lists:member(App, Apps)]. | |
530 | ||
531 | run_step(StepName, Attributes, AttributeName) -> | |
546 | [Attrs || {App, _, Attrs} <- All, lists:member(App, Apps)]. | |
547 | ||
548 | run_step(Attributes, AttributeName) -> | |
532 | 549 | case [MFA || {Key, MFA} <- Attributes, |
533 | 550 | Key =:= AttributeName] of |
534 | 551 | [] -> |
535 | 552 | ok; |
536 | 553 | MFAs -> |
537 | [try | |
538 | apply(M,F,A) | |
539 | of | |
554 | [case apply(M,F,A) of | |
540 | 555 | ok -> ok; |
541 | {error, Reason} -> boot_error({boot_step, StepName, Reason}, | |
542 | not_available) | |
543 | catch | |
544 | _:Reason -> boot_error({boot_step, StepName, Reason}, | |
545 | erlang:get_stacktrace()) | |
556 | {error, Reason} -> exit({error, Reason}) | |
546 | 557 | end || {M,F,A} <- MFAs], |
547 | 558 | ok |
548 | 559 | end. |
580 | 591 | {_App, StepName, Attributes} <- SortedSteps, |
581 | 592 | {mfa, {M,F,A}} <- Attributes, |
582 | 593 | not erlang:function_exported(M, F, length(A))] of |
583 | [] -> SortedSteps; | |
584 | MissingFunctions -> basic_boot_error( | |
585 | {missing_functions, MissingFunctions}, | |
586 | "Boot step functions not exported: ~p~n", | |
587 | [MissingFunctions]) | |
594 | [] -> SortedSteps; | |
595 | MissingFns -> exit({boot_functions_not_exported, MissingFns}) | |
588 | 596 | end; |
589 | 597 | {error, {vertex, duplicate, StepName}} -> |
590 | basic_boot_error({duplicate_boot_step, StepName}, | |
591 | "Duplicate boot step name: ~w~n", [StepName]); | |
598 | exit({duplicate_boot_step, StepName}); | |
592 | 599 | {error, {edge, Reason, From, To}} -> |
593 | basic_boot_error( | |
594 | {invalid_boot_step_dependency, From, To}, | |
595 | "Could not add boot step dependency of ~w on ~w:~n~s", | |
596 | [To, From, | |
597 | case Reason of | |
598 | {bad_vertex, V} -> | |
599 | io_lib:format("Boot step not registered: ~w~n", [V]); | |
600 | {bad_edge, [First | Rest]} -> | |
601 | [io_lib:format("Cyclic dependency: ~w", [First]), | |
602 | [io_lib:format(" depends on ~w", [Next]) || | |
603 | Next <- Rest], | |
604 | io_lib:format(" depends on ~w~n", [First])] | |
605 | end]) | |
600 | exit({invalid_boot_step_dependency, From, To, Reason}) | |
606 | 601 | end. |
607 | 602 | |
608 | 603 | -ifdef(use_specs). |
609 | 604 | -spec(boot_error/2 :: (term(), not_available | [tuple()]) -> no_return()). |
610 | 605 | -endif. |
611 | boot_error(Term={error, {timeout_waiting_for_tables, _}}, _Stacktrace) -> | |
606 | boot_error({could_not_start, rabbit, {{timeout_waiting_for_tables, _}, _}}, | |
607 | _Stacktrace) -> | |
612 | 608 | AllNodes = rabbit_mnesia:cluster_nodes(all), |
609 | Suffix = "~nBACKGROUND~n==========~n~n" | |
610 | "This cluster node was shut down while other nodes were still running.~n" | |
611 | "To avoid losing data, you should start the other nodes first, then~n" | |
612 | "start this one. To force this node to start, first invoke~n" | |
613 | "\"rabbitmqctl force_boot\". If you do so, any changes made on other~n" | |
614 | "cluster nodes after this one was shut down may be lost.~n", | |
613 | 615 | {Err, Nodes} = |
614 | 616 | case AllNodes -- [node()] of |
615 | 617 | [] -> {"Timeout contacting cluster nodes. Since RabbitMQ was" |
616 | 618 | " shut down forcefully~nit cannot determine which nodes" |
617 | " are timing out.~n", []}; | |
619 | " are timing out.~n" ++ Suffix, []}; | |
618 | 620 | Ns -> {rabbit_misc:format( |
619 | "Timeout contacting cluster nodes: ~p.~n", [Ns]), | |
621 | "Timeout contacting cluster nodes: ~p.~n" ++ Suffix, [Ns]), | |
620 | 622 | Ns} |
621 | 623 | end, |
622 | basic_boot_error(Term, | |
623 | Err ++ rabbit_nodes:diagnostics(Nodes) ++ "~n~n", []); | |
624 | log_boot_error_and_exit( | |
625 | timeout_waiting_for_tables, | |
626 | Err ++ rabbit_nodes:diagnostics(Nodes) ++ "~n~n", []); | |
624 | 627 | boot_error(Reason, Stacktrace) -> |
625 | Fmt = "Error description:~n ~p~n~n" ++ | |
628 | Fmt = "Error description:~n ~p~n~n" | |
626 | 629 | "Log files (may contain more information):~n ~s~n ~s~n~n", |
627 | 630 | Args = [Reason, log_location(kernel), log_location(sasl)], |
628 | 631 | boot_error(Reason, Fmt, Args, Stacktrace). |
632 | 635 | -> no_return()). |
633 | 636 | -endif. |
634 | 637 | boot_error(Reason, Fmt, Args, not_available) -> |
635 | basic_boot_error(Reason, Fmt, Args); | |
638 | log_boot_error_and_exit(Reason, Fmt, Args); | |
636 | 639 | boot_error(Reason, Fmt, Args, Stacktrace) -> |
637 | basic_boot_error(Reason, Fmt ++ "Stack trace:~n ~p~n~n", | |
638 | Args ++ [Stacktrace]). | |
639 | ||
640 | basic_boot_error(Reason, Format, Args) -> | |
640 | log_boot_error_and_exit(Reason, Fmt ++ "Stack trace:~n ~p~n~n", | |
641 | Args ++ [Stacktrace]). | |
642 | ||
643 | log_boot_error_and_exit(Reason, Format, Args) -> | |
641 | 644 | io:format("~n~nBOOT FAILED~n===========~n~n" ++ Format, Args), |
642 | 645 | rabbit_log:info(Format, Args), |
643 | 646 | timer:sleep(1000), |
644 | exit({?MODULE, failure_during_boot, Reason}). | |
647 | exit(Reason). | |
645 | 648 | |
646 | 649 | %%--------------------------------------------------------------------------- |
647 | 650 | %% boot step functions |
18 | 18 | -include("rabbit.hrl"). |
19 | 19 | |
20 | 20 | -export([check_user_pass_login/2, check_user_login/2, check_user_loopback/2, |
21 | check_vhost_access/2, check_resource_access/3]). | |
21 | check_vhost_access/3, check_resource_access/3]). | |
22 | 22 | |
23 | 23 | %%---------------------------------------------------------------------------- |
24 | 24 | |
30 | 30 | |
31 | 31 | -spec(check_user_pass_login/2 :: |
32 | 32 | (rabbit_types:username(), rabbit_types:password()) |
33 | -> {'ok', rabbit_types:user()} | {'refused', string(), [any()]}). | |
33 | -> {'ok', rabbit_types:user()} | | |
34 | {'refused', rabbit_types:username(), string(), [any()]}). | |
34 | 35 | -spec(check_user_login/2 :: |
35 | 36 | (rabbit_types:username(), [{atom(), any()}]) |
36 | -> {'ok', rabbit_types:user()} | {'refused', string(), [any()]}). | |
37 | -> {'ok', rabbit_types:user()} | | |
38 | {'refused', rabbit_types:username(), string(), [any()]}). | |
37 | 39 | -spec(check_user_loopback/2 :: (rabbit_types:username(), |
38 | 40 | rabbit_net:socket() | inet:ip_address()) |
39 | 41 | -> 'ok' | 'not_allowed'). |
40 | -spec(check_vhost_access/2 :: | |
41 | (rabbit_types:user(), rabbit_types:vhost()) | |
42 | -spec(check_vhost_access/3 :: | |
43 | (rabbit_types:user(), rabbit_types:vhost(), rabbit_net:socket()) | |
42 | 44 | -> 'ok' | rabbit_types:channel_exit()). |
43 | 45 | -spec(check_resource_access/3 :: |
44 | 46 | (rabbit_types:user(), rabbit_types:r(atom()), permission_atom()) |
54 | 56 | check_user_login(Username, AuthProps) -> |
55 | 57 | {ok, Modules} = application:get_env(rabbit, auth_backends), |
56 | 58 | R = lists:foldl( |
57 | fun ({ModN, ModZ}, {refused, _, _}) -> | |
59 | fun ({ModN, ModZs0}, {refused, _, _, _}) -> | |
60 | ModZs = case ModZs0 of | |
61 | A when is_atom(A) -> [A]; | |
62 | L when is_list(L) -> L | |
63 | end, | |
58 | 64 | %% Different modules for authN vs authZ. So authenticate |
59 | 65 | %% with authN module, then if that succeeds do |
60 | %% passwordless (i.e pre-authenticated) login with authZ | |
61 | %% module, and use the #user{} the latter gives us. | |
62 | case try_login(ModN, Username, AuthProps) of | |
63 | {ok, _} -> try_login(ModZ, Username, []); | |
64 | Else -> Else | |
66 | %% passwordless (i.e pre-authenticated) login with authZ. | |
67 | case try_authenticate(ModN, Username, AuthProps) of | |
68 | {ok, ModNUser = #auth_user{username = Username2}} -> | |
69 | user(ModNUser, try_authorize(ModZs, Username2)); | |
70 | Else -> | |
71 | Else | |
65 | 72 | end; |
66 | (Mod, {refused, _, _}) -> | |
73 | (Mod, {refused, _, _, _}) -> | |
67 | 74 | %% Same module for authN and authZ. Just take the result |
68 | 75 | %% it gives us |
69 | try_login(Mod, Username, AuthProps); | |
76 | case try_authenticate(Mod, Username, AuthProps) of | |
77 | {ok, ModNUser = #auth_user{impl = Impl}} -> | |
78 | user(ModNUser, {ok, [{Mod, Impl}]}); | |
79 | Else -> | |
80 | Else | |
81 | end; | |
70 | 82 | (_, {ok, User}) -> |
71 | 83 | %% We've successfully authenticated. Skip to the end... |
72 | 84 | {ok, User} |
73 | end, {refused, "No modules checked '~s'", [Username]}, Modules), | |
74 | rabbit_event:notify(case R of | |
75 | {ok, _User} -> user_authentication_success; | |
76 | _ -> user_authentication_failure | |
77 | end, [{name, Username}]), | |
85 | end, | |
86 | {refused, Username, "No modules checked '~s'", [Username]}, Modules), | |
78 | 87 | R. |
79 | 88 | |
80 | try_login(Module, Username, AuthProps) -> | |
81 | case Module:check_user_login(Username, AuthProps) of | |
82 | {error, E} -> {refused, "~s failed authenticating ~s: ~p~n", | |
83 | [Module, Username, E]}; | |
84 | Else -> Else | |
89 | try_authenticate(Module, Username, AuthProps) -> | |
90 | case Module:user_login_authentication(Username, AuthProps) of | |
91 | {ok, AuthUser} -> {ok, AuthUser}; | |
92 | {error, E} -> {refused, Username, | |
93 | "~s failed authenticating ~s: ~p~n", | |
94 | [Module, Username, E]}; | |
95 | {refused, F, A} -> {refused, Username, F, A} | |
85 | 96 | end. |
97 | ||
98 | try_authorize(Modules, Username) -> | |
99 | lists:foldr( | |
100 | fun (Module, {ok, ModsImpls}) -> | |
101 | case Module:user_login_authorization(Username) of | |
102 | {ok, Impl} -> {ok, [{Module, Impl} | ModsImpls]}; | |
103 | {error, E} -> {refused, Username, | |
104 | "~s failed authorizing ~s: ~p~n", | |
105 | [Module, Username, E]}; | |
106 | {refused, F, A} -> {refused, Username, F, A} | |
107 | end; | |
108 | (_, {refused, F, A}) -> | |
109 | {refused, Username, F, A} | |
110 | end, {ok, []}, Modules). | |
111 | ||
112 | user(#auth_user{username = Username, tags = Tags}, {ok, ModZImpls}) -> | |
113 | {ok, #user{username = Username, | |
114 | tags = Tags, | |
115 | authz_backends = ModZImpls}}; | |
116 | user(_AuthUser, Error) -> | |
117 | Error. | |
118 | ||
119 | auth_user(#user{username = Username, tags = Tags}, Impl) -> | |
120 | #auth_user{username = Username, | |
121 | tags = Tags, | |
122 | impl = Impl}. | |
86 | 123 | |
87 | 124 | check_user_loopback(Username, SockOrAddr) -> |
88 | 125 | {ok, Users} = application:get_env(rabbit, loopback_users), |
92 | 129 | false -> not_allowed |
93 | 130 | end. |
94 | 131 | |
95 | check_vhost_access(User = #user{ username = Username, | |
96 | auth_backend = Module }, VHostPath) -> | |
97 | check_access( | |
98 | fun() -> | |
99 | %% TODO this could be an andalso shortcut under >R13A | |
100 | case rabbit_vhost:exists(VHostPath) of | |
101 | false -> false; | |
102 | true -> Module:check_vhost_access(User, VHostPath) | |
103 | end | |
104 | end, | |
105 | Module, "access to vhost '~s' refused for user '~s'", | |
106 | [VHostPath, Username]). | |
132 | check_vhost_access(User = #user{username = Username, | |
133 | authz_backends = Modules}, VHostPath, Sock) -> | |
134 | lists:foldl( | |
135 | fun({Mod, Impl}, ok) -> | |
136 | check_access( | |
137 | fun() -> | |
138 | rabbit_vhost:exists(VHostPath) andalso | |
139 | Mod:check_vhost_access( | |
140 | auth_user(User, Impl), VHostPath, Sock) | |
141 | end, | |
142 | Mod, "access to vhost '~s' refused for user '~s'", | |
143 | [VHostPath, Username]); | |
144 | (_, Else) -> | |
145 | Else | |
146 | end, ok, Modules). | |
107 | 147 | |
108 | 148 | check_resource_access(User, R = #resource{kind = exchange, name = <<"">>}, |
109 | 149 | Permission) -> |
110 | 150 | check_resource_access(User, R#resource{name = <<"amq.default">>}, |
111 | 151 | Permission); |
112 | check_resource_access(User = #user{username = Username, auth_backend = Module}, | |
152 | check_resource_access(User = #user{username = Username, | |
153 | authz_backends = Modules}, | |
113 | 154 | Resource, Permission) -> |
114 | check_access( | |
115 | fun() -> Module:check_resource_access(User, Resource, Permission) end, | |
116 | Module, "access to ~s refused for user '~s'", | |
117 | [rabbit_misc:rs(Resource), Username]). | |
155 | lists:foldl( | |
156 | fun({Module, Impl}, ok) -> | |
157 | check_access( | |
158 | fun() -> Module:check_resource_access( | |
159 | auth_user(User, Impl), Resource, Permission) end, | |
160 | Module, "access to ~s refused for user '~s'", | |
161 | [rabbit_misc:rs(Resource), Username]); | |
162 | (_, Else) -> Else | |
163 | end, ok, Modules). | |
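The two folds above replace a single-backend check with a chain of authorisation backends. A minimal Python model (illustrative only — RabbitMQ itself is Erlang, and the function name here is made up) of how the `lists:foldl` accumulator threads the first refusal through unchanged:

```python
def check_access_all(backends, check):
    """Model of the lists:foldl in check_vhost_access/3 and
    check_resource_access/3: ask each authz backend in turn; once a
    backend refuses, that result is carried through unchanged and no
    later backend can override it."""
    result = "ok"
    for backend in backends:
        if result == "ok":
            result = check(backend)  # "ok" or a refusal/error term
        # a non-"ok" accumulator falls into the (_, Else) -> Else clause
    return result
```

This matches the clause structure in the diff: the `{Mod, Impl}` clause only fires while the accumulator is still `ok`; otherwise the `(_, Else) -> Else` clause propagates the failure.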
118 | 164 | |
119 | 165 | check_access(Fun, Module, ErrStr, ErrArgs) -> |
120 | 166 | Allow = case Fun() of |
22 | 22 | -export([lookup/1, not_found_or_absent/1, with/2, with/3, with_or_die/2, |
23 | 23 | assert_equivalence/5, |
24 | 24 | check_exclusive_access/2, with_exclusive_access_or_die/3, |
25 | stat/1, deliver/2, deliver_flow/2, requeue/3, ack/3, reject/4]). | |
25 | stat/1, deliver/2, requeue/3, ack/3, reject/4]). | |
26 | 26 | -export([list/0, list/1, info_keys/0, info/1, info/2, info_all/1, info_all/2]). |
27 | 27 | -export([list_down/1]). |
28 | 28 | -export([force_event_refresh/1, notify_policy_changed/1]). |
148 | 148 | -spec(forget_all_durable/1 :: (node()) -> 'ok'). |
149 | 149 | -spec(deliver/2 :: ([rabbit_types:amqqueue()], rabbit_types:delivery()) -> |
150 | 150 | qpids()). |
151 | -spec(deliver_flow/2 :: ([rabbit_types:amqqueue()], rabbit_types:delivery()) -> | |
152 | qpids()). | |
153 | 151 | -spec(requeue/3 :: (pid(), [msg_id()], pid()) -> 'ok'). |
154 | 152 | -spec(ack/3 :: (pid(), [msg_id()], pid()) -> 'ok'). |
155 | 153 | -spec(reject/4 :: (pid(), [msg_id()], boolean(), pid()) -> 'ok'). |
264 | 262 | declare(QueueName, Durable, AutoDelete, Args, Owner, Node) -> |
265 | 263 | ok = check_declare_arguments(QueueName, Args), |
266 | 264 | Q = rabbit_queue_decorator:set( |
267 | rabbit_policy:set(#amqqueue{name = QueueName, | |
268 | durable = Durable, | |
269 | auto_delete = AutoDelete, | |
270 | arguments = Args, | |
271 | exclusive_owner = Owner, | |
272 | pid = none, | |
273 | slave_pids = [], | |
274 | sync_slave_pids = [], | |
275 | down_slave_nodes = [], | |
276 | gm_pids = [], | |
277 | state = live})), | |
265 | rabbit_policy:set(#amqqueue{name = QueueName, | |
266 | durable = Durable, | |
267 | auto_delete = AutoDelete, | |
268 | arguments = Args, | |
269 | exclusive_owner = Owner, | |
270 | pid = none, | |
271 | slave_pids = [], | |
272 | sync_slave_pids = [], | |
273 | recoverable_slaves = [], | |
274 | gm_pids = [], | |
275 | state = live})), | |
278 | 276 | Node = rabbit_mirror_queue_misc:initial_queue_node(Q, Node), |
279 | 277 | gen_server2:call( |
280 | 278 | rabbit_amqqueue_sup_sup:start_queue_process(Node, Q, declare), |
468 | 466 | {<<"x-dead-letter-exchange">>, fun check_dlxname_arg/2}, |
469 | 467 | {<<"x-dead-letter-routing-key">>, fun check_dlxrk_arg/2}, |
470 | 468 | {<<"x-max-length">>, fun check_non_neg_int_arg/2}, |
471 | {<<"x-max-length-bytes">>, fun check_non_neg_int_arg/2}]. | |
469 | {<<"x-max-length-bytes">>, fun check_non_neg_int_arg/2}, | |
470 | {<<"x-max-priority">>, fun check_non_neg_int_arg/2}]. | |
472 | 471 | |
473 | 472 | consume_args() -> [{<<"x-priority">>, fun check_int_arg/2}, |
474 | 473 | {<<"x-cancel-on-ha-failover">>, fun check_bool_arg/2}]. |
559 | 558 | info_down(Q, Items, DownReason) -> |
560 | 559 | [{Item, i_down(Item, Q, DownReason)} || Item <- Items]. |
561 | 560 | |
562 | i_down(name, #amqqueue{name = Name}, _) -> Name; | |
563 | i_down(durable, #amqqueue{durable = Durable},_) -> Durable; | |
564 | i_down(auto_delete, #amqqueue{auto_delete = AD}, _) -> AD; | |
565 | i_down(arguments, #amqqueue{arguments = Args}, _) -> Args; | |
566 | i_down(pid, #amqqueue{pid = QPid}, _) -> QPid; | |
567 | i_down(down_slave_nodes, #amqqueue{down_slave_nodes = DSN}, _) -> DSN; | |
561 | i_down(name, #amqqueue{name = Name}, _) -> Name; | |
562 | i_down(durable, #amqqueue{durable = Dur}, _) -> Dur; | |
563 | i_down(auto_delete, #amqqueue{auto_delete = AD}, _) -> AD; | |
564 | i_down(arguments, #amqqueue{arguments = Args}, _) -> Args; | |
565 | i_down(pid, #amqqueue{pid = QPid}, _) -> QPid; | |
566 | i_down(recoverable_slaves, #amqqueue{recoverable_slaves = RS}, _) -> RS; | |
568 | 567 | i_down(state, _Q, DownReason) -> DownReason; |
569 | 568 | i_down(K, _Q, _DownReason) -> |
570 | 569 | case lists:member(K, rabbit_amqqueue_process:info_keys()) of |
621 | 620 | ok = internal_delete(QName). |
622 | 621 | |
623 | 622 | purge(#amqqueue{ pid = QPid }) -> delegate:call(QPid, purge). |
624 | ||
625 | deliver(Qs, Delivery) -> deliver(Qs, Delivery, noflow). | |
626 | ||
627 | deliver_flow(Qs, Delivery) -> deliver(Qs, Delivery, flow). | |
628 | 623 | |
629 | 624 | requeue(QPid, MsgIds, ChPid) -> delegate:call(QPid, {requeue, MsgIds, ChPid}). |
630 | 625 | |
723 | 718 | fun () -> |
724 | 719 | Qs = mnesia:match_object(rabbit_durable_queue, |
725 | 720 | #amqqueue{_ = '_'}, write), |
726 | [forget_node_for_queue(Q) || #amqqueue{pid = Pid} = Q <- Qs, | |
721 | [forget_node_for_queue(Node, Q) || | |
722 | #amqqueue{pid = Pid} = Q <- Qs, | |
727 | 723 | node(Pid) =:= Node], |
728 | 724 | ok |
729 | 725 | end), |
730 | 726 | ok. |
731 | 727 | |
732 | forget_node_for_queue(#amqqueue{name = Name, | |
733 | down_slave_nodes = []}) -> | |
728 | %% Try to promote a slave while down - it should recover as a | |
729 | %% master. We try to take the oldest slave here for the best chance of | 
730 | %% recovery. | |
731 | forget_node_for_queue(DeadNode, Q = #amqqueue{recoverable_slaves = RS}) -> | |
732 | forget_node_for_queue(DeadNode, RS, Q). | |
733 | ||
734 | forget_node_for_queue(_DeadNode, [], #amqqueue{name = Name}) -> | |
734 | 735 | %% No slaves to recover from, queue is gone. |
735 | 736 | %% Don't process_deletions since that just calls callbacks and we |
736 | 737 | %% are not really up. |
737 | 738 | internal_delete1(Name, true); |
738 | 739 | |
739 | forget_node_for_queue(Q = #amqqueue{down_slave_nodes = [H|T]}) -> | |
740 | %% Promote a slave while down - it'll happily recover as a master | |
741 | Q1 = Q#amqqueue{pid = rabbit_misc:node_to_fake_pid(H), | |
742 | down_slave_nodes = T}, | |
743 | ok = mnesia:write(rabbit_durable_queue, Q1, write). | |
740 | %% Should not happen, but let's be conservative. | |
741 | forget_node_for_queue(DeadNode, [DeadNode | T], Q) -> | |
742 | forget_node_for_queue(DeadNode, T, Q); | |
743 | ||
744 | forget_node_for_queue(DeadNode, [H|T], Q) -> | |
745 | case node_permits_offline_promotion(H) of | |
746 | false -> forget_node_for_queue(DeadNode, T, Q); | |
747 | true -> Q1 = Q#amqqueue{pid = rabbit_misc:node_to_fake_pid(H)}, | |
748 | ok = mnesia:write(rabbit_durable_queue, Q1, write) | |
749 | end. | |
750 | ||
751 | node_permits_offline_promotion(Node) -> | |
752 | case node() of | |
753 | Node -> not rabbit:is_running(); %% [1] | |
754 | _ -> Running = rabbit_mnesia:cluster_nodes(running), | |
755 | not lists:member(Node, Running) %% [2] | |
756 | end. | |
757 | %% [1] In this case if we are a real running node (i.e. rabbitmqctl | |
758 | %% has RPCed into us) then we cannot allow promotion. If on the other | |
759 | %% hand we *are* rabbitmqctl impersonating the node for offline | |
760 | %% node-forgetting then we can. | |
761 | %% | |
762 | %% [2] This is simpler; as long as the node is down, promotion is OK. | 
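The promotion walk in the new `forget_node_for_queue/3` clauses can be sketched in Python (a hedged model, not the real implementation — `permits` stands in for `node_permits_offline_promotion/1`):

```python
def forget_node_for_queue(dead_node, recoverable_slaves, permits):
    """Model of forget_node_for_queue/3: skip the node being
    forgotten, promote the first recoverable slave whose node permits
    offline promotion, and fall back to deleting the queue record
    when the candidate list is exhausted."""
    for node in recoverable_slaves:
        if node == dead_node:
            continue  # should not happen, but be conservative
        if permits(node):
            return ("promote", node)
    return ("delete",)
```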
744 | 763 | |
745 | 764 | run_backing_queue(QPid, Mod, Fun) -> |
746 | 765 | gen_server2:cast(QPid, {run_backing_queue, Mod, Fun}). |
762 | 781 | fun () -> |
763 | 782 | Qs = mnesia:match_object(rabbit_queue, |
764 | 783 | #amqqueue{_ = '_'}, write), |
765 | [case lists:member(Node, DSNs) of | |
766 | true -> DSNs1 = DSNs -- [Node], | |
784 | [case lists:member(Node, RSs) of | |
785 | true -> RSs1 = RSs -- [Node], | |
767 | 786 | store_queue( |
768 | Q#amqqueue{down_slave_nodes = DSNs1}); | |
787 | Q#amqqueue{recoverable_slaves = RSs1}); | |
769 | 788 | false -> ok |
770 | end || #amqqueue{down_slave_nodes = DSNs} = Q <- Qs], | |
789 | end || #amqqueue{recoverable_slaves = RSs} = Q <- Qs], | |
771 | 790 | ok |
772 | 791 | end). |
773 | 792 | |
806 | 825 | pid = Pid, |
807 | 826 | slave_pids = []}. |
808 | 827 | |
809 | immutable(Q) -> Q#amqqueue{pid = none, | |
810 | slave_pids = none, | |
811 | sync_slave_pids = none, | |
812 | down_slave_nodes = none, | |
813 | gm_pids = none, | |
814 | policy = none, | |
815 | decorators = none, | |
816 | state = none}. | |
817 | ||
818 | deliver([], _Delivery, _Flow) -> | |
828 | immutable(Q) -> Q#amqqueue{pid = none, | |
829 | slave_pids = none, | |
830 | sync_slave_pids = none, | |
831 | recoverable_slaves = none, | |
832 | gm_pids = none, | |
833 | policy = none, | |
834 | decorators = none, | |
835 | state = none}. | |
836 | ||
837 | deliver([], _Delivery) -> | |
819 | 838 | %% /dev/null optimisation |
820 | 839 | []; |
821 | 840 | |
822 | deliver(Qs, Delivery, Flow) -> | |
841 | deliver(Qs, Delivery = #delivery{flow = Flow}) -> | |
823 | 842 | {MPids, SPids} = qpids(Qs), |
824 | 843 | QPids = MPids ++ SPids, |
844 | %% We use up two credits to send to a slave since the message | |
845 | %% arrives at the slave from two directions. We will ack one when | |
846 | %% the slave receives the message direct from the channel, and the | |
847 | %% other when it receives it via GM. | |
825 | 848 | case Flow of |
826 | flow -> [credit_flow:send(QPid) || QPid <- QPids]; | |
849 | flow -> [credit_flow:send(QPid) || QPid <- QPids], | |
850 | [credit_flow:send(QPid) || QPid <- SPids]; | |
827 | 851 | noflow -> ok |
828 | 852 | end, |
829 | 853 | |
832 | 856 | %% after they have become master they should mark the message as |
833 | 857 | %% 'delivered' since they do not know what the master may have |
834 | 858 | %% done with it. |
835 | MMsg = {deliver, Delivery, false, Flow}, | |
836 | SMsg = {deliver, Delivery, true, Flow}, | |
859 | MMsg = {deliver, Delivery, false}, | |
860 | SMsg = {deliver, Delivery, true}, | |
837 | 861 | delegate:cast(MPids, MMsg), |
838 | 862 | delegate:cast(SPids, SMsg), |
839 | 863 | QPids. |
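The credit accounting added to `deliver/2` can be modelled as follows (a Python sketch for illustration; the helper name is invented):

```python
def credits_used(master_pids, slave_pids, flow):
    """Model of the flow-control accounting in deliver/2: under flow
    control every destination pid costs one credit, and slave pids
    cost a second one because the message reaches a slave twice
    (directly from the channel and again via GM)."""
    if flow != "flow":
        return {}
    credits = {pid: 1 for pid in master_pids + slave_pids}
    for pid in slave_pids:
        credits[pid] += 1  # second credit, acked on GM delivery
    return credits
```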
82 | 82 | memory, |
83 | 83 | slave_pids, |
84 | 84 | synchronised_slave_pids, |
85 | down_slave_nodes, | |
85 | recoverable_slaves, | |
86 | 86 | state |
87 | 87 | ]). |
88 | 88 | |
152 | 152 | #amqqueue{} = Q1 -> |
153 | 153 | case matches(Recover, Q, Q1) of |
154 | 154 | true -> |
155 | send_reply(From, {new, Q}), | |
156 | 155 | ok = file_handle_cache:register_callback( |
157 | 156 | rabbit_amqqueue, set_maximum_since_use, [self()]), |
158 | 157 | ok = rabbit_memory_monitor:register( |
160 | 159 | set_ram_duration_target, [self()]}), |
161 | 160 | BQ = backing_queue_module(Q1), |
162 | 161 | BQS = bq_init(BQ, Q, TermsOrNew), |
162 | send_reply(From, {new, Q}), | |
163 | 163 | recovery_barrier(Barrier), |
164 | 164 | State1 = process_args_policy( |
165 | 165 | State#q{backing_queue = BQ, |
497 | 497 | |
498 | 498 | discard(#delivery{confirm = Confirm, |
499 | 499 | sender = SenderPid, |
500 | flow = Flow, | |
500 | 501 | message = #basic_message{id = MsgId}}, BQ, BQS, MTC) -> |
501 | 502 | MTC1 = case Confirm of |
502 | 503 | true -> confirm_messages([MsgId], MTC); |
503 | 504 | false -> MTC |
504 | 505 | end, |
505 | BQS1 = BQ:discard(MsgId, SenderPid, BQS), | |
506 | BQS1 = BQ:discard(MsgId, SenderPid, Flow, BQS), | |
506 | 507 | {BQS1, MTC1}. |
507 | 508 | |
508 | 509 | run_message_queue(State) -> run_message_queue(false, State). |
524 | 525 | end |
525 | 526 | end. |
526 | 527 | |
527 | attempt_delivery(Delivery = #delivery{sender = SenderPid, message = Message}, | |
528 | attempt_delivery(Delivery = #delivery{sender = SenderPid, | |
529 | flow = Flow, | |
530 | message = Message}, | |
528 | 531 | Props, Delivered, State = #q{backing_queue = BQ, |
529 | 532 | backing_queue_state = BQS, |
530 | 533 | msg_id_to_channel = MTC}) -> |
531 | 534 | case rabbit_queue_consumers:deliver( |
532 | 535 | fun (true) -> true = BQ:is_empty(BQS), |
533 | {AckTag, BQS1} = BQ:publish_delivered( | |
534 | Message, Props, SenderPid, BQS), | |
536 | {AckTag, BQS1} = | |
537 | BQ:publish_delivered( | |
538 | Message, Props, SenderPid, Flow, BQS), | |
535 | 539 | {{Message, Delivered, AckTag}, {BQS1, MTC}}; |
536 | 540 | (false) -> {{Message, Delivered, undefined}, |
537 | 541 | discard(Delivery, BQ, BQS, MTC)} |
548 | 552 | State#q{consumers = Consumers})} |
549 | 553 | end. |
550 | 554 | |
551 | deliver_or_enqueue(Delivery = #delivery{message = Message, sender = SenderPid}, | |
555 | deliver_or_enqueue(Delivery = #delivery{message = Message, | |
556 | sender = SenderPid, | |
557 | flow = Flow}, | |
552 | 558 | Delivered, State = #q{backing_queue = BQ, |
553 | 559 | backing_queue_state = BQS}) -> |
554 | 560 | send_mandatory(Delivery), %% must do this before confirms |
569 | 575 | {BQS3, MTC1} = discard(Delivery, BQ, BQS2, MTC), |
570 | 576 | State3#q{backing_queue_state = BQS3, msg_id_to_channel = MTC1}; |
571 | 577 | {undelivered, State3 = #q{backing_queue_state = BQS2}} -> |
572 | BQS3 = BQ:publish(Message, Props, Delivered, SenderPid, BQS2), | |
578 | BQS3 = BQ:publish(Message, Props, Delivered, SenderPid, Flow, BQS2), | |
573 | 579 | {Dropped, State4 = #q{backing_queue_state = BQS4}} = |
574 | 580 | maybe_drop_head(State3#q{backing_queue_state = BQS3}), |
575 | 581 | QLen = BQ:len(BQS4), |
854 | 860 | false -> ''; |
855 | 861 | true -> SSPids |
856 | 862 | end; |
857 | i(down_slave_nodes, #q{q = #amqqueue{name = Name, | |
858 | durable = Durable}}) -> | |
859 | {ok, Q = #amqqueue{down_slave_nodes = Nodes}} = | |
863 | i(recoverable_slaves, #q{q = #amqqueue{name = Name, | |
864 | durable = Durable}}) -> | |
865 | {ok, Q = #amqqueue{recoverable_slaves = Nodes}} = | |
860 | 866 | rabbit_amqqueue:lookup(Name), |
861 | 867 | case Durable andalso rabbit_mirror_queue_misc:is_mirrored(Q) of |
862 | 868 | false -> ''; |
1099 | 1105 | State = #q{backing_queue = BQ, backing_queue_state = BQS}) -> |
1100 | 1106 | noreply(State#q{backing_queue_state = BQ:invoke(Mod, Fun, BQS)}); |
1101 | 1107 | |
1102 | handle_cast({deliver, Delivery = #delivery{sender = Sender}, Delivered, Flow}, | |
1108 | handle_cast({deliver, Delivery = #delivery{sender = Sender, | |
1109 | flow = Flow}, SlaveWhenPublished}, | |
1103 | 1110 | State = #q{senders = Senders}) -> |
1104 | 1111 | Senders1 = case Flow of |
1105 | 1112 | flow -> credit_flow:ack(Sender), |
1113 | case SlaveWhenPublished of | |
1114 | true -> credit_flow:ack(Sender); %% [0] | |
1115 | false -> ok | |
1116 | end, | |
1106 | 1117 | pmon:monitor(Sender, Senders); |
1107 | 1118 | noflow -> Senders |
1108 | 1119 | end, |
1109 | 1120 | State1 = State#q{senders = Senders1}, |
1110 | noreply(deliver_or_enqueue(Delivery, Delivered, State1)); | |
1121 | noreply(deliver_or_enqueue(Delivery, SlaveWhenPublished, State1)); | |
1122 | %% [0] The second ack is since the channel thought we were a slave at | |
1123 | %% the time it published this message, so it used two credits (see | |
1124 | %% rabbit_amqqueue:deliver/2). | |
1111 | 1125 | |
1112 | 1126 | handle_cast({ack, AckTags, ChPid}, State) -> |
1113 | 1127 | noreply(ack(AckTags, ChPid, State)); |
0 | %% The contents of this file are subject to the Mozilla Public License | |
1 | %% Version 1.1 (the "License"); you may not use this file except in | |
2 | %% compliance with the License. You may obtain a copy of the License | |
3 | %% at http://www.mozilla.org/MPL/ | |
4 | %% | |
5 | %% Software distributed under the License is distributed on an "AS IS" | |
6 | %% basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See | |
7 | %% the License for the specific language governing rights and | |
8 | %% limitations under the License. | |
9 | %% | |
10 | %% The Original Code is RabbitMQ. | |
11 | %% | |
12 | %% The Initial Developer of the Original Code is GoPivotal, Inc. | |
13 | %% Copyright (c) 2007-2014 GoPivotal, Inc. All rights reserved. | |
14 | %% | |
15 | ||
16 | -module(rabbit_auth_backend). | |
17 | ||
18 | -ifdef(use_specs). | |
19 | ||
20 | %% A description proplist as with auth mechanisms, | |
21 | %% exchanges. Currently unused. | |
22 | -callback description() -> [proplists:property()]. | |
23 | ||
24 | %% Check a user can log in, given a username and a proplist of | |
25 | %% authentication information (e.g. [{password, Password}]). | |
26 | %% | |
27 | %% Possible responses: | |
28 | %% {ok, User} | |
29 | %% Authentication succeeded, and here's the user record. | |
30 | %% {error, Error} | |
31 | %% Something went wrong. Log and die. | |
32 | %% {refused, Msg, Args} | |
33 | %% Client failed authentication. Log and die. | |
34 | -callback check_user_login(rabbit_types:username(), [term()]) -> | |
35 | {'ok', rabbit_types:user()} | | |
36 | {'refused', string(), [any()]} | | |
37 | {'error', any()}. | |
38 | ||
39 | %% Given #user and vhost, can a user log in to a vhost? | |
40 | %% Possible responses: | |
41 | %% true | |
42 | %% false | |
43 | %% {error, Error} | |
44 | %% Something went wrong. Log and die. | |
45 | -callback check_vhost_access(rabbit_types:user(), rabbit_types:vhost()) -> | |
46 | boolean() | {'error', any()}. | |
47 | ||
48 | ||
49 | %% Given #user, resource and permission, can a user access a resource? | |
50 | %% | |
51 | %% Possible responses: | |
52 | %% true | |
53 | %% false | |
54 | %% {error, Error} | |
55 | %% Something went wrong. Log and die. | |
56 | -callback check_resource_access(rabbit_types:user(), | |
57 | rabbit_types:r(atom()), | |
58 | rabbit_access_control:permission_atom()) -> | |
59 | boolean() | {'error', any()}. | |
60 | ||
61 | -else. | |
62 | ||
63 | -export([behaviour_info/1]). | |
64 | ||
65 | behaviour_info(callbacks) -> | |
66 | [{description, 0}, {check_user_login, 2}, {check_vhost_access, 2}, | |
67 | {check_resource_access, 3}]; | |
68 | behaviour_info(_Other) -> | |
69 | undefined. | |
70 | ||
71 | -endif. |
16 | 16 | -module(rabbit_auth_backend_dummy). |
17 | 17 | -include("rabbit.hrl"). |
18 | 18 | |
19 | -behaviour(rabbit_auth_backend). | |
19 | -behaviour(rabbit_authn_backend). | |
20 | -behaviour(rabbit_authz_backend). | |
20 | 21 | |
21 | -export([description/0]). | |
22 | 22 | -export([user/0]). |
23 | -export([check_user_login/2, check_vhost_access/2, check_resource_access/3]). | |
23 | -export([user_login_authentication/2, user_login_authorization/1, | |
24 | check_vhost_access/3, check_resource_access/3]). | |
24 | 25 | |
25 | 26 | -ifdef(use_specs). |
26 | 27 | |
30 | 31 | |
31 | 32 | %% A user to be used by the direct client when permission checks are |
32 | 33 | %% not needed. This user can do anything AMQPish. |
33 | user() -> #user{username = <<"none">>, | |
34 | tags = [], | |
35 | auth_backend = ?MODULE, | |
36 | impl = none}. | |
34 | user() -> #user{username = <<"none">>, | |
35 | tags = [], | |
36 | authz_backends = [{?MODULE, none}]}. | |
37 | 37 | |
38 | 38 | %% Implementation of rabbit_auth_backend |
39 | 39 | |
40 | description() -> | |
41 | [{name, <<"Dummy">>}, | |
42 | {description, <<"Database for the dummy user">>}]. | |
43 | ||
44 | check_user_login(_, _) -> | |
40 | user_login_authentication(_, _) -> | |
45 | 41 | {refused, "cannot log in conventionally as dummy user", []}. |
46 | 42 | |
47 | check_vhost_access(#user{}, _VHostPath) -> true. | |
48 | check_resource_access(#user{}, #resource{}, _Permission) -> true. | |
43 | user_login_authorization(_) -> | |
44 | {refused, "cannot log in conventionally as dummy user", []}. | |
45 | ||
46 | check_vhost_access(#auth_user{}, _VHostPath, _Sock) -> true. | |
47 | check_resource_access(#auth_user{}, #resource{}, _Permission) -> true. |
16 | 16 | -module(rabbit_auth_backend_internal). |
17 | 17 | -include("rabbit.hrl"). |
18 | 18 | |
19 | -behaviour(rabbit_auth_backend). | |
20 | ||
21 | -export([description/0]). | |
22 | -export([check_user_login/2, check_vhost_access/2, check_resource_access/3]). | |
19 | -behaviour(rabbit_authn_backend). | |
20 | -behaviour(rabbit_authz_backend). | |
21 | ||
22 | -export([user_login_authentication/2, user_login_authorization/1, | |
23 | check_vhost_access/3, check_resource_access/3]). | |
23 | 24 | |
24 | 25 | -export([add_user/2, delete_user/1, lookup_user/1, |
25 | 26 | change_password/2, clear_password/1, |
75 | 76 | %%---------------------------------------------------------------------------- |
76 | 77 | %% Implementation of rabbit_auth_backend |
77 | 78 | |
78 | description() -> | |
79 | [{name, <<"Internal">>}, | |
80 | {description, <<"Internal user / password database">>}]. | |
81 | ||
82 | check_user_login(Username, []) -> | |
79 | user_login_authentication(Username, []) -> | |
83 | 80 | internal_check_user_login(Username, fun(_) -> true end); |
84 | check_user_login(Username, [{password, Cleartext}]) -> | |
81 | user_login_authentication(Username, [{password, Cleartext}]) -> | |
85 | 82 | internal_check_user_login( |
86 | 83 | Username, |
87 | 84 | fun (#internal_user{password_hash = <<Salt:4/binary, Hash/binary>>}) -> |
89 | 86 | (#internal_user{}) -> |
90 | 87 | false |
91 | 88 | end); |
92 | check_user_login(Username, AuthProps) -> | |
89 | user_login_authentication(Username, AuthProps) -> | |
93 | 90 | exit({unknown_auth_props, Username, AuthProps}). |
91 | ||
92 | user_login_authorization(Username) -> | |
93 | case user_login_authentication(Username, []) of | |
94 | {ok, #auth_user{impl = Impl}} -> {ok, Impl}; | |
95 | Else -> Else | |
96 | end. | |
94 | 97 | |
95 | 98 | internal_check_user_login(Username, Fun) -> |
96 | 99 | Refused = {refused, "user '~s' - invalid credentials", [Username]}, |
97 | 100 | case lookup_user(Username) of |
98 | 101 | {ok, User = #internal_user{tags = Tags}} -> |
99 | 102 | case Fun(User) of |
100 | true -> {ok, #user{username = Username, | |
101 | tags = Tags, | |
102 | auth_backend = ?MODULE, | |
103 | impl = User}}; | |
103 | true -> {ok, #auth_user{username = Username, | |
104 | tags = Tags, | |
105 | impl = none}}; | |
104 | 106 | _ -> Refused |
105 | 107 | end; |
106 | 108 | {error, not_found} -> |
107 | 109 | Refused |
108 | 110 | end. |
109 | 111 | |
110 | check_vhost_access(#user{username = Username}, VHostPath) -> | |
112 | check_vhost_access(#auth_user{username = Username}, VHostPath, _Sock) -> | |
111 | 113 | case mnesia:dirty_read({rabbit_user_permission, |
112 | 114 | #user_vhost{username = Username, |
113 | 115 | virtual_host = VHostPath}}) of |
115 | 117 | [_R] -> true |
116 | 118 | end. |
117 | 119 | |
118 | check_resource_access(#user{username = Username}, | |
120 | check_resource_access(#auth_user{username = Username}, | |
119 | 121 | #resource{virtual_host = VHostPath, name = Name}, |
120 | 122 | Permission) -> |
121 | 123 | case mnesia:dirty_read({rabbit_user_permission, |
35 | 35 | %% Another round is needed. Here's the state I want next time. |
36 | 36 | %% {protocol_error, Msg, Args} |
37 | 37 | %% Client got the protocol wrong. Log and die. |
38 | %% {refused, Msg, Args} | |
38 | %% {refused, Username, Msg, Args} | |
39 | 39 | %% Client failed authentication. Log and die. |
40 | 40 | -callback handle_response(binary(), any()) -> |
41 | 41 | {'ok', rabbit_types:user()} | |
42 | 42 | {'challenge', binary(), any()} | |
43 | 43 | {'protocol_error', string(), [any()]} | |
44 | {'refused', string(), [any()]}. | |
44 | {'refused', rabbit_types:username() | none, string(), [any()]}. | |
45 | 45 | |
46 | 46 | -else. |
47 | 47 |
0 | %% The contents of this file are subject to the Mozilla Public License | |
1 | %% Version 1.1 (the "License"); you may not use this file except in | |
2 | %% compliance with the License. You may obtain a copy of the License | |
3 | %% at http://www.mozilla.org/MPL/ | |
4 | %% | |
5 | %% Software distributed under the License is distributed on an "AS IS" | |
6 | %% basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See | |
7 | %% the License for the specific language governing rights and | |
8 | %% limitations under the License. | |
9 | %% | |
10 | %% The Original Code is RabbitMQ. | |
11 | %% | |
12 | %% The Initial Developer of the Original Code is GoPivotal, Inc. | |
13 | %% Copyright (c) 2007-2014 GoPivotal, Inc. All rights reserved. | |
14 | %% | |
15 | ||
16 | -module(rabbit_authn_backend). | |
17 | ||
18 | -include("rabbit.hrl"). | |
19 | ||
20 | -ifdef(use_specs). | |
21 | ||
22 | %% Check a user can log in, given a username and a proplist of | |
23 | %% authentication information (e.g. [{password, Password}]). If your | |
24 | %% backend is not to be used for authentication, this should always | |
25 | %% refuse access. | |
26 | %% | |
27 | %% Possible responses: | |
28 | %% {ok, User} | |
29 | %% Authentication succeeded, and here's the user record. | |
30 | %% {error, Error} | |
31 | %% Something went wrong. Log and die. | |
32 | %% {refused, Msg, Args} | |
33 | %% Client failed authentication. Log and die. | |
34 | -callback user_login_authentication(rabbit_types:username(), [term()]) -> | |
35 | {'ok', rabbit_types:auth_user()} | | |
36 | {'refused', string(), [any()]} | | |
37 | {'error', any()}. | |
38 | ||
39 | -else. | |
40 | ||
41 | -export([behaviour_info/1]). | |
42 | ||
43 | behaviour_info(callbacks) -> | |
44 | [{user_login_authentication, 2}]; | |
45 | behaviour_info(_Other) -> | |
46 | undefined. | |
47 | ||
48 | -endif. |
0 | %% The contents of this file are subject to the Mozilla Public License | |
1 | %% Version 1.1 (the "License"); you may not use this file except in | |
2 | %% compliance with the License. You may obtain a copy of the License | |
3 | %% at http://www.mozilla.org/MPL/ | |
4 | %% | |
5 | %% Software distributed under the License is distributed on an "AS IS" | |
6 | %% basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See | |
7 | %% the License for the specific language governing rights and | |
8 | %% limitations under the License. | |
9 | %% | |
10 | %% The Original Code is RabbitMQ. | |
11 | %% | |
12 | %% The Initial Developer of the Original Code is GoPivotal, Inc. | |
13 | %% Copyright (c) 2007-2014 GoPivotal, Inc. All rights reserved. | |
14 | %% | |
15 | ||
16 | -module(rabbit_authz_backend). | |
17 | ||
18 | -include("rabbit.hrl"). | |
19 | ||
20 | -ifdef(use_specs). | |
21 | ||
22 | %% Check a user can log in, when this backend is being used for | |
23 | %% authorisation only. Authentication has already taken place | |
24 | %% successfully, but we need to check that the user exists in this | |
25 | %% backend, and initialise any impl field we will want to have passed | |
26 | %% back in future calls to check_vhost_access/3 and | |
27 | %% check_resource_access/3. | |
28 | %% | |
29 | %% Possible responses: | |
30 | %% {ok, Impl} | |
31 | %% User authorisation succeeded, and here's the impl field. | |
32 | %% {error, Error} | |
33 | %% Something went wrong. Log and die. | |
34 | %% {refused, Msg, Args} | |
35 | %% User authorisation failed. Log and die. | |
36 | -callback user_login_authorization(rabbit_types:username()) -> | |
37 | {'ok', any()} | | |
38 | {'refused', string(), [any()]} | | |
39 | {'error', any()}. | |
40 | ||
41 | %% Given #auth_user and vhost, can a user log in to a vhost? | |
42 | %% Possible responses: | |
43 | %% true | |
44 | %% false | |
45 | %% {error, Error} | |
46 | %% Something went wrong. Log and die. | |
47 | -callback check_vhost_access(rabbit_types:auth_user(), | |
48 | rabbit_types:vhost(), rabbit_net:socket()) -> | |
49 | boolean() | {'error', any()}. | |
50 | ||
51 | %% Given #auth_user, resource and permission, can a user access a resource? | |
52 | %% | |
53 | %% Possible responses: | |
54 | %% true | |
55 | %% false | |
56 | %% {error, Error} | |
57 | %% Something went wrong. Log and die. | |
58 | -callback check_resource_access(rabbit_types:auth_user(), | |
59 | rabbit_types:r(atom()), | |
60 | rabbit_access_control:permission_atom()) -> | |
61 | boolean() | {'error', any()}. | |
62 | ||
63 | -else. | |
64 | ||
65 | -export([behaviour_info/1]). | |
66 | ||
67 | behaviour_info(callbacks) -> | |
68 | [{user_login_authorization, 1}, | |
69 | {check_vhost_access, 3}, {check_resource_access, 3}]; | |
70 | behaviour_info(_Other) -> | |
71 | undefined. | |
72 | ||
73 | -endif. |
15 | 15 | |
16 | 16 | -module(rabbit_autoheal). |
17 | 17 | |
18 | -export([init/0, maybe_start/1, rabbit_down/2, node_down/2, handle_msg/3]). | |
18 | -export([init/0, enabled/0, maybe_start/1, rabbit_down/2, node_down/2, | |
19 | handle_msg/3]). | |
19 | 20 | |
20 | 21 | %% The named process we are running in. |
21 | 22 | -define(SERVER, rabbit_node_monitor). |
22 | 23 | |
23 | 24 | -define(MNESIA_STOPPED_PING_INTERNAL, 200). |
25 | ||
26 | -define(AUTOHEAL_STATE_AFTER_RESTART, rabbit_autoheal_state_after_restart). | |
24 | 27 | |
25 | 28 | %%---------------------------------------------------------------------------- |
26 | 29 | |
44 | 47 | %% stops - if a node stops for any other reason it just gets a message |
45 | 48 | %% it will ignore, and otherwise we carry on. |
46 | 49 | %% |
50 | %% Meanwhile, the leader may continue to receive new autoheal requests: | |
51 | %% all of them are ignored. The winner notifies the leader when the | |
52 | %% current autoheal process is finished (i.e. when all losers stopped and | 
53 | %% were asked to start again) or was aborted. When the leader receives | 
54 | %% the notification or if it loses contact with the winner, it can | 
55 | %% accept new autoheal requests. | |
56 | %% | |
47 | 57 | %% The winner and the leader are not necessarily the same node. |
48 | 58 | %% |
49 | %% Possible states: | |
59 | %% The leader can be a loser and will restart in this case. It remembers | |
60 | %% there is an autoheal in progress by temporarily saving the autoheal | |
61 | %% state to the application environment. | |
62 | %% | |
63 | %% == Possible states == | |
50 | 64 | %% |
51 | 65 | %% not_healing |
52 | 66 | %% - the default |
55 | 69 | %% - we are the winner and are waiting for all losing nodes to stop |
56 | 70 | %% before telling them they can restart |
57 | 71 | %% |
58 | %% about_to_heal | |
59 | %% - we are the leader, and have already assigned the winner and | |
60 | %% losers. We are part of the losers and we wait for the winner_is | |
61 | %% announcement. This leader-specific state differs from not_healing | |
62 | %% (the state other losers are in), because the leader could still | |
63 | %% receive request_start messages: those subsequent requests must be | |
64 | %% ignored. | |
65 | %% | |
66 | %% {leader_waiting, OutstandingStops} | |
72 | %% {leader_waiting, Winner, Notify} | |
67 | 73 | %% - we are the leader, and have already assigned the winner and losers. |
68 | %% We are neither but need to ignore further requests to autoheal. | |
74 | %% We are waiting for a confirmation from the winner that the autoheal | |
75 | %% process has ended. Meanwhile we can ignore autoheal requests. | |
76 | %% Because we may be a loser too, this state is saved to the application | |
77 | %% environment and restored on startup. | |
69 | 78 | %% |
70 | 79 | %% restarting |
71 | 80 | %% - we are restarting. Of course the node monitor immediately dies |
72 | 81 | %% then so this state does not last long. We therefore send the |
73 | 82 | %% autoheal_safe_to_start message to the rabbit_outside_app_process |
74 | 83 | %% instead. |
84 | %% | |
85 | %% == Message flow == | |
86 | %% | |
87 | %% 1. Any node (leader included) >> {request_start, node()} >> Leader | |
88 | %% When Mnesia detects it is running partitioned or | |
89 | %% when a remote node starts, rabbit_node_monitor calls | |
90 | %% rabbit_autoheal:maybe_start/1. The message above is sent to the | |
91 | %% leader so the leader can take a decision. | |
92 | %% | |
93 | %% 2. Leader >> {become_winner, Losers} >> Winner | |
94 | %% The leader notifies the winner so the latter can proceed with | |
95 | %% the autoheal. | |
96 | %% | |
97 | %% 3. Winner >> {winner_is, Winner} >> All losers | |
98 | %% The winner notifies losers they must stop. | |
99 | %% | |
100 | %% 4. Winner >> autoheal_safe_to_start >> All losers | |
101 | %% When either all losers stopped or the autoheal process was | |
102 | %% aborted, the winner notifies losers they can start again. | |
103 | %% | |
104 | %% 5. Leader >> report_autoheal_status >> Winner | |
105 | %%    The leader asks the winner for the autoheal status. This only | 
106 | %% happens when the leader is a loser too. If this is not the case, | |
107 | %% this message is never sent. | |
108 | %% | |
109 | %% 6. Winner >> {autoheal_finished, Winner} >> Leader | |
110 | %% The winner notifies the leader that the autoheal process was | |
111 | %% either finished or aborted (i.e. autoheal_safe_to_start was sent | 
112 | %% to losers). | |
75 | 113 | |
76 | 114 | %%---------------------------------------------------------------------------- |
77 | 115 | |
78 | init() -> not_healing. | |
116 | init() -> | |
117 | %% We check the application environment for an autoheal state | 
118 | %% saved before a restart. If this node is the leader, the state | 
119 | %% determines whether it needs to ask the winner to report on the | 
120 | %% autoheal progress. | 
121 | State = case application:get_env(rabbit, ?AUTOHEAL_STATE_AFTER_RESTART) of | |
122 | {ok, S} -> S; | |
123 | undefined -> not_healing | |
124 | end, | |
125 | ok = application:unset_env(rabbit, ?AUTOHEAL_STATE_AFTER_RESTART), | |
126 | case State of | |
127 | {leader_waiting, Winner, _} -> | |
128 | rabbit_log:info( | |
129 | "Autoheal: in progress, requesting report from ~p~n", [Winner]), | |
130 | send(Winner, report_autoheal_status); | |
131 | _ -> | |
132 | ok | |
133 | end, | |
134 | State. | |
79 | 135 | |
80 | 136 | maybe_start(not_healing) -> |
81 | 137 | case enabled() of |
82 | true -> [Leader | _] = lists:usort(rabbit_mnesia:cluster_nodes(all)), | |
138 | true -> Leader = leader(), | |
83 | 139 | send(Leader, {request_start, node()}), |
84 | 140 | rabbit_log:info("Autoheal request sent to ~p~n", [Leader]), |
85 | 141 | not_healing; |
89 | 145 | State. |
90 | 146 | |
91 | 147 | enabled() -> |
92 | {ok, autoheal} =:= application:get_env(rabbit, cluster_partition_handling). | |
93 | ||
148 | case application:get_env(rabbit, cluster_partition_handling) of | |
149 | {ok, autoheal} -> true; | |
150 | {ok, {pause_if_all_down, _, autoheal}} -> true; | |
151 | _ -> false | |
152 | end. | |
153 | ||
154 | leader() -> | |
155 | [Leader | _] = lists:usort(rabbit_mnesia:cluster_nodes(all)), | |
156 | Leader. | |
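The new `leader/0` helper makes the choice explicit: the leader is simply the head of the sorted, deduplicated cluster membership, so every node independently agrees on it without any coordination. A rough Python sketch of that selection (not part of the patch; node names are made up):

```python
def leader(cluster_nodes):
    # lists:usort/1 sorts and deduplicates; the head of the result
    # is the node every cluster member independently picks as leader.
    return sorted(set(cluster_nodes))[0]

nodes = ["rabbit@c", "rabbit@a", "rabbit@b", "rabbit@a"]
print(leader(nodes))  # -> rabbit@a
```

Because the input and the ordering are the same on every node, no messages are needed to elect the leader.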
94 | 157 | |
95 | 158 | %% This is the winner receiving its last notification that a node has |
96 | 159 | %% stopped - all nodes can now start again |
101 | 164 | rabbit_down(Node, {winner_waiting, WaitFor, Notify}) -> |
102 | 165 | {winner_waiting, WaitFor -- [Node], Notify}; |
103 | 166 | |
104 | rabbit_down(Node, {leader_waiting, [Node]}) -> | |
105 | not_healing; | |
106 | ||
107 | rabbit_down(Node, {leader_waiting, WaitFor}) -> | |
108 | {leader_waiting, WaitFor -- [Node]}; | |
167 | rabbit_down(Winner, {leader_waiting, Winner, Losers}) -> | |
168 | abort([Winner], Losers); | |
109 | 169 | |
110 | 170 | rabbit_down(_Node, State) -> |
111 | %% ignore, we already cancelled the autoheal process | |
171 | %% Ignore. Either: | |
172 | %% o we already cancelled the autoheal process; | |
173 | %% o we are still waiting for the winner's report. | 
112 | 174 | State. |
113 | 175 | |
114 | 176 | node_down(_Node, not_healing) -> |
140 | 202 | case node() =:= Winner of |
141 | 203 | true -> handle_msg({become_winner, Losers}, |
142 | 204 | not_healing, Partitions); |
143 | false -> send(Winner, {become_winner, Losers}), %% [0] | |
144 | case lists:member(node(), Losers) of | |
145 | true -> about_to_heal; | |
146 | false -> {leader_waiting, Losers} | |
147 | end | |
205 | false -> send(Winner, {become_winner, Losers}), | |
206 | {leader_waiting, Winner, Losers} | |
148 | 207 | end |
149 | 208 | end; |
150 | %% [0] If we are a loser we will never receive this message - but it | |
151 | %% won't stick in the mailbox as we are restarting anyway | |
152 | 209 | |
153 | 210 | handle_msg({request_start, Node}, |
154 | 211 | State, _Partitions) -> |
169 | 226 | _ -> abort(Down, Losers) |
170 | 227 | end; |
171 | 228 | |
172 | handle_msg({winner_is, Winner}, | |
173 | State, _Partitions) | |
174 | when State =:= not_healing orelse State =:= about_to_heal -> | |
175 | rabbit_log:warning( | |
176 | "Autoheal: we were selected to restart; winner is ~p~n", [Winner]), | |
177 | rabbit_node_monitor:run_outside_applications( | |
178 | fun () -> | |
179 | MRef = erlang:monitor(process, {?SERVER, Winner}), | |
180 | rabbit:stop(), | |
181 | receive | |
182 | {'DOWN', MRef, process, {?SERVER, Winner}, _Reason} -> ok; | |
183 | autoheal_safe_to_start -> ok | |
184 | end, | |
185 | erlang:demonitor(MRef, [flush]), | |
186 | rabbit:start() | |
187 | end), | |
229 | handle_msg({winner_is, Winner}, State = not_healing, | |
230 | _Partitions) -> | |
231 | %% This node is just a loser (it is not the leader). | 
232 | restart_loser(State, Winner), | |
233 | restarting; | |
234 | handle_msg({winner_is, Winner}, State = {leader_waiting, Winner, _}, | |
235 | _Partitions) -> | |
236 | %% This node is the leader and a loser at the same time. | |
237 | restart_loser(State, Winner), | |
188 | 238 | restarting; |
189 | 239 | |
190 | 240 | handle_msg(_, restarting, _Partitions) -> |
191 | 241 | %% ignore, we can contribute no further |
192 | restarting. | |
242 | restarting; | |
243 | ||
244 | handle_msg(report_autoheal_status, not_healing, _Partitions) -> | |
245 | %% The leader is asking us (the winner) about the autoheal | 
246 | %% status. This happens when the leader is a loser and has just | 
247 | %% restarted. We are in the "not_healing" state, so the previous | 
248 | %% autoheal process has ended: report this to the leader. | 
249 | send(leader(), {autoheal_finished, node()}), | |
250 | not_healing; | |
251 | ||
252 | handle_msg(report_autoheal_status, State, _Partitions) -> | |
253 | %% As above, the leader is asking about the autoheal status, but | 
254 | %% we are not finished with it yet. There is no need to send | 
255 | %% anything to the leader now: we will notify it when it is over. | 
256 | State; | |
257 | ||
258 | handle_msg({autoheal_finished, Winner}, | |
259 | {leader_waiting, Winner, _}, _Partitions) -> | |
260 | %% The winner is finished with the autoheal process and notified us | |
261 | %% (the leader). We can transition to the "not_healing" state and | |
262 | %% accept new requests. | |
263 | rabbit_log:info("Autoheal finished according to winner ~p~n", [Winner]), | |
264 | not_healing; | |
265 | ||
266 | handle_msg({autoheal_finished, Winner}, not_healing, _Partitions) | |
267 | when Winner =:= node() -> | |
268 | %% We are the leader and the winner. The state already transitioned | |
269 | %% to "not_healing" at the end of the autoheal process. | |
270 | rabbit_log:info("Autoheal finished according to winner ~p~n", [node()]), | |
271 | not_healing. | |
193 | 272 | |
194 | 273 | %%---------------------------------------------------------------------------- |
195 | 274 | |
214 | 293 | %% losing nodes before sending the "autoheal_safe_to_start" signal. |
215 | 294 | wait_for_mnesia_shutdown(Notify), |
216 | 295 | [{rabbit_outside_app_process, N} ! autoheal_safe_to_start || N <- Notify], |
296 | send(leader(), {autoheal_finished, node()}), | |
217 | 297 | not_healing. |
218 | 298 | |
219 | 299 | wait_for_mnesia_shutdown([Node | Rest] = AllNodes) -> |
231 | 311 | end; |
232 | 312 | wait_for_mnesia_shutdown([]) -> |
233 | 313 | ok. |
314 | ||
315 | restart_loser(State, Winner) -> | |
316 | rabbit_log:warning( | |
317 | "Autoheal: we were selected to restart; winner is ~p~n", [Winner]), | |
318 | rabbit_node_monitor:run_outside_applications( | |
319 | fun () -> | |
320 | MRef = erlang:monitor(process, {?SERVER, Winner}), | |
321 | rabbit:stop(), | |
322 | NextState = receive | |
323 | {'DOWN', MRef, process, {?SERVER, Winner}, _Reason} -> | |
324 | not_healing; | |
325 | autoheal_safe_to_start -> | |
326 | State | |
327 | end, | |
328 | erlang:demonitor(MRef, [flush]), | |
329 | %% The autoheal state would be lost during the restart, so we | 
330 | %% store it temporarily in the application environment for | 
331 | %% init/0 to pick up. | 
332 | %% | |
333 | %% This matters for a leader which is also a loser: because | 
334 | %% the leader is restarting, it is likely to miss the | 
335 | %% "autoheal finished!" notification from the winner. | 
336 | %% Thanks to the saved state, it knows it must ask the | 
337 | %% winner whether the autoheal process has finished | 
338 | %% or not. | 
339 | application:set_env(rabbit, | |
340 | ?AUTOHEAL_STATE_AFTER_RESTART, NextState), | |
341 | rabbit:start() | |
342 | end, true). | |
234 | 343 | |
235 | 344 | make_decision(AllPartitions) -> |
236 | 345 | Sorted = lists:sort([{partition_value(P), P} || P <- AllPartitions]), |
21 | 21 | messages_unacknowledged_ram, messages_persistent, |
22 | 22 | message_bytes, message_bytes_ready, |
23 | 23 | message_bytes_unacknowledged, message_bytes_ram, |
24 | message_bytes_persistent, backing_queue_status]). | |
24 | message_bytes_persistent, | |
25 | disk_reads, disk_writes, backing_queue_status]). | |
25 | 26 | |
26 | 27 | -ifdef(use_specs). |
27 | 28 | |
29 | 30 | -type(ack() :: any()). |
30 | 31 | -type(state() :: any()). |
31 | 32 | |
33 | -type(flow() :: 'flow' | 'noflow'). | |
32 | 34 | -type(msg_ids() :: [rabbit_types:msg_id()]). |
33 | 35 | -type(fetch_result(Ack) :: |
34 | 36 | ('empty' | {rabbit_types:basic_message(), boolean(), Ack})). |
98 | 100 | |
99 | 101 | %% Publish a message. |
100 | 102 | -callback publish(rabbit_types:basic_message(), |
101 | rabbit_types:message_properties(), boolean(), pid(), | |
103 | rabbit_types:message_properties(), boolean(), pid(), flow(), | |
102 | 104 | state()) -> state(). |
103 | 105 | |
104 | 106 | %% Called for messages which have already been passed straight |
105 | 107 | %% out to a client. The queue will be empty for these calls |
106 | 108 | %% (i.e. saves the round trip through the backing queue). |
107 | 109 | -callback publish_delivered(rabbit_types:basic_message(), |
108 | rabbit_types:message_properties(), pid(), state()) | |
110 | rabbit_types:message_properties(), pid(), flow(), | |
111 | state()) | |
109 | 112 | -> {ack(), state()}. |
110 | 113 | |
111 | 114 | %% Called to inform the BQ about messages which have reached the |
112 | 115 | %% queue, but are not going to be further passed to BQ. |
113 | -callback discard(rabbit_types:msg_id(), pid(), state()) -> state(). | |
116 | -callback discard(rabbit_types:msg_id(), pid(), flow(), state()) -> state(). | |
114 | 117 | |
115 | 118 | %% Return ids of messages which have been confirmed since the last |
116 | 119 | %% invocation of this function (or initialisation). |
248 | 251 | |
249 | 252 | behaviour_info(callbacks) -> |
250 | 253 | [{start, 1}, {stop, 0}, {init, 3}, {terminate, 2}, |
251 | {delete_and_terminate, 2}, {purge, 1}, {purge_acks, 1}, {publish, 5}, | |
252 | {publish_delivered, 4}, {discard, 3}, {drain_confirmed, 1}, | |
254 | {delete_and_terminate, 2}, {purge, 1}, {purge_acks, 1}, {publish, 6}, | |
255 | {publish_delivered, 5}, {discard, 4}, {drain_confirmed, 1}, | |
253 | 256 | {dropwhile, 2}, {fetchwhile, 4}, |
254 | 257 | {fetch, 2}, {ack, 2}, {requeue, 2}, {ackfold, 4}, {fold, 3}, {len, 1}, |
255 | 258 | {is_empty, 1}, {depth, 1}, {set_ram_duration_target, 2}, |
20 | 20 | -export([publish/4, publish/5, publish/1, |
21 | 21 | message/3, message/4, properties/1, prepend_table_header/3, |
22 | 22 | extract_headers/1, map_headers/2, delivery/4, header_routes/1, |
23 | parse_expiration/1]). | |
23 | parse_expiration/1, header/2, header/3]). | |
24 | 24 | -export([build_content/2, from_content/1, msg_size/1, maybe_gc_large_msg/1]). |
25 | 25 | |
26 | 26 | %%---------------------------------------------------------------------------- |
31 | 31 | (rabbit_framing:amqp_property_record() | [{atom(), any()}])). |
32 | 32 | -type(publish_result() :: |
33 | 33 | ({ok, [pid()]} | rabbit_types:error('not_found'))). |
34 | -type(header() :: any()). | |
34 | 35 | -type(headers() :: rabbit_framing:amqp_table() | 'undefined'). |
35 | 36 | |
36 | 37 | -type(exchange_input() :: (rabbit_types:exchange() | rabbit_exchange:name())). |
60 | 61 | -spec(prepend_table_header/3 :: |
61 | 62 | (binary(), rabbit_framing:amqp_table(), headers()) -> headers()). |
62 | 63 | |
64 | -spec(header/2 :: | |
65 | (header(), headers()) -> 'undefined' | any()). | |
66 | -spec(header/3 :: | |
67 | (header(), headers(), any()) -> 'undefined' | any()). | |
68 | ||
63 | 69 | -spec(extract_headers/1 :: (rabbit_types:content()) -> headers()). |
64 | 70 | |
65 | 71 | -spec(map_headers/2 :: (fun((headers()) -> headers()), rabbit_types:content()) |
113 | 119 | |
114 | 120 | delivery(Mandatory, Confirm, Message, MsgSeqNo) -> |
115 | 121 | #delivery{mandatory = Mandatory, confirm = Confirm, sender = self(), |
116 | message = Message, msg_seq_no = MsgSeqNo}. | |
122 | message = Message, msg_seq_no = MsgSeqNo, flow = noflow}. | |
117 | 123 | |
118 | 124 | build_content(Properties, BodyBin) when is_binary(BodyBin) -> |
119 | 125 | build_content(Properties, [BodyBin]); |
223 | 229 | end, |
224 | 230 | NewHdr = rabbit_misc:set_table_value(ExistingHdr, Name, array, Values), |
225 | 231 | set_invalid(NewHdr, Header). |
232 | ||
233 | header(_Header, undefined) -> | |
234 | undefined; | |
235 | header(_Header, []) -> | |
236 | undefined; | |
237 | header(Header, Headers) -> | |
238 | header(Header, Headers, undefined). | |
239 | ||
240 | header(Header, Headers, Default) -> | |
241 | case lists:keysearch(Header, 1, Headers) of | |
242 | false -> Default; | |
243 | {value, Val} -> Val | |
244 | end. | |
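One subtlety in the new `header` helpers: `lists:keysearch/3` returns `{value, Tuple}` where `Tuple` is the entire matching table entry, so the caller receives the whole `{Name, Type, Value}` triple rather than just the value, and the `undefined`/`[]` clauses short-circuit missing header tables. A small Python sketch of that behaviour (illustrative only, not the upstream code):

```python
def header(name, headers, default=None):
    # Like header/2 and header/3: missing or empty header tables
    # yield the default; a hit returns the whole (name, type, value)
    # entry, mirroring lists:keysearch/3 returning the full tuple.
    for entry in headers or []:
        if entry[0] == name:
            return entry
    return default

hs = [(b'x-death', 'array', []), (b'foo', 'longstr', b'bar')]
print(header(b'foo', hs))      # -> (b'foo', 'longstr', b'bar')
print(header(b'missing', hs))  # -> None
```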
226 | 245 | |
227 | 246 | extract_headers(Content) -> |
228 | 247 | #content{properties = #'P_basic'{headers = Headers}} = |
40 | 40 | %% parse_table supports the AMQP 0-8/0-9 standard types, S, I, D, T |
41 | 41 | %% and F, as well as the QPid extensions b, d, f, l, s, t, x, and V. |
42 | 42 | |
43 | -define(SIMPLE_PARSE_TABLE(BType, Pattern, RType), | |
44 | parse_table(<<NLen:8/unsigned, NameString:NLen/binary, | |
45 | BType, Pattern, Rest/binary>>) -> | |
46 | [{NameString, RType, Value} | parse_table(Rest)]). | |
47 | ||
48 | %% Note that we try to put these in approximately the order we expect | |
49 | %% to hit them, that's why the empty binary is half way through. | |
50 | ||
51 | parse_table(<<NLen:8/unsigned, NameString:NLen/binary, | |
52 | $S, VLen:32/unsigned, Value:VLen/binary, Rest/binary>>) -> | |
53 | [{NameString, longstr, Value} | parse_table(Rest)]; | |
54 | ||
55 | ?SIMPLE_PARSE_TABLE($I, Value:32/signed, signedint); | |
56 | ?SIMPLE_PARSE_TABLE($T, Value:64/unsigned, timestamp); | |
57 | ||
43 | 58 | parse_table(<<>>) -> |
44 | 59 | []; |
45 | parse_table(<<NLen:8/unsigned, NameString:NLen/binary, ValueAndRest/binary>>) -> | |
46 | {Type, Value, Rest} = parse_field_value(ValueAndRest), | |
47 | [{NameString, Type, Value} | parse_table(Rest)]. | |
60 | ||
61 | ?SIMPLE_PARSE_TABLE($b, Value:8/signed, byte); | |
62 | ?SIMPLE_PARSE_TABLE($d, Value:64/float, double); | |
63 | ?SIMPLE_PARSE_TABLE($f, Value:32/float, float); | |
64 | ?SIMPLE_PARSE_TABLE($l, Value:64/signed, long); | |
65 | ?SIMPLE_PARSE_TABLE($s, Value:16/signed, short); | |
66 | ||
67 | parse_table(<<NLen:8/unsigned, NameString:NLen/binary, | |
68 | $t, Value:8/unsigned, Rest/binary>>) -> | |
69 | [{NameString, bool, (Value /= 0)} | parse_table(Rest)]; | |
70 | ||
71 | parse_table(<<NLen:8/unsigned, NameString:NLen/binary, | |
72 | $D, Before:8/unsigned, After:32/unsigned, Rest/binary>>) -> | |
73 | [{NameString, decimal, {Before, After}} | parse_table(Rest)]; | |
74 | ||
75 | parse_table(<<NLen:8/unsigned, NameString:NLen/binary, | |
76 | $F, VLen:32/unsigned, Value:VLen/binary, Rest/binary>>) -> | |
77 | [{NameString, table, parse_table(Value)} | parse_table(Rest)]; | |
78 | ||
79 | parse_table(<<NLen:8/unsigned, NameString:NLen/binary, | |
80 | $A, VLen:32/unsigned, Value:VLen/binary, Rest/binary>>) -> | |
81 | [{NameString, array, parse_array(Value)} | parse_table(Rest)]; | |
82 | ||
83 | parse_table(<<NLen:8/unsigned, NameString:NLen/binary, | |
84 | $x, VLen:32/unsigned, Value:VLen/binary, Rest/binary>>) -> | |
85 | [{NameString, binary, Value} | parse_table(Rest)]; | |
86 | ||
87 | parse_table(<<NLen:8/unsigned, NameString:NLen/binary, | |
88 | $V, Rest/binary>>) -> | |
89 | [{NameString, void, undefined} | parse_table(Rest)]. | |
90 | ||
91 | -define(SIMPLE_PARSE_ARRAY(BType, Pattern, RType), | |
92 | parse_array(<<BType, Pattern, Rest/binary>>) -> | |
93 | [{RType, Value} | parse_array(Rest)]). | |
94 | ||
95 | parse_array(<<$S, VLen:32/unsigned, Value:VLen/binary, Rest/binary>>) -> | |
96 | [{longstr, Value} | parse_array(Rest)]; | |
97 | ||
98 | ?SIMPLE_PARSE_ARRAY($I, Value:32/signed, signedint); | |
99 | ?SIMPLE_PARSE_ARRAY($T, Value:64/unsigned, timestamp); | |
48 | 100 | |
49 | 101 | parse_array(<<>>) -> |
50 | 102 | []; |
51 | parse_array(<<ValueAndRest/binary>>) -> | |
52 | {Type, Value, Rest} = parse_field_value(ValueAndRest), | |
53 | [{Type, Value} | parse_array(Rest)]. | |
54 | 103 | |
55 | parse_field_value(<<$S, VLen:32/unsigned, V:VLen/binary, R/binary>>) -> | |
56 | {longstr, V, R}; | |
104 | ?SIMPLE_PARSE_ARRAY($b, Value:8/signed, byte); | |
105 | ?SIMPLE_PARSE_ARRAY($d, Value:64/float, double); | |
106 | ?SIMPLE_PARSE_ARRAY($f, Value:32/float, float); | |
107 | ?SIMPLE_PARSE_ARRAY($l, Value:64/signed, long); | |
108 | ?SIMPLE_PARSE_ARRAY($s, Value:16/signed, short); | |
57 | 109 | |
58 | parse_field_value(<<$I, V:32/signed, R/binary>>) -> | |
59 | {signedint, V, R}; | |
110 | parse_array(<<$t, Value:8/unsigned, Rest/binary>>) -> | |
111 | [{bool, (Value /= 0)} | parse_array(Rest)]; | |
60 | 112 | |
61 | parse_field_value(<<$D, Before:8/unsigned, After:32/unsigned, R/binary>>) -> | |
62 | {decimal, {Before, After}, R}; | |
113 | parse_array(<<$D, Before:8/unsigned, After:32/unsigned, Rest/binary>>) -> | |
114 | [{decimal, {Before, After}} | parse_array(Rest)]; | |
63 | 115 | |
64 | parse_field_value(<<$T, V:64/unsigned, R/binary>>) -> | |
65 | {timestamp, V, R}; | |
116 | parse_array(<<$F, VLen:32/unsigned, Value:VLen/binary, Rest/binary>>) -> | |
117 | [{table, parse_table(Value)} | parse_array(Rest)]; | |
66 | 118 | |
67 | parse_field_value(<<$F, VLen:32/unsigned, Table:VLen/binary, R/binary>>) -> | |
68 | {table, parse_table(Table), R}; | |
119 | parse_array(<<$A, VLen:32/unsigned, Value:VLen/binary, Rest/binary>>) -> | |
120 | [{array, parse_array(Value)} | parse_array(Rest)]; | |
69 | 121 | |
70 | parse_field_value(<<$A, VLen:32/unsigned, Array:VLen/binary, R/binary>>) -> | |
71 | {array, parse_array(Array), R}; | |
122 | parse_array(<<$x, VLen:32/unsigned, Value:VLen/binary, Rest/binary>>) -> | |
123 | [{binary, Value} | parse_array(Rest)]; | |
72 | 124 | |
73 | parse_field_value(<<$b, V:8/signed, R/binary>>) -> {byte, V, R}; | |
74 | parse_field_value(<<$d, V:64/float, R/binary>>) -> {double, V, R}; | |
75 | parse_field_value(<<$f, V:32/float, R/binary>>) -> {float, V, R}; | |
76 | parse_field_value(<<$l, V:64/signed, R/binary>>) -> {long, V, R}; | |
77 | parse_field_value(<<$s, V:16/signed, R/binary>>) -> {short, V, R}; | |
78 | parse_field_value(<<$t, V:8/unsigned, R/binary>>) -> {bool, (V /= 0), R}; | |
79 | ||
80 | parse_field_value(<<$x, VLen:32/unsigned, V:VLen/binary, R/binary>>) -> | |
81 | {binary, V, R}; | |
82 | ||
83 | parse_field_value(<<$V, R/binary>>) -> | |
84 | {void, undefined, R}. | |
125 | parse_array(<<$V, Rest/binary>>) -> | |
126 | [{void, undefined} | parse_array(Rest)]. | |
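The rewritten `parse_table`/`parse_array` clauses all follow the same wire layout: a length-prefixed field name, a one-byte type tag, then a tag-specific value. A rough Python sketch of that layout for three of the tags (a simplified illustration, not the upstream parser, which handles all the tags above):

```python
import struct

def parse_table(data):
    """Parse AMQP 0-9-1 field-table entries for the 'S' (longstr),
    'I' (32-bit signed int) and 't' (boolean) type tags only."""
    entries = []
    while data:
        nlen = data[0]                      # 8-bit name length
        name = data[1:1 + nlen]             # field name bytes
        tag = chr(data[1 + nlen])           # one-byte type tag
        rest = data[2 + nlen:]
        if tag == 'S':                      # long string: 32-bit length + bytes
            (vlen,) = struct.unpack('>I', rest[:4])
            entries.append((name, 'longstr', rest[4:4 + vlen]))
            data = rest[4 + vlen:]
        elif tag == 'I':                    # 32-bit signed integer
            (val,) = struct.unpack('>i', rest[:4])
            entries.append((name, 'signedint', val))
            data = rest[4:]
        elif tag == 't':                    # boolean: non-zero byte is true
            entries.append((name, 'bool', rest[0] != 0))
            data = rest[1:]
        else:
            raise ValueError('unhandled type tag %r' % tag)
    return entries

buf = (bytes([3]) + b'foo' + b'S' + struct.pack('>I', 3) + b'bar'
       + bytes([1]) + b'n' + b'I' + struct.pack('>i', -7))
print(parse_table(buf))  # -> [(b'foo', 'longstr', b'bar'), (b'n', 'signedint', -7)]
```

Splitting the Erlang parser into one clause per tag (via the `?SIMPLE_PARSE_TABLE` macro) lets binary pattern matching dispatch directly on the tag byte instead of going through an intermediate `parse_field_value` step.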
85 | 127 | |
86 | 128 | ensure_content_decoded(Content = #content{properties = Props}) |
87 | 129 | when Props =/= none -> |
483 | 483 | |
484 | 484 | %%--------------------------------------------------------------------------- |
485 | 485 | |
486 | log(Level, Fmt, Args) -> rabbit_log:log(channel, Level, Fmt, Args). | |
487 | ||
486 | 488 | reply(Reply, NewState) -> {reply, Reply, next_state(NewState), hibernate}. |
487 | 489 | |
488 | 490 | noreply(NewState) -> {noreply, next_state(NewState), hibernate}. |
519 | 521 | {_Result, State1} = notify_queues(State), |
520 | 522 | case rabbit_binary_generator:map_exception(Channel, Reason, Protocol) of |
521 | 523 | {Channel, CloseMethod} -> |
522 | rabbit_log:error("Channel error on connection ~p (~s, vhost: '~s'," | |
523 | " user: '~s'), channel ~p:~n~p~n", | |
524 | [ConnPid, ConnName, VHost, User#user.username, | |
525 | Channel, Reason]), | |
524 | log(error, "Channel error on connection ~p (~s, vhost: '~s'," | |
525 | " user: '~s'), channel ~p:~n~p~n", | |
526 | [ConnPid, ConnName, VHost, User#user.username, | |
527 | Channel, Reason]), | |
526 | 528 | ok = rabbit_writer:send_command(WriterPid, CloseMethod), |
527 | 529 | {noreply, State1}; |
528 | 530 | {0, _} -> |
580 | 582 | #ch{user = #user{username = Username}}) -> |
581 | 583 | ok; |
582 | 584 | check_user_id_header( |
583 | #'P_basic'{}, #ch{user = #user{auth_backend = rabbit_auth_backend_dummy}}) -> | |
585 | #'P_basic'{}, #ch{user = #user{authz_backends = | |
586 | [{rabbit_auth_backend_dummy, _}]}}) -> | |
584 | 587 | ok; |
585 | 588 | check_user_id_header(#'P_basic'{user_id = Claimed}, |
586 | 589 | #ch{user = #user{username = Actual, |
659 | 662 | check_not_default_exchange(_) -> |
660 | 663 | ok. |
661 | 664 | |
662 | check_exchange_deletion(XName = #resource{name = <<"amq.rabbitmq.", _/binary>>, | |
665 | check_exchange_deletion(XName = #resource{name = <<"amq.", _/binary>>, | |
663 | 666 | kind = exchange}) -> |
664 | 667 | rabbit_misc:protocol_error( |
665 | 668 | access_refused, "deletion of system ~s not allowed", |
788 | 791 | end, |
789 | 792 | case rabbit_basic:message(ExchangeName, RoutingKey, DecodedContent) of |
790 | 793 | {ok, Message} -> |
791 | rabbit_trace:tap_in(Message, ConnName, ChannelNum, | |
792 | Username, TraceState), | |
793 | 794 | Delivery = rabbit_basic:delivery( |
794 | 795 | Mandatory, DoConfirm, Message, MsgSeqNo), |
795 | 796 | QNames = rabbit_exchange:route(Exchange, Delivery), |
796 | DQ = {Delivery, QNames}, | |
797 | rabbit_trace:tap_in(Message, QNames, ConnName, ChannelNum, | |
798 | Username, TraceState), | |
799 | DQ = {Delivery#delivery{flow = flow}, QNames}, | |
797 | 800 | {noreply, case Tx of |
798 | 801 | none -> deliver_to_queues(DQ, State1); |
799 | 802 | {Msgs, Acks} -> Msgs1 = queue:in(DQ, Msgs), |
1664 | 1667 | DelQNames}, State = #ch{queue_names = QNames, |
1665 | 1668 | queue_monitors = QMons}) -> |
1666 | 1669 | Qs = rabbit_amqqueue:lookup(DelQNames), |
1667 | DeliveredQPids = rabbit_amqqueue:deliver_flow(Qs, Delivery), | |
1670 | DeliveredQPids = rabbit_amqqueue:deliver(Qs, Delivery), | |
1668 | 1671 | %% The pmon:monitor_all/2 monitors all queues to which we |
1669 | 1672 | %% delivered. But we want to monitor even queues we didn't deliver |
1670 | 1673 | %% to, since we need their 'DOWN' messages to clean |
1734 | 1737 | send_confirms(State = #ch{tx = none, confirmed = []}) -> |
1735 | 1738 | State; |
1736 | 1739 | send_confirms(State = #ch{tx = none, confirmed = C}) -> |
1737 | case rabbit_node_monitor:pause_minority_guard() of | |
1740 | case rabbit_node_monitor:pause_partition_guard() of | |
1738 | 1741 | ok -> MsgSeqNos = |
1739 | 1742 | lists:foldl( |
1740 | 1743 | fun ({MsgSeqNo, XName}, MSNs) -> |
1746 | 1749 | pausing -> State |
1747 | 1750 | end; |
1748 | 1751 | send_confirms(State) -> |
1749 | case rabbit_node_monitor:pause_minority_guard() of | |
1752 | case rabbit_node_monitor:pause_partition_guard() of | |
1750 | 1753 | ok -> maybe_complete_tx(State); |
1751 | 1754 | pausing -> State |
1752 | 1755 | end. |
16 | 16 | -module(rabbit_cli). |
17 | 17 | -include("rabbit_cli.hrl"). |
18 | 18 | |
19 | -export([main/3, parse_arguments/4, rpc_call/4]). | |
19 | -export([main/3, start_distribution/0, start_distribution/1, | |
20 | parse_arguments/4, rpc_call/4]). | |
20 | 21 | |
21 | 22 | %%---------------------------------------------------------------------------- |
22 | 23 | |
30 | 31 | -spec(main/3 :: (fun (([string()], string()) -> parse_result()), |
31 | 32 | fun ((atom(), atom(), [any()], [any()]) -> any()), |
32 | 33 | atom()) -> no_return()). |
34 | -spec(start_distribution/0 :: () -> {'ok', pid()} | {'error', any()}). | |
35 | -spec(start_distribution/1 :: (string()) -> {'ok', pid()} | {'error', any()}). | |
33 | 36 | -spec(usage/1 :: (atom()) -> no_return()). |
34 | 37 | -spec(parse_arguments/4 :: |
35 | 38 | ([{atom(), [{string(), optdef()}]} | atom()], |
41 | 44 | %%---------------------------------------------------------------------------- |
42 | 45 | |
43 | 46 | main(ParseFun, DoFun, UsageMod) -> |
47 | error_logger:tty(false), | |
48 | start_distribution(), | |
44 | 49 | {ok, [[NodeStr|_]|_]} = init:get_argument(nodename), |
45 | 50 | {Command, Opts, Args} = |
46 | 51 | case ParseFun(init:get_plain_arguments(), NodeStr) of |
98 | 103 | Other -> |
99 | 104 | print_error("~p", [Other]), |
100 | 105 | rabbit_misc:quit(2) |
106 | end. | |
107 | ||
108 | start_distribution() -> | |
109 | start_distribution(list_to_atom( | |
110 | rabbit_misc:format("rabbitmq-cli-~s", [os:getpid()]))). | |
111 | ||
112 | start_distribution(Name) -> | |
113 | rabbit_nodes:ensure_epmd(), | |
114 | net_kernel:start([Name, name_type()]). | |
115 | ||
116 | name_type() -> | |
117 | case os:getenv("RABBITMQ_USE_LONGNAME") of | |
118 | "true" -> longnames; | |
119 | _ -> shortnames | |
101 | 120 | end. |
102 | 121 | |
103 | 122 | usage(Mod) -> |
18 | 18 | -include("rabbit_cli.hrl"). |
19 | 19 | |
20 | 20 | -export([start/0, stop/0, parse_arguments/2, action/5, |
21 | sync_queue/1, cancel_sync_queue/1]). | |
21 | sync_queue/1, cancel_sync_queue/1, become/1]). | |
22 | 22 | |
23 | 23 | -import(rabbit_cli, [rpc_call/4]). |
24 | 24 | |
39 | 39 | change_cluster_node_type, |
40 | 40 | update_cluster_nodes, |
41 | 41 | {forget_cluster_node, [?OFFLINE_DEF]}, |
42 | rename_cluster_node, | |
42 | 43 | force_boot, |
43 | 44 | cluster_status, |
44 | 45 | {sync_queue, [?VHOST_DEF]}, |
103 | 104 | -define(COMMANDS_NOT_REQUIRING_APP, |
104 | 105 | [stop, stop_app, start_app, wait, reset, force_reset, rotate_logs, |
105 | 106 | join_cluster, change_cluster_node_type, update_cluster_nodes, |
106 | forget_cluster_node, cluster_status, status, environment, eval, | |
107 | force_boot]). | |
107 | forget_cluster_node, rename_cluster_node, cluster_status, status, | |
108 | environment, eval, force_boot]). | |
108 | 109 | |
109 | 110 | %%---------------------------------------------------------------------------- |
110 | 111 | |
122 | 123 | %%---------------------------------------------------------------------------- |
123 | 124 | |
124 | 125 | start() -> |
125 | start_distribution(), | |
126 | 126 | rabbit_cli:main( |
127 | 127 | fun (Args, NodeStr) -> |
128 | 128 | parse_arguments(Args, NodeStr) |
233 | 233 | [ClusterNode, false]) |
234 | 234 | end; |
235 | 235 | |
236 | action(rename_cluster_node, Node, NodesS, _Opts, Inform) -> | |
237 | Nodes = split_list([list_to_atom(N) || N <- NodesS]), | |
238 | Inform("Renaming cluster nodes:~n~s~n", | |
239 | [lists:flatten([rabbit_misc:format(" ~s -> ~s~n", [F, T]) || | |
240 | {F, T} <- Nodes])]), | |
241 | rabbit_mnesia_rename:rename(Node, Nodes); | |
242 | ||
236 | 243 | action(force_boot, Node, [], _Opts, Inform) -> |
237 | 244 | Inform("Forcing boot for Mnesia dir ~s", [mnesia:system_info(directory)]), |
238 | 245 | case rabbit:is_running(Node) of |
517 | 524 | Node, Pid, fun() -> rpc:call(Node, rabbit, await_startup, []) =:= ok end). |
518 | 525 | |
519 | 526 | while_process_is_alive(Node, Pid, Activity) -> |
520 | case process_up(Pid) of | |
527 | case rabbit_misc:is_os_process_alive(Pid) of | |
521 | 528 | true -> case Activity() of |
522 | 529 | true -> ok; |
523 | 530 | false -> timer:sleep(?EXTERNAL_CHECK_INTERVAL), |
527 | 534 | end. |
528 | 535 | |
529 | 536 | wait_for_process_death(Pid) -> |
530 | case process_up(Pid) of | |
537 | case rabbit_misc:is_os_process_alive(Pid) of | |
531 | 538 | true -> timer:sleep(?EXTERNAL_CHECK_INTERVAL), |
532 | 539 | wait_for_process_death(Pid); |
533 | 540 | false -> ok |
551 | 558 | exit({error, {could_not_read_pid, E}}) |
552 | 559 | end. |
553 | 560 | |
554 | % Test using some OS clunkiness since we shouldn't trust | |
555 | % rpc:call(os, getpid, []) at this point | |
556 | process_up(Pid) -> | |
557 | with_os([{unix, fun () -> | |
558 | run_ps(Pid) =:= 0 | |
559 | end}, | |
560 | {win32, fun () -> | |
561 | Cmd = "tasklist /nh /fi \"pid eq " ++ Pid ++ "\" ", | |
562 | Res = rabbit_misc:os_cmd(Cmd ++ "2>&1"), | |
563 | case re:run(Res, "erl\\.exe", [{capture, none}]) of | |
564 | match -> true; | |
565 | _ -> false | |
566 | end | |
567 | end}]). | |
568 | ||
569 | with_os(Handlers) -> | |
570 | {OsFamily, _} = os:type(), | |
571 | case proplists:get_value(OsFamily, Handlers) of | |
572 | undefined -> throw({unsupported_os, OsFamily}); | |
573 | Handler -> Handler() | |
574 | end. | |
575 | ||
576 | run_ps(Pid) -> | |
577 | Port = erlang:open_port({spawn, "ps -p " ++ Pid}, | |
578 | [exit_status, {line, 16384}, | |
579 | use_stdio, stderr_to_stdout]), | |
580 | exit_loop(Port). | |
581 | ||
582 | exit_loop(Port) -> | |
583 | receive | |
584 | {Port, {exit_status, Rc}} -> Rc; | |
585 | {Port, _} -> exit_loop(Port) | |
586 | end. | |
587 | ||
588 | start_distribution() -> | |
589 | CtlNodeName = rabbit_misc:format("rabbitmqctl-~s", [os:getpid()]), | |
590 | {ok, _} = net_kernel:start([list_to_atom(CtlNodeName), name_type()]). | |
591 | ||
592 | 561 | become(BecomeNode) -> |
562 | error_logger:tty(false), | |
563 | ok = net_kernel:stop(), | |
593 | 564 | case net_adm:ping(BecomeNode) of |
594 | 565 | pong -> exit({node_running, BecomeNode}); |
595 | 566 | pang -> io:format(" * Impersonating node: ~s...", [BecomeNode]), |
596 | error_logger:tty(false), | |
597 | ok = net_kernel:stop(), | |
598 | {ok, _} = net_kernel:start([BecomeNode, name_type()]), | |
567 | {ok, _} = rabbit_cli:start_distribution(BecomeNode), | |
599 | 568 | io:format(" done~n", []), |
600 | 569 | Dir = mnesia:system_info(directory), |
601 | 570 | io:format(" * Mnesia directory : ~s~n", [Dir]) |
602 | end. | |
603 | ||
604 | name_type() -> | |
605 | case os:getenv("RABBITMQ_USE_LONGNAME") of | |
606 | "true" -> longnames; | |
607 | _ -> shortnames | |
608 | 571 | end. |
609 | 572 | |
610 | 573 | %%---------------------------------------------------------------------------- |
719 | 682 | prettify_typed_amqp_value(array, Value) -> [prettify_typed_amqp_value(T, V) || |
720 | 683 | {T, V} <- Value]; |
721 | 684 | prettify_typed_amqp_value(_Type, Value) -> Value. |
685 | ||
686 | split_list([]) -> []; | |
687 | split_list([_]) -> exit(even_list_needed); | |
688 | split_list([A, B | T]) -> [{A, B} | split_list(T)]. |
65 | 65 | {<<"time">>, timestamp, TimeSec}, |
66 | 66 | {<<"exchange">>, longstr, Exchange#resource.name}, |
67 | 67 | {<<"routing-keys">>, array, RKs1}] ++ PerMsgTTL, |
68 | HeadersFun1(rabbit_basic:prepend_table_header(<<"x-death">>, | |
69 | Info, Headers)) | |
68 | HeadersFun1(update_x_death_header(Info, Headers)) | |
70 | 69 | end, |
71 | 70 | Content1 = #content{properties = Props} = |
72 | 71 | rabbit_basic:map_headers(HeadersFun2, Content), |
76 | 75 | id = rabbit_guid:gen(), |
77 | 76 | routing_keys = DeathRoutingKeys, |
78 | 77 | content = Content2}. |
78 | ||
79 | ||
80 | x_death_event_key(Info, Key, KeyType) -> | |
81 | case lists:keysearch(Key, 1, Info) of | |
82 | false -> undefined; | |
83 | {value, {Key, KeyType, Val}} -> Val | |
84 | end. | |
85 | ||
86 | update_x_death_header(Info, Headers) -> | |
87 | Q = x_death_event_key(Info, <<"queue">>, longstr), | |
88 | R = x_death_event_key(Info, <<"reason">>, longstr), | |
89 | case rabbit_basic:header(<<"x-death">>, Headers) of | |
90 | undefined -> | |
91 | rabbit_basic:prepend_table_header(<<"x-death">>, | |
92 | [{<<"count">>, long, 1} | Info], Headers); | |
93 | {<<"x-death">>, array, Tables} -> | |
94 | {Matches, Others} = lists:partition( | |
95 | fun ({table, Info0}) -> | |
96 | x_death_event_key(Info0, <<"queue">>, longstr) =:= Q | |
97 | andalso x_death_event_key(Info0, <<"reason">>, longstr) =:= R | |
98 | end, Tables), | |
99 | Info1 = case Matches of | |
100 | [] -> | |
101 | [{<<"count">>, long, 1} | Info]; | |
102 | [{table, M}] -> | |
103 | case x_death_event_key(M, <<"count">>, long) of | |
104 | undefined -> | |
105 | [{<<"count">>, long, 1} | M]; | |
106 | N -> | |
107 | lists:keyreplace( | |
108 | <<"count">>, 1, M, | |
109 | {<<"count">>, long, N + 1}) | |
110 | end | |
111 | end, | |
112 | rabbit_misc:set_table_value(Headers, <<"x-death">>, array, | |
113 | [{table, rabbit_misc:sort_field_table(Info1)} | Others]) | |
114 | end. | |
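The `update_x_death_header/2` logic added above boils down to one rule: entries in the `x-death` array are keyed by `{queue, reason}`, a matching entry has its `count` bumped, and a first death for that pair starts at 1. A minimal sketch of that rule over a plain keylist (hypothetical module, not part of the patch):

```erlang
%% Hypothetical sketch of the count rule in update_x_death_header/2,
%% reduced to a keylist of {{Queue, Reason}, Count} pairs.
-module(x_death_sketch).
-export([bump/2]).

%% If an entry for this {Queue, Reason} key exists, increment its
%% count; otherwise prepend a new entry with count 1 (newest first).
bump(Key, Entries) ->
    case lists:keyfind(Key, 1, Entries) of
        false  -> [{Key, 1} | Entries];
        {_, N} -> lists:keyreplace(Key, 1, Entries, {Key, N + 1})
    end.
```

In the real header each entry also carries time, exchange and routing keys, and the updated table is re-sorted via `rabbit_misc:sort_field_table/1` before being stored back.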
79 | 115 | |
80 | 116 | per_msg_ttl_header(#'P_basic'{expiration = undefined}) -> |
81 | 117 | []; |
82 | 82 | connect0(AuthFun, VHost, Protocol, Pid, Infos) -> |
83 | 83 | case rabbit:is_running() of |
84 | 84 | true -> case AuthFun() of |
85 | {ok, User} -> | |
85 | {ok, User = #user{username = Username}} -> | |
86 | notify_auth_result(Username, | |
87 | user_authentication_success, []), | |
86 | 88 | connect1(User, VHost, Protocol, Pid, Infos); |
87 | {refused, _M, _A} -> | |
89 | {refused, Username, Msg, Args} -> | |
90 | notify_auth_result(Username, | |
91 | user_authentication_failure, | |
92 | [{error, rabbit_misc:format(Msg, Args)}]), | |
88 | 93 | {error, {auth_failure, "Refused"}} |
89 | 94 | end; |
90 | 95 | false -> {error, broker_not_found_on_node} |
91 | 96 | end. |
92 | 97 | |
98 | notify_auth_result(Username, AuthResult, ExtraProps) -> | |
99 | EventProps = [{connection_type, direct}, | |
100 | {name, case Username of none -> ''; _ -> Username end}] ++ | |
101 | ExtraProps, | |
102 | rabbit_event:notify(AuthResult, [P || {_, V} = P <- EventProps, V =/= '']). | |
103 | ||
93 | 104 | connect1(User, VHost, Protocol, Pid, Infos) -> |
94 | try rabbit_access_control:check_vhost_access(User, VHost) of | |
105 | try rabbit_access_control:check_vhost_access(User, VHost, undefined) of | |
95 | 106 | ok -> ok = pg_local:join(rabbit_direct, Pid), |
96 | 107 | rabbit_event:notify(connection_created, Infos), |
97 | 108 | {ok, {User, rabbit_reader:server_properties(Protocol)}} |
0 | %% The contents of this file are subject to the Mozilla Public License | |
1 | %% Version 1.1 (the "License"); you may not use this file except in | |
2 | %% compliance with the License. You may obtain a copy of the License | |
3 | %% at http://www.mozilla.org/MPL/ | |
4 | %% | |
5 | %% Software distributed under the License is distributed on an "AS IS" | |
6 | %% basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See | |
7 | %% the License for the specific language governing rights and | |
8 | %% limitations under the License. | |
9 | %% | |
10 | %% The Original Code is RabbitMQ. | |
11 | %% | |
12 | %% The Initial Developer of the Original Code is GoPivotal, Inc. | |
13 | %% Copyright (c) 2007-2014 GoPivotal, Inc. All rights reserved. | |
14 | %% | |
15 | ||
16 | -module(rabbit_epmd_monitor). | |
17 | ||
18 | -behaviour(gen_server). | |
19 | ||
20 | -export([start_link/0]). | |
21 | ||
22 | -export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2, | |
23 | code_change/3]). | |
24 | ||
25 | -record(state, {timer, mod, me, host, port}). | |
26 | ||
27 | -define(SERVER, ?MODULE). | |
28 | -define(CHECK_FREQUENCY, 60000). | |
29 | ||
30 | %%---------------------------------------------------------------------------- | |
31 | ||
32 | -ifdef(use_specs). | |
33 | ||
34 | -spec(start_link/0 :: () -> rabbit_types:ok_pid_or_error()). | |
35 | ||
36 | -endif. | |
37 | ||
38 | %%---------------------------------------------------------------------------- | |
39 | %% It's possible for epmd to be killed out from underneath us. If that | |
40 | %% happens, then obviously clustering and rabbitmqctl stop | |
41 | %% working. This process checks up on epmd and restarts it / | |
42 | %% re-registers us with it if it has gone away. | |
43 | %% | |
44 | %% How could epmd be killed? | |
45 | %% | |
46 | %% 1) The most popular way for this to happen is when running as a | |
47 | %% Windows service. The user starts rabbitmqctl first, and this starts | |
48 | %% epmd under the user's account. When they log out epmd is killed. | |
49 | %% | |
50 | %% 2) Some packagings of (non-RabbitMQ?) Erlang apps might do "killall | |
51 | %% epmd" as a shutdown or uninstall step. | |
52 | %% ---------------------------------------------------------------------------- | |
53 | ||
54 | start_link() -> gen_server:start_link({local, ?SERVER}, ?MODULE, [], []). | |
55 | ||
56 | init([]) -> | |
57 | {Me, Host} = rabbit_nodes:parts(node()), | |
58 | Mod = net_kernel:epmd_module(), | |
59 | {port, Port, _Version} = Mod:port_please(Me, Host), | |
60 | {ok, ensure_timer(#state{mod = Mod, | |
61 | me = Me, | |
62 | host = Host, | |
63 | port = Port})}. | |
64 | ||
65 | handle_call(_Request, _From, State) -> | |
66 | {noreply, State}. | |
67 | ||
68 | handle_cast(_Msg, State) -> | |
69 | {noreply, State}. | |
70 | ||
71 | handle_info(check, State) -> | |
72 | check_epmd(State), | |
73 | {noreply, ensure_timer(State#state{timer = undefined})}; | |
74 | ||
75 | handle_info(_Info, State) -> | |
76 | {noreply, State}. | |
77 | ||
78 | terminate(_Reason, _State) -> | |
79 | ok. | |
80 | ||
81 | code_change(_OldVsn, State, _Extra) -> | |
82 | {ok, State}. | |
83 | ||
84 | %%---------------------------------------------------------------------------- | |
85 | ||
86 | ensure_timer(State) -> | |
87 | rabbit_misc:ensure_timer(State, #state.timer, ?CHECK_FREQUENCY, check). | |
88 | ||
89 | check_epmd(#state{mod = Mod, | |
90 | me = Me, | |
91 | host = Host, | |
92 | port = Port}) -> | |
93 | case Mod:port_please(Me, Host) of | |
94 | noport -> rabbit_log:warning( | |
95 | "epmd does not know us, re-registering ~s at port ~b~n", | |
96 | [Me, Port]), | |
97 | rabbit_nodes:ensure_epmd(), | |
98 | erl_epmd:register_node(Me, Port); | |
99 | _ -> ok | |
100 | end. |
21 | 21 | %% |
22 | 22 | %% Each channel has an associated limiter process, created with |
23 | 23 | %% start_link/1, which it passes to queues on consumer creation with |
24 | %% rabbit_amqqueue:basic_consume/9, and rabbit_amqqueue:basic_get/4. | |
24 | %% rabbit_amqqueue:basic_consume/10, and rabbit_amqqueue:basic_get/4. | |
25 | 25 | %% The latter isn't strictly necessary, since basic.get is not |
26 | 26 | %% subject to limiting, but it means that whenever a queue knows about |
27 | 27 | %% a channel, it also knows about its limiter, which is less fiddly. |
15 | 15 | |
16 | 16 | -module(rabbit_log). |
17 | 17 | |
18 | -export([log/3, log/4, info/1, info/2, warning/1, warning/2, error/1, error/2]). | |
18 | -export([log/3, log/4, debug/1, debug/2, info/1, info/2, warning/1, | |
19 | warning/2, error/1, error/2]). | |
19 | 20 | -export([with_local_io/1]). |
20 | 21 | |
21 | 22 | %%---------------------------------------------------------------------------- |
25 | 26 | -export_type([level/0]). |
26 | 27 | |
27 | 28 | -type(category() :: atom()). |
28 | -type(level() :: 'info' | 'warning' | 'error'). | |
29 | -type(level() :: 'debug' | 'info' | 'warning' | 'error'). | |
29 | 30 | |
30 | 31 | -spec(log/3 :: (category(), level(), string()) -> 'ok'). |
31 | 32 | -spec(log/4 :: (category(), level(), string(), [any()]) -> 'ok'). |
32 | 33 | |
34 | -spec(debug/1 :: (string()) -> 'ok'). | |
35 | -spec(debug/2 :: (string(), [any()]) -> 'ok'). | |
33 | 36 | -spec(info/1 :: (string()) -> 'ok'). |
34 | 37 | -spec(info/2 :: (string(), [any()]) -> 'ok'). |
35 | 38 | -spec(warning/1 :: (string()) -> 'ok'). |
49 | 52 | case level(Level) =< catlevel(Category) of |
50 | 53 | false -> ok; |
51 | 54 | true -> F = case Level of |
55 | debug -> fun error_logger:info_msg/2; | |
52 | 56 | info -> fun error_logger:info_msg/2; |
53 | 57 | warning -> fun error_logger:warning_msg/2; |
54 | 58 | error -> fun error_logger:error_msg/2 |
56 | 60 | with_local_io(fun () -> F(Fmt, Args) end) |
57 | 61 | end. |
58 | 62 | |
63 | debug(Fmt) -> log(default, debug, Fmt). | |
64 | debug(Fmt, Args) -> log(default, debug, Fmt, Args). | |
59 | 65 | info(Fmt) -> log(default, info, Fmt). |
60 | 66 | info(Fmt, Args) -> log(default, info, Fmt, Args). |
61 | 67 | warning(Fmt) -> log(default, warning, Fmt). |
74 | 80 | |
75 | 81 | %%-------------------------------------------------------------------- |
76 | 82 | |
83 | level(debug) -> 4; | |
77 | 84 | level(info) -> 3; |
78 | 85 | level(warning) -> 2; |
79 | 86 | level(error) -> 1; |
18 | 18 | -export([start_link/4, get_gm/1, ensure_monitoring/2]). |
19 | 19 | |
20 | 20 | -export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2, |
21 | code_change/3]). | |
21 | code_change/3, handle_pre_hibernate/1]). | |
22 | 22 | |
23 | 23 | -export([joined/2, members_changed/3, handle_msg/3, handle_terminate/2]). |
24 | 24 | |
352 | 352 | when node(MPid) =:= node() -> |
353 | 353 | case rabbit_mirror_queue_misc:remove_from_queue( |
354 | 354 | QueueName, MPid, DeadGMPids) of |
355 | {ok, MPid, DeadPids} -> | |
355 | {ok, MPid, DeadPids, ExtraNodes} -> | |
356 | 356 | rabbit_mirror_queue_misc:report_deaths(MPid, true, QueueName, |
357 | 357 | DeadPids), |
358 | rabbit_mirror_queue_misc:add_mirrors(QueueName, ExtraNodes, async), | |
358 | 359 | noreply(State); |
359 | 360 | {error, not_found} -> |
360 | 361 | {stop, normal, State} |
387 | 388 | |
388 | 389 | code_change(_OldVsn, State, _Extra) -> |
389 | 390 | {ok, State}. |
391 | ||
392 | handle_pre_hibernate(State = #state { gm = GM }) -> | |
393 | %% Since GM notifications of deaths are lazy we might not get a | |
394 | %% timely notification of slave death if policy changes when | |
395 | %% everything is idle. So cause some activity just before we | |
396 | %% sleep. This won't cause us to go into perpetual motion as the | |
397 | %% heartbeat does not wake up coordinator or slaves. | |
398 | gm:broadcast(GM, hibernate_heartbeat), | |
399 | {hibernate, State}. | |
390 | 400 | |
391 | 401 | %% --------------------------------------------------------------------------- |
392 | 402 | %% GM |
16 | 16 | -module(rabbit_mirror_queue_master). |
17 | 17 | |
18 | 18 | -export([init/3, terminate/2, delete_and_terminate/2, |
19 | purge/1, purge_acks/1, publish/5, publish_delivered/4, | |
20 | discard/3, fetch/2, drop/2, ack/2, requeue/2, ackfold/4, fold/3, | |
19 | purge/1, purge_acks/1, publish/6, publish_delivered/5, | |
20 | discard/4, fetch/2, drop/2, ack/2, requeue/2, ackfold/4, fold/3, | |
21 | 21 | len/1, is_empty/1, depth/1, drain_confirmed/1, |
22 | 22 | dropwhile/2, fetchwhile/4, set_ram_duration_target/2, ram_duration/1, |
23 | 23 | needs_timeout/1, timeout/1, handle_pre_hibernate/1, resume/1, |
229 | 229 | |
230 | 230 | purge_acks(_State) -> exit({not_implemented, {?MODULE, purge_acks}}). |
231 | 231 | |
232 | publish(Msg = #basic_message { id = MsgId }, MsgProps, IsDelivered, ChPid, | |
232 | publish(Msg = #basic_message { id = MsgId }, MsgProps, IsDelivered, ChPid, Flow, | |
233 | 233 | State = #state { gm = GM, |
234 | 234 | seen_status = SS, |
235 | 235 | backing_queue = BQ, |
236 | 236 | backing_queue_state = BQS }) -> |
237 | 237 | false = dict:is_key(MsgId, SS), %% ASSERTION |
238 | ok = gm:broadcast(GM, {publish, ChPid, MsgProps, Msg}, | |
238 | ok = gm:broadcast(GM, {publish, ChPid, Flow, MsgProps, Msg}, | |
239 | 239 | rabbit_basic:msg_size(Msg)), |
240 | BQS1 = BQ:publish(Msg, MsgProps, IsDelivered, ChPid, BQS), | |
240 | BQS1 = BQ:publish(Msg, MsgProps, IsDelivered, ChPid, Flow, BQS), | |
241 | 241 | ensure_monitoring(ChPid, State #state { backing_queue_state = BQS1 }). |
242 | 242 | |
243 | 243 | publish_delivered(Msg = #basic_message { id = MsgId }, MsgProps, |
244 | ChPid, State = #state { gm = GM, | |
245 | seen_status = SS, | |
246 | backing_queue = BQ, | |
247 | backing_queue_state = BQS }) -> | |
244 | ChPid, Flow, State = #state { gm = GM, | |
245 | seen_status = SS, | |
246 | backing_queue = BQ, | |
247 | backing_queue_state = BQS }) -> | |
248 | 248 | false = dict:is_key(MsgId, SS), %% ASSERTION |
249 | ok = gm:broadcast(GM, {publish_delivered, ChPid, MsgProps, Msg}, | |
249 | ok = gm:broadcast(GM, {publish_delivered, ChPid, Flow, MsgProps, Msg}, | |
250 | 250 | rabbit_basic:msg_size(Msg)), |
251 | {AckTag, BQS1} = BQ:publish_delivered(Msg, MsgProps, ChPid, BQS), | |
251 | {AckTag, BQS1} = BQ:publish_delivered(Msg, MsgProps, ChPid, Flow, BQS), | |
252 | 252 | State1 = State #state { backing_queue_state = BQS1 }, |
253 | 253 | {AckTag, ensure_monitoring(ChPid, State1)}. |
254 | 254 | |
255 | discard(MsgId, ChPid, State = #state { gm = GM, | |
256 | backing_queue = BQ, | |
257 | backing_queue_state = BQS, | |
258 | seen_status = SS }) -> | |
255 | discard(MsgId, ChPid, Flow, State = #state { gm = GM, | |
256 | backing_queue = BQ, | |
257 | backing_queue_state = BQS, | |
258 | seen_status = SS }) -> | |
259 | 259 | false = dict:is_key(MsgId, SS), %% ASSERTION |
260 | ok = gm:broadcast(GM, {discard, ChPid, MsgId}), | |
261 | ensure_monitoring(ChPid, State #state { backing_queue_state = | |
262 | BQ:discard(MsgId, ChPid, BQS) }). | |
260 | ok = gm:broadcast(GM, {discard, ChPid, Flow, MsgId}), | |
261 | ensure_monitoring(ChPid, | |
262 | State #state { backing_queue_state = | |
263 | BQ:discard(MsgId, ChPid, Flow, BQS) }). | |
263 | 264 | |
264 | 265 | dropwhile(Pred, State = #state{backing_queue = BQ, |
265 | 266 | backing_queue_state = BQS }) -> |
48 | 48 | |
49 | 49 | -spec(remove_from_queue/3 :: |
50 | 50 | (rabbit_amqqueue:name(), pid(), [pid()]) |
51 | -> {'ok', pid(), [pid()]} | {'error', 'not_found'}). | |
51 | -> {'ok', pid(), [pid()], [node()]} | {'error', 'not_found'}). | |
52 | 52 | -spec(on_node_up/0 :: () -> 'ok'). |
53 | 53 | -spec(add_mirrors/3 :: (rabbit_amqqueue:name(), [node()], 'sync' | 'async') |
54 | 54 | -> 'ok'). |
69 | 69 | |
70 | 70 | %%---------------------------------------------------------------------------- |
71 | 71 | |
72 | %% Returns {ok, NewMPid, DeadPids} | |
72 | %% Returns {ok, NewMPid, DeadPids, ExtraNodes} | |
73 | 73 | remove_from_queue(QueueName, Self, DeadGMPids) -> |
74 | 74 | rabbit_misc:execute_mnesia_transaction( |
75 | 75 | fun () -> |
77 | 77 | %% get here. |
78 | 78 | case mnesia:read({rabbit_queue, QueueName}) of |
79 | 79 | [] -> {error, not_found}; |
80 | [Q = #amqqueue { pid = QPid, | |
81 | slave_pids = SPids, | |
82 | gm_pids = GMPids, | |
83 | down_slave_nodes = DSNs}] -> | |
80 | [Q = #amqqueue { pid = QPid, | |
81 | slave_pids = SPids, | |
82 | gm_pids = GMPids }] -> | |
84 | 83 | {DeadGM, AliveGM} = lists:partition( |
85 | 84 | fun ({GM, _}) -> |
86 | 85 | lists:member(GM, DeadGMPids) |
89 | 88 | AlivePids = [Pid || {_GM, Pid} <- AliveGM], |
90 | 89 | Alive = [Pid || Pid <- [QPid | SPids], |
91 | 90 | lists:member(Pid, AlivePids)], |
92 | DSNs1 = [node(Pid) || | |
93 | Pid <- SPids, | |
94 | not lists:member(Pid, AlivePids)] ++ DSNs, | |
95 | 91 | {QPid1, SPids1} = promote_slave(Alive), |
96 | case {{QPid, SPids}, {QPid1, SPids1}} of | |
97 | {Same, Same} -> | |
98 | ok; | |
99 | _ when QPid =:= QPid1 orelse QPid1 =:= Self -> | |
100 | %% Either master hasn't changed, so | |
101 | %% we're ok to update mnesia; or we have | |
102 | %% become the master. | |
103 | Q1 = Q#amqqueue{pid = QPid1, | |
104 | slave_pids = SPids1, | |
105 | gm_pids = AliveGM, | |
106 | down_slave_nodes = DSNs1}, | |
107 | store_updated_slaves(Q1), | |
108 | %% If we add and remove nodes at the same time we | |
109 | %% might tell the old master we need to sync and | |
110 | %% then shut it down. So let's check if the new | |
111 | %% master needs to sync. | |
112 | maybe_auto_sync(Q1); | |
92 | Extra = | |
93 | case {{QPid, SPids}, {QPid1, SPids1}} of | |
94 | {Same, Same} -> | |
95 | []; | |
96 | _ when QPid =:= QPid1 orelse QPid1 =:= Self -> | |
97 | %% Either master hasn't changed, so | |
98 | %% we're ok to update mnesia; or we have | |
99 | %% become the master. | |
100 | Q1 = Q#amqqueue{pid = QPid1, | |
101 | slave_pids = SPids1, | |
102 | gm_pids = AliveGM}, | |
103 | store_updated_slaves(Q1), | |
104 | %% If we add and remove nodes at the | |
105 | %% same time we might tell the old | |
106 | %% master we need to sync and then | |
107 | %% shut it down. So let's check if | |
108 | %% the new master needs to sync. | |
109 | maybe_auto_sync(Q1), | |
110 | slaves_to_start_on_failure(Q1, DeadGMPids); | |
113 | 111 | _ -> |
114 | %% Master has changed, and we're not it. | |
115 | %% [1]. | |
116 | Q1 = Q#amqqueue{slave_pids = Alive, | |
117 | gm_pids = AliveGM, | |
118 | down_slave_nodes = DSNs1}, | |
119 | store_updated_slaves(Q1) | |
120 | end, | |
121 | {ok, QPid1, DeadPids} | |
112 | %% Master has changed, and we're not it. | |
113 | %% [1]. | |
114 | Q1 = Q#amqqueue{slave_pids = Alive, | |
115 | gm_pids = AliveGM}, | |
116 | store_updated_slaves(Q1), | |
117 | [] | |
118 | end, | |
119 | {ok, QPid1, DeadPids, Extra} | |
122 | 120 | end |
123 | 121 | end). |
124 | 122 | %% [1] We still update mnesia here in case the slave that is supposed |
143 | 141 | %% corresponding entry in gm_pids. By contrast, due to the |
144 | 142 | %% aforementioned restriction on updating the master pid, that pid may |
145 | 143 | %% not be present in gm_pids, but only if said master has died. |
144 | ||
145 | %% Sometimes a slave dying means we need to start more on other | |
146 | %% nodes - "exactly" mode can cause this to happen. | |
147 | slaves_to_start_on_failure(Q, DeadGMPids) -> | |
148 | %% In case Mnesia has not caught up yet, filter out nodes we know | |
149 | %% to be dead. | 
150 | ClusterNodes = rabbit_mnesia:cluster_nodes(running) -- | |
151 | [node(P) || P <- DeadGMPids], | |
152 | {_, OldNodes, _} = actual_queue_nodes(Q), | |
153 | {_, NewNodes} = suggested_queue_nodes(Q, ClusterNodes), | |
154 | NewNodes -- OldNodes. | |
146 | 155 | |
147 | 156 | on_node_up() -> |
148 | 157 | QNames = |
233 | 242 | rabbit_log:log(mirroring, Level, "Mirrored ~s: " ++ Fmt, |
234 | 243 | [rabbit_misc:rs(QName) | Args]). |
235 | 244 | |
236 | store_updated_slaves(Q = #amqqueue{pid = MPid, | |
237 | slave_pids = SPids, | |
238 | sync_slave_pids = SSPids, | |
239 | down_slave_nodes = DSNs}) -> | |
245 | store_updated_slaves(Q = #amqqueue{slave_pids = SPids, | |
246 | sync_slave_pids = SSPids, | |
247 | recoverable_slaves = RS}) -> | |
240 | 248 | %% TODO now that we clear sync_slave_pids in rabbit_durable_queue, |
241 | 249 | %% do we still need this filtering? |
242 | 250 | SSPids1 = [SSPid || SSPid <- SSPids, lists:member(SSPid, SPids)], |
243 | DSNs1 = DSNs -- [node(P) || P <- [MPid | SPids]], | |
244 | Q1 = Q#amqqueue{sync_slave_pids = SSPids1, | |
245 | down_slave_nodes = DSNs1, | |
246 | state = live}, | |
251 | Q1 = Q#amqqueue{sync_slave_pids = SSPids1, | |
252 | recoverable_slaves = update_recoverable(SPids, RS), | |
253 | state = live}, | |
247 | 254 | ok = rabbit_amqqueue:store_queue(Q1), |
248 | 255 | %% Wake it up so that we emit a stats event |
249 | 256 | rabbit_amqqueue:notify_policy_changed(Q1), |
250 | 257 | Q1. |
258 | ||
259 | %% Recoverable nodes are those which we could promote if the whole | |
260 | %% cluster were to suddenly stop and we then lose the master; i.e. all | |
261 | %% nodes with running slaves, and all stopped nodes which had running | |
262 | %% slaves when they were up. | |
263 | %% | |
264 | %% Therefore we aim here to add new nodes with slaves, and remove | |
265 | %% running nodes without slaves. We also try to keep the order | 
266 | %% constant, and similar to the live SPids field (i.e. oldest | |
267 | %% first). That's not necessarily optimal if nodes spend a long time | |
268 | %% down, but we don't have a good way to predict what the optimal is | |
269 | %% in that case anyway, and we assume nodes will not just be down for | |
270 | %% a long time without being removed. | |
271 | update_recoverable(SPids, RS) -> | |
272 | SNodes = [node(SPid) || SPid <- SPids], | |
273 | RunningNodes = rabbit_mnesia:cluster_nodes(running), | |
274 | AddNodes = SNodes -- RS, | |
275 | DelNodes = RunningNodes -- SNodes, %% i.e. running with no slave | |
276 | (RS -- DelNodes) ++ AddNodes. | |
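The list arithmetic in `update_recoverable/2` is order-preserving: surviving entries of `RS` keep their positions and newly mirrored nodes are appended. A standalone sketch with the node lists passed in directly (hypothetical module; the real function derives `SNodes` from slave pids and `RunningNodes` from Mnesia):

```erlang
%% Hypothetical standalone version of update_recoverable/2: nodes are
%% supplied as atoms instead of being derived from pids and Mnesia.
-module(recoverable_sketch).
-export([update/3]).

update(SNodes, RunningNodes, RS) ->
    AddNodes = SNodes -- RS,           %% nodes that newly run a slave
    DelNodes = RunningNodes -- SNodes, %% running nodes with no slave
    (RS -- DelNodes) ++ AddNodes.      %% keep order: oldest first
```

For example, with slaves on `[a, c]`, running nodes `[a, b]` and previous recoverable set `[a, b]`, node `b` is dropped (running, no slave) and `c` is appended, giving `[a, c]`.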
251 | 277 | |
252 | 278 | %%---------------------------------------------------------------------------- |
253 | 279 | |
343 | 369 | {NewMNode, NewSNodes} = suggested_queue_nodes(NewQ), |
344 | 370 | OldNodes = [OldMNode | OldSNodes], |
345 | 371 | NewNodes = [NewMNode | NewSNodes], |
372 | %% When a mirror dies, remove_from_queue/3 might have to add new | 
373 | %% slaves (in "exactly" mode). It will check mnesia to see which | 
374 | %% slaves there currently are. If drop_mirror/2 is invoked first | 
375 | %% then when we end up in remove_from_queue/3 it will not see the | 
376 | %% slaves that add_mirror/2 will add, and also want to add them | |
377 | %% (even though we are not responding to the death of a | |
378 | %% mirror). Breakage ensues. | |
346 | 379 | add_mirrors (QName, NewNodes -- OldNodes, async), |
347 | 380 | drop_mirrors(QName, OldNodes -- NewNodes), |
348 | 381 | %% This is for the case where no extra nodes were added but we changed to |
205 | 205 | {error, not_found} -> |
206 | 206 | gen_server2:reply(From, ok), |
207 | 207 | {stop, normal, State}; |
208 | {ok, Pid, DeadPids} -> | |
208 | {ok, Pid, DeadPids, ExtraNodes} -> | |
209 | 209 | rabbit_mirror_queue_misc:report_deaths(Self, false, QName, |
210 | 210 | DeadPids), |
211 | 211 | case Pid of |
212 | 212 | MPid -> |
213 | 213 | %% master hasn't changed |
214 | 214 | gen_server2:reply(From, ok), |
215 | rabbit_mirror_queue_misc:add_mirrors( | |
216 | QName, ExtraNodes, async), | |
215 | 217 | noreply(State); |
216 | 218 | Self -> |
217 | 219 | %% we've become master |
218 | 220 | QueueState = promote_me(From, State), |
221 | rabbit_mirror_queue_misc:add_mirrors( | |
222 | QName, ExtraNodes, async), | |
219 | 223 | {become, rabbit_amqqueue_process, QueueState, hibernate}; |
220 | 224 | _ -> |
221 | 225 | %% master has changed to not us |
222 | 226 | gen_server2:reply(From, ok), |
227 | %% assertion, we don't need to add_mirrors/3 in this | 
228 | %% branch, see last clause in remove_from_queue/3 | 
229 | [] = ExtraNodes, | |
223 | 230 | %% Since GM is by nature lazy we need to make sure |
224 | 231 | %% there is some traffic when a master dies, to |
225 | 232 | %% make sure all slaves get informed of the |
245 | 252 | handle_cast({gm, Instruction}, State) -> |
246 | 253 | handle_process_result(process_instruction(Instruction, State)); |
247 | 254 | |
248 | handle_cast({deliver, Delivery = #delivery{sender = Sender}, true, Flow}, | |
255 | handle_cast({deliver, Delivery = #delivery{sender = Sender, flow = Flow}, true}, | |
249 | 256 | State) -> |
250 | 257 | %% Asynchronous, non-"mandatory", deliver mode. |
251 | 258 | case Flow of |
420 | 427 | {promote, CPid} -> {become, rabbit_mirror_queue_coordinator, [CPid]} |
421 | 428 | end. |
422 | 429 | |
430 | handle_msg([_SPid], _From, hibernate_heartbeat) -> | |
431 | %% See rabbit_mirror_queue_coordinator:handle_pre_hibernate/1 | |
432 | ok; | |
423 | 433 | handle_msg([_SPid], _From, request_depth) -> |
424 | 434 | %% This is only of value to the master |
425 | 435 | ok; |
627 | 637 | (_Msgid, _Status, MTC0) -> |
628 | 638 | MTC0 |
629 | 639 | end, gb_trees:empty(), MS), |
630 | Deliveries = [Delivery#delivery{mandatory = false} || %% [0] | |
640 | Deliveries = [promote_delivery(Delivery) || | |
631 | 641 | {_ChPid, {PubQ, _PendCh, _ChState}} <- dict:to_list(SQ), |
632 | 642 | Delivery <- queue:to_list(PubQ)], |
633 | 643 | AwaitGmDown = [ChPid || {ChPid, {_, _, down_from_ch}} <- dict:to_list(SQ)], |
639 | 649 | Q1, rabbit_mirror_queue_master, MasterState, RateTRef, Deliveries, KS1, |
640 | 650 | MTC). |
641 | 651 | |
642 | %% [0] We reset mandatory to false here because we will have sent the | |
643 | %% mandatory_received already as soon as we got the message | |
652 | %% We reset mandatory to false here because we will have sent the | |
653 | %% mandatory_received already as soon as we got the message. We also | |
654 | %% need to send an ack for these messages since the channel is waiting | |
655 | %% for one for the via-GM case and we will not now receive one. | |
656 | promote_delivery(Delivery = #delivery{sender = Sender, flow = Flow}) -> | |
657 | case Flow of | |
658 | flow -> credit_flow:ack(Sender); | |
659 | noflow -> ok | |
660 | end, | |
661 | Delivery#delivery{mandatory = false}. | |
644 | 662 | |
645 | 663 | noreply(State) -> |
646 | 664 | {NewState, Timeout} = next_state(State), |
822 | 840 | State1 #state { sender_queues = SQ1, msg_id_status = MS1 }. |
823 | 841 | |
824 | 842 | |
825 | process_instruction({publish, ChPid, MsgProps, | |
843 | process_instruction({publish, ChPid, Flow, MsgProps, | |
826 | 844 | Msg = #basic_message { id = MsgId }}, State) -> |
845 | maybe_flow_ack(ChPid, Flow), | |
827 | 846 | State1 = #state { backing_queue = BQ, backing_queue_state = BQS } = |
828 | 847 | publish_or_discard(published, ChPid, MsgId, State), |
829 | BQS1 = BQ:publish(Msg, MsgProps, true, ChPid, BQS), | |
848 | BQS1 = BQ:publish(Msg, MsgProps, true, ChPid, Flow, BQS), | |
830 | 849 | {ok, State1 #state { backing_queue_state = BQS1 }}; |
831 | process_instruction({publish_delivered, ChPid, MsgProps, | |
850 | process_instruction({publish_delivered, ChPid, Flow, MsgProps, | |
832 | 851 | Msg = #basic_message { id = MsgId }}, State) -> |
852 | maybe_flow_ack(ChPid, Flow), | |
833 | 853 | State1 = #state { backing_queue = BQ, backing_queue_state = BQS } = |
834 | 854 | publish_or_discard(published, ChPid, MsgId, State), |
835 | 855 | true = BQ:is_empty(BQS), |
836 | {AckTag, BQS1} = BQ:publish_delivered(Msg, MsgProps, ChPid, BQS), | |
856 | {AckTag, BQS1} = BQ:publish_delivered(Msg, MsgProps, ChPid, Flow, BQS), | |
837 | 857 | {ok, maybe_store_ack(true, MsgId, AckTag, |
838 | 858 | State1 #state { backing_queue_state = BQS1 })}; |
839 | process_instruction({discard, ChPid, MsgId}, State) -> | |
859 | process_instruction({discard, ChPid, Flow, MsgId}, State) -> | |
860 | maybe_flow_ack(ChPid, Flow), | |
840 | 861 | State1 = #state { backing_queue = BQ, backing_queue_state = BQS } = |
841 | 862 | publish_or_discard(discarded, ChPid, MsgId, State), |
842 | BQS1 = BQ:discard(MsgId, ChPid, BQS), | |
863 | BQS1 = BQ:discard(MsgId, ChPid, Flow, BQS), | |
843 | 864 | {ok, State1 #state { backing_queue_state = BQS1 }}; |
844 | 865 | process_instruction({drop, Length, Dropped, AckRequired}, |
845 | 866 | State = #state { backing_queue = BQ, |
898 | 919 | BQ:delete_and_terminate(Reason, BQS), |
899 | 920 | {stop, State #state { backing_queue_state = undefined }}. |
900 | 921 | |
922 | maybe_flow_ack(ChPid, flow) -> credit_flow:ack(ChPid); | |
923 | maybe_flow_ack(_ChPid, noflow) -> ok. | |
924 | ||
901 | 925 | msg_ids_to_acktags(MsgIds, MA) -> |
902 | 926 | {AckTags, MA1} = |
903 | 927 | lists:foldl( |
262 | 262 | Props1 = Props#message_properties{needs_confirming = false}, |
263 | 263 | {MA1, BQS1} = |
264 | 264 | case Unacked of |
265 | false -> {MA, BQ:publish(Msg, Props1, true, none, BQS)}; | |
265 | false -> {MA, | |
266 | BQ:publish(Msg, Props1, true, none, noflow, BQS)}; | |
266 | 267 | true -> {AckTag, BQS2} = BQ:publish_delivered( |
267 | Msg, Props1, none, BQS), | |
268 | Msg, Props1, none, noflow, BQS), | |
268 | 269 | {[{Msg#basic_message.id, AckTag} | MA], BQS2} |
269 | 270 | end, |
270 | 271 | slave_sync_loop(Args, {MA1, TRef, BQS1}); |
43 | 43 | -export([format/2, format_many/1, format_stderr/2]). |
44 | 44 | -export([unfold/2, ceil/1, queue_fold/3]). |
45 | 45 | -export([sort_field_table/1]). |
46 | -export([pid_to_string/1, string_to_pid/1, node_to_fake_pid/1]). | |
46 | -export([pid_to_string/1, string_to_pid/1, | |
47 | pid_change_node/2, node_to_fake_pid/1]). | |
47 | 48 | -export([version_compare/2, version_compare/3]). |
48 | 49 | -export([version_minor_equivalent/2]). |
49 | 50 | -export([dict_cons/3, orddict_cons/3, gb_trees_cons/3]). |
57 | 58 | -export([format_message_queue/2]). |
58 | 59 | -export([append_rpc_all_nodes/4]). |
59 | 60 | -export([os_cmd/1]). |
61 | -export([is_os_process_alive/1]). | |
60 | 62 | -export([gb_sets_difference/2]). |
61 | 63 | -export([version/0, otp_release/0, which_applications/0]). |
62 | 64 | -export([sequence_error/1]). |
195 | 197 | (rabbit_framing:amqp_table()) -> rabbit_framing:amqp_table()). |
196 | 198 | -spec(pid_to_string/1 :: (pid()) -> string()). |
197 | 199 | -spec(string_to_pid/1 :: (string()) -> pid()). |
200 | -spec(pid_change_node/2 :: (pid(), node()) -> pid()). | |
198 | 201 | -spec(node_to_fake_pid/1 :: (atom()) -> pid()). |
199 | 202 | -spec(version_compare/2 :: (string(), string()) -> 'lt' | 'eq' | 'gt'). |
200 | 203 | -spec(version_compare/3 :: |
229 | 232 | -spec(format_message_queue/2 :: (any(), priority_queue:q()) -> term()). |
230 | 233 | -spec(append_rpc_all_nodes/4 :: ([node()], atom(), atom(), [any()]) -> [any()]). |
231 | 234 | -spec(os_cmd/1 :: (string()) -> string()). |
235 | -spec(is_os_process_alive/1 :: (non_neg_integer()) -> boolean()). | |
232 | 236 | -spec(gb_sets_difference/2 :: (gb_sets:set(), gb_sets:set()) -> gb_sets:set()). |
233 | 237 | -spec(version/0 :: () -> string()). |
234 | 238 | -spec(otp_release/0 :: () -> string()). |
519 | 523 | Res = mnesia:sync_transaction(TxFun), |
520 | 524 | DiskLogAfter = mnesia_dumper:get_log_writes(), |
521 | 525 | case DiskLogAfter == DiskLogBefore of |
522 | true -> Res; | |
523 | false -> {sync, Res} | |
526 | true -> file_handle_cache_stats:update( | |
527 | mnesia_ram_tx), | |
528 | Res; | |
529 | false -> file_handle_cache_stats:update( | |
530 | mnesia_disk_tx), | |
531 | {sync, Res} | |
524 | 532 | end; |
525 | 533 | true -> mnesia:sync_transaction(TxFun) |
526 | 534 | end |
685 | 693 | %% regardless of what node we are running on. The representation also |
686 | 694 | %% permits easy identification of the pid's node. |
687 | 695 | pid_to_string(Pid) when is_pid(Pid) -> |
688 | %% see http://erlang.org/doc/apps/erts/erl_ext_dist.html (8.10 and | |
689 | %% 8.7) | |
690 | <<131,103,100,NodeLen:16,NodeBin:NodeLen/binary,Id:32,Ser:32,Cre:8>> | |
691 | = term_to_binary(Pid), | |
692 | Node = binary_to_term(<<131,100,NodeLen:16,NodeBin:NodeLen/binary>>), | |
696 | {Node, Cre, Id, Ser} = decompose_pid(Pid), | |
693 | 697 | format("<~s.~B.~B.~B>", [Node, Cre, Id, Ser]). |
694 | 698 | |
695 | 699 | %% inverse of above |
700 | 704 | case re:run(Str, "^<(.*)\\.(\\d+)\\.(\\d+)\\.(\\d+)>\$", |
701 | 705 | [{capture,all_but_first,list}]) of |
702 | 706 | {match, [NodeStr, CreStr, IdStr, SerStr]} -> |
703 | <<131,NodeEnc/binary>> = term_to_binary(list_to_atom(NodeStr)), | |
704 | 707 | [Cre, Id, Ser] = lists:map(fun list_to_integer/1, |
705 | 708 | [CreStr, IdStr, SerStr]), |
706 | binary_to_term(<<131,103,NodeEnc/binary,Id:32,Ser:32,Cre:8>>); | |
709 | compose_pid(list_to_atom(NodeStr), Cre, Id, Ser); | |
707 | 710 | nomatch -> |
708 | 711 | throw(Err) |
709 | 712 | end. |
710 | 713 | |
714 | pid_change_node(Pid, NewNode) -> | |
715 | {_OldNode, Cre, Id, Ser} = decompose_pid(Pid), | |
716 | compose_pid(NewNode, Cre, Id, Ser). | |
717 | ||
711 | 718 | %% node(node_to_fake_pid(Node)) =:= Node. |
712 | 719 | node_to_fake_pid(Node) -> |
713 | string_to_pid(format("<~s.0.0.0>", [Node])). | |
720 | compose_pid(Node, 0, 0, 0). | |
721 | ||
722 | decompose_pid(Pid) when is_pid(Pid) -> | |
723 | %% see http://erlang.org/doc/apps/erts/erl_ext_dist.html (8.10 and | |
724 | %% 8.7) | |
725 | <<131,103,100,NodeLen:16,NodeBin:NodeLen/binary,Id:32,Ser:32,Cre:8>> | |
726 | = term_to_binary(Pid), | |
727 | Node = binary_to_term(<<131,100,NodeLen:16,NodeBin:NodeLen/binary>>), | |
728 | {Node, Cre, Id, Ser}. | |
729 | ||
730 | compose_pid(Node, Cre, Id, Ser) -> | |
731 | <<131,NodeEnc/binary>> = term_to_binary(Node), | |
732 | binary_to_term(<<131,103,NodeEnc/binary,Id:32,Ser:32,Cre:8>>). | |
714 | 733 | |
715 | 734 | version_compare(A, B, lte) -> |
716 | 735 | case version_compare(A, B) of |
914 | 933 | end |
915 | 934 | end. |
916 | 935 | |
936 | is_os_process_alive(Pid) -> | |
937 | with_os([{unix, fun () -> | |
938 | run_ps(Pid) =:= 0 | |
939 | end}, | |
940 | {win32, fun () -> | |
941 | Cmd = "tasklist /nh /fi \"pid eq " ++ Pid ++ "\" ", | |
942 | Res = os_cmd(Cmd ++ "2>&1"), | |
943 | case re:run(Res, "erl\\.exe", [{capture, none}]) of | |
944 | match -> true; | |
945 | _ -> false | |
946 | end | |
947 | end}]). | |
948 | ||
949 | with_os(Handlers) -> | |
950 | {OsFamily, _} = os:type(), | |
951 | case proplists:get_value(OsFamily, Handlers) of | |
952 | undefined -> throw({unsupported_os, OsFamily}); | |
953 | Handler -> Handler() | |
954 | end. | |
955 | ||
956 | run_ps(Pid) -> | |
957 | Port = erlang:open_port({spawn, "ps -p " ++ Pid}, | |
958 | [exit_status, {line, 16384}, | |
959 | use_stdio, stderr_to_stdout]), | |
960 | exit_loop(Port). | |
961 | ||
962 | exit_loop(Port) -> | |
963 | receive | |
964 | {Port, {exit_status, Rc}} -> Rc; | |
965 | {Port, _} -> exit_loop(Port) | |
966 | end. | |
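On Unix the liveness check boils down to the exit status of `ps -p Pid`: zero means the process exists. A hedged Python sketch of that technique (it assumes a POSIX `ps` on the PATH; `is_os_process_alive` just mirrors the Erlang name):

```python
import os
import subprocess

def is_os_process_alive(pid: int) -> bool:
    """True when `ps -p PID` exits 0, i.e. the OS process exists."""
    return subprocess.run(
        ["ps", "-p", str(pid)],
        stdout=subprocess.DEVNULL, stderr=subprocess.STDOUT,
    ).returncode == 0
```

The Windows branch has no such exit-status contract, which is why the Erlang code instead greps the `tasklist` output for `erl.exe`.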
967 | ||
917 | 968 | gb_sets_difference(S1, S2) -> |
918 | 969 | gb_sets:fold(fun gb_sets:delete_any/2, S1, S2). |
919 | 970 |
108 | 108 | %% We intuitively expect the global name server to be synced when |
109 | 109 | %% Mnesia is up. In fact that's not guaranteed to be the case - |
110 | 110 | %% let's make it so. |
111 | ok = global:sync(), | |
111 | ok = rabbit_node_monitor:global_sync(), | |
112 | 112 | ok. |
113 | 113 | |
114 | 114 | init_from_config() -> |
115 | FindBadNodeNames = fun | |
116 | (Name, BadNames) when is_atom(Name) -> BadNames; | |
117 | (Name, BadNames) -> [Name | BadNames] | |
118 | end, | |
115 | 119 | {TryNodes, NodeType} = |
116 | 120 | case application:get_env(rabbit, cluster_nodes) of |
121 | {ok, {Nodes, Type} = Config} | |
122 | when is_list(Nodes) andalso (Type == disc orelse Type == ram) -> | |
123 | case lists:foldr(FindBadNodeNames, [], Nodes) of | |
124 | [] -> Config; | |
125 | BadNames -> e({invalid_cluster_node_names, BadNames}) | |
126 | end; | |
127 | {ok, {_, BadType}} when BadType /= disc andalso BadType /= ram -> | |
128 | e({invalid_cluster_node_type, BadType}); | |
117 | 129 | {ok, Nodes} when is_list(Nodes) -> |
118 | Config = {Nodes -- [node()], case lists:member(node(), Nodes) of | |
119 | true -> disc; | |
120 | false -> ram | |
121 | end}, | |
122 | rabbit_log:warning( | |
123 | "Converting legacy 'cluster_nodes' configuration~n ~w~n" | |
124 | "to~n ~w.~n~n" | |
125 | "Please update the configuration to the new format " | |
126 | "{Nodes, NodeType}, where Nodes contains the nodes that the " | |
127 | "node will try to cluster with, and NodeType is either " | |
128 | "'disc' or 'ram'~n", [Nodes, Config]), | |
129 | Config; | |
130 | {ok, Config} -> | |
131 | Config | |
130 | %% The legacy syntax (a nodes list without the node | |
131 | %% type) is unsupported. | |
132 | case lists:foldr(FindBadNodeNames, [], Nodes) of | |
133 | [] -> e(cluster_node_type_mandatory); | |
134 | _ -> e(invalid_cluster_nodes_conf) | |
135 | end; | |
136 | {ok, _} -> | |
137 | e(invalid_cluster_nodes_conf) | |
132 | 138 | end, |
133 | 139 | case TryNodes of |
134 | 140 | [] -> init_db_and_upgrade([node()], disc, false); |
849 | 855 | |
850 | 856 | e(Tag) -> throw({error, {Tag, error_description(Tag)}}). |
851 | 857 | |
858 | error_description({invalid_cluster_node_names, BadNames}) -> | |
859 | "In the 'cluster_nodes' configuration key, the following node names " | |
860 | "are invalid: " ++ lists:flatten(io_lib:format("~p", [BadNames])); | |
861 | error_description({invalid_cluster_node_type, BadType}) -> | |
862 | "In the 'cluster_nodes' configuration key, the node type is invalid " | |
863 | "(expected 'disc' or 'ram'): " ++ | |
864 | lists:flatten(io_lib:format("~p", [BadType])); | |
865 | error_description(cluster_node_type_mandatory) -> | |
866 | "The 'cluster_nodes' configuration key must indicate the node type: " | |
867 | "either {[...], disc} or {[...], ram}"; | |
868 | error_description(invalid_cluster_nodes_conf) -> | |
869 | "The 'cluster_nodes' configuration key is invalid, it must be of the " | |
870 | "form {[Nodes], Type}, where Nodes is a list of node names and " | |
871 | "Type is either 'disc' or 'ram'"; | |
852 | 872 | error_description(clustering_only_disc_node) -> |
853 | 873 | "You cannot cluster a node if it is the only disc node in its existing " |
854 | 874 | " cluster. If new nodes joined while this node was offline, use " |
0 | %% The contents of this file are subject to the Mozilla Public License | |
1 | %% Version 1.1 (the "License"); you may not use this file except in | |
2 | %% compliance with the License. You may obtain a copy of the License | |
3 | %% at http://www.mozilla.org/MPL/ | |
4 | %% | |
5 | %% Software distributed under the License is distributed on an "AS IS" | |
6 | %% basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See | |
7 | %% the License for the specific language governing rights and | |
8 | %% limitations under the License. | |
9 | %% | |
10 | %% The Original Code is RabbitMQ. | |
11 | %% | |
12 | %% The Initial Developer of the Original Code is GoPivotal, Inc. | |
13 | %% Copyright (c) 2007-2014 GoPivotal, Inc. All rights reserved. | |
14 | %% | |
15 | ||
16 | -module(rabbit_mnesia_rename). | |
17 | -include("rabbit.hrl"). | |
18 | ||
19 | -export([rename/2]). | |
20 | -export([maybe_finish/1]). | |
21 | ||
22 | -define(CONVERT_TABLES, [schema, rabbit_durable_queue]). | |
23 | ||
24 | %% Supports renaming the nodes in the Mnesia database. In order to do | |
25 | %% this, we take a backup of the database, traverse the backup | |
26 | %% changing node names and pids as we go, then restore it. | |
27 | %% | |
28 | %% That's enough for a standalone node; for clusters the story is more | |
29 | %% complex. We can take pairs of nodes From and To, but backing up and | |
30 | %% restoring the database changes schema cookies, so if we just do | |
31 | %% this on all nodes the cluster will refuse to re-form with | |
32 | %% "Incompatible schema cookies.". Therefore we do something similar | |
33 | %% to what we do for upgrades - the first node in the cluster to | |
34 | %% restart becomes the authority, and other nodes wipe their own | |
35 | %% Mnesia state and rejoin. They also need to tell Mnesia the old node | |
36 | %% is not coming back. | |
37 | %% | |
38 | %% If we are renaming nodes one at a time then the running cluster | |
39 | %% might not be aware that a rename has taken place, so after we wipe | |
40 | %% and rejoin we then update any tables (in practice just | |
41 | %% rabbit_durable_queue) which should be aware that we have changed. | |
42 | ||
43 | %%---------------------------------------------------------------------------- | |
44 | ||
45 | -ifdef(use_specs). | |
46 | ||
47 | -spec(rename/2 :: (node(), [{node(), node()}]) -> 'ok'). | |
48 | -spec(maybe_finish/1 :: ([node()]) -> 'ok'). | |
49 | ||
50 | -endif. | |
51 | ||
52 | %%---------------------------------------------------------------------------- | |
53 | ||
54 | rename(Node, NodeMapList) -> | |
55 | try | |
56 | %% Check everything is correct and figure out what we are | |
57 | %% changing from and to. | |
58 | {FromNode, ToNode, NodeMap} = prepare(Node, NodeMapList), | |
59 | ||
60 | %% We back up and restore Mnesia even if other nodes are | |
61 | %% running at the time, and defer the final decision about | |
62 | %% whether to use our mutated copy or rejoin the cluster until | |
63 | %% we restart. That means we might be mutating our copy of the | |
64 | %% database while the cluster is running. *Do not* contact the | |
65 | %% cluster while this is happening; we are likely to get | |
66 | %% confused. | |
67 | application:set_env(kernel, dist_auto_connect, never), | |
68 | ||
69 | %% Take a copy we can restore from if we abandon the | |
70 | %% rename. We don't restore from the "backup" since restoring | |
71 | %% that changes schema cookies and might stop us rejoining the | |
72 | %% cluster. | |
73 | ok = rabbit_mnesia:copy_db(mnesia_copy_dir()), | |
74 | ||
75 | %% And make the actual changes | |
76 | rabbit_control_main:become(FromNode), | |
77 | take_backup(before_backup_name()), | |
78 | convert_backup(NodeMap, before_backup_name(), after_backup_name()), | |
79 | ok = rabbit_file:write_term_file(rename_config_name(), | |
80 | [{FromNode, ToNode}]), | |
81 | convert_config_files(NodeMap), | |
82 | rabbit_control_main:become(ToNode), | |
83 | restore_backup(after_backup_name()), | |
84 | ok | |
85 | after | |
86 | stop_mnesia() | |
87 | end. | |
88 | ||
89 | prepare(Node, NodeMapList) -> | |
90 | %% If we have a previous rename and haven't started since, give up. | |
91 | case rabbit_file:is_dir(dir()) of | |
92 | true -> exit({rename_in_progress, | |
93 | "Restart node under old name to roll back"}); | |
94 | false -> ok = rabbit_file:ensure_dir(mnesia_copy_dir()) | |
95 | end, | |
96 | ||
97 | %% Check we don't have two nodes mapped to the same node | |
98 | {FromNodes, ToNodes} = lists:unzip(NodeMapList), | |
99 | case length(FromNodes) - length(lists:usort(ToNodes)) of | |
100 | 0 -> ok; | |
101 | _ -> exit({duplicate_node, ToNodes}) | |
102 | end, | |
103 | ||
104 | %% Figure out which node we are before and after the change | |
105 | FromNode = case [From || {From, To} <- NodeMapList, | |
106 | To =:= Node] of | |
107 | [N] -> N; | |
108 | [] -> Node | |
109 | end, | |
110 | NodeMap = dict:from_list(NodeMapList), | |
111 | ToNode = case dict:find(FromNode, NodeMap) of | |
112 | {ok, N2} -> N2; | |
113 | error -> FromNode | |
114 | end, | |
115 | ||
116 | %% Check that we are in the cluster, all old nodes are in the | |
117 | %% cluster, and no new nodes are. | |
118 | Nodes = rabbit_mnesia:cluster_nodes(all), | |
119 | case {FromNodes -- Nodes, ToNodes -- (ToNodes -- Nodes), | |
120 | lists:member(Node, Nodes ++ ToNodes)} of | |
121 | {[], [], true} -> ok; | |
122 | {[], [], false} -> exit({i_am_not_involved, Node}); | |
123 | {F, [], _} -> exit({nodes_not_in_cluster, F}); | |
124 | {_, T, _} -> exit({nodes_already_in_cluster, T}) | |
125 | end, | |
126 | {FromNode, ToNode, NodeMap}. | |
127 | ||
128 | take_backup(Backup) -> | |
129 | start_mnesia(), | |
130 | ok = mnesia:backup(Backup), | |
131 | stop_mnesia(). | |
132 | ||
133 | restore_backup(Backup) -> | |
134 | ok = mnesia:install_fallback(Backup, [{scope, local}]), | |
135 | start_mnesia(), | |
136 | stop_mnesia(), | |
137 | rabbit_mnesia:force_load_next_boot(). | |
138 | ||
139 | maybe_finish(AllNodes) -> | |
140 | case rabbit_file:read_term_file(rename_config_name()) of | |
141 | {ok, [{FromNode, ToNode}]} -> finish(FromNode, ToNode, AllNodes); | |
142 | _ -> ok | |
143 | end. | |
144 | ||
145 | finish(FromNode, ToNode, AllNodes) -> | |
146 | case node() of | |
147 | ToNode -> | |
148 | case rabbit_upgrade:nodes_running(AllNodes) of | |
149 | [] -> finish_primary(FromNode, ToNode); | |
150 | _ -> finish_secondary(FromNode, ToNode, AllNodes) | |
151 | end; | |
152 | FromNode -> | |
153 | rabbit_log:info( | |
154 | "Abandoning rename from ~s to ~s since we are still ~s~n", | |
155 | [FromNode, ToNode, FromNode]), | |
156 | [{ok, _} = file:copy(backup_of_conf(F), F) || F <- config_files()], | |
157 | ok = rabbit_file:recursive_delete([rabbit_mnesia:dir()]), | |
158 | ok = rabbit_file:recursive_copy( | |
159 | mnesia_copy_dir(), rabbit_mnesia:dir()), | |
160 | delete_rename_files(); | |
161 | _ -> | |
162 | %% Boot will almost certainly fail but we might as | |
163 | %% well just log this | |
164 | rabbit_log:info( | |
165 | "Rename attempted from ~s to ~s but we are ~s - ignoring.~n", | |
166 | [FromNode, ToNode, node()]) | |
167 | end. | |
168 | ||
169 | finish_primary(FromNode, ToNode) -> | |
170 | rabbit_log:info("Restarting as primary after rename from ~s to ~s~n", | |
171 | [FromNode, ToNode]), | |
172 | delete_rename_files(), | |
173 | ok. | |
174 | ||
175 | finish_secondary(FromNode, ToNode, AllNodes) -> | |
176 | rabbit_log:info("Restarting as secondary after rename from ~s to ~s~n", | |
177 | [FromNode, ToNode]), | |
178 | rabbit_upgrade:secondary_upgrade(AllNodes), | |
179 | rename_in_running_mnesia(FromNode, ToNode), | |
180 | delete_rename_files(), | |
181 | ok. | |
182 | ||
183 | dir() -> rabbit_mnesia:dir() ++ "-rename". | |
184 | before_backup_name() -> dir() ++ "/backup-before". | |
185 | after_backup_name() -> dir() ++ "/backup-after". | |
186 | rename_config_name() -> dir() ++ "/pending.config". | |
187 | mnesia_copy_dir() -> dir() ++ "/mnesia-copy". | |
188 | ||
189 | delete_rename_files() -> ok = rabbit_file:recursive_delete([dir()]). | |
190 | ||
191 | start_mnesia() -> rabbit_misc:ensure_ok(mnesia:start(), cannot_start_mnesia), | |
192 | rabbit_table:force_load(), | |
193 | rabbit_table:wait_for_replicated(). | |
194 | stop_mnesia() -> stopped = mnesia:stop(). | |
195 | ||
196 | convert_backup(NodeMap, FromBackup, ToBackup) -> | |
197 | mnesia:traverse_backup( | |
198 | FromBackup, ToBackup, | |
199 | fun | |
200 | (Row, Acc) -> | |
201 | case lists:member(element(1, Row), ?CONVERT_TABLES) of | |
202 | true -> {[update_term(NodeMap, Row)], Acc}; | |
203 | false -> {[Row], Acc} | |
204 | end | |
205 | end, switched). | |
206 | ||
207 | config_files() -> | |
208 | [rabbit_node_monitor:running_nodes_filename(), | |
209 | rabbit_node_monitor:cluster_status_filename()]. | |
210 | ||
211 | backup_of_conf(Path) -> | |
212 | filename:join([dir(), filename:basename(Path)]). | |
213 | ||
214 | convert_config_files(NodeMap) -> | |
215 | [convert_config_file(NodeMap, Path) || Path <- config_files()]. | |
216 | ||
217 | convert_config_file(NodeMap, Path) -> | |
218 | {ok, Term} = rabbit_file:read_term_file(Path), | |
219 | {ok, _} = file:copy(Path, backup_of_conf(Path)), | |
220 | ok = rabbit_file:write_term_file(Path, update_term(NodeMap, Term)). | |
221 | ||
222 | lookup_node(OldNode, NodeMap) -> | |
223 | case dict:find(OldNode, NodeMap) of | |
224 | {ok, NewNode} -> NewNode; | |
225 | error -> OldNode | |
226 | end. | |
227 | ||
228 | mini_map(FromNode, ToNode) -> dict:from_list([{FromNode, ToNode}]). | |
229 | ||
230 | update_term(NodeMap, L) when is_list(L) -> | |
231 | [update_term(NodeMap, I) || I <- L]; | |
232 | update_term(NodeMap, T) when is_tuple(T) -> | |
233 | list_to_tuple(update_term(NodeMap, tuple_to_list(T))); | |
234 | update_term(NodeMap, Node) when is_atom(Node) -> | |
235 | lookup_node(Node, NodeMap); | |
236 | update_term(NodeMap, Pid) when is_pid(Pid) -> | |
237 | rabbit_misc:pid_change_node(Pid, lookup_node(node(Pid), NodeMap)); | |
238 | update_term(_NodeMap, Term) -> | |
239 | Term. | |
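`update_term/2` walks arbitrarily nested lists and tuples, rewriting node atoms via the map and pids via `rabbit_misc:pid_change_node/2`, and passing every other term through unchanged. The traversal can be sketched in Python (strings stand in for Erlang atoms; the pid case is omitted):

```python
def update_term(node_map: dict, term):
    """Recursively rename nodes inside nested lists/tuples, as in
    rabbit_mnesia_rename:update_term/2."""
    if isinstance(term, list):
        return [update_term(node_map, t) for t in term]
    if isinstance(term, tuple):
        return tuple(update_term(node_map, t) for t in term)
    if isinstance(term, str):                  # stands in for an atom
        return node_map.get(term, term)
    return term                                # anything else is untouched
```

This is what lets the rename touch whole backup rows and config-file terms without knowing their record shapes.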
240 | ||
241 | rename_in_running_mnesia(FromNode, ToNode) -> | |
242 | All = rabbit_mnesia:cluster_nodes(all), | |
243 | Running = rabbit_mnesia:cluster_nodes(running), | |
244 | case {lists:member(FromNode, Running), lists:member(ToNode, All)} of | |
245 | {false, true} -> ok; | |
246 | {true, _} -> exit({old_node_running, FromNode}); | |
247 | {_, false} -> exit({new_node_not_in_cluster, ToNode}) | |
248 | end, | |
249 | {atomic, ok} = mnesia:del_table_copy(schema, FromNode), | |
250 | Map = mini_map(FromNode, ToNode), | |
251 | {atomic, _} = transform_table(rabbit_durable_queue, Map), | |
252 | ok. | |
253 | ||
254 | transform_table(Table, Map) -> | |
255 | mnesia:sync_transaction( | |
256 | fun () -> | |
257 | mnesia:lock({table, Table}, write), | |
258 | transform_table(Table, Map, mnesia:first(Table)) | |
259 | end). | |
260 | ||
261 | transform_table(_Table, _Map, '$end_of_table') -> | |
262 | ok; | |
263 | transform_table(Table, Map, Key) -> | |
264 | [Term] = mnesia:read(Table, Key, write), | |
265 | ok = mnesia:write(Table, update_term(Map, Term), write), | |
266 | transform_table(Table, Map, mnesia:next(Table, Key)). |
472 | 472 | |
473 | 473 | read(MsgId, |
474 | 474 | CState = #client_msstate { cur_file_cache_ets = CurFileCacheEts }) -> |
475 | file_handle_cache_stats:update(msg_store_read), | |
475 | 476 | %% Check the cur file cache |
476 | 477 | case ets:lookup(CurFileCacheEts, MsgId) of |
477 | 478 | [] -> |
506 | 507 | client_write(MsgId, Msg, Flow, |
507 | 508 | CState = #client_msstate { cur_file_cache_ets = CurFileCacheEts, |
508 | 509 | client_ref = CRef }) -> |
510 | file_handle_cache_stats:update(msg_store_write), | |
509 | 511 | ok = client_update_flying(+1, MsgId, CState), |
510 | 512 | ok = update_msg_cache(CurFileCacheEts, MsgId, Msg), |
511 | 513 | ok = server_cast(CState, {write, CRef, MsgId, Flow}). |
1298 | 1300 | |
1299 | 1301 | open_file(Dir, FileName, Mode) -> |
1300 | 1302 | file_handle_cache:open(form_filename(Dir, FileName), ?BINARY_MODE ++ Mode, |
1301 | [{write_buffer, ?HANDLE_CACHE_BUFFER_SIZE}]). | |
1303 | [{write_buffer, ?HANDLE_CACHE_BUFFER_SIZE}, | |
1304 | {read_buffer, ?HANDLE_CACHE_BUFFER_SIZE}]). | |
1302 | 1305 | |
1303 | 1306 | close_handle(Key, CState = #client_msstate { file_handle_cache = FHC }) -> |
1304 | 1307 | CState #client_msstate { file_handle_cache = close_handle(Key, FHC) }; |
393 | 393 | mnesia:dirty_read(rabbit_listener, Node). |
394 | 394 | |
395 | 395 | on_node_down(Node) -> |
396 | ok = mnesia:dirty_delete(rabbit_listener, Node). | |
396 | case lists:member(Node, nodes()) of | |
397 | false -> ok = mnesia:dirty_delete(rabbit_listener, Node); | |
398 | true -> rabbit_log:info( | |
399 | "Keeping ~s listeners: the node is already back~n", [Node]) | |
400 | end. | |
397 | 401 | |
398 | 402 | start_client(Sock, SockTransform) -> |
399 | 403 | {ok, _Child, Reader} = supervisor:start_child(rabbit_tcp_client_sup, []), |
24 | 24 | update_cluster_status/0, reset_cluster_status/0]). |
25 | 25 | -export([notify_node_up/0, notify_joined_cluster/0, notify_left_cluster/1]). |
26 | 26 | -export([partitions/0, partitions/1, status/1, subscribe/1]). |
27 | -export([pause_minority_guard/0]). | |
27 | -export([pause_partition_guard/0]). | |
28 | -export([global_sync/0]). | |
28 | 29 | |
29 | 30 | %% gen_server callbacks |
30 | 31 | -export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2, |
31 | 32 | code_change/3]). |
32 | 33 | |
33 | 34 | %% Utils |
34 | -export([all_rabbit_nodes_up/0, run_outside_applications/1, ping_all/0, | |
35 | -export([all_rabbit_nodes_up/0, run_outside_applications/2, ping_all/0, | |
35 | 36 | alive_nodes/1, alive_rabbit_nodes/1]). |
36 | 37 | |
37 | 38 | -define(SERVER, ?MODULE). |
63 | 64 | -spec(partitions/1 :: ([node()]) -> [{node(), [node()]}]). |
64 | 65 | -spec(status/1 :: ([node()]) -> {[{node(), [node()]}], [node()]}). |
65 | 66 | -spec(subscribe/1 :: (pid()) -> 'ok'). |
66 | -spec(pause_minority_guard/0 :: () -> 'ok' | 'pausing'). | |
67 | -spec(pause_partition_guard/0 :: () -> 'ok' | 'pausing'). | |
67 | 68 | |
68 | 69 | -spec(all_rabbit_nodes_up/0 :: () -> boolean()). |
69 | -spec(run_outside_applications/1 :: (fun (() -> any())) -> pid()). | |
70 | -spec(run_outside_applications/2 :: (fun (() -> any()), boolean()) -> pid()). | |
70 | 71 | -spec(ping_all/0 :: () -> 'ok'). |
71 | 72 | -spec(alive_nodes/1 :: ([node()]) -> [node()]). |
72 | 73 | -spec(alive_rabbit_nodes/1 :: ([node()]) -> [node()]). |
193 | 194 | gen_server:cast(?SERVER, {subscribe, Pid}). |
194 | 195 | |
195 | 196 | %%---------------------------------------------------------------------------- |
196 | %% pause_minority safety | |
197 | %% pause_minority/pause_if_all_down safety | |
197 | 198 | %%---------------------------------------------------------------------------- |
198 | 199 | |
199 | 200 | %% If we are in a minority and pause_minority mode then a) we are |
200 | 201 | %% going to shut down imminently and b) we should not confirm anything |
201 | 202 | %% until then, since anything we confirm is likely to be lost. |
202 | 203 | %% |
203 | %% We could confirm something by having an HA queue see the minority | |
204 | %% The same principles apply to a node which isn't part of the preferred | |
205 | %% partition when we are in pause_if_all_down mode. | |
206 | %% | |
207 | %% We could confirm something by having an HA queue see the pausing | |
204 | 208 | %% state (and fail over into it) before the node monitor stops us, or |
205 | 209 | %% by using unmirrored queues and just having them vanish (and |
206 | 210 | %% confirming messages as thrown away).
207 | 211 | %% |
208 | 212 | %% So we have channels call in here before issuing confirms, to do a |
209 | %% lightweight check that we have not entered a minority state. | |
210 | ||
211 | pause_minority_guard() -> | |
212 | case get(pause_minority_guard) of | |
213 | not_minority_mode -> | |
213 | %% lightweight check that we have not entered a pausing state. | |
214 | ||
215 | pause_partition_guard() -> | |
216 | case get(pause_partition_guard) of | |
217 | not_pause_mode -> | |
214 | 218 | ok; |
215 | 219 | undefined -> |
216 | 220 | {ok, M} = application:get_env(rabbit, cluster_partition_handling), |
217 | 221 | case M of |
218 | pause_minority -> pause_minority_guard([]); | |
219 | _ -> put(pause_minority_guard, not_minority_mode), | |
220 | ok | |
222 | pause_minority -> | |
223 | pause_minority_guard([], ok); | |
224 | {pause_if_all_down, PreferredNodes, _} -> | |
225 | pause_if_all_down_guard(PreferredNodes, [], ok); | |
226 | _ -> | |
227 | put(pause_partition_guard, not_pause_mode), | |
228 | ok | |
221 | 229 | end; |
222 | {minority_mode, Nodes} -> | |
223 | pause_minority_guard(Nodes) | |
224 | end. | |
225 | ||
226 | pause_minority_guard(LastNodes) -> | |
230 | {minority_mode, Nodes, LastState} -> | |
231 | pause_minority_guard(Nodes, LastState); | |
232 | {pause_if_all_down_mode, PreferredNodes, Nodes, LastState} -> | |
233 | pause_if_all_down_guard(PreferredNodes, Nodes, LastState) | |
234 | end. | |
235 | ||
236 | pause_minority_guard(LastNodes, LastState) -> | |
227 | 237 | case nodes() of |
228 | LastNodes -> ok; | |
229 | _ -> put(pause_minority_guard, {minority_mode, nodes()}), | |
230 | case majority() of | |
231 | false -> pausing; | |
232 | true -> ok | |
233 | end | |
234 | end. | |
238 | LastNodes -> LastState; | |
239 | _ -> NewState = case majority() of | |
240 | false -> pausing; | |
241 | true -> ok | |
242 | end, | |
243 | put(pause_partition_guard, | |
244 | {minority_mode, nodes(), NewState}), | |
245 | NewState | |
246 | end. | |
247 | ||
248 | pause_if_all_down_guard(PreferredNodes, LastNodes, LastState) -> | |
249 | case nodes() of | |
250 | LastNodes -> LastState; | |
251 | _ -> NewState = case in_preferred_partition(PreferredNodes) of | |
252 | false -> pausing; | |
253 | true -> ok | |
254 | end, | |
255 | put(pause_partition_guard, | |
256 | {pause_if_all_down_mode, PreferredNodes, nodes(), | |
257 | NewState}), | |
258 | NewState | |
259 | end. | |
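Both guards use the same memoisation trick: the expensive decision is recomputed only when `nodes()` has changed, and otherwise the last result is returned from the process dictionary. A Python sketch of that caching pattern (`pause_guard` and `decide` are illustrative names, not the Erlang API):

```python
# Cache of the last-seen node list and the decision made for it.
_cache = {"nodes": None, "state": "ok"}

def pause_guard(current_nodes, decide):
    """Re-run `decide` only when the connected-node list changes;
    otherwise return the cached ok/pausing state."""
    if current_nodes == _cache["nodes"]:
        return _cache["state"]
    state = "ok" if decide(current_nodes) else "pausing"
    _cache.update(nodes=current_nodes, state=state)
    return state
```

Caching the state alongside the node list is the change made here: previously only the node list was remembered, so the guard could not report `pausing` consistently between node-list changes.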
260 | ||
261 | %%---------------------------------------------------------------------------- | |
262 | %% "global" hang workaround. | |
263 | %%---------------------------------------------------------------------------- | |
264 | ||
265 | %% This code works around a possible inconsistency in the "global" | |
266 | %% state, causing global:sync/0 to never return. | |
267 | %% | |
268 | %% 1. A process is spawned. | |
269 | %% 2. If, after 15 seconds, global:sync() has not returned, the "global" | |
270 | %% state is parsed. | |
271 | %% 3. If it detects that a sync has been blocked for more than 10 seconds, | |
272 | %% the process sends fake nodedown/nodeup events to the two | |
273 | %% nodes involved (one local, one remote). | |
274 | %% 4. Both "global" instances restart their synchronisation. | |
275 | %% 5. global:sync() finally returns. | |
276 | %% | |
277 | %% FIXME: Remove this workaround once we have removed the change to | |
278 | %% "dist_auto_connect" and fixed the bugs it uncovered. | |
279 | ||
280 | global_sync() -> | |
281 | Pid = spawn(fun workaround_global_hang/0), | |
282 | ok = global:sync(), | |
283 | Pid ! global_sync_done, | |
284 | ok. | |
285 | ||
286 | workaround_global_hang() -> | |
287 | receive | |
288 | global_sync_done -> | |
289 | ok | |
290 | after 15000 -> | |
291 | find_blocked_global_peers() | |
292 | end. | |
293 | ||
294 | find_blocked_global_peers() -> | |
295 | {status, _, _, [Dict | _]} = sys:get_status(global_name_server), | |
296 | find_blocked_global_peers1(Dict). | |
297 | ||
298 | find_blocked_global_peers1([{{sync_tag_his, Peer}, Timestamp} | Rest]) -> | |
299 | Diff = timer:now_diff(erlang:now(), Timestamp), | |
300 | if | |
301 | Diff >= 10000 -> unblock_global_peer(Peer); | |
302 | true -> ok | |
303 | end, | |
304 | find_blocked_global_peers1(Rest); | |
305 | find_blocked_global_peers1([_ | Rest]) -> | |
306 | find_blocked_global_peers1(Rest); | |
307 | find_blocked_global_peers1([]) -> | |
308 | ok. | |
309 | ||
310 | unblock_global_peer(PeerNode) -> | |
311 | ThisNode = node(), | |
312 | PeerState = rpc:call(PeerNode, sys, get_status, [global_name_server]), | |
313 | error_logger:info_msg( | |
314 | "Global hang workaround: global state on ~s seems broken~n" | |
315 | " * Peer global state: ~p~n" | |
316 | " * Local global state: ~p~n" | |
317 | "Faking nodedown/nodeup between ~s and ~s~n", | |
318 | [PeerNode, PeerState, sys:get_status(global_name_server), | |
319 | PeerNode, ThisNode]), | |
320 | {global_name_server, ThisNode} ! {nodedown, PeerNode}, | |
321 | {global_name_server, PeerNode} ! {nodedown, ThisNode}, | |
322 | {global_name_server, ThisNode} ! {nodeup, PeerNode}, | |
323 | {global_name_server, PeerNode} ! {nodeup, ThisNode}, | |
324 | ok. | |
235 | 325 | |
236 | 326 | %%---------------------------------------------------------------------------- |
237 | 327 | %% gen_server callbacks |
288 | 378 | %% 'check_partial_partition' to all the nodes it still thinks are |
289 | 379 | %% alive. If any of those (intermediate) nodes still see the "down" |
290 | 380 | %% node as up, they inform it that this has happened. The original |
291 | %% node (in 'ignore' or 'autoheal' mode) will then disconnect from the | |
292 | %% intermediate node to "upgrade" to a full partition. | |
381 | %% node (in 'ignore', 'pause_if_all_down' or 'autoheal' mode) will then | |
382 | %% disconnect from the intermediate node to "upgrade" to a full | |
383 | %% partition. | |
293 | 384 | %% |
294 | 385 | %% In pause_minority mode it will instead immediately pause until all |
295 | 386 | %% nodes come back. This is because the contract for pause_minority is |
354 | 445 | ArgsBase), |
355 | 446 | await_cluster_recovery(fun all_nodes_up/0), |
356 | 447 | {noreply, State}; |
448 | {ok, {pause_if_all_down, PreferredNodes, _}} -> | |
449 | case in_preferred_partition(PreferredNodes) of | |
450 | true -> rabbit_log:error( | |
451 | FmtBase ++ "We will therefore intentionally " | |
452 | "disconnect from ~s~n", ArgsBase ++ [Proxy]), | |
453 | upgrade_to_full_partition(Proxy); | |
454 | false -> rabbit_log:info( | |
455 | FmtBase ++ "We are about to pause, no need " | |
456 | "for further actions~n", ArgsBase) | |
457 | end, | |
458 | {noreply, State}; | |
357 | 459 | {ok, _} -> |
358 | 460 | rabbit_log:error( |
359 | 461 | FmtBase ++ "We will therefore intentionally disconnect from ~s~n", |
360 | 462 | ArgsBase ++ [Proxy]), |
361 | cast(Proxy, {partial_partition_disconnect, node()}), | |
362 | disconnect(Proxy), | |
463 | upgrade_to_full_partition(Proxy), | |
363 | 464 | {noreply, State} |
364 | 465 | end; |
365 | 466 | |
524 | 625 | %% that we can respond in the same way to "rabbitmqctl stop_app" |
525 | 626 | %% and "rabbitmqctl stop" as much as possible. |
526 | 627 | %% |
527 | %% However, for pause_minority mode we can't do this, since we | |
528 | %% depend on looking at whether other nodes are up to decide | |
529 | %% whether to come back up ourselves - if we decide that based on | |
530 | %% the rabbit application we would go down and never come back. | |
628 | %% However, for pause_minority and pause_if_all_down modes we can't do | |
629 | %% this, since we depend on looking at whether other nodes are up | |
630 | %% to decide whether to come back up ourselves - if we decide that | |
631 | %% based on the rabbit application we would go down and never come | |
632 | %% back. | |
531 | 633 | case application:get_env(rabbit, cluster_partition_handling) of |
532 | 634 | {ok, pause_minority} -> |
533 | case majority() of | |
635 | case majority([Node]) of | |
534 | 636 | true -> ok; |
535 | 637 | false -> await_cluster_recovery(fun majority/0) |
536 | 638 | end, |
537 | 639 | State; |
640 | {ok, {pause_if_all_down, PreferredNodes, HowToRecover}} -> | |
641 | case in_preferred_partition(PreferredNodes, [Node]) of | |
642 | true -> ok; | |
643 | false -> await_cluster_recovery( | |
644 | fun in_preferred_partition/0) | |
645 | end, | |
646 | case HowToRecover of | |
647 | autoheal -> State#state{autoheal = | |
648 | rabbit_autoheal:node_down(Node, Autoheal)}; | |
649 | _ -> State | |
650 | end; | |
538 | 651 | {ok, ignore} -> |
539 | 652 | State; |
540 | 653 | {ok, autoheal} -> |
546 | 659 | end. |
547 | 660 | |
548 | 661 | await_cluster_recovery(Condition) -> |
549 | rabbit_log:warning("Cluster minority status detected - awaiting recovery~n", | |
550 | []), | |
662 | rabbit_log:warning("Cluster minority/secondary status detected - " | |
663 | "awaiting recovery~n", []), | |
551 | 664 | run_outside_applications(fun () -> |
552 | 665 | rabbit:stop(), |
553 | 666 | wait_for_cluster_recovery(Condition) |
554 | end), | |
667 | end, false), | |
555 | 668 | ok. |
556 | 669 | |
557 | run_outside_applications(Fun) -> | |
670 | run_outside_applications(Fun, WaitForExistingProcess) -> | |
558 | 671 | spawn(fun () -> |
559 | 672 | %% If our group leader is inside an application we are about |
560 | 673 | %% to stop, application:stop/1 does not return. |
561 | 674 | group_leader(whereis(init), self()), |
562 | %% Ensure only one such process at a time, the | |
563 | %% exit(badarg) is harmless if one is already running | |
564 | try register(rabbit_outside_app_process, self()) of | |
565 | true -> | |
566 | try | |
567 | Fun() | |
568 | catch _:E -> | |
569 | rabbit_log:error( | |
570 | "rabbit_outside_app_process:~n~p~n~p~n", | |
571 | [E, erlang:get_stacktrace()]) | |
572 | end | |
573 | catch error:badarg -> | |
574 | ok | |
575 | end | |
675 | register_outside_app_process(Fun, WaitForExistingProcess) | |
576 | 676 | end). |
677 | ||
678 | register_outside_app_process(Fun, WaitForExistingProcess) -> | |
679 | %% Ensure only one such process at a time, the exit(badarg) is | |
680 | %% harmless if one is already running. | |
681 | %% | |
682 | %% If WaitForExistingProcess is false, the given fun is simply not | |
683 | %% executed at all and the process exits. | |
684 | %% | |
685 | %% If WaitForExistingProcess is true, we wait for the end of the | |
686 | %% currently running process before executing the given function. | |
687 | try register(rabbit_outside_app_process, self()) of | |
688 | true -> | |
689 | do_run_outside_app_fun(Fun) | |
690 | catch | |
691 | error:badarg when WaitForExistingProcess -> | |
692 | MRef = erlang:monitor(process, rabbit_outside_app_process), | |
693 | receive | |
694 | {'DOWN', MRef, _, _, _} -> | |
695 | %% The existing process exited, let's try to | |
696 | %% register again. | |
697 | register_outside_app_process(Fun, WaitForExistingProcess) | |
698 | end; | |
699 | error:badarg -> | |
700 | ok | |
701 | end. | |
702 | ||
703 | do_run_outside_app_fun(Fun) -> | |
704 | try | |
705 | Fun() | |
706 | catch _:E -> | |
707 | rabbit_log:error( | |
708 | "rabbit_outside_app_process:~n~p~n~p~n", | |
709 | [E, erlang:get_stacktrace()]) | |
710 | end. | |
577 | 711 | |
578 | 712 | wait_for_cluster_recovery(Condition) -> |
579 | 713 | ping_all(), |
597 | 731 | %% that we do not attempt to deal with individual (other) partitions |
598 | 732 | %% going away. It's only safe to forget anything about partitions when |
599 | 733 | %% there are no partitions. |
600 | Partitions1 = case Partitions -- (Partitions -- alive_rabbit_nodes()) of | |
734 | Down = Partitions -- alive_rabbit_nodes(), | |
735 | NoLongerPartitioned = rabbit_mnesia:cluster_nodes(running), | |
736 | Partitions1 = case Partitions -- Down -- NoLongerPartitioned of | |
601 | 737 | [] -> []; |
602 | 738 | _ -> Partitions |
603 | 739 | end, |
657 | 793 | del_node(Node, Nodes) -> Nodes -- [Node]. |
658 | 794 | |
659 | 795 | cast(Node, Msg) -> gen_server:cast({?SERVER, Node}, Msg). |
796 | ||
797 | upgrade_to_full_partition(Proxy) -> | |
798 | cast(Proxy, {partial_partition_disconnect, node()}), | |
799 | disconnect(Proxy). | |
660 | 800 | |
661 | 801 | %% When we call this, it's because we want to force Mnesia to detect a |
662 | 802 | %% partition. But if we just disconnect_node/1 then Mnesia won't |
680 | 820 | %% here. "rabbit" in a function's name implies we test if the rabbit |
681 | 821 | %% application is up, not just the node. |
682 | 822 | |
683 | %% As we use these functions to decide what to do in pause_minority | |
684 | %% state, they *must* be fast, even in the case where TCP connections | |
685 | %% are timing out. So that means we should be careful about whether we | |
686 | %% connect to nodes which are currently disconnected. | |
823 | %% As we use these functions to decide what to do in pause_minority or | |
824 | %% pause_if_all_down states, they *must* be fast, even in the case where | |
825 | %% TCP connections are timing out. So that means we should be careful | |
826 | %% about whether we connect to nodes which are currently disconnected. | |
687 | 827 | |
688 | 828 | majority() -> |
829 | majority([]). | |
830 | ||
831 | majority(NodesDown) -> | |
689 | 832 | Nodes = rabbit_mnesia:cluster_nodes(all), |
690 | length(alive_nodes(Nodes)) / length(Nodes) > 0.5. | |
833 | AliveNodes = alive_nodes(Nodes) -- NodesDown, | |
834 | length(AliveNodes) / length(Nodes) > 0.5. | |
835 | ||
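The strict-majority test above counts the cluster nodes that are alive and not in the assumed-down list, and requires them to exceed half of the whole cluster. A small illustrative Python sketch of the same arithmetic (the function name and inputs are mine, not part of RabbitMQ's API):

```python
def majority(all_nodes, alive_nodes, nodes_down=()):
    """Strict majority: more than half of the whole cluster must be
    alive and not assumed down (mirrors majority/1 and majority/2)."""
    alive = [n for n in alive_nodes if n not in nodes_down]
    return len(alive) / len(all_nodes) > 0.5
```

Note that exactly half is not a majority, matching the `> 0.5` comparison in the Erlang code.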
836 | in_preferred_partition() -> | |
837 | {ok, {pause_if_all_down, PreferredNodes, _}} = | |
838 | application:get_env(rabbit, cluster_partition_handling), | |
839 | in_preferred_partition(PreferredNodes). | |
840 | ||
841 | in_preferred_partition(PreferredNodes) -> | |
842 | in_preferred_partition(PreferredNodes, []). | |
843 | ||
844 | in_preferred_partition(PreferredNodes, NodesDown) -> | |
845 | Nodes = rabbit_mnesia:cluster_nodes(all), | |
846 | RealPreferredNodes = [N || N <- PreferredNodes, lists:member(N, Nodes)], | |
847 | AliveNodes = alive_nodes(RealPreferredNodes) -- NodesDown, | |
848 | RealPreferredNodes =:= [] orelse AliveNodes =/= []. | |
691 | 849 | |
692 | 850 | all_nodes_up() -> |
693 | 851 | Nodes = rabbit_mnesia:cluster_nodes(all), |
17 | 17 | |
18 | 18 | -export([names/1, diagnostics/1, make/1, parts/1, cookie_hash/0, |
19 | 19 | is_running/2, is_process_running/2, |
20 | cluster_name/0, set_cluster_name/1]). | |
20 | cluster_name/0, set_cluster_name/1, ensure_epmd/0]). | |
21 | 21 | |
22 | 22 | -include_lib("kernel/include/inet.hrl"). |
23 | 23 | |
40 | 40 | -spec(is_process_running/2 :: (node(), atom()) -> boolean()). |
41 | 41 | -spec(cluster_name/0 :: () -> binary()). |
42 | 42 | -spec(set_cluster_name/1 :: (binary()) -> 'ok'). |
43 | -spec(ensure_epmd/0 :: () -> 'ok'). | |
43 | 44 | |
44 | 45 | -endif. |
45 | 46 | |
196 | 197 | |
197 | 198 | set_cluster_name(Name) -> |
198 | 199 | rabbit_runtime_parameters:set_global(cluster_name, Name). |
200 | ||
201 | ensure_epmd() -> | |
202 | {ok, Prog} = init:get_argument(progname), | |
203 | ID = random:uniform(1000000000), | |
204 | Port = open_port( | |
205 | {spawn_executable, os:find_executable(Prog)}, | |
206 | [{args, ["-sname", rabbit_misc:format("epmd-starter-~b", [ID]), | |
207 | "-noshell", "-eval", "halt()."]}, | |
208 | exit_status, stderr_to_stdout, use_stdio]), | |
209 | port_shutdown_loop(Port). | |
210 | ||
211 | port_shutdown_loop(Port) -> | |
212 | receive | |
213 | {Port, {exit_status, _Rc}} -> ok; | |
214 | {Port, _} -> port_shutdown_loop(Port) | |
215 | end. |
21 | 21 | |
22 | 22 | -include("rabbit.hrl"). |
23 | 23 | |
24 | -define(DIST_PORT_NOT_CONFIGURED, 0). | |
24 | -define(SET_DIST_PORT, 0). | |
25 | 25 | -define(ERROR_CODE, 1). |
26 | -define(DIST_PORT_CONFIGURED, 2). | |
26 | -define(DO_NOT_SET_DIST_PORT, 2). | |
27 | 27 | |
28 | 28 | %%---------------------------------------------------------------------------- |
29 | 29 | %% Specs |
45 | 45 | {NodeName, NodeHost} = rabbit_nodes:parts(Node), |
46 | 46 | ok = duplicate_node_check(NodeName, NodeHost), |
47 | 47 | ok = dist_port_set_check(), |
48 | ok = dist_port_range_check(), | |
48 | 49 | ok = dist_port_use_check(NodeHost); |
49 | 50 | [] -> |
50 | 51 | %% Ignore running node while installing windows service |
51 | 52 | ok = dist_port_set_check(), |
52 | 53 | ok |
53 | 54 | end, |
54 | rabbit_misc:quit(?DIST_PORT_NOT_CONFIGURED), | |
55 | rabbit_misc:quit(?SET_DIST_PORT), | |
55 | 56 | ok. |
56 | 57 | |
57 | 58 | stop() -> |
87 | 88 | case {pget(inet_dist_listen_min, Kernel, none), |
88 | 89 | pget(inet_dist_listen_max, Kernel, none)} of |
89 | 90 | {none, none} -> ok; |
90 | _ -> rabbit_misc:quit(?DIST_PORT_CONFIGURED) | |
91 | _ -> rabbit_misc:quit(?DO_NOT_SET_DIST_PORT) | |
91 | 92 | end; |
92 | 93 | {ok, _} -> |
93 | 94 | ok; |
94 | 95 | {error, _} -> |
95 | 96 | ok |
96 | 97 | end |
98 | end. | |
99 | ||
100 | dist_port_range_check() -> | |
101 | case os:getenv("RABBITMQ_DIST_PORT") of | |
102 | false -> ok; | |
103 | PortStr -> case catch list_to_integer(PortStr) of | |
104 | Port when is_integer(Port) andalso Port > 65535 -> | |
105 | rabbit_misc:quit(?DO_NOT_SET_DIST_PORT); | |
106 | _ -> | |
107 | ok | |
108 | end | |
97 | 109 | end. |
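`dist_port_range_check/0` above only rejects a `RABBITMQ_DIST_PORT` value that parses to an integer above 65535; unparsable values fall through to `ok`. A hedged Python sketch of that rule (illustrative name, not RabbitMQ code):

```python
def dist_port_range_ok(port_str):
    """True unless port_str parses to an integer above 65535
    (mirrors dist_port_range_check/0; non-integers pass through)."""
    try:
        return int(port_str) <= 65535
    except ValueError:
        return True
```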
98 | 110 | |
99 | 111 | dist_port_use_check(NodeHost) -> |
0 | %% The contents of this file are subject to the Mozilla Public License | |
1 | %% Version 1.1 (the "License"); you may not use this file except in | |
2 | %% compliance with the License. You may obtain a copy of the License | |
3 | %% at http://www.mozilla.org/MPL/ | |
4 | %% | |
5 | %% Software distributed under the License is distributed on an "AS IS" | |
6 | %% basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See | |
7 | %% the License for the specific language governing rights and | |
8 | %% limitations under the License. | |
9 | %% | |
10 | %% The Original Code is RabbitMQ. | |
11 | %% | |
12 | %% The Initial Developer of the Original Code is GoPivotal, Inc. | |
13 | %% Copyright (c) 2014 GoPivotal, Inc. All rights reserved. | |
14 | %% | |
15 | ||
16 | -module(rabbit_priority_queue). | |
17 | ||
18 | -include_lib("rabbit.hrl"). | |
19 | -include_lib("rabbit_framing.hrl"). | |
20 | -behaviour(rabbit_backing_queue). | |
21 | ||
22 | %% enabled unconditionally. Disabling priority queueing after | |
23 | %% it has been enabled is dangerous. | |
24 | -rabbit_boot_step({?MODULE, | |
25 | [{description, "enable priority queue"}, | |
26 | {mfa, {?MODULE, enable, []}}, | |
27 | {requires, pre_boot}, | |
28 | {enables, kernel_ready}]}). | |
29 | ||
30 | -export([enable/0]). | |
31 | ||
32 | -export([start/1, stop/0]). | |
33 | ||
34 | -export([init/3, terminate/2, delete_and_terminate/2, delete_crashed/1, | |
35 | purge/1, purge_acks/1, | |
36 | publish/6, publish_delivered/5, discard/4, drain_confirmed/1, | |
37 | dropwhile/2, fetchwhile/4, fetch/2, drop/2, ack/2, requeue/2, | |
38 | ackfold/4, fold/3, len/1, is_empty/1, depth/1, | |
39 | set_ram_duration_target/2, ram_duration/1, needs_timeout/1, timeout/1, | |
40 | handle_pre_hibernate/1, resume/1, msg_rates/1, | |
41 | info/2, invoke/3, is_duplicate/2]). | |
42 | ||
43 | -record(state, {bq, bqss}). | |
44 | -record(passthrough, {bq, bqs}). | |
45 | ||
46 | %% See 'note on suffixes' below | |
47 | -define(passthrough1(F), State#passthrough{bqs = BQ:F}). | |
48 | -define(passthrough2(F), | |
49 | {Res, BQS1} = BQ:F, {Res, State#passthrough{bqs = BQS1}}). | |
50 | -define(passthrough3(F), | |
51 | {Res1, Res2, BQS1} = BQ:F, {Res1, Res2, State#passthrough{bqs = BQS1}}). | |
52 | ||
53 | %% This module adds support for priority queues. | 
54 | %% | |
55 | %% Priority queues have one backing queue per priority. Backing queue functions | |
56 | %% then produce a list of results for each BQ and fold over them, sorting | |
57 | %% by priority. | |
58 | %% | |
59 | %% For queues that do not | 
60 | %% have priorities enabled, the functions in this module delegate to | |
61 | %% their "regular" backing queue module counterparts. See the `passthrough` | |
62 | %% record and passthrough{1,2,3} macros. | |
63 | %% | |
64 | %% Delivery to consumers happens by first "running" the queue with | |
65 | %% the highest priority until there are no more messages to deliver, | |
66 | %% then the next one, and so on. This offers good prioritisation | |
67 | %% but may result in lower priority messages not being delivered | |
68 | %% when there's a high ingress rate of messages with higher priority. | |
69 | ||
70 | enable() -> | |
71 | {ok, RealBQ} = application:get_env(rabbit, backing_queue_module), | |
72 | case RealBQ of | |
73 | ?MODULE -> ok; | |
74 | _ -> rabbit_log:info("Priority queues enabled, real BQ is ~s~n", | |
75 | [RealBQ]), | |
76 | application:set_env( | |
77 | rabbitmq_priority_queue, backing_queue_module, RealBQ), | |
78 | application:set_env(rabbit, backing_queue_module, ?MODULE) | |
79 | end. | |
80 | ||
81 | %%---------------------------------------------------------------------------- | |
82 | ||
83 | start(QNames) -> | |
84 | BQ = bq(), | |
85 | %% TODO this expand-collapse dance is a bit ridiculous but it's what | |
86 | %% rabbit_amqqueue:recover/0 expects. We could probably simplify | |
87 | %% this if we rejigged recovery a bit. | |
88 | {DupNames, ExpNames} = expand_queues(QNames), | |
89 | case BQ:start(ExpNames) of | |
90 | {ok, ExpRecovery} -> | |
91 | {ok, collapse_recovery(QNames, DupNames, ExpRecovery)}; | |
92 | Else -> | |
93 | Else | |
94 | end. | |
95 | ||
96 | stop() -> | |
97 | BQ = bq(), | |
98 | BQ:stop(). | |
99 | ||
100 | %%---------------------------------------------------------------------------- | |
101 | ||
102 | mutate_name(P, Q = #amqqueue{name = QName = #resource{name = QNameBin}}) -> | |
103 | Q#amqqueue{name = QName#resource{name = mutate_name_bin(P, QNameBin)}}. | |
104 | ||
105 | mutate_name_bin(P, NameBin) -> <<NameBin/binary, 0, P:8>>. | |
106 | ||
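`mutate_name_bin/2` derives the internal per-priority queue name by appending a NUL separator byte and a single priority byte to the original name binary. A Python equivalent of the `<<NameBin/binary, 0, P:8>>` construction (sketch only):

```python
def mutate_name_bin(priority: int, name: bytes) -> bytes:
    # <<NameBin/binary, 0, P:8>>: NUL separator plus one priority byte.
    return name + bytes([0, priority])
```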
107 | expand_queues(QNames) -> | |
108 | lists:unzip( | |
109 | lists:append([expand_queue(QName) || QName <- QNames])). | |
110 | ||
111 | expand_queue(QName = #resource{name = QNameBin}) -> | |
112 | {ok, Q} = rabbit_misc:dirty_read({rabbit_durable_queue, QName}), | |
113 | case priorities(Q) of | |
114 | none -> [{QName, QName}]; | |
115 | Ps -> [{QName, QName#resource{name = mutate_name_bin(P, QNameBin)}} | |
116 | || P <- Ps] | |
117 | end. | |
118 | ||
119 | collapse_recovery(QNames, DupNames, Recovery) -> | |
120 | NameToTerms = lists:foldl(fun({Name, RecTerm}, Dict) -> | |
121 | dict:append(Name, RecTerm, Dict) | |
122 | end, dict:new(), lists:zip(DupNames, Recovery)), | |
123 | [dict:fetch(Name, NameToTerms) || Name <- QNames]. | |
124 | ||
125 | priorities(#amqqueue{arguments = Args}) -> | |
126 | Ints = [long, short, signedint, byte], | |
127 | case rabbit_misc:table_lookup(Args, <<"x-max-priority">>) of | |
128 | {Type, Max} -> case lists:member(Type, Ints) of | |
129 | false -> none; | |
130 | true -> lists:reverse(lists:seq(0, Max)) | |
131 | end; | |
132 | _ -> none | |
133 | end. | |
134 | ||
135 | %%---------------------------------------------------------------------------- | |
136 | ||
137 | init(Q, Recover, AsyncCallback) -> | |
138 | BQ = bq(), | |
139 | case priorities(Q) of | |
140 | none -> RealRecover = case Recover of | |
141 | [R] -> R; %% [0] | |
142 | R -> R | |
143 | end, | |
144 | #passthrough{bq = BQ, | |
145 | bqs = BQ:init(Q, RealRecover, AsyncCallback)}; | |
146 | Ps -> Init = fun (P, Term) -> | |
147 | BQ:init( | |
148 | mutate_name(P, Q), Term, | |
149 | fun (M, F) -> AsyncCallback(M, {P, F}) end) | |
150 | end, | |
151 | BQSs = case have_recovery_terms(Recover) of | |
152 | false -> [{P, Init(P, Recover)} || P <- Ps]; | |
153 | _ -> PsTerms = lists:zip(Ps, Recover), | |
154 | [{P, Init(P, Term)} || {P, Term} <- PsTerms] | |
155 | end, | |
156 | #state{bq = BQ, | |
157 | bqss = BQSs} | |
158 | end. | |
159 | %% [0] collapse_recovery has the effect of making a list of recovery | |
160 | %% terms in priority order, even for non priority queues. It's easier | |
161 | %% to do that and "unwrap" in init/3 than to have collapse_recovery be | |
162 | %% aware of non-priority queues. | |
163 | ||
164 | have_recovery_terms(new) -> false; | |
165 | have_recovery_terms(non_clean_shutdown) -> false; | |
166 | have_recovery_terms(_) -> true. | |
167 | ||
168 | terminate(Reason, State = #state{bq = BQ}) -> | |
169 | foreach1(fun (_P, BQSN) -> BQ:terminate(Reason, BQSN) end, State); | |
170 | terminate(Reason, State = #passthrough{bq = BQ, bqs = BQS}) -> | |
171 | ?passthrough1(terminate(Reason, BQS)). | |
172 | ||
173 | delete_and_terminate(Reason, State = #state{bq = BQ}) -> | |
174 | foreach1(fun (_P, BQSN) -> | |
175 | BQ:delete_and_terminate(Reason, BQSN) | |
176 | end, State); | |
177 | delete_and_terminate(Reason, State = #passthrough{bq = BQ, bqs = BQS}) -> | |
178 | ?passthrough1(delete_and_terminate(Reason, BQS)). | |
179 | ||
180 | delete_crashed(Q) -> | |
181 | BQ = bq(), | |
182 | case priorities(Q) of | |
183 | none -> BQ:delete_crashed(Q); | |
184 | Ps -> [BQ:delete_crashed(mutate_name(P, Q)) || P <- Ps] | |
185 | end. | |
186 | ||
187 | purge(State = #state{bq = BQ}) -> | |
188 | fold_add2(fun (_P, BQSN) -> BQ:purge(BQSN) end, State); | |
189 | purge(State = #passthrough{bq = BQ, bqs = BQS}) -> | |
190 | ?passthrough2(purge(BQS)). | |
191 | ||
192 | purge_acks(State = #state{bq = BQ}) -> | |
193 | foreach1(fun (_P, BQSN) -> BQ:purge_acks(BQSN) end, State); | |
194 | purge_acks(State = #passthrough{bq = BQ, bqs = BQS}) -> | |
195 | ?passthrough1(purge_acks(BQS)). | |
196 | ||
197 | publish(Msg, MsgProps, IsDelivered, ChPid, Flow, State = #state{bq = BQ}) -> | |
198 | pick1(fun (_P, BQSN) -> | |
199 | BQ:publish(Msg, MsgProps, IsDelivered, ChPid, Flow, BQSN) | |
200 | end, Msg, State); | |
201 | publish(Msg, MsgProps, IsDelivered, ChPid, Flow, | |
202 | State = #passthrough{bq = BQ, bqs = BQS}) -> | |
203 | ?passthrough1(publish(Msg, MsgProps, IsDelivered, ChPid, Flow, BQS)). | |
204 | ||
205 | publish_delivered(Msg, MsgProps, ChPid, Flow, State = #state{bq = BQ}) -> | |
206 | pick2(fun (P, BQSN) -> | |
207 | {AckTag, BQSN1} = BQ:publish_delivered( | |
208 | Msg, MsgProps, ChPid, Flow, BQSN), | |
209 | {{P, AckTag}, BQSN1} | |
210 | end, Msg, State); | |
211 | publish_delivered(Msg, MsgProps, ChPid, Flow, | |
212 | State = #passthrough{bq = BQ, bqs = BQS}) -> | |
213 | ?passthrough2(publish_delivered(Msg, MsgProps, ChPid, Flow, BQS)). | |
214 | ||
215 | %% TODO this is a hack. The BQ api does not give us enough information | |
216 | %% here - if we had the Msg we could look at its priority and forward | |
217 | %% to the appropriate sub-BQ. But we don't so we are stuck. | |
218 | %% | |
219 | %% But fortunately VQ ignores discard/4, so we can too, *assuming we | |
220 | %% are talking to VQ*. discard/4 is used by HA, but that's "above" us | |
221 | %% (if in use) so we don't break that either, just some hypothetical | |
222 | %% alternate BQ implementation. | |
223 | discard(_MsgId, _ChPid, _Flow, State = #state{}) -> | |
224 | State; | |
225 | %% We should have something a bit like this here: | |
226 | %% pick1(fun (_P, BQSN) -> | |
227 | %% BQ:discard(MsgId, ChPid, Flow, BQSN) | |
228 | %% end, Msg, State); | |
229 | discard(MsgId, ChPid, Flow, State = #passthrough{bq = BQ, bqs = BQS}) -> | |
230 | ?passthrough1(discard(MsgId, ChPid, Flow, BQS)). | |
231 | ||
232 | drain_confirmed(State = #state{bq = BQ}) -> | |
233 | fold_append2(fun (_P, BQSN) -> BQ:drain_confirmed(BQSN) end, State); | |
234 | drain_confirmed(State = #passthrough{bq = BQ, bqs = BQS}) -> | |
235 | ?passthrough2(drain_confirmed(BQS)). | |
236 | ||
237 | dropwhile(Pred, State = #state{bq = BQ}) -> | |
238 | find2(fun (_P, BQSN) -> BQ:dropwhile(Pred, BQSN) end, undefined, State); | |
239 | dropwhile(Pred, State = #passthrough{bq = BQ, bqs = BQS}) -> | |
240 | ?passthrough2(dropwhile(Pred, BQS)). | |
241 | ||
242 | %% TODO this is a bit nasty. In the one place where fetchwhile/4 is | |
243 | %% actually used the accumulator is a list of acktags, which of course | |
244 | %% we need to mutate - so we do that although we are encoding an | |
245 | %% assumption here. | |
246 | fetchwhile(Pred, Fun, Acc, State = #state{bq = BQ}) -> | |
247 | findfold3( | |
248 | fun (P, BQSN, AccN) -> | |
249 | {Res, AccN1, BQSN1} = BQ:fetchwhile(Pred, Fun, AccN, BQSN), | |
250 | {Res, priority_on_acktags(P, AccN1), BQSN1} | |
251 | end, Acc, undefined, State); | |
252 | fetchwhile(Pred, Fun, Acc, State = #passthrough{bq = BQ, bqs = BQS}) -> | |
253 | ?passthrough3(fetchwhile(Pred, Fun, Acc, BQS)). | |
254 | ||
255 | fetch(AckRequired, State = #state{bq = BQ}) -> | |
256 | find2( | |
257 | fun (P, BQSN) -> | |
258 | case BQ:fetch(AckRequired, BQSN) of | |
259 | {empty, BQSN1} -> {empty, BQSN1}; | |
260 | {{Msg, Del, ATag}, BQSN1} -> {{Msg, Del, {P, ATag}}, BQSN1} | |
261 | end | |
262 | end, empty, State); | |
263 | fetch(AckRequired, State = #passthrough{bq = BQ, bqs = BQS}) -> | |
264 | ?passthrough2(fetch(AckRequired, BQS)). | |
265 | ||
266 | drop(AckRequired, State = #state{bq = BQ}) -> | |
267 | find2(fun (P, BQSN) -> | |
268 | case BQ:drop(AckRequired, BQSN) of | |
269 | {empty, BQSN1} -> {empty, BQSN1}; | |
270 | {{MsgId, AckTag}, BQSN1} -> {{MsgId, {P, AckTag}}, BQSN1} | |
271 | end | |
272 | end, empty, State); | |
273 | drop(AckRequired, State = #passthrough{bq = BQ, bqs = BQS}) -> | |
274 | ?passthrough2(drop(AckRequired, BQS)). | |
275 | ||
276 | ack(AckTags, State = #state{bq = BQ}) -> | |
277 | fold_by_acktags2(fun (AckTagsN, BQSN) -> | |
278 | BQ:ack(AckTagsN, BQSN) | |
279 | end, AckTags, State); | |
280 | ack(AckTags, State = #passthrough{bq = BQ, bqs = BQS}) -> | |
281 | ?passthrough2(ack(AckTags, BQS)). | |
282 | ||
283 | requeue(AckTags, State = #state{bq = BQ}) -> | |
284 | fold_by_acktags2(fun (AckTagsN, BQSN) -> | |
285 | BQ:requeue(AckTagsN, BQSN) | |
286 | end, AckTags, State); | |
287 | requeue(AckTags, State = #passthrough{bq = BQ, bqs = BQS}) -> | |
288 | ?passthrough2(requeue(AckTags, BQS)). | |
289 | ||
290 | %% Similar problem to fetchwhile/4 | |
291 | ackfold(MsgFun, Acc, State = #state{bq = BQ}, AckTags) -> | |
292 | AckTagsByPriority = partition_acktags(AckTags), | |
293 | fold2( | |
294 | fun (P, BQSN, AccN) -> | |
295 | case orddict:find(P, AckTagsByPriority) of | |
296 | {ok, ATagsN} -> {AccN1, BQSN1} = | |
297 | BQ:ackfold(MsgFun, AccN, BQSN, ATagsN), | |
298 | {priority_on_acktags(P, AccN1), BQSN1}; | |
299 | error -> {AccN, BQSN} | |
300 | end | |
301 | end, Acc, State); | |
302 | ackfold(MsgFun, Acc, State = #passthrough{bq = BQ, bqs = BQS}, AckTags) -> | |
303 | ?passthrough2(ackfold(MsgFun, Acc, BQS, AckTags)). | |
304 | ||
305 | fold(Fun, Acc, State = #state{bq = BQ}) -> | |
306 | fold2(fun (_P, BQSN, AccN) -> BQ:fold(Fun, AccN, BQSN) end, Acc, State); | |
307 | fold(Fun, Acc, State = #passthrough{bq = BQ, bqs = BQS}) -> | |
308 | ?passthrough2(fold(Fun, Acc, BQS)). | |
309 | ||
310 | len(#state{bq = BQ, bqss = BQSs}) -> | |
311 | add0(fun (_P, BQSN) -> BQ:len(BQSN) end, BQSs); | |
312 | len(#passthrough{bq = BQ, bqs = BQS}) -> | |
313 | BQ:len(BQS). | |
314 | ||
315 | is_empty(#state{bq = BQ, bqss = BQSs}) -> | |
316 | all0(fun (_P, BQSN) -> BQ:is_empty(BQSN) end, BQSs); | |
317 | is_empty(#passthrough{bq = BQ, bqs = BQS}) -> | |
318 | BQ:is_empty(BQS). | |
319 | ||
320 | depth(#state{bq = BQ, bqss = BQSs}) -> | |
321 | add0(fun (_P, BQSN) -> BQ:depth(BQSN) end, BQSs); | |
322 | depth(#passthrough{bq = BQ, bqs = BQS}) -> | |
323 | BQ:depth(BQS). | |
324 | ||
325 | set_ram_duration_target(DurationTarget, State = #state{bq = BQ}) -> | |
326 | foreach1(fun (_P, BQSN) -> | |
327 | BQ:set_ram_duration_target(DurationTarget, BQSN) | |
328 | end, State); | |
329 | set_ram_duration_target(DurationTarget, | |
330 | State = #passthrough{bq = BQ, bqs = BQS}) -> | |
331 | ?passthrough1(set_ram_duration_target(DurationTarget, BQS)). | |
332 | ||
333 | ram_duration(State = #state{bq = BQ}) -> | |
334 | fold_min2(fun (_P, BQSN) -> BQ:ram_duration(BQSN) end, State); | |
335 | ram_duration(State = #passthrough{bq = BQ, bqs = BQS}) -> | |
336 | ?passthrough2(ram_duration(BQS)). | |
337 | ||
338 | needs_timeout(#state{bq = BQ, bqss = BQSs}) -> | |
339 | fold0(fun (_P, _BQSN, timed) -> timed; | |
340 | (_P, BQSN, idle) -> case BQ:needs_timeout(BQSN) of | |
341 | timed -> timed; | |
342 | _ -> idle | |
343 | end; | |
344 | (_P, BQSN, false) -> BQ:needs_timeout(BQSN) | |
345 | end, false, BQSs); | |
346 | needs_timeout(#passthrough{bq = BQ, bqs = BQS}) -> | |
347 | BQ:needs_timeout(BQS). | |
348 | ||
349 | timeout(State = #state{bq = BQ}) -> | |
350 | foreach1(fun (_P, BQSN) -> BQ:timeout(BQSN) end, State); | |
351 | timeout(State = #passthrough{bq = BQ, bqs = BQS}) -> | |
352 | ?passthrough1(timeout(BQS)). | |
353 | ||
354 | handle_pre_hibernate(State = #state{bq = BQ}) -> | |
355 | foreach1(fun (_P, BQSN) -> | |
356 | BQ:handle_pre_hibernate(BQSN) | |
357 | end, State); | |
358 | handle_pre_hibernate(State = #passthrough{bq = BQ, bqs = BQS}) -> | |
359 | ?passthrough1(handle_pre_hibernate(BQS)). | |
360 | ||
361 | resume(State = #state{bq = BQ}) -> | |
362 | foreach1(fun (_P, BQSN) -> BQ:resume(BQSN) end, State); | |
363 | resume(State = #passthrough{bq = BQ, bqs = BQS}) -> | |
364 | ?passthrough1(resume(BQS)). | |
365 | ||
366 | msg_rates(#state{bq = BQ, bqss = BQSs}) -> | |
367 | fold0(fun(_P, BQSN, {InN, OutN}) -> | |
368 | {In, Out} = BQ:msg_rates(BQSN), | |
369 | {InN + In, OutN + Out} | |
370 | end, {0.0, 0.0}, BQSs); | |
371 | msg_rates(#passthrough{bq = BQ, bqs = BQS}) -> | |
372 | BQ:msg_rates(BQS). | |
373 | ||
374 | info(backing_queue_status, #state{bq = BQ, bqss = BQSs}) -> | |
375 | fold0(fun (P, BQSN, Acc) -> | |
376 | combine_status(P, BQ:info(backing_queue_status, BQSN), Acc) | |
377 | end, nothing, BQSs); | |
378 | info(Item, #state{bq = BQ, bqss = BQSs}) -> | |
379 | fold0(fun (_P, BQSN, Acc) -> | |
380 | Acc + BQ:info(Item, BQSN) | |
381 | end, 0, BQSs); | |
382 | info(Item, #passthrough{bq = BQ, bqs = BQS}) -> | |
383 | BQ:info(Item, BQS). | |
384 | ||
385 | invoke(Mod, {P, Fun}, State = #state{bq = BQ}) -> | |
386 | pick1(fun (_P, BQSN) -> BQ:invoke(Mod, Fun, BQSN) end, P, State); | |
387 | invoke(Mod, Fun, State = #passthrough{bq = BQ, bqs = BQS}) -> | |
388 | ?passthrough1(invoke(Mod, Fun, BQS)). | |
389 | ||
390 | is_duplicate(Msg, State = #state{bq = BQ}) -> | |
391 | pick2(fun (_P, BQSN) -> BQ:is_duplicate(Msg, BQSN) end, Msg, State); | |
392 | is_duplicate(Msg, State = #passthrough{bq = BQ, bqs = BQS}) -> | |
393 | ?passthrough2(is_duplicate(Msg, BQS)). | |
394 | ||
395 | %%---------------------------------------------------------------------------- | |
396 | ||
397 | bq() -> | |
398 | {ok, RealBQ} = application:get_env( | |
399 | rabbitmq_priority_queue, backing_queue_module), | |
400 | RealBQ. | |
401 | ||
402 | %% Note on suffixes: Many utility functions here have suffixes telling | |
403 | %% you the arity of the return type of the BQ function they are | |
404 | %% designed to work with. | |
405 | %% | |
406 | %% 0 - BQ function returns a value and does not modify state | |
407 | %% 1 - BQ function just returns a new state | |
408 | %% 2 - BQ function returns a 2-tuple of {Result, NewState} | |
409 | %% 3 - BQ function returns a 3-tuple of {Result1, Result2, NewState} | |
410 | ||
411 | %% Fold over results | |
412 | fold0(Fun, Acc, [{P, BQSN} | Rest]) -> fold0(Fun, Fun(P, BQSN, Acc), Rest); | |
413 | fold0(_Fun, Acc, []) -> Acc. | |
414 | ||
415 | %% Do all BQs match? | |
416 | all0(Pred, BQSs) -> fold0(fun (_P, _BQSN, false) -> false; | |
417 | (P, BQSN, true) -> Pred(P, BQSN) | |
418 | end, true, BQSs). | |
419 | ||
420 | %% Sum results | |
421 | add0(Fun, BQSs) -> fold0(fun (P, BQSN, Acc) -> Acc + Fun(P, BQSN) end, 0, BQSs). | |
422 | ||
423 | %% Apply for all states | |
424 | foreach1(Fun, State = #state{bqss = BQSs}) -> | |
425 | a(State#state{bqss = foreach1(Fun, BQSs, [])}). | |
426 | foreach1(Fun, [{P, BQSN} | Rest], BQSAcc) -> | |
427 | BQSN1 = Fun(P, BQSN), | |
428 | foreach1(Fun, Rest, [{P, BQSN1} | BQSAcc]); | |
429 | foreach1(_Fun, [], BQSAcc) -> | |
430 | lists:reverse(BQSAcc). | |
431 | ||
432 | %% For a given thing, just go to its BQ | |
433 | pick1(Fun, Prioritisable, #state{bqss = BQSs} = State) -> | |
434 | {P, BQSN} = priority(Prioritisable, BQSs), | |
435 | a(State#state{bqss = bq_store(P, Fun(P, BQSN), BQSs)}). | |
436 | ||
437 | %% Fold over results | |
438 | fold2(Fun, Acc, State = #state{bqss = BQSs}) -> | |
439 | {Res, BQSs1} = fold2(Fun, Acc, BQSs, []), | |
440 | {Res, a(State#state{bqss = BQSs1})}. | |
441 | fold2(Fun, Acc, [{P, BQSN} | Rest], BQSAcc) -> | |
442 | {Acc1, BQSN1} = Fun(P, BQSN, Acc), | |
443 | fold2(Fun, Acc1, Rest, [{P, BQSN1} | BQSAcc]); | |
444 | fold2(_Fun, Acc, [], BQSAcc) -> | |
445 | {Acc, lists:reverse(BQSAcc)}. | |
446 | ||
447 | %% Fold over results assuming results are lists and we want to append them | |
448 | fold_append2(Fun, State) -> | |
449 | fold2(fun (P, BQSN, Acc) -> | |
450 | {Res, BQSN1} = Fun(P, BQSN), | |
451 | {Res ++ Acc, BQSN1} | |
452 | end, [], State). | |
453 | ||
454 | %% Fold over results assuming results are numbers and we want to sum them | |
455 | fold_add2(Fun, State) -> | |
456 | fold2(fun (P, BQSN, Acc) -> | |
457 | {Res, BQSN1} = Fun(P, BQSN), | |
458 | {add_maybe_infinity(Res, Acc), BQSN1} | |
459 | end, 0, State). | |
460 | ||
461 | %% Fold over results assuming results are numbers and we want the minimum | |
462 | fold_min2(Fun, State) -> | |
463 | fold2(fun (P, BQSN, Acc) -> | |
464 | {Res, BQSN1} = Fun(P, BQSN), | |
465 | {erlang:min(Res, Acc), BQSN1} | |
466 | end, infinity, State). | |
467 | ||
468 | %% Fold over results assuming results are lists and we want to append | |
469 | %% them, and also that we have some AckTags we want to pass in to each | |
470 | %% invocation. | |
471 | fold_by_acktags2(Fun, AckTags, State) -> | |
472 | AckTagsByPriority = partition_acktags(AckTags), | |
473 | fold_append2(fun (P, BQSN) -> | |
474 | case orddict:find(P, AckTagsByPriority) of | |
475 | {ok, AckTagsN} -> Fun(AckTagsN, BQSN); | |
476 | error -> {[], BQSN} | |
477 | end | |
478 | end, State). | |
479 | ||
480 | %% For a given thing, just go to its BQ | |
481 | pick2(Fun, Prioritisable, #state{bqss = BQSs} = State) -> | |
482 | {P, BQSN} = priority(Prioritisable, BQSs), | |
483 | {Res, BQSN1} = Fun(P, BQSN), | |
484 | {Res, a(State#state{bqss = bq_store(P, BQSN1, BQSs)})}. | |
485 | ||
486 | %% Run through BQs in priority order until one does not return | |
487 | %% {NotFound, NewState} or we have gone through them all. | |
488 | find2(Fun, NotFound, State = #state{bqss = BQSs}) -> | |
489 | {Res, BQSs1} = find2(Fun, NotFound, BQSs, []), | |
490 | {Res, a(State#state{bqss = BQSs1})}. | |
491 | find2(Fun, NotFound, [{P, BQSN} | Rest], BQSAcc) -> | |
492 | case Fun(P, BQSN) of | |
493 | {NotFound, BQSN1} -> find2(Fun, NotFound, Rest, [{P, BQSN1} | BQSAcc]); | |
494 | {Res, BQSN1} -> {Res, lists:reverse([{P, BQSN1} | BQSAcc]) ++ Rest} | |
495 | end; | |
496 | find2(_Fun, NotFound, [], BQSAcc) -> | |
497 | {NotFound, lists:reverse(BQSAcc)}. | |
498 | ||
499 | %% Run through BQs in priority order like find2 but also folding as we go. | |
500 | findfold3(Fun, Acc, NotFound, State = #state{bqss = BQSs}) -> | |
501 | {Res, Acc1, BQSs1} = findfold3(Fun, Acc, NotFound, BQSs, []), | |
502 | {Res, Acc1, a(State#state{bqss = BQSs1})}. | |
503 | findfold3(Fun, Acc, NotFound, [{P, BQSN} | Rest], BQSAcc) -> | |
504 | case Fun(P, BQSN, Acc) of | |
505 | {NotFound, Acc1, BQSN1} -> | |
506 | findfold3(Fun, Acc1, NotFound, Rest, [{P, BQSN1} | BQSAcc]); | |
507 | {Res, Acc1, BQSN1} -> | |
508 | {Res, Acc1, lists:reverse([{P, BQSN1} | BQSAcc]) ++ Rest} | |
509 | end; | |
510 | findfold3(_Fun, Acc, NotFound, [], BQSAcc) -> | |
511 | {NotFound, Acc, lists:reverse(BQSAcc)}. | |
512 | ||
513 | bq_fetch(P, []) -> exit({not_found, P}); | |
514 | bq_fetch(P, [{P, BQSN} | _]) -> BQSN; | |
515 | bq_fetch(P, [{_, _BQSN} | T]) -> bq_fetch(P, T). | |
516 | ||
517 | bq_store(P, BQS, BQSs) -> | |
518 | [{PN, case PN of | |
519 | P -> BQS; | |
520 | _ -> BQSN | |
521 | end} || {PN, BQSN} <- BQSs]. | |
522 | ||
523 | %% | |
524 | a(State = #state{bqss = BQSs}) -> | |
525 | Ps = [P || {P, _} <- BQSs], | |
526 | case lists:reverse(lists:usort(Ps)) of | |
527 | Ps -> State; | |
528 | _ -> exit({bad_order, Ps}) | |
529 | end. | |
530 | ||
531 | %%---------------------------------------------------------------------------- | |
532 | ||
533 | priority(P, BQSs) when is_integer(P) -> | |
534 | {P, bq_fetch(P, BQSs)}; | |
535 | priority(#basic_message{content = Content}, BQSs) -> | |
536 | priority1(rabbit_binary_parser:ensure_content_decoded(Content), BQSs). | |
537 | ||
538 | priority1(_Content, [{P, BQSN}]) -> | |
539 | {P, BQSN}; | |
540 | priority1(Content = #content{properties = Props}, | |
541 | [{P, BQSN} | Rest]) -> | |
542 | #'P_basic'{priority = Priority0} = Props, | |
543 | Priority = case Priority0 of | |
544 | undefined -> 0; | |
545 | _ when is_integer(Priority0) -> Priority0 | |
546 | end, | |
547 | case Priority >= P of | |
548 | true -> {P, BQSN}; | |
549 | false -> priority1(Content, Rest) | |
550 | end. | |
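`priority1/2` walks the sub-queues from highest to lowest priority and picks the first whose priority is less than or equal to the message's, with the final bucket catching everything else (so out-of-range and undefined priorities are clamped rather than rejected). A Python sketch of the same selection, with illustrative names:

```python
def pick_bucket(msg_priority, bucket_priorities):
    """bucket_priorities is sorted high-to-low, e.g. [3, 2, 1, 0].
    Returns the bucket the message lands in (mirrors priority1/2);
    None stands in for an undefined AMQP priority, which maps to 0."""
    p = 0 if msg_priority is None else msg_priority
    for bp in bucket_priorities[:-1]:
        if p >= bp:
            return bp
    return bucket_priorities[-1]  # last bucket catches everything else
```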
551 | ||
552 | add_maybe_infinity(infinity, _) -> infinity; | |
553 | add_maybe_infinity(_, infinity) -> infinity; | |
554 | add_maybe_infinity(A, B) -> A + B. | |
555 | ||
556 | partition_acktags(AckTags) -> partition_acktags(AckTags, orddict:new()). | |
557 | ||
558 | partition_acktags([], Partitioned) -> | |
559 | orddict:map(fun (_P, RevAckTags) -> | |
560 | lists:reverse(RevAckTags) | |
561 | end, Partitioned); | |
562 | partition_acktags([{P, AckTag} | Rest], Partitioned) -> | |
563 | partition_acktags(Rest, rabbit_misc:orddict_cons(P, AckTag, Partitioned)). | |
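`partition_acktags/1` groups `{Priority, AckTag}` pairs into a per-priority dictionary while preserving each priority's original tag order (the cons-then-reverse idiom above). An equivalent Python sketch, using a plain dict in place of the orddict:

```python
def partition_acktags(ack_tags):
    """ack_tags: list of (priority, tag) pairs. Returns a mapping of
    priority -> tags in their original order."""
    partitioned = {}
    for priority, tag in ack_tags:
        partitioned.setdefault(priority, []).append(tag)
    return partitioned
```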
564 | ||
565 | priority_on_acktags(P, AckTags) -> | |
566 | [case Tag of | |
567 | _ when is_integer(Tag) -> {P, Tag}; | |
568 | _ -> Tag | |
569 | end || Tag <- AckTags]. | |
570 | ||
571 | combine_status(P, New, nothing) -> | |
572 | [{priority_lengths, [{P, proplists:get_value(len, New)}]} | New]; | |
573 | combine_status(P, New, Old) -> | |
574 | Combined = [{K, cse(V, proplists:get_value(K, Old))} || {K, V} <- New], | |
575 | Lens = [{P, proplists:get_value(len, New)} | | |
576 | proplists:get_value(priority_lengths, Old)], | |
577 | [{priority_lengths, Lens} | Combined]. | |
578 | ||
579 | cse(infinity, _) -> infinity; | |
580 | cse(_, infinity) -> infinity; | |
581 | cse(A, B) when is_number(A) -> A + B; | |
582 | cse({delta, _, _, _}, _) -> {delta, todo, todo, todo}; | |
583 | cse(A, B) -> exit({A, B}). |
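`cse/2` merges two values from per-priority `backing_queue_status` proplists: `infinity` dominates and numbers add. A minimal Python sketch of those two clauses (the string `"infinity"` stands in for the Erlang atom; the `delta` clause is omitted):

```python
def cse(a, b):
    """Combine two status entries: infinity dominates, numbers add
    (mirrors the numeric and infinity clauses of cse/2)."""
    if a == "infinity" or b == "infinity":
        return "infinity"
    return a + b
```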
174 | 174 | C = #cr{ch_pid = ChPid, |
175 | 175 | acktags = ChAckTags, |
176 | 176 | blocked_consumers = BlockedQ} -> |
177 | AllConsumers = priority_queue:join(Consumers, BlockedQ), | |
177 | All = priority_queue:join(Consumers, BlockedQ), | |
178 | 178 | ok = erase_ch_record(C), |
179 | Filtered = priority_queue:filter(chan_pred(ChPid, true), All), | |
179 | 180 | {[AckTag || {AckTag, _CTag} <- queue:to_list(ChAckTags)], |
180 | tags(priority_queue:to_list(AllConsumers)), | |
181 | tags(priority_queue:to_list(Filtered)), | |
181 | 182 | State#state{consumers = remove_consumers(ChPid, Consumers)}} |
182 | 183 | end. |
183 | 184 | |
441 | 442 | end, Queue). |
442 | 443 | |
443 | 444 | remove_consumers(ChPid, Queue) -> |
444 | priority_queue:filter(fun ({CP, _Consumer}) when CP =:= ChPid -> false; | |
445 | (_) -> true | |
446 | end, Queue). | |
445 | priority_queue:filter(chan_pred(ChPid, false), Queue). | |
446 | ||
447 | chan_pred(ChPid, Want) -> | |
448 | fun ({CP, _Consumer}) when CP =:= ChPid -> Want; | |
449 | (_) -> not Want | |
450 | end. | |
447 | 451 | |
448 | 452 | update_use({inactive, _, _, _} = CUInfo, inactive) -> |
449 | 453 | CUInfo; |
15 | 15 | |
16 | 16 | -module(rabbit_queue_index). |
17 | 17 | |
18 | -export([erase/1, init/2, recover/5, | |
18 | -export([erase/1, init/3, recover/6, | |
19 | 19 | terminate/2, delete_and_terminate/1, |
20 | publish/5, deliver/2, ack/2, sync/1, needs_sync/1, flush/1, | |
20 | publish/6, deliver/2, ack/2, sync/1, needs_sync/1, flush/1, | |
21 | 21 | read/3, next_segment_boundary/1, bounds/1, start/1, stop/0]). |
22 | 22 | |
23 | -export([add_queue_ttl/0, avoid_zeroes/0, store_msg_size/0]). | |
23 | -export([add_queue_ttl/0, avoid_zeroes/0, store_msg_size/0, store_msg/0]). | |
24 | 24 | |
25 | 25 | -define(CLEAN_FILENAME, "clean.dot"). |
26 | 26 | |
27 | 27 | %%---------------------------------------------------------------------------- |
28 | 28 | |
29 | 29 | %% The queue index is responsible for recording the order of messages |
30 | %% within a queue on disk. | |
30 | %% within a queue on disk. As such it contains records of messages | |
31 | %% being published, delivered and acknowledged. The publish record | |
32 | %% includes the sequence ID, message ID and a small quantity of | |
33 | %% metadata about the message; the delivery and acknowledgement | |
34 | %% records just contain the sequence ID. A publish record may also | |
35 | %% contain the complete message if provided to publish/6; this allows | 
36 | %% the message store to be avoided altogether for small messages. In | |
37 | %% either case the publish record is stored in memory in the same | |
38 | %% serialised format it will take on disk. | |
31 | 39 | %% |
32 | 40 | %% Because the queue can decide at any point to send a queue
33 | 41 | %% entry to disk, you cannot rely on publishes appearing in
35 | 43 | %% then delivered, then ack'd. |
36 | 44 | %% |
37 | 45 | %% In order to be able to clean up ack'd messages, we write to segment |
38 | %% files. These files have a fixed maximum size: ?SEGMENT_ENTRY_COUNT | |
46 | %% files. These files have a fixed number of entries: ?SEGMENT_ENTRY_COUNT | |
39 | 47 | %% publishes, delivers and acknowledgements. They are numbered, and so |
40 | 48 | %% it is known that the 0th segment contains messages 0 -> |
41 | 49 | %% ?SEGMENT_ENTRY_COUNT - 1, the 1st segment contains messages |
84 | 92 | %% and seeding the message store on start up. |
85 | 93 | %% |
86 | 94 | %% Note that in general, the representation of a message's state as |
87 | %% the tuple: {('no_pub'|{MsgId, MsgProps, IsPersistent}), | |
95 | %% the tuple: {('no_pub'|{IsPersistent, Bin, MsgBin}), | |
88 | 96 | %% ('del'|'no_del'), ('ack'|'no_ack')} is richer than strictly |
89 | 97 | %% necessary for most operations. However, for startup, and to ensure |
90 | 98 | %% the safe and correct combination of journal entries with entries |
127 | 135 | -define(REL_SEQ_ONLY_RECORD_BYTES, 2). |
128 | 136 | |
129 | 137 | %% publish record is binary 1 followed by a bit for is_persistent, |
130 | %% then 14 bits of rel seq id, 64 bits for message expiry and 128 bits | |
131 | %% of md5sum msg id | |
138 | %% then 14 bits of rel seq id, 64 bits for message expiry, 32 bits of | |
139 | %% size and then 128 bits of md5sum msg id. | |
132 | 140 | -define(PUB_PREFIX, 1). |
133 | 141 | -define(PUB_PREFIX_BITS, 1). |
134 | 142 | |
139 | 147 | -define(MSG_ID_BYTES, 16). %% md5sum is 128 bit or 16 bytes |
140 | 148 | -define(MSG_ID_BITS, (?MSG_ID_BYTES * 8)). |
141 | 149 | |
150 | %% This is the size of the message body content, for stats | |
142 | 151 | -define(SIZE_BYTES, 4). |
143 | 152 | -define(SIZE_BITS, (?SIZE_BYTES * 8)). |
144 | 153 | |
145 | %% 16 bytes for md5sum + 8 for expiry + 4 for size | |
154 | %% This is the size of the message record embedded in the queue | |
155 | %% index. If 0, the message can be found in the message store. | |
156 | -define(EMBEDDED_SIZE_BYTES, 4). | |
157 | -define(EMBEDDED_SIZE_BITS, (?EMBEDDED_SIZE_BYTES * 8)). | |
158 | ||
159 | %% 16 bytes for md5sum + 8 for expiry | |
146 | 160 | -define(PUB_RECORD_BODY_BYTES, (?MSG_ID_BYTES + ?EXPIRY_BYTES + ?SIZE_BYTES)). |
161 | %% + 4 for size | |
162 | -define(PUB_RECORD_SIZE_BYTES, (?PUB_RECORD_BODY_BYTES + ?EMBEDDED_SIZE_BYTES)). | |
163 | ||
147 | 164 | %% + 2 for seq, bits and prefix |
148 | -define(PUB_RECORD_BYTES, (?PUB_RECORD_BODY_BYTES + 2)). | |
149 | ||
150 | %% 1 publish, 1 deliver, 1 ack per msg | |
151 | -define(SEGMENT_TOTAL_SIZE, ?SEGMENT_ENTRY_COUNT * | |
152 | (?PUB_RECORD_BYTES + (2 * ?REL_SEQ_ONLY_RECORD_BYTES))). | |
165 | -define(PUB_RECORD_PREFIX_BYTES, 2). | |
153 | 166 | |
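The defines above pin down the on-disk publish record layout: a 28-byte body (16-byte md5 message id, 8-byte expiry, 4-byte body size) followed by a new 4-byte embedded-message length, plus a 2-byte prefix/seq header. A sketch checking the same arithmetic:

```python
# Size arithmetic mirroring the -define()s above (all values in bytes).
MSG_ID_BYTES = 16          # md5sum is 128 bits
EXPIRY_BYTES = 8
SIZE_BYTES = 4             # message body size, kept for stats
EMBEDDED_SIZE_BYTES = 4    # embedded message length (0 = in msg store)

PUB_RECORD_BODY_BYTES = MSG_ID_BYTES + EXPIRY_BYTES + SIZE_BYTES
PUB_RECORD_SIZE_BYTES = PUB_RECORD_BODY_BYTES + EMBEDDED_SIZE_BYTES
```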
154 | 167 | %% ---- misc ---- |
155 | 168 | |
156 | -define(PUB, {_, _, _}). %% {MsgId, MsgProps, IsPersistent} | |
169 | -define(PUB, {_, _, _}). %% {IsPersistent, Bin, MsgBin} | |
157 | 170 | |
158 | 171 | -define(READ_MODE, [binary, raw, read]). |
159 | -define(READ_AHEAD_MODE, [{read_ahead, ?SEGMENT_TOTAL_SIZE} | ?READ_MODE]). | |
160 | 172 | -define(WRITE_MODE, [write | ?READ_MODE]). |
161 | 173 | |
162 | 174 | %%---------------------------------------------------------------------------- |
163 | 175 | |
164 | -record(qistate, { dir, segments, journal_handle, dirty_count, | |
165 | max_journal_entries, on_sync, unconfirmed }). | |
166 | ||
167 | -record(segment, { num, path, journal_entries, unacked }). | |
176 | -record(qistate, {dir, segments, journal_handle, dirty_count, | |
177 | max_journal_entries, on_sync, on_sync_msg, | |
178 | unconfirmed, unconfirmed_msg}). | |
179 | ||
180 | -record(segment, {num, path, journal_entries, unacked}). | |
168 | 181 | |
169 | 182 | -include("rabbit.hrl"). |
170 | 183 | |
173 | 186 | -rabbit_upgrade({add_queue_ttl, local, []}). |
174 | 187 | -rabbit_upgrade({avoid_zeroes, local, [add_queue_ttl]}). |
175 | 188 | -rabbit_upgrade({store_msg_size, local, [avoid_zeroes]}). |
189 | -rabbit_upgrade({store_msg, local, [store_msg_size]}). | |
176 | 190 | |
177 | 191 | -ifdef(use_specs). |
178 | 192 | |
192 | 206 | dirty_count :: integer(), |
193 | 207 | max_journal_entries :: non_neg_integer(), |
194 | 208 | on_sync :: on_sync_fun(), |
195 | unconfirmed :: gb_sets:set() | |
209 | on_sync_msg :: on_sync_fun(), | |
210 | unconfirmed :: gb_sets:set(), | |
211 | unconfirmed_msg :: gb_sets:set() | |
196 | 212 | }). |
197 | 213 | -type(contains_predicate() :: fun ((rabbit_types:msg_id()) -> boolean())). |
198 | 214 | -type(walker(A) :: fun ((A) -> 'finished' | |
200 | 216 | -type(shutdown_terms() :: [term()] | 'non_clean_shutdown'). |
201 | 217 | |
202 | 218 | -spec(erase/1 :: (rabbit_amqqueue:name()) -> 'ok'). |
203 | -spec(init/2 :: (rabbit_amqqueue:name(), on_sync_fun()) -> qistate()). | |
204 | -spec(recover/5 :: (rabbit_amqqueue:name(), shutdown_terms(), boolean(), | |
205 | contains_predicate(), on_sync_fun()) -> | |
219 | -spec(init/3 :: (rabbit_amqqueue:name(), | |
220 | on_sync_fun(), on_sync_fun()) -> qistate()). | |
221 | -spec(recover/6 :: (rabbit_amqqueue:name(), shutdown_terms(), boolean(), | |
222 | contains_predicate(), | |
223 | on_sync_fun(), on_sync_fun()) -> | |
206 | 224 | {'undefined' | non_neg_integer(), |
207 | 225 | 'undefined' | non_neg_integer(), qistate()}). |
208 | 226 | -spec(terminate/2 :: ([any()], qistate()) -> qistate()). |
209 | 227 | -spec(delete_and_terminate/1 :: (qistate()) -> qistate()). |
210 | -spec(publish/5 :: (rabbit_types:msg_id(), seq_id(), | |
211 | rabbit_types:message_properties(), boolean(), qistate()) | |
212 | -> qistate()). | |
228 | -spec(publish/6 :: (rabbit_types:msg_id(), seq_id(), | |
229 | rabbit_types:message_properties(), boolean(), | |
230 | non_neg_integer(), qistate()) -> qistate()). | |
213 | 231 | -spec(deliver/2 :: ([seq_id()], qistate()) -> qistate()). |
214 | 232 | -spec(ack/2 :: ([seq_id()], qistate()) -> qistate()). |
215 | 233 | -spec(sync/1 :: (qistate()) -> qistate()). |
240 | 258 | false -> ok |
241 | 259 | end. |
242 | 260 | |
243 | init(Name, OnSyncFun) -> | |
261 | init(Name, OnSyncFun, OnSyncMsgFun) -> | |
244 | 262 | State = #qistate { dir = Dir } = blank_state(Name), |
245 | 263 | false = rabbit_file:is_file(Dir), %% is_file == is file or dir |
246 | State #qistate { on_sync = OnSyncFun }. | |
247 | ||
248 | recover(Name, Terms, MsgStoreRecovered, ContainsCheckFun, OnSyncFun) -> | |
264 | State#qistate{on_sync = OnSyncFun, | |
265 | on_sync_msg = OnSyncMsgFun}. | |
266 | ||
267 | recover(Name, Terms, MsgStoreRecovered, ContainsCheckFun, | |
268 | OnSyncFun, OnSyncMsgFun) -> | |
249 | 269 | State = blank_state(Name), |
250 | State1 = State #qistate { on_sync = OnSyncFun }, | |
270 | State1 = State #qistate{on_sync = OnSyncFun, | |
271 | on_sync_msg = OnSyncMsgFun}, | |
251 | 272 | CleanShutdown = Terms /= non_clean_shutdown, |
252 | 273 | case CleanShutdown andalso MsgStoreRecovered of |
253 | 274 | true -> RecoveredCounts = proplists:get_value(segments, Terms, []), |
266 | 287 | ok = rabbit_file:recursive_delete([Dir]), |
267 | 288 | State1. |
268 | 289 | |
269 | publish(MsgId, SeqId, MsgProps, IsPersistent, | |
270 | State = #qistate { unconfirmed = Unconfirmed }) | |
271 | when is_binary(MsgId) -> | |
290 | publish(MsgOrId, SeqId, MsgProps, IsPersistent, JournalSizeHint, | |
291 | State = #qistate{unconfirmed = UC, | |
292 | unconfirmed_msg = UCM}) -> | |
293 | MsgId = case MsgOrId of | |
294 | #basic_message{id = Id} -> Id; | |
295 | Id when is_binary(Id) -> Id | |
296 | end, | |
272 | 297 | ?MSG_ID_BYTES = size(MsgId), |
273 | 298 | {JournalHdl, State1} = |
274 | 299 | get_journal_handle( |
275 | case MsgProps#message_properties.needs_confirming of | |
276 | true -> Unconfirmed1 = gb_sets:add_element(MsgId, Unconfirmed), | |
277 | State #qistate { unconfirmed = Unconfirmed1 }; | |
278 | false -> State | |
300 | case {MsgProps#message_properties.needs_confirming, MsgOrId} of | |
301 | {true, MsgId} -> UC1 = gb_sets:add_element(MsgId, UC), | |
302 | State#qistate{unconfirmed = UC1}; | |
303 | {true, _} -> UCM1 = gb_sets:add_element(MsgId, UCM), | |
304 | State#qistate{unconfirmed_msg = UCM1}; | |
305 | {false, _} -> State | |
279 | 306 | end), |
307 | file_handle_cache_stats:update(queue_index_journal_write), | |
308 | {Bin, MsgBin} = create_pub_record_body(MsgOrId, MsgProps), | |
280 | 309 | ok = file_handle_cache:append( |
281 | 310 | JournalHdl, [<<(case IsPersistent of |
282 | 311 | true -> ?PUB_PERSIST_JPREFIX; |
283 | 312 | false -> ?PUB_TRANS_JPREFIX |
284 | 313 | end):?JPREFIX_BITS, |
285 | SeqId:?SEQ_BITS>>, | |
286 | create_pub_record_body(MsgId, MsgProps)]), | |
314 | SeqId:?SEQ_BITS, Bin/binary, | |
315 | (size(MsgBin)):?EMBEDDED_SIZE_BITS>>, MsgBin]), | |
287 | 316 | maybe_flush_journal( |
288 | add_to_journal(SeqId, {MsgId, MsgProps, IsPersistent}, State1)). | |
317 | JournalSizeHint, | |
318 | add_to_journal(SeqId, {IsPersistent, Bin, MsgBin}, State1)). | |
289 | 319 | |
290 | 320 | deliver(SeqIds, State) -> |
291 | 321 | deliver_or_ack(del, SeqIds, State). |
301 | 331 | ok = file_handle_cache:sync(JournalHdl), |
302 | 332 | notify_sync(State). |
303 | 333 | |
304 | needs_sync(#qistate { journal_handle = undefined }) -> | |
334 | needs_sync(#qistate{journal_handle = undefined}) -> | |
305 | 335 | false; |
306 | needs_sync(#qistate { journal_handle = JournalHdl, unconfirmed = UC }) -> | |
307 | case gb_sets:is_empty(UC) of | |
336 | needs_sync(#qistate{journal_handle = JournalHdl, | |
337 | unconfirmed = UC, | |
338 | unconfirmed_msg = UCM}) -> | |
339 | case gb_sets:is_empty(UC) andalso gb_sets:is_empty(UCM) of | |
308 | 340 | true -> case file_handle_cache:needs_sync(JournalHdl) of |
309 | 341 | true -> other; |
310 | 342 | false -> false |
408 | 440 | dirty_count = 0, |
409 | 441 | max_journal_entries = MaxJournal, |
410 | 442 | on_sync = fun (_) -> ok end, |
411 | unconfirmed = gb_sets:new() }. | |
443 | on_sync_msg = fun (_) -> ok end, | |
444 | unconfirmed = gb_sets:new(), | |
445 | unconfirmed_msg = gb_sets:new() }. | |
412 | 446 | |
413 | 447 | init_clean(RecoveredCounts, State) -> |
414 | 448 | %% Load the journal. Since this is a clean recovery this (almost) |
478 | 512 | {SegEntries1, UnackedCountDelta} = |
479 | 513 | segment_plus_journal(SegEntries, JEntries), |
480 | 514 | array:sparse_foldl( |
481 | fun (RelSeq, {{MsgId, MsgProps, IsPersistent}, Del, no_ack}, | |
515 | fun (RelSeq, {{IsPersistent, Bin, MsgBin}, Del, no_ack}, | |
482 | 516 | {SegmentAndDirtyCount, Bytes}) -> |
483 | {recover_message(ContainsCheckFun(MsgId), CleanShutdown, | |
517 | {MsgOrId, MsgProps} = parse_pub_record_body(Bin, MsgBin), | |
518 | {recover_message(ContainsCheckFun(MsgOrId), CleanShutdown, | |
484 | 519 | Del, RelSeq, SegmentAndDirtyCount), |
485 | 520 | Bytes + case IsPersistent of |
486 | 521 | true -> MsgProps#message_properties.size; |
540 | 575 | queue_index_walker_reader(QueueName, Gatherer) -> |
541 | 576 | State = blank_state(QueueName), |
542 | 577 | ok = scan_segments( |
543 | fun (_SeqId, MsgId, _MsgProps, true, _IsDelivered, no_ack, ok) -> | |
578 | fun (_SeqId, MsgId, _MsgProps, true, _IsDelivered, no_ack, ok) | |
579 | when is_binary(MsgId) -> | |
544 | 580 | gatherer:sync_in(Gatherer, {MsgId, 1}); |
545 | 581 | (_SeqId, _MsgId, _MsgProps, _IsPersistent, _IsDelivered, |
546 | 582 | _IsAcked, Acc) -> |
554 | 590 | Result = lists:foldr( |
555 | 591 | fun (Seg, AccN) -> |
556 | 592 | segment_entries_foldr( |
557 | fun (RelSeq, {{MsgId, MsgProps, IsPersistent}, | |
593 | fun (RelSeq, {{MsgOrId, MsgProps, IsPersistent}, | |
558 | 594 | IsDelivered, IsAcked}, AccM) -> |
559 | Fun(reconstruct_seq_id(Seg, RelSeq), MsgId, MsgProps, | |
595 | Fun(reconstruct_seq_id(Seg, RelSeq), MsgOrId, MsgProps, | |
560 | 596 | IsPersistent, IsDelivered, IsAcked, AccM) |
561 | 597 | end, AccN, segment_find_or_new(Seg, Dir, Segments)) |
562 | 598 | end, Acc, all_segment_nums(State1)), |
567 | 603 | %% expiry/binary manipulation |
568 | 604 | %%---------------------------------------------------------------------------- |
569 | 605 | |
570 | create_pub_record_body(MsgId, #message_properties { expiry = Expiry, | |
571 | size = Size }) -> | |
572 | [MsgId, expiry_to_binary(Expiry), <<Size:?SIZE_BITS>>]. | |
606 | create_pub_record_body(MsgOrId, #message_properties { expiry = Expiry, | |
607 | size = Size }) -> | |
608 | ExpiryBin = expiry_to_binary(Expiry), | |
609 | case MsgOrId of | |
610 | MsgId when is_binary(MsgId) -> | |
611 | {<<MsgId/binary, ExpiryBin/binary, Size:?SIZE_BITS>>, <<>>}; | |
612 | #basic_message{id = MsgId} -> | |
613 | MsgBin = term_to_binary(MsgOrId), | |
614 | {<<MsgId/binary, ExpiryBin/binary, Size:?SIZE_BITS>>, MsgBin} | |
615 | end. | |
573 | 616 | |
574 | 617 | expiry_to_binary(undefined) -> <<?NO_EXPIRY:?EXPIRY_BITS>>; |
575 | 618 | expiry_to_binary(Expiry) -> <<Expiry:?EXPIRY_BITS>>. |
576 | 619 | |
577 | 620 | parse_pub_record_body(<<MsgIdNum:?MSG_ID_BITS, Expiry:?EXPIRY_BITS, |
578 | Size:?SIZE_BITS>>) -> | |
621 | Size:?SIZE_BITS>>, MsgBin) -> | |
579 | 622 | %% work around for binary data fragmentation. See |
580 | 623 | %% rabbit_msg_file:read_next/2 |
581 | 624 | <<MsgId:?MSG_ID_BYTES/binary>> = <<MsgIdNum:?MSG_ID_BITS>>, |
582 | Exp = case Expiry of | |
583 | ?NO_EXPIRY -> undefined; | |
584 | X -> X | |
585 | end, | |
586 | {MsgId, #message_properties { expiry = Exp, | |
587 | size = Size }}. | |
625 | Props = #message_properties{expiry = case Expiry of | |
626 | ?NO_EXPIRY -> undefined; | |
627 | X -> X | |
628 | end, | |
629 | size = Size}, | |
630 | case MsgBin of | |
631 | <<>> -> {MsgId, Props}; | |
632 | _ -> Msg = #basic_message{id = MsgId} = binary_to_term(MsgBin), | |
633 | {Msg, Props} | |
634 | end. | |
588 | 635 | |
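`create_pub_record_body` and `parse_pub_record_body` above pack and unpack the fixed 28-byte record body, with the serialised message (`MsgBin`) riding alongside; an empty `MsgBin` means the message lives in the message store. A byte-level Python sketch, assuming `?NO_EXPIRY` encodes as 0 and returning the raw message bytes where the Erlang returns a deserialised `#basic_message{}`:

```python
# Layout (big-endian): 16-byte msg id | 8-byte expiry | 4-byte size.
# NO_EXPIRY = 0 is an assumption; msg_bin == b"" means "in the store".
import struct

NO_EXPIRY = 0

def create_pub_record_body(msg_id, expiry, size, msg_bin=b""):
    expiry = NO_EXPIRY if expiry is None else expiry
    return struct.pack(">16sQI", msg_id, expiry, size), msg_bin

def parse_pub_record_body(body, msg_bin):
    msg_id, expiry, size = struct.unpack(">16sQI", body)
    props = {"expiry": None if expiry == NO_EXPIRY else expiry,
             "size": size}
    # the real code deserialises msg_bin into a #basic_message{}
    return (msg_id if msg_bin == b"" else msg_bin), props
```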
589 | 636 | %%---------------------------------------------------------------------------- |
590 | 637 | %% journal manipulation |
627 | 674 | array:reset(RelSeq, JEntries) |
628 | 675 | end. |
629 | 676 | |
630 | maybe_flush_journal(State = #qistate { dirty_count = DCount, | |
631 | max_journal_entries = MaxJournal }) | |
632 | when DCount > MaxJournal -> | |
677 | maybe_flush_journal(State) -> | |
678 | maybe_flush_journal(infinity, State). | |
679 | ||
680 | maybe_flush_journal(Hint, State = #qistate { dirty_count = DCount, | |
681 | max_journal_entries = MaxJournal }) | |
682 | when DCount > MaxJournal orelse (Hint =/= infinity andalso DCount > Hint) -> | |
633 | 683 | flush_journal(State); |
634 | maybe_flush_journal(State) -> | |
684 | maybe_flush_journal(_Hint, State) -> | |
635 | 685 | State. |
636 | 686 | |
637 | 687 | flush_journal(State = #qistate { segments = Segments }) -> |
655 | 705 | path = Path } = Segment) -> |
656 | 706 | case array:sparse_size(JEntries) of |
657 | 707 | 0 -> Segment; |
658 | _ -> {ok, Hdl} = file_handle_cache:open(Path, ?WRITE_MODE, | |
708 | _ -> Seg = array:sparse_foldr( | |
709 | fun entry_to_segment/3, [], JEntries), | |
710 | file_handle_cache_stats:update(queue_index_write), | |
711 | ||
712 | {ok, Hdl} = file_handle_cache:open(Path, ?WRITE_MODE, | |
659 | 713 | [{write_buffer, infinity}]), |
660 | array:sparse_foldl(fun write_entry_to_segment/3, Hdl, JEntries), | |
714 | file_handle_cache:append(Hdl, Seg), | |
661 | 715 | ok = file_handle_cache:close(Hdl), |
662 | 716 | Segment #segment { journal_entries = array_new() } |
663 | 717 | end. |
676 | 730 | %% if you call it more than once on the same state. Assumes the counts |
677 | 731 | %% are 0 to start with. |
678 | 732 | load_journal(State = #qistate { dir = Dir }) -> |
679 | case rabbit_file:is_file(filename:join(Dir, ?JOURNAL_FILENAME)) of | |
733 | Path = filename:join(Dir, ?JOURNAL_FILENAME), | |
734 | case rabbit_file:is_file(Path) of | |
680 | 735 | true -> {JournalHdl, State1} = get_journal_handle(State), |
736 | Size = rabbit_file:file_size(Path), | |
681 | 737 | {ok, 0} = file_handle_cache:position(JournalHdl, 0), |
682 | load_journal_entries(State1); | |
738 | {ok, JournalBin} = file_handle_cache:read(JournalHdl, Size), | |
739 | parse_journal_entries(JournalBin, State1); | |
683 | 740 | false -> State |
684 | 741 | end. |
685 | 742 | |
703 | 760 | end, Segments), |
704 | 761 | State1 #qistate { segments = Segments1 }. |
705 | 762 | |
706 | load_journal_entries(State = #qistate { journal_handle = Hdl }) -> | |
707 | case file_handle_cache:read(Hdl, ?SEQ_BYTES) of | |
708 | {ok, <<Prefix:?JPREFIX_BITS, SeqId:?SEQ_BITS>>} -> | |
709 | case Prefix of | |
710 | ?DEL_JPREFIX -> | |
711 | load_journal_entries(add_to_journal(SeqId, del, State)); | |
712 | ?ACK_JPREFIX -> | |
713 | load_journal_entries(add_to_journal(SeqId, ack, State)); | |
714 | _ -> | |
715 | case file_handle_cache:read(Hdl, ?PUB_RECORD_BODY_BYTES) of | |
716 | %% Journal entry composed only of zeroes was probably | |
717 | %% produced during a dirty shutdown so stop reading | |
718 | {ok, <<0:?PUB_RECORD_BODY_BYTES/unit:8>>} -> | |
719 | State; | |
720 | {ok, <<Bin:?PUB_RECORD_BODY_BYTES/binary>>} -> | |
721 | {MsgId, MsgProps} = parse_pub_record_body(Bin), | |
722 | IsPersistent = case Prefix of | |
723 | ?PUB_PERSIST_JPREFIX -> true; | |
724 | ?PUB_TRANS_JPREFIX -> false | |
725 | end, | |
726 | load_journal_entries( | |
727 | add_to_journal( | |
728 | SeqId, {MsgId, MsgProps, IsPersistent}, State)); | |
729 | _ErrOrEoF -> %% err, we've lost at least a publish | |
730 | State | |
731 | end | |
732 | end; | |
733 | _ErrOrEoF -> State | |
734 | end. | |
763 | parse_journal_entries(<<?DEL_JPREFIX:?JPREFIX_BITS, SeqId:?SEQ_BITS, | |
764 | Rest/binary>>, State) -> | |
765 | parse_journal_entries(Rest, add_to_journal(SeqId, del, State)); | |
766 | ||
767 | parse_journal_entries(<<?ACK_JPREFIX:?JPREFIX_BITS, SeqId:?SEQ_BITS, | |
768 | Rest/binary>>, State) -> | |
769 | parse_journal_entries(Rest, add_to_journal(SeqId, ack, State)); | |
770 | parse_journal_entries(<<0:?JPREFIX_BITS, 0:?SEQ_BITS, | |
771 | 0:?PUB_RECORD_SIZE_BYTES/unit:8, _/binary>>, State) -> | |
772 | %% Journal entry composed only of zeroes was probably | |
773 | %% produced during a dirty shutdown so stop reading | |
774 | State; | |
775 | parse_journal_entries(<<Prefix:?JPREFIX_BITS, SeqId:?SEQ_BITS, | |
776 | Bin:?PUB_RECORD_BODY_BYTES/binary, | |
777 | MsgSize:?EMBEDDED_SIZE_BITS, MsgBin:MsgSize/binary, | |
778 | Rest/binary>>, State) -> | |
779 | IsPersistent = case Prefix of | |
780 | ?PUB_PERSIST_JPREFIX -> true; | |
781 | ?PUB_TRANS_JPREFIX -> false | |
782 | end, | |
783 | parse_journal_entries( | |
784 | Rest, add_to_journal(SeqId, {IsPersistent, Bin, MsgBin}, State)); | |
785 | parse_journal_entries(_ErrOrEoF, State) -> | |
786 | State. | |
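`parse_journal_entries` above replaces the old handle-at-a-time `load_journal_entries` loop with a single pass over the whole journal binary. A byte-level Python sketch of that scan (the real format packs the prefix and seq id at bit granularity; the prefix values and field widths here are invented stand-ins):

```python
# Each entry: 1-byte prefix + 4-byte seq id; publishes carry a fixed
# 28-byte body plus a length-prefixed embedded message. An all-zero
# entry marks the tail left behind by a dirty shutdown.
import struct

DEL, ACK = 0x01, 0x02                # invented byte-wide prefixes
PUB_PERSIST, PUB_TRANS = 0x03, 0x04  # (real ones are small bit codes)
BODY_BYTES = 28                      # msg id + expiry + size

def parse_journal(buf):
    entries = []
    while len(buf) >= 5:
        prefix, (seq,) = buf[0], struct.unpack(">I", buf[1:5])
        if prefix in (DEL, ACK):
            entries.append((seq, "del" if prefix == DEL else "ack"))
            buf = buf[5:]
            continue
        if prefix == 0 and seq == 0:
            break  # zero-filled tail left by a dirty shutdown
        if len(buf) < 9 + BODY_BYTES:
            break  # truncated publish: at least one record lost
        body = buf[5:5 + BODY_BYTES]
        (msg_len,) = struct.unpack(">I", buf[5 + BODY_BYTES:9 + BODY_BYTES])
        msg = buf[9 + BODY_BYTES:9 + BODY_BYTES + msg_len]
        entries.append((seq, (prefix == PUB_PERSIST, body, msg)))
        buf = buf[9 + BODY_BYTES + msg_len:]
    return entries
```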
735 | 787 | |
736 | 788 | deliver_or_ack(_Kind, [], State) -> |
737 | 789 | State; |
738 | 790 | deliver_or_ack(Kind, SeqIds, State) -> |
739 | 791 | JPrefix = case Kind of ack -> ?ACK_JPREFIX; del -> ?DEL_JPREFIX end, |
740 | 792 | {JournalHdl, State1} = get_journal_handle(State), |
793 | file_handle_cache_stats:update(queue_index_journal_write), | |
741 | 794 | ok = file_handle_cache:append( |
742 | 795 | JournalHdl, |
743 | 796 | [<<JPrefix:?JPREFIX_BITS, SeqId:?SEQ_BITS>> || SeqId <- SeqIds]), |
745 | 798 | add_to_journal(SeqId, Kind, StateN) |
746 | 799 | end, State1, SeqIds)). |
747 | 800 | |
748 | notify_sync(State = #qistate { unconfirmed = UC, on_sync = OnSyncFun }) -> | |
749 | case gb_sets:is_empty(UC) of | |
750 | true -> State; | |
751 | false -> OnSyncFun(UC), | |
752 | State #qistate { unconfirmed = gb_sets:new() } | |
801 | notify_sync(State = #qistate{unconfirmed = UC, | |
802 | unconfirmed_msg = UCM, | |
803 | on_sync = OnSyncFun, | |
804 | on_sync_msg = OnSyncMsgFun}) -> | |
805 | State1 = case gb_sets:is_empty(UC) of | |
806 | true -> State; | |
807 | false -> OnSyncFun(UC), | |
808 | State#qistate{unconfirmed = gb_sets:new()} | |
809 | end, | |
810 | case gb_sets:is_empty(UCM) of | |
811 | true -> State1; | |
812 | false -> OnSyncMsgFun(UCM), | |
813 | State1#qistate{unconfirmed_msg = gb_sets:new()} | |
753 | 814 | end. |
754 | 815 | |
755 | 816 | %%---------------------------------------------------------------------------- |
822 | 883 | segments_new() -> |
823 | 884 | {dict:new(), []}. |
824 | 885 | |
825 | write_entry_to_segment(_RelSeq, {?PUB, del, ack}, Hdl) -> | |
826 | Hdl; | |
827 | write_entry_to_segment(RelSeq, {Pub, Del, Ack}, Hdl) -> | |
828 | ok = case Pub of | |
829 | no_pub -> | |
830 | ok; | |
831 | {MsgId, MsgProps, IsPersistent} -> | |
832 | file_handle_cache:append( | |
833 | Hdl, [<<?PUB_PREFIX:?PUB_PREFIX_BITS, | |
834 | (bool_to_int(IsPersistent)):1, | |
835 | RelSeq:?REL_SEQ_BITS>>, | |
836 | create_pub_record_body(MsgId, MsgProps)]) | |
837 | end, | |
838 | ok = case {Del, Ack} of | |
839 | {no_del, no_ack} -> | |
840 | ok; | |
841 | _ -> | |
842 | Binary = <<?REL_SEQ_ONLY_PREFIX:?REL_SEQ_ONLY_PREFIX_BITS, | |
843 | RelSeq:?REL_SEQ_BITS>>, | |
844 | file_handle_cache:append( | |
845 | Hdl, case {Del, Ack} of | |
846 | {del, ack} -> [Binary, Binary]; | |
847 | _ -> Binary | |
848 | end) | |
849 | end, | |
850 | Hdl. | |
886 | entry_to_segment(_RelSeq, {?PUB, del, ack}, Buf) -> | |
887 | Buf; | |
888 | entry_to_segment(RelSeq, {Pub, Del, Ack}, Buf) -> | |
889 | %% NB: we are assembling the segment in reverse order here, so | |
890 | %% del/ack comes first. | |
891 | Buf1 = case {Del, Ack} of | |
892 | {no_del, no_ack} -> | |
893 | Buf; | |
894 | _ -> | |
895 | Binary = <<?REL_SEQ_ONLY_PREFIX:?REL_SEQ_ONLY_PREFIX_BITS, | |
896 | RelSeq:?REL_SEQ_BITS>>, | |
897 | case {Del, Ack} of | |
898 | {del, ack} -> [[Binary, Binary] | Buf]; | |
899 | _ -> [Binary | Buf] | |
900 | end | |
901 | end, | |
902 | case Pub of | |
903 | no_pub -> | |
904 | Buf1; | |
905 | {IsPersistent, Bin, MsgBin} -> | |
906 | [[<<?PUB_PREFIX:?PUB_PREFIX_BITS, | |
907 | (bool_to_int(IsPersistent)):1, | |
908 | RelSeq:?REL_SEQ_BITS, Bin/binary, | |
909 | (size(MsgBin)):?EMBEDDED_SIZE_BITS>>, MsgBin] | Buf1] | |
910 | end. | |
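The `entry_to_segment/3` rewrite above stops writing each entry straight to the file handle; instead `sparse_foldr` accumulates one iolist and the caller appends it in a single write. Because the fold runs right-to-left, each entry prepends to the buffer, so the del/ack marker is pushed before the publish record and ends up after it in the output. A Python sketch of that inverted assembly (tuple shapes invented):

```python
# Entries are (pub, delivered, acked); fully-acked published entries
# are dropped. Prepending in reverse seq order leaves the final list
# in ascending order, publish record before its del/ack markers.
def entry_to_segment(rel_seq, entry, buf):
    pub, delivered, acked = entry
    if pub is not None and delivered and acked:
        return buf  # published, delivered and acked: drop entirely
    if delivered or acked:
        marker = ("relseq", rel_seq)
        buf = ([marker, marker] + buf) if (delivered and acked) else ([marker] + buf)
    if pub is not None:
        buf = [("pub", rel_seq, pub)] + buf
    return buf
```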
851 | 911 | |
852 | 912 | read_bounded_segment(Seg, {StartSeg, StartRelSeq}, {EndSeg, EndRelSeq}, |
853 | 913 | {Messages, Segments}, Dir) -> |
854 | 914 | Segment = segment_find_or_new(Seg, Dir, Segments), |
855 | 915 | {segment_entries_foldr( |
856 | fun (RelSeq, {{MsgId, MsgProps, IsPersistent}, IsDelivered, no_ack}, Acc) | |
916 | fun (RelSeq, {{MsgOrId, MsgProps, IsPersistent}, IsDelivered, no_ack}, | |
917 | Acc) | |
857 | 918 | when (Seg > StartSeg orelse StartRelSeq =< RelSeq) andalso |
858 | 919 | (Seg < EndSeg orelse EndRelSeq >= RelSeq) -> |
859 | [ {MsgId, reconstruct_seq_id(StartSeg, RelSeq), MsgProps, | |
860 | IsPersistent, IsDelivered == del} | Acc ]; | |
920 | [{MsgOrId, reconstruct_seq_id(StartSeg, RelSeq), MsgProps, | |
921 | IsPersistent, IsDelivered == del} | Acc]; | |
861 | 922 | (_RelSeq, _Value, Acc) -> |
862 | 923 | Acc |
863 | 924 | end, Messages, Segment), |
867 | 928 | Segment = #segment { journal_entries = JEntries }) -> |
868 | 929 | {SegEntries, _UnackedCount} = load_segment(false, Segment), |
869 | 930 | {SegEntries1, _UnackedCountD} = segment_plus_journal(SegEntries, JEntries), |
870 | array:sparse_foldr(Fun, Init, SegEntries1). | |
931 | array:sparse_foldr( | |
932 | fun (RelSeq, {{IsPersistent, Bin, MsgBin}, Del, Ack}, Acc) -> | |
933 | {MsgOrId, MsgProps} = parse_pub_record_body(Bin, MsgBin), | |
934 | Fun(RelSeq, {{MsgOrId, MsgProps, IsPersistent}, Del, Ack}, Acc) | |
935 | end, Init, SegEntries1). | |
871 | 936 | |
872 | 937 | %% Loading segments |
873 | 938 | %% |
876 | 941 | Empty = {array_new(), 0}, |
877 | 942 | case rabbit_file:is_file(Path) of |
878 | 943 | false -> Empty; |
879 | true -> {ok, Hdl} = file_handle_cache:open(Path, ?READ_AHEAD_MODE, []), | |
944 | true -> Size = rabbit_file:file_size(Path), | |
945 | file_handle_cache_stats:update(queue_index_read), | |
946 | {ok, Hdl} = file_handle_cache:open(Path, ?READ_MODE, []), | |
880 | 947 | {ok, 0} = file_handle_cache:position(Hdl, bof), |
881 | Res = case file_handle_cache:read(Hdl, ?SEGMENT_TOTAL_SIZE) of | |
882 | {ok, SegData} -> load_segment_entries( | |
883 | KeepAcked, SegData, Empty); | |
884 | eof -> Empty | |
885 | end, | |
948 | {ok, SegBin} = file_handle_cache:read(Hdl, Size), | |
886 | 949 | ok = file_handle_cache:close(Hdl), |
950 | Res = parse_segment_entries(SegBin, KeepAcked, Empty), | |
887 | 951 | Res |
888 | 952 | end. |
889 | 953 | |
890 | load_segment_entries(KeepAcked, | |
891 | <<?PUB_PREFIX:?PUB_PREFIX_BITS, | |
892 | IsPersistentNum:1, RelSeq:?REL_SEQ_BITS, | |
893 | PubRecordBody:?PUB_RECORD_BODY_BYTES/binary, | |
894 | SegData/binary>>, | |
895 | {SegEntries, UnackedCount}) -> | |
896 | {MsgId, MsgProps} = parse_pub_record_body(PubRecordBody), | |
897 | Obj = {{MsgId, MsgProps, 1 == IsPersistentNum}, no_del, no_ack}, | |
954 | parse_segment_entries(<<?PUB_PREFIX:?PUB_PREFIX_BITS, | |
955 | IsPersistNum:1, RelSeq:?REL_SEQ_BITS, Rest/binary>>, | |
956 | KeepAcked, Acc) -> | |
957 | parse_segment_publish_entry( | |
958 | Rest, 1 == IsPersistNum, RelSeq, KeepAcked, Acc); | |
959 | parse_segment_entries(<<?REL_SEQ_ONLY_PREFIX:?REL_SEQ_ONLY_PREFIX_BITS, | |
960 | RelSeq:?REL_SEQ_BITS, Rest/binary>>, KeepAcked, Acc) -> | |
961 | parse_segment_entries( | |
962 | Rest, KeepAcked, add_segment_relseq_entry(KeepAcked, RelSeq, Acc)); | |
963 | parse_segment_entries(<<>>, _KeepAcked, Acc) -> | |
964 | Acc. | |
965 | ||
966 | parse_segment_publish_entry(<<Bin:?PUB_RECORD_BODY_BYTES/binary, | |
967 | MsgSize:?EMBEDDED_SIZE_BITS, | |
968 | MsgBin:MsgSize/binary, Rest/binary>>, | |
969 | IsPersistent, RelSeq, KeepAcked, | |
970 | {SegEntries, Unacked}) -> | |
971 | Obj = {{IsPersistent, Bin, MsgBin}, no_del, no_ack}, | |
898 | 972 | SegEntries1 = array:set(RelSeq, Obj, SegEntries), |
899 | load_segment_entries(KeepAcked, SegData, {SegEntries1, UnackedCount + 1}); | |
900 | load_segment_entries(KeepAcked, | |
901 | <<?REL_SEQ_ONLY_PREFIX:?REL_SEQ_ONLY_PREFIX_BITS, | |
902 | RelSeq:?REL_SEQ_BITS, SegData/binary>>, | |
903 | {SegEntries, UnackedCount}) -> | |
904 | {UnackedCountDelta, SegEntries1} = | |
905 | case array:get(RelSeq, SegEntries) of | |
906 | {Pub, no_del, no_ack} -> | |
907 | { 0, array:set(RelSeq, {Pub, del, no_ack}, SegEntries)}; | |
908 | {Pub, del, no_ack} when KeepAcked -> | |
909 | {-1, array:set(RelSeq, {Pub, del, ack}, SegEntries)}; | |
910 | {_Pub, del, no_ack} -> | |
911 | {-1, array:reset(RelSeq, SegEntries)} | |
912 | end, | |
913 | load_segment_entries(KeepAcked, SegData, | |
914 | {SegEntries1, UnackedCount + UnackedCountDelta}); | |
915 | load_segment_entries(_KeepAcked, _SegData, Res) -> | |
916 | Res. | |
973 | parse_segment_entries(Rest, KeepAcked, {SegEntries1, Unacked + 1}); | |
974 | parse_segment_publish_entry(Rest, _IsPersistent, _RelSeq, KeepAcked, Acc) -> | |
975 | parse_segment_entries(Rest, KeepAcked, Acc). | |
976 | ||
977 | add_segment_relseq_entry(KeepAcked, RelSeq, {SegEntries, Unacked}) -> | |
978 | case array:get(RelSeq, SegEntries) of | |
979 | {Pub, no_del, no_ack} -> | |
980 | {array:set(RelSeq, {Pub, del, no_ack}, SegEntries), Unacked}; | |
981 | {Pub, del, no_ack} when KeepAcked -> | |
982 | {array:set(RelSeq, {Pub, del, ack}, SegEntries), Unacked - 1}; | |
983 | {_Pub, del, no_ack} -> | |
984 | {array:reset(RelSeq, SegEntries), Unacked - 1} | |
985 | end. | |
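`add_segment_relseq_entry/3` above replays a seq-only record against the entry array: the first marker for an entry records a delivery, the second an ack, after which the entry is either kept (for the message-store walker) or reset. A Python sketch with a dict standing in for the Erlang array (names invented):

```python
# First seq-only marker = delivery, second = ack. Acking decrements
# the unacked count; acked entries are kept only when keep_acked.
def add_relseq(keep_acked, rel_seq, entries, unacked):
    pub, delivered, _acked = entries[rel_seq]
    if not delivered:
        entries[rel_seq] = (pub, True, False)
        return unacked
    if keep_acked:
        entries[rel_seq] = (pub, True, True)
    else:
        del entries[rel_seq]
    return unacked - 1
```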
917 | 986 | |
918 | 987 | array_new() -> |
919 | 988 | array:new([{default, undefined}, fixed, {size, ?SEGMENT_ENTRY_COUNT}]). |
1120 | 1189 | store_msg_size_segment(_) -> |
1121 | 1190 | stop. |
1122 | 1191 | |
1192 | store_msg() -> | |
1193 | foreach_queue_index({fun store_msg_journal/1, | |
1194 | fun store_msg_segment/1}). | |
1195 | ||
1196 | store_msg_journal(<<?DEL_JPREFIX:?JPREFIX_BITS, SeqId:?SEQ_BITS, | |
1197 | Rest/binary>>) -> | |
1198 | {<<?DEL_JPREFIX:?JPREFIX_BITS, SeqId:?SEQ_BITS>>, Rest}; | |
1199 | store_msg_journal(<<?ACK_JPREFIX:?JPREFIX_BITS, SeqId:?SEQ_BITS, | |
1200 | Rest/binary>>) -> | |
1201 | {<<?ACK_JPREFIX:?JPREFIX_BITS, SeqId:?SEQ_BITS>>, Rest}; | |
1202 | store_msg_journal(<<Prefix:?JPREFIX_BITS, SeqId:?SEQ_BITS, | |
1203 | MsgId:?MSG_ID_BITS, Expiry:?EXPIRY_BITS, Size:?SIZE_BITS, | |
1204 | Rest/binary>>) -> | |
1205 | {<<Prefix:?JPREFIX_BITS, SeqId:?SEQ_BITS, MsgId:?MSG_ID_BITS, | |
1206 | Expiry:?EXPIRY_BITS, Size:?SIZE_BITS, | |
1207 | 0:?EMBEDDED_SIZE_BITS>>, Rest}; | |
1208 | store_msg_journal(_) -> | |
1209 | stop. | |
1210 | ||
1211 | store_msg_segment(<<?PUB_PREFIX:?PUB_PREFIX_BITS, IsPersistentNum:1, | |
1212 | RelSeq:?REL_SEQ_BITS, MsgId:?MSG_ID_BITS, | |
1213 | Expiry:?EXPIRY_BITS, Size:?SIZE_BITS, Rest/binary>>) -> | |
1214 | {<<?PUB_PREFIX:?PUB_PREFIX_BITS, IsPersistentNum:1, RelSeq:?REL_SEQ_BITS, | |
1215 | MsgId:?MSG_ID_BITS, Expiry:?EXPIRY_BITS, Size:?SIZE_BITS, | |
1216 | 0:?EMBEDDED_SIZE_BITS>>, Rest}; | |
1217 | store_msg_segment(<<?REL_SEQ_ONLY_PREFIX:?REL_SEQ_ONLY_PREFIX_BITS, | |
1218 | RelSeq:?REL_SEQ_BITS, Rest/binary>>) -> | |
1219 | {<<?REL_SEQ_ONLY_PREFIX:?REL_SEQ_ONLY_PREFIX_BITS, RelSeq:?REL_SEQ_BITS>>, | |
1220 | Rest}; | |
1221 | store_msg_segment(_) -> | |
1222 | stop. | |
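The `store_msg` upgrade above walks every journal and segment record, copying del/ack records through unchanged and appending a 32-bit zero embedded-size field to each publish record (zero meaning "message lives in the message store"). A byte-level sketch of one transformer step, which consumes one record and returns the upgraded bytes plus the remainder, or `None` for "stop" (prefix values invented, as before):

```python
# One upgrade step: (upgraded_record, rest_of_buffer), or None at end.
import struct

DEL, ACK = 0x01, 0x02  # invented byte-wide prefixes
BODY_BYTES = 28        # msg id + expiry + size

def store_msg_journal(buf):
    if len(buf) < 5:
        return None  # stop
    head, rest = buf[:5], buf[5:]
    if head[0] in (DEL, ACK):
        return head, rest
    # publish: append a zero 32-bit embedded-message size
    body, rest = rest[:BODY_BYTES], rest[BODY_BYTES:]
    return head + body + struct.pack(">I", 0), rest
```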
1223 | ||
1224 | ||
1225 | ||
1123 | 1226 | |
1124 | 1227 | %%---------------------------------------------------------------------------- |
1125 | 1228 | |
1156 | 1259 | [{write_buffer, infinity}]), |
1157 | 1260 | |
1158 | 1261 | {ok, PathHdl} = file_handle_cache:open( |
1159 | Path, [{read_ahead, Size} | ?READ_MODE], []), | |
1262 | Path, ?READ_MODE, [{read_buffer, Size}]), | |
1160 | 1263 | {ok, Content} = file_handle_cache:read(PathHdl, Size), |
1161 | 1264 | ok = file_handle_cache:close(PathHdl), |
1162 | 1265 |
56 | 56 | timeout, frame_max, channel_max, client_properties, connected_at]). |
57 | 57 | |
58 | 58 | -define(INFO_KEYS, ?CREATION_EVENT_KEYS ++ ?STATISTICS_KEYS -- [pid]). |
59 | ||
60 | -define(AUTH_NOTIFICATION_INFO_KEYS, | |
61 | [host, vhost, name, peer_host, peer_port, protocol, auth_mechanism, | |
62 | ssl, ssl_protocol, ssl_cipher, peer_cert_issuer, peer_cert_subject, | |
63 | peer_cert_validity]). | |
59 | 64 | |
60 | 65 | -define(IS_RUNNING(State), |
61 | 66 | (State#v1.connection_state =:= running orelse |
213 | 218 | rabbit_net:fast_close(Sock), |
214 | 219 | exit(normal) |
215 | 220 | end, |
216 | log(info, "accepting AMQP connection ~p (~s)~n", [self(), Name]), | |
217 | 221 | {ok, HandshakeTimeout} = application:get_env(rabbit, handshake_timeout), |
218 | 222 | ClientSock = socket_op(Sock, SockTransform), |
219 | 223 | erlang:send_after(HandshakeTimeout, self(), handshake_timeout), |
259 | 263 | log(info, "closing AMQP connection ~p (~s)~n", [self(), Name]) |
260 | 264 | catch |
261 | 265 | Ex -> log(case Ex of |
262 | connection_closed_abruptly -> warning; | |
263 | _ -> error | |
266 | connection_closed_with_no_data_received -> debug; | |
267 | connection_closed_abruptly -> warning; | |
268 | _ -> error | |
264 | 269 | end, "closing AMQP connection ~p (~s):~n~p~n", |
265 | 270 | [self(), Name, Ex]) |
266 | 271 | after |
312 | 317 | binlist_split(Len, [H|T], Acc) -> |
313 | 318 | binlist_split(Len - size(H), T, [H|Acc]). |
314 | 319 | |
315 | mainloop(Deb, Buf, BufLen, State = #v1{sock = Sock}) -> | |
316 | case rabbit_net:recv(Sock) of | |
320 | mainloop(Deb, Buf, BufLen, State = #v1{sock = Sock, | |
321 | connection_state = CS, | |
322 | connection = #connection{ | |
323 | name = ConnName}}) -> | |
324 | Recv = rabbit_net:recv(Sock), | |
325 | case CS of | |
326 | pre_init when Buf =:= [] -> | |
327 | %% We only log incoming connections when either the | |
328 | %% first byte was received or there was an error (e.g. a | |
329 | %% timeout). | |
330 | %% | |
331 | %% The goal is to not log TCP healthchecks (a connection | |
332 | %% with no data received) unless specified otherwise. | |
333 | log(case Recv of | |
334 | closed -> debug; | |
335 | _ -> info | |
336 | end, "accepting AMQP connection ~p (~s)~n", | |
337 | [self(), ConnName]); | |
338 | _ -> | |
339 | ok | |
340 | end, | |
341 | case Recv of | |
317 | 342 | {data, Data} -> |
318 | 343 | recvloop(Deb, [Data | Buf], BufLen + size(Data), |
319 | 344 | State#v1{pending_recv = false}); |
333 | 358 | end |
334 | 359 | end. |
335 | 360 | |
336 | stop(closed, State) -> maybe_emit_stats(State), | |
337 | throw(connection_closed_abruptly); | |
338 | stop(Reason, State) -> maybe_emit_stats(State), | |
339 | throw({inet_error, Reason}). | |
361 | stop(closed, #v1{connection_state = pre_init} = State) -> | |
362 | %% The connection was closed before any packet was received. It's | |
363 | %% probably a load-balancer healthcheck: don't consider this a | |
364 | %% failure. | |
365 | maybe_emit_stats(State), | |
366 | throw(connection_closed_with_no_data_received); | |
367 | stop(closed, State) -> | |
368 | maybe_emit_stats(State), | |
369 | throw(connection_closed_abruptly); | |
370 | stop(Reason, State) -> | |
371 | maybe_emit_stats(State), | |
372 | throw({inet_error, Reason}). | |
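As a side note on the healthcheck-suppression change above: the decision of when, and at what level, to log a new connection can be sketched as follows. This is an illustrative sketch, not the upstream code; the function name and its three-argument shape are assumptions made for the example.

```erlang
%% Sketch: pick a log action for an incoming connection, mirroring the
%% logic in the hunk above. In 'pre_init' with an empty buffer no byte
%% has arrived yet, so a 'closed' recv result is most likely a TCP
%% healthcheck and is logged at debug rather than info.
log_level_sketch(pre_init, _Buf = [], closed) -> debug;
log_level_sketch(pre_init, _Buf = [], _Recv)  -> info;
log_level_sketch(_ConnState, _Buf, _Recv)     -> no_log.
```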
340 | 373 | |
341 | 374 | handle_other({conserve_resources, Source, Conserve}, |
342 | 375 | State = #v1{throttle = Throttle = #throttle{alarmed_by = CR}}) -> |
943 | 976 | helper_sup = SupPid, |
944 | 977 | sock = Sock, |
945 | 978 | throttle = Throttle}) -> |
946 | ok = rabbit_access_control:check_vhost_access(User, VHostPath), | |
979 | ok = rabbit_access_control:check_vhost_access(User, VHostPath, Sock), | |
947 | 980 | NewConnection = Connection#connection{vhost = VHostPath}, |
948 | 981 | ok = send_on_channel0(Sock, #'connection.open_ok'{}, Protocol), |
949 | 982 | Conserve = rabbit_alarm:register(self(), {?MODULE, conserve_resources, []}), |
1045 | 1078 | auth_state = AuthState}, |
1046 | 1079 | sock = Sock}) -> |
1047 | 1080 | case AuthMechanism:handle_response(Response, AuthState) of |
1048 | {refused, Msg, Args} -> | |
1049 | auth_fail(Msg, Args, Name, State); | |
1081 | {refused, Username, Msg, Args} -> | |
1082 | auth_fail(Username, Msg, Args, Name, State); | |
1050 | 1083 | {protocol_error, Msg, Args} -> |
1084 | notify_auth_result(none, user_authentication_failure, | |
1085 | [{error, rabbit_misc:format(Msg, Args)}], | |
1086 | State), | |
1051 | 1087 | rabbit_misc:protocol_error(syntax_error, Msg, Args); |
1052 | 1088 | {challenge, Challenge, AuthState1} -> |
1053 | 1089 | Secure = #'connection.secure'{challenge = Challenge}, |
1056 | 1092 | auth_state = AuthState1}}; |
1057 | 1093 | {ok, User = #user{username = Username}} -> |
1058 | 1094 | case rabbit_access_control:check_user_loopback(Username, Sock) of |
1059 | ok -> ok; | |
1060 | not_allowed -> auth_fail("user '~s' can only connect via " | |
1061 | "localhost", [Username], Name, State) | |
1095 | ok -> | |
1096 | notify_auth_result(Username, user_authentication_success, | |
1097 | [], State); | |
1098 | not_allowed -> | |
1099 | auth_fail(Username, "user '~s' can only connect via " | |
1100 | "localhost", [Username], Name, State) | |
1062 | 1101 | end, |
1063 | 1102 | Tune = #'connection.tune'{frame_max = get_env(frame_max), |
1064 | 1103 | channel_max = get_env(channel_max), |
1070 | 1109 | end. |
1071 | 1110 | |
1072 | 1111 | -ifdef(use_specs). |
1073 | -spec(auth_fail/4 :: (string(), [any()], binary(), #v1{}) -> no_return()). | |
1112 | -spec(auth_fail/5 :: | |
1113 | (rabbit_types:username() | none, string(), [any()], binary(), #v1{}) -> | |
1114 | no_return()). | |
1074 | 1115 | -endif. |
1075 | auth_fail(Msg, Args, AuthName, | |
1116 | auth_fail(Username, Msg, Args, AuthName, | |
1076 | 1117 | State = #v1{connection = #connection{protocol = Protocol, |
1077 | 1118 | capabilities = Capabilities}}) -> |
1119 | notify_auth_result(Username, user_authentication_failure, | |
1120 | [{error, rabbit_misc:format(Msg, Args)}], State), | |
1078 | 1121 | AmqpError = rabbit_misc:amqp_error( |
1079 | 1122 | access_refused, "~s login refused: ~s", |
1080 | 1123 | [AuthName, io_lib:format(Msg, Args)], none), |
1092 | 1135 | _ -> ok |
1093 | 1136 | end, |
1094 | 1137 | rabbit_misc:protocol_error(AmqpError). |
1138 | ||
1139 | notify_auth_result(Username, AuthResult, ExtraProps, State) -> | |
1140 | EventProps = [{connection_type, network}, | |
1141 | {name, case Username of none -> ''; _ -> Username end}] ++ | |
1142 | [case Item of | |
1143 | name -> {connection_name, i(name, State)}; | |
1144 | _ -> {Item, i(Item, State)} | |
1145 | end || Item <- ?AUTH_NOTIFICATION_INFO_KEYS] ++ | |
1146 | ExtraProps, | |
1147 | rabbit_event:notify(AuthResult, [P || {_, V} = P <- EventProps, V =/= '']). | |
1095 | 1148 | |
1096 | 1149 | %%-------------------------------------------------------------------------- |
1097 | 1150 | |
1166 | 1219 | |
1167 | 1220 | cert_info(F, #v1{sock = Sock}) -> |
1168 | 1221 | case rabbit_net:peercert(Sock) of |
1169 | nossl -> ''; | |
1170 | {error, no_peercert} -> ''; | |
1171 | {ok, Cert} -> list_to_binary(F(Cert)) | |
1222 | nossl -> ''; | |
1223 | {error, _} -> ''; | |
1224 | {ok, Cert} -> list_to_binary(F(Cert)) | |
1172 | 1225 | end. |
1173 | 1226 | |
1174 | 1227 | maybe_emit_stats(State) -> |
15 | 15 | |
16 | 16 | -module(rabbit_trace). |
17 | 17 | |
18 | -export([init/1, enabled/1, tap_in/5, tap_out/5, start/1, stop/1]). | |
18 | -export([init/1, enabled/1, tap_in/6, tap_out/5, start/1, stop/1]). | |
19 | 19 | |
20 | 20 | -include("rabbit.hrl"). |
21 | 21 | -include("rabbit_framing.hrl"). |
31 | 31 | |
32 | 32 | -spec(init/1 :: (rabbit_types:vhost()) -> state()). |
33 | 33 | -spec(enabled/1 :: (rabbit_types:vhost()) -> boolean()). |
34 | -spec(tap_in/5 :: (rabbit_types:basic_message(), binary(), | |
35 | rabbit_channel:channel_number(), | |
34 | -spec(tap_in/6 :: (rabbit_types:basic_message(), [rabbit_amqqueue:name()], | |
35 | binary(), rabbit_channel:channel_number(), | |
36 | 36 | rabbit_types:username(), state()) -> 'ok'). |
37 | 37 | -spec(tap_out/5 :: (rabbit_amqqueue:qmsg(), binary(), |
38 | 38 | rabbit_channel:channel_number(), |
57 | 57 | {ok, VHosts} = application:get_env(rabbit, ?TRACE_VHOSTS), |
58 | 58 | lists:member(VHost, VHosts). |
59 | 59 | |
60 | tap_in(_Msg, _ConnName, _ChannelNum, _Username, none) -> ok; | |
60 | tap_in(_Msg, _QNames, _ConnName, _ChannelNum, _Username, none) -> ok; | |
61 | 61 | tap_in(Msg = #basic_message{exchange_name = #resource{name = XName, |
62 | 62 | virtual_host = VHost}}, |
63 | ConnName, ChannelNum, Username, TraceX) -> | |
63 | QNames, ConnName, ChannelNum, Username, TraceX) -> | |
64 | 64 | trace(TraceX, Msg, <<"publish">>, XName, |
65 | [{<<"vhost">>, longstr, VHost}, | |
66 | {<<"connection">>, longstr, ConnName}, | |
67 | {<<"channel">>, signedint, ChannelNum}, | |
68 | {<<"user">>, longstr, Username}]). | |
65 | [{<<"vhost">>, longstr, VHost}, | |
66 | {<<"connection">>, longstr, ConnName}, | |
67 | {<<"channel">>, signedint, ChannelNum}, | |
68 | {<<"user">>, longstr, Username}, | |
69 | {<<"routed_queues">>, array, | |
70 | [{longstr, QName#resource.name} || QName <- QNames]}]). | |
69 | 71 | |
70 | 72 | tap_out(_Msg, _ConnName, _ChannelNum, _Username, none) -> ok; |
71 | 73 | tap_out({#resource{name = QName, virtual_host = VHost}, |
26 | 26 | vhost/0, ctag/0, amqp_error/0, r/1, r2/2, r3/3, listener/0, |
27 | 27 | binding/0, binding_source/0, binding_destination/0, |
28 | 28 | amqqueue/0, exchange/0, |
29 | connection/0, protocol/0, user/0, internal_user/0, | |
29 | connection/0, protocol/0, auth_user/0, user/0, internal_user/0, | |
30 | 30 | username/0, password/0, password_hash/0, |
31 | 31 | ok/1, error/1, ok_or_error/1, ok_or_error2/2, ok_pid_or_error/0, |
32 | 32 | channel_exit/0, connection_exit/0, mfargs/0, proc_name/0, |
130 | 130 | |
131 | 131 | -type(protocol() :: rabbit_framing:protocol()). |
132 | 132 | |
133 | -type(auth_user() :: | |
134 | #auth_user{username :: username(), | |
135 | tags :: [atom()], | |
136 | impl :: any()}). | |
137 | ||
133 | 138 | -type(user() :: |
134 | #user{username :: username(), | |
135 | tags :: [atom()], | |
136 | auth_backend :: atom(), | |
137 | impl :: any()}). | |
139 | #user{username :: username(), | |
140 | tags :: [atom()], | |
141 | authz_backends :: [{atom(), any()}]}). | |
138 | 142 | |
139 | 143 | -type(internal_user() :: |
140 | 144 | #internal_user{username :: username(), |
15 | 15 | |
16 | 16 | -module(rabbit_upgrade). |
17 | 17 | |
18 | -export([maybe_upgrade_mnesia/0, maybe_upgrade_local/0]). | |
18 | -export([maybe_upgrade_mnesia/0, maybe_upgrade_local/0, | |
19 | nodes_running/1, secondary_upgrade/1]). | |
19 | 20 | |
20 | 21 | -include("rabbit.hrl"). |
21 | 22 | |
121 | 122 | |
122 | 123 | maybe_upgrade_mnesia() -> |
123 | 124 | AllNodes = rabbit_mnesia:cluster_nodes(all), |
125 | ok = rabbit_mnesia_rename:maybe_finish(AllNodes), | |
124 | 126 | case rabbit_version:upgrades_required(mnesia) of |
125 | 127 | {error, starting_from_scratch} -> |
126 | 128 | ok; |
49 | 49 | -rabbit_upgrade({cluster_name, mnesia, [runtime_parameters]}). |
50 | 50 | -rabbit_upgrade({down_slave_nodes, mnesia, [queue_decorators]}). |
51 | 51 | -rabbit_upgrade({queue_state, mnesia, [down_slave_nodes]}). |
52 | -rabbit_upgrade({recoverable_slaves, mnesia, [queue_state]}). | |
52 | 53 | |
53 | 54 | %% ------------------------------------------------------------------- |
54 | 55 | |
81 | 82 | -spec(cluster_name/0 :: () -> 'ok'). |
82 | 83 | -spec(down_slave_nodes/0 :: () -> 'ok'). |
83 | 84 | -spec(queue_state/0 :: () -> 'ok'). |
85 | -spec(recoverable_slaves/0 :: () -> 'ok'). | |
84 | 86 | |
85 | 87 | -endif. |
86 | 88 | |
417 | 419 | [name, durable, auto_delete, exclusive_owner, arguments, pid, slave_pids, |
418 | 420 | sync_slave_pids, down_slave_nodes, policy, gm_pids, decorators, state]). |
419 | 421 | |
422 | recoverable_slaves() -> | |
423 | ok = recoverable_slaves(rabbit_queue), | |
424 | ok = recoverable_slaves(rabbit_durable_queue). | |
425 | ||
426 | recoverable_slaves(Table) -> | |
427 | transform( | |
428 | Table, fun (Q) -> Q end, %% Don't change shape of record | |
429 | [name, durable, auto_delete, exclusive_owner, arguments, pid, slave_pids, | |
430 | sync_slave_pids, recoverable_slaves, policy, gm_pids, decorators, | |
431 | state]). | |
432 | ||
433 | ||
420 | 434 | %%-------------------------------------------------------------------- |
421 | 435 | |
422 | 436 | transform(TableName, Fun, FieldList) -> |
17 | 17 | |
18 | 18 | -export([init/3, terminate/2, delete_and_terminate/2, delete_crashed/1, |
19 | 19 | purge/1, purge_acks/1, |
20 | publish/5, publish_delivered/4, discard/3, drain_confirmed/1, | |
20 | publish/6, publish_delivered/5, discard/4, drain_confirmed/1, | |
21 | 21 | dropwhile/2, fetchwhile/4, fetch/2, drop/2, ack/2, requeue/2, |
22 | 22 | ackfold/4, fold/3, len/1, is_empty/1, depth/1, |
23 | 23 | set_ram_duration_target/2, ram_duration/1, needs_timeout/1, timeout/1, |
27 | 27 | -export([start/1, stop/0]). |
28 | 28 | |
29 | 29 | %% exported for testing only |
30 | -export([start_msg_store/2, stop_msg_store/0, init/5]). | |
31 | ||
32 | %%---------------------------------------------------------------------------- | |
30 | -export([start_msg_store/2, stop_msg_store/0, init/6]). | |
31 | ||
32 | %%---------------------------------------------------------------------------- | |
33 | %% Messages, and their position in the queue, can be in memory or on | |
34 | %% disk, or both. Persistent messages will have both message and | |
35 | %% position pushed to disk as soon as they arrive; transient messages | |
36 | %% can be written to disk (and thus both types can be evicted from | |
37 | %% memory) under memory pressure. The questions of whether a message | |
38 | %% is in RAM and whether it is persistent are orthogonal. | |
39 | %% | |
40 | %% Messages are persisted using the queue index and the message | |
41 | %% store. Normally the queue index holds the position of the message | |
42 | %% *within this queue* along with a couple of small bits of metadata, | |
43 | %% while the message store holds the message itself (including headers | |
44 | %% and other properties). | |
45 | %% | |
46 | %% However, as an optimisation, small messages can be embedded | |
47 | %% directly in the queue index and bypass the message store | |
48 | %% altogether. | |
49 | %% | |
33 | 50 | %% Definitions: |
34 | ||
51 | %% | |
35 | 52 | %% alpha: this is a message where both the message itself, and its |
36 | 53 | %% position within the queue are held in RAM |
37 | 54 | %% |
38 | %% beta: this is a message where the message itself is only held on | |
39 | %% disk, but its position within the queue is held in RAM. | |
55 | %% beta: this is a message where the message itself is only held on | |
56 | %% disk (if persisted to the message store) but its position | |
57 | %% within the queue is held in RAM. | |
40 | 58 | %% |
41 | 59 | %% gamma: this is a message where the message itself is only held on |
42 | 60 | %% disk, but its position is both in RAM and on disk. |
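The embedding optimisation described in the comments above can be illustrated with a small decision function. This is a hedged sketch: the real choice is made by determine_persist_to/2 (referenced later in this diff alongside ?HEADER_GUESS_SIZE), and the exact size estimate and the threshold argument used here are assumptions for illustration only.

```erlang
%% Sketch: decide where a message should be persisted. Messages whose
%% estimated on-disk size (body plus a guessed header overhead) falls
%% below the embedding threshold go into the queue index; everything
%% else goes to the message store.
persist_to_sketch(BodySize, EmbedBelow) ->
    HeaderGuess = 100,  %% cf. ?HEADER_GUESS_SIZE in the diff
    case BodySize + HeaderGuess < EmbedBelow of
        true  -> queue_index;
        false -> msg_store
    end.
```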
247 | 265 | q3, |
248 | 266 | q4, |
249 | 267 | next_seq_id, |
250 | ram_pending_ack, | |
251 | disk_pending_ack, | |
268 | ram_pending_ack, %% msgs using store, still in RAM | |
269 | disk_pending_ack, %% msgs in store, paged out | |
270 | qi_pending_ack, %% msgs using qi, *can't* be paged out | |
252 | 271 | index_state, |
253 | 272 | msg_store_clients, |
254 | 273 | durable, |
273 | 292 | unconfirmed, |
274 | 293 | confirmed, |
275 | 294 | ack_out_counter, |
276 | ack_in_counter | |
295 | ack_in_counter, | |
296 | %% Unlike the other counters these two do not feed into | |
297 | %% #rates{} and get reset | |
298 | disk_read_count, | |
299 | disk_write_count | |
277 | 300 | }). |
278 | 301 | |
279 | 302 | -record(rates, { in, out, ack_in, ack_out, timestamp }). |
284 | 307 | msg, |
285 | 308 | is_persistent, |
286 | 309 | is_delivered, |
287 | msg_on_disk, | |
310 | msg_in_store, | |
288 | 311 | index_on_disk, |
312 | persist_to, | |
289 | 313 | msg_props |
290 | 314 | }). |
291 | 315 | |
299 | 323 | %% betas, the IO_BATCH_SIZE sets the number of betas that we must be |
300 | 324 | %% due to write indices for before we do any work at all. |
301 | 325 | -define(IO_BATCH_SIZE, 2048). %% next power-of-2 after ?CREDIT_DISC_BOUND |
326 | -define(HEADER_GUESS_SIZE, 100). %% see determine_persist_to/2 | |
302 | 327 | -define(PERSISTENT_MSG_STORE, msg_store_persistent). |
303 | 328 | -define(TRANSIENT_MSG_STORE, msg_store_transient). |
304 | 329 | -define(QUEUE, lqueue). |
305 | 330 | |
306 | 331 | -include("rabbit.hrl"). |
332 | -include("rabbit_framing.hrl"). | |
307 | 333 | |
308 | 334 | %%---------------------------------------------------------------------------- |
309 | 335 | |
340 | 366 | next_seq_id :: seq_id(), |
341 | 367 | ram_pending_ack :: gb_trees:tree(), |
342 | 368 | disk_pending_ack :: gb_trees:tree(), |
369 | qi_pending_ack :: gb_trees:tree(), | |
343 | 370 | index_state :: any(), |
344 | 371 | msg_store_clients :: 'undefined' | {{any(), binary()}, |
345 | 372 | {any(), binary()}}, |
366 | 393 | unconfirmed :: gb_sets:set(), |
367 | 394 | confirmed :: gb_sets:set(), |
368 | 395 | ack_out_counter :: non_neg_integer(), |
369 | ack_in_counter :: non_neg_integer() }). | |
396 | ack_in_counter :: non_neg_integer(), | |
397 | disk_read_count :: non_neg_integer(), | |
398 | disk_write_count :: non_neg_integer() }). | |
370 | 399 | %% Duplicated from rabbit_backing_queue |
371 | 400 | -spec(ack/2 :: ([ack()], state()) -> {[rabbit_guid:guid()], state()}). |
372 | 401 | |
425 | 454 | ok = rabbit_sup:stop_child(?PERSISTENT_MSG_STORE), |
426 | 455 | ok = rabbit_sup:stop_child(?TRANSIENT_MSG_STORE). |
427 | 456 | |
428 | init(Queue, Recover, AsyncCallback) -> | |
429 | init(Queue, Recover, AsyncCallback, | |
430 | fun (MsgIds, ActionTaken) -> | |
431 | msgs_written_to_disk(AsyncCallback, MsgIds, ActionTaken) | |
432 | end, | |
433 | fun (MsgIds) -> msg_indices_written_to_disk(AsyncCallback, MsgIds) end). | |
457 | init(Queue, Recover, Callback) -> | |
458 | init( | |
459 | Queue, Recover, Callback, | |
460 | fun (MsgIds, ActionTaken) -> | |
461 | msgs_written_to_disk(Callback, MsgIds, ActionTaken) | |
462 | end, | |
463 | fun (MsgIds) -> msg_indices_written_to_disk(Callback, MsgIds) end, | |
464 | fun (MsgIds) -> msgs_and_indices_written_to_disk(Callback, MsgIds) end). | |
434 | 465 | |
435 | 466 | init(#amqqueue { name = QueueName, durable = IsDurable }, new, |
436 | AsyncCallback, MsgOnDiskFun, MsgIdxOnDiskFun) -> | |
437 | IndexState = rabbit_queue_index:init(QueueName, MsgIdxOnDiskFun), | |
467 | AsyncCallback, MsgOnDiskFun, MsgIdxOnDiskFun, MsgAndIdxOnDiskFun) -> | |
468 | IndexState = rabbit_queue_index:init(QueueName, | |
469 | MsgIdxOnDiskFun, MsgAndIdxOnDiskFun), | |
438 | 470 | init(IsDurable, IndexState, 0, 0, [], |
439 | 471 | case IsDurable of |
440 | 472 | true -> msg_store_client_init(?PERSISTENT_MSG_STORE, |
445 | 477 | |
446 | 478 | %% We can be recovering a transient queue if it crashed |
447 | 479 | init(#amqqueue { name = QueueName, durable = IsDurable }, Terms, |
448 | AsyncCallback, MsgOnDiskFun, MsgIdxOnDiskFun) -> | |
480 | AsyncCallback, MsgOnDiskFun, MsgIdxOnDiskFun, MsgAndIdxOnDiskFun) -> | |
449 | 481 | {PRef, RecoveryTerms} = process_recovery_terms(Terms), |
450 | 482 | {PersistentClient, ContainsCheckFun} = |
451 | 483 | case IsDurable of |
452 | 484 | true -> C = msg_store_client_init(?PERSISTENT_MSG_STORE, PRef, |
453 | 485 | MsgOnDiskFun, AsyncCallback), |
454 | {C, fun (MId) -> rabbit_msg_store:contains(MId, C) end}; | |
486 | {C, fun (MsgId) when is_binary(MsgId) -> | |
487 | rabbit_msg_store:contains(MsgId, C); | |
488 | (#basic_message{is_persistent = Persistent}) -> | |
489 | Persistent | |
490 | end}; | |
455 | 491 | false -> {undefined, fun(_MsgId) -> false end} |
456 | 492 | end, |
457 | 493 | TransientClient = msg_store_client_init(?TRANSIENT_MSG_STORE, |
460 | 496 | rabbit_queue_index:recover( |
461 | 497 | QueueName, RecoveryTerms, |
462 | 498 | rabbit_msg_store:successfully_recovered_state(?PERSISTENT_MSG_STORE), |
463 | ContainsCheckFun, MsgIdxOnDiskFun), | |
499 | ContainsCheckFun, MsgIdxOnDiskFun, MsgAndIdxOnDiskFun), | |
464 | 500 | init(IsDurable, IndexState, DeltaCount, DeltaBytes, RecoveryTerms, |
465 | 501 | PersistentClient, TransientClient). |
466 | 502 | |
513 | 549 | delete_crashed(#amqqueue{name = QName}) -> |
514 | 550 | ok = rabbit_queue_index:erase(QName). |
515 | 551 | |
516 | purge(State = #vqstate { q4 = Q4, | |
517 | index_state = IndexState, | |
518 | msg_store_clients = MSCState, | |
519 | len = Len, | |
520 | ram_bytes = RamBytes, | |
521 | persistent_count = PCount, | |
522 | persistent_bytes = PBytes }) -> | |
552 | purge(State = #vqstate { q4 = Q4, | |
553 | len = Len }) -> | |
523 | 554 | %% TODO: when there are no pending acks, which is a common case, |
524 | 555 | %% we could simply wipe the qi instead of issuing delivers and |
525 | 556 | %% acks for all the messages. |
526 | Stats = {RamBytes, PCount, PBytes}, | |
527 | {Stats1, IndexState1} = | |
528 | remove_queue_entries(Q4, Stats, IndexState, MSCState), | |
529 | ||
530 | {Stats2, State1 = #vqstate { q1 = Q1, | |
531 | index_state = IndexState2, | |
532 | msg_store_clients = MSCState1 }} = | |
533 | ||
534 | purge_betas_and_deltas( | |
535 | Stats1, State #vqstate { q4 = ?QUEUE:new(), | |
536 | index_state = IndexState1 }), | |
537 | ||
538 | {{RamBytes3, PCount3, PBytes3}, IndexState3} = | |
539 | remove_queue_entries(Q1, Stats2, IndexState2, MSCState1), | |
540 | ||
541 | {Len, a(State1 #vqstate { q1 = ?QUEUE:new(), | |
542 | index_state = IndexState3, | |
543 | len = 0, | |
544 | bytes = 0, | |
545 | ram_msg_count = 0, | |
546 | ram_bytes = RamBytes3, | |
547 | persistent_count = PCount3, | |
548 | persistent_bytes = PBytes3 })}. | |
557 | State1 = remove_queue_entries(Q4, State), | |
558 | ||
559 | State2 = #vqstate { q1 = Q1 } = | |
560 | purge_betas_and_deltas(State1 #vqstate { q4 = ?QUEUE:new() }), | |
561 | ||
562 | State3 = remove_queue_entries(Q1, State2), | |
563 | ||
564 | {Len, a(State3 #vqstate { q1 = ?QUEUE:new() })}. | |
549 | 565 | |
550 | 566 | purge_acks(State) -> a(purge_pending_ack(false, State)). |
551 | 567 | |
552 | 568 | publish(Msg = #basic_message { is_persistent = IsPersistent, id = MsgId }, |
553 | 569 | MsgProps = #message_properties { needs_confirming = NeedsConfirming }, |
554 | IsDelivered, _ChPid, State = #vqstate { q1 = Q1, q3 = Q3, q4 = Q4, | |
555 | next_seq_id = SeqId, | |
556 | len = Len, | |
557 | in_counter = InCount, | |
558 | persistent_count = PCount, | |
559 | durable = IsDurable, | |
560 | unconfirmed = UC }) -> | |
570 | IsDelivered, _ChPid, _Flow, | |
571 | State = #vqstate { q1 = Q1, q3 = Q3, q4 = Q4, | |
572 | next_seq_id = SeqId, | |
573 | in_counter = InCount, | |
574 | durable = IsDurable, | |
575 | unconfirmed = UC }) -> | |
561 | 576 | IsPersistent1 = IsDurable andalso IsPersistent, |
562 | 577 | MsgStatus = msg_status(IsPersistent1, IsDelivered, SeqId, Msg, MsgProps), |
563 | 578 | {MsgStatus1, State1} = maybe_write_to_disk(false, false, MsgStatus, State), |
566 | 581 | true -> State1 #vqstate { q4 = ?QUEUE:in(m(MsgStatus1), Q4) } |
567 | 582 | end, |
568 | 583 | InCount1 = InCount + 1, |
569 | PCount1 = PCount + one_if(IsPersistent1), | |
570 | 584 | UC1 = gb_sets_maybe_insert(NeedsConfirming, MsgId, UC), |
571 | State3 = upd_bytes( | |
572 | 1, 0, MsgStatus1, | |
573 | inc_ram_msg_count(State2 #vqstate { next_seq_id = SeqId + 1, | |
574 | len = Len + 1, | |
575 | in_counter = InCount1, | |
576 | persistent_count = PCount1, | |
577 | unconfirmed = UC1 })), | |
585 | State3 = stats({1, 0}, {none, MsgStatus1}, | |
586 | State2#vqstate{ next_seq_id = SeqId + 1, | |
587 | in_counter = InCount1, | |
588 | unconfirmed = UC1 }), | |
578 | 589 | a(reduce_memory_use(maybe_update_rates(State3))). |
579 | 590 | |
580 | 591 | publish_delivered(Msg = #basic_message { is_persistent = IsPersistent, |
581 | 592 | id = MsgId }, |
582 | 593 | MsgProps = #message_properties { |
583 | 594 | needs_confirming = NeedsConfirming }, |
584 | _ChPid, State = #vqstate { next_seq_id = SeqId, | |
585 | out_counter = OutCount, | |
586 | in_counter = InCount, | |
587 | persistent_count = PCount, | |
588 | durable = IsDurable, | |
589 | unconfirmed = UC }) -> | |
595 | _ChPid, _Flow, | |
596 | State = #vqstate { next_seq_id = SeqId, | |
597 | out_counter = OutCount, | |
598 | in_counter = InCount, | |
599 | durable = IsDurable, | |
600 | unconfirmed = UC }) -> | |
590 | 601 | IsPersistent1 = IsDurable andalso IsPersistent, |
591 | 602 | MsgStatus = msg_status(IsPersistent1, true, SeqId, Msg, MsgProps), |
592 | 603 | {MsgStatus1, State1} = maybe_write_to_disk(false, false, MsgStatus, State), |
593 | 604 | State2 = record_pending_ack(m(MsgStatus1), State1), |
594 | PCount1 = PCount + one_if(IsPersistent1), | |
595 | 605 | UC1 = gb_sets_maybe_insert(NeedsConfirming, MsgId, UC), |
596 | State3 = upd_bytes(0, 1, MsgStatus, | |
597 | State2 #vqstate { next_seq_id = SeqId + 1, | |
598 | out_counter = OutCount + 1, | |
599 | in_counter = InCount + 1, | |
600 | persistent_count = PCount1, | |
601 | unconfirmed = UC1 }), | |
606 | State3 = stats({0, 1}, {none, MsgStatus1}, | |
607 | State2 #vqstate { next_seq_id = SeqId + 1, | |
608 | out_counter = OutCount + 1, | |
609 | in_counter = InCount + 1, | |
610 | unconfirmed = UC1 }), | |
602 | 611 | {SeqId, a(reduce_memory_use(maybe_update_rates(State3)))}. |
603 | 612 | |
604 | discard(_MsgId, _ChPid, State) -> State. | |
613 | discard(_MsgId, _ChPid, _Flow, State) -> State. | |
605 | 614 | |
606 | 615 | drain_confirmed(State = #vqstate { confirmed = C }) -> |
607 | 616 | case gb_sets:is_empty(C) of |
663 | 672 | ack([SeqId], State) -> |
664 | 673 | {#msg_status { msg_id = MsgId, |
665 | 674 | is_persistent = IsPersistent, |
666 | msg_on_disk = MsgOnDisk, | |
675 | msg_in_store = MsgInStore, | |
667 | 676 | index_on_disk = IndexOnDisk }, |
668 | 677 | State1 = #vqstate { index_state = IndexState, |
669 | 678 | msg_store_clients = MSCState, |
673 | 682 | true -> rabbit_queue_index:ack([SeqId], IndexState); |
674 | 683 | false -> IndexState |
675 | 684 | end, |
676 | case MsgOnDisk of | |
685 | case MsgInStore of | |
677 | 686 | true -> ok = msg_store_remove(MSCState, IsPersistent, [MsgId]); |
678 | 687 | false -> ok |
679 | 688 | end, |
732 | 741 | {Its, IndexState1} = lists:foldl(fun inext/2, {[], IndexState}, |
733 | 742 | [msg_iterator(State), |
734 | 743 | disk_ack_iterator(State), |
735 | ram_ack_iterator(State)]), | |
744 | ram_ack_iterator(State), | |
745 | qi_ack_iterator(State)]), | |
736 | 746 | ifold(Fun, Acc, Its, State#vqstate{index_state = IndexState1}). |
737 | 747 | |
738 | 748 | len(#vqstate { len = Len }) -> Len. |
739 | 749 | |
740 | 750 | is_empty(State) -> 0 == len(State). |
741 | 751 | |
742 | depth(State = #vqstate { ram_pending_ack = RPA, disk_pending_ack = DPA }) -> | |
743 | len(State) + gb_trees:size(RPA) + gb_trees:size(DPA). | |
752 | depth(State = #vqstate { ram_pending_ack = RPA, | |
753 | disk_pending_ack = DPA, | |
754 | qi_pending_ack = QPA }) -> | |
755 | len(State) + gb_trees:size(RPA) + gb_trees:size(DPA) + gb_trees:size(QPA). | |
744 | 756 | |
745 | 757 | set_ram_duration_target( |
746 | 758 | DurationTarget, State = #vqstate { |
806 | 818 | ram_msg_count = RamMsgCount, |
807 | 819 | ram_msg_count_prev = RamMsgCountPrev, |
808 | 820 | ram_pending_ack = RPA, |
821 | qi_pending_ack = QPA, | |
809 | 822 | ram_ack_count_prev = RamAckCountPrev } = |
810 | 823 | update_rates(State), |
811 | 824 | |
812 | RamAckCount = gb_trees:size(RPA), | |
825 | RamAckCount = gb_trees:size(RPA) + gb_trees:size(QPA), | |
813 | 826 | |
814 | 827 | Duration = %% msgs+acks / (msgs+acks/sec) == sec |
815 | 828 | case lists:all(fun (X) -> X < 0.01 end, |
845 | 858 | |
846 | 859 | info(messages_ready_ram, #vqstate{ram_msg_count = RamMsgCount}) -> |
847 | 860 | RamMsgCount; |
848 | info(messages_unacknowledged_ram, #vqstate{ram_pending_ack = RPA}) -> | |
849 | gb_trees:size(RPA); | |
861 | info(messages_unacknowledged_ram, #vqstate{ram_pending_ack = RPA, | |
862 | qi_pending_ack = QPA}) -> | |
863 | gb_trees:size(RPA) + gb_trees:size(QPA); | |
850 | 864 | info(messages_ram, State) -> |
851 | 865 | info(messages_ready_ram, State) + info(messages_unacknowledged_ram, State); |
852 | 866 | info(messages_persistent, #vqstate{persistent_count = PersistentCount}) -> |
862 | 876 | RamBytes; |
863 | 877 | info(message_bytes_persistent, #vqstate{persistent_bytes = PersistentBytes}) -> |
864 | 878 | PersistentBytes; |
879 | info(disk_reads, #vqstate{disk_read_count = Count}) -> | |
880 | Count; | |
881 | info(disk_writes, #vqstate{disk_write_count = Count}) -> | |
882 | Count; | |
865 | 883 | info(backing_queue_status, #vqstate { |
866 | 884 | q1 = Q1, q2 = Q2, delta = Delta, q3 = Q3, q4 = Q4, |
867 | 885 | len = Len, |
932 | 950 | when Start + Count =< End -> |
933 | 951 | Delta. |
934 | 952 | |
935 | m(MsgStatus = #msg_status { msg = Msg, | |
936 | is_persistent = IsPersistent, | |
937 | msg_on_disk = MsgOnDisk, | |
953 | m(MsgStatus = #msg_status { is_persistent = IsPersistent, | |
954 | msg_in_store = MsgInStore, | |
938 | 955 | index_on_disk = IndexOnDisk }) -> |
939 | 956 | true = (not IsPersistent) or IndexOnDisk, |
940 | true = (not IndexOnDisk) or MsgOnDisk, | |
941 | true = (Msg =/= undefined) or MsgOnDisk, | |
942 | ||
957 | true = msg_in_ram(MsgStatus) or MsgInStore, | |
943 | 958 | MsgStatus. |
944 | 959 | |
945 | 960 | one_if(true ) -> 1; |
958 | 973 | msg = Msg, |
959 | 974 | is_persistent = IsPersistent, |
960 | 975 | is_delivered = IsDelivered, |
961 | msg_on_disk = false, | |
976 | msg_in_store = false, | |
962 | 977 | index_on_disk = false, |
978 | persist_to = determine_persist_to(Msg, MsgProps), | |
963 | 979 | msg_props = MsgProps}. |
964 | 980 | |
981 | beta_msg_status({Msg = #basic_message{id = MsgId}, | |
982 | SeqId, MsgProps, IsPersistent, IsDelivered}) -> | |
983 | MS0 = beta_msg_status0(SeqId, MsgProps, IsPersistent, IsDelivered), | |
984 | MS0#msg_status{msg_id = MsgId, | |
985 | msg = Msg, | |
986 | persist_to = queue_index, | |
987 | msg_in_store = false}; | |
988 | ||
965 | 989 | beta_msg_status({MsgId, SeqId, MsgProps, IsPersistent, IsDelivered}) -> |
990 | MS0 = beta_msg_status0(SeqId, MsgProps, IsPersistent, IsDelivered), | |
991 | MS0#msg_status{msg_id = MsgId, | |
992 | msg = undefined, | |
993 | persist_to = msg_store, | |
994 | msg_in_store = true}. | |
995 | ||
996 | beta_msg_status0(SeqId, MsgProps, IsPersistent, IsDelivered) -> | |
966 | 997 | #msg_status{seq_id = SeqId, |
967 | msg_id = MsgId, | |
968 | 998 | msg = undefined, |
969 | 999 | is_persistent = IsPersistent, |
970 | 1000 | is_delivered = IsDelivered, |
971 | msg_on_disk = true, | |
972 | 1001 | index_on_disk = true, |
973 | 1002 | msg_props = MsgProps}. |
974 | 1003 | |
975 | trim_msg_status(MsgStatus) -> MsgStatus #msg_status { msg = undefined }. | |
1004 | trim_msg_status(MsgStatus) -> | |
1005 | case persist_to(MsgStatus) of | |
1006 | msg_store -> MsgStatus#msg_status{msg = undefined}; | |
1007 | queue_index -> MsgStatus | |
1008 | end. | |
976 | 1009 | |
977 | 1010 | with_msg_store_state({MSCStateP, MSCStateT}, true, Fun) -> |
978 | 1011 | {Result, MSCStateP1} = Fun(MSCStateP), |
1034 | 1067 | maybe_write_delivered(true, SeqId, IndexState) -> |
1035 | 1068 | rabbit_queue_index:deliver([SeqId], IndexState). |
1036 | 1069 | |
1037 | betas_from_index_entries(List, TransientThreshold, RPA, DPA, IndexState) -> | |
1038 | {Filtered, Delivers, Acks} = | |
1070 | betas_from_index_entries(List, TransientThreshold, RPA, DPA, QPA, IndexState) -> | |
1071 | {Filtered, Delivers, Acks, RamReadyCount, RamBytes} = | |
1039 | 1072 | lists:foldr( |
1040 | fun ({_MsgId, SeqId, _MsgProps, IsPersistent, IsDelivered} = M, | |
1041 | {Filtered1, Delivers1, Acks1} = Acc) -> | |
1073 | fun ({_MsgOrId, SeqId, _MsgProps, IsPersistent, IsDelivered} = M, | |
1074 | {Filtered1, Delivers1, Acks1, RRC, RB} = Acc) -> | |
1042 | 1075 | case SeqId < TransientThreshold andalso not IsPersistent of |
1043 | 1076 | true -> {Filtered1, |
1044 | 1077 | cons_if(not IsDelivered, SeqId, Delivers1), |
1045 | [SeqId | Acks1]}; | |
1046 | false -> case (gb_trees:is_defined(SeqId, RPA) orelse | |
1047 | gb_trees:is_defined(SeqId, DPA)) of | |
1048 | false -> {?QUEUE:in_r(m(beta_msg_status(M)), | |
1049 | Filtered1), | |
1050 | Delivers1, Acks1}; | |
1051 | true -> Acc | |
1052 | end | |
1078 | [SeqId | Acks1], RRC, RB}; | |
1079 | false -> MsgStatus = m(beta_msg_status(M)), | |
1080 | HaveMsg = msg_in_ram(MsgStatus), | |
1081 | Size = msg_size(MsgStatus), | |
1082 | case (gb_trees:is_defined(SeqId, RPA) orelse | |
1083 | gb_trees:is_defined(SeqId, DPA) orelse | |
1084 | gb_trees:is_defined(SeqId, QPA)) of | |
1085 | false -> {?QUEUE:in_r(MsgStatus, Filtered1), | |
1086 | Delivers1, Acks1, | |
1087 | RRC + one_if(HaveMsg), | |
1088 | RB + one_if(HaveMsg) * Size}; | |
1089 | true -> Acc %% [0] | |
1090 | end | |
1053 | 1091 | end |
1054 | end, {?QUEUE:new(), [], []}, List), | |
1055 | {Filtered, rabbit_queue_index:ack( | |
1056 | Acks, rabbit_queue_index:deliver(Delivers, IndexState))}. | |
1092 | end, {?QUEUE:new(), [], [], 0, 0}, List), | |
1093 | {Filtered, RamReadyCount, RamBytes, | |
1094 | rabbit_queue_index:ack( | |
1095 | Acks, rabbit_queue_index:deliver(Delivers, IndexState))}. | |
1096 | %% [0] We don't increase RamBytes here, even though it pertains to | |
1097 | %% unacked messages too, since if HaveMsg then the message must have | |
1098 | %% been stored in the QI, thus the message must have been in | |
1099 | %% qi_pending_ack, thus it must already have been in RAM. | |
1057 | 1100 | |
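The reworked `betas_from_index_entries/6` above now also accumulates a count and byte total for messages that are already in RAM (those embedded in the queue index), alongside the existing filtered/deliver/ack split. A rough Python translation of the fold's logic (dicts standing in for index entries, a set for the union of the three pending-ack trees; ordering details of the Erlang `foldr`/`in_r` are elided):

```python
def betas_from_index_entries(entries, transient_threshold, pending_acks):
    """Split queue-index entries into kept betas, deliver/ack lists,
    and RAM counters, mirroring the Erlang fold's case analysis."""
    filtered, delivers, acks = [], [], []
    ram_ready_count, ram_bytes = 0, 0
    for entry in entries:
        seq_id = entry["seq_id"]
        if seq_id < transient_threshold and not entry["is_persistent"]:
            # Stale transient entry from a previous incarnation:
            # record delivery if needed, then ack it away.
            if not entry["is_delivered"]:
                delivers.append(seq_id)
            acks.append(seq_id)
        elif seq_id not in pending_acks:
            # Keep it as a beta; count it if the body is in RAM
            # (i.e. the message was embedded in the index entry).
            filtered.append(entry)
            if entry.get("msg") is not None:
                ram_ready_count += 1
                ram_bytes += entry["size"]
    return filtered, delivers, acks, ram_ready_count, ram_bytes
```

As footnote [0] in the diff explains, entries skipped because they are pending ack do not adjust `ram_bytes`: an embedded message pending ack was already counted via `qi_pending_ack`.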
1058 | 1101 | expand_delta(SeqId, ?BLANK_DELTA_PATTERN(X)) -> |
1059 | 1102 | d(#delta { start_seq_id = SeqId, count = 1, end_seq_id = SeqId + 1 }); |
1100 | 1143 | next_seq_id = NextSeqId, |
1101 | 1144 | ram_pending_ack = gb_trees:empty(), |
1102 | 1145 | disk_pending_ack = gb_trees:empty(), |
1146 | qi_pending_ack = gb_trees:empty(), | |
1103 | 1147 | index_state = IndexState1, |
1104 | 1148 | msg_store_clients = {PersistentClient, TransientClient}, |
1105 | 1149 | durable = IsDurable, |
1124 | 1168 | unconfirmed = gb_sets:new(), |
1125 | 1169 | confirmed = gb_sets:new(), |
1126 | 1170 | ack_out_counter = 0, |
1127 | ack_in_counter = 0 }, | |
1171 | ack_in_counter = 0, | |
1172 | disk_read_count = 0, | |
1173 | disk_write_count = 0 }, | |
1128 | 1174 | a(maybe_deltas_to_betas(State)). |
1129 | 1175 | |
1130 | 1176 | blank_rates(Now) -> |
1140 | 1186 | true -> State #vqstate { q3 = ?QUEUE:in_r(MsgStatus, Q3) }; |
1141 | 1187 | false -> {Msg, State1 = #vqstate { q4 = Q4a }} = |
1142 | 1188 | read_msg(MsgStatus, State), |
1143 | upd_ram_bytes( | |
1144 | 1, MsgStatus, | |
1145 | inc_ram_msg_count( | |
1146 | State1 #vqstate { q4 = ?QUEUE:in_r(MsgStatus#msg_status { | |
1147 | msg = Msg }, Q4a) })) | |
1189 | MsgStatus1 = MsgStatus#msg_status{msg = Msg}, | |
1190 | stats(ready0, {MsgStatus, MsgStatus1}, | |
1191 | State1 #vqstate { q4 = ?QUEUE:in_r(MsgStatus1, Q4a) }) | |
1148 | 1192 | end; |
1149 | 1193 | in_r(MsgStatus, State = #vqstate { q4 = Q4 }) -> |
1150 | 1194 | State #vqstate { q4 = ?QUEUE:in_r(MsgStatus, Q4) }. |
1167 | 1211 | read_msg(#msg_status{msg = Msg}, State) -> |
1168 | 1212 | {Msg, State}. |
1169 | 1213 | |
1170 | read_msg(MsgId, IsPersistent, State = #vqstate{msg_store_clients = MSCState}) -> | |
1214 | read_msg(MsgId, IsPersistent, State = #vqstate{msg_store_clients = MSCState, | |
1215 | disk_read_count = Count}) -> | |
1171 | 1216 | {{ok, Msg = #basic_message {}}, MSCState1} = |
1172 | 1217 | msg_store_read(MSCState, IsPersistent, MsgId), |
1173 | {Msg, State #vqstate {msg_store_clients = MSCState1}}. | |
1174 | ||
1175 | inc_ram_msg_count(State = #vqstate{ram_msg_count = RamMsgCount}) -> | |
1176 | State#vqstate{ram_msg_count = RamMsgCount + 1}. | |
1177 | ||
1178 | upd_bytes(SignReady, SignUnacked, | |
1179 | MsgStatus = #msg_status{msg = undefined}, State) -> | |
1180 | upd_bytes0(SignReady, SignUnacked, MsgStatus, State); | |
1181 | upd_bytes(SignReady, SignUnacked, MsgStatus = #msg_status{msg = _}, State) -> | |
1182 | upd_ram_bytes(SignReady + SignUnacked, MsgStatus, | |
1183 | upd_bytes0(SignReady, SignUnacked, MsgStatus, State)). | |
1184 | ||
1185 | upd_bytes0(SignReady, SignUnacked, MsgStatus = #msg_status{is_persistent = IsP}, | |
1186 | State = #vqstate{bytes = Bytes, | |
1187 | unacked_bytes = UBytes, | |
1188 | persistent_bytes = PBytes}) -> | |
1218 | {Msg, State #vqstate {msg_store_clients = MSCState1, | |
1219 | disk_read_count = Count + 1}}. | |
1220 | ||
1221 | stats(Signs, Statuses, State) -> | |
1222 | stats0(expand_signs(Signs), expand_statuses(Statuses), State). | |
1223 | ||
1224 | expand_signs(ready0) -> {0, 0, true}; | |
1225 | expand_signs({A, B}) -> {A, B, false}. | |
1226 | ||
1227 | expand_statuses({none, A}) -> {false, msg_in_ram(A), A}; | |
1228 | expand_statuses({B, none}) -> {msg_in_ram(B), false, B}; | |
1229 | expand_statuses({B, A}) -> {msg_in_ram(B), msg_in_ram(A), B}. | |
1230 | ||
1231 | %% In this function at least, we are religious: the variable name | |
1232 | %% contains "Ready" or "Unacked" iff that is what it counts. If | |
1233 | %% neither is present it counts both. | |
1234 | stats0({DeltaReady, DeltaUnacked, ReadyMsgPaged}, | |
1235 | {InRamBefore, InRamAfter, MsgStatus}, | |
1236 | State = #vqstate{len = ReadyCount, | |
1237 | bytes = ReadyBytes, | |
1238 | ram_msg_count = RamReadyCount, | |
1239 | persistent_count = PersistentCount, | |
1240 | unacked_bytes = UnackedBytes, | |
1241 | ram_bytes = RamBytes, | |
1242 | persistent_bytes = PersistentBytes}) -> | |
1189 | 1243 | S = msg_size(MsgStatus), |
1190 | SignTotal = SignReady + SignUnacked, | |
1191 | State#vqstate{bytes = Bytes + SignReady * S, | |
1192 | unacked_bytes = UBytes + SignUnacked * S, | |
1193 | persistent_bytes = PBytes + one_if(IsP) * S * SignTotal}. | |
1194 | ||
1195 | upd_ram_bytes(Sign, MsgStatus, State = #vqstate{ram_bytes = RamBytes}) -> | |
1196 | State#vqstate{ram_bytes = RamBytes + Sign * msg_size(MsgStatus)}. | |
1244 | DeltaTotal = DeltaReady + DeltaUnacked, | |
1245 | DeltaRam = case {InRamBefore, InRamAfter} of | |
1246 | {false, false} -> 0; | |
1247 | {false, true} -> 1; | |
1248 | {true, false} -> -1; | |
1249 | {true, true} -> 0 | |
1250 | end, | |
1251 | DeltaRamReady = case DeltaReady of | |
1252 | 1 -> one_if(InRamAfter); | |
1253 | -1 -> -one_if(InRamBefore); | |
1254 | 0 when ReadyMsgPaged -> DeltaRam; | |
1255 | 0 -> 0 | |
1256 | end, | |
1257 | DeltaPersistent = DeltaTotal * one_if(MsgStatus#msg_status.is_persistent), | |
1258 | State#vqstate{len = ReadyCount + DeltaReady, | |
1259 | ram_msg_count = RamReadyCount + DeltaRamReady, | |
1260 | persistent_count = PersistentCount + DeltaPersistent, | |
1261 | bytes = ReadyBytes + DeltaReady * S, | |
1262 | unacked_bytes = UnackedBytes + DeltaUnacked * S, | |
1263 | ram_bytes = RamBytes + DeltaRam * S, | |
1264 | persistent_bytes = PersistentBytes + DeltaPersistent * S}. | |
1197 | 1265 | |
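The `stats0` bookkeeping above replaces the old ad-hoc `upd_bytes`/`upd_ram_bytes` calls with a single delta computation. The two trickiest terms are `DeltaRam` (did the message enter or leave RAM?) and `DeltaRamReady` (how does the ready-and-in-RAM count change?). A small Python sketch of just those two case analyses, matching the Erlang clause by clause:

```python
def delta_ram(in_ram_before, in_ram_after):
    # {false,false} -> 0; {false,true} -> 1; {true,false} -> -1; {true,true} -> 0
    return int(in_ram_after) - int(in_ram_before)

def delta_ram_ready(delta_ready, in_ram_before, in_ram_after, ready_msg_paged):
    """ram_msg_count counts ready messages in RAM, so it moves with
    DeltaReady gated on the in-RAM flag, or with DeltaRam when the
    ready message is merely being paged in/out (ready0)."""
    if delta_ready == 1:
        return int(in_ram_after)       # one_if(InRamAfter)
    if delta_ready == -1:
        return -int(in_ram_before)     # -one_if(InRamBefore)
    return delta_ram(in_ram_before, in_ram_after) if ready_msg_paged else 0
```

The remaining fields (`bytes`, `unacked_bytes`, `ram_bytes`, `persistent_bytes`) are then simply the corresponding delta multiplied by the message size.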
1198 | 1266 | msg_size(#msg_status{msg_props = #message_properties{size = Size}}) -> Size. |
1199 | 1267 | |
1202 | 1270 | remove(AckRequired, MsgStatus = #msg_status { |
1203 | 1271 | seq_id = SeqId, |
1204 | 1272 | msg_id = MsgId, |
1205 | msg = Msg, | |
1206 | 1273 | is_persistent = IsPersistent, |
1207 | 1274 | is_delivered = IsDelivered, |
1208 | msg_on_disk = MsgOnDisk, | |
1275 | msg_in_store = MsgInStore, | |
1209 | 1276 | index_on_disk = IndexOnDisk }, |
1210 | State = #vqstate {ram_msg_count = RamMsgCount, | |
1211 | out_counter = OutCount, | |
1277 | State = #vqstate {out_counter = OutCount, | |
1212 | 1278 | index_state = IndexState, |
1213 | msg_store_clients = MSCState, | |
1214 | len = Len, | |
1215 | persistent_count = PCount}) -> | |
1279 | msg_store_clients = MSCState}) -> | |
1216 | 1280 | %% 1. Mark it delivered if necessary |
1217 | 1281 | IndexState1 = maybe_write_delivered( |
1218 | 1282 | IndexOnDisk andalso not IsDelivered, |
1223 | 1287 | ok = msg_store_remove(MSCState, IsPersistent, [MsgId]) |
1224 | 1288 | end, |
1225 | 1289 | Ack = fun () -> rabbit_queue_index:ack([SeqId], IndexState1) end, |
1226 | IndexState2 = case {AckRequired, MsgOnDisk, IndexOnDisk} of | |
1227 | {false, true, false} -> Rem(), IndexState1; | |
1228 | {false, true, true} -> Rem(), Ack(); | |
1229 | _ -> IndexState1 | |
1290 | IndexState2 = case {AckRequired, MsgInStore, IndexOnDisk} of | |
1291 | {false, true, false} -> Rem(), IndexState1; | |
1292 | {false, true, true} -> Rem(), Ack(); | |
1293 | {false, false, true} -> Ack(); | |
1294 | _ -> IndexState1 | |
1230 | 1295 | end, |
1231 | 1296 | |
1232 | 1297 | %% 3. If an ack is required, add something sensible to PA |
1237 | 1302 | {SeqId, StateN}; |
1238 | 1303 | false -> {undefined, State} |
1239 | 1304 | end, |
1240 | ||
1241 | PCount1 = PCount - one_if(IsPersistent andalso not AckRequired), | |
1242 | RamMsgCount1 = RamMsgCount - one_if(Msg =/= undefined), | |
1243 | 1305 | State2 = case AckRequired of |
1244 | false -> upd_bytes(-1, 0, MsgStatus, State1); | |
1245 | true -> upd_bytes(-1, 1, MsgStatus, State1) | |
1306 | false -> stats({-1, 0}, {MsgStatus, none}, State1); | |
1307 | true -> stats({-1, 1}, {MsgStatus, MsgStatus}, State1) | |
1246 | 1308 | end, |
1247 | 1309 | {AckTag, maybe_update_rates( |
1248 | State2 #vqstate {ram_msg_count = RamMsgCount1, | |
1249 | out_counter = OutCount + 1, | |
1250 | index_state = IndexState2, | |
1251 | len = Len - 1, | |
1252 | persistent_count = PCount1})}. | |
1253 | ||
1254 | purge_betas_and_deltas(Stats, | |
1255 | State = #vqstate { q3 = Q3, | |
1256 | index_state = IndexState, | |
1257 | msg_store_clients = MSCState }) -> | |
1310 | State2 #vqstate {out_counter = OutCount + 1, | |
1311 | index_state = IndexState2})}. | |
1312 | ||
1313 | purge_betas_and_deltas(State = #vqstate { q3 = Q3 }) -> | |
1258 | 1314 | case ?QUEUE:is_empty(Q3) of |
1259 | true -> {Stats, State}; | |
1260 | false -> {Stats1, IndexState1} = remove_queue_entries( | |
1261 | Q3, Stats, IndexState, MSCState), | |
1262 | purge_betas_and_deltas(Stats1, | |
1263 | maybe_deltas_to_betas( | |
1264 | State #vqstate { | |
1265 | q3 = ?QUEUE:new(), | |
1266 | index_state = IndexState1 })) | |
1267 | end. | |
1268 | ||
1269 | remove_queue_entries(Q, {RamBytes, PCount, PBytes}, | |
1270 | IndexState, MSCState) -> | |
1271 | {MsgIdsByStore, RamBytes1, PBytes1, Delivers, Acks} = | |
1315 | true -> State; | |
1316 | false -> State1 = remove_queue_entries(Q3, State), | |
1317 | purge_betas_and_deltas(maybe_deltas_to_betas( | |
1318 | State1#vqstate{q3 = ?QUEUE:new()})) | |
1319 | end. | |
1320 | ||
1321 | remove_queue_entries(Q, State = #vqstate{index_state = IndexState, | |
1322 | msg_store_clients = MSCState}) -> | |
1323 | {MsgIdsByStore, Delivers, Acks, State1} = | |
1272 | 1324 | ?QUEUE:foldl(fun remove_queue_entries1/2, |
1273 | {orddict:new(), RamBytes, PBytes, [], []}, Q), | |
1325 | {orddict:new(), [], [], State}, Q), | |
1274 | 1326 | ok = orddict:fold(fun (IsPersistent, MsgIds, ok) -> |
1275 | 1327 | msg_store_remove(MSCState, IsPersistent, MsgIds) |
1276 | 1328 | end, ok, MsgIdsByStore), |
1277 | {{RamBytes1, | |
1278 | PCount - case orddict:find(true, MsgIdsByStore) of | |
1279 | error -> 0; | |
1280 | {ok, Ids} -> length(Ids) | |
1281 | end, | |
1282 | PBytes1}, | |
1283 | rabbit_queue_index:ack(Acks, | |
1284 | rabbit_queue_index:deliver(Delivers, IndexState))}. | |
1329 | IndexState1 = rabbit_queue_index:ack( | |
1330 | Acks, rabbit_queue_index:deliver(Delivers, IndexState)), | |
1331 | State1#vqstate{index_state = IndexState1}. | |
1285 | 1332 | |
1286 | 1333 | remove_queue_entries1( |
1287 | #msg_status { msg_id = MsgId, seq_id = SeqId, msg = Msg, | |
1288 | is_delivered = IsDelivered, msg_on_disk = MsgOnDisk, | |
1289 | index_on_disk = IndexOnDisk, is_persistent = IsPersistent, | |
1290 | msg_props = #message_properties { size = Size } }, | |
1291 | {MsgIdsByStore, RamBytes, PBytes, Delivers, Acks}) -> | |
1292 | {case MsgOnDisk of | |
1334 | #msg_status { msg_id = MsgId, seq_id = SeqId, is_delivered = IsDelivered, | |
1335 | msg_in_store = MsgInStore, index_on_disk = IndexOnDisk, | |
1336 | is_persistent = IsPersistent} = MsgStatus, | |
1337 | {MsgIdsByStore, Delivers, Acks, State}) -> | |
1338 | {case MsgInStore of | |
1293 | 1339 | true -> rabbit_misc:orddict_cons(IsPersistent, MsgId, MsgIdsByStore); |
1294 | 1340 | false -> MsgIdsByStore |
1295 | 1341 | end, |
1296 | RamBytes - Size * one_if(Msg =/= undefined), | |
1297 | PBytes - Size * one_if(IsPersistent), | |
1298 | 1342 | cons_if(IndexOnDisk andalso not IsDelivered, SeqId, Delivers), |
1299 | cons_if(IndexOnDisk, SeqId, Acks)}. | |
1343 | cons_if(IndexOnDisk, SeqId, Acks), | |
1344 | stats({-1, 0}, {MsgStatus, none}, State)}. | |
1300 | 1345 | |
1301 | 1346 | %%---------------------------------------------------------------------------- |
1302 | 1347 | %% Internal gubbins for publishing |
1303 | 1348 | %%---------------------------------------------------------------------------- |
1304 | 1349 | |
1305 | 1350 | maybe_write_msg_to_disk(_Force, MsgStatus = #msg_status { |
1306 | msg_on_disk = true }, _MSCState) -> | |
1307 | MsgStatus; | |
1351 | msg_in_store = true }, State) -> | |
1352 | {MsgStatus, State}; | |
1308 | 1353 | maybe_write_msg_to_disk(Force, MsgStatus = #msg_status { |
1309 | 1354 | msg = Msg, msg_id = MsgId, |
1310 | is_persistent = IsPersistent }, MSCState) | |
1355 | is_persistent = IsPersistent }, | |
1356 | State = #vqstate{ msg_store_clients = MSCState, | |
1357 | disk_write_count = Count}) | |
1311 | 1358 | when Force orelse IsPersistent -> |
1312 | Msg1 = Msg #basic_message { | |
1313 | %% don't persist any recoverable decoded properties | |
1314 | content = rabbit_binary_parser:clear_decoded_content( | |
1315 | Msg #basic_message.content)}, | |
1316 | ok = msg_store_write(MSCState, IsPersistent, MsgId, Msg1), | |
1317 | MsgStatus #msg_status { msg_on_disk = true }; | |
1318 | maybe_write_msg_to_disk(_Force, MsgStatus, _MSCState) -> | |
1319 | MsgStatus. | |
1359 | case persist_to(MsgStatus) of | |
1360 | msg_store -> ok = msg_store_write(MSCState, IsPersistent, MsgId, | |
1361 | prepare_to_store(Msg)), | |
1362 | {MsgStatus#msg_status{msg_in_store = true}, | |
1363 | State#vqstate{disk_write_count = Count + 1}}; | |
1364 | queue_index -> {MsgStatus, State} | |
1365 | end; | |
1366 | maybe_write_msg_to_disk(_Force, MsgStatus, State) -> | |
1367 | {MsgStatus, State}. | |
1320 | 1368 | |
1321 | 1369 | maybe_write_index_to_disk(_Force, MsgStatus = #msg_status { |
1322 | index_on_disk = true }, IndexState) -> | |
1323 | true = MsgStatus #msg_status.msg_on_disk, %% ASSERTION | |
1324 | {MsgStatus, IndexState}; | |
1370 | index_on_disk = true }, State) -> | |
1371 | {MsgStatus, State}; | |
1325 | 1372 | maybe_write_index_to_disk(Force, MsgStatus = #msg_status { |
1373 | msg = Msg, | |
1326 | 1374 | msg_id = MsgId, |
1327 | 1375 | seq_id = SeqId, |
1328 | 1376 | is_persistent = IsPersistent, |
1329 | 1377 | is_delivered = IsDelivered, |
1330 | msg_props = MsgProps}, IndexState) | |
1378 | msg_props = MsgProps}, | |
1379 | State = #vqstate{target_ram_count = TargetRamCount, | |
1380 | disk_write_count = DiskWriteCount, | |
1381 | index_state = IndexState}) | |
1331 | 1382 | when Force orelse IsPersistent -> |
1332 | true = MsgStatus #msg_status.msg_on_disk, %% ASSERTION | |
1383 | {MsgOrId, DiskWriteCount1} = | |
1384 | case persist_to(MsgStatus) of | |
1385 | msg_store -> {MsgId, DiskWriteCount}; | |
1386 | queue_index -> {prepare_to_store(Msg), DiskWriteCount + 1} | |
1387 | end, | |
1333 | 1388 | IndexState1 = rabbit_queue_index:publish( |
1334 | MsgId, SeqId, MsgProps, IsPersistent, IndexState), | |
1335 | {MsgStatus #msg_status { index_on_disk = true }, | |
1336 | maybe_write_delivered(IsDelivered, SeqId, IndexState1)}; | |
1337 | maybe_write_index_to_disk(_Force, MsgStatus, IndexState) -> | |
1338 | {MsgStatus, IndexState}. | |
1339 | ||
1340 | maybe_write_to_disk(ForceMsg, ForceIndex, MsgStatus, | |
1341 | State = #vqstate { index_state = IndexState, | |
1342 | msg_store_clients = MSCState }) -> | |
1343 | MsgStatus1 = maybe_write_msg_to_disk(ForceMsg, MsgStatus, MSCState), | |
1344 | {MsgStatus2, IndexState1} = | |
1345 | maybe_write_index_to_disk(ForceIndex, MsgStatus1, IndexState), | |
1346 | {MsgStatus2, State #vqstate { index_state = IndexState1 }}. | |
1389 | MsgOrId, SeqId, MsgProps, IsPersistent, TargetRamCount, | |
1390 | IndexState), | |
1391 | IndexState2 = maybe_write_delivered(IsDelivered, SeqId, IndexState1), | |
1392 | {MsgStatus#msg_status{index_on_disk = true}, | |
1393 | State#vqstate{index_state = IndexState2, | |
1394 | disk_write_count = DiskWriteCount1}}; | |
1395 | ||
1396 | maybe_write_index_to_disk(_Force, MsgStatus, State) -> | |
1397 | {MsgStatus, State}. | |
1398 | ||
1399 | maybe_write_to_disk(ForceMsg, ForceIndex, MsgStatus, State) -> | |
1400 | {MsgStatus1, State1} = maybe_write_msg_to_disk(ForceMsg, MsgStatus, State), | |
1401 | maybe_write_index_to_disk(ForceIndex, MsgStatus1, State1). | |
1402 | ||
1403 | determine_persist_to(#basic_message{ | |
1404 | content = #content{properties = Props, | |
1405 | properties_bin = PropsBin}}, | |
1406 | #message_properties{size = BodySize}) -> | |
1407 | {ok, IndexMaxSize} = application:get_env( | |
1408 | rabbit, queue_index_embed_msgs_below), | |
1409 | %% The >= is so that you can set the env to 0 and never persist | |
1410 | %% to the index. | |
1411 | %% | |
1412 | %% We want this to be fast, so we avoid size(term_to_binary()) | |
1413 | %% here, or using the term size estimation from truncate.erl, both | |
1414 | %% of which are too slow. So instead, if the message body size | |
1415 | %% goes over the limit then we avoid any other checks. | |
1416 | %% | |
1417 | %% If it doesn't we need to decide if the properties will push | |
1418 | %% it past the limit. If we have the encoded properties (usual | |
1419 | %% case) we can just check their size. If we don't (message came | |
1420 | %% via the direct client), we make a guess based on the number of | |
1421 | %% headers. | |
1422 | case BodySize >= IndexMaxSize of | |
1423 | true -> msg_store; | |
1424 | false -> Est = case is_binary(PropsBin) of | |
1425 | true -> BodySize + size(PropsBin); | |
1426 | false -> #'P_basic'{headers = Hs} = Props, | |
1427 | case Hs of | |
1428 | undefined -> 0; | |
1429 | _ -> length(Hs) | |
1430 | end * ?HEADER_GUESS_SIZE + BodySize | |
1431 | end, | |
1432 | case Est >= IndexMaxSize of | |
1433 | true -> msg_store; | |
1434 | false -> queue_index | |
1435 | end | |
1436 | end. | |
1437 | ||
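The comment block in `determine_persist_to` describes the size heuristic for deciding whether a message is embedded in the queue index or sent to the message store. An illustrative Python rendering of that decision (the `38` stand-in for `?HEADER_GUESS_SIZE` is a hypothetical value, not taken from the source):

```python
HEADER_GUESS_SIZE = 38  # hypothetical stand-in for ?HEADER_GUESS_SIZE

def determine_persist_to(body_size, props_bin, headers, index_max_size):
    """Fast path on body size alone; otherwise estimate the encoded
    size from the properties binary, or guess from the header count
    when the message arrived via the direct client."""
    if body_size >= index_max_size:
        # >= means a limit of 0 never embeds in the index
        return "msg_store"
    if props_bin is not None:
        est = body_size + len(props_bin)
    else:
        est = body_size + len(headers or []) * HEADER_GUESS_SIZE
    return "msg_store" if est >= index_max_size else "queue_index"
```

In the real code `index_max_size` comes from the `queue_index_embed_msgs_below` application env, read on every call.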
1438 | persist_to(#msg_status{persist_to = To}) -> To. | |
1439 | ||
1440 | prepare_to_store(Msg) -> | |
1441 | Msg#basic_message{ | |
1442 | %% don't persist any recoverable decoded properties | |
1443 | content = rabbit_binary_parser:clear_decoded_content( | |
1444 | Msg #basic_message.content)}. | |
1347 | 1445 | |
1348 | 1446 | %%---------------------------------------------------------------------------- |
1349 | 1447 | %% Internal gubbins for acks |
1350 | 1448 | %%---------------------------------------------------------------------------- |
1351 | 1449 | |
1352 | record_pending_ack(#msg_status { seq_id = SeqId, msg = Msg } = MsgStatus, | |
1450 | record_pending_ack(#msg_status { seq_id = SeqId } = MsgStatus, | |
1353 | 1451 | State = #vqstate { ram_pending_ack = RPA, |
1354 | 1452 | disk_pending_ack = DPA, |
1453 | qi_pending_ack = QPA, | |
1355 | 1454 | ack_in_counter = AckInCount}) -> |
1356 | {RPA1, DPA1} = | |
1357 | case Msg of | |
1358 | undefined -> {RPA, gb_trees:insert(SeqId, MsgStatus, DPA)}; | |
1359 | _ -> {gb_trees:insert(SeqId, MsgStatus, RPA), DPA} | |
1455 | Insert = fun (Tree) -> gb_trees:insert(SeqId, MsgStatus, Tree) end, | |
1456 | {RPA1, DPA1, QPA1} = | |
1457 | case {msg_in_ram(MsgStatus), persist_to(MsgStatus)} of | |
1458 | {false, _} -> {RPA, Insert(DPA), QPA}; | |
1459 | {_, queue_index} -> {RPA, DPA, Insert(QPA)}; | |
1460 | {_, msg_store} -> {Insert(RPA), DPA, QPA} | |
1360 | 1461 | end, |
1361 | 1462 | State #vqstate { ram_pending_ack = RPA1, |
1362 | 1463 | disk_pending_ack = DPA1, |
1464 | qi_pending_ack = QPA1, | |
1363 | 1465 | ack_in_counter = AckInCount + 1}. |
1364 | 1466 | |
1365 | 1467 | lookup_pending_ack(SeqId, #vqstate { ram_pending_ack = RPA, |
1366 | disk_pending_ack = DPA }) -> | |
1468 | disk_pending_ack = DPA, | |
1469 | qi_pending_ack = QPA}) -> | |
1367 | 1470 | case gb_trees:lookup(SeqId, RPA) of |
1368 | 1471 | {value, V} -> V; |
1369 | none -> gb_trees:get(SeqId, DPA) | |
1370 | end. | |
1371 | ||
1372 | %% First parameter = UpdatePersistentCount | |
1472 | none -> case gb_trees:lookup(SeqId, DPA) of | |
1473 | {value, V} -> V; | |
1474 | none -> gb_trees:get(SeqId, QPA) | |
1475 | end | |
1476 | end. | |
1477 | ||
1478 | %% First parameter = UpdateStats | |
1373 | 1479 | remove_pending_ack(true, SeqId, State) -> |
1374 | {MsgStatus, State1 = #vqstate { persistent_count = PCount }} = | |
1375 | remove_pending_ack(false, SeqId, State), | |
1376 | PCount1 = PCount - one_if(MsgStatus#msg_status.is_persistent), | |
1377 | {MsgStatus, upd_bytes(0, -1, MsgStatus, | |
1378 | State1 # vqstate{ persistent_count = PCount1 })}; | |
1379 | remove_pending_ack(false, SeqId, State = #vqstate { ram_pending_ack = RPA, | |
1380 | disk_pending_ack = DPA }) -> | |
1480 | {MsgStatus, State1} = remove_pending_ack(false, SeqId, State), | |
1481 | {MsgStatus, stats({0, -1}, {MsgStatus, none}, State1)}; | |
1482 | remove_pending_ack(false, SeqId, State = #vqstate{ram_pending_ack = RPA, | |
1483 | disk_pending_ack = DPA, | |
1484 | qi_pending_ack = QPA}) -> | |
1381 | 1485 | case gb_trees:lookup(SeqId, RPA) of |
1382 | 1486 | {value, V} -> RPA1 = gb_trees:delete(SeqId, RPA), |
1383 | 1487 | {V, State #vqstate { ram_pending_ack = RPA1 }}; |
1384 | none -> DPA1 = gb_trees:delete(SeqId, DPA), | |
1385 | {gb_trees:get(SeqId, DPA), | |
1386 | State #vqstate { disk_pending_ack = DPA1 }} | |
1488 | none -> case gb_trees:lookup(SeqId, DPA) of | |
1489 | {value, V} -> | |
1490 | DPA1 = gb_trees:delete(SeqId, DPA), | |
1491 | {V, State#vqstate{disk_pending_ack = DPA1}}; | |
1492 | none -> | |
1493 | QPA1 = gb_trees:delete(SeqId, QPA), | |
1494 | {gb_trees:get(SeqId, QPA), | |
1495 | State#vqstate{qi_pending_ack = QPA1}} | |
1496 | end | |
1387 | 1497 | end. |
1388 | 1498 | |
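Pending acks are now spread across three `gb_trees` — `ram_pending_ack`, `disk_pending_ack` and the new `qi_pending_ack` — and both lookup and removal cascade through them in that order. A Python sketch of the routing and lookup, with plain dicts standing in for `gb_trees`:

```python
def pending_ack_tree(msg_in_ram, persist_to):
    """Which tree record_pending_ack targets, clause for clause:
    {false,_} -> DPA; {_,queue_index} -> QPA; {_,msg_store} -> RPA."""
    if not msg_in_ram:
        return "disk_pending_ack"
    if persist_to == "queue_index":
        return "qi_pending_ack"
    return "ram_pending_ack"

def lookup_pending_ack(seq_id, rpa, dpa, qpa):
    # Cascade through the three maps; a final miss raises, as
    # gb_trees:get/2 would crash in the Erlang original.
    for tree in (rpa, dpa, qpa):
        if seq_id in tree:
            return tree[seq_id]
    raise KeyError(seq_id)
```

The same three-way cascade appears in `remove_pending_ack/3` and in the iterator/`next` paths further down the diff.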
1389 | 1499 | purge_pending_ack(KeepPersistent, |
1390 | 1500 | State = #vqstate { ram_pending_ack = RPA, |
1391 | 1501 | disk_pending_ack = DPA, |
1502 | qi_pending_ack = QPA, | |
1392 | 1503 | index_state = IndexState, |
1393 | 1504 | msg_store_clients = MSCState }) -> |
1394 | 1505 | F = fun (_SeqId, MsgStatus, Acc) -> accumulate_ack(MsgStatus, Acc) end, |
1395 | 1506 | {IndexOnDiskSeqIds, MsgIdsByStore, _AllMsgIds} = |
1396 | 1507 | rabbit_misc:gb_trees_fold( |
1397 | F, rabbit_misc:gb_trees_fold(F, accumulate_ack_init(), RPA), DPA), | |
1508 | F, rabbit_misc:gb_trees_fold( | |
1509 | F, rabbit_misc:gb_trees_fold( | |
1510 | F, accumulate_ack_init(), RPA), DPA), QPA), | |
1398 | 1511 | State1 = State #vqstate { ram_pending_ack = gb_trees:empty(), |
1399 | disk_pending_ack = gb_trees:empty() }, | |
1512 | disk_pending_ack = gb_trees:empty(), | |
1513 | qi_pending_ack = gb_trees:empty()}, | |
1400 | 1514 | |
1401 | 1515 | case KeepPersistent of |
1402 | 1516 | true -> case orddict:find(false, MsgIdsByStore) of |
1417 | 1531 | accumulate_ack(#msg_status { seq_id = SeqId, |
1418 | 1532 | msg_id = MsgId, |
1419 | 1533 | is_persistent = IsPersistent, |
1420 | msg_on_disk = MsgOnDisk, | |
1534 | msg_in_store = MsgInStore, | |
1421 | 1535 | index_on_disk = IndexOnDisk }, |
1422 | 1536 | {IndexOnDiskSeqIdsAcc, MsgIdsByStore, AllMsgIds}) -> |
1423 | 1537 | {cons_if(IndexOnDisk, SeqId, IndexOnDiskSeqIdsAcc), |
1424 | case MsgOnDisk of | |
1538 | case MsgInStore of | |
1425 | 1539 | true -> rabbit_misc:orddict_cons(IsPersistent, MsgId, MsgIdsByStore); |
1426 | 1540 | false -> MsgIdsByStore |
1427 | 1541 | end, |
1468 | 1582 | gb_sets:union(MIOD, Confirmed) }) |
1469 | 1583 | end). |
1470 | 1584 | |
1585 | msgs_and_indices_written_to_disk(Callback, MsgIdSet) -> | |
1586 | Callback(?MODULE, | |
1587 | fun (?MODULE, State) -> record_confirms(MsgIdSet, State) end). | |
1588 | ||
1471 | 1589 | %%---------------------------------------------------------------------------- |
1472 | 1590 | %% Internal plumbing for requeue |
1473 | 1591 | %%---------------------------------------------------------------------------- |
1474 | 1592 | |
1475 | 1593 | publish_alpha(#msg_status { msg = undefined } = MsgStatus, State) -> |
1476 | 1594 | {Msg, State1} = read_msg(MsgStatus, State), |
1477 | {MsgStatus#msg_status { msg = Msg }, | |
1478 | upd_ram_bytes(1, MsgStatus, inc_ram_msg_count(State1))}; %% [1] | |
1595 | MsgStatus1 = MsgStatus#msg_status { msg = Msg }, | |
1596 | {MsgStatus1, stats({1, -1}, {MsgStatus, MsgStatus1}, State1)}; | |
1479 | 1597 | publish_alpha(MsgStatus, State) -> |
1480 | {MsgStatus, inc_ram_msg_count(State)}. | |
1481 | %% [1] We increase the ram_bytes here because we paged the message in | |
1482 | %% to requeue it, not purely because we requeued it. Hence in the | |
1483 | %% second head it's already accounted for as already in memory. OTOH | |
1484 | %% ram_msg_count does not include unacked messages, so it needs | |
1485 | %% incrementing in both heads. | |
1598 | {MsgStatus, stats({1, -1}, {MsgStatus, MsgStatus}, State)}. | |
1486 | 1599 | |
1487 | 1600 | publish_beta(MsgStatus, State) -> |
1488 | 1601 | {MsgStatus1, State1} = maybe_write_to_disk(true, false, MsgStatus, State), |
1489 | 1602 | MsgStatus2 = m(trim_msg_status(MsgStatus1)), |
1490 | case msg_in_ram(MsgStatus1) andalso not msg_in_ram(MsgStatus2) of | |
1491 | true -> {MsgStatus2, upd_ram_bytes(-1, MsgStatus, State1)}; | |
1492 | _ -> {MsgStatus2, State1} | |
1493 | end. | |
1603 | {MsgStatus2, stats({1, -1}, {MsgStatus, MsgStatus2}, State1)}. | |
1494 | 1604 | |
1495 | 1605 | %% Rebuild queue, inserting sequence ids to maintain ordering |
1496 | 1606 | queue_merge(SeqIds, Q, MsgIds, Limit, PubFun, State) -> |
1512 | 1622 | {#msg_status { msg_id = MsgId } = MsgStatus1, State2} = |
1513 | 1623 | PubFun(MsgStatus, State1), |
1514 | 1624 | queue_merge(Rest, Q, ?QUEUE:in(MsgStatus1, Front), [MsgId | MsgIds], |
1515 | Limit, PubFun, upd_bytes(1, -1, MsgStatus, State2)) | |
1625 | Limit, PubFun, State2) | |
1516 | 1626 | end; |
1517 | 1627 | queue_merge(SeqIds, Q, Front, MsgIds, |
1518 | 1628 | _Limit, _PubFun, State) -> |
1526 | 1636 | msg_from_pending_ack(SeqId, State0), |
1527 | 1637 | {_MsgStatus, State2} = |
1528 | 1638 | maybe_write_to_disk(true, true, MsgStatus, State1), |
1529 | State3 = | |
1530 | case msg_in_ram(MsgStatus) of | |
1531 | false -> State2; | |
1532 | true -> upd_ram_bytes(-1, MsgStatus, State2) | |
1533 | end, | |
1534 | 1639 | {expand_delta(SeqId, Delta0), [MsgId | MsgIds0], |
1535 | upd_bytes(1, -1, MsgStatus, State3)} | |
1640 | stats({1, -1}, {MsgStatus, none}, State2)} | |
1536 | 1641 | end, {Delta, MsgIds, State}, SeqIds). |
1537 | 1642 | |
1538 | 1643 | %% Mostly opposite of record_pending_ack/2 |
1561 | 1666 | |
1562 | 1667 | disk_ack_iterator(State) -> |
1563 | 1668 | {ack, gb_trees:iterator(State#vqstate.disk_pending_ack)}. |
1669 | ||
1670 | qi_ack_iterator(State) -> | |
1671 | {ack, gb_trees:iterator(State#vqstate.qi_pending_ack)}. | |
1564 | 1672 | |
1565 | 1673 | msg_iterator(State) -> istate(start, State). |
1566 | 1674 | |
1591 | 1699 | next({delta, Delta, State}, IndexState); |
1592 | 1700 | next({delta, Delta, [{_, SeqId, _, _, _} = M | Rest], State}, IndexState) -> |
1593 | 1701 | case (gb_trees:is_defined(SeqId, State#vqstate.ram_pending_ack) orelse |
1594 | gb_trees:is_defined(SeqId, State#vqstate.disk_pending_ack)) of | |
1702 | gb_trees:is_defined(SeqId, State#vqstate.disk_pending_ack) orelse | |
1703 | gb_trees:is_defined(SeqId, State#vqstate.qi_pending_ack)) of | |
1595 | 1704 | false -> Next = {delta, Delta, Rest, State}, |
1596 | 1705 | {value, beta_msg_status(M), false, Next, IndexState}; |
1597 | 1706 | true -> next({delta, Delta, Rest, State}, IndexState) |
1688 | 1797 | {SeqId, MsgStatus, RPA1} = gb_trees:take_largest(RPA), |
1689 | 1798 | {MsgStatus1, State1} = |
1690 | 1799 | maybe_write_to_disk(true, false, MsgStatus, State), |
1691 | DPA1 = gb_trees:insert(SeqId, m(trim_msg_status(MsgStatus1)), DPA), | |
1800 | MsgStatus2 = m(trim_msg_status(MsgStatus1)), | |
1801 | DPA1 = gb_trees:insert(SeqId, MsgStatus2, DPA), | |
1692 | 1802 | limit_ram_acks(Quota - 1, |
1693 | upd_ram_bytes( | |
1694 | -1, MsgStatus1, | |
1695 | State1 #vqstate { ram_pending_ack = RPA1, | |
1696 | disk_pending_ack = DPA1 })) | |
1803 | stats({0, 0}, {MsgStatus, MsgStatus2}, | |
1804 | State1 #vqstate { ram_pending_ack = RPA1, | |
1805 | disk_pending_ack = DPA1 })) | |
1697 | 1806 | end. |
1698 | 1807 | |
1699 | 1808 | permitted_beta_count(#vqstate { len = 0 }) -> |
1754 | 1863 | delta = Delta, |
1755 | 1864 | q3 = Q3, |
1756 | 1865 | index_state = IndexState, |
1866 | ram_msg_count = RamMsgCount, | |
1867 | ram_bytes = RamBytes, | |
1757 | 1868 | ram_pending_ack = RPA, |
1758 | 1869 | disk_pending_ack = DPA, |
1870 | qi_pending_ack = QPA, | |
1871 | disk_read_count = DiskReadCount, | |
1759 | 1872 | transient_threshold = TransientThreshold }) -> |
1760 | 1873 | #delta { start_seq_id = DeltaSeqId, |
1761 | 1874 | count = DeltaCount, |
1765 | 1878 | DeltaSeqIdEnd]), |
1766 | 1879 | {List, IndexState1} = rabbit_queue_index:read(DeltaSeqId, DeltaSeqId1, |
1767 | 1880 | IndexState), |
1768 | {Q3a, IndexState2} = betas_from_index_entries(List, TransientThreshold, | |
1769 | RPA, DPA, IndexState1), | |
1770 | State1 = State #vqstate { index_state = IndexState2 }, | |
1881 | {Q3a, RamCountsInc, RamBytesInc, IndexState2} = | |
1882 | betas_from_index_entries(List, TransientThreshold, | |
1883 | RPA, DPA, QPA, IndexState1), | |
1884 | State1 = State #vqstate { index_state = IndexState2, | |
1885 | ram_msg_count = RamMsgCount + RamCountsInc, | |
1886 | ram_bytes = RamBytes + RamBytesInc, | |
1887 | disk_read_count = DiskReadCount + RamCountsInc}, | |
1771 | 1888 | case ?QUEUE:len(Q3a) of |
1772 | 1889 | 0 -> |
1773 | 1890 | %% we ignored every message in the segment due to it being |
1825 | 1942 | {empty, _Q} -> |
1826 | 1943 | {Quota, State}; |
1827 | 1944 | {{value, MsgStatus}, Qa} -> |
1828 | {MsgStatus1 = #msg_status { msg_on_disk = true }, | |
1829 | State1 = #vqstate { ram_msg_count = RamMsgCount }} = | |
1945 | {MsgStatus1, State1} = | |
1830 | 1946 | maybe_write_to_disk(true, false, MsgStatus, State), |
1831 | 1947 | MsgStatus2 = m(trim_msg_status(MsgStatus1)), |
1832 | State2 = Consumer( | |
1833 | MsgStatus2, Qa, | |
1834 | upd_ram_bytes( | |
1835 | -1, MsgStatus2, | |
1836 | State1 #vqstate { | |
1837 | ram_msg_count = RamMsgCount - 1})), | |
1948 | State2 = stats( | |
1949 | ready0, {MsgStatus, MsgStatus2}, State1), | |
1950 | State3 = Consumer(MsgStatus2, Qa, State2), | |
1838 | 1951 | push_alphas_to_betas(Generator, Consumer, Quota - 1, |
1839 | Qa, State2) | |
1952 | Qa, State3) | |
1840 | 1953 | end |
1841 | 1954 | end. |
1842 | 1955 | |
1843 | push_betas_to_deltas(Quota, State = #vqstate { q2 = Q2, | |
1844 | delta = Delta, | |
1845 | q3 = Q3, | |
1846 | index_state = IndexState }) -> | |
1847 | PushState = {Quota, Delta, IndexState}, | |
1956 | push_betas_to_deltas(Quota, State = #vqstate { q2 = Q2, | |
1957 | delta = Delta, | |
1958 | q3 = Q3}) -> | |
1959 | PushState = {Quota, Delta, State}, | |
1848 | 1960 | {Q3a, PushState1} = push_betas_to_deltas( |
1849 | 1961 | fun ?QUEUE:out_r/1, |
1850 | 1962 | fun rabbit_queue_index:next_segment_boundary/1, |
1853 | 1965 | fun ?QUEUE:out/1, |
1854 | 1966 | fun (Q2MinSeqId) -> Q2MinSeqId end, |
1855 | 1967 | Q2, PushState1), |
1856 | {_, Delta1, IndexState1} = PushState2, | |
1857 | State #vqstate { q2 = Q2a, | |
1858 | delta = Delta1, | |
1859 | q3 = Q3a, | |
1860 | index_state = IndexState1 }. | |
1968 | {_, Delta1, State1} = PushState2, | |
1969 | State1 #vqstate { q2 = Q2a, | |
1970 | delta = Delta1, | |
1971 | q3 = Q3a }. | |
1861 | 1972 | |
1862 | 1973 | push_betas_to_deltas(Generator, LimitFun, Q, PushState) -> |
1863 | 1974 | case ?QUEUE:is_empty(Q) of |
1873 | 1984 | end |
1874 | 1985 | end. |
1875 | 1986 | |
1876 | push_betas_to_deltas1(_Generator, _Limit, Q, | |
1877 | {0, _Delta, _IndexState} = PushState) -> | |
1987 | push_betas_to_deltas1(_Generator, _Limit, Q, {0, _Delta, _State} = PushState) -> | |
1878 | 1988 | {Q, PushState}; |
1879 | push_betas_to_deltas1(Generator, Limit, Q, | |
1880 | {Quota, Delta, IndexState} = PushState) -> | |
1989 | push_betas_to_deltas1(Generator, Limit, Q, {Quota, Delta, State} = PushState) -> | |
1881 | 1990 | case Generator(Q) of |
1882 | 1991 | {empty, _Q} -> |
1883 | 1992 | {Q, PushState}; |
1885 | 1994 | when SeqId < Limit -> |
1886 | 1995 | {Q, PushState}; |
1887 | 1996 | {{value, MsgStatus = #msg_status { seq_id = SeqId }}, Qa} -> |
1888 | {#msg_status { index_on_disk = true }, IndexState1} = | |
1889 | maybe_write_index_to_disk(true, MsgStatus, IndexState), | |
1997 | {#msg_status { index_on_disk = true }, State1} = | |
1998 | maybe_write_index_to_disk(true, MsgStatus, State), | |
1999 | State2 = stats(ready0, {MsgStatus, none}, State1), | |
1890 | 2000 | Delta1 = expand_delta(SeqId, Delta), |
1891 | 2001 | push_betas_to_deltas1(Generator, Limit, Qa, |
1892 | {Quota - 1, Delta1, IndexState1}) | |
2002 | {Quota - 1, Delta1, State2}) | |
1893 | 2003 | end. |
1894 | 2004 | |
1895 | 2005 | %%---------------------------------------------------------------------------- |
44 | 44 | report(Other, Params) -> term(Other, Params). |
45 | 45 | |
46 | 46 | term(Thing, {Max, {Content, Struct, ContentDec, StructDec}}) -> |
47 | case term_limit(Thing, Max) of | |
47 | case exceeds_size(Thing, Max) of | |
48 | 48 | true -> term(Thing, true, #params{content = Content, |
49 | 49 | struct = Struct, |
50 | 50 | content_dec = ContentDec, |
92 | 92 | %% sizes. This is all rather approximate, and these sizes are |
93 | 93 | %% probably not very "fair", but we are only trying to see whether |
94 | 94 | %% we reach a fairly arbitrary limit anyway.
95 | term_limit(Thing, Max) -> | |
95 | exceeds_size(Thing, Max) -> | |
96 | 96 | case term_size(Thing, Max, erlang:system_info(wordsize)) of |
97 | 97 | limit_exceeded -> true; |
98 | 98 | _ -> false |
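The renamed `exceeds_size/2` predicate can be illustrated with a self-contained sketch. This is not the module's actual implementation (its `term_size/3` walks the term itself so it can stop early with `limit_exceeded` once the limit is passed); the module name `truncate_sketch` is made up for illustration, while `erts_debug:flat_size/1` is a standard BIF returning a term's heap size in words:

```erlang
%% Hypothetical sketch only -- the real truncate:term_size/3 walks the
%% term incrementally and short-circuits at the limit instead of
%% computing the full size first.
-module(truncate_sketch).
-export([exceeds_size/2]).

%% True if Thing occupies (approximately) more than Max bytes.
exceeds_size(Thing, Max) ->
    Words = erts_debug:flat_size(Thing),
    Words * erlang:system_info(wordsize) > Max.
```

As the comment above says, such sizes are approximate: `flat_size/1` ignores sharing between subterms, so it can overestimate the real heap footprint.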
17 | 17 | |
18 | 18 | %% Generic worker pool manager. |
19 | 19 | %% |
20 | %% Supports nested submission of jobs (nested jobs always run | |
21 | %% immediately in current worker process). | |
20 | %% Submitted jobs are functions. They can be executed synchronously |
21 | %% (using worker_pool:submit/1, worker_pool:submit/2) or asynchronously |
22 | %% (using worker_pool:submit_async/1). |
22 | 23 | %% |
23 | %% Possible future enhancements: | |
24 | %% We typically use the worker pool if we want to limit the maximum | |
25 | %% parallelism of some job. We are not trying to dodge the cost of | |
26 | %% creating Erlang processes. | |
24 | 27 | %% |
25 | %% 1. Allow priorities (basically, change the pending queue to a | |
26 | %% priority_queue). | |
28 | %% Supports nested submission of jobs and two execution modes: | |
29 | %% 'single' and 'reuse'. Jobs executed in 'single' mode are invoked in | |
30 | %% a one-off process. Those executed in 'reuse' mode are invoked in a | |
31 | %% worker process out of the pool. Nested jobs are always executed | |
32 | %% immediately in the current worker process. |
33 | %% | |
34 | %% 'single' mode is offered to work around a bug in Mnesia: after | |
35 | %% network partitions, reply messages for prior failed requests can be |
36 | %% sent to Mnesia clients - a reused worker pool process can crash on | |
37 | %% receiving one. | |
38 | %% | |
39 | %% Caller submissions are enqueued internally. When the next worker | |
40 | %% process is available, it notifies the pool and is |
41 | %% assigned a job to execute. If job execution fails with an error, no | |
42 | %% response is returned to the caller. | |
43 | %% | |
44 | %% Worker processes prioritise certain command-and-control messages | |
45 | %% from the pool. | |
46 | %% | |
47 | %% Future improvement points: job prioritisation. | |
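A minimal usage sketch of the API described in the comment above. This assumes a node where the worker_pool application and its supervision tree are already running (as they are inside a RabbitMQ node), and `do_background_work/0` is a hypothetical function, not part of this diff:

```erlang
%% Synchronous submission: blocks until the fun has run and returns its
%% result. 'reuse' runs the fun in a pooled worker process; 'single'
%% runs it in a one-off process (the Mnesia workaround described above).
Result = worker_pool:submit(fun () -> 1 + 1 end, reuse),

%% Asynchronous submission: returns 'ok' immediately; the fun's return
%% value is discarded.
ok = worker_pool:submit_async(fun () -> do_background_work() end).
```

Note that a nested `submit` from inside either fun would run immediately in the current worker process rather than being re-queued.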
27 | 48 | |
28 | 49 | -behaviour(gen_server2). |
29 | 50 |
14 | 14 | %% |
15 | 15 | |
16 | 16 | -module(worker_pool_worker). |
17 | ||
18 | %% Executes jobs (functions) submitted to a worker pool with worker_pool:submit/1, | |
19 | %% worker_pool:submit/2 or worker_pool:submit_async/1. | |
20 | %% | |
21 | %% See worker_pool for an overview. | |
17 | 22 | |
18 | 23 | -behaviour(gen_server2). |
19 | 24 |