rabbitmq-server / 684f0d9
Imported Upstream version 3.5.1 (James Page, 8 years ago)
199 changed file(s) with 7616 addition(s) and 1942 deletion(s).
365365 cp $$manpage $(MAN_DIR)/man$$section; \
366366 done; \
367367 done
368 cp $(DOCS_DIR)/rabbitmq.config.example $(DOC_INSTALL_DIR)/rabbitmq.config.example
368 if test "$(DOC_INSTALL_DIR)"; then \
369 cp $(DOCS_DIR)/rabbitmq.config.example $(DOC_INSTALL_DIR)/rabbitmq.config.example; \
370 fi
369371
370372 install_dirs:
371373 @ OK=true && \
372374 { [ -n "$(TARGET_DIR)" ] || { echo "Please set TARGET_DIR."; OK=false; }; } && \
373375 { [ -n "$(SBIN_DIR)" ] || { echo "Please set SBIN_DIR."; OK=false; }; } && \
374 { [ -n "$(MAN_DIR)" ] || { echo "Please set MAN_DIR."; OK=false; }; } && \
375 { [ -n "$(DOC_INSTALL_DIR)" ] || { echo "Please set DOC_INSTALL_DIR."; OK=false; }; } && $$OK
376 { [ -n "$(MAN_DIR)" ] || { echo "Please set MAN_DIR."; OK=false; }; } && $$OK
376377
377378 mkdir -p $(TARGET_DIR)/sbin
378379 mkdir -p $(SBIN_DIR)
379380 mkdir -p $(MAN_DIR)
380 mkdir -p $(DOC_INSTALL_DIR)
381 if test "$(DOC_INSTALL_DIR)"; then \
382 mkdir -p $(DOC_INSTALL_DIR); \
383 fi
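The guarded steps introduced above make the documentation install optional: the copy and `mkdir` only run when `DOC_INSTALL_DIR` is set. The same idiom in plain shell (the directory path here is hypothetical, not from the Makefile):

```shell
# Hypothetical doc directory; in the Makefile this is $(DOC_INSTALL_DIR).
DOC_INSTALL_DIR="/tmp/rabbitmq-doc-example"
if test "$DOC_INSTALL_DIR"; then
  mkdir -p "$DOC_INSTALL_DIR"
fi

# When the variable is empty, the guard turns the step into a silent
# no-op instead of an error from `mkdir -p ""`.
DOC_INSTALL_DIR=""
if test "$DOC_INSTALL_DIR"; then
  mkdir -p "$DOC_INSTALL_DIR"
fi
```

This is also why `DOC_INSTALL_DIR` is dropped from the mandatory-variable checks in `install_dirs`.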
381384
382385 $(foreach XML,$(USAGES_XML),$(eval $(call usage_dep, $(XML))))
383386
0 Please see http://www.rabbitmq.com/build-server.html for build instructions.
0 ## Overview
1
2 RabbitMQ projects use pull requests to discuss, collaborate on, and accept code contributions.
3 Pull requests are the primary place for discussing code changes.
4
5 ## How to Contribute
6
7 The process is fairly standard:
8
9 * Fork the repository or repositories you plan on contributing to
10 * Clone [RabbitMQ umbrella repository](https://github.com/rabbitmq/rabbitmq-public-umbrella)
11 * `cd umbrella`, `make co`
12 * Create a branch with a descriptive name in the relevant repositories
13 * Make your changes, run tests, commit with a [descriptive message](http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html), push to your fork
14 * Submit pull requests with an explanation of what has been changed and **why**
15 * Submit a filled out and signed [Contributor Agreement](https://github.com/rabbitmq/ca#how-to-submit) if needed (see below)
16 * Be patient. We will get to your pull request eventually
17
18 If you plan to work on a substantial change, please first ask the core team
19 for its opinion on the [RabbitMQ mailing list](https://groups.google.com/forum/#!forum/rabbitmq-users).
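The branch-and-commit steps in the list above can be sketched locally in a throwaway repository (branch name, identity, and file are hypothetical; the real flow would start from a clone of your fork):

```shell
# Scratch repository standing in for a clone of your fork.
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email "you@example.com"   # hypothetical identity
git config user.name "You"

# A branch with a descriptive name (hypothetical).
git checkout -q -b add-frobnicator-support
echo "change" > file.txt
git add file.txt

# Commit message: summary line, then an explanation of what and why.
git commit -q -m "Add frobnicator support" \
              -m "Explain what changed and, more importantly, why."
```

From here the real workflow would be `git push <your-fork> add-frobnicator-support` followed by opening the pull request.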
20
21
22 ## (Brief) Code of Conduct
23
24 In one line: don't be a dick.
25
26 Be respectful to the maintainers and other contributors. Open source
27 contributors put long hours into developing projects and doing user
28 support. Those projects and user support are available for free. We
29 believe this deserves some respect.
30
31 Be respectful to people of all races, genders, religious beliefs and
32 political views. Regardless of how brilliant a pull request is
33 technically, we will not tolerate disrespectful or aggressive
34 behaviour.
35
36 Contributors who violate this straightforward Code of Conduct will see
37 their pull requests closed and locked.
38
39
40 ## Contributor Agreement
41
42 If you want to contribute a non-trivial change, please submit a signed copy of our
43 [Contributor Agreement](https://github.com/rabbitmq/ca#how-to-submit) around the time
44 you submit your pull request. This will make it much easier (in some cases, possible)
45 for the RabbitMQ team at Pivotal to merge your contribution.
46
47
48 ## Where to Ask Questions
49
50 If something isn't clear, feel free to ask on our [mailing list](https://groups.google.com/forum/#!forum/rabbitmq-users).
3232 %% {handshake_timeout, 10000},
3333
3434 %% Log levels (currently just used for connection logging).
35 %% One of 'info', 'warning', 'error' or 'none', in decreasing order
36 %% of verbosity. Defaults to 'info'.
37 %%
38 %% {log_levels, [{connection, info}]},
35 %% One of 'debug', 'info', 'warning', 'error' or 'none', in decreasing
36 %% order of verbosity. Defaults to 'info'.
37 %%
38 %% {log_levels, [{connection, info}, {channel, info}]},
3939
4040 %% Set to 'true' to perform reverse DNS lookups when accepting a
4141 %% connection. Hostnames will then be shown instead of IP addresses
107107
108108 %% This pertains to both the rabbitmq_auth_mechanism_ssl plugin and
109109 %% STOMP ssl_cert_login configurations. See the rabbitmq_stomp
110 %% configuration section later in this fail and the README in
110 %% configuration section later in this file and the README in
111111 %% https://github.com/rabbitmq/rabbitmq-auth-mechanism-ssl for further
112112 %% details.
113113 %%
219219 %%
220220 %% {cluster_nodes, {['rabbit@my.host.com'], disc}},
221221
222 %% Interval (in milliseconds) at which we send keepalive messages
223 %% to other cluster members. Note that this is not the same thing
224 %% as net_ticktime; missed keepalive messages will not cause nodes
225 %% to be considered down.
226 %%
227 %% {cluster_keepalive_interval, 10000},
228
222229 %% Set (internal) statistics collection granularity.
223230 %%
224231 %% {collect_statistics, none},
234241 %% Timeout used when waiting for Mnesia tables in a cluster to
235242 %% become available.
236243 %%
237 %% {mnesia_table_loading_timeout, 30000}
244 %% {mnesia_table_loading_timeout, 30000},
245
246 %% Size in bytes below which to embed messages in the queue index. See
247 %% http://www.rabbitmq.com/persistence-conf.html
248 %%
249 %% {queue_index_embed_msgs_below, 4096}
238250
239251 ]},
240252
405417 %% ----------------------------------------------------------------------------
406418 %% RabbitMQ MQTT Adapter
407419 %%
408 %% See http://hg.rabbitmq.com/rabbitmq-mqtt/file/stable/README.md for details
420 %% See https://github.com/rabbitmq/rabbitmq-mqtt/blob/stable/README.md
421 %% for details
409422 %% ----------------------------------------------------------------------------
410423
411424 {rabbitmq_mqtt,
459472 %% ----------------------------------------------------------------------------
460473 %% RabbitMQ AMQP 1.0 Support
461474 %%
462 %% See http://hg.rabbitmq.com/rabbitmq-amqp1.0/file/default/README.md
475 %% See https://github.com/rabbitmq/rabbitmq-amqp1.0/blob/stable/README.md
463476 %% for details
464477 %% ----------------------------------------------------------------------------
465478
425425 </listitem>
426426 </varlistentry>
427427 <varlistentry>
428 <term><cmdsynopsis><command>rename_cluster_node</command> <arg choice="req">oldnode1</arg> <arg choice="req">newnode1</arg> <arg choice="opt">oldnode2</arg> <arg choice="opt">newnode2 ...</arg></cmdsynopsis></term>
429 <listitem>
430 <para>
431 Supports renaming of cluster nodes in the local database.
432 </para>
433 <para>
434 This subcommand causes rabbitmqctl to temporarily become
435 the node in order to make the change. The local cluster
436 node must therefore be completely stopped; other nodes
437 can be online or offline.
438 </para>
439 <para>
440 This subcommand takes an even number of arguments, in
441 pairs representing the old and new names for nodes. You
442 must specify the old and new names for this node and for
443 any other nodes that are stopped and being renamed at
444 the same time.
445 </para>
446 <para>
447 It is possible to stop all nodes and rename them all
448 simultaneously (in which case old and new names for all
449 nodes must be given to every node) or stop and rename
450 nodes one at a time (in which case each node only needs
451 to be told how its own name is changing).
452 </para>
453 <para role="example-prefix">For example:</para>
454 <screen role="example">rabbitmqctl rename_cluster_node rabbit@misshelpful rabbit@cordelia</screen>
455 <para role="example">
456 This command will rename the node
457 <command>rabbit@misshelpful</command> to the node
458 <command>rabbit@cordelia</command>.
459 </para>
460 </listitem>
461 </varlistentry>
462 <varlistentry>
428463 <term><cmdsynopsis><command>update_cluster_nodes</command> <arg choice="req">clusternode</arg></cmdsynopsis>
429464 </term>
430465 <listitem>
12251260 <listitem><para>Like <command>message_bytes</command> but counting only those messages which are persistent.</para></listitem>
12261261 </varlistentry>
12271262 <varlistentry>
1263 <term>disk_reads</term>
1264 <listitem><para>Total number of times messages have been read from disk by this queue since it started.</para></listitem>
1265 </varlistentry>
1266 <varlistentry>
1267 <term>disk_writes</term>
1268 <listitem><para>Total number of times messages have been written to disk by this queue since it started.</para></listitem>
1269 </varlistentry>
1270 <varlistentry>
12281271 <term>consumers</term>
12291272 <listitem><para>Number of consumers.</para></listitem>
12301273 </varlistentry>
18121855 </varlistentry>
18131856 </variablelist>
18141857 <para>
1815 Starts tracing.
1858 Starts tracing. Note that the trace state is not
1859 persistent; it will revert to being off if the server is
1860 restarted.
18161861 </para>
18171862 </listitem>
18181863 </varlistentry>
00 {application, rabbit, %% -*- erlang -*-
11 [{description, "RabbitMQ"},
22 {id, "RabbitMQ"},
3 {vsn, "3.4.3"},
3 {vsn, "3.5.1"},
44 {modules, []},
55 {registered, [rabbit_amqqueue_sup,
66 rabbit_log,
2828 {heartbeat, 580},
2929 {msg_store_file_size_limit, 16777216},
3030 {queue_index_max_journal_entries, 65536},
31 {queue_index_embed_msgs_below, 4096},
3132 {default_user, <<"guest">>},
3233 {default_pass, <<"guest">>},
3334 {default_user_tags, [administrator]},
1313 %% Copyright (c) 2007-2014 GoPivotal, Inc. All rights reserved.
1414 %%
1515
16 %% Passed around most places
1617 -record(user, {username,
1718 tags,
18 auth_backend, %% Module this user came from
19 impl %% Scratch space for that module
20 }).
19 authz_backends}). %% List of {Module, AuthUserImpl} pairs
2120
21 %% Passed to auth backends
22 -record(auth_user, {username,
23 tags,
24 impl}).
25
26 %% Implementation for the internal auth backend
2227 -record(internal_user, {username, password_hash, tags}).
2328 -record(permission, {configure, write, read}).
2429 -record(user_vhost, {username, virtual_host}).
5156 arguments, %% immutable
5257 pid, %% durable (just so we know home node)
5358 slave_pids, sync_slave_pids, %% transient
54 down_slave_nodes, %% durable
59 recoverable_slaves, %% durable
5560 policy, %% durable, implicit update as above
5661 gm_pids, %% transient
5762 decorators, %% transient, recalculated as above
8287 is_persistent}).
8388
8489 -record(ssl_socket, {tcp, ssl}).
85 -record(delivery, {mandatory, confirm, sender, message, msg_seq_no}).
90 -record(delivery, {mandatory, confirm, sender, message, msg_seq_no, flow}).
8691 -record(amqp_error, {name, explanation = "", method = none}).
8792
8893 -record(event, {type, props, reference = undefined, timestamp}).
3434 toke \
3535 webmachine-wrapper
3636
37 BRANCH:=default
38
39 HG_CORE_REPOBASE:=$(shell dirname `hg paths default 2>/dev/null` 2>/dev/null)
40 ifndef HG_CORE_REPOBASE
41 HG_CORE_REPOBASE:=http://hg.rabbitmq.com/
37 BRANCH:=master
38
39 UMBRELLA_REPO_FETCH:=$(shell git remote -v 2>/dev/null | awk '/^origin\t.+ \(fetch\)$$/ { print $$2; }')
40 ifdef UMBRELLA_REPO_FETCH
41 GIT_CORE_REPOBASE_FETCH:=$(shell dirname $(UMBRELLA_REPO_FETCH))
42 GIT_CORE_SUFFIX_FETCH:=$(suffix $(UMBRELLA_REPO_FETCH))
43 else
44 GIT_CORE_REPOBASE_FETCH:=https://github.com/rabbitmq
45 GIT_CORE_SUFFIX_FETCH:=.git
46 endif
47
48 UMBRELLA_REPO_PUSH:=$(shell git remote -v 2>/dev/null | awk '/^origin\t.+ \(push\)$$/ { print $$2; }')
49 ifdef UMBRELLA_REPO_PUSH
50 GIT_CORE_REPOBASE_PUSH:=$(shell dirname $(UMBRELLA_REPO_PUSH))
51 GIT_CORE_SUFFIX_PUSH:=$(suffix $(UMBRELLA_REPO_PUSH))
52 else
53 GIT_CORE_REPOBASE_PUSH:=git@github.com:rabbitmq
54 GIT_CORE_SUFFIX_PUSH:=.git
4255 endif
4356
4457 VERSION:=0.0.0
6982 rm -rf $(PLUGINS_SRC_DIST_DIR)
7083 mkdir -p $(PLUGINS_SRC_DIST_DIR)/licensing
7184
72 rsync -a --exclude '.hg*' rabbitmq-erlang-client $(PLUGINS_SRC_DIST_DIR)/
85 rsync -a --exclude '.git*' rabbitmq-erlang-client $(PLUGINS_SRC_DIST_DIR)/
7386 touch $(PLUGINS_SRC_DIST_DIR)/rabbitmq-erlang-client/.srcdist_done
7487
75 rsync -a --exclude '.hg*' rabbitmq-server $(PLUGINS_SRC_DIST_DIR)/
88 rsync -a --exclude '.git*' rabbitmq-server $(PLUGINS_SRC_DIST_DIR)/
7689 touch $(PLUGINS_SRC_DIST_DIR)/rabbitmq-server/.srcdist_done
7790
7891 $(MAKE) -f all-packages.mk copy-srcdist VERSION=$(VERSION) PLUGINS_SRC_DIST_DIR=$(PLUGINS_SRC_DIST_DIR)
7992 cp Makefile *.mk generate* $(PLUGINS_SRC_DIST_DIR)/
8093 echo "This is the released version of rabbitmq-public-umbrella. \
81 You can clone the full version with: hg clone http://hg.rabbitmq.com/rabbitmq-public-umbrella" > $(PLUGINS_SRC_DIST_DIR)/README
82
83 PRESERVE_CLONE_DIR=1 make -C $(PLUGINS_SRC_DIST_DIR) clean
94 You can clone the full version with: git clone https://github.com/rabbitmq/rabbitmq-public-umbrella.git" > $(PLUGINS_SRC_DIST_DIR)/README
95
96 PRESERVE_CLONE_DIR=1 $(MAKE) -C $(PLUGINS_SRC_DIST_DIR) clean
8497 rm -rf $(PLUGINS_SRC_DIST_DIR)/rabbitmq-server
8598
8699 #----------------------------------
104117 #----------------------------------
105118
106119 $(REPOS):
107 hg clone $(HG_CORE_REPOBASE)/$@
120 retries=5; \
121 while ! git clone $(GIT_CORE_REPOBASE_FETCH)/$@$(GIT_CORE_SUFFIX_FETCH); do \
122 retries=$$((retries - 1)); \
123 if test "$$retries" = 0; then break; fi; \
124 sleep 1; \
125 done
126 test -d $@
127 global_user_name="$$(git config --global user.name)"; \
128 global_user_email="$$(git config --global user.email)"; \
129 user_name="$$(git config user.name)"; \
130 user_email="$$(git config user.email)"; \
131 cd $@ && \
132 git remote set-url --push origin $(GIT_CORE_REPOBASE_PUSH)/$@$(GIT_CORE_SUFFIX_PUSH) && \
133 if test "$$global_user_name" != "$$user_name"; then git config user.name "$$user_name"; fi && \
134 if test "$$global_user_email" != "$$user_email"; then git config user.email "$$user_email"; fi
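The clone rule above combines a bounded retry loop with per-repository git configuration. The retry idiom in isolation, as a hypothetical helper function (5 attempts, 1 s apart, matching the constants in the rule):

```shell
# Retry a command up to 5 times, sleeping 1s between attempts, then give up.
retry() {
  retries=5
  while ! "$@"; do
    retries=$((retries - 1))
    if test "$retries" = 0; then return 1; fi
    sleep 1
  done
}

retry true && echo "succeeded"
```

In the Makefile the loop wraps `git clone` directly, and the `test -d $@` afterwards is what actually fails the target if all attempts were exhausted.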
135
108136
109137 .PHONY: checkout
110138 checkout: $(REPOS)
139
140 .PHONY: list-repos
141 list-repos:
142 @for repo in $(REPOS); do echo $$repo; done
143
144 .PHONY: sync-gituser
145 sync-gituser:
146 @global_user_name="$$(git config --global user.name)"; \
147 global_user_email="$$(git config --global user.email)"; \
148 user_name="$$(git config user.name)"; \
149 user_email="$$(git config user.email)"; \
150 for repo in $(REPOS); do \
151 cd $$repo && \
152 git config --unset user.name && \
153 git config --unset user.email && \
154 if test "$$global_user_name" != "$$user_name"; then git config user.name "$$user_name"; fi && \
155 if test "$$global_user_email" != "$$user_email"; then git config user.email "$$user_email"; fi && \
156 cd ..; done
157
158 .PHONY: sync-gitremote
159 sync-gitremote:
160 @for repo in $(REPOS); do \
161 cd $$repo && \
162 git remote set-url --fetch origin $(GIT_CORE_REPOBASE_FETCH)/$$repo$(GIT_CORE_SUFFIX_FETCH) && \
163 git remote set-url --push origin $(GIT_CORE_REPOBASE_PUSH)/$$repo$(GIT_CORE_SUFFIX_PUSH) && \
164 cd ..; done
111165
112166 #----------------------------------
113167 # Subrepository management
136190 # Do not allow status to fork with -j otherwise output will be garbled
137191 .PHONY: status
138192 status: checkout
139 $(foreach DIR,. $(REPOS), \
140 (cd $(DIR); OUT=$$(hg st -mad); \
141 if \[ ! -z "$$OUT" \]; then echo "\n$(DIR):\n$$OUT"; fi) &&) true
193 @for repo in . $(REPOS); do \
194 echo "$$repo:"; \
195 cd "$$repo" && git status -s && cd - >/dev/null; \
196 done
142197
143198 .PHONY: pull
144199 pull: $(foreach DIR,. $(REPOS),$(DIR)+pull)
145200
146 $(eval $(call repo_targets,. $(REPOS),pull,| %,(cd % && hg pull)))
201 $(eval $(call repo_targets,. $(REPOS),pull,| %,\
202 (cd % && git pull --ff-only)))
147203
148204 .PHONY: update
149 update: $(foreach DIR,. $(REPOS),$(DIR)+update)
150
151 $(eval $(call repo_targets,. $(REPOS),update,%+pull,(cd % && hg up)))
205 update: pull
152206
153207 .PHONY: named_update
154208 named_update: $(foreach DIR,. $(REPOS),$(DIR)+named_update)
155209
156 $(eval $(call repo_targets,. $(REPOS),named_update,%+pull,\
157 (cd % && hg up -C $(BRANCH))))
210 $(eval $(call repo_targets,. $(REPOS),named_update,| %,\
211 (cd % && git fetch -p && git checkout $(BRANCH) && \
212 (test "$$$$(git branch | grep '^*')" = "* (detached from $(BRANCH))" || \
213 git pull --ff-only))))
158214
159215 .PHONY: tag
160216 tag: $(foreach DIR,. $(REPOS),$(DIR)+tag)
161217
162 $(eval $(call repo_targets,. $(REPOS),tag,| %,(cd % && hg tag $(TAG))))
218 $(eval $(call repo_targets,. $(REPOS),tag,| %,\
219 (cd % && git tag $(TAG))))
163220
164221 .PHONY: push
165222 push: $(foreach DIR,. $(REPOS),$(DIR)+push)
166223
167 # "|| true" since hg push fails if there are no changes
168 $(eval $(call repo_targets,. $(REPOS),push,| %,(cd % && hg push -f || true)))
224 $(eval $(call repo_targets,. $(REPOS),push,| %,\
225 (cd % && git push && git push --tags)))
169226
170227 .PHONY: checkin
171228 checkin: $(foreach DIR,. $(REPOS),$(DIR)+checkin)
172229
173 $(eval $(call repo_targets,. $(REPOS),checkin,| %,(cd % && hg ci)))
230 $(eval $(call repo_targets,. $(REPOS),checkin,| %,\
231 (cd % && (test -z "$$$$(git status -s -uno)" || git commit -a))))
0 This is the released version of rabbitmq-public-umbrella. You can clone the full version with: hg clone http://hg.rabbitmq.com/rabbitmq-public-umbrella
0 This is the released version of rabbitmq-public-umbrella. You can clone the full version with: git clone https://github.com/rabbitmq/rabbitmq-public-umbrella.git
236236 # Work around weird github breakage (bug 25264)
237237 cd $(CLONE_DIR) && git pull
238238 $(if $(UPSTREAM_REVISION),cd $(CLONE_DIR) && git checkout $(UPSTREAM_REVISION))
239 $(if $(WRAPPER_PATCHES),$(foreach F,$(WRAPPER_PATCHES),patch --no-backup-if-mismatch -d $(CLONE_DIR) -p1 <$(PACKAGE_DIR)/$(F) &&) :)
239 $(if $(WRAPPER_PATCHES),$(foreach F,$(WRAPPER_PATCHES),patch -E -z .umbrella-orig -d $(CLONE_DIR) -p1 <$(PACKAGE_DIR)/$(F) &&) :)
240 find $(CLONE_DIR) -name "*.umbrella-orig" -delete
240241 touch $$@
241242 endif # UPSTREAM_GIT
242243
244245 $(CLONE_DIR)/.done:
245246 rm -rf $(CLONE_DIR)
246247 hg clone -r $(or $(UPSTREAM_REVISION),default) $(UPSTREAM_HG) $(CLONE_DIR)
247 $(if $(WRAPPER_PATCHES),$(foreach F,$(WRAPPER_PATCHES),patch --no-backup-if-mismatch -d $(CLONE_DIR) -p1 <$(PACKAGE_DIR)/$(F) &&) :)
248 $(if $(WRAPPER_PATCHES),$(foreach F,$(WRAPPER_PATCHES),patch -E -z .umbrella-orig -d $(CLONE_DIR) -p1 <$(PACKAGE_DIR)/$(F) &&) :)
249 find $(CLONE_DIR) -name "*.umbrella-orig" -delete
248250 touch $$@
249251 endif # UPSTREAM_HG
250252
33
44 # Status
55
6 This is a prototype. You can send and receive messages between 0-9-1
7 or 0-8 clients and 1.0 clients with broadly the same semantics as you
8 would get with 0-9-1.
6 This is mostly a prototype, but it is supported. We describe it as a
7 prototype because it has not yet seen as much real-world use, and thus
8 battle-testing, as the STOMP or MQTT plugins. However, bugs do get
9 fixed as they are reported.
10
11 You can send and receive messages between 0-9-1 or 0-8 clients and 1.0
12 clients with broadly the same semantics as you would get with 0-9-1.
913
1014 # Building and configuring
1115
156160 | "/topic/" RK Publish to amq.topic with routing key RK
157161 | "/amq/queue/" Q Publish to default exchange with routing key Q
158162 | "/queue/" Q Publish to default exchange with routing key Q
163 | Q (no leading slash) Publish to default exchange with routing key Q
159164 | "/queue" Publish to default exchange with message subj as routing key
160165
161166 For sources, addresses are:
164169 | "/topic/" RK Consume from temp queue bound to amq.topic with routing key RK
165170 | "/amq/queue/" Q Consume from Q
166171 | "/queue/" Q Consume from Q
172 | Q (no leading slash) Consume from Q
173
174 The intent is that the source and destination address formats should be
175 mostly the same as those supported by the STOMP plugin, to the extent
176 permitted by AMQP 1.0 semantics.
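The destination and source tables above amount to a small mapping from address string to exchange and routing key. An illustrative Python resolver for the destination rows (not the plugin's code; error handling is minimal):

```python
def destination(addr, subject=None):
    """Return (exchange, routing_key) per the destination table above.

    '' denotes the default exchange; `subject` is the message subject,
    used only for the bare "/queue" form.
    """
    if addr.startswith('/topic/'):
        return ('amq.topic', addr[len('/topic/'):])
    if addr.startswith('/amq/queue/'):
        return ('', addr[len('/amq/queue/'):])
    if addr == '/queue':
        return ('', subject)                 # message subject as routing key
    if addr.startswith('/queue/'):
        return ('', addr[len('/queue/'):])
    if not addr.startswith('/'):
        return ('', addr)                    # bare queue name, no leading slash
    raise ValueError('unrecognised address: %r' % addr)
```

The source rows differ only in that the `/topic/` form consumes from a temporary queue bound to `amq.topic`, rather than publishing to it.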
167177
168178 ## Virtual Hosts
169179
8686 def print_define(opt, source):
8787 (name, value) = opt
8888 if source == 'symbol':
89 quoted = '"%s"' % value
89 quoted = '<<"%s">>' % value
9090 else:
9191 quoted = value
9292 print """-define(V_1_0_%s, {%s, %s}).""" % (name, source, quoted)
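The one-character change above switches generated symbol constants from Erlang string literals to binaries. A minimal Python sketch of just the quoting rule (the helper name is hypothetical; the generator inlines this in `print_define`):

```python
def quote_value(source, value):
    # Symbols are emitted as Erlang binaries (<<"...">>); other values
    # pass through verbatim, mirroring print_define above.
    if source == 'symbol':
        return '<<"%s">>' % value
    return value

print(quote_value('symbol', 'amqp:link:redirect'))
# prints <<"amqp:link:redirect">>
```

Emitting binaries keeps the generated `-define`s consistent with the encoder below, which now expects symbol values to be binaries rather than lists.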
123123 <<16#a3>>.
124124
125125 generate(symbol, Value) ->
126 [<<(length(Value)):8>>, list_to_binary(Value)].
126 [<<(size(Value)):8>>, Value].
6363 [{symbol, symbolify(K)} || K <- rabbit_amqp1_0_framing0:fields(Record)].
6464
6565 symbolify(FieldName) when is_atom(FieldName) ->
66 re:replace(atom_to_list(FieldName), "_", "-", [{return,list}, global]).
66 re:replace(atom_to_list(FieldName), "_", "-", [{return,binary}, global]).
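These two hunks move symbol handling from Erlang lists (strings) to binaries: field names are hyphenated into binaries, and the sym8 wire encoding (constructor byte `0xa3`, then a one-byte length, then the bytes) takes the binary's size directly instead of `length/1` on a list. A Python sketch of both steps, assuming ASCII field names under 256 bytes:

```python
def symbolify(field_name: str) -> bytes:
    # AMQP 1.0 symbols use hyphens where Erlang record fields use underscores.
    return field_name.replace('_', '-').encode('ascii')

def encode_sym8(sym: bytes) -> bytes:
    # sym8 encoding: constructor 0xa3, one-byte length, then the symbol bytes.
    assert len(sym) < 256
    return b'\xa3' + bytes([len(sym)]) + sym

encoded = encode_sym8(symbolify('delivery_count'))
```

With binaries throughout, the encoder no longer needs `list_to_binary/1`, and `size/1` replaces `length/1` for the length prefix.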
6767
6868 %% TODO: in fields of composite types with multiple=true, "a null
6969 %% value and a zero-length array (with a correct type for its
104104 decode_map(Fields) ->
105105 [{decode(K), decode(V)} || {K, V} <- Fields].
106106
107 encode_described(list, ListOrNumber, Frame) ->
108 Desc = descriptor(ListOrNumber),
109 {described, Desc,
107 encode_described(list, CodeNumber, Frame) ->
108 {described, {ulong, CodeNumber},
110109 {list, lists:map(fun encode/1, tl(tuple_to_list(Frame)))}};
111 encode_described(map, ListOrNumber, Frame) ->
112 Desc = descriptor(ListOrNumber),
113 {described, Desc,
110 encode_described(map, CodeNumber, Frame) ->
111 {described, {ulong, CodeNumber},
114112 {map, lists:zip(keys(Frame),
115113 lists:map(fun encode/1, tl(tuple_to_list(Frame))))}};
116 encode_described(binary, ListOrNumber, #'v1_0.data'{content = Content}) ->
117 Desc = descriptor(ListOrNumber),
118 {described, Desc, {binary, Content}};
119 encode_described('*', ListOrNumber, #'v1_0.amqp_value'{content = Content}) ->
120 Desc = descriptor(ListOrNumber),
121 {described, Desc, Content};
122 encode_described(annotations, ListOrNumber, Frame) ->
123 encode_described(map, ListOrNumber, Frame).
114 encode_described(binary, CodeNumber, #'v1_0.data'{content = Content}) ->
115 {described, {ulong, CodeNumber}, {binary, Content}};
116 encode_described('*', CodeNumber, #'v1_0.amqp_value'{content = Content}) ->
117 {described, {ulong, CodeNumber}, Content};
118 encode_described(annotations, CodeNumber, Frame) ->
119 encode_described(map, CodeNumber, Frame).
124120
125121 encode(X) ->
126122 rabbit_amqp1_0_framing0:encode(X).
139135 number_for(X) ->
140136 rabbit_amqp1_0_framing0:number_for(X).
141137
142 descriptor(Symbol) when is_list(Symbol) ->
143 {symbol, Symbol};
144 descriptor(Number) when is_number(Number) ->
145 {ulong, Number}.
146
147
148138 pprint(Thing) when is_tuple(Thing) ->
149139 case rabbit_amqp1_0_framing0:fields(Thing) of
150140 unknown -> Thing;
4646 case ensure_target(Target,
4747 #incoming_link{
4848 name = Name,
49 route_state = rabbit_routing_util:init_state() },
49 route_state = rabbit_routing_util:init_state(),
50 delivery_count = InitTransfer },
5051 DCh) of
51 {ok, ServerTarget,
52 IncomingLink = #incoming_link{ delivery_count = InitTransfer }} ->
52 {ok, ServerTarget, IncomingLink} ->
5353 {_, _Outcomes} = rabbit_amqp1_0_link_util:outcomes(Source),
5454 %% Default is mixed
5555 Confirm =
8080 IncomingLink#incoming_link{recv_settle_mode = RcvSettleMode},
8181 {ok, [Attach, Flow], IncomingLink1, Confirm};
8282 {error, Reason} ->
83 rabbit_log:warning("AMQP 1.0 attach rejected ~p~n", [Reason]),
8483 %% TODO proper link establishment protocol here?
8584 protocol_error(?V_1_0_AMQP_ERROR_INVALID_FIELD,
8685 "Attach rejected: ~p", [Reason])
193192 timeout = _Timeout},
194193 Link = #incoming_link{ route_state = RouteState }, DCh) ->
195194 DeclareParams = [{durable, rabbit_amqp1_0_link_util:durable(Durable)},
196 {check_exchange, true}],
195 {check_exchange, true},
196 {nowait, false}],
197197 case Dynamic of
198198 true ->
199199 protocol_error(?V_1_0_AMQP_ERROR_NOT_IMPLEMENTED,
225225 E
226226 end;
227227 _Else ->
228 {error, {unknown_address, Address}}
228 {error, {address_not_utf8_string, Address}}
229229 end.
230230
231231 incoming_flow(#incoming_link{ delivery_count = Count }, Handle) ->
9090 protocol_error(?V_1_0_AMQP_ERROR_INTERNAL_ERROR,
9191 "Consume failed: ~p", [Fail])
9292 end;
93 {error, _Reason} ->
94 %% TODO Deal with this properly -- detach and what have you
95 {ok, [#'v1_0.attach'{source = undefined}]}
93 {error, Reason} ->
94 %% TODO proper link establishment protocol here?
95 protocol_error(?V_1_0_AMQP_ERROR_INVALID_FIELD,
96 "Attach rejected: ~p", [Reason])
9697 end.
9798
9899 credit_drained(#'basic.credit_drained'{credit_drained = CreditDrained},
155156 timeout = _Timeout},
156157 Link = #outgoing_link{ route_state = RouteState }, DCh) ->
157158 DeclareParams = [{durable, rabbit_amqp1_0_link_util:durable(Durable)},
158 {check_exchange, true}],
159 {check_exchange, true},
160 {nowait, false}],
159161 case Dynamic of
160162 true -> protocol_error(?V_1_0_AMQP_ERROR_NOT_IMPLEMENTED,
161163 "Dynamic sources not supported", []);
175177 ER = rabbit_routing_util:parse_routing(Dest),
176178 ok = rabbit_routing_util:ensure_binding(Queue, ER, DCh),
177179 {ok, Source, Link#outgoing_link{route_state = RouteState1,
178 queue = Queue}}
180 queue = Queue}};
181 {error, _} = E ->
182 E
179183 end;
180184 _ ->
181 {error, {unknown_address, Address}}
185 {error, {address_not_utf8_string, Address}}
182186 end.
183187
184188 delivery(Deliver = #'basic.deliver'{delivery_tag = DeliveryTag,
517517 end,
518518 case Size of
519519 8 -> % length inclusive
520 {State, {frame_header_1_0, Mode}, 8}; %% heartbeat
520 State; %% heartbeat
521521 _ ->
522522 switch_callback(State, {frame_payload_1_0, Mode, DOff, Channel}, Size - 8)
523523 end;
544544 Ms = {array, symbol,
545545 case application:get_env(rabbitmq_amqp1_0, default_user) of
546546 {ok, none} -> [];
547 {ok, _} -> ["ANONYMOUS"]
548 end ++ [ atom_to_list(M) || M <- auth_mechanisms(Sock)]},
547 {ok, _} -> [<<"ANONYMOUS">>]
548 end ++
549 [list_to_binary(atom_to_list(M)) || M <- auth_mechanisms(Sock)]},
549550 Mechanisms = #'v1_0.sasl_mechanisms'{sasl_server_mechanisms = Ms},
550551 ok = send_on_channel0(Sock, Mechanisms, rabbit_amqp1_0_sasl),
551552 start_1_0_connection0(sasl, State);
146146 catch exit:Reason = #'v1_0.error'{} ->
147147 %% TODO shut down nicely like rabbit_channel
148148 End = #'v1_0.end'{ error = Reason },
149 rabbit_log:warning("Closing session for connection ~p: ~p~n",
149 rabbit_log:warning("Closing session for connection ~p:~n~p~n",
150150 [ReaderPid, Reason]),
151151 ok = rabbit_amqp1_0_writer:send_command_sync(Sock, End),
152152 {stop, normal, State};
241241 requeue = false};
242242 #'v1_0.released'{} ->
243243 #'basic.reject'{delivery_tag = DeliveryTag,
244 requeue = true}
244 requeue = true};
245 _ ->
246 protocol_error(
247 ?V_1_0_AMQP_ERROR_INVALID_FIELD,
248 "Unrecognised state: ~p~n"
249 "Disposition was: ~p~n", [Outcome, Disp])
245250 end)
246251 end,
247252 case rabbit_amqp1_0_session:settle(Disp, session(State), AckFun) of
1212 mv build/tmp/$(CLIENT_DIR)/jars/*.jar build/lib
1313 rm -rf build/tmp
1414 cp ../lib-java/*.jar build/lib
15 (cd ../../../rabbitmq-java-client && ant dist)
16 cp ../../../rabbitmq-java-client/build/dist/rabbitmq-client.jar build/lib
1517
1618 $(CLIENT_PKG):
1719 @echo
00 package com.rabbitmq.amqp1_0.tests.swiftmq;
11
2 import com.rabbitmq.client.*;
23 import com.swiftmq.amqp.AMQPContext;
34 import com.swiftmq.amqp.v100.client.*;
5 import com.swiftmq.amqp.v100.client.Connection;
6 import com.swiftmq.amqp.v100.client.Consumer;
47 import com.swiftmq.amqp.v100.generated.messaging.message_format.*;
58 import com.swiftmq.amqp.v100.generated.messaging.message_format.Properties;
69 import com.swiftmq.amqp.v100.messaging.AMQPMessage;
212215 route(QUEUE, "test", "", true);
213216 route("test", "test", "", true);
214217
215 try {
216 route(QUEUE, "/exchange/missing", "", false);
217 fail("Missing exchange should fail");
218 } catch (Exception e) { }
219
220 try {
221 route("/exchange/missing/", QUEUE, "", false);
222 fail("Missing exchange should fail");
223 } catch (Exception e) { }
224
225218 route("/topic/#.c.*", "/topic/a.b.c.d", "", true);
226219 route("/topic/#.c.*", "/exchange/amq.topic", "a.b.c.d", true);
227220 route("/exchange/amq.topic/#.y.*", "/topic/w.x.y.z", "", true);
239232 route(QUEUE, "/exchange/amq.fanout", "", false);
240233 route(QUEUE, "/exchange/amq.headers", "", false);
241234 emptyQueue(QUEUE);
235 }
236
237 public void testRoutingInvalidRoutes() throws Exception {
238 ConnectionFactory factory = new ConnectionFactory();
239 com.rabbitmq.client.Connection connection = factory.newConnection();
240 Channel channel = connection.createChannel();
241 channel.queueDeclare("transient", false, false, false, null);
242 connection.close();
243
244 for (String dest : Arrays.asList("/exchange/missing", "/queue/transient", "/fruit/orange")) {
245 routeInvalidSource(dest);
246 routeInvalidTarget(dest);
247 }
242248 }
243249
244250 private void emptyQueue(String q) throws Exception {
290296 conn.close();
291297 }
292298
299 private void routeInvalidSource(String consumerSource) throws Exception {
300 AMQPContext ctx = new AMQPContext(AMQPContext.CLIENT);
301 Connection conn = new Connection(ctx, host, port, false);
302 conn.connect();
303 Session s = conn.createSession(INBOUND_WINDOW, OUTBOUND_WINDOW);
304 try {
305 Consumer c = s.createConsumer(consumerSource, CONSUMER_LINK_CREDIT, QoS.AT_LEAST_ONCE, false, null);
306 c.close();
307 fail("Source '" + consumerSource + "' should fail");
308 }
309 catch (Exception e) {
310 // no-op
311 }
312 finally {
313 conn.close();
314 }
315 }
316
317 private void routeInvalidTarget(String producerTarget) throws Exception {
318 AMQPContext ctx = new AMQPContext(AMQPContext.CLIENT);
319 Connection conn = new Connection(ctx, host, port, false);
320 conn.connect();
321 Session s = conn.createSession(INBOUND_WINDOW, OUTBOUND_WINDOW);
322 try {
323 Producer p = s.createProducer(producerTarget, QoS.AT_LEAST_ONCE);
324 p.close();
325 fail("Target '" + producerTarget + "' should fail");
326 }
327 catch (Exception e) {
328 // no-op
329 }
330 finally {
331 conn.close();
332 }
333 }
334
293335 // TODO: generalise to a comparison of all immutable parts of messages
294336 private boolean compareMessageData(AMQPMessage m1, AMQPMessage m2) throws IOException {
295337 ByteArrayOutputStream b1 = new ByteArrayOutputStream();
0 ## Overview
1
2 RabbitMQ projects use pull requests to discuss, collaborate on and accept code contributions.
3 Pull requests are the primary venue for discussing code changes.
4
5 ## How to Contribute
6
7 The process is fairly standard:
8
9 * Fork the repository or repositories you plan on contributing to
10 * Clone [RabbitMQ umbrella repository](https://github.com/rabbitmq/rabbitmq-public-umbrella)
11 * `cd umbrella`, `make co`
12 * Create a branch with a descriptive name in the relevant repositories
13 * Make your changes, run tests, commit with a [descriptive message](http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html), push to your fork
14 * Submit pull requests with an explanation of what has been changed and **why**
15 * Submit a filled out and signed [Contributor Agreement](https://github.com/rabbitmq/ca#how-to-submit) if needed (see below)
16 * Be patient. We will get to your pull request eventually
17
18 If what you are going to work on is a substantial change, please first ask the core team
19 for their opinion on the [RabbitMQ mailing list](https://groups.google.com/forum/#!forum/rabbitmq-users).
20
21
22 ## (Brief) Code of Conduct
23
24 In one line: don't be a dick.
25
26 Be respectful to the maintainers and other contributors. Open source
27 contributors put long hours into developing projects and doing user
28 support. Those projects and user support are available for free. We
29 believe this deserves some respect.
30
31 Be respectful to people of all races, genders, religious beliefs and
32 political views. Regardless of how brilliant a pull request is
33 technically, we will not tolerate disrespectful or aggressive
34 behaviour.
35
36 Contributors who violate this straightforward Code of Conduct will see
37 their pull requests closed and locked.
38
39
40 ## Contributor Agreement
41
42 If you want to contribute a non-trivial change, please submit a signed copy of our
43 [Contributor Agreement](https://github.com/rabbitmq/ca#how-to-submit) around the time
44 you submit your pull request. This will make it much easier (in some cases, possible)
45 for the RabbitMQ team at Pivotal to merge your contribution.
46
47
48 ## Where to Ask Questions
49
50 If something isn't clear, feel free to ask on our [mailing list](https://groups.google.com/forum/#!forum/rabbitmq-users).
55
66 sudo apt-get --yes purge slapd
77 sudo rm -rf /var/lib/ldap
8 sudo apt-get --yes install slapd
8 sudo apt-get --yes install slapd ldap-utils
99 sleep 1
1010
1111 DIR=$(dirname $0)
11 DEPS:=rabbitmq-server rabbitmq-erlang-client eldap-wrapper
22
33 ifeq ($(shell nc -z localhost 389 && echo true),true)
4 WITH_BROKER_TEST_COMMANDS:=eunit:test(rabbit_auth_backend_ldap_test,[verbose])
4 WITH_BROKER_TEST_COMMANDS:=eunit:test([rabbit_auth_backend_ldap_unit_test,rabbit_auth_backend_ldap_test],[verbose])
55 WITH_BROKER_TEST_CONFIG:=$(PACKAGE_DIR)/etc/rabbit-test
6 else
7 $(warning Not running LDAP tests; no LDAP server found on localhost)
68 endif
2020 -include_lib("eldap/include/eldap.hrl").
2121 -include_lib("rabbit_common/include/rabbit.hrl").
2222
23 -behaviour(rabbit_auth_backend).
24
25 -export([description/0]).
26 -export([check_user_login/2, check_vhost_access/2, check_resource_access/3]).
23 -behaviour(rabbit_authn_backend).
24 -behaviour(rabbit_authz_backend).
25
26 -export([user_login_authentication/2, user_login_authorization/1,
27 check_vhost_access/3, check_resource_access/3]).
2728
2829 -define(L(F, A), log("LDAP " ++ F, A)).
2930 -define(L1(F, A), log(" LDAP " ++ F, A)).
3536
3637 %%--------------------------------------------------------------------
3738
38 description() ->
39 [{name, <<"LDAP">>},
40 {description, <<"LDAP authentication / authorisation">>}].
41
42 %%--------------------------------------------------------------------
43
44 check_user_login(Username, []) ->
39 user_login_authentication(Username, []) ->
4540 %% Without password, e.g. EXTERNAL
4641 ?L("CHECK: passwordless login for ~s", [Username]),
4742 R = with_ldap(creds(none),
5045 [Username, log_result(R)]),
5146 R;
5247
53 check_user_login(Username, [{password, <<>>}]) ->
48 user_login_authentication(Username, [{password, <<>>}]) ->
5449 %% Password "" is special in LDAP, see
5550 %% https://tools.ietf.org/html/rfc4513#section-5.1.2
5651 ?L("CHECK: unauthenticated login for ~s", [Username]),
5752 ?L("DECISION: unauthenticated login for ~s: denied", [Username]),
5853 {refused, "user '~s' - unauthenticated bind not allowed", [Username]};
5954
60 check_user_login(User, [{password, PW}]) ->
55 user_login_authentication(User, [{password, PW}]) ->
6156 ?L("CHECK: login for ~s", [User]),
6257 R = case dn_lookup_when() of
6358 prebind -> UserDN = username_to_dn_prebind(User),
6964 ?L("DECISION: login for ~s: ~p", [User, log_result(R)]),
7065 R;
7166
72 check_user_login(Username, AuthProps) ->
67 user_login_authentication(Username, AuthProps) ->
7368 exit({unknown_auth_props, Username, AuthProps}).
7469
75 check_vhost_access(User = #user{username = Username,
76 impl = #impl{user_dn = UserDN}}, VHost) ->
70 user_login_authorization(Username) ->
71 case user_login_authentication(Username, []) of
72 {ok, #auth_user{impl = Impl}} -> {ok, Impl};
73 Else -> Else
74 end.
75
76 check_vhost_access(User = #auth_user{username = Username,
77 impl = #impl{user_dn = UserDN}},
78 VHost, _Sock) ->
7779 Args = [{username, Username},
7880 {user_dn, UserDN},
7981 {vhost, VHost}],
8385 [log_vhost(Args), log_user(User), log_result(R)]),
8486 R.
8587
86 check_resource_access(User = #user{username = Username,
87 impl = #impl{user_dn = UserDN}},
88 check_resource_access(User = #auth_user{username = Username,
89 impl = #impl{user_dn = UserDN}},
8890 #resource{virtual_host = VHost, kind = Type, name = Name},
8991 Permission) ->
9092 Args = [{username, Username},
132134 evaluate({in_group, DNPattern, "member"}, Args, User, LDAP);
133135
134136 evaluate0({in_group, DNPattern, Desc}, Args,
135 #user{impl = #impl{user_dn = UserDN}}, LDAP) ->
137 #auth_user{impl = #impl{user_dn = UserDN}}, LDAP) ->
136138 Filter = eldap:equalityMatch(Desc, UserDN),
137139 DN = fill(DNPattern, Args),
138140 R = object_exists(DN, Filter, LDAP),
336338 unknown -> username_to_dn(Username, LDAP, dn_lookup_when());
337339 _ -> PrebindUserDN
338340 end,
339 User = #user{username = Username,
340 auth_backend = ?MODULE,
341 impl = #impl{user_dn = UserDN,
342 password = Password}},
343 TagRes = [begin
344 ?L1("CHECK: does ~s have tag ~s?", [Username, Tag]),
345 R = evaluate(Q, [{username, Username},
346 {user_dn, UserDN}], User, LDAP),
347 ?L1("DECISION: does ~s have tag ~s? ~p",
348 [Username, Tag, R]),
349 {Tag, R}
350 end || {Tag, Q} <- env(tag_queries)],
351 case [E || {_, E = {error, _}} <- TagRes] of
352 [] -> {ok, User#user{tags = [Tag || {Tag, true} <- TagRes]}};
353 [E | _] -> E
354 end.
341 User = #auth_user{username = Username,
342 impl = #impl{user_dn = UserDN,
343 password = Password}},
344 DTQ = fun (LDAPn) -> do_tag_queries(Username, UserDN, User, LDAPn) end,
345 TagRes = case env(other_bind) of
346 as_user -> DTQ(LDAP);
347 _ -> with_ldap(creds(User), DTQ)
348 end,
349 case TagRes of
350 {ok, L} -> case [E || {_, E = {error, _}} <- L] of
351 [] -> Tags = [Tag || {Tag, true} <- L],
352 {ok, User#auth_user{tags = Tags}};
353 [E | _] -> E
354 end;
355 E -> E
356 end.
357
358 do_tag_queries(Username, UserDN, User, LDAP) ->
359 {ok, [begin
360 ?L1("CHECK: does ~s have tag ~s?", [Username, Tag]),
361 R = evaluate(Q, [{username, Username},
362 {user_dn, UserDN}], User, LDAP),
363 ?L1("DECISION: does ~s have tag ~s? ~p",
364 [Username, Tag, R]),
365 {Tag, R}
366 end || {Tag, Q} <- env(tag_queries)]}.
355367
356368 dn_lookup_when() -> case {env(dn_lookup_attribute), env(dn_lookup_bind)} of
357369 {none, _} -> never;
391403
392404 creds(none, as_user) ->
393405 {error, "'other_bind' set to 'as_user' but no password supplied"};
394 creds(#user{impl = #impl{user_dn = UserDN, password = Password}}, as_user) ->
395 {ok, {UserDN, Password}};
406 creds(#auth_user{impl = #impl{user_dn = UserDN, password = PW}}, as_user) ->
407 {ok, {UserDN, PW}};
396408 creds(_, Creds) ->
397409 {ok, Creds}.
398410
407419 ?L2("template result: \"~s\"", [R]),
408420 R.
409421
410 log_result({ok, #user{}}) -> ok;
411 log_result(true) -> ok;
412 log_result(false) -> denied;
413 log_result({refused, _, _}) -> denied;
414 log_result(E) -> E.
415
416 log_user(#user{username = U}) -> rabbit_misc:format("\"~s\"", [U]).
422 log_result({ok, #auth_user{}}) -> ok;
423 log_result(true) -> ok;
424 log_result(false) -> denied;
425 log_result({refused, _, _}) -> denied;
426 log_result(E) -> E.
427
428 log_user(#auth_user{username = U}) -> rabbit_misc:format("\"~s\"", [U]).
417429
418430 log_vhost(Args) ->
419431 rabbit_misc:format("access to vhost \"~s\"", [pget(vhost, Args)]).
2424 Var = [[$\\, $$, ${] ++ atom_to_list(K) ++ [$}]],
2525 fill(re:replace(Fmt, Var, [to_repl(V)], [global]), T).
2626
27 to_repl(V) when is_atom(V) ->
28 atom_to_list(V);
29 to_repl(V) ->
30 V.
27 to_repl(V) when is_atom(V) -> to_repl(atom_to_list(V));
28 to_repl(V) when is_binary(V) -> to_repl(binary_to_list(V));
29 to_repl([]) -> [];
30 to_repl([$\\ | T]) -> [$\\, $\\ | to_repl(T)];
31 to_repl([$& | T]) -> [$\\, $& | to_repl(T)];
32 to_repl([H | T]) -> [H | to_repl(T)].
33
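The rewritten `to_repl/1` above escapes `\` and `&` in user-supplied values because both characters are special in the replacement argument of Erlang's `re:replace/4`; without the escaping, a username containing them could mangle the filled template. A rough Python sketch of the same escaping (a hypothetical helper for illustration, not part of the plugin):

```python
def to_repl(value):
    """Mirror the Erlang to_repl/1 above: normalise atoms/binaries to a
    string, then escape backslash and ampersand so re:replace/4 treats
    the substituted value literally."""
    s = str(value)
    # Backslash must be escaped first, so escapes added for '&' survive.
    return s.replace("\\", "\\\\").replace("&", "\\&")
```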
0 %% The contents of this file are subject to the Mozilla Public License
1 %% Version 1.1 (the "License"); you may not use this file except in
2 %% compliance with the License. You may obtain a copy of the License
3 %% at http://www.mozilla.org/MPL/
4 %%
5 %% Software distributed under the License is distributed on an "AS IS"
6 %% basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See
7 %% the License for the specific language governing rights and
8 %% limitations under the License.
9 %%
10 %% The Original Code is RabbitMQ
11 %%
12 %% The Initial Developer of the Original Code is GoPivotal, Inc.
13 %% Copyright (c) 2007-2014 GoPivotal, Inc. All rights reserved.
14 %%
15
16 -module(rabbit_auth_backend_ldap_unit_test).
17
18 -include_lib("eunit/include/eunit.hrl").
19
20 fill_test() ->
21 F = fun(Fmt, Args, Res) ->
22 ?assertEqual(Res, rabbit_auth_backend_ldap_util:fill(Fmt, Args))
23 end,
24 F("x${username}x", [{username, "ab"}], "xabx"),
25 F("x${username}x", [{username, ab}], "xabx"),
26 F("x${username}x", [{username, <<"ab">>}], "xabx"),
27 F("x${username}x", [{username, ""}], "xx"),
28 F("x${username}x", [{fusername, "ab"}], "x${username}x"),
29 F("x${usernamex", [{username, "ab"}], "x${usernamex"),
30 F("x${username}x", [{username, "a\\b"}], "xa\\bx"),
31 F("x${username}x", [{username, "a&b"}], "xa&bx"),
32 ok.
5656 Username = case rabbit_net:peercert(Sock) of
5757 {ok, C} ->
5858 case rabbit_ssl:peer_cert_auth_name(C) of
59 unsafe -> {refused, "configuration unsafe", []};
60 not_found -> {refused, "no name found", []};
59 unsafe -> {refused, none,
60 "configuration unsafe", []};
61 not_found -> {refused, none, "no name found", []};
6162 Name -> Name
6263 end;
6364 {error, no_peercert} ->
64 {refused, "no peer certificate", []};
65 {refused, none, "no peer certificate", []};
6566 nossl ->
66 {refused, "not SSL connection", []}
67 {refused, none, "not SSL connection", []}
6768 end,
6869 #state{username = Username}.
6970
7071 handle_response(_Response, #state{username = Username}) ->
7172 case Username of
72 {refused, _, _} = E ->
73 {refused, _, _, _} = E ->
7374 E;
7475 _ ->
7576 rabbit_access_control:check_user_login(Username, [])
1818 rabbit_command_assembler,
1919 rabbit_exchange_type,
2020 rabbit_exchange_decorator,
21 rabbit_auth_backend,
21 rabbit_authn_backend,
22 rabbit_authz_backend,
2223 rabbit_auth_mechanism,
2324 rabbit_framing_amqp_0_8,
2425 rabbit_framing_amqp_0_9_1,
7373 {ok, State};
7474 handle_message(closing_timeout, State = #state{closing_reason = Reason}) ->
7575 {stop, {closing_timeout, Reason}, State};
76 handle_message({'DOWN', _MRef, process, _ConnSup, Reason},
77 State = #state{node = Node}) ->
76 handle_message({'DOWN', _MRef, process, _ConnSup, Reason}, State) ->
7877 {stop, {remote_node_down, Reason}, State};
7978 handle_message(Msg, State) ->
8079 {stop, {unexpected_msg, Msg}, State}.
105105 #'queue.declare'{queue = Queue,
106106 nowait = true},
107107 queue, Params1),
108 amqp_channel:cast(Channel, Method),
108 case Method#'queue.declare'.nowait of
109 true -> amqp_channel:cast(Channel, Method);
110 false -> amqp_channel:call(Channel, Method)
111 end,
109112 sets:add_element(Queue, State)
110113 end,
111114 {ok, Queue, State1};
184187 Val -> Method#'queue.declare'{auto_delete = Val}
185188 end.
186189
190 update_queue_declare_nowait(Method, Params) ->
191 case proplists:get_value(nowait, Params) of
192 undefined -> Method;
193 Val -> Method#'queue.declare'{nowait = Val}
194 end.
195
187196 queue_declare_method(#'queue.declare'{} = Method, Type, Params) ->
188197 %% defaults
189198 Method1 = case proplists:get_value(durable, Params, false) of
195204 Method2 = lists:foldl(fun (F, Acc) -> F(Acc, Params) end,
196205 Method1, [fun update_queue_declare_arguments/2,
197206 fun update_queue_declare_exclusive/2,
198 fun update_queue_declare_auto_delete/2]),
207 fun update_queue_declare_auto_delete/2,
208 fun update_queue_declare_nowait/2]),
199209 case {Type, proplists:get_value(subscription_queue_name_gen, Params)} of
200210 {topic, SQNG} when is_function(SQNG) ->
201211 Method2#'queue.declare'{queue = SQNG()};
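The `queue.declare` defaults above are built by folding a list of updater functions over the method record; the new `update_queue_declare_nowait/2` slots into that fold, and each updater overrides its field only when the corresponding option is present in `Params`. A rough Python analogue of the pattern (hypothetical names, with a dict standing in for the Erlang record):

```python
from functools import reduce

def update_durable(method, params):
    # Override only when the caller supplied the option.
    if "durable" in params:
        method["durable"] = params["durable"]
    return method

def update_nowait(method, params):
    if "nowait" in params:
        method["nowait"] = params["nowait"]
    return method

def declare_method(params):
    # Defaults, as in the Erlang #'queue.declare'{} construction above.
    method = {"durable": False, "nowait": True}
    return reduce(lambda m, f: f(m, params),
                  [update_durable, update_nowait], method)
```

The fold keeps each option's handling in its own small function, so adding a new option (as this diff does for `nowait`) means appending one updater rather than editing a monolithic constructor.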
0 ## Overview
1
2 RabbitMQ projects use pull requests to discuss, collaborate on and accept code contributions.
3 Pull requests is the primary place of discussing code changes.
4
5 ## How to Contribute
6
7 The process is fairly standard:
8
9 * Fork the repository or repositories you plan on contributing to
10 * Clone [RabbitMQ umbrella repository](https://github.com/rabbitmq/rabbitmq-public-umbrella)
11 * `cd umbrella`, `make co`
12 * Create a branch with a descriptive name in the relevant repositories
13 * Make your changes, run tests, commit with a [descriptive message](http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html), push to your fork
14 * Submit pull requests with an explanation what has been changed and **why**
15 * Submit a filled out and signed [Contributor Agreement](https://github.com/rabbitmq/ca#how-to-submit) if needed (see below)
16 * Be patient. We will get to your pull request eventually
17
18 If what you are going to work on is a substantial change, please first ask the core team
19 of their opinion on [RabbitMQ mailing list](https://groups.google.com/forum/#!forum/rabbitmq-users).
20
21
22 ## (Brief) Code of Conduct
23
24 In one line: don't be a dick.
25
26 Be respectful to the maintainers and other contributors. Open source
27 contributors put long hours into developing projects and doing user
28 support. Those projects and user support are available for free. We
29 believe this deserves some respect.
30
31 Be respectful to people of all races, genders, religious beliefs and
32 political views. Regardless of how brilliant a pull request is
33 technically, we will not tolerate disrespectful or aggressive
34 behaviour.
35
36 Contributors who violate this straightforward Code of Conduct will see
37 their pull requests closed and locked.
38
39
40 ## Contributor Agreement
41
42 If you want to contribute a non-trivial change, please submit a signed copy of our
43 [Contributor Agreement](https://github.com/rabbitmq/ca#how-to-submit) around the time
44 you submit your pull request. This will make it much easier (and in some cases
45 possible) for the RabbitMQ team at Pivotal to merge your contribution.
46
47
48 ## Where to Ask Questions
49
50 If something isn't clear, feel free to ask on our [mailing list](https://groups.google.com/forum/#!forum/rabbitmq-users).
7373 </td>
7474 <td class="r">
7575 <% if (link.local_channel) { %>
76 <%= fmt_rate(link.local_channel.message_stats, 'confirm') %>
76 <%= fmt_detail_rate(link.local_channel.message_stats, 'confirm') %>
7777 <% } %>
7878 </td>
7979 <td><%= link.timestamp %></td>
1515 # Copyright (c) 2010-2014 GoPivotal, Inc. All rights reserved.
1616
1717 import sys
18 if sys.version_info[0] < 2 or sys.version_info[1] < 6:
19 print "Sorry, rabbitmqadmin requires at least Python 2.6."
18 if sys.version_info[0] < 2 or (sys.version_info[0] == 2 and sys.version_info[1] < 6):
19 print("Sorry, rabbitmqadmin requires at least Python 2.6.")
2020 sys.exit(1)
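The hunk above fixes a real bug in the version guard: the old expression `sys.version_info[0] < 2 or sys.version_info[1] < 6` rejects any interpreter whose minor version is below 6, including Python 3.x releases. A minimal sketch of both guards applied to sample version tuples:

```python
# Sketch of the old vs. new version guards from the hunk above.

def old_check(info):
    # Buggy: fires whenever the minor version is < 6, even on Python 3.
    return info[0] < 2 or info[1] < 6

def new_check(info):
    # Fixed: only fires for interpreters older than Python 2.6.
    return info[0] < 2 or (info[0] == 2 and info[1] < 6)

print(old_check((3, 4)))  # True  -- Python 3.4 wrongly rejected
print(new_check((3, 4)))  # False -- Python 3.4 accepted
print(new_check((2, 5)))  # True  -- Python 2.5 still rejected
```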
2121
22 from ConfigParser import ConfigParser, NoSectionError
2322 from optparse import OptionParser, TitledHelpFormatter
24 import httplib
2523 import urllib
26 import urlparse
2724 import base64
2825 import json
2926 import os
3027 import socket
28
29 if sys.version_info[0] == 2:
30 from ConfigParser import ConfigParser, NoSectionError
31 import httplib
32 import urlparse
33 from urllib import quote_plus
34 def b64(s):
35 return base64.b64encode(s)
36 else:
37 from configparser import ConfigParser, NoSectionError
38 import http.client as httplib
39 import urllib.parse as urlparse
40 from urllib.parse import quote_plus
41 def b64(s):
42 return base64.b64encode(s.encode('utf-8')).decode('utf-8')
3143
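The compatibility shim above picks module names by major version and wraps `base64.b64encode`, which returns `bytes` on Python 3. A self-contained sketch of the Python 3 branch, using an example credentials string:

```python
import base64

def b64(s):
    # Python 3 branch of the shim: str -> bytes, base64-encode,
    # then back to str so the result can go into an HTTP header.
    return base64.b64encode(s.encode('utf-8')).decode('utf-8')

# The script later builds its Basic auth header from this helper:
header = "Basic " + b64("guest:guest")
print(header)  # Basic Z3Vlc3Q6Z3Vlc3Q=
```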
3244 VERSION = '%%VSN%%'
3345
345357 try:
346358 config.read(options.config)
347359 new_conf = dict(config.items(options.node))
348 except NoSectionError, error:
360 except NoSectionError as error:
349361 if options.node == "default":
350362 pass
351363 else:
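The hunk above changes only the `except ... as` syntax; the surrounding code reads one `[node]` section of an INI-style config into a dict, tolerating a missing section only for the default node. A hedged sketch of that pattern (the section content here is illustrative, not the real config file):

```python
from configparser import ConfigParser, NoSectionError

config = ConfigParser()
# rabbitmqadmin reads this from a file; read_string keeps the sketch
# self-contained.
config.read_string("""
[default]
hostname = localhost
port = 15672
""")

try:
    new_conf = dict(config.items("default"))
except NoSectionError:
    # A missing "default" section is not an error.
    new_conf = {}

print(new_conf["port"])  # 15672
```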
388400 method()
389401
390402 def output(s):
391 print maybe_utf8(s, sys.stdout)
403 print(maybe_utf8(s, sys.stdout))
392404
393405 def die(s):
394406 sys.stderr.write(maybe_utf8("*** {0}\n".format(s), sys.stderr))
395407 exit(1)
396408
397409 def maybe_utf8(s, stream):
398 if stream.isatty():
410 if sys.version_info[0] == 3 or stream.isatty():
399411 # It will have an encoding, which Python will respect
400412 return s
401413 else:
428440 else:
429441 conn = httplib.HTTPConnection(self.options.hostname,
430442 self.options.port)
431 headers = {"Authorization":
432 "Basic " + base64.b64encode(self.options.username + ":" +
433 self.options.password)}
443 auth = (self.options.username + ":" + self.options.password)
444
445 headers = {"Authorization": "Basic " + b64(auth)}
434446 if body != "":
435447 headers["Content-Type"] = "application/json"
436448 try:
437449 conn.request(method, path, body, headers)
438 except socket.error, e:
450 except socket.error as e:
439451 die("Could not connect: {0}".format(e))
440452 resp = conn.getresponse()
441453 if resp.status == 400:
453465 if resp.status < 200 or resp.status > 400:
454466 raise Exception("Received %d %s for path %s\n%s"
455467 % (resp.status, resp.reason, path, resp.read()))
456 return resp.read()
468 return resp.read().decode('utf-8')
457469
458470 def verbose(self, string):
459471 if self.options.verbose:
481493 assert_usage(False, """help topic must be one of:
482494 subcommands
483495 config""")
484 print usage
496 print(usage)
485497 exit(0)
486498
487499 def invoke_publish(self):
488500 (uri, upload) = self.parse_args(self.args, EXTRA_VERBS['publish'])
489501 if not 'payload' in upload:
490502 data = sys.stdin.read()
491 upload['payload'] = base64.b64encode(data)
503 upload['payload'] = b64(data)
492504 upload['payload_encoding'] = 'base64'
493505 resp = json.loads(self.post(uri, json.dumps(upload)))
494506 if resp['routed']:
544556 uri = "/%s" % obj_type
545557 query = []
546558 if obj_info['vhost'] and self.options.vhost:
547 uri += "/%s" % urllib.quote_plus(self.options.vhost)
559 uri += "/%s" % quote_plus(self.options.vhost)
548560 cols = self.args[1:]
549561 if cols == [] and 'cols' in obj_info and self.use_cols():
550562 cols = obj_info['cols']
618630 uri_args = {}
619631 for k in upload:
620632 v = upload[k]
621 if v and isinstance(v, basestring):
622 uri_args[k] = urllib.quote_plus(v)
633 if v and isinstance(v, (str, bytes)):
634 uri_args[k] = quote_plus(v)
623635 if k == 'destination_type':
624636 uri_args['destination_char'] = v[0]
625637 uri = uri_template.format(**uri_args)
629641 try:
630642 return json.loads(text)
631643 except ValueError:
632 print "Could not parse JSON:\n {0}".format(text)
644 print("Could not parse JSON:\n {0}".format(text))
633645 sys.exit(1)
634646
635647 def format_list(json_list, columns, args, options):
655667 output(string)
656668
657669 def display(self, json_list):
658 depth = sys.maxint
670 depth = sys.maxsize
659671 if len(self.columns) == 0:
660672 depth = int(self.options.depth)
661673 (columns, table) = self.list_to_table(json.loads(json_list), depth)
675687 column = prefix == '' and key or (prefix + '.' + key)
676688 subitem = item[key]
677689 if type(subitem) == dict:
678 if self.obj_info.has_key('json') and key in self.obj_info['json']:
690 if 'json' in self.obj_info and key in self.obj_info['json']:
679691 fun(column, json.dumps(subitem))
680692 else:
681693 if depth < max_depth:
685697 # mind (which come out looking decent); the second
686698 # one has applications in nodes (which look less
687699 # so, but what would look good?).
688 if [x for x in subitem if type(x) != unicode] == []:
700 if [x for x in subitem if type(x) != str] == []:
689701 serialised = " ".join(subitem)
690702 else:
691703 serialised = json.dumps(subitem)
698710
699711 def add_to_row(col, val):
700712 if col in column_ix:
701 row[column_ix[col]] = unicode(val)
713 row[column_ix[col]] = str(val)
702714
703715 if len(self.columns) == 0:
704716 for item in items:
705717 add('', 1, item, add_to_columns)
706 columns = columns.keys()
718 columns = list(columns.keys())
707719 columns.sort(key=column_sort_key)
708720 else:
709721 columns = self.columns
710722
711 for i in xrange(0, len(columns)):
723 for i in range(0, len(columns)):
712724 column_ix[columns[i]] = i
713725 for item in items:
714726 row = len(columns) * ['']
742754 max_width = 0
743755 for col in columns:
744756 max_width = max(max_width, len(col))
745 fmt = "{0:>" + unicode(max_width) + "}: {1}"
757 fmt = "{0:>" + str(max_width) + "}: {1}"
746758 output(sep)
747 for i in xrange(0, len(table)):
748 for j in xrange(0, len(columns)):
759 for i in range(0, len(table)):
760 for j in range(0, len(columns)):
749761 output(fmt.format(columns[j], table[i][j]))
750762 output(sep)
751763
763775 def ascii_table(self, rows):
764776 table = ""
765777 col_widths = [0] * len(rows[0])
766 for i in xrange(0, len(rows[0])):
767 for j in xrange(0, len(rows)):
778 for i in range(0, len(rows[0])):
779 for j in range(0, len(rows)):
768780 col_widths[i] = max(col_widths[i], len(rows[j][i]))
769781 self.ascii_bar(col_widths)
770782 self.ascii_row(col_widths, rows[0], "^")
775787
776788 def ascii_row(self, col_widths, row, align):
777789 txt = "|"
778 for i in xrange(0, len(col_widths)):
779 fmt = " {0:" + align + unicode(col_widths[i]) + "} "
790 for i in range(0, len(col_widths)):
791 fmt = " {0:" + align + str(col_widths[i]) + "} "
780792 txt += fmt.format(row[i]) + "|"
781793 output(txt)
782794
793805 self.options = options
794806
795807 def display_list(self, columns, table):
796 for i in xrange(0, len(table)):
808 for i in range(0, len(table)):
797809 row = []
798 for j in xrange(0, len(columns)):
810 for j in range(0, len(columns)):
799811 row.append("{0}=\"{1}\"".format(columns[j], table[i][j]))
800812 output(" ".join(row))
801813
808820
809821 def display_list(self, columns, table):
810822 ix = None
811 for i in xrange(0, len(columns)):
823 for i in range(0, len(columns)):
812824 if columns[i] == 'name':
813825 ix = i
814826 if ix is not None:
44 WITH_BROKER_TEST_COMMANDS:=rabbit_test_runner:run_in_broker(\"$(PACKAGE_DIR)/test/ebin\",\"$(FILTER)\")
55 WITH_BROKER_TEST_CONFIG:=$(PACKAGE_DIR)/etc/rabbit-test
66 STANDALONE_TEST_COMMANDS:=rabbit_test_runner:run_multi(\"$(UMBRELLA_BASE_DIR)/rabbitmq-server\",\"$(PACKAGE_DIR)/test/ebin\",\"$(FILTER)\",$(COVER),\"/tmp/rabbitmq-multi-node/plugins\")
7 WITH_BROKER_TEST_SCRIPTS:=$(PACKAGE_DIR)/test/src/rabbitmqadmin-test.py
7 WITH_BROKER_TEST_SCRIPTS:=$(PACKAGE_DIR)/test/src/rabbitmqadmin-test-wrapper.sh
88
99 CONSTRUCT_APP_PREREQS:=$(shell find $(PACKAGE_DIR)/priv -type f) $(PACKAGE_DIR)/bin/rabbitmqadmin
1010 define construct_app_commands
201201 <td class="path">/api/nodes/<i>name</i></td>
202202 <td>
203203 An individual node in the RabbitMQ cluster. Add
204 "?memory=true" to get memory statistics.
204 "?memory=true" to get memory statistics, and "?binary=true"
205 to get a breakdown of binary memory use (may be expensive if
206 there are many small binaries in the system).
205207 </td>
206208 </tr>
207209 <tr>
226228 messages. POST to upload an existing set of definitions. Note
227229 that:
228230 <ul>
229 <li>The definitions are merged. Anything already existing is
230 untouched.</li>
231 <li>Conflicts will cause an error.</li>
232 <li>In the event of an error you will be left with a
233 part-applied set of definitions.</li>
231 <li>
232 The definitions are merged. Anything already existing on
233 the server but not in the uploaded definitions is
234 untouched.
235 </li>
236 <li>
237 Conflicting definitions on immutable objects (exchanges,
238 queues and bindings) will cause an error.
239 </li>
240 <li>
241 Conflicting definitions on mutable objects will cause
242 the object in the server to be overwritten with the
243 object from the definitions.
244 </li>
245 <li>
246 In the event of an error you will be left with a
247 part-applied set of definitions.
248 </li>
234249 </ul>
235250 For convenience you may upload a file from a browser to this
236251 URI (i.e. you can use <code>multipart/form-data</code> as
323338 <td>X</td>
324339 <td></td>
325340 <td class="path">/api/exchanges/<i>vhost</i>/<i>name</i></td>
326 <td>An individual exchange. To PUT an exchange, you will need a body looking something like this:
327 <pre>{"type":"direct","auto_delete":false,"durable":true,"internal":false,"arguments":[]}</pre>
328 The <code>type</code> key is mandatory; other keys are optional.</td>
341 <td>
342 An individual exchange. To PUT an exchange, you will need a body looking something like this:
343 <pre>{"type":"direct","auto_delete":false,"durable":true,"internal":false,"arguments":{}}</pre>
344 The <code>type</code> key is mandatory; other keys are optional.
345 <p>
346 When DELETEing an exchange you can add the query string
347 parameter <code>if-unused=true</code>. This prevents the
348 delete from succeeding if the exchange is bound to a queue
349 or as a source to another exchange.
350 </p>
351 </td>
329352 </tr>
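As the table entry above shows, the vhost is percent-encoded into the path, and `arguments` is now a JSON object rather than a list. A sketch of assembling such a PUT request path and body (no broker is contacted; the exchange name is an example, and the default vhost `/` becomes `%2F`):

```python
import json
from urllib.parse import quote_plus

vhost, name = "/", "my-exchange"      # example values
path = "/api/exchanges/%s/%s" % (quote_plus(vhost), quote_plus(name))
body = json.dumps({"type": "direct", "auto_delete": False,
                   "durable": True, "internal": False, "arguments": {}})

print(path)  # /api/exchanges/%2F/my-exchange
```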
330353 <tr>
331354 <td>X</td>
363386 <pre>{"routed": true}</pre>
364387 <code>routed</code> will be true if the message was sent to
365388 at least one queue.
366 <p>Please note that the publish / get paths in the HTTP API are
367 intended for injecting test messages, diagnostics etc - they do not
368 implement reliable delivery and so should be treated as a sysadmin's
369 tool rather than a general API for messaging.</p>
389 <p>
390 Please note that the HTTP API is not ideal for high
391 performance publishing; the need to create a new TCP
392 connection for each message published can limit message
393 throughput compared to AMQP or other protocols using
394 long-lived connections.
395 </p>
370396 </td>
371397 </tr>
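The publish endpoint described above takes a JSON body; rabbitmqadmin's `invoke_publish` (shown earlier in this diff) base64-encodes stdin and sets `payload_encoding` to `base64`. A sketch of building such a body (routing key and payload are example values):

```python
import base64
import json

# Base64-encode the payload, mirroring what invoke_publish does with stdin.
payload = base64.b64encode(b"hello").decode("utf-8")
body = json.dumps({"properties": {}, "routing_key": "my_key",
                   "payload": payload, "payload_encoding": "base64"})

print(json.loads(body)["payload"])  # aGVsbG8=
```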
372398 <tr>
391417 <td>X</td>
392418 <td></td>
393419 <td class="path">/api/queues/<i>vhost</i>/<i>name</i></td>
394 <td>An individual queue. To PUT a queue, you will need a body looking something like this:
395 <pre>{"auto_delete":false,"durable":true,"arguments":[],"node":"rabbit@smacmullen"}</pre>
396 All keys are optional.</td>
420 <td>
421 An individual queue. To PUT a queue, you will need a body looking something like this:
422 <pre>{"auto_delete":false,"durable":true,"arguments":{},"node":"rabbit@smacmullen"}</pre>
423 All keys are optional.
424 <p>
425 When DELETEing a queue you can add the query string
426 parameters <code>if-empty=true</code> and /
427 or <code>if-unused=true</code>. These prevent the delete
428 from succeeding if the queue contains messages, or has
429 consumers, respectively.
430 </p>
431 </td>
397432 </tr>
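The conditional-delete parameters described above go in the query string. A sketch of composing such a path (the queue name is an example; a real client would send this with the DELETE method):

```python
from urllib.parse import quote_plus, urlencode

# Refuse to delete if the queue has messages or consumers.
params = [("if-empty", "true"), ("if-unused", "true")]
path = "/api/queues/%s/%s?%s" % (quote_plus("/"), quote_plus("my-queue"),
                                 urlencode(params))

print(path)  # /api/queues/%2F/my-queue?if-empty=true&if-unused=true
```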
398433 <tr>
399434 <td>X</td>
450485 message payload if it is larger than the size given (in bytes).</li>
451486 </ul>
452487 <p><code>truncate</code> is optional; all other keys are mandatory.</p>
453 <p>Please note that the publish / get paths in the HTTP API are
454 intended for injecting test messages, diagnostics etc - they do not
455 implement reliable delivery and so should be treated as a sysadmin's
456 tool rather than a general API for messaging.</p>
488 <p>
489 Please note that the get path in the HTTP API is intended
490 for diagnostics etc - it does not implement reliable
491 delivery and so should be treated as a sysadmin's tool
492 rather than a general API for messaging.
493 </p>
457494 </td>
458495 </tr>
459496 <tr>
482519 queue. Remember, an exchange and a queue can be bound
483520 together many times! To create a new binding, POST to this
484521 URI. You will need a body looking something like this:
485 <pre>{"routing_key":"my_routing_key","arguments":[]}</pre>
522 <pre>{"routing_key":"my_routing_key","arguments":{}}</pre>
486523 All keys are optional.
487524 The response will contain a <code>Location</code> header
488525 telling you the URI of your new binding.
7171
7272 table { border-collapse: collapse; }
7373 table th { font-weight: normal; color: black; }
74 table th, table td { font: 12px/17px Verdana,sans-serif; padding: 4px; }
74 table th, table td { font: 12px Verdana,sans-serif; padding: 5px 4px; }
7575 table.list th, table.list td { vertical-align: top; min-width: 5em; width: auto; }
7676
7777 table.list { border-width: 1px; margin-bottom: 1em; }
8383 table.list th a.sort .arrow { color: #888; }
8484 table.list td p { margin: 0; padding: 1px 0 0 0; }
8585 table.list td p.warning { margin: 0; padding: 5px; }
86
87 table.list td.plain, table.list td.plain td, table.list td.plain th { border: none; background: none; }
88 table.list th.plain { border-left: none; border-top: none; border-right: none; background: none; }
89 table.list th.plain h3 { margin: 0; border: 0; }
8690
8791 #main .internal-purpose, #main .internal-purpose * { color: #aaa; }
8892
396396 </td>
397397 </tr>
398398 <tr>
399 <td><code>statistics_db_event_queue</code></td>
400 <td>
401 Number of outstanding statistics events yet to be processed
402 by the database.
403 </td>
404 </tr>
405 <tr>
399406 <td><code>statistics_db_node</code></td>
400407 <td>
401408 Name of the cluster node hosting the management statistics database.
423430 </td>
424431 </tr>
425432 <tr>
433 <td><code>cluster_links</code></td>
434 <td>
435 A list of the other nodes in the cluster. For each node,
436 there are details of the TCP connection used to connect to
437 it and statistics on data that has been transferred.
438 </td>
439 </tr>
440 <tr>
426441 <td><code>config_files</code></td>
427442 <td>
428443 List of config files read by the node.
483498 </td>
484499 </tr>
485500 <tr>
501 <td><code>io_read_avg_time</code></td>
502 <td>
503 Average wall time (milliseconds) for each disk read operation in
504 the last statistics interval.
505 </td>
506 </tr>
507 <tr>
508 <td><code>io_read_bytes</code></td>
509 <td>
510 Total number of bytes read from disk by the persister.
511 </td>
512 </tr>
513 <tr>
514 <td><code>io_read_count</code></td>
515 <td>
516 Total number of read operations by the persister.
517 </td>
518 </tr>
519 <tr>
520 <td><code>io_reopen_count</code></td>
521 <td>
522 Total number of times the persister has needed to recycle
523 file handles between queues. In an ideal world this number
524 will be zero; if the number is large, performance might be
525 improved by increasing the number of file handles available
526 to RabbitMQ.
527 </td>
528 </tr>
529 <tr>
530 <td><code>io_seek_avg_time</code></td>
531 <td>
532 Average wall time (milliseconds) for each seek operation in
533 the last statistics interval.
534 </td>
535 </tr>
537 <tr>
538 <td><code>io_seek_count</code></td>
539 <td>
540 Total number of seek operations by the persister.
541 </td>
542 </tr>
543 <tr>
544 <td><code>io_sync_avg_time</code></td>
545 <td>
546 Average wall time (milliseconds) for each fsync() operation in
547 the last statistics interval.
548 </td>
549 </tr>
551 <tr>
552 <td><code>io_sync_count</code></td>
553 <td>
554 Total number of fsync() operations by the persister.
555 </td>
556 </tr>
557 <tr>
558 <td><code>io_write_avg_time</code></td>
559 <td>
560 Average wall time (milliseconds) for each disk write operation in
561 the last statistics interval.
562 </td>
563 </tr>
564 <tr>
565 <td><code>io_write_bytes</code></td>
566 <td>
567 Total number of bytes written to disk by the persister.
568 </td>
569 </tr>
570 <tr>
571 <td><code>io_write_count</code></td>
572 <td>
573 Total number of write operations by the persister.
574 </td>
575 </tr>
576 <tr>
486577 <td><code>log_file</code></td>
487578 <td>
488579 Location of main log file.
504595 <td><code>mem_limit</code></td>
505596 <td>
506597 Point at which the memory alarm will go off.
598 </td>
599 </tr>
600 <tr>
601 <td><code>mnesia_disk_tx_count</code></td>
602 <td>
603 Number of Mnesia transactions which have been performed that
604 required writes to disk (e.g. creating a durable
605 queue). Only transactions which originated on this node are
606 included.
607 </td>
608 </tr>
609 <tr>
610 <td><code>mnesia_ram_tx_count</code></td>
611 <td>
612 Number of Mnesia transactions which have been performed that
613 did not require writes to disk (e.g. creating a transient
614 queue). Only transactions which originated on this node are
615 included.
616 </td>
617 </tr>
618 <tr>
619 <td><code>msg_store_read_count</code></td>
620 <td>
621 Number of messages which have been read from the message store.
622 </td>
623 </tr>
624 <tr>
625 <td><code>msg_store_write_count</code></td>
626 <td>
627 Number of messages which have been written to the message store.
507628 </td>
508629 </tr>
509630 <tr>
547668 <td><code>processors</code></td>
548669 <td>
549670 Number of cores detected and usable by Erlang.
671 </td>
672 </tr>
673 <tr>
674 <td><code>queue_index_journal_write_count</code></td>
675 <td>
676 Number of records written to the queue index journal. Each
677 record represents a message being published to a queue,
678 being delivered from a queue, and being acknowledged in a
679 queue.
680 </td>
681 </tr>
682 <tr>
683 <td><code>queue_index_read_count</code></td>
684 <td>
685 Number of records read from the queue index.
686 </td>
687 </tr>
688 <tr>
689 <td><code>queue_index_write_count</code></td>
690 <td>
691 Number of records written to the queue index.
550692 </td>
551693 </tr>
552694 <tr>
1010 ['Acknowledge', 'ack'],
1111 ['Get', 'get'], ['Deliver (noack)', 'deliver_no_ack'],
1212 ['Get (noack)', 'get_no_ack'],
13 ['Return', 'return_unroutable']];
14 return rates_chart_or_text(id, stats, items, fmt_rate, fmt_rate_large, fmt_rate_axis, true, 'Message rates', 'message-rates');
13 ['Return', 'return_unroutable'],
14 ['Disk read', 'disk_reads'],
15 ['Disk write', 'disk_writes']];
16 return rates_chart_or_text(id, stats, items, fmt_rate, fmt_rate_axis, true, 'Message rates', 'message-rates');
1517 }
1618
1719 function queue_lengths(id, stats) {
1820 var items = [['Ready', 'messages_ready'],
1921 ['Unacked', 'messages_unacknowledged'],
2022 ['Total', 'messages']];
21 return rates_chart_or_text(id, stats, items, fmt_msgs, fmt_msgs_large, fmt_num_axis, false, 'Queued messages', 'queued-messages');
23 return rates_chart_or_text(id, stats, items, fmt_num_thousands, fmt_plain_axis, false, 'Queued messages', 'queued-messages');
2224 }
2325
2426 function data_rates(id, stats) {
2527 var items = [['From client', 'recv_oct'], ['To client', 'send_oct']];
26 return rates_chart_or_text(id, stats, items, fmt_rate_bytes, fmt_rate_bytes_large, fmt_rate_bytes_axis, true, 'Data rates');
27 }
28
29 function rates_chart_or_text(id, stats, items, chart_fmt, text_fmt, axis_fmt, chart_rates,
28 return rates_chart_or_text(id, stats, items, fmt_rate_bytes, fmt_rate_bytes_axis, true, 'Data rates');
29 }
30
31 function rates_chart_or_text(id, stats, items, fmt, axis_fmt, chart_rates,
3032 heading, heading_help) {
31 var mode = get_pref('rate-mode-' + id);
33 var prefix = chart_h3(id, heading, heading_help);
34
35 return prefix + rates_chart_or_text_no_heading(
36 id, id, stats, items, fmt, axis_fmt, chart_rates);
37 }
38
39 function rates_chart_or_text_no_heading(type_id, id, stats, items,
40 fmt, axis_fmt, chart_rates) {
41 var mode = get_pref('rate-mode-' + type_id);
3242 var range = get_pref('chart-range');
33 var prefix = chart_h3(id, heading, heading_help);
3443 var res;
35
3644 if (keys(stats).length > 0) {
3745 if (mode == 'chart') {
3846 res = rates_chart(
39 id, id, items, stats, chart_fmt, axis_fmt, 'full', chart_rates);
47 type_id, id, items, stats, fmt, axis_fmt, 'full', chart_rates);
4048 }
4149 else {
42 res = rates_text(items, stats, mode, text_fmt);
50 res = rates_text(items, stats, mode, fmt, chart_rates);
4351 }
4452 if (res == "") res = '<p>Waiting for data...</p>';
4553 }
4654 else {
4755 res = '<p>Currently idle</p>';
4856 }
49 return prefix + '<div class="updatable">' + res + '</div>';
57 return res;
5058 }
5159
5260 function chart_h3(id, heading, heading_help) {
5361 var mode = get_pref('rate-mode-' + id);
5462 var range = get_pref('chart-range');
5563 return '<h3>' + heading +
56 ' <span class="popup-options-link updatable" title="Click to change" ' +
64 ' <span class="popup-options-link" title="Click to change" ' +
5765 'type="rate" for="' + id + '">(' + prefix_title(mode, range) +
5866 ')</span>' + (heading_help == undefined ? '' :
5967 ' <span class="help" id="' + heading_help + '"></span>') +
7886 var limit = stats[limit_key];
7987 if (typeof used == 'number') {
8088 return node_stat(used_key, 'Used', limit_key, 'available', stats,
81 fmt_num_obj, fmt_num_axis,
89 fmt_plain, fmt_plain_axis,
8290 fmt_color(used / limit, thresholds));
8391 } else {
8492 return used;
9098 var limit = stats[limit_key];
9199 if (typeof used == 'number') {
92100 return node_stat_bar(used_key, limit_key, 'available', stats,
93 fmt_num_axis, fmt_color(used / limit, thresholds));
101 fmt_plain_axis,
102 fmt_color(used / limit, thresholds));
94103 } else {
95104 return used;
96105 }
97106 }
98107
99 function node_stat(used_key, used_name, limit_key, suffix, stats, rate_fmt,
108 function node_stat(used_key, used_name, limit_key, suffix, stats, fmt,
100109 axis_fmt, colour, help, invert) {
101110 if (get_pref('rate-mode-node-stats') == 'chart') {
102111 var items = [[used_name, used_key], ['Limit', limit_key]];
103112 add_fake_limit_details(used_key, limit_key, stats);
104113 return rates_chart('node-stats', 'node-stats-' + used_key, items, stats,
105 rate_fmt, axis_fmt, 'node', false);
114 fmt, axis_fmt, 'node', false);
106115 } else {
107116 return node_stat_bar(used_key, limit_key, suffix, stats, axis_fmt,
108117 colour, help, invert);
155164 return chart_h3('node-stats', 'Node statistics');
156165 }
157166
158 function rates_chart(type_id, id, items, stats, rate_fmt, axis_fmt, type,
167 function rates_chart(type_id, id, items, stats, fmt, axis_fmt, type,
159168 chart_rates) {
160169 function show(key) {
161170 return get_pref('chart-line-' + id + key) === 'true';
176185 chart_data[id]['data'][name] = stats[key_details];
177186 chart_data[id]['data'][name].ix = ix;
178187 }
188 var value = chart_rates ? pick_rate(fmt, stats, key) :
189 pick_abs(fmt, stats, key);
179190 legend.push({name: name,
180191 key: key,
181 value: rate_fmt(stats, key),
192 value: value,
182193 show: show(key)});
183194 ix++;
184195 }
188199 (chart_rates ? ' chart-rates' : '') + '"></div>';
189200 html += '<table class="legend">';
190201 for (var i = 0; i < legend.length; i++) {
202 if (i % 3 == 0 && i < legend.length - 1) {
203 html += '</table><table class="legend">';
204 }
205
191206 html += '<tr><th><span title="Click to toggle line" ';
192207 html += 'class="rate-visibility-option';
193208 html += legend[i].show ? '' : ' rate-visibility-option-hidden';
200215 return legend.length > 0 ? html : '';
201216 }
202217
203 function rates_text(items, stats, mode, rate_fmt) {
218 function rates_text(items, stats, mode, fmt, chart_rates) {
204219 var res = '';
205220 for (var i in items) {
206221 var name = items[i][0];
208223 var key_details = key + '_details';
209224 if (key_details in stats) {
210225 var details = stats[key_details];
211 res += '<div class="highlight">' + name;
212 res += rate_fmt(stats, key, mode);
213 res += '</div>';
226 res += '<div class="highlight">' + name + '<strong>';
227 res += chart_rates ? pick_rate(fmt, stats, key, mode) :
228 pick_abs(fmt, stats, key, mode);
229 res += '</strong></div>';
214230 }
215231 }
216232 return res == '' ? '' : '<div class="box">' + res + '</div>';
99 if (unknown == undefined) unknown = UNKNOWN_REPR;
1010 if (str == undefined) return unknown;
1111 return fmt_escape_html("" + str);
12 }
13
14 function fmt_bytes(bytes) {
15 if (bytes == undefined) return UNKNOWN_REPR;
16 return fmt_si_prefix(bytes, bytes, 1024, false) + 'B';
1712 }
1813
1914 function fmt_si_prefix(num0, max0, thousand, allow_fractions) {
227222 }
228223 }
229224
230 function fmt_rate(obj, name, mode) {
231 var raw = fmt_rate0(obj, name, mode, fmt_rate_num);
232 return raw == '' ? '' : (raw + '/s');
233 }
234
235 function fmt_rate_bytes(obj, name, mode) {
236 var raw = fmt_rate0(obj, name, mode, fmt_bytes);
237 return raw == '' ? '' : (raw + '/s' +
238 '<sub>(' + fmt_bytes(obj[name]) + ' total)</sub>');
239 }
240
241 function fmt_bytes_obj(obj, name, mode) {
242 return fmt_bytes(obj[name]);
243 }
244
245 function fmt_num_obj(obj, name, mode) {
246 return obj[name];
247 }
248
249 function fmt_rate_large(obj, name, mode) {
250 return '<strong>' + fmt_rate0(obj, name, mode, fmt_rate_num) +
251 '</strong>msg/s';
252 }
253
254 function fmt_rate_bytes_large(obj, name, mode) {
255 return '<strong>' + fmt_rate0(obj, name, mode, fmt_bytes) + '/s</strong>' +
256 '(' + fmt_bytes(obj[name]) + ' total)';
257 }
258
259 function fmt_rate0(obj, name, mode, fmt) {
225 function pick_rate(fmt, obj, name, mode) {
260226 if (obj == undefined || obj[name] == undefined ||
261227 obj[name + '_details'] == undefined) return '';
262228 var details = obj[name + '_details'];
263229 return fmt(mode == 'avg' ? details.avg_rate : details.rate);
264230 }
265231
266 function fmt_msgs(obj, name, mode) {
267 return fmt_msgs0(obj, name, mode) + ' msg';
268 }
269
270 function fmt_msgs_large(obj, name, mode) {
271 return '<strong>' + fmt_msgs0(obj, name, mode) + '</strong>' +
272 fmt_rate0(obj, name, mode, fmt_msgs_rate);
273 }
274
275 function fmt_msgs0(obj, name, mode) {
232 function pick_abs(fmt, obj, name, mode) {
276233 if (obj == undefined || obj[name] == undefined ||
277234 obj[name + '_details'] == undefined) return '';
278235 var details = obj[name + '_details'];
279 return mode == 'avg' ? fmt_rate_num(details.avg) :
280 fmt_num_thousands(obj[name]);
281 }
282
283 function fmt_msgs_rate(num) {
284 if (num > 0) return '+' + fmt_rate_num(num) + ' msg/s';
285 else if (num < 0) return '-' + fmt_rate_num(-num) + ' msg/s';
286 else return '&nbsp;';
236 return fmt(mode == 'avg' ? details.avg : obj[name]);
237 }
238
239 function fmt_detail_rate(obj, name, mode) {
240 return pick_rate(fmt_rate, obj, name, mode);
241 }
242
243 function fmt_detail_rate_bytes(obj, name, mode) {
244 return pick_rate(fmt_rate_bytes, obj, name, mode);
245 }
246
247 // ---------------------------------------------------------------------
248
249 // These are pluggable for charts etc
250
251 function fmt_plain(num) {
252 return num;
253 }
254
255 function fmt_plain_axis(num, max) {
256 return fmt_si_prefix(num, max, 1000, true);
257 }
258
259 function fmt_rate(num) {
260 return fmt_rate_num(num) + '/s';
287261 }
288262
289263 function fmt_rate_axis(num, max) {
290 return fmt_si_prefix(num, max, 1000, true) + '/s';
291 }
292
293 function fmt_num_axis(num, max) {
294 return fmt_si_prefix(num, max, 1000, true);
264 return fmt_plain_axis(num, max) + '/s';
265 }
266
267 function fmt_bytes(bytes) {
268 if (bytes == undefined) return UNKNOWN_REPR;
269 return fmt_si_prefix(bytes, bytes, 1024, false) + 'B';
295270 }
296271
297272 function fmt_bytes_axis(num, max) {
299274 return fmt_bytes(isNaN(num) ? 0 : num);
300275 }
301276
277 function fmt_rate_bytes(num) {
278 return fmt_bytes(num) + '/s';
279 }
302280
303281 function fmt_rate_bytes_axis(num, max) {
304282 return fmt_bytes_axis(num, max) + '/s';
305283 }
284
285 function fmt_ms(num) {
286 return fmt_rate_num(num) + 'ms';
287 }
288
289 // ---------------------------------------------------------------------
306290
307291 function fmt_maybe_vhost(name) {
308292 return vhosts_interesting ?
421405 var plugins = [];
422406 for (var i = 0; i < node.applications.length; i++) {
423407 var application = node.applications[i];
424 if (node.enabled_plugins.indexOf(application.name) != -1) {
408 if (jQuery.inArray(application.name, node.enabled_plugins) != -1 ) {
425409 plugins.push(application.name);
426410 }
427411 }
433417 var result = [];
434418 for (var i = 0; i < node.applications.length; i++) {
435419 var application = node.applications[i];
436 if (node.enabled_plugins.indexOf(application.name) != -1) {
420 if (jQuery.inArray(application.name, node.enabled_plugins) != -1 ) {
437421 result.push(application);
438422 }
439423 }
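The two hunks above swap `Array.prototype.indexOf` for `jQuery.inArray` because older browsers (notably IE8) lack `indexOf` on arrays, while `jQuery.inArray` offers the same contract everywhere: the element's index, or -1 when absent. A plain-JS sketch of that contract (the shim below is illustrative, not jQuery's implementation):

```javascript
// Same contract as jQuery.inArray(elem, arr): index of elem, or -1 if absent.
function inArray(elem, arr) {
    for (var i = 0; i < arr.length; i++) {
        if (arr[i] === elem) return i;
    }
    return -1;
}

// Mirrors the plugin filter above: keep only applications whose name
// appears in the node's enabled_plugins list.
var enabled = ['rabbitmq_management', 'rabbitmq_shovel'];
inArray('rabbitmq_shovel', enabled) != -1;  // true
inArray('rabbitmq_stomp', enabled) != -1;   // false
```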
1919 'x-max-length': {'short': 'Lim', 'type': 'int'},
2020 'x-max-length-bytes': {'short': 'Lim B', 'type': 'int'},
2121 'x-dead-letter-exchange': {'short': 'DLX', 'type': 'string'},
22 'x-dead-letter-routing-key': {'short': 'DLK', 'type': 'string'}};
22 'x-dead-letter-routing-key': {'short': 'DLK', 'type': 'string'},
23 'x-max-priority': {'short': 'Pri', 'type': 'int'}};
2324
2425 // Things that are like arguments that we format the same way in listings.
2526 var IMPLICIT_ARGS = {'durable': {'short': 'D', 'type': 'boolean'},
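Each KNOWN_ARGS entry, including the new `x-max-priority` row, pairs an AMQP queue argument with a short column label and a type for the listings. A sketch of how such metadata might be consumed (the `fmt_arg_short` helper is hypothetical; the real rendering code lives elsewhere in this file):

```javascript
// Hypothetical lookup against the KNOWN_ARGS metadata shown above:
// known arguments render under their short label, others under the raw key.
var KNOWN_ARGS = {'x-max-priority': {'short': 'Pri', 'type': 'int'},
                  'x-dead-letter-exchange': {'short': 'DLX', 'type': 'string'}};

function fmt_arg_short(key) {
    return (key in KNOWN_ARGS) ? KNOWN_ARGS[key]['short'] : key;
}

fmt_arg_short('x-max-priority');      // "Pri"
fmt_arg_short('x-unknown-argument');  // "x-unknown-argument"
```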
136137 // All these are to do with hiding UI elements if
137138 var rates_mode; // ...there are no fine stats
138139 var user_administrator; // ...user is not an admin
140 var user_policymaker; // ...user is not a policymaker
139141 var user_monitor; // ...user cannot monitor
140142 var nodes_interesting; // ...we are not in a cluster
141143 var vhosts_interesting; // ...there is only one vhost
163165 rates_mode = overview.rates_mode;
164166 user_tags = expand_user_tags(user.tags.split(","));
165167 user_administrator = jQuery.inArray("administrator", user_tags) != -1;
168 user_policymaker = jQuery.inArray("policymaker", user_tags) != -1;
166169 user_monitor = jQuery.inArray("monitoring", user_tags) != -1;
167170 replace_content('login-details',
168171 '<p>User: <b>' + fmt_escape_html(user.name) + '</b></p>' +
2727
2828 'queue-dead-letter-routing-key':
2929 'Optional replacement routing key to use when a message is dead-lettered. If this is not set, the message\'s original routing key will be used.<br/>(Sets the "<a target="_blank" href="http://rabbitmq.com/dlx.html">x-dead-letter-routing-key</a>" argument.)',
30
31 'queue-max-priority':
32 'Maximum number of priority levels for the queue to support; if not set, the queue will not support message priorities.<br/>(Sets the "<a target="_blank" href="http://rabbitmq.com/priority.html">x-max-priority</a>" argument.)',
3033
3134 'queue-messages':
3235 '<p>Message counts.</p><p>Note that "in memory" and "persistent" are not mutually exclusive; persistent messages can be in memory as well as on disc, and transient messages can be paged out if memory is tight. Non-durable queues will consider all messages to be transient.</p>',
216219 <dd>Rate at which messages with the \'redelivered\' flag set are being delivered. Note that these messages will <b>also</b> be counted in one of the delivery rates above.</dd>\
217220 <dt>Return</dt>\
218221 <dd>Rate at which basic.return is sent to publishers for unroutable messages published with the \'mandatory\' flag set.</dd>\
219 </dl>',
222 <dt>Disk read</dt>\
223 <dd>Rate at which queues read messages from disk.</dd>\
224 <dt>Disk write</dt>\
225 <dd>Rate at which queues write messages to disk.</dd>\
226 </dl>\
227 <p>\
 228 Note that the last two items originate in queues rather than \
229 channels; they may therefore be slightly out of sync with other \
230 statistics.\
231 </p>',
220232
221233 'disk-monitoring-no-watermark' : 'There is no <a target="_blank" href="http://www.rabbitmq.com/memory.html#diskfreesup">disk space low watermark</a> set. RabbitMQ will not take any action to avoid running out of disk space.',
222234
250262 'plugins' :
251263 'Note that only plugins which are both explicitly enabled and running are shown here.',
252264
265 'io-operations':
266 'Rate of I/O operations. Only operations performed by the message \
267 persister are shown here (e.g. metadata changes in Mnesia or writes \
268 to the log files are not shown).\
269 <dl>\
270 <dt>Read</dt>\
271 <dd>Rate at which data is read from the disk.</dd>\
272 <dt>Write</dt>\
273 <dd>Rate at which data is written to the disk.</dd>\
274 <dt>Seek</dt>\
275 <dd>Rate at which the broker switches position while reading or \
276 writing to disk.</dd>\
277 <dt>Sync</dt>\
278 <dd>Rate at which the broker invokes <code>fsync()</code> to ensure \
279 data is flushed to disk.</dd>\
280 <dt>Reopen</dt>\
281 <dd>Rate at which the broker recycles file handles in order to support \
282 more queues than it has file handles. If this operation is occurring \
283 frequently you may get a performance boost from increasing the number \
284 of file handles available.</dd>\
285 </dl>',
286
287 'mnesia-transactions':
288 'Rate at which Mnesia transactions are initiated on this node (this node \
289 will also take part in Mnesia transactions initiated on other nodes).\
290 <dl>\
291 <dt>RAM only</dt>\
292 <dd>Rate at which RAM-only transactions take place (e.g. creation / \
293 deletion of transient queues).</dd>\
294 <dt>Disk</dt>\
 295 <dd>Rate at which disk (and RAM) transactions take place (e.g. \
296 creation / deletion of durable queues).</dd>\
297 </dl>',
298
299 'persister-operations-msg':
300 'Rate at which per-message persister operations take place on this node. See \
301 <a href="http://www.rabbitmq.com/persistence-conf.html" target="_blank">here</a> \
302 for more information on the persister. \
303 <dl>\
304 <dt>QI Journal</dt>\
305 <dd>Rate at which message information (publishes, deliveries and \
306 acknowledgements) is written to queue index journals.</dd>\
307 <dt>Store Read</dt>\
308 <dd>Rate at which messages are read from the message store.</dd>\
309 <dt>Store Write</dt>\
310 <dd>Rate at which messages are written to the message store.</dd>\
311 </dl>',
312
313 'persister-operations-bulk':
314 'Rate at which whole-file persister operations take place on this node. See \
315 <a href="http://www.rabbitmq.com/persistence-conf.html" target="_blank">here</a> \
316 for more information on the persister. \
317 <dl>\
318 <dt>QI Read</dt>\
319 <dd>Rate at which queue index segment files are read.</dd>\
320 <dt>QI Write</dt>\
 321 <dd>Rate at which queue index segment files are written.</dd>\
322 </dl>',
323
253324 'foo': 'foo' // No comma.
254325 };
255326
462462 return confirm("Are you sure? This object cannot be recovered " +
463463 "after deletion.");
464464 });
465 $('div.section h2, div.section-hidden h2').click(function() {
465 $('div.section h2, div.section-hidden h2').die().live('click', function() {
466466 toggle_visibility($(this));
467467 });
468468 $('label').map(function() {
506506 }
507507 }
508508 });
509 setup_visibility();
510509 $('.help').die().live('click', function() {
511510 help($(this).attr('id'))
512511 });
560559 }
561560
562561 function postprocess_partial() {
562 setup_visibility();
563563 $('.sort').click(function() {
564564 var sort = $(this).attr('sort');
565565 if (current_sort == sort) {
11
22 <div class="section">
33 <h2>Overview</h2>
4 <div class="hider">
4 <div class="hider updatable">
55 <% if (rates_mode != 'none') { %>
66 <%= message_rates('msg-rates-ch', channel.message_stats) %>
77 <% } %>
88
9 <div class="updatable">
109 <h3>Details</h3>
1110 <table class="facts facts-l">
1211 <tr>
6261 <td><%= channel.acks_uncommitted %></td>
6362 </tr>
6463 </table>
65 </div>
6664
6765 </div>
6866 </div>
166166 <% } %>
167167 <% if (rates_mode != 'none') { %>
168168 <% if (show_column('channels', 'rate-publish')) { %>
169 <td class="r"><%= fmt_rate(channel.message_stats, 'publish') %></td>
169 <td class="r"><%= fmt_detail_rate(channel.message_stats, 'publish') %></td>
170170 <% } %>
171171 <% if (show_column('channels', 'rate-confirm')) { %>
172 <td class="r"><%= fmt_rate(channel.message_stats, 'confirm') %></td>
172 <td class="r"><%= fmt_detail_rate(channel.message_stats, 'confirm') %></td>
173173 <% } %>
174174 <% if (show_column('channels', 'rate-return')) { %>
175 <td class="r"><%= fmt_rate(channel.message_stats, 'return_unroutable') %></td>
175 <td class="r"><%= fmt_detail_rate(channel.message_stats, 'return_unroutable') %></td>
176176 <% } %>
177177 <% if (show_column('channels', 'rate-deliver')) { %>
178 <td class="r"><%= fmt_rate(channel.message_stats, 'deliver_get') %></td>
178 <td class="r"><%= fmt_detail_rate(channel.message_stats, 'deliver_get') %></td>
179179 <% } %>
180180 <% if (show_column('channels', 'rate-redeliver')) { %>
181 <td class="r"><%= fmt_rate(channel.message_stats, 'redeliver') %></td>
181 <td class="r"><%= fmt_detail_rate(channel.message_stats, 'redeliver') %></td>
182182 <% } %>
183183 <% if (show_column('channels', 'rate-ack')) { %>
184 <td class="r"><%= fmt_rate(channel.message_stats, 'ack') %></td>
184 <td class="r"><%= fmt_detail_rate(channel.message_stats, 'ack') %></td>
185185 <% } %>
186186 <% } %>
187187 </tr>
11
22 <div class="section">
33 <h2>Overview</h2>
4 <div class="hider">
4 <div class="hider updatable">
55 <%= data_rates('data-rates-conn', connection, 'Data rates') %>
66
7 <div class="updatable">
87 <h3>Details</h3>
98 <table class="facts facts-l">
109 <% if (nodes_interesting) { %>
6160 </tr>
6261 </table>
6362 <% } %>
64 </div>
6563
6664 </div>
6765 </div>
114114 <td><%= fmt_client_name(connection.client_properties) %></td>
115115 <% } %>
116116 <% if (show_column('connections', 'from_client')) { %>
117 <td><%= fmt_rate_bytes(connection, 'recv_oct') %></td>
117 <td><%= fmt_detail_rate_bytes(connection, 'recv_oct') %></td>
118118 <% } %>
119119 <% if (show_column('connections', 'to_client')) { %>
120 <td><%= fmt_rate_bytes(connection, 'send_oct') %></td>
120 <td><%= fmt_detail_rate_bytes(connection, 'send_oct') %></td>
121121 <% } %>
122122 <% if (show_column('connections', 'heartbeat')) { %>
123123 <td class="r"><%= fmt_time(connection.timeout, 's') %></td>
11
22 <div class="section">
33 <h2>Overview</h2>
4 <div class="hider">
4 <div class="hider updatable">
55 <% if (rates_mode != 'none') { %>
66 <%= message_rates('msg-rates-x', exchange.message_stats) %>
77 <% } %>
8
9 <div class="updatable">
108 <h3>Details</h3>
119 <table class="facts">
1210 <tr>
2220 <td><%= fmt_string(exchange.policy, '') %></td>
2321 </tr>
2422 </table>
25 </div>
2623 </div>
2724 </div>
2825
6363 <% } %>
6464 <% if (rates_mode != 'none') { %>
6565 <% if (show_column('exchanges', 'rate-in')) { %>
66 <td class="r"><%= fmt_rate(exchange.message_stats, 'publish_in') %></td>
66 <td class="r"><%= fmt_detail_rate(exchange.message_stats, 'publish_in') %></td>
6767 <% } %>
6868 <% if (show_column('exchanges', 'rate-out')) { %>
69 <td class="r"><%= fmt_rate(exchange.message_stats, 'publish_out') %></td>
69 <td class="r"><%= fmt_detail_rate(exchange.message_stats, 'publish_out') %></td>
7070 <% } %>
7171 <% } %>
7272 </tr>
1919 <% } else { %>
2020 <td><%= link_queue(del.queue.vhost, del.queue.name) %></td>
2121 <% } %>
22 <td class="r"><%= fmt_rate(del.stats, 'deliver_get') %></td>
23 <td class="r"><%= fmt_rate(del.stats, 'ack') %></td>
22 <td class="r"><%= fmt_detail_rate(del.stats, 'deliver_get') %></td>
23 <td class="r"><%= fmt_detail_rate(del.stats, 'ack') %></td>
2424 </tr>
2525 <% } %>
2626 </table>
3333 <% } else { %>
3434 <td><%= link_exchange(pub.exchange.vhost, pub.exchange.name) %></td>
3535 <% } %>
36 <td class="r"><%= fmt_rate(pub.stats, 'publish') %></td>
36 <td class="r"><%= fmt_detail_rate(pub.stats, 'publish') %></td>
3737 <% if (col_confirm) { %>
38 <td class="r"><%= fmt_rate(pub.stats, 'confirm') %></td>
38 <td class="r"><%= fmt_detail_rate(pub.stats, 'confirm') %></td>
3939 <% } %>
4040 </tr>
4141 <% } %>
00 <h1>Node <b><%= node.name %></b></h1>
1
2 <div class="section">
3 <h2>Overview</h2>
4 <div class="hider updatable">
1 <div class="updatable">
2
53 <% if (!node.running) { %>
64 <p class="warning">Node not running</p>
75 <% } else if (node.os_pid == undefined) { %>
86 <p class="warning">Node statistics not available</p>
97 <% } else { %>
108
9 <div class="section">
10 <h2>Overview</h2>
11 <div class="hider">
1112 <div class="box">
1213 <table class="facts facts-l">
1314 <tr>
5455 </tr>
5556 <% } %>
5657 </table>
57 <% } %>
5858 </div>
5959 </div>
6060
6161 <div class="section">
62 <h2>Statistics</h2>
63 <div class="hider">
64 <% if (!node.running) { %>
65 <p class="warning">Node not running</p>
66 <% } else if (node.os_pid == undefined) { %>
67 <p class="warning">Node statistics not available</p>
68 <% } else { %>
62 <h2>Process statistics</h2>
63 <div class="hider">
6964 <%= node_stats_prefs() %>
70 <div class="updatable">
7165 <table class="facts">
7266 <tr>
7367 <th>
10599 <td>
106100 <% if (node.mem_limit != 'memory_monitoring_disabled') { %>
107101 <%= node_stat('mem_used', 'Used', 'mem_limit', 'high watermark', node,
108 fmt_bytes_obj, fmt_bytes_axis,
102 fmt_bytes, fmt_bytes_axis,
109103 node.mem_alarm ? 'red' : 'green',
110104 node.mem_alarm ? 'memory-alarm' : null) %>
111105 <% } else { %>
120114 <td>
121115 <% if (node.disk_free_limit != 'disk_free_monitoring_disabled') { %>
122116 <%= node_stat('disk_free', 'Free', 'disk_free_limit', 'low watermark', node,
123 fmt_bytes_obj, fmt_bytes_axis,
117 fmt_bytes, fmt_bytes_axis,
124118 node.disk_free_alarm ? 'red' : 'green',
125119 node.disk_free_alarm ? 'disk_free-alarm' : null,
126120 true) %>
130124 </td>
131125 </tr>
132126 </table>
133
134 </div>
135 <% } %>
136 </div>
137 </div>
127 </div>
128 </div>
129
130 <div class="section-hidden">
131 <h2>Persistence statistics</h2>
132 <div class="hider">
133 <%= rates_chart_or_text('mnesia-stats-count', node,
134 [['RAM only', 'mnesia_ram_tx_count'],
135 ['Disk', 'mnesia_disk_tx_count']],
136 fmt_rate, fmt_rate_axis, true, 'Mnesia transactions', 'mnesia-transactions') %>
137
138 <%= rates_chart_or_text('persister-msg-stats-count', node,
139 [['QI Journal', 'queue_index_journal_write_count'],
140 ['Store Read', 'msg_store_read_count'],
141 ['Store Write', 'msg_store_write_count']],
142 fmt_rate, fmt_rate_axis, true, 'Persistence operations (messages)', 'persister-operations-msg') %>
143
144 <%= rates_chart_or_text('persister-bulk-stats-count', node,
145 [['QI Read', 'queue_index_read_count'],
146 ['QI Write', 'queue_index_write_count']],
147 fmt_rate, fmt_rate_axis, true, 'Persistence operations (bulk)', 'persister-operations-bulk') %>
148 </div>
149 </div>
150
151 <div class="section-hidden">
152 <h2>I/O statistics</h2>
153 <div class="hider">
154 <%= rates_chart_or_text('persister-io-stats-count', node,
155 [['Read', 'io_read_count'],
156 ['Write', 'io_write_count'],
157 ['Seek', 'io_seek_count'],
158 ['Sync', 'io_sync_count'],
159 ['Reopen', 'io_reopen_count']],
160 fmt_rate, fmt_rate_axis, true, 'I/O operations', 'io-operations') %>
161
162 <%= rates_chart_or_text('persister-io-stats-bytes', node,
163 [['Read', 'io_read_bytes'],
164 ['Write', 'io_write_bytes']],
165 fmt_rate_bytes, fmt_rate_bytes_axis, true, 'I/O data rates') %>
166
167 <%= rates_chart_or_text('persister-io-stats-time', node,
168 [['Read', 'io_read_avg_time'],
169 ['Write', 'io_write_avg_time'],
170 ['Seek', 'io_seek_avg_time'],
171 ['Sync', 'io_sync_avg_time']],
172 fmt_ms, fmt_ms, false, 'I/O average time per operation') %>
173 </div>
174 </div>
175
176 <div class="section-hidden">
177 <h2>Cluster links</h2>
178 <div class="hider">
179 <% if (node.cluster_links.length > 0) { %>
180 <table class="list">
181 <tr>
182 <th>Remote node</th>
183 <th>Local address</th>
184 <th>Local port</th>
185 <th>Remote address</th>
186 <th>Remote port</th>
187 <th class="plain">
188 <%= chart_h3('cluster-link-data-rates', 'Data rates') %>
189 </th>
190 </tr>
191 <%
192 for (var i = 0; i < node.cluster_links.length; i++) {
193 var link = node.cluster_links[i];
194 %>
195 <tr<%= alt_rows(i)%>>
196 <td><%= link_node(link.name) %></td>
197 <td><%= fmt_string(link.sock_addr) %></td>
198 <td><%= fmt_string(link.sock_port) %></td>
199 <td><%= fmt_string(link.peer_addr) %></td>
200 <td><%= fmt_string(link.peer_port) %></td>
201 <td class="plain">
202 <%= rates_chart_or_text_no_heading(
203 'cluster-link-data-rates', 'cluster-link-data-rates' + link.name,
204 link.stats,
205 [['Recv', 'recv_bytes'],
206 ['Send', 'send_bytes']],
207 fmt_rate_bytes, fmt_rate_bytes_axis, true) %>
208 </td>
209 </tr>
210 <% } %>
211 </table>
212 <% } else { %>
213 <p>... no cluster links ...</p>
214 <% } %>
215 </div>
216 </div>
217
218 <% } %>
219
220 </div>
221
222 <!--
223 The next two need to be non-updatable or we will wipe the memory details
 224 as soon as we have drawn them.
225 -->
226
227 <% if (node.running && node.os_pid != undefined) { %>
138228
139229 <div class="section">
140230 <h2>Memory details</h2>
156246 </div>
157247 </div>
158248
249 <% } %>
250
251 <div class="updatable">
252 <% if (node.running && node.os_pid != undefined) { %>
253
159254 <div class="section-hidden">
160255 <h2>Advanced</h2>
161 <div class="hider updatable">
162 <% if (!node.running) { %>
163 <p class="warning">Node not running</p>
164 <% } else if (node.os_pid == undefined) { %>
165 <p class="warning">Node statistics not available</p>
166 <% } else { %>
256 <div class="hider">
167257 <div class="box">
168258 <h3>VM</h3>
169259 <table class="facts">
217307 <h3>Authentication mechanisms</h3>
218308 <%= format('registry', {'list': node.auth_mechanisms, 'node': node, 'show_enabled': true} ) %>
219309
220 <% } %>
221 </div>
222 </div>
310 </div>
311 </div>
312
313 <% } %>
314
315 </div>
11 <% if (user_monitor) { %>
22 <%= format('partition', {'nodes': nodes}) %>
33 <% } %>
4
5 <div class="updatable">
6 <% if (overview.statistics_db_event_queue > 1000) { %>
7 <p class="warning">
8 The management statistics database currently has a queue
9 of <b><%= overview.statistics_db_event_queue %></b> events to
10 process. If this number keeps increasing, so will the memory used by
11 the management plugin.
12
13 <% if (overview.rates_mode != 'none') { %>
14 You may find it useful to set the <code>rates_mode</code> config item
15 to <code>none</code>.
16 <% } %>
17 </p>
18 <% } %>
19 </div>
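The warning template above compares `statistics_db_event_queue` against a fixed threshold of 1000 and, when rates are enabled, suggests setting `rates_mode` to `none`. The same logic expressed as a plain function (a sketch; `stats_queue_warning` is not a function in the source):

```javascript
// Sketch of the template's threshold logic: return the warning text, or
// null when the stats DB backlog is within bounds.
function stats_queue_warning(overview) {
    if (overview.statistics_db_event_queue > 1000) {
        var msg = 'The management statistics database currently has a queue of ' +
                  overview.statistics_db_event_queue + ' events to process.';
        if (overview.rates_mode != 'none') {
            msg += ' You may find it useful to set the rates_mode config item to none.';
        }
        return msg;
    }
    return null;
}

stats_queue_warning({statistics_db_event_queue: 5, rates_mode: 'basic'});   // null
stats_queue_warning({statistics_db_event_queue: 2000, rates_mode: 'basic'}); // warning text
```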
20
421 <div class="section">
522 <h2>Totals</h2>
6 <div class="hider">
23 <div class="hider updatable">
724 <% if (overview.statistics_db_node != 'not_running') { %>
825 <%= queue_lengths('lengths-over', overview.queue_totals) %>
926 <% if (rates_mode != 'none') { %>
1330 Totals not available
1431 <% } %>
1532
16 <div class="updatable">
1733 <% if (overview.object_totals) { %>
1834 <h3>Global counts <span class="help" id="resource-counts"></span></h3>
1935
4561 <% } %>
4662 </div>
4763 <% } %>
48 </div>
4964
5065 </div>
5166 </div>
306321
307322 <% if (overview.rates_mode == 'none') { %>
308323 <div class="section-hidden">
309 <h2>Message Rates Disabled</h2>
324 <h2>Message rates disabled</h2>
310325 <div class="hider">
311326 <p>
312327 Message rates are currently disabled.
11
22 <div class="section">
33 <h2>Overview</h2>
4 <div class="hider">
4 <div class="hider updatable">
55 <%= queue_lengths('lengths-q', queue) %>
66 <% if (rates_mode != 'none') { %>
77 <%= message_rates('msg-rates-q', queue.message_stats) %>
88 <% } %>
99
10 <div class="updatable">
1110 <h3>Details</h3>
1211 <table class="facts facts-l">
1312 <tr>
152151 <td class="r"><%= fmt_bytes(queue.memory) %></td>
153152 </tr>
154153 </table>
155 </div>
156154 </div>
157155 </div>
158156
244242 </div>
245243 </div>
246244
245 <% if (user_policymaker) { %>
246 <div class="section-hidden">
247 <h2>Move messages</h2>
248 <div class="hider">
249 <% if (NAVIGATION['Admin'][0]['Shovel Management'] == undefined) { %>
 250 <p>To move messages, the shovel plugin must be enabled; try:</p>
251 <pre>$ rabbitmq-plugins enable rabbitmq_shovel rabbitmq_shovel_management</pre>
252 <% } else { %>
253 <p>
254 The shovel plugin can be used to move messages from this queue
255 to another one. The form below will create a temporary shovel to
256 move messages to another queue on the same virtual host, with
257 default settings.
258 </p>
259 <p>
260 For more options <a href="#/dynamic-shovels">see the shovel
261 interface</a>.
262 </p>
263 <form action="#/shovel-parameters" method="put">
264 <input type="hidden" name="component" value="shovel"/>
265 <input type="hidden" name="vhost" value="<%= fmt_string(queue.vhost) %>"/>
266 <input type="hidden" name="name" value="Move from <%= fmt_string(queue.name) %>"/>
267 <input type="hidden" name="src-uri" value="amqp:///<%= esc(queue.vhost) %>"/>
268 <input type="hidden" name="src-queue" value="<%= fmt_string(queue.name) %>"/>
269
270 <input type="hidden" name="dest-uri" value="amqp:///<%= esc(queue.vhost) %>"/>
271 <input type="hidden" name="prefetch-count" value="1000"/>
272 <input type="hidden" name="add-forward-headers" value="false"/>
273 <input type="hidden" name="ack-mode" value="on-confirm"/>
274 <input type="hidden" name="delete-after" value="queue-length"/>
275 <input type="hidden" name="redirect" value="#/queues"/>
276
277 <table class="form">
278 <tr>
279 <th>Destination queue:</th>
280 <td><input type="text" name="dest-queue"/></td>
281 </tr>
282 </table>
283 <input type="submit" value="Move messages"/>
284 </form>
285 <% } %>
286 </div>
287 </div>
288 <% } %>
289
247290 <div class="section-hidden">
248291 <h2>Delete / purge</h2>
249292 <div class="hider">
159159 <% } %>
160160 <% if (rates_mode != 'none') { %>
161161 <% if (show_column('queues', 'rate-incoming')) { %>
162 <td class="r"><%= fmt_rate(queue.message_stats, 'publish') %></td>
162 <td class="r"><%= fmt_detail_rate(queue.message_stats, 'publish') %></td>
163163 <% } %>
164164 <% if (show_column('queues', 'rate-deliver')) { %>
165 <td class="r"><%= fmt_rate(queue.message_stats, 'deliver_get') %></td>
165 <td class="r"><%= fmt_detail_rate(queue.message_stats, 'deliver_get') %></td>
166166 <% } %>
167167 <% if (show_column('queues', 'rate-redeliver')) { %>
168 <td class="r"><%= fmt_rate(queue.message_stats, 'redeliver') %></td>
168 <td class="r"><%= fmt_detail_rate(queue.message_stats, 'redeliver') %></td>
169169 <% } %>
170170 <% if (show_column('queues', 'rate-ack')) { %>
171 <td class="r"><%= fmt_rate(queue.message_stats, 'ack') %></td>
171 <td class="r"><%= fmt_detail_rate(queue.message_stats, 'ack') %></td>
172172 <% } %>
173173 <% } %>
174174 </tr>
251251 <span class="argument-link" field="arguments" key="x-max-length" type="number">Max length</span> <span class="help" id="queue-max-length"></span> |
252252 <span class="argument-link" field="arguments" key="x-max-length-bytes" type="number">Max length bytes</span> <span class="help" id="queue-max-length-bytes"></span><br/>
253253 <span class="argument-link" field="arguments" key="x-dead-letter-exchange" type="string">Dead letter exchange</span> <span class="help" id="queue-dead-letter-exchange"></span> |
254 <span class="argument-link" field="arguments" key="x-dead-letter-routing-key" type="string">Dead letter routing key</span> <span class="help" id="queue-dead-letter-routing-key"></span>
254 <span class="argument-link" field="arguments" key="x-dead-letter-routing-key" type="string">Dead letter routing key</span> <span class="help" id="queue-dead-letter-routing-key"></span> |
255 <span class="argument-link" field="arguments" key="x-max-priority" type="number">Maximum priority</span> <span class="help" id="queue-max-priority"></span>
255256 </td>
256257 </tr>
257258 </table>
88
99 <div class="section">
1010 <h2>Overview</h2>
11 <div class="hider">
11 <div class="hider updatable">
1212 <%= queue_lengths('lengths-vhost', vhost) %>
1313 <% if (rates_mode != 'none') { %>
1414 <%= message_rates('msg-rates-vhost', vhost.message_stats) %>
1515 <% } %>
1616 <%= data_rates('data-rates-vhost', vhost, 'Data rates') %>
17 <div class="updatable">
1817 <h3>Details</h3>
1918 <table class="facts">
2019 <tr>
2221 <td><%= fmt_boolean(vhost.tracing) %></td>
2322 </tr>
2423 </table>
25 </div>
2624 </div>
2725 </div>
2826
6363 <td class="r"><%= fmt_num_thousands(vhost.messages) %></td>
6464 <% } %>
6565 <% if (show_column('vhosts', 'from_client')) { %>
66 <td><%= fmt_rate_bytes(vhost, 'recv_oct') %></td>
66 <td><%= fmt_detail_rate_bytes(vhost, 'recv_oct') %></td>
6767 <% } %>
6868 <% if (show_column('vhosts', 'to_client')) { %>
69 <td><%= fmt_rate_bytes(vhost, 'send_oct') %></td>
69 <td><%= fmt_detail_rate_bytes(vhost, 'send_oct') %></td>
7070 <% } %>
7171 <% if (rates_mode != 'none') { %>
7272 <% if (show_column('vhosts', 'rate-publish')) { %>
73 <td class="r"><%= fmt_rate(vhost.message_stats, 'publish') %></td>
73 <td class="r"><%= fmt_detail_rate(vhost.message_stats, 'publish') %></td>
7474 <% } %>
7575 <% if (show_column('vhosts', 'rate-deliver')) { %>
76 <td class="r"><%= fmt_rate(vhost.message_stats, 'deliver_get') %></td>
76 <td class="r"><%= fmt_detail_rate(vhost.message_stats, 'deliver_get') %></td>
7777 <% } %>
7878 <% } %>
7979 </tr>
147147 channel_queue_exchange_stats]).
148148 -define(TABLES, [queue_stats, connection_stats, channel_stats,
149149 consumers_by_queue, consumers_by_channel,
150 node_stats]).
150 node_stats, node_node_stats]).
151151
152152 -define(DELIVER_GET, [deliver, deliver_no_ack, get, get_no_ack]).
153153 -define(FINE_STATS, [publish, publish_in, publish_out,
154154 ack, deliver_get, confirm, return_unroutable, redeliver] ++
155155 ?DELIVER_GET).
156156
157 -define(COARSE_QUEUE_STATS,
158 [messages, messages_ready, messages_unacknowledged]).
157 %% Most come from channels as fine stats, but queues emit these directly.
158 -define(QUEUE_MSG_RATES, [disk_reads, disk_writes]).
159
160 -define(MSG_RATES, ?FINE_STATS ++ ?QUEUE_MSG_RATES).
161
162 -define(QUEUE_MSG_COUNTS, [messages, messages_ready, messages_unacknowledged]).
159163
160164 -define(COARSE_NODE_STATS,
161 [mem_used, fd_used, sockets_used, proc_used, disk_free]).
165 [mem_used, fd_used, sockets_used, proc_used, disk_free,
166 io_read_count, io_read_bytes, io_read_avg_time,
167 io_write_count, io_write_bytes, io_write_avg_time,
168 io_sync_count, io_sync_avg_time,
169 io_seek_count, io_seek_avg_time,
170 io_reopen_count, mnesia_ram_tx_count, mnesia_disk_tx_count,
171 msg_store_read_count, msg_store_write_count,
172 queue_index_journal_write_count,
173 queue_index_write_count, queue_index_read_count]).
174
175 -define(COARSE_NODE_NODE_STATS, [send_bytes, recv_bytes]).
176
177 %% Normally 0 and no history means "has never happened, don't
178 %% report". But for these things we do want to report even at 0 with
179 %% no history.
180 -define(ALWAYS_REPORT_STATS,
181 [io_read_avg_time, io_write_avg_time,
182 io_sync_avg_time | ?QUEUE_MSG_COUNTS]).
162183
163184 -define(COARSE_CONN_STATS, [recv_oct, send_oct]).
164185
179200 prioritise_cast(_Msg, _Len, _State) ->
180201 0.
181202
182 %% We want timely replies to queries even when overloaded!
183 prioritise_call(_Msg, _From, _Len, _State) -> 5.
203 %% We want timely replies to queries even when overloaded, so return 5
204 %% as priority. Also we only have access to the queue length here, not
205 %% in handle_call/3, so stash it in the dictionary. This is a bit ugly
206 %% but better than fiddling with gen_server2 even more.
207 prioritise_call(_Msg, _From, Len, _State) ->
208 put(last_queue_length, Len),
209 5.
184210
185211 %%----------------------------------------------------------------------------
186212 %% API
320346 %% recv_oct now!
321347 VStats = [read_simple_stats(vhost_stats, VHost, State) ||
322348 VHost <- VHosts],
323 MessageStats = [overview_sum(Type, VStats) || Type <- ?FINE_STATS],
324 QueueStats = [overview_sum(Type, VStats) || Type <- ?COARSE_QUEUE_STATS],
349 MessageStats = [overview_sum(Type, VStats) || Type <- ?MSG_RATES],
350 QueueStats = [overview_sum(Type, VStats) || Type <- ?QUEUE_MSG_COUNTS],
325351 F = case User of
326352 all -> fun (L) -> length(L) end;
327353 _ -> fun (L) -> length(rabbit_mgmt_util:filter_user(L, User)) end
342368 {channels, F(created_events(channel_stats, Tables))}],
343369 reply([{message_stats, format_samples(Ranges, MessageStats, State)},
344370 {queue_totals, format_samples(Ranges, QueueStats, State)},
345 {object_totals, ObjectTotals}], State);
371 {object_totals, ObjectTotals},
372 {statistics_db_event_queue, get(last_queue_length)}], State);
346373
347374 handle_call({override_lookups, Lookups}, _From, State) ->
348375 reply(ok, State#state{lookups = Lookups});
433460 %% passed a queue proplist that will already have been formatted -
434461 %% i.e. it will have name and vhost keys.
435462 id_name(node_stats) -> name;
463 id_name(node_node_stats) -> route;
436464 id_name(vhost_stats) -> name;
437465 id_name(queue_stats) -> name;
438466 id_name(exchange_stats) -> name;
475503 [{fun rabbit_mgmt_format:properties/1,[backing_queue_status]},
476504 {fun rabbit_mgmt_format:now_to_str/1, [idle_since]},
477505 {fun rabbit_mgmt_format:queue_state/1, [state]}],
478 ?COARSE_QUEUE_STATS, State);
506 ?QUEUE_MSG_COUNTS, ?QUEUE_MSG_RATES, State);
479507
480508 handle_event(Event = #event{type = queue_deleted,
481509 props = [{name, Name}],
490518 %% This ceil must correspond to the ceil in append_samples/5
491519 TS = ceil(Timestamp, State),
492520 OldStats = lookup_element(OldTable, Id),
493 [record_sample(Id, {Key, -pget(Key, OldStats, 0), TS, State}, State)
494 || Key <- ?COARSE_QUEUE_STATS],
521 [record_sample(Id, {Key, -pget(Key, OldStats, 0), TS, State}, true, State)
522 || Key <- ?QUEUE_MSG_COUNTS],
495523 delete_samples(channel_queue_stats, {'_', Name}, State),
496524 delete_samples(queue_exchange_stats, {Name, '_'}, State),
497525 delete_samples(queue_stats, Name, State),
506534
507535 handle_event(#event{type = vhost_deleted,
508536 props = [{name, Name}]}, State) ->
509 delete_samples(vhost_stats, Name, State),
510 {ok, State};
537 delete_samples(vhost_stats, Name, State);
511538
512539 handle_event(#event{type = connection_created, props = Stats}, State) ->
513540 handle_created(
542569 ets:match_delete(OldTable, {{fine, {ChPid, '_'}}, '_'}),
543570 ets:match_delete(OldTable, {{fine, {ChPid, '_', '_'}}, '_'}),
544571 [handle_fine_stats(Timestamp, AllStatsElem, State)
545 || AllStatsElem <- AllStats],
546 {ok, State};
572 || AllStatsElem <- AllStats];
547573
548574 handle_event(Event = #event{type = channel_closed,
549575 props = [{pid, Pid}]},
571597 %% TODO: we don't clear up after dead nodes here - this is a very tiny
572598 %% leak every time a node is permanently removed from the cluster. Do
573599 %% we care?
574 handle_event(#event{type = node_stats, props = Stats, timestamp = Timestamp},
600 handle_event(#event{type = node_stats, props = Stats0, timestamp = Timestamp},
575601 State) ->
602 Stats = proplists:delete(persister_stats, Stats0) ++
603 pget(persister_stats, Stats0),
576604 handle_stats(node_stats, Stats, Timestamp, [], ?COARSE_NODE_STATS, State);
577605
578 handle_event(_Event, State) ->
579 {ok, State}.
606 handle_event(#event{type = node_node_stats, props = Stats,
607 timestamp = Timestamp}, State) ->
608 handle_stats(node_node_stats, Stats, Timestamp, [], ?COARSE_NODE_NODE_STATS,
609 State);
610
611 handle_event(Event = #event{type = node_node_deleted,
612 props = [{route, Route}]}, State) ->
613 delete_samples(node_node_stats, Route, State),
614 handle_deleted(node_node_stats, Event, State);
615
616 handle_event(_Event, _State) ->
617 ok.
580618
581619 handle_created(TName, Stats, Funs, State = #state{tables = Tables}) ->
582620 Formatted = rabbit_mgmt_format:format(Stats, Funs),
585623 pget(name, Stats)}),
586624 {ok, State}.
587625
588 handle_stats(TName, Stats, Timestamp, Funs, RatesKeys,
626 handle_stats(TName, Stats, Timestamp, Funs, RatesKeys, State) ->
627 handle_stats(TName, Stats, Timestamp, Funs, RatesKeys, [], State).
628
629 handle_stats(TName, Stats, Timestamp, Funs, RatesKeys, NoAggRatesKeys,
589630 State = #state{tables = Tables, old_stats = OldTable}) ->
590631 Id = id(TName, Stats),
591632 IdSamples = {coarse, {TName, Id}},
592633 OldStats = lookup_element(OldTable, IdSamples),
593 append_samples(Stats, Timestamp, OldStats, IdSamples, RatesKeys, State),
634 append_samples(
635 Stats, Timestamp, OldStats, IdSamples, RatesKeys, true, State),
636 append_samples(
637 Stats, Timestamp, OldStats, IdSamples, NoAggRatesKeys, false, State),
594638 StripKeys = [id_name(TName)] ++ RatesKeys ++ ?FINE_STATS_TYPES,
595639 Stats1 = [{K, V} || {K, V} <- Stats, not lists:member(K, StripKeys)],
596640 Stats2 = rabbit_mgmt_format:format(Stats1, Funs),
654698 0 -> Stats;
655699 _ -> [{deliver_get, Total}|Stats]
656700 end,
657 append_samples(Stats1, Timestamp, OldStats, {fine, Id}, all, State).
701 append_samples(Stats1, Timestamp, OldStats, {fine, Id}, all, true, State).
658702
659703 delete_samples(Type, {Id, '_'}, State) ->
660704 delete_samples_with_index(Type, Id, fun forward/2, State);
678722
679723 delete_match(Type, Id) -> {{{Type, Id}, '_'}, '_'}.
680724
681 append_samples(Stats, TS, OldStats, Id, Keys,
725 append_samples(Stats, TS, OldStats, Id, Keys, Agg,
682726 State = #state{old_stats = OldTable}) ->
683727 case ignore_coarse_sample(Id, State) of
684728 false ->
686730 %% queue_deleted
687731 NewMS = ceil(TS, State),
688732 case Keys of
689 all -> [append_sample(Key, Value, NewMS, OldStats, Id, State)
690 || {Key, Value} <- Stats];
691 _ -> [append_sample(
692 Key, pget(Key, Stats), NewMS, OldStats, Id, State)
693 || Key <- Keys]
733 all -> [append_sample(K, V, NewMS, OldStats, Id, Agg, State)
734 || {K, V} <- Stats];
735 _ -> [append_sample(K, V, NewMS, OldStats, Id, Agg, State)
736 || K <- Keys,
737 V <- [pget(K, Stats)],
738 V =/= 0 orelse lists:member(K, ?ALWAYS_REPORT_STATS)]
694739 end,
695740 ets:insert(OldTable, {Id, Stats});
696741 true ->
697742 ok
698743 end.
699744
700 append_sample(Key, Value, NewMS, OldStats, Id, State) when is_number(Value) ->
745 append_sample(Key, Val, NewMS, OldStats, Id, Agg, State) when is_number(Val) ->
701746 record_sample(
702 Id, {Key, Value - pget(Key, OldStats, 0), NewMS, State}, State);
703
704 append_sample(_Key, _Value, _NewMS, _OldStats, _Id, _State) ->
747 Id, {Key, Val - pget(Key, OldStats, 0), NewMS, State}, Agg, State);
748
749 append_sample(_Key, _Value, _NewMS, _OldStats, _Id, _Agg, _State) ->
705750 ok.
706751
707752 ignore_coarse_sample({coarse, {queue_stats, Q}}, State) ->
710755 false.
711756
712757 %% Node stats do not have a vhost of course
713 record_sample({coarse, {node_stats, _Node} = Id}, Args, _State) ->
758 record_sample({coarse, {node_stats, _Node} = Id}, Args, true, _State) ->
714759 record_sample0(Id, Args);
715760
716 record_sample({coarse, Id}, Args, State) ->
761 record_sample({coarse, {node_node_stats, _Names} = Id}, Args, true, _State) ->
762 record_sample0(Id, Args);
763
764 record_sample({coarse, Id}, Args, false, _State) ->
765 record_sample0(Id, Args);
766
767 record_sample({coarse, Id}, Args, true, State) ->
717768 record_sample0(Id, Args),
718769 record_sample0({vhost_stats, vhost(Id, State)}, Args);
719770
720771 %% Deliveries / acks (Q -> Ch)
721 record_sample({fine, {Ch, Q = #resource{kind = queue}}}, Args, State) ->
772 record_sample({fine, {Ch, Q = #resource{kind = queue}}}, Args, true, State) ->
722773 case object_exists(Q, State) of
723774 true -> record_sample0({channel_queue_stats, {Ch, Q}}, Args),
724775 record_sample0({queue_stats, Q}, Args);
728779 record_sample0({vhost_stats, vhost(Q)}, Args);
729780
730781 %% Publishes / confirms (Ch -> X)
731 record_sample({fine, {Ch, X = #resource{kind = exchange}}}, Args, State) ->
782 record_sample({fine, {Ch, X = #resource{kind = exchange}}}, Args, true,State) ->
732783 case object_exists(X, State) of
733784 true -> record_sample0({channel_exchange_stats, {Ch, X}}, Args),
734785 record_sampleX(publish_in, X, Args);
740791 %% Publishes (but not confirms) (Ch -> X -> Q)
741792 record_sample({fine, {_Ch,
742793 Q = #resource{kind = queue},
743 X = #resource{kind = exchange}}}, Args, State) ->
794 X = #resource{kind = exchange}}}, Args, true, State) ->
744795 %% TODO This one logically feels like it should be here. It would
745796 %% correspond to "publishing channel message rates to queue" -
746797 %% which would be nice to handle - except we don't. And just
799850
800851 %% Ignore case where ID1 and ID2 are in a tuple, i.e. detailed stats,
801852 %% when in basic mode
802 record_sample0({_, {_ID1, _ID2}}, {_, _, _, #state{rates_mode = basic}}) ->
853 record_sample0({Type, {_ID1, _ID2}}, {_, _, _, #state{rates_mode = basic}})
854 when Type =/= node_node_stats ->
803855 ok;
804856 record_sample0(Id0, {Key, Diff, TS, #state{aggregated_stats = ETS,
805857 aggregated_stats_index = ETSi}}) ->
833885 {channel_stats, [{publishes, channel_exchange_stats, fun first/1},
834886 {deliveries, channel_queue_stats, fun first/1}]}).
835887
888 -define(NODE_DETAILS,
889 {node_stats, [{cluster_links, node_node_stats, fun first/1}]}).
890
836891 first(Id) -> {Id, '$1'}.
837892 second(Id) -> {'$1', Id}.
838893
886941
887942 node_stats(Ranges, Objs, State) ->
888943 merge_stats(Objs, [basic_stats_fun(node_stats, State),
889 simple_stats_fun(Ranges, node_stats, State)]).
944 simple_stats_fun(Ranges, node_stats, State),
945 detail_and_basic_stats_fun(
946 node_node_stats, Ranges, ?NODE_DETAILS, State)]).
890947
891948 merge_stats(Objs, Funs) ->
892949 [lists:foldl(fun (Fun, Props) -> combine(Fun(Props), Props) end, Obj, Funs)
921978 Id = id_lookup(IdType, Props),
922979 [detail_stats(Ranges, Name, AggregatedStatsType, IdFun(Id), State)
923980 || {Name, AggregatedStatsType, IdFun} <- FineSpecs]
981 end.
982
983 %% This does not quite do the same as detail_stats_fun +
984 %% basic_stats_fun; the basic part here assumes compound keys (like
985 %% detail stats) but non-calculated (like basic stats). Currently the
986 %% only user of that is node-node stats.
987 %%
988 %% We also assume that FineSpecs is single length here (at [1]).
989 detail_and_basic_stats_fun(Type, Ranges, {IdType, FineSpecs},
990 State = #state{tables = Tables}) ->
991 Table = orddict:fetch(Type, Tables),
992 F = detail_stats_fun(Ranges, {IdType, FineSpecs}, State),
993 fun (Props) ->
994 Id = id_lookup(IdType, Props),
995 BasicStatsRaw = ets:match(Table, {{{Id, '$1'}, stats}, '$2', '_'}),
996 BasicStatsDict = dict:from_list([{K, V} || [K,V] <- BasicStatsRaw]),
997 [{K, Items}] = F(Props), %% [1]
998 Items2 = [case dict:find(id_lookup(IdType, Item), BasicStatsDict) of
999 {ok, BasicStats} -> BasicStats ++ Item;
1000 error -> Item
1001 end || Item <- Items],
1002 [{K, Items2}]
9241003 end.
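The comment above explains that `detail_and_basic_stats_fun` combines detail-style items (compound keys) with raw, non-calculated basic stats looked up by the same compound id. A minimal Python sketch of that merge idea (all names here are illustrative, not the plugin's API):

```python
# Hypothetical sketch: fold per-pair "basic" stats into a list of detail-stats
# items keyed on a compound id. Each item gains the raw stats stored under its
# id; items with no stored stats pass through unchanged.

def merge_basic_into_details(detail_items, basic_stats_by_id, id_key="id"):
    """For every detail item, prepend the matching raw basic stats, if any."""
    merged = []
    for item in detail_items:
        extra = basic_stats_by_id.get(item[id_key], {})
        merged.append({**extra, **item})
    return merged

items = [{"id": ("rabbit@a", "rabbit@b"), "send_rate": 10}]
basic = {("rabbit@a", "rabbit@b"): {"recv_bytes": 2048}}
print(merge_basic_into_details(items, basic))
```

As in the Erlang version, the lookup table is consulted once per item and a missing entry simply leaves the item as-is.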
9251004
9261005 read_simple_stats(Type, Id, #state{aggregated_stats = ETS}) ->
9421021 end, [], FromETS).
9431022
9441023 extract_msg_stats(Stats) ->
945 FineStats = lists:append([[K, details_key(K)] || K <- ?FINE_STATS]),
1024 FineStats = lists:append([[K, details_key(K)] || K <- ?MSG_RATES]),
9461025 {MsgStats, Other} =
9471026 lists:partition(fun({K, _}) -> lists:member(K, FineStats) end, Stats),
9481027 case MsgStats of
9591038 augment_msg_stats([{channel, ChPid}], State);
9601039 format_detail_id(#resource{name = Name, virtual_host = Vhost, kind = Kind},
9611040 _State) ->
962 [{Kind, [{name, Name}, {vhost, Vhost}]}].
1041 [{Kind, [{name, Name}, {vhost, Vhost}]}];
1042 format_detail_id(Node, _State) when is_atom(Node) ->
1043 [{name, Node}].
9631044
9641045 format_samples(Ranges, ManyStats, #state{interval = Interval}) ->
9651046 lists:append(
9661047 [case rabbit_mgmt_stats:is_blank(Stats) andalso
967 not lists:member(K, ?COARSE_QUEUE_STATS) of
1048 not lists:member(K, ?ALWAYS_REPORT_STATS) of
9681049 true -> [];
9691050 false -> {Details, Counter} = rabbit_mgmt_stats:format(
9701051 pick_range(K, Ranges),
9741055 end || {K, Stats} <- ManyStats]).
9751056
9761057 pick_range(K, {RangeL, RangeM, RangeD, RangeN}) ->
977 case {lists:member(K, ?COARSE_QUEUE_STATS),
978 lists:member(K, ?FINE_STATS),
1058 case {lists:member(K, ?QUEUE_MSG_COUNTS),
1059 lists:member(K, ?MSG_RATES),
9791060 lists:member(K, ?COARSE_CONN_STATS),
980 lists:member(K, ?COARSE_NODE_STATS)} of
1061 lists:member(K, ?COARSE_NODE_STATS)
1062 orelse lists:member(K, ?COARSE_NODE_NODE_STATS)} of
9811063 {true, false, false, false} -> RangeL;
9821064 {false, true, false, false} -> RangeM;
9831065 {false, false, true, false} -> RangeD;
10941176 gc_batch(0, _Policies, State) ->
10951177 State;
10961178 gc_batch(Rows, Policies, State = #state{aggregated_stats = ETS,
1097 gc_next_key = Key0}) ->
1179 gc_next_key = Key0}) ->
10981180 Key = case Key0 of
10991181 undefined -> ets:first(ETS);
11001182 _ -> ets:next(ETS, Key0)
11161198 end.
11171199
11181200 retention_policy(node_stats) -> global;
1201 retention_policy(node_node_stats) -> global;
11191202 retention_policy(vhost_stats) -> global;
11201203 retention_policy(queue_stats) -> basic;
11211204 retention_policy(exchange_stats) -> basic;
156156 {tags, tags(User#internal_user.tags)}].
157157
158158 user(User) ->
159 [{name, User#user.username},
160 {tags, tags(User#user.tags)},
161 {auth_backend, User#user.auth_backend}].
159 [{name, User#user.username},
160 {tags, tags(User#user.tags)}].
162161
163162 tags(Tags) ->
164163 list_to_binary(string:join([atom_to_list(T) || T <- Tags], ",")).
2727 -export([with_channel/4, with_channel/5]).
2828 -export([props_to_method/2, props_to_method/4]).
2929 -export([all_or_one_vhost/2, http_to_amqp/5, reply/3, filter_vhost/3]).
30 -export([filter_conn_ch_list/3, filter_user/2, list_login_vhosts/1]).
30 -export([filter_conn_ch_list/3, filter_user/2, list_login_vhosts/2]).
3131 -export([with_decode/5, decode/1, decode/2, redirect/2, set_resp_header/3,
3232 args/1]).
3333 -export([reply_list/3, reply_list/4, sort_list/2, destination_type/1]).
7676 case vhost(ReqData) of
7777 not_found -> true;
7878 none -> true;
79 V -> lists:member(V, list_login_vhosts(User))
79 V -> lists:member(V, list_login_vhosts(User, peersock(ReqData)))
8080 end.
8181
8282 %% Used for connections / channels. A normal user can only see / delete
136136 not_allowed ->
137137 ErrFun(<<"User can only log in via localhost">>)
138138 end;
139 {refused, Msg, Args} ->
139 {refused, _Username, Msg, Args} ->
140140 rabbit_log:warning("HTTP access denied: ~s~n",
141141 [rabbit_misc:format(Msg, Args)]),
142142 not_authorised(<<"Login failed">>, ReqData, Context)
143143 end.
144144
145 peer(ReqData) ->
146 {ok, {IP,_Port}} = peername(peersock(ReqData)),
147 IP.
148
145149 %% We can't use wrq:peer/1 because that trusts X-Forwarded-For.
146 peer(ReqData) ->
150 peersock(ReqData) ->
147151 WMState = ReqData#wm_reqdata.wm_state,
148 {ok, {IP,_Port}} = peername(WMState#wm_reqstate.socket),
149 IP.
152 WMState#wm_reqstate.socket.
150153
151154 %% Like the one in rabbit_net, but we and webmachine have a different
152155 %% way of wrapping
451454 end;
452455 {error, {auth_failure, Msg}} ->
453456 not_authorised(Msg, ReqData, Context);
457 {error, access_refused} ->
458 not_authorised(<<"Access refused.">>, ReqData, Context);
454459 {error, {nodedown, N}} ->
455460 bad_request(
456461 list_to_binary(
469474 VHost -> Fun(VHost)
470475 end.
471476
472 filter_vhost(List, _ReqData, Context) ->
473 VHosts = list_login_vhosts(Context#context.user),
477 filter_vhost(List, ReqData, Context) ->
478 VHosts = list_login_vhosts(Context#context.user, peersock(ReqData)),
474479 [I || I <- List, lists:member(pget(vhost, I), VHosts)].
475480
476481 filter_user(List, _ReqData, #context{user = User}) ->
510515 {{halt, Code}, ReqData, Context};
511516 post_respond({JSON, ReqData, Context}) ->
512517 {true, set_resp_header(
513 "content-type", "application/json",
518 "Content-Type", "application/json",
514519 wrq:append_to_response_body(JSON, ReqData)), Context}.
515520
516521 is_admin(T) -> intersects(T, [administrator]).
532537 list_visible_vhosts(User = #user{tags = Tags}) ->
533538 case is_monitor(Tags) of
534539 true -> rabbit_vhost:list();
535 false -> list_login_vhosts(User)
536 end.
537
538 list_login_vhosts(User) ->
540 false -> list_login_vhosts(User, undefined)
541 end.
542
543 list_login_vhosts(User, Sock) ->
539544 [V || V <- rabbit_vhost:list(),
540 case catch rabbit_access_control:check_vhost_access(User, V) of
545 case catch rabbit_access_control:check_vhost_access(User, V, Sock) of
541546 ok -> true;
542547 _ -> false
543548 end].
5454 [{exchange, rabbit_mgmt_util:id(exchange, ReqData)}]).
5555
5656 delete_resource(ReqData, Context) ->
57 IfUnused = "true" =:= wrq:get_qs_value("if-unused", ReqData),
5758 rabbit_mgmt_util:amqp_request(
5859 rabbit_mgmt_util:vhost(ReqData), ReqData, Context,
59 #'exchange.delete'{ exchange = id(ReqData) }).
60 #'exchange.delete'{exchange = id(ReqData),
61 if_unused = IfUnused}).
6062
6163 is_authorized(ReqData, Context) ->
6264 rabbit_mgmt_util:is_authorized_vhost(ReqData, Context).
1616 -module(rabbit_mgmt_wm_exchange_publish).
1717
1818 -export([init/1, resource_exists/2, post_is_create/2, is_authorized/2,
19 allowed_methods/2, process_post/2]).
19 allowed_methods/2, content_types_provided/2, process_post/2]).
2020
2121 -include("rabbit_mgmt.hrl").
2222 -include_lib("webmachine/include/webmachine.hrl").
2727
2828 allowed_methods(ReqData, Context) ->
2929 {['POST'], ReqData, Context}.
30
31 content_types_provided(ReqData, Context) ->
32 {[{"application/json", to_json}], ReqData, Context}.
3033
3134 resource_exists(ReqData, Context) ->
3235 {case rabbit_mgmt_wm_exchange:exchange(ReqData) of
4747 case rabbit_mgmt_util:is_monitor(Tags) of
4848 true ->
4949 Overview0 ++
50 [{K, {struct, V}} ||
51 {K, V} <- rabbit_mgmt_db:get_overview(Range)] ++
50 [{K, maybe_struct(V)} ||
51 {K,V} <- rabbit_mgmt_db:get_overview(Range)] ++
5252 [{node, node()},
5353 {statistics_db_node, stats_db_node()},
5454 {listeners, listeners()},
5555 {contexts, web_contexts(ReqData)}];
5656 _ ->
5757 Overview0 ++
58 [{K, {struct, V}} ||
58 [{K, maybe_struct(V)} ||
5959 {K, V} <- rabbit_mgmt_db:get_overview(User, Range)]
6060 end,
6161 rabbit_mgmt_util:reply(Overview, ReqData, Context).
8181 || L <- rabbit_networking:active_listeners()],
8282 ["protocol", "port", "node"] ).
8383
84 maybe_struct(L) when is_list(L) -> {struct, L};
85 maybe_struct(V) -> V.
86
8487 %%--------------------------------------------------------------------
8588
8689 web_contexts(ReqData) ->
5757 rabbit_mgmt_util:amqp_request(
5858 rabbit_mgmt_util:vhost(ReqData),
5959 ReqData, Context,
60 #'queue.delete'{ queue = rabbit_mgmt_util:id(queue, ReqData) }).
60 #'queue.delete'{ queue = rabbit_mgmt_util:id(queue, ReqData),
61 if_empty = qs_true("if-empty", ReqData),
62 if_unused = qs_true("if-unused", ReqData) }).
6163
6264 is_authorized(ReqData, Context) ->
6365 rabbit_mgmt_util:is_authorized_vhost(ReqData, Context).
7779 {ok, Q} -> rabbit_mgmt_format:queue(Q);
7880 {error, not_found} -> not_found
7981 end.
82
83 qs_true(Key, ReqData) -> "true" =:= wrq:get_qs_value(Key, ReqData).
88 {load_definitions, none},
99 {rates_mode, basic},
1010 {sample_retention_policies,
11 %% List of {MaxAgeSecs, IfTimestampDivisibleBySecs}
11 %% List of {MaxAgeInSeconds, SampleEveryNSeconds}
1212 [{global, [{605, 5}, {3660, 60}, {29400, 600}, {86400, 1800}]},
1313 {basic, [{605, 5}, {3600, 60}]},
1414 {detailed, [{10, 5}]}]}
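One way to read the corrected `{MaxAgeInSeconds, SampleEveryNSeconds}` pairs: the finest bucket whose max age covers a sample decides how often that sample is retained, and samples older than every bucket are dropped. A hedged Python sketch of that interpretation (not the plugin's actual retention code):

```python
# Illustrative reading of a sample_retention_policies entry: pick the sample
# interval for a data point of a given age, or None if it has aged out.

def retention_interval(policy, age_seconds):
    """Return the sampling interval that applies at this age, else None."""
    for max_age, interval in policy:
        if age_seconds <= max_age:
            return interval
    return None

GLOBAL = [(605, 5), (3660, 60), (29400, 600), (86400, 1800)]
print(retention_interval(GLOBAL, 30))    # recent sample: one point every 5 s
print(retention_interval(GLOBAL, 7200))  # older sample: one point every 600 s
```

Under this reading, the `detailed` policy `[{10, 5}]` keeps only ten seconds of history at five-second resolution.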
10061006 ?assertEqual([{routed, false}],
10071007 http_post("/exchanges/%2f/amq.default/publish", Msg, ?OK)).
10081008
1009 if_empty_unused_test() ->
1010 http_put("/exchanges/%2f/test", [], ?NO_CONTENT),
1011 http_put("/queues/%2f/test", [], ?NO_CONTENT),
1012 http_post("/bindings/%2f/e/test/q/test", [], ?CREATED),
1013 http_post("/exchanges/%2f/amq.default/publish",
1014 msg(<<"test">>, [], <<"Hello world">>), ?OK),
1015 http_delete("/queues/%2f/test?if-empty=true", ?BAD_REQUEST),
1016 http_delete("/exchanges/%2f/test?if-unused=true", ?BAD_REQUEST),
1017 http_delete("/queues/%2f/test/contents", ?NO_CONTENT),
1018
1019 {Conn, _ConnPath, _ChPath, _ConnChPath} = get_conn("guest", "guest"),
1020 {ok, Ch} = amqp_connection:open_channel(Conn),
1021 amqp_channel:subscribe(Ch, #'basic.consume'{queue = <<"test">> }, self()),
1022 http_delete("/queues/%2f/test?if-unused=true", ?BAD_REQUEST),
1023 amqp_connection:close(Conn),
1024
1025 http_delete("/queues/%2f/test?if-empty=true", ?NO_CONTENT),
1026 http_delete("/exchanges/%2f/test?if-unused=true", ?NO_CONTENT),
1027 passed.
1028
10091029 parameters_test() ->
10101030 rabbit_runtime_parameters_test:register(),
10111031
0 #!/bin/sh -e
1 TWO=$(python2 -c 'import sys;print(sys.version_info[0])')
2 THREE=$(python3 -c 'import sys;print(sys.version_info[0])')
3
4 if [ $TWO != 2 ] ; then
5 echo Python 2 not found!
6 exit 1
7 fi
8
9 if [ $THREE != 3 ] ; then
10 echo Python 3 not found!
11 exit 1
12 fi
13
14 echo
15 echo ----------------------
16 echo Testing under Python 2
17 echo ----------------------
18
19 python2 $(dirname $0)/rabbitmqadmin-test.py
20
21 echo
22 echo ----------------------
23 echo Testing under Python 3
24 echo ----------------------
25
26 python3 $(dirname $0)/rabbitmqadmin-test.py
155155 self.run_success(['declare', 'queue', 'name=test'])
156156 self.run_success(['publish', 'routing_key=test', 'payload=test_1'])
157157 self.run_success(['publish', 'routing_key=test', 'payload=test_2'])
158 self.run_success(['publish', 'routing_key=test'], stdin='test_3')
158 self.run_success(['publish', 'routing_key=test'], stdin=b'test_3')
159159 self.assert_table([exp_msg('test', 2, False, 'test_1')], ['get', 'queue=test', 'requeue=false'])
160160 self.assert_table([exp_msg('test', 1, False, 'test_2')], ['get', 'queue=test', 'requeue=true'])
161161 self.assert_table([exp_msg('test', 1, True, 'test_2')], ['get', 'queue=test', 'requeue=false'])
162162 self.assert_table([exp_msg('test', 0, False, 'test_3')], ['get', 'queue=test', 'requeue=false'])
163 self.run_success(['publish', 'routing_key=test'], stdin='test_4')
163 self.run_success(['publish', 'routing_key=test'], stdin=b'test_4')
164164 filename = '/tmp/rabbitmq-test/get.txt'
165165 self.run_success(['get', 'queue=test', 'requeue=false', 'payload_file=' + filename])
166166 with open(filename) as f:
211211 args.extend(args0)
212212 self.assertEqual(expected, [l.split('\t') for l in self.admin(args)[0].splitlines()])
213213
214 def admin(self, args, stdin=None):
215 return run('../../../bin/rabbitmqadmin', args, stdin)
214 def admin(self, args0, stdin=None):
215 args = ['python{0}'.format(sys.version_info[0]),
216 norm('../../../bin/rabbitmqadmin')]
217 args.extend(args0)
218 return run(args, stdin)
216219
217220 def ctl(self, args0, stdin=None):
218 args = ['-n', 'rabbit-test']
219 args.extend(args0)
220 (stdout, ret) = run('../../../../rabbitmq-server/scripts/rabbitmqctl', args, stdin)
221 args = [norm('../../../../rabbitmq-server/scripts/rabbitmqctl'), '-n', 'rabbit-test']
222 args.extend(args0)
223 (stdout, ret) = run(args, stdin)
221224 if ret != 0:
222225 self.fail(stdout)
223226
224 def run(cmd, args, stdin):
225 path = os.path.normpath(os.path.join(os.getcwd(), sys.argv[0], cmd))
226 cmdline = [path]
227 cmdline.extend(args)
228 proc = subprocess.Popen(cmdline, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
227 def norm(cmd):
228 return os.path.normpath(os.path.join(os.getcwd(), sys.argv[0], cmd))
229
230 def run(args, stdin):
231 proc = subprocess.Popen(args, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
229232 (stdout, stderr) = proc.communicate(stdin)
230233 returncode = proc.returncode
231 return (stdout + stderr, returncode)
234 res = stdout.decode('utf-8') + stderr.decode('utf-8')
235 return (res, returncode)
232236
233237 def l(thing):
234238 return ['list', thing, 'name']
238242 return [key, '', str(count), payload, str(len(payload)), 'string', '', str(redelivered)]
239243
240244 if __name__ == '__main__':
241 print "\nrabbitmqadmin tests\n===================\n"
245 print("\nrabbitmqadmin tests\n===================\n")
242246 suite = unittest.TestLoader().loadTestsFromTestCase(TestRabbitMQAdmin)
243247 results = unittest.TextTestRunner(verbosity=2).run(suite)
244248 if not results.wasSuccessful():
0 ## Overview
1
2 RabbitMQ projects use pull requests to discuss, collaborate on, and accept code contributions.
3 Pull requests are the primary place for discussing code changes.
4
5 ## How to Contribute
6
7 The process is fairly standard:
8
9 * Fork the repository or repositories you plan on contributing to
10 * Clone [RabbitMQ umbrella repository](https://github.com/rabbitmq/rabbitmq-public-umbrella)
11 * `cd umbrella`, `make co`
12 * Create a branch with a descriptive name in the relevant repositories
13 * Make your changes, run tests, commit with a [descriptive message](http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html), push to your fork
14 * Submit pull requests with an explanation of what has been changed and **why**
15 * Submit a filled out and signed [Contributor Agreement](https://github.com/rabbitmq/ca#how-to-submit) if needed (see below)
16 * Be patient. We will get to your pull request eventually
17
18 If what you are going to work on is a substantial change, please first ask the core team
19 for their opinion on the [RabbitMQ mailing list](https://groups.google.com/forum/#!forum/rabbitmq-users).
20
21
22 ## (Brief) Code of Conduct
23
24 In one line: don't be a dick.
25
26 Be respectful to the maintainers and other contributors. Open source
27 contributors put long hours into developing projects and doing user
28 support. Those projects and user support are available for free. We
29 believe this deserves some respect.
30
31 Be respectful to people of all races, genders, religious beliefs and
32 political views. Regardless of how brilliant a pull request is
33 technically, we will not tolerate disrespectful or aggressive
34 behaviour.
35
36 Contributors who violate this straightforward Code of Conduct will see
37 their pull requests closed and locked.
38
39
40 ## Contributor Agreement
41
42 If you want to contribute a non-trivial change, please submit a signed copy of our
43 [Contributor Agreement](https://github.com/rabbitmq/ca#how-to-submit) around the time
44 you submit your pull request. This will make it much easier (in some cases, possible)
45 for the RabbitMQ team at Pivotal to merge your contribution.
46
47
48 ## Where to Ask Questions
49
50 If something isn't clear, feel free to ask on our [mailing list](https://groups.google.com/forum/#!forum/rabbitmq-users).
2222 code_change/3]).
2323
2424 -export([list_registry_plugins/1]).
25
26 -import(rabbit_misc, [pget/2]).
2527
2628 -include_lib("rabbit_common/include/rabbit.hrl").
2729
3335 uptime, run_queue, processors, exchange_types,
3436 auth_mechanisms, applications, contexts,
3537 log_file, sasl_log_file, db_dir, config_files, net_ticktime,
36 enabled_plugins]).
37
38 %%--------------------------------------------------------------------
39
40 -record(state, {fd_total}).
38 enabled_plugins, persister_stats]).
39
40 %%--------------------------------------------------------------------
41
42 -record(state, {fd_total, fhc_stats, fhc_stats_derived, node_owners}).
4143
4244 %%--------------------------------------------------------------------
4345
182184 i(db_dir, _State) -> list_to_binary(rabbit_mnesia:dir());
183185 i(config_files, _State) -> [list_to_binary(F) || F <- rabbit:config_files()];
184186 i(net_ticktime, _State) -> net_kernel:get_net_ticktime();
187 i(persister_stats, State) -> persister_stats(State);
185188 i(enabled_plugins, _State) -> {ok, Dir} = application:get_env(
186189 rabbit, enabled_plugins_file),
187190 rabbit_plugins:read_enabled(Dir);
222225 set_plugin_name(Name, Module) ->
223226 [{name, list_to_binary(atom_to_list(Name))} |
224227 proplists:delete(name, Module:description())].
228
229 persister_stats(#state{fhc_stats = FHC,
230 fhc_stats_derived = FHCD}) ->
231 [{flatten_key(K), V} || {{_Op, Type} = K, V} <- FHC,
232 Type =/= time] ++
233 [{flatten_key(K), V} || {K, V} <- FHCD].
234
235 flatten_key({A, B}) ->
236 list_to_atom(atom_to_list(A) ++ "_" ++ atom_to_list(B)).
237
238 cluster_links() ->
239 {ok, Items} = net_kernel:nodes_info(),
240 [Link || Item <- Items,
241 Link <- [format_nodes_info(Item)], Link =/= undefined].
242
243 format_nodes_info({Node, Info}) ->
244 Owner = proplists:get_value(owner, Info),
245 case catch process_info(Owner, links) of
246 {links, Links} ->
247 case [Link || Link <- Links, is_port(Link)] of
248 [Port] ->
249 {Node, Owner, format_nodes_info1(Port)};
250 _ ->
251 undefined
252 end;
253 _ ->
254 undefined
255 end.
256
257 format_nodes_info1(Port) ->
258 case {rabbit_net:socket_ends(Port, inbound),
259 rabbit_net:getstat(Port, [recv_oct, send_oct])} of
260 {{ok, {PeerAddr, PeerPort, SockAddr, SockPort}}, {ok, Stats}} ->
261 [{peer_addr, maybe_ntoab(PeerAddr)},
262 {peer_port, PeerPort},
263 {sock_addr, maybe_ntoab(SockAddr)},
264 {sock_port, SockPort},
265 {recv_bytes, pget(recv_oct, Stats)},
266 {send_bytes, pget(send_oct, Stats)}];
267 _ ->
268 []
269 end.
270
271 maybe_ntoab(A) when is_tuple(A) -> list_to_binary(rabbit_misc:ntoab(A));
272 maybe_ntoab(H) -> H.
225273
226274 %%--------------------------------------------------------------------
227275
255303
256304 format_mochiweb_option(ssl_opts, V) ->
257305 format_mochiweb_option_list(V);
258 format_mochiweb_option(ciphers, V) ->
259 list_to_binary(rabbit_misc:format("~w", [V]));
260 format_mochiweb_option(_K, V) when is_list(V) ->
261 list_to_binary(V);
262306 format_mochiweb_option(_K, V) ->
263 V.
307 case io_lib:printable_list(V) of
308 true -> list_to_binary(V);
309 false -> list_to_binary(rabbit_misc:format("~w", [V]))
310 end.
264311
265312 %%--------------------------------------------------------------------
266313
267314 init([]) ->
268 State = #state{fd_total = file_handle_cache:ulimit()},
315 State = #state{fd_total = file_handle_cache:ulimit(),
316 fhc_stats = file_handle_cache_stats:get(),
317 node_owners = sets:new()},
269318 %% If we emit an update straight away we will do so just before
270319 %% the mgmt db starts up - and then have to wait ?REFRESH_RATIO
271320 %% until we send another. So let's have a shorter wait in the hope
293342
294343 %%--------------------------------------------------------------------
295344
296 emit_update(State) ->
345 emit_update(State0) ->
346 State = update_state(State0),
297347 rabbit_event:notify(node_stats, infos(?KEYS, State)),
298348 erlang:send_after(?REFRESH_RATIO, self(), emit_update),
299 State.
349 emit_node_node_stats(State).
350
351 emit_node_node_stats(State = #state{node_owners = Owners}) ->
352 Links = cluster_links(),
353 NewOwners = sets:from_list([{Node, Owner} || {Node, Owner, _} <- Links]),
354 Dead = sets:to_list(sets:subtract(Owners, NewOwners)),
355 [rabbit_event:notify(
356 node_node_deleted, [{route, Route}]) || {Node, _Owner} <- Dead,
357 Route <- [{node(), Node},
358 {Node, node()}]],
359 [rabbit_event:notify(
360 node_node_stats, [{route, {node(), Node}} | Stats]) ||
361 {Node, _Owner, Stats} <- Links],
362 State#state{node_owners = NewOwners}.
363
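The dead-link detection in `emit_node_node_stats` above is a set difference: owners seen last round minus owners seen now are the peers whose distribution link went away, and a deletion event is emitted for both directions of each route. A small Python sketch of the same idea (node names and the `me` parameter are made up for illustration):

```python
# Hypothetical sketch: previous {node, owner} pairs minus current ones yields
# the dead links; each dead peer produces two routes, one per direction.

def dead_routes(prev_owners, new_owners, me="rabbit@a"):
    dead = prev_owners - new_owners
    return [route
            for (node, _owner) in dead
            for route in ((me, node), (node, me))]

prev = {("rabbit@b", "owner1"), ("rabbit@c", "owner2")}
new = {("rabbit@b", "owner1")}
print(sorted(dead_routes(prev, new)))
```

Comparing `{node, owner}` pairs rather than bare node names means a restarted connection (same node, new owner process) is also reported as a dead route before the fresh stats arrive.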
364 update_state(State0 = #state{fhc_stats = FHC0}) ->
365 FHC = file_handle_cache_stats:get(),
366 Avgs = [{{Op, avg_time}, avg_op_time(Op, V, FHC, FHC0)}
367 || {{Op, time}, V} <- FHC],
368 State0#state{fhc_stats = FHC,
369 fhc_stats_derived = Avgs}.
370
371 -define(MICRO_TO_MILLI, 1000).
372
373 avg_op_time(Op, Time, FHC, FHC0) ->
374 Time0 = pget({Op, time}, FHC0),
375 TimeDelta = Time - Time0,
376 OpDelta = pget({Op, count}, FHC) - pget({Op, count}, FHC0),
377 case OpDelta of
378 0 -> 0;
379 _ -> (TimeDelta / OpDelta) / ?MICRO_TO_MILLI
380 end.
3939 will_msg,
4040 channels,
4141 connection,
42 exchange }).
42 exchange,
43 ssl_login_name }).
plugins-src/rabbitmq-mqtt/lib/junit.jar
Binary diff not shown
00 RELEASABLE:=true
1 DEPS:=rabbitmq-erlang-client
1 DEPS:=rabbitmq-server rabbitmq-erlang-client rabbitmq-test
2 WITH_BROKER_TEST_SCRIPTS:=$(PACKAGE_DIR)/test/test.sh
3 WITH_BROKER_TEST_CONFIG:=$(PACKAGE_DIR)/test/ebin/test
4 WITH_BROKER_SETUP_SCRIPTS:=$(PACKAGE_DIR)/test/setup-rabbit-test.sh
25
3 RABBITMQ_TEST_PATH=$(PACKAGE_DIR)/../../rabbitmq-test
4 WITH_BROKER_TEST_SCRIPTS:=$(PACKAGE_DIR)/test/test.sh
6 define package_rules
7
8 $(PACKAGE_DIR)+pre-test::
9 rm -rf $(PACKAGE_DIR)/test/certs
10 mkdir $(PACKAGE_DIR)/test/certs
11 mkdir -p $(PACKAGE_DIR)/test/ebin
12 sed -e "s|%%CERTS_DIR%%|$(abspath $(PACKAGE_DIR))/test/certs|g" < $(PACKAGE_DIR)/test/src/test.config > $(PACKAGE_DIR)/test/ebin/test.config
13 make -C $(PACKAGE_DIR)/../rabbitmq-test/certs all PASSWORD=bunnychow DIR=$(abspath $(PACKAGE_DIR))/test/certs
14
15 $(PACKAGE_DIR)+clean::
16 rm -rf $(PACKAGE_DIR)/test/certs
17
18 endef
1515
1616 -module(rabbit_mqtt_processor).
1717
18 -export([info/2, initial_state/1,
18 -export([info/2, initial_state/2,
1919 process_frame/2, amqp_pub/2, amqp_callback/2, send_will/1,
2020 close_connection/1]).
2121
2323 -include("rabbit_mqtt_frame.hrl").
2424 -include("rabbit_mqtt.hrl").
2525
26 -define(APP, rabbitmq_mqtt).
2627 -define(FRAME_TYPE(Frame, Type),
2728 Frame = #mqtt_frame{ fixed = #mqtt_frame_fixed{ type = Type }}).
2829
29 initial_state(Socket) ->
30 initial_state(Socket,SSLLoginName) ->
3031 #proc_state{ unacked_pubs = gb_trees:empty(),
3132 awaiting_ack = gb_trees:empty(),
3233 message_id = 1,
3435 consumer_tags = {undefined, undefined},
3536 channels = {undefined, undefined},
3637 exchange = rabbit_mqtt_util:env(exchange),
37 socket = Socket }.
38 socket = Socket,
39 ssl_login_name = SSLLoginName }.
3840
3941 info(client_id, #proc_state{ client_id = ClientId }) -> ClientId.
4042
5355 proto_ver = ProtoVersion,
5456 clean_sess = CleanSess,
5557 client_id = ClientId0,
56 keep_alive = Keepalive} = Var}, PState) ->
58 keep_alive = Keepalive} = Var},
59 PState = #proc_state{ ssl_login_name = SSLLoginName }) ->
5760 ClientId = case ClientId0 of
5861 [] -> rabbit_mqtt_util:gen_client_id();
5962 [_|_] -> ClientId0
6669 {_, true} ->
6770 {?CONNACK_INVALID_ID, PState};
6871 _ ->
69 case creds(Username, Password) of
72 case creds(Username, Password, SSLLoginName) of
7073 nocreds ->
7174 rabbit_log:error("MQTT login failed - no credentials~n"),
7275 {?CONNACK_CREDENTIALS, PState};
7578 {?CONNACK_ACCEPT, Conn} ->
7679 link(Conn),
7780 {ok, Ch} = amqp_connection:open_channel(Conn),
81 link(Ch),
7882 amqp_channel:enable_delivery_flow_control(Ch),
7983 ok = rabbit_mqtt_collector:register(
8084 ClientId, self()),
363367 [UserBin] -> {rabbit_mqtt_util:env(vhost), UserBin}
364368 end.
365369
366 creds(User, Pass) ->
367 DefaultUser = rabbit_mqtt_util:env(default_user),
368 DefaultPass = rabbit_mqtt_util:env(default_pass),
369 Anon = rabbit_mqtt_util:env(allow_anonymous),
370 U = case {User =/= undefined, is_binary(DefaultUser), Anon =:= true} of
371 {true, _, _ } -> list_to_binary(User);
372 {false, true, true} -> DefaultUser;
373 _ -> nocreds
370 creds(User, Pass, SSLLoginName) ->
371 DefaultUser = rabbit_mqtt_util:env(default_user),
372 DefaultPass = rabbit_mqtt_util:env(default_pass),
373 {ok, Anon} = application:get_env(?APP, allow_anonymous),
374 {ok, TLSAuth} = application:get_env(?APP, ssl_cert_login),
375 U = case {User =/= undefined, is_binary(DefaultUser),
376 Anon =:= true, (TLSAuth andalso SSLLoginName =/= none)} of
377 {true, _, _, _} -> list_to_binary(User);
378 {false, _, _, true} -> SSLLoginName;
379 {false, true, true, false} -> DefaultUser;
380 _ -> nocreds
374381 end,
375382 case U of
376383 nocreds ->
377384 nocreds;
378385 _ ->
379 case {Pass =/= undefined, is_binary(DefaultPass), Anon =:= true} of
380 {true, _, _ } -> {U, list_to_binary(Pass)};
381 {false, true, true} -> {U, DefaultPass};
382 _ -> {U, none}
386 case {Pass =/= undefined, is_binary(DefaultPass), Anon =:= true, SSLLoginName == U} of
387 {true, _, _, _} -> {U, list_to_binary(Pass)};
388 {false, _, _, _} -> {U, none};
389 {false, true, true, _} -> {U, DefaultPass};
390 _ -> {U, none}
383391 end
384392 end.
385393
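The new four-way `creds/3` selection can be summarized as a precedence: an explicitly supplied username always wins; otherwise, when `ssl_cert_login` is enabled and the TLS handshake yielded a certificate name, that name is used (with no password); otherwise the configured default user applies if anonymous access is allowed; failing all of those, there are no credentials. A Python sketch of that precedence (function and parameter names are illustrative, not the plugin's API):

```python
def creds(user, password, ssl_login_name,
          default_user=None, default_pass=None,
          allow_anonymous=False, tls_auth=False):
    """Illustrative mirror of the plugin's credential precedence."""
    if user is not None:
        u = user                       # explicit username always wins
    elif tls_auth and ssl_login_name is not None:
        u = ssl_login_name             # fall back to the TLS certificate name
    elif default_user is not None and allow_anonymous:
        u = default_user               # configured default, anonymous allowed
    else:
        return None                    # nocreds

    if password is not None:
        return (u, password)
    return (u, None)                   # cert or default login: no password
```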
512520 catch amqp_connection:close(Connection),
513521 PState #proc_state{ channels = {undefined, undefined},
514522 connection = undefined }.
515
5151 {ok, Sock} ->
5252 rabbit_alarm:register(
5353 self(), {?MODULE, conserve_resources, []}),
54 ProcessorState = rabbit_mqtt_processor:initial_state(Sock),
54 ProcessorState = rabbit_mqtt_processor:initial_state(Sock,ssl_login_name(Sock)),
5555 {noreply,
5656 control_throttle(
5757 #state{socket = Sock,
137137 KeepaliveSup, Sock, 0, SendFun, Keepalive, ReceiveFun),
138138 {noreply, State #state { keepalive = Heartbeater }};
139139
140 handle_info(keepalive_timeout, State = #state { conn_name = ConnStr }) ->
140 handle_info(keepalive_timeout, State = #state {conn_name = ConnStr,
141 proc_state = PState}) ->
141142 log(error, "closing MQTT connection ~p (keepalive timeout)~n", [ConnStr]),
142 {stop, {shutdown, keepalive_timeout}, State};
143 send_will_and_terminate(PState, {shutdown, keepalive_timeout}, State);
143144
144145 handle_info(Msg, State) ->
145146 {stop, {mqtt_unexpected_msg, Msg}, State}.
189190
190191 code_change(_OldVsn, State, _Extra) ->
191192 {ok, State}.
193
194 ssl_login_name(Sock) ->
195 case rabbit_net:peercert(Sock) of
196 {ok, C} -> case rabbit_ssl:peer_cert_auth_name(C) of
197 unsafe -> none;
198 not_found -> none;
199 Name -> Name
200 end;
201 {error, no_peercert} -> none;
202 nossl -> none
203 end.
192204
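The `ssl_login_name/1` helper collapses every outcome other than a usable certificate name — `unsafe`, `not_found`, `{error, no_peercert}`, plain TCP — to `none`. The same mapping in Python (illustrative names; the Erlang atoms become strings here):

```python
def ssl_login_name(peercert, cert_auth_name):
    """Map peer-certificate lookup results to a login name or None.

    `peercert` is None when the socket is plain TCP or the client sent
    no certificate; `cert_auth_name` is the name extracted from the
    certificate, or one of the failure markers below (sketch only).
    """
    if peercert is None:
        return None                    # nossl / {error, no_peercert}
    if cert_auth_name in ("unsafe", "not_found", None):
        return None                    # certificate present, name unusable
    return cert_auth_name
```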
193205 %%----------------------------------------------------------------------------
194206
244256 log(Level, Fmt, Args) -> rabbit_log:log(connection, Level, Fmt, Args).
245257
246258 send_will_and_terminate(PState, State) ->
259 send_will_and_terminate(PState, {shutdown, conn_closed}, State).
260
261 send_will_and_terminate(PState, Reason, State) ->
247262 rabbit_mqtt_processor:send_will(PState),
248263 % todo: flush channel after publish
249 {stop, {shutdown, conn_closed}, State}.
264 {stop, Reason, State}.
250265
251266 network_error(closed,
252267 State = #state{ conn_name = ConnStr,
55 {mod, {rabbit_mqtt, []}},
66 {env, [{default_user, <<"guest">>},
77 {default_pass, <<"guest">>},
8 {ssl_cert_login,false},
89 {allow_anonymous, true},
910 {vhost, <<"/">>},
1011 {exchange, <<"amq.topic">>},
1010 JAVA_AMQP_DIR=../../rabbitmq-java-client/
1111 JAVA_AMQP_CLASSES=$(JAVA_AMQP_DIR)build/classes/
1212
13 TEST_SRCS:=$(shell find $(TEST_SRC) -name '*.java')
1413 ALL_CLASSES:=$(foreach f,$(shell find src -name '*.class'),'$(f)')
15 TEST_CLASSES:=$(TEST_SRCS:.java=.class)
1614 CP:=$(PAHO_JAR):$(JUNIT_JAR):$(TEST_SRC):$(JAVA_AMQP_CLASSES)
15
16 HOSTNAME:=$(shell hostname)
1717
1818 define class_from_path
1919 $(subst .class,,$(subst src.,,$(subst /,.,$(1))))
2020 endef
2121
2222 .PHONY: test
23 test: $(TEST_CLASSES) build_java_amqp
24 $(foreach test,$(TEST_CLASSES),CLASSPATH=$(CP) java junit.textui.TestRunner -text $(call class_from_path,$(test)))
23 test: build_java_amqp
24 ant test -Dhostname=$(HOSTNAME)
2525
2626 clean:
27 rm -rf $(PAHO_JAR) $(ALL_CLASSES)
27 ant clean
28 rm -rf test_client
29
2830
2931 distclean: clean
3032 rm -rf $(CHECKOUT_DIR)
3335 git clone $(UPSTREAM_GIT) $@
3436 (cd $@ && git checkout $(REVISION)) || rm -rf $@
3537
36 $(PAHO_JAR): $(CHECKOUT_DIR)
37 ant -buildfile $</org.eclipse.paho.client.mqttv3/build.xml \
38 -Dship.folder=. -Dmqttv3-client-jar=$(PAHO_JAR_NAME) full
39
40 %.class: %.java $(PAHO_JAR) $(JUNIT_JAR)
41 $(JC) -cp $(CP) $<
4238
4339 .PHONY: build_java_amqp
44 build_java_amqp:
45 make -C $(JAVA_AMQP_DIR)
40 build_java_amqp: $(CHECKOUT_DIR)
41 make -C $(JAVA_AMQP_DIR) jar
42
0 build.out=build
1 test.resources=${build.out}/test/resources
2 javac.debug=true
3 test.javac.out=${build.out}/test/classes
4 test.resources=${build.out}/test/resources
5 test.src.home=src
6 certs.dir=certs
7 certs.password=test
8 server.keystore=${test.resources}/server.jks
9 server.cert=${certs.dir}/server/cert.pem
10 ca.cert=${certs.dir}/testca/cacert.pem
11 server.keystore.phrase=bunnyhop
12
13 client.keystore=${test.resources}/client.jks
14 client.keystore.phrase=bunnychow
15 client.srckeystore=${certs.dir}/client/keycert.p12
16 client.srckeystore.password=bunnychow
0 <?xml version="1.0"?>
1 <project name="MQTT Java Test client" default="build">
2
3 <property name="output.folder" value="./target/work" />
4 <property name="ship.folder" value="./" />
5
6 <property file="build.properties"/>
7
8 <property name="java-amqp-client-path" location="../../rabbitmq-java-client" />
9
10 <path id="test.javac.classpath">
11 <!-- cf dist target, infra -->
12 <fileset dir="lib">
13 <include name="**/*.jar"/>
14 </fileset>
15 <fileset dir="test_client">
16 <include name="**/*.jar"/>
17 </fileset>
18 <fileset dir="${java-amqp-client-path}">
19 <include name="**/rabbitmq-client.jar" />
20 </fileset>
21 </path>
22
23 <target name="clean-paho" description="Clean compiled Eclipse Paho Test Client jars" >
24 <ant antfile="test_client/org.eclipse.paho.client.mqttv3/build.xml" useNativeBasedir="true" target="clean"/>
25 </target>
26
27 <target name="clean" >
28 <delete dir="${build.out}"/>
29 </target>
30
31 <target name="build-paho" depends="clean-paho" description="Build the Eclipse Paho Test Client">
32 <ant antfile="test_client/org.eclipse.paho.client.mqttv3/build.xml" useNativeBasedir="true" />
33 </target>
34
35 <target name="detect-ssl">
36 <available property="SSL_AVAILABLE" file="${certs.dir}/client"/>
37 <property name="CLIENT_KEYSTORE_PHRASE" value="bunnies"/>
38 <property name="SSL_P12_PASSWORD" value="${certs.password}"/>
39 </target>
40
41 <target name="detect-tmpdir">
42 <property environment="env"/>
43 <condition property="TMPDIR" value="${env.TMPDIR}" else="/tmp">
44 <available file="${env.TMPDIR}" type="dir"/>
45 </condition>
46 </target>
47
48 <target name="make-server-keystore" if="SSL_AVAILABLE" depends="detect-ssl, detect-tmpdir">
49 <mkdir dir="${test.resources}"/>
50 <exec executable="keytool" failonerror="true" osfamily="unix">
51 <arg line="-import"/>
52 <arg value="-alias"/>
53 <arg value="server1"/>
54 <arg value="-file"/>
55 <arg value="${server.cert}"/>
56 <arg value="-keystore"/>
57 <arg value="${server.keystore}"/>
58 <arg value="-noprompt"/>
59 <arg value="-storepass"/>
60 <arg value="${server.keystore.phrase}"/>
61 </exec>
62 <exec executable="keytool" failonerror="true" osfamily="unix">
63 <arg line="-import"/>
64 <arg value="-alias"/>
65 <arg value="testca"/>
66 <arg value="-trustcacerts"/>
67 <arg value="-file"/>
68 <arg value="${ca.cert}"/>
69 <arg value="-keystore"/>
70 <arg value="${server.keystore}"/>
71 <arg value="-noprompt"/>
72 <arg value="-storepass"/>
73 <arg value="${server.keystore.phrase}"/>
74 </exec>
75 </target>
76
77 <target name="make-client-keystore" if="SSL_AVAILABLE" depends="detect-ssl, detect-tmpdir">
78 <mkdir dir="${test.resources}"/>
79 <exec executable="keytool" failonerror="true" osfamily="unix">
80 <arg line="-importkeystore"/>
81 <arg line="-srckeystore" />
82 <arg line="${client.srckeystore}" />
83 <arg value="-srcstoretype"/>
84 <arg value="PKCS12"/>
85 <arg value="-srcstorepass"/>
86 <arg value="${client.srckeystore.password}"/>
87 <arg value="-destkeystore"/>
88 <arg value="${client.keystore}"/>
89 <arg value="-deststoretype"/>
90 <arg value="JKS"/>
91 <arg value="-noprompt"/>
92 <arg value="-storepass"/>
93 <arg value="${client.keystore.phrase}"/>
94 </exec>
95 </target>
96
97 <target name="test-build" depends="clean,build-paho">
98 <mkdir dir="${test.javac.out}"/>
99
100 <javac srcdir="./src"
101 destdir="${test.javac.out}"
102 debug="on"
103 includeantruntime="false" >
104 <classpath>
105 <path refid="test.javac.classpath"/>
106 </classpath>
107 </javac>
108 </target>
109
110 <target name="test-ssl" depends="test-build, make-server-keystore, make-client-keystore" if="SSL_AVAILABLE">
111 <junit printSummary="withOutAndErr"
112 haltOnFailure="true"
113 failureproperty="test.failure"
114 fork="yes">
115 <classpath>
116 <path refid="test.javac.classpath"/>
117 <pathelement path="${test.javac.out}"/>
118 <pathelement path="${test.resources}"/>
119 </classpath>
120 <jvmarg value="-Dhostname=${hostname}"/>
121 <jvmarg value="-Dserver.keystore.passwd=${server.keystore.phrase}"/>
122 <jvmarg value="-Dclient.keystore.passwd=${client.keystore.phrase}"/>
123 <formatter type="plain"/>
124 <formatter type="xml"/>
125 <test todir="${build.out}" name="com.rabbitmq.mqtt.test.tls.MqttSSLTest"/>
126 </junit>
127 </target>
128
129 <target name="test-server" depends="test-build">
130 <junit printSummary="withOutAndErr"
131 haltOnFailure="true"
132 failureproperty="test.failure"
133 fork="yes">
134 <classpath>
135 <path refid="test.javac.classpath"/>
136 <pathelement path="${test.javac.out}"/>
137 </classpath>
138
139 <formatter type="plain"/>
140 <formatter type="xml"/>
141 <test todir="${build.out}" name="com.rabbitmq.mqtt.test.MqttTest"/>
142 </junit>
143 </target>
144
145 <target name="test" depends="test-server, test-ssl" description="Run the MQTT test suites.">
146
147 </target>
148
149 </project>
0 #!/bin/sh
1 CTL=$1
2 USER="O=client,CN=$(hostname)"
3
4 $CTL add_user "$USER" ''
5 $CTL set_permissions -p / "$USER" ".*" ".*" ".*"
0 #!/bin/sh -e
1 sh -e `dirname $0`/rabbit-test.sh "`dirname $0`/../../rabbitmq-server/scripts/rabbitmqctl -n rabbit-test"
0 #!/bin/sh
1 CTL=$1
2 USER="O=client,CN=$(hostname)"
3
4 # Test direct connections
5 $CTL add_user "$USER" ''
6 $CTL set_permissions -p / "$USER" ".*" ".*" ".*"
0 #!/bin/sh -e
1 sh -e `dirname $0`/rabbit-test.sh "`dirname $0`/../../rabbitmq-server/scripts/rabbitmqctl -n rabbit-test"
0 // The contents of this file are subject to the Mozilla Public License
1 // Version 1.1 (the "License"); you may not use this file except in
2 // compliance with the License. You may obtain a copy of the License
3 // at http://www.mozilla.org/MPL/
4 //
5 // Software distributed under the License is distributed on an "AS IS"
6 // basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See
7 // the License for the specific language governing rights and
8 // limitations under the License.
9 //
10 // The Original Code is RabbitMQ.
11 //
12 // The Initial Developer of the Original Code is GoPivotal, Inc.
13 // Copyright (c) 2007-2014 GoPivotal, Inc. All rights reserved.
14 //
15
16 package com.rabbitmq.mqtt.test.tls;
17
18 import junit.framework.Assert;
19 import junit.framework.TestCase;
20 import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
21 import org.eclipse.paho.client.mqttv3.MqttCallback;
22 import org.eclipse.paho.client.mqttv3.MqttClient;
23 import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
24 import org.eclipse.paho.client.mqttv3.MqttException;
25 import org.eclipse.paho.client.mqttv3.MqttMessage;
26
27 import java.io.IOException;
28 import java.util.ArrayList;
29
30
31 /**
32 * MQTT v3.1 tests
33 * TODO: synchronise access to variables
34 */
35
36 public class MqttSSLTest extends TestCase implements MqttCallback {
37
38 private final int port = 8883;
39 private final String brokerUrl = "ssl://" + getHost() + ":" + port;
40 private String clientId;
41 private String clientId2;
42 private MqttClient client;
43 private MqttClient client2;
44 private MqttConnectOptions conOpt;
45 private ArrayList<MqttMessage> receivedMessages;
46
47 private long lastReceipt;
48 private boolean expectConnectionFailure;
49
50
51 private static final String getHost() {
52 Object host = System.getProperty("hostname");
53 assertNotNull(host);
54 return host.toString();
55 }
56
57 // override 10s limit
58 private class MyConnOpts extends MqttConnectOptions {
59 private int keepAliveInterval = 60;
60
61 @Override
62 public void setKeepAliveInterval(int keepAliveInterval) {
63 this.keepAliveInterval = keepAliveInterval;
64 }
65
66 @Override
67 public int getKeepAliveInterval() {
68 return keepAliveInterval;
69 }
70 }
71
72
73 @Override
74 public void setUp() throws MqttException, IOException {
75 clientId = getClass().getSimpleName() + ((int) (10000 * Math.random()));
76 clientId2 = clientId + "-2";
77 client = new MqttClient(brokerUrl, clientId, null);
78 client2 = new MqttClient(brokerUrl, clientId2, null);
79 conOpt = new MyConnOpts();
80 conOpt.setSocketFactory(MutualAuth.getSSLContextWithoutCert().getSocketFactory());
81 setConOpts(conOpt);
82 receivedMessages = new ArrayList<MqttMessage>();
83 expectConnectionFailure = false;
84 }
85
86 @Override
87 public void tearDown() throws MqttException {
88 // clean any sticky sessions
89 setConOpts(conOpt);
90 client = new MqttClient(brokerUrl, clientId, null);
91 try {
92 client.connect(conOpt);
93 client.disconnect();
94 } catch (Exception _) {
95 }
96
97 client2 = new MqttClient(brokerUrl, clientId2, null);
98 try {
99 client2.connect(conOpt);
100 client2.disconnect();
101 } catch (Exception _) {
102 }
103 }
104
105
106 private void setConOpts(MqttConnectOptions conOpts) {
107 // provide authentication if the broker needs it
108 // conOpts.setUserName("guest");
109 // conOpts.setPassword("guest".toCharArray());
110 conOpts.setCleanSession(true);
111 conOpts.setKeepAliveInterval(60);
112 }
113
114 public void testCertLogin() throws MqttException {
115 try {
116 conOpt.setSocketFactory(MutualAuth.getSSLContextWithClientCert().getSocketFactory());
117 client.connect(conOpt);
118 } catch (Exception e) {
119 e.printStackTrace();
120 fail("Exception: " + e.getMessage());
121 }
122 }
123
124
125 public void testInvalidUser() throws MqttException {
126 conOpt.setUserName("invalid-user");
127 try {
128 client.connect(conOpt);
129 fail("Authentication failure expected");
130 } catch (MqttException ex) {
131 Assert.assertEquals(MqttException.REASON_CODE_FAILED_AUTHENTICATION, ex.getReasonCode());
132 } catch (Exception e) {
133 e.printStackTrace();
134 fail("Exception: " + e.getMessage());
135 }
136 }
137
138 public void testInvalidPassword() throws MqttException {
139 conOpt.setUserName("invalid-user");
140 conOpt.setPassword("invalid-password".toCharArray());
141 try {
142 client.connect(conOpt);
143 fail("Authentication failure expected");
144 } catch (MqttException ex) {
145 Assert.assertEquals(MqttException.REASON_CODE_FAILED_AUTHENTICATION, ex.getReasonCode());
146 } catch (Exception e) {
147 e.printStackTrace();
148 fail("Exception: " + e.getMessage());
149 }
150 }
151
152
153 public void connectionLost(Throwable cause) {
154 if (!expectConnectionFailure)
155 fail("Connection unexpectedly lost");
156 }
157
158 public void messageArrived(String topic, MqttMessage message) throws Exception {
159 lastReceipt = System.currentTimeMillis();
160 receivedMessages.add(message);
161 }
162
163 public void deliveryComplete(IMqttDeliveryToken token) {
164 }
165 }
0 package com.rabbitmq.mqtt.test.tls;
1
2 import javax.net.ssl.KeyManagerFactory;
3 import javax.net.ssl.SSLContext;
4 import javax.net.ssl.TrustManagerFactory;
5 import java.io.IOException;
6 import java.security.KeyStore;
7 import java.security.KeyStoreException;
8 import java.security.NoSuchAlgorithmException;
9 import java.security.cert.CertificateException;
10 import java.util.Arrays;
11 import java.util.List;
12
13 public class MutualAuth {
14
15 private MutualAuth() {
16
17 }
18
19 private static String getStringProperty(String propertyName) throws IllegalArgumentException {
20 Object value = System.getProperty(propertyName);
21 if (value == null) throw new IllegalArgumentException("Property: " + propertyName + " not found");
22 return value.toString();
23 }
24
25 private static TrustManagerFactory getServerTrustManagerFactory() throws NoSuchAlgorithmException, CertificateException, IOException, KeyStoreException {
26 char[] trustPhrase = getStringProperty("server.keystore.passwd").toCharArray();
27 MutualAuth dummy = new MutualAuth();
28
29 // Server TrustStore
30 KeyStore tks = KeyStore.getInstance("JKS");
31 tks.load(dummy.getClass().getResourceAsStream("/server.jks"), trustPhrase);
32
33 TrustManagerFactory tmf = TrustManagerFactory.getInstance("X509");
34 tmf.init(tks);
35
36 return tmf;
37 }
38
39 public static SSLContext getSSLContextWithClientCert() throws IOException {
40
41 char[] clientPhrase = getStringProperty("client.keystore.passwd").toCharArray();
42
43 MutualAuth dummy = new MutualAuth();
44 try {
45 SSLContext sslContext = getVanillaSSLContext();
46 // Client Keystore
47 KeyStore ks = KeyStore.getInstance("JKS");
48 ks.load(dummy.getClass().getResourceAsStream("/client.jks"), clientPhrase);
49 KeyManagerFactory kmf = KeyManagerFactory.getInstance("SunX509");
50 kmf.init(ks, clientPhrase);
51
52 sslContext.init(kmf.getKeyManagers(), getServerTrustManagerFactory().getTrustManagers(), null);
53 return sslContext;
54 } catch (Exception e) {
55 throw new IOException(e);
56 }
57
58 }
59
60 private static SSLContext getVanillaSSLContext() throws NoSuchAlgorithmException {
61 SSLContext result = null;
62 List<String> xs = Arrays.asList("TLSv1.2", "TLSv1.1", "TLSv1");
63 for(String x : xs) {
64 try {
65 return SSLContext.getInstance(x);
66 } catch (NoSuchAlgorithmException nae) {
67 // keep trying
68 }
69 }
70 throw new NoSuchAlgorithmException("Could not obtain an SSLContext for TLS 1.0-1.2");
71 }
72
73 public static SSLContext getSSLContextWithoutCert() throws IOException {
74 try {
75 SSLContext sslContext = getVanillaSSLContext();
76 sslContext.init(null, getServerTrustManagerFactory().getTrustManagers(), null);
77 return sslContext;
78 } catch (Exception e) {
79 throw new IOException(e);
80 }
81 }
82
83 }
0 [{rabbitmq_mqtt, [
1 {ssl_cert_login, true},
2 {allow_anonymous, true},
3 {tcp_listeners, [1883]},
4 {ssl_listeners, [8883]}
5 ]},
6 {rabbit, [{ssl_options, [{cacertfile,"%%CERTS_DIR%%/testca/cacert.pem"},
7 {certfile,"%%CERTS_DIR%%/server/cert.pem"},
8 {keyfile,"%%CERTS_DIR%%/server/key.pem"},
9 {verify,verify_peer},
10 {fail_if_no_peer_cert,false}
11 ]}
12 ]}
13 ].
116116 validate_params_user(#amqp_params_direct{}, none) ->
117117 ok;
118118 validate_params_user(#amqp_params_direct{virtual_host = VHost},
119 User = #user{username = Username,
120 auth_backend = M}) ->
121 case rabbit_vhost:exists(VHost) andalso M:check_vhost_access(User, VHost) of
122 true -> ok;
123 false -> {error, "user \"~s\" may not connect to vhost \"~s\"",
119 User = #user{username = Username}) ->
120 case rabbit_vhost:exists(VHost) andalso
121 (catch rabbit_access_control:check_vhost_access(
122 User, VHost, undefined)) of
123 ok -> ok;
124 _ -> {error, "user \"~s\" may not connect to vhost \"~s\"",
124125 [Username, VHost]}
125126 end;
126127 validate_params_user(#amqp_params_network{}, _User) ->
256256 valid_param(Value) -> valid_param(Value, none).
257257
258258 lookup_user(Name) ->
259 {ok, User} = rabbit_auth_backend_internal:check_user_login(Name, []),
259 {ok, User} = rabbit_access_control:check_user_login(Name, []),
260260 User.
261261
262262 clear_param(Name) ->
1717 var num_keys = ['prefetch-count', 'reconnect-delay'];
1818 var bool_keys = ['add-forward-headers'];
1919 var arrayable_keys = ['src-uri', 'dest-uri'];
20 var redirect = this.params['redirect'];
21 if (redirect != undefined) {
22 delete this.params['redirect'];
23 }
2024 put_parameter(this, [], num_keys, bool_keys, arrayable_keys);
25 if (redirect != undefined) {
26 go_to(redirect);
27 }
2128 return false;
2229 });
2330 sammy.del('#/shovel-parameters', function() {
6464 %% static shovels do not have a vhost, so only allow admins (not
6565 %% monitors) to see them.
6666 filter_vhost_user(List, _ReqData, #context{user = User = #user{tags = Tags}}) ->
67 VHosts = rabbit_mgmt_util:list_login_vhosts(User),
67 VHosts = rabbit_mgmt_util:list_login_vhosts(User, undefined),
6868 [I || I <- List, case pget(vhost, I) of
6969 undefined -> lists:member(administrator, Tags);
7070 VHost -> lists:member(VHost, VHosts)
00 RELEASABLE:=true
1 DEPS:=rabbitmq-server rabbitmq-erlang-client
1 DEPS:=rabbitmq-server rabbitmq-erlang-client rabbitmq-test
22 STANDALONE_TEST_COMMANDS:=eunit:test([rabbit_stomp_test_util,rabbit_stomp_test_frame],[verbose])
3 WITH_BROKER_TEST_SCRIPTS:=$(PACKAGE_DIR)/test/src/test.py $(PACKAGE_DIR)/test/src/test_connect_options.py
3 WITH_BROKER_TEST_SCRIPTS:=$(PACKAGE_DIR)/test/src/test.py $(PACKAGE_DIR)/test/src/test_connect_options.py $(PACKAGE_DIR)/test/src/test_ssl.py
44 WITH_BROKER_TEST_COMMANDS:=rabbit_stomp_test:all_tests() rabbit_stomp_amqqueue_test:all_tests()
5
6 RABBITMQ_TEST_PATH=$(PACKAGE_DIR)/../rabbitmq-test
7 ABS_PACKAGE_DIR:=$(abspath $(PACKAGE_DIR))
8
9 CERTS_DIR:=$(ABS_PACKAGE_DIR)/test/certs
10 CAN_RUN_SSL:=$(shell if [ -d $(RABBITMQ_TEST_PATH) ]; then echo "true"; else echo "false"; fi)
11
12 TEST_CONFIG_PATH=$(TEST_EBIN_DIR)/test.config
13 WITH_BROKER_TEST_CONFIG:=$(TEST_EBIN_DIR)/test
14
15 .PHONY: $(TEST_CONFIG_PATH)
16
17 ifeq ($(CAN_RUN_SSL),true)
18
19 WITH_BROKER_TEST_SCRIPTS += $(PACKAGE_DIR)/test/src/test_ssl.py
20
21 $(TEST_CONFIG_PATH): $(CERTS_DIR) $(ABS_PACKAGE_DIR)/test/src/ssl.config
22 sed -e "s|%%CERTS_DIR%%|$(CERTS_DIR)|g" < $(ABS_PACKAGE_DIR)/test/src/ssl.config > $@
23 @echo "\nRunning SSL tests\n"
24
25 $(CERTS_DIR):
26 mkdir -p $(CERTS_DIR)
27 make -C $(RABBITMQ_TEST_PATH)/certs all PASSWORD=test DIR=$(CERTS_DIR)
28
29 else
30 $(TEST_CONFIG_PATH): $(ABS_PACKAGE_DIR)/test/src/non_ssl.config
31 cp $(ABS_PACKAGE_DIR)/test/src/non_ssl.config $@
32 @echo "\nNOT running SSL tests - looked in $(RABBITMQ_TEST_PATH) \n"
33
34 endif
5 WITH_BROKER_TEST_CONFIG:=$(PACKAGE_DIR)/test/ebin/test
356
367 define package_rules
378
38 $(PACKAGE_DIR)+pre-test:: $(TEST_CONFIG_PATH)
9 $(PACKAGE_DIR)+pre-test::
10 rm -rf $(PACKAGE_DIR)/test/certs
11 mkdir $(PACKAGE_DIR)/test/certs
12 mkdir -p $(PACKAGE_DIR)/test/ebin
13 sed -e "s|%%CERTS_DIR%%|$(abspath $(PACKAGE_DIR))/test/certs|g" < $(PACKAGE_DIR)/test/src/test.config > $(PACKAGE_DIR)/test/ebin/test.config
14 make -C $(PACKAGE_DIR)/../rabbitmq-test/certs all PASSWORD=test DIR=$(abspath $(PACKAGE_DIR))/test/certs
3915 make -C $(PACKAGE_DIR)/deps/stomppy
4016
4117 $(PACKAGE_DIR)+clean::
42 rm -rf $(CERTS_DIR)
18 rm -rf $(PACKAGE_DIR)/test/certs
4319
4420 $(PACKAGE_DIR)+clean-with-deps::
4521 make -C $(PACKAGE_DIR)/deps/stomppy distclean
157157 {shutdown, {server_initiated_close, Code, Explanation}}},
158158 State = #state{connection = Conn}) ->
159159 amqp_death(Code, Explanation, State);
160 handle_info({'EXIT', Conn,
161 {shutdown, {connection_closing,
162 {server_initiated_close, Code, Explanation}}}},
163 State = #state{connection = Conn}) ->
164 amqp_death(Code, Explanation, State);
160165 handle_info({'EXIT', Conn, Reason}, State = #state{connection = Conn}) ->
161166 send_error("AMQP connection died", "Reason: ~p", [Reason], State),
162167 {stop, {conn_died, Reason}, State};
168
169 handle_info({'EXIT', Ch,
170              {shutdown, {server_initiated_close, Code, Explanation}}},
171             State = #state{channel = Ch}) ->
172     amqp_death(Code, Explanation, State);
173 handle_info({'EXIT', Ch, Reason}, State = #state{channel = Ch}) ->
174     send_error("AMQP channel died", "Reason: ~p", [Reason], State),
175     {stop, {channel_died, Reason}, State};
176
177
163178 handle_info({inet_reply, _, ok}, State) ->
164179 {noreply, State, hibernate};
165180 handle_info({bump_credit, Msg}, State) ->
510525 {ok, Connection} ->
511526 link(Connection),
512527 {ok, Channel} = amqp_connection:open_channel(Connection),
528 link(Channel),
513529 amqp_channel:enable_delivery_flow_control(Channel),
514530 SessionId = rabbit_guid:string(rabbit_guid:gen_secure(), "session"),
515531 {{SendTimeout, ReceiveTimeout}, State1} =
1616 -module(rabbit_stomp_reader).
1717
1818 -export([start_link/3]).
19 -export([init/3]).
19 -export([init/3, mainloop/2]).
20 -export([system_continue/3, system_terminate/4, system_code_change/4]).
2021 -export([conserve_resources/3]).
2122
2223 -include("rabbit_stomp.hrl").
2425 -include_lib("amqp_client/include/amqp_client.hrl").
2526
2627 -record(reader_state, {socket, parse_state, processor, state,
27 conserve_resources, recv_outstanding}).
28 conserve_resources, recv_outstanding,
29 parent}).
2830
2931 %%----------------------------------------------------------------------------
3032
4749 {ok, ConnStr} ->
4850 case SockTransform(Sock0) of
4951 {ok, Sock} ->
50
52 DebugOpts = sys:debug_options([]),
5153 ProcInitArgs = processor_args(SupHelperPid,
5254 Configuration,
5355 Sock),
5860
5961 ParseState = rabbit_stomp_frame:initial_state(),
6062 try
61 mainloop(
63 mainloop(DebugOpts,
6264 register_resource_alarm(
6365 #reader_state{socket = Sock,
6466 parse_state = ParseState,
8587 end
8688 end.
8789
88 mainloop(State0 = #reader_state{socket = Sock}) ->
90 mainloop(DebugOpts, State0 = #reader_state{socket = Sock}) ->
8991 State = run_socket(control_throttle(State0)),
9092 receive
9193 {inet_async, Sock, _Ref, {ok, Data}} ->
92 mainloop(process_received_bytes(
94 mainloop(DebugOpts, process_received_bytes(
9395 Data, State#reader_state{recv_outstanding = false}));
9496 {inet_async, _Sock, _Ref, {error, closed}} ->
9597 ok;
98100 {inet_reply, _Sock, {error, closed}} ->
99101 ok;
100102 {conserve_resources, Conserve} ->
101 mainloop(State#reader_state{conserve_resources = Conserve});
103 mainloop(DebugOpts, State#reader_state{conserve_resources = Conserve});
102104 {bump_credit, Msg} ->
103105 credit_flow:handle_bump_msg(Msg),
104 mainloop(State);
106 mainloop(DebugOpts, State);
107 {system, From, Request} ->
108 sys:handle_system_msg(Request, From, State#reader_state.parent,
109 ?MODULE, DebugOpts, State);
105110 {'EXIT', _From, shutdown} ->
106111 ok;
107112 Other ->
159164
160165 %%----------------------------------------------------------------------------
161166
167 system_continue(Parent, DebugOpts, State) ->
168 mainloop(DebugOpts, State#reader_state{parent = Parent}).
169
170 system_terminate(Reason, _Parent, _OldVsn, _Extra) ->
171 exit(Reason).
172
173 system_code_change(Misc, _Module, _OldVsn, _Extra) ->
174     {ok, Misc}.
175
176 %%----------------------------------------------------------------------------
177
162178 processor_args(SupPid, Configuration, Sock) ->
163179 SendFun = fun (sync, IoData) ->
164180 %% no messages emitted
plugins-src/rabbitmq-stomp/test/src/non_ssl.config +0 -6
0 [{rabbitmq_stomp, [{default_user, [{login, "guest"},
1 {passcode, "guest"}
2 ]},
3 {implicit_connect, true}
4 ]}
5 ].
plugins-src/rabbitmq-stomp/test/src/ssl.config +0 -12
0 [{rabbitmq_stomp, [{default_user, []},
1 {ssl_cert_login, true},
2 {ssl_listeners, [61614]}
3 ]},
4 {rabbit, [{ssl_options, [{cacertfile,"%%CERTS_DIR%%/testca/cacert.pem"},
5 {certfile,"%%CERTS_DIR%%/server/cert.pem"},
6 {keyfile,"%%CERTS_DIR%%/server/key.pem"},
7 {verify,verify_peer},
8 {fail_if_no_peer_cert,true}
9 ]}
10 ]}
11 ].
00 import unittest
11 import os
2 import os.path
3 import sys
24
35 import stomp
46 import base
57
6 ssl_key_file = os.path.abspath("test/certs/client/key.pem")
7 ssl_cert_file = os.path.abspath("test/certs/client/cert.pem")
8 ssl_ca_certs = os.path.abspath("test/certs/testca/cacert.pem")
8
9 base_path = os.path.dirname(sys.argv[0])
10
11 ssl_key_file = os.path.abspath(base_path + "/../certs/client/key.pem")
12 ssl_cert_file = os.path.abspath(base_path + "/../certs/client/cert.pem")
13 ssl_ca_certs = os.path.abspath(base_path + "/../certs/testca/cacert.pem")
914
1015 class TestSslClient(unittest.TestCase):
1116
1520 use_ssl = True, ssl_key_file = ssl_key_file,
1621 ssl_cert_file = ssl_cert_file,
1722 ssl_ca_certs = ssl_ca_certs)
18
23 print "FILE: ", ssl_cert_file
1924 conn.start()
2025 conn.connect()
2126 return conn
0 [{rabbitmq_stomp, [{default_user, []},
1 {ssl_cert_login, true},
2 {ssl_listeners, [61614]}
3 ]},
4 {rabbit, [{ssl_options, [{cacertfile,"%%CERTS_DIR%%/testca/cacert.pem"},
5 {certfile,"%%CERTS_DIR%%/server/cert.pem"},
6 {keyfile,"%%CERTS_DIR%%/server/key.pem"},
7 {verify,verify_peer},
8 {fail_if_no_peer_cert,true}
9 ]}
10 ]}
11 ].
0 ## Overview
1
2 RabbitMQ projects use pull requests to discuss, collaborate on and accept code contributions.
3 Pull requests are the primary place for discussing code changes.
4
5 ## How to Contribute
6
7 The process is fairly standard:
8
9 * Fork the repository or repositories you plan on contributing to
10 * Clone [RabbitMQ umbrella repository](https://github.com/rabbitmq/rabbitmq-public-umbrella)
11 * `cd umbrella`, `make co`
12 * Create a branch with a descriptive name in the relevant repositories
13 * Make your changes, run tests, commit with a [descriptive message](http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html), push to your fork
14 * Submit pull requests with an explanation of what has been changed and **why**
15 * Submit a filled out and signed [Contributor Agreement](https://github.com/rabbitmq/ca#how-to-submit) if needed (see below)
16 * Be patient. We will get to your pull request eventually
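
In shell terms, the steps above look roughly like the following sketch. This is illustrative only: the remote name `<your-fork>` and the branch name are placeholders, and the member repository you `cd` into depends on what you are changing.

```shell
# Sketch of the contribution workflow above; names are placeholders.
git clone https://github.com/rabbitmq/rabbitmq-public-umbrella.git umbrella
cd umbrella
make co                                # check out the member repositories
cd rabbitmq-server                     # or whichever repository you forked
git checkout -b describe-your-change   # branch with a descriptive name
# ...edit, run tests...
git commit -am "Explain what changed and why"
git push <your-fork> describe-your-change
```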
17
18 If what you are going to work on is a substantial change, please first ask the core team
19 for their opinion on the [RabbitMQ mailing list](https://groups.google.com/forum/#!forum/rabbitmq-users).
20
21
22 ## (Brief) Code of Conduct
23
24 In one line: don't be a dick.
25
26 Be respectful to the maintainers and other contributors. Open source
27 contributors put long hours into developing projects and doing user
28 support. Those projects and user support are available for free. We
29 believe this deserves some respect.
30
31 Be respectful to people of all races, genders, religious beliefs and
32 political views. Regardless of how brilliant a pull request is
33 technically, we will not tolerate disrespectful or aggressive
34 behaviour.
35
36 Contributors who violate this straightforward Code of Conduct will see
37 their pull requests closed and locked.
38
39
40 ## Contributor Agreement
41
42 If you want to contribute a non-trivial change, please submit a signed copy of our
43 [Contributor Agreement](https://github.com/rabbitmq/ca#how-to-submit) around the time
44 you submit your pull request. This will make it much easier (and in some cases, possible at all)
45 for the RabbitMQ team at Pivotal to merge your contribution.
46
47
48 ## Where to Ask Questions
49
50 If something isn't clear, feel free to ask on our [mailing list](https://groups.google.com/forum/#!forum/rabbitmq-users).
00 DEPS:=rabbitmq-erlang-client
11 FILTER:=all
22 COVER:=false
3 WITH_BROKER_TEST_COMMANDS:=rabbit_test_runner:run_in_broker(\"$(PACKAGE_DIR)/test/ebin\",\"$(FILTER)\")
34 STANDALONE_TEST_COMMANDS:=rabbit_test_runner:run_multi(\"$(UMBRELLA_BASE_DIR)/rabbitmq-server\",\"$(PACKAGE_DIR)/test/ebin\",\"$(FILTER)\",$(COVER),none)
45
56 ## Require R15B to compile inet_proxy_dist since it requires includes
6565 %% Modification START
6666 ProxyPort = case TcpPort >= 25672 andalso TcpPort < 25700
6767 andalso inet_tcp_proxy:is_enabled() of
68 true -> TcpPort + 10000;
68 true -> TcpPort + 5000;
6969 false -> TcpPort
7070 end,
7171 case inet_tcp:connect(Ip, ProxyPort,
6060 go() ->
6161 ets:new(?TABLE, [public, named_table]),
6262 {ok, Port} = application:get_env(kernel, inet_dist_listen_min),
63 ProxyPort = Port + 10000,
63 ProxyPort = Port + 5000,
6464 {ok, Sock} = gen_tcp:listen(ProxyPort, [inet,
6565 {reuseaddr, true}]),
6666 accept_loop(Sock, Port).
3030 -import(rabbit_misc, [pget/2, pget/3]).
3131
3232 -define(INITIAL_KEYS, [cover, base, server, plugins]).
33 -define(NON_RUNNING_KEYS, ?INITIAL_KEYS ++ [nodename, port]).
33 -define(NON_RUNNING_KEYS, ?INITIAL_KEYS ++ [nodename, port, mnesia_dir]).
3434
3535 cluster_ab(InitialCfg) -> cluster(InitialCfg, [a, b]).
3636 cluster_abc(InitialCfg) -> cluster(InitialCfg, [a, b, c]).
5252 [{_, _}|_] -> [InitialCfg0 || _ <- NodeNames];
5353 _ -> InitialCfg0
5454 end,
55 Nodes = [[{nodename, N}, {port, P} | strip_non_initial(Cfg)]
55 Nodes = [[{nodename, N}, {port, P},
56 {mnesia_dir, rabbit_misc:format("rabbitmq-~s-mnesia", [N])} |
57 strip_non_initial(Cfg)]
5658 || {N, P, Cfg} <- lists:zip3(NodeNames, Ports, InitialCfgs)],
5759 [start_node(Node) || Node <- Nodes].
5860
160162
161163 kill_node(Cfg) ->
162164 maybe_flush_cover(Cfg),
163 catch execute(Cfg, {"kill -9 ~s", [pget(os_pid, Cfg)]}),
165 OSPid = pget(os_pid, Cfg),
166 catch execute(Cfg, {"kill -9 ~s", [OSPid]}),
167 await_os_pid_death(OSPid),
164168 strip_running(Cfg).
169
170 await_os_pid_death(OSPid) ->
171 case rabbit_misc:is_os_process_alive(OSPid) of
172 true -> timer:sleep(100),
173 await_os_pid_death(OSPid);
174 false -> ok
175 end.
165176
166177 restart_node(Cfg) ->
167178 start_node(stop_node(Cfg)).
192203 execute(Env0, Cmd0, AcceptableExitCodes) ->
193204 Env = [{"RABBITMQ_" ++ K, fmt(V)} || {K, V} <- Env0],
194205 Cmd = fmt(Cmd0),
206 error_logger:info_msg("Invoking '~s'~n", [Cmd]),
195207 Port = erlang:open_port(
196208 {spawn, "/usr/bin/env sh -c \"" ++ Cmd ++ "\""},
197209 [{env, Env}, exit_status,
208220 Port = pget(port, Cfg),
209221 Base = pget(base, Cfg),
210222 Server = pget(server, Cfg),
211 [{"MNESIA_BASE", {"~s/rabbitmq-~s-mnesia", [Base, Nodename]}},
212 {"LOG_BASE", {"~s", [Base]}},
213 {"NODENAME", {"~s", [Nodename]}},
214 {"NODE_PORT", {"~B", [Port]}},
215 {"PID_FILE", pid_file(Cfg)},
216 {"CONFIG_FILE", "/some/path/which/does/not/exist"},
217 {"ALLOW_INPUT", "1"}, %% Needed to make it close on exit
223 [{"MNESIA_DIR", {"~s/~s", [Base, pget(mnesia_dir, Cfg)]}},
224 {"PLUGINS_EXPAND_DIR", {"~s/~s-plugins-expand", [Base, Nodename]}},
225 {"LOG_BASE", {"~s", [Base]}},
226 {"NODENAME", {"~s", [Nodename]}},
227 {"NODE_PORT", {"~B", [Port]}},
228 {"PID_FILE", pid_file(Cfg)},
229 {"CONFIG_FILE", "/some/path/which/does/not/exist"},
230 {"ALLOW_INPUT", "1"}, %% Needed to make it close on exit
218231 %% Bit of a hack - only needed for mgmt tests.
219232 {"SERVER_START_ARGS",
220233 {"-rabbitmq_management listener [{port,1~B}]", [Port]}},
241254 port_receive_loop(Port, Stdout, AcceptableExitCodes) ->
242255 receive
243256 {Port, {exit_status, X}} ->
257 Fmt = "Command exited with code ~p~nStdout: ~s~n",
258 Args = [X, Stdout],
244259 case lists:member(X, AcceptableExitCodes) of
245 true -> Stdout;
246 false -> exit({exit_status, X, AcceptableExitCodes, Stdout})
260 true -> error_logger:info_msg(Fmt, Args),
261 Stdout;
262 false -> error_logger:error_msg(Fmt, Args),
263 exit({exit_status, X, AcceptableExitCodes, Stdout})
247264 end;
248265 {Port, {data, Out}} ->
249 %%io:format(user, "~s", [Out]),
250266 port_receive_loop(Port, Stdout ++ Out, AcceptableExitCodes)
251267 end.
252268
0 %% The contents of this file are subject to the Mozilla Public License
1 %% Version 1.1 (the "License"); you may not use this file except in
2 %% compliance with the License. You may obtain a copy of the License
3 %% at http://www.mozilla.org/MPL/
4 %%
5 %% Software distributed under the License is distributed on an "AS IS"
6 %% basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See
7 %% the License for the specific language governing rights and
8 %% limitations under the License.
9 %%
10 %% The Original Code is RabbitMQ.
11 %%
12 %% The Initial Developer of the Original Code is GoPivotal, Inc.
13 %% Copyright (c) 2007-2014 GoPivotal, Inc. All rights reserved.
14 %%
15 -module(cluster_rename).
16
17 -compile(export_all).
18 -include_lib("eunit/include/eunit.hrl").
19 -include_lib("amqp_client/include/amqp_client.hrl").
20
21 -import(rabbit_misc, [pget/2]).
22
23 -define(CLUSTER2,
24 fun(C) -> rabbit_test_configs:cluster(C, [bugs, bigwig]) end).
25
26 -define(CLUSTER3,
27 fun(C) -> rabbit_test_configs:cluster(C, [bugs, bigwig, peter]) end).
28
29 %% Rolling rename of a cluster, each node should do a secondary rename.
30 rename_cluster_one_by_one_with() -> ?CLUSTER3.
31 rename_cluster_one_by_one([Bugs, Bigwig, Peter]) ->
32 publish_all([{Bugs, <<"1">>}, {Bigwig, <<"2">>}, {Peter, <<"3">>}]),
33
34 Jessica = stop_rename_start(Bugs, jessica, [bugs, jessica]),
35 Hazel = stop_rename_start(Bigwig, hazel, [bigwig, hazel]),
36 Flopsy = stop_rename_start(Peter, flopsy, [peter, flopsy]),
37
38 consume_all([{Jessica, <<"1">>}, {Hazel, <<"2">>}, {Flopsy, <<"3">>}]),
39 stop_all([Jessica, Hazel, Flopsy]),
40 ok.
41
42 %% Big bang rename of a cluster, bugs should do a primary rename.
43 rename_cluster_big_bang_with() -> ?CLUSTER3.
44 rename_cluster_big_bang([Bugs, Bigwig, Peter]) ->
45 publish_all([{Bugs, <<"1">>}, {Bigwig, <<"2">>}, {Peter, <<"3">>}]),
46
47 Peter1 = rabbit_test_configs:stop_node(Peter),
48 Bigwig1 = rabbit_test_configs:stop_node(Bigwig),
49 Bugs1 = rabbit_test_configs:stop_node(Bugs),
50
51 Map = [bugs, jessica, bigwig, hazel, peter, flopsy],
52 Jessica0 = rename_node(Bugs1, jessica, Map),
53 Hazel0 = rename_node(Bigwig1, hazel, Map),
54 Flopsy0 = rename_node(Peter1, flopsy, Map),
55
56 Jessica = rabbit_test_configs:start_node(Jessica0),
57 Hazel = rabbit_test_configs:start_node(Hazel0),
58 Flopsy = rabbit_test_configs:start_node(Flopsy0),
59
60 consume_all([{Jessica, <<"1">>}, {Hazel, <<"2">>}, {Flopsy, <<"3">>}]),
61 stop_all([Jessica, Hazel, Flopsy]),
62 ok.
63
64 %% Here we test that bugs copes with things being renamed around it.
65 partial_one_by_one_with() -> ?CLUSTER3.
66 partial_one_by_one([Bugs, Bigwig, Peter]) ->
67 publish_all([{Bugs, <<"1">>}, {Bigwig, <<"2">>}, {Peter, <<"3">>}]),
68
69 Jessica = stop_rename_start(Bugs, jessica, [bugs, jessica]),
70 Hazel = stop_rename_start(Bigwig, hazel, [bigwig, hazel]),
71
72 consume_all([{Jessica, <<"1">>}, {Hazel, <<"2">>}, {Peter, <<"3">>}]),
73 stop_all([Jessica, Hazel, Peter]),
74 ok.
75
76 %% Here we test that bugs copes with things being renamed around it.
77 partial_big_bang_with() -> ?CLUSTER3.
78 partial_big_bang([Bugs, Bigwig, Peter]) ->
79 publish_all([{Bugs, <<"1">>}, {Bigwig, <<"2">>}, {Peter, <<"3">>}]),
80
81 Peter1 = rabbit_test_configs:stop_node(Peter),
82 Bigwig1 = rabbit_test_configs:stop_node(Bigwig),
83 Bugs1 = rabbit_test_configs:stop_node(Bugs),
84
85 Map = [bigwig, hazel, peter, flopsy],
86 Hazel0 = rename_node(Bigwig1, hazel, Map),
87 Flopsy0 = rename_node(Peter1, flopsy, Map),
88
89 Bugs2 = rabbit_test_configs:start_node(Bugs1),
90 Hazel = rabbit_test_configs:start_node(Hazel0),
91 Flopsy = rabbit_test_configs:start_node(Flopsy0),
92
93 consume_all([{Bugs2, <<"1">>}, {Hazel, <<"2">>}, {Flopsy, <<"3">>}]),
94 stop_all([Bugs2, Hazel, Flopsy]),
95 ok.
96
97 %% We should be able to specify the -n parameter on ctl with either
98 %% the before or after name for the local node (since in real cases
99 %% one might want to invoke the command before or after the hostname
100 %% has changed) - usually we test before so here we test after.
101 post_change_nodename_with() -> ?CLUSTER2.
102 post_change_nodename([Bugs, _Bigwig]) ->
103 publish(Bugs, <<"bugs">>),
104
105 Bugs1 = rabbit_test_configs:stop_node(Bugs),
106 Bugs2 = [{nodename, jessica} | proplists:delete(nodename, Bugs1)],
107 Jessica0 = rename_node(Bugs2, jessica, [bugs, jessica]),
108 Jessica = rabbit_test_configs:start_node(Jessica0),
109
110 consume(Jessica, <<"bugs">>),
111 stop_all([Jessica]),
112 ok.
113
114 %% If we invoke rename but the node name does not actually change, we
115 %% should roll back.
116 abortive_rename_with() -> ?CLUSTER2.
117 abortive_rename([Bugs, _Bigwig]) ->
118 publish(Bugs, <<"bugs">>),
119
120 Bugs1 = rabbit_test_configs:stop_node(Bugs),
121 _Jessica = rename_node(Bugs1, jessica, [bugs, jessica]),
122 Bugs2 = rabbit_test_configs:start_node(Bugs1),
123
124 consume(Bugs2, <<"bugs">>),
125 ok.
126
127 %% And test some ways the command can fail.
128 rename_fail_with() -> ?CLUSTER2.
129 rename_fail([Bugs, _Bigwig]) ->
130 Bugs1 = rabbit_test_configs:stop_node(Bugs),
131 %% Rename from a node that does not exist
132 rename_node_fail(Bugs1, [bugzilla, jessica]),
133 %% Rename to a node which does
134 rename_node_fail(Bugs1, [bugs, bigwig]),
135 %% Rename two nodes to the same thing
136 rename_node_fail(Bugs1, [bugs, jessica, bigwig, jessica]),
137 %% Rename while impersonating a node not in the cluster
138 rename_node_fail(set_node(rabbit, Bugs1), [bugs, jessica]),
139 ok.
140
141 rename_twice_fail_with() -> ?CLUSTER2.
142 rename_twice_fail([Bugs, _Bigwig]) ->
143 Bugs1 = rabbit_test_configs:stop_node(Bugs),
144 Indecisive = rename_node(Bugs1, indecisive, [bugs, indecisive]),
145 rename_node_fail(Indecisive, [indecisive, jessica]),
146 ok.
147
148 %% ----------------------------------------------------------------------------
149
150 %% Normal post-test stop does not work since names have changed...
151 stop_all(Cfgs) ->
152 [rabbit_test_configs:stop_node(Cfg) || Cfg <- Cfgs].
153
154 stop_rename_start(Cfg, Nodename, Map) ->
155 rabbit_test_configs:start_node(
156 rename_node(rabbit_test_configs:stop_node(Cfg), Nodename, Map)).
157
158 rename_node(Cfg, Nodename, Map) ->
159 rename_node(Cfg, Nodename, Map, fun rabbit_test_configs:rabbitmqctl/2).
160
161 rename_node_fail(Cfg, Map) ->
162 rename_node(Cfg, ignored, Map, fun rabbit_test_configs:rabbitmqctl_fail/2).
163
164 rename_node(Cfg, Nodename, Map, Ctl) ->
165 MapS = string:join(
166 [atom_to_list(rabbit_nodes:make(N)) || N <- Map], " "),
167 Ctl(Cfg, {"rename_cluster_node ~s", [MapS]}),
168 set_node(Nodename, Cfg).
169
170 publish(Cfg, Q) ->
171 Ch = pget(channel, Cfg),
172 amqp_channel:call(Ch, #'confirm.select'{}),
173 amqp_channel:call(Ch, #'queue.declare'{queue = Q, durable = true}),
174 amqp_channel:cast(Ch, #'basic.publish'{routing_key = Q},
175 #amqp_msg{props = #'P_basic'{delivery_mode = 2},
176 payload = Q}),
177 amqp_channel:wait_for_confirms(Ch).
178
179 consume(Cfg, Q) ->
180 {_Conn, Ch} = rabbit_test_util:connect(Cfg),
181 amqp_channel:call(Ch, #'queue.declare'{queue = Q, durable = true}),
182 {#'basic.get_ok'{}, #amqp_msg{payload = Q}} =
183 amqp_channel:call(Ch, #'basic.get'{queue = Q}).
184
185
186 publish_all(CfgsKeys) ->
187 [publish(Cfg, Key) || {Cfg, Key} <- CfgsKeys].
188
189 consume_all(CfgsKeys) ->
190 [consume(Cfg, Key) || {Cfg, Key} <- CfgsKeys].
191
192 set_node(Nodename, Cfg) ->
193 [{nodename, Nodename} | proplists:delete(nodename, Cfg)].
216216 passive = true})),
217217 ok.
218218
219 forget_offline_promotes_slave_with() -> [cluster_ab, ha_policy_all].
220 forget_offline_promotes_slave([Rabbit, Hare]) ->
221 RabbitCh = pget(channel, Rabbit),
222 Mirrored = <<"mirrored-queue">>,
223 declare(RabbitCh, Mirrored),
224 amqp_channel:call(RabbitCh, #'confirm.select'{}),
225 amqp_channel:cast(RabbitCh, #'basic.publish'{routing_key = Mirrored},
219 forget_promotes_offline_slave_with() ->
220 fun (Cfgs) ->
221 rabbit_test_configs:cluster(Cfgs, [a, b, c, d])
222 end.
223
224 forget_promotes_offline_slave([A, B, C, D]) ->
225 ACh = pget(channel, A),
226 ANode = pget(node, A),
227 Q = <<"mirrored-queue">>,
228 declare(ACh, Q),
229 set_ha_policy(Q, A, [B, C]),
230 set_ha_policy(Q, A, [C, D]), %% Test add and remove from recoverable_slaves
231
232 %% Publish and confirm
233 amqp_channel:call(ACh, #'confirm.select'{}),
234 amqp_channel:cast(ACh, #'basic.publish'{routing_key = Q},
226235 #amqp_msg{props = #'P_basic'{delivery_mode = 2}}),
227 amqp_channel:wait_for_confirms(RabbitCh),
228
229 %% We should have a down slave on hare and a down master on rabbit.
230 Hare2 = rabbit_test_configs:stop_node(Hare),
231 _Rabbit2 = rabbit_test_configs:stop_node(Rabbit),
232
233 rabbit_test_configs:rabbitmqctl(
234 Hare2, {"forget_cluster_node --offline ~s", [pget(node, Rabbit)]}),
235
236 Hare3 = rabbit_test_configs:start_node(Hare2),
237
238 {_HConn2, HareCh2} = rabbit_test_util:connect(Hare3),
239 #'queue.declare_ok'{message_count = 1} = declare(HareCh2, Mirrored),
240
236 amqp_channel:wait_for_confirms(ACh),
237
238 %% We kill nodes rather than stop them in order to make sure
239 %% that we aren't dependent on anything that happens as they shut
240 %% down (see bug 26467).
241 D2 = rabbit_test_configs:kill_node(D),
242 C2 = rabbit_test_configs:kill_node(C),
243 _B2 = rabbit_test_configs:kill_node(B),
244 _A2 = rabbit_test_configs:kill_node(A),
245
246 rabbit_test_configs:rabbitmqctl(C2, "force_boot"),
247
248 C3 = rabbit_test_configs:start_node(C2),
249
250 %% We should now have the following dramatis personae:
251 %% A - down, master
252 %% B - down, used to be slave, no longer is, never had the message
253 %% C - running, should be slave, but has wiped the message on restart
254 %% D - down, recoverable slave, contains message
255 %%
256 %% So forgetting A should offline-promote the queue to D, keeping
257 %% the message.
258
259 rabbit_test_configs:rabbitmqctl(C3, {"forget_cluster_node ~s", [ANode]}),
260
261 D3 = rabbit_test_configs:start_node(D2),
262 {_DConn2, DCh2} = rabbit_test_util:connect(D3),
263 #'queue.declare_ok'{message_count = 1} = declare(DCh2, Q),
241264 ok.
265
266 set_ha_policy(Q, MasterCfg, SlaveCfgs) ->
267 Nodes = [list_to_binary(atom_to_list(pget(node, N))) ||
268 N <- [MasterCfg | SlaveCfgs]],
269 rabbit_test_util:set_ha_policy(MasterCfg, Q, {<<"nodes">>, Nodes}),
270 await_slaves(Q, pget(node, MasterCfg), [pget(node, C) || C <- SlaveCfgs]).
271
272 await_slaves(Q, MNode, SNodes) ->
273 {ok, #amqqueue{pid = MPid,
274 slave_pids = SPids}} =
275 rpc:call(MNode, rabbit_amqqueue, lookup,
276 [rabbit_misc:r(<<"/">>, queue, Q)]),
277 ActMNode = node(MPid),
278 ActSNodes = lists:usort([node(P) || P <- SPids]),
279 case {MNode, lists:usort(SNodes)} of
280 {ActMNode, ActSNodes} -> ok;
281 _ -> timer:sleep(100),
282 await_slaves(Q, MNode, SNodes)
283 end.
242284
243285 force_boot_with() -> cluster_ab.
244286 force_boot([Rabbit, Hare]) ->
330372 [Rabbit, Hare]),
331373 assert_not_clustered(Bunny).
332374
333 update_cluster_nodes_test_with() -> start_abc.
334 update_cluster_nodes_test(Config) ->
375 update_cluster_nodes_with() -> start_abc.
376 update_cluster_nodes(Config) ->
335377 [Rabbit, Hare, Bunny] = cluster_members(Config),
336378
337379 %% Mnesia is running...
392434 assert_not_clustered(Hare),
393435 assert_not_clustered(Rabbit),
394436
395 %% If we use a legacy config file, it still works (and a warning is emitted)
437 %% If we use a legacy config file, the node fails to start.
396438 ok = stop_app(Hare),
397439 ok = reset(Hare),
398440 ok = rpc:call(Hare, application, set_env,
399441 [rabbit, cluster_nodes, [Rabbit]]),
400 ok = start_app(Hare),
401 assert_cluster_status({[Rabbit, Hare], [Rabbit], [Rabbit, Hare]},
402 [Rabbit, Hare]).
403
404 force_reset_test_with() -> start_abc.
405 force_reset_test(Config) ->
442 assert_failure(fun () -> start_app(Hare) end),
443 assert_not_clustered(Rabbit),
444
445 %% If we use an invalid node name, the node fails to start.
446 ok = stop_app(Hare),
447 ok = reset(Hare),
448 ok = rpc:call(Hare, application, set_env,
449 [rabbit, cluster_nodes, {["Mike's computer"], disc}]),
450 assert_failure(fun () -> start_app(Hare) end),
451 assert_not_clustered(Rabbit),
452
453 %% If we use an invalid node type, the node fails to start.
454 ok = stop_app(Hare),
455 ok = reset(Hare),
456 ok = rpc:call(Hare, application, set_env,
457 [rabbit, cluster_nodes, {[Rabbit], blue}]),
458 assert_failure(fun () -> start_app(Hare) end),
459 assert_not_clustered(Rabbit),
460
461 %% If we use an invalid cluster_nodes conf, the node fails to start.
462 ok = stop_app(Hare),
463 ok = reset(Hare),
464 ok = rpc:call(Hare, application, set_env,
465 [rabbit, cluster_nodes, true]),
466 assert_failure(fun () -> start_app(Hare) end),
467 assert_not_clustered(Rabbit),
468
469 ok = stop_app(Hare),
470 ok = reset(Hare),
471 ok = rpc:call(Hare, application, set_env,
472 [rabbit, cluster_nodes, "Yes, please"]),
473 assert_failure(fun () -> start_app(Hare) end),
474 assert_not_clustered(Rabbit).
475
476 force_reset_node_with() -> start_abc.
477 force_reset_node(Config) ->
406478 [Rabbit, Hare, _Bunny] = cluster_members(Config),
407479
408480 stop_join_start(Rabbit, Hare),
8686 %% Add D and E, D joins in
8787 [CfgD, CfgE] = CfgsDE = rabbit_test_configs:start_nodes(CfgA, [d, e], 5675),
8888 D = pget(node, CfgD),
89 E = pget(node, CfgE),
8990 rabbit_test_configs:add_to_cluster(CfgsABC, CfgsDE),
9091 assert_slaves(A, ?QNAME, {A, [B, C, D]}),
9192
92 %% Remove D, E does not join in
93 %% Remove D, E joins in
9394 rabbit_test_configs:stop_node(CfgD),
94 assert_slaves(A, ?QNAME, {A, [B, C]}),
95 assert_slaves(A, ?QNAME, {A, [B, C, E]}),
9596
9697 %% Clean up since we started this by hand
9798 rabbit_test_configs:stop_node(CfgE),
3636 [A] = partitions(C),
3737 ok.
3838
39 pause_on_down_with() -> ?CONFIG.
40 pause_on_down([CfgA, CfgB, CfgC] = Cfgs) ->
39 pause_minority_on_down_with() -> ?CONFIG.
40 pause_minority_on_down([CfgA, CfgB, CfgC] = Cfgs) ->
4141 A = pget(node, CfgA),
4242 set_mode(Cfgs, pause_minority),
4343 true = is_running(A),
5050 await_running(A, false),
5151 ok.
5252
53 pause_on_blocked_with() -> ?CONFIG.
54 pause_on_blocked(Cfgs) ->
53 pause_minority_on_blocked_with() -> ?CONFIG.
54 pause_minority_on_blocked(Cfgs) ->
5555 [A, B, C] = [pget(node, Cfg) || Cfg <- Cfgs],
5656 set_mode(Cfgs, pause_minority),
57 pause_on_blocked(A, B, C).
58
59 pause_if_all_down_on_down_with() -> ?CONFIG.
60 pause_if_all_down_on_down([_, CfgB, CfgC] = Cfgs) ->
61 [A, B, C] = [pget(node, Cfg) || Cfg <- Cfgs],
62 set_mode(Cfgs, {pause_if_all_down, [C], ignore}),
63 [(true = is_running(N)) || N <- [A, B, C]],
64
65 rabbit_test_util:kill(CfgB, sigkill),
66 timer:sleep(?DELAY),
67 [(true = is_running(N)) || N <- [A, C]],
68
69 rabbit_test_util:kill(CfgC, sigkill),
70 timer:sleep(?DELAY),
71 await_running(A, false),
72 ok.
73
74 pause_if_all_down_on_blocked_with() -> ?CONFIG.
75 pause_if_all_down_on_blocked(Cfgs) ->
76 [A, B, C] = [pget(node, Cfg) || Cfg <- Cfgs],
77 set_mode(Cfgs, {pause_if_all_down, [C], ignore}),
78 pause_on_blocked(A, B, C).
79
80 pause_on_blocked(A, B, C) ->
5781 [(true = is_running(N)) || N <- [A, B, C]],
5882 block([{A, B}, {A, C}]),
5983 await_running(A, false),
76100 %% test to pass since there are a lot of things in the broker that can
77101 %% suddenly take several seconds to time out when TCP connections
78102 %% won't establish.
79 pause_false_promises_mirrored_with() ->
103 pause_minority_false_promises_mirrored_with() ->
80104 [start_ab, fun enable_dist_proxy/1,
81105 build_cluster, short_ticktime(10), start_connections, ha_policy_all].
82106
83 pause_false_promises_mirrored(Cfgs) ->
84 pause_false_promises(Cfgs).
85
86 pause_false_promises_unmirrored_with() ->
107 pause_minority_false_promises_mirrored(Cfgs) ->
108 pause_false_promises(Cfgs, pause_minority).
109
110 pause_minority_false_promises_unmirrored_with() ->
87111 [start_ab, fun enable_dist_proxy/1,
88112 build_cluster, short_ticktime(10), start_connections].
89113
90 pause_false_promises_unmirrored(Cfgs) ->
91 pause_false_promises(Cfgs).
92
93 pause_false_promises([CfgA, CfgB | _] = Cfgs) ->
114 pause_minority_false_promises_unmirrored(Cfgs) ->
115 pause_false_promises(Cfgs, pause_minority).
116
117 pause_if_all_down_false_promises_mirrored_with() ->
118 [start_ab, fun enable_dist_proxy/1,
119 build_cluster, short_ticktime(10), start_connections, ha_policy_all].
120
121 pause_if_all_down_false_promises_mirrored([_, CfgB | _] = Cfgs) ->
122 B = pget(node, CfgB),
123 pause_false_promises(Cfgs, {pause_if_all_down, [B], ignore}).
124
125 pause_if_all_down_false_promises_unmirrored_with() ->
126 [start_ab, fun enable_dist_proxy/1,
127 build_cluster, short_ticktime(10), start_connections].
128
129 pause_if_all_down_false_promises_unmirrored([_, CfgB | _] = Cfgs) ->
130 B = pget(node, CfgB),
131 pause_false_promises(Cfgs, {pause_if_all_down, [B], ignore}).
132
133 pause_false_promises([CfgA, CfgB | _] = Cfgs, ClusterPartitionHandling) ->
94134 [A, B] = [pget(node, Cfg) || Cfg <- Cfgs],
95 set_mode([CfgA], pause_minority),
135 set_mode([CfgA], ClusterPartitionHandling),
96136 ChA = pget(channel, CfgA),
97137 ChB = pget(channel, CfgB),
98138 amqp_channel:call(ChB, #'queue.declare'{queue = <<"test">>,
172212 %% NB: we test full and partial partitions here.
173213 autoheal_with() -> ?CONFIG.
174214 autoheal(Cfgs) ->
175 [A, B, C] = [pget(node, Cfg) || Cfg <- Cfgs],
176215 set_mode(Cfgs, autoheal),
216 do_autoheal(Cfgs).
217
218 autoheal_after_pause_if_all_down_with() -> ?CONFIG.
219 autoheal_after_pause_if_all_down([_, CfgB, CfgC | _] = Cfgs) ->
220 B = pget(node, CfgB),
221 C = pget(node, CfgC),
222 set_mode(Cfgs, {pause_if_all_down, [B, C], autoheal}),
223 do_autoheal(Cfgs).
224
225 do_autoheal(Cfgs) ->
226 [A, B, C] = [pget(node, Cfg) || Cfg <- Cfgs],
177227 Test = fun (Pairs) ->
178228 block_unblock(Pairs),
179229 %% Sleep to make sure all the partitions are noticed
180230 %% ?DELAY for the net_tick timeout
181231 timer:sleep(?DELAY),
182232 [await_listening(N, true) || N <- [A, B, C]],
183 [] = partitions(A),
184 [] = partitions(B),
185 [] = partitions(C)
233 [await_partitions(N, []) || N <- [A, B, C]]
186234 end,
187235 Test([{B, C}]),
188236 Test([{A, C}, {B, C}]),
224272 Partitions -> exit({partitions, Partitions})
225273 end.
226274
227 partial_pause_with() -> ?CONFIG.
228 partial_pause(Cfgs) ->
275 partial_pause_minority_with() -> ?CONFIG.
276 partial_pause_minority(Cfgs) ->
229277 [A, B, C] = [pget(node, Cfg) || Cfg <- Cfgs],
230278 set_mode(Cfgs, pause_minority),
231279 block([{A, B}]),
233281 await_running(C, true),
234282 unblock([{A, B}]),
235283 [await_listening(N, true) || N <- [A, B, C]],
236 [] = partitions(A),
237 [] = partitions(B),
238 [] = partitions(C),
284 [await_partitions(N, []) || N <- [A, B, C]],
285 ok.
286
287 partial_pause_if_all_down_with() -> ?CONFIG.
288 partial_pause_if_all_down(Cfgs) ->
289 [A, B, C] = [pget(node, Cfg) || Cfg <- Cfgs],
290 set_mode(Cfgs, {pause_if_all_down, [B], ignore}),
291 block([{A, B}]),
292 await_running(A, false),
293 [await_running(N, true) || N <- [B, C]],
294 unblock([{A, B}]),
295 [await_listening(N, true) || N <- [A, B, C]],
296 [await_partitions(N, []) || N <- [A, B, C]],
239297 ok.
240298
241299 set_mode(Cfgs, Mode) ->
270328 rpc:call(X, inet_tcp_proxy, allow, [Y]),
271329 rpc:call(Y, inet_tcp_proxy, allow, [X]).
272330
273 await_running (Node, Bool) -> await(Node, Bool, fun is_running/1).
274 await_listening(Node, Bool) -> await(Node, Bool, fun is_listening/1).
275
276 await(Node, Bool, Fun) ->
331 await_running (Node, Bool) -> await(Node, Bool, fun is_running/1).
332 await_listening (Node, Bool) -> await(Node, Bool, fun is_listening/1).
333 await_partitions(Node, Parts) -> await(Node, Parts, fun partitions/1).
334
335 await(Node, Res, Fun) ->
277336 case Fun(Node) of
278 Bool -> ok;
279 _ -> timer:sleep(100),
280 await(Node, Bool, Fun)
337 Res -> ok;
338 _ -> timer:sleep(100),
339 await(Node, Res, Fun)
281340 end.
282341
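The `await/3` helper above is a simple polling loop: call `Fun(Node)`, and if the result does not yet match the expected value, sleep 100ms and retry. A Python sketch of the same pattern (note one deliberate deviation, flagged here: this sketch adds an `attempts` cap, whereas the Erlang helper retries indefinitely):

```python
import time

def await_condition(probe, expected, interval=0.1, attempts=50):
    """Poll `probe` until it returns `expected`, in the spirit of await/3.

    Unlike the Erlang helper, this sketch gives up after `attempts`
    tries instead of looping forever.
    """
    for _ in range(attempts):
        if probe() == expected:
            return True
        time.sleep(interval)
    return False
```

The test suite uses this shape for `await_running`, `await_listening` and `await_partitions`, varying only the probe function and the expected result.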
283342 is_running(Node) -> rpc:call(Node, rabbit, is_running, []).
0 %% The contents of this file are subject to the Mozilla Public License
1 %% Version 1.1 (the "License"); you may not use this file except in
2 %% compliance with the License. You may obtain a copy of the License
3 %% at http://www.mozilla.org/MPL/
4 %%
5 %% Software distributed under the License is distributed on an "AS IS"
6 %% basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See
7 %% the License for the specific language governing rights and
8 %% limitations under the License.
9 %%
10 %% The Original Code is RabbitMQ.
11 %%
12 %% The Initial Developer of the Original Code is GoPivotal, Inc.
13 %% Copyright (c) 2007-2014 GoPivotal, Inc. All rights reserved.
14 %%
15
16 -module(rabbit_priority_queue_test).
17
18 -compile(export_all).
19 -include_lib("eunit/include/eunit.hrl").
20 -include_lib("amqp_client/include/amqp_client.hrl").
21
22 -import(rabbit_misc, [pget/2]).
23
24 %% The BQ API is used in all sorts of places in all sorts of
25 %% ways. Therefore we have to jump through a few different hoops
26 %% in order to integration-test it.
27 %%
28 %% * start/1, stop/0, init/3, terminate/2, delete_and_terminate/2
29 %% - starting and stopping rabbit. durable queues / persistent msgs needed
30 %% to test recovery
31 %%
32 %% * publish/5, drain_confirmed/1, fetch/2, ack/2, is_duplicate/2, msg_rates/1,
33 %% needs_timeout/1, timeout/1, invoke/3, resume/1 [0]
34 %% - regular publishing and consuming, with confirms and acks and durability
35 %%
36 %% * publish_delivered/4 - publish with acks straight through
37 %% * discard/3 - publish without acks straight through
38 %% * dropwhile/2 - expire messages without DLX
39 %% * fetchwhile/4 - expire messages with DLX
40 %% * ackfold/4 - reject messages with DLX
41 %% * requeue/2 - reject messages without DLX
42 %% * drop/2 - maxlen messages without DLX
43 %% * purge/1 - issue AMQP queue.purge
44 %% * purge_acks/1 - mirror queue explicit sync with unacked msgs
45 %% * fold/3 - mirror queue explicit sync
46 %% * depth/1 - mirror queue implicit sync detection
47 %% * len/1, is_empty/1 - info items
48 %% * handle_pre_hibernate/1 - hibernation
49 %%
50 %% * set_ram_duration_target/2, ram_duration/1, status/1
51 %% - maybe need unit testing?
52 %%
53 %% [0] publish enough to get credit flow from msg store
54
55 recovery_test() ->
56 {Conn, Ch} = open(),
57 Q = <<"test">>,
58 declare(Ch, Q, 3),
59 publish(Ch, Q, [1, 2, 3, 1, 2, 3, 1, 2, 3]),
60 amqp_connection:close(Conn),
61
62 %% TODO these break coverage
63 rabbit:stop(),
64 rabbit:start(),
65
66 {Conn2, Ch2} = open(),
67 get_all(Ch2, Q, do_ack, [3, 3, 3, 2, 2, 2, 1, 1, 1]),
68 delete(Ch2, Q),
69 amqp_connection:close(Conn2),
70 passed.
71
72 simple_order_test() ->
73 {Conn, Ch} = open(),
74 Q = <<"test">>,
75 declare(Ch, Q, 3),
76 publish(Ch, Q, [1, 2, 3, 1, 2, 3, 1, 2, 3]),
77 get_all(Ch, Q, do_ack, [3, 3, 3, 2, 2, 2, 1, 1, 1]),
78 publish(Ch, Q, [2, 3, 1, 2, 3, 1, 2, 3, 1]),
79 get_all(Ch, Q, no_ack, [3, 3, 3, 2, 2, 2, 1, 1, 1]),
80 publish(Ch, Q, [3, 1, 2, 3, 1, 2, 3, 1, 2]),
81 get_all(Ch, Q, do_ack, [3, 3, 3, 2, 2, 2, 1, 1, 1]),
82 delete(Ch, Q),
83 amqp_connection:close(Conn),
84 passed.
85
86 matching_test() ->
87 {Conn, Ch} = open(),
88 Q = <<"test">>,
89 declare(Ch, Q, 5),
90 %% We round priority down, and 0 is the default
91 publish(Ch, Q, [undefined, 0, 5, 10, undefined]),
92 get_all(Ch, Q, do_ack, [5, 10, undefined, 0, undefined]),
93 delete(Ch, Q),
94 amqp_connection:close(Conn),
95 passed.
96
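`matching_test` exercises the two rules stated in its comment: a missing priority defaults to 0, and values above the queue's `x-max-priority` are rounded down to it. A Python sketch of that effective-priority rule (illustrative names, not RabbitMQ's implementation):

```python
def effective_priority(priority, max_priority):
    """Sketch of the clamping implied by matching_test.

    A missing priority (None) falls back to the default of 0;
    anything above x-max-priority is rounded down to the maximum.
    """
    if priority is None:
        return 0
    return min(max(priority, 0), max_priority)

# With x-max-priority = 5, both 5 and 10 clamp to 5, so those two are
# delivered in publish order, ahead of the 0/undefined messages.
# Python's stable sort reproduces the expected delivery order.
order = sorted([None, 0, 5, 10, None],
               key=lambda p: effective_priority(p, 5), reverse=True)
```

This reproduces the `get_all(Ch, Q, do_ack, [5, 10, undefined, 0, undefined])` expectation in the test.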
97 resume_test() ->
98 {Conn, Ch} = open(),
99 Q = <<"test">>,
100 declare(Ch, Q, 5),
101 amqp_channel:call(Ch, #'confirm.select'{}),
102 publish_many(Ch, Q, 10000),
103 amqp_channel:wait_for_confirms(Ch),
104 amqp_channel:call(Ch, #'queue.purge'{queue = Q}), %% Assert it exists
105 delete(Ch, Q),
106 amqp_connection:close(Conn),
107 passed.
108
109 straight_through_test() ->
110 {Conn, Ch} = open(),
111 Q = <<"test">>,
112 declare(Ch, Q, 3),
113 [begin
114 consume(Ch, Q, Ack),
115 [begin
116 publish1(Ch, Q, P),
117 assert_delivered(Ch, Ack, P)
118 end || P <- [1, 2, 3]],
119 cancel(Ch)
120 end || Ack <- [do_ack, no_ack]],
121 get_empty(Ch, Q),
122 delete(Ch, Q),
123 amqp_connection:close(Conn),
124 passed.
125
126 dropwhile_fetchwhile_test() ->
127 {Conn, Ch} = open(),
128 Q = <<"test">>,
129 [begin
130 declare(Ch, Q, Args ++ arguments(3)),
131 publish(Ch, Q, [1, 2, 3, 1, 2, 3, 1, 2, 3]),
132 timer:sleep(10),
133 get_empty(Ch, Q),
134 delete(Ch, Q)
135 end ||
136 Args <- [[{<<"x-message-ttl">>, long, 1}],
137 [{<<"x-message-ttl">>, long, 1},
138 {<<"x-dead-letter-exchange">>, longstr, <<"amq.fanout">>}]
139 ]],
140 amqp_connection:close(Conn),
141 passed.
142
143 ackfold_test() ->
144 {Conn, Ch} = open(),
145 Q = <<"test">>,
146 Q2 = <<"test2">>,
147 declare(Ch, Q,
148 [{<<"x-dead-letter-exchange">>, longstr, <<>>},
149 {<<"x-dead-letter-routing-key">>, longstr, Q2}
150 | arguments(3)]),
151 declare(Ch, Q2, none),
152 publish(Ch, Q, [1, 2, 3]),
153 [_, _, DTag] = get_all(Ch, Q, manual_ack, [3, 2, 1]),
154 amqp_channel:cast(Ch, #'basic.nack'{delivery_tag = DTag,
155 multiple = true,
156 requeue = false}),
157 timer:sleep(100),
158 get_all(Ch, Q2, do_ack, [3, 2, 1]),
159 delete(Ch, Q),
160 delete(Ch, Q2),
161 amqp_connection:close(Conn),
162 passed.
163
164 requeue_test() ->
165 {Conn, Ch} = open(),
166 Q = <<"test">>,
167 declare(Ch, Q, 3),
168 publish(Ch, Q, [1, 2, 3]),
169 [_, _, DTag] = get_all(Ch, Q, manual_ack, [3, 2, 1]),
170 amqp_channel:cast(Ch, #'basic.nack'{delivery_tag = DTag,
171 multiple = true,
172 requeue = true}),
173 get_all(Ch, Q, do_ack, [3, 2, 1]),
174 delete(Ch, Q),
175 amqp_connection:close(Conn),
176 passed.
177
178 drop_test() ->
179 {Conn, Ch} = open(),
180 Q = <<"test">>,
181 declare(Ch, Q, [{<<"x-max-length">>, long, 4} | arguments(3)]),
182 publish(Ch, Q, [1, 2, 3, 1, 2, 3, 1, 2, 3]),
183 %% We drop from the head, so this is according to the "spec" even
184 %% if not likely to be what the user wants.
185 get_all(Ch, Q, do_ack, [2, 1, 1, 1]),
186 delete(Ch, Q),
187 amqp_connection:close(Conn),
188 passed.
189
190 purge_test() ->
191 {Conn, Ch} = open(),
192 Q = <<"test">>,
193 declare(Ch, Q, 3),
194 publish(Ch, Q, [1, 2, 3]),
195 amqp_channel:call(Ch, #'queue.purge'{queue = Q}),
196 get_empty(Ch, Q),
197 delete(Ch, Q),
198 amqp_connection:close(Conn),
199 passed.
200
201 ram_duration_test() ->
202 QName = rabbit_misc:r(<<"/">>, queue, <<"pseudo">>),
203 Q0 = rabbit_amqqueue:pseudo_queue(QName, self()),
204 Q = Q0#amqqueue{arguments = [{<<"x-max-priority">>, long, 5}]},
205 PQ = rabbit_priority_queue,
206 BQS1 = PQ:init(Q, new, fun(_, _) -> ok end),
207 {Duration1, BQS2} = PQ:ram_duration(BQS1),
208 BQS3 = PQ:set_ram_duration_target(infinity, BQS2),
209 BQS4 = PQ:set_ram_duration_target(1, BQS3),
210 {Duration2, BQS5} = PQ:ram_duration(BQS4),
211 PQ:delete_and_terminate(a_whim, BQS5),
212 passed.
213
214 mirror_queue_sync_with() -> cluster_ab.
215 mirror_queue_sync([CfgA, _CfgB]) ->
216 Ch = pget(channel, CfgA),
217 Q = <<"test">>,
218 declare(Ch, Q, 3),
219 publish(Ch, Q, [1, 2, 3]),
220 ok = rabbit_test_util:set_ha_policy(CfgA, <<".*">>, <<"all">>),
221 publish(Ch, Q, [1, 2, 3, 1, 2, 3]),
222 %% master now has 9, slave 6.
223 get_partial(Ch, Q, manual_ack, [3, 3, 3, 2, 2, 2]),
224 %% So some but not all are unacked at the slave
225 rabbit_test_util:control_action(sync_queue, CfgA, [binary_to_list(Q)],
226 [{"-p", "/"}]),
227 wait_for_sync(CfgA, rabbit_misc:r(<<"/">>, queue, Q)),
228 passed.
229
230 %%----------------------------------------------------------------------------
231
232 open() ->
233 {ok, Conn} = amqp_connection:start(#amqp_params_network{}),
234 {ok, Ch} = amqp_connection:open_channel(Conn),
235 {Conn, Ch}.
236
237 declare(Ch, Q, Args) when is_list(Args) ->
238 amqp_channel:call(Ch, #'queue.declare'{queue = Q,
239 durable = true,
240 arguments = Args});
241 declare(Ch, Q, Max) ->
242 declare(Ch, Q, arguments(Max)).
243
244 delete(Ch, Q) ->
245 amqp_channel:call(Ch, #'queue.delete'{queue = Q}).
246
247 publish(Ch, Q, Ps) ->
248 amqp_channel:call(Ch, #'confirm.select'{}),
249 [publish1(Ch, Q, P) || P <- Ps],
250 amqp_channel:wait_for_confirms(Ch).
251
252 publish_many(_Ch, _Q, 0) -> ok;
253 publish_many( Ch, Q, N) -> publish1(Ch, Q, random:uniform(5)),
254 publish_many(Ch, Q, N - 1).
255
256 publish1(Ch, Q, P) ->
257 amqp_channel:cast(Ch, #'basic.publish'{routing_key = Q},
258 #amqp_msg{props = props(P),
259 payload = priority2bin(P)}).
260
261 props(undefined) -> #'P_basic'{delivery_mode = 2};
262 props(P) -> #'P_basic'{priority = P,
263 delivery_mode = 2}.
264
265 consume(Ch, Q, Ack) ->
266 amqp_channel:subscribe(Ch, #'basic.consume'{queue = Q,
267 no_ack = Ack =:= no_ack,
268 consumer_tag = <<"ctag">>},
269 self()),
270 receive
271 #'basic.consume_ok'{consumer_tag = <<"ctag">>} ->
272 ok
273 end.
274
275 cancel(Ch) ->
276 amqp_channel:call(Ch, #'basic.cancel'{consumer_tag = <<"ctag">>}).
277
278 assert_delivered(Ch, Ack, P) ->
279 PBin = priority2bin(P),
280 receive
281 {#'basic.deliver'{delivery_tag = DTag}, #amqp_msg{payload = PBin2}} ->
282 ?assertEqual(PBin, PBin2),
283 maybe_ack(Ch, Ack, DTag)
284 end.
285
286 get_all(Ch, Q, Ack, Ps) ->
287 DTags = get_partial(Ch, Q, Ack, Ps),
288 get_empty(Ch, Q),
289 DTags.
290
291 get_partial(Ch, Q, Ack, Ps) ->
292 [get_ok(Ch, Q, Ack, P) || P <- Ps].
293
294 get_empty(Ch, Q) ->
295 #'basic.get_empty'{} = amqp_channel:call(Ch, #'basic.get'{queue = Q}).
296
297 get_ok(Ch, Q, Ack, P) ->
298 PBin = priority2bin(P),
299 {#'basic.get_ok'{delivery_tag = DTag}, #amqp_msg{payload = PBin2}} =
300 amqp_channel:call(Ch, #'basic.get'{queue = Q,
301 no_ack = Ack =:= no_ack}),
302 ?assertEqual(PBin, PBin2),
303 maybe_ack(Ch, Ack, DTag).
304
305 maybe_ack(Ch, do_ack, DTag) ->
306 amqp_channel:cast(Ch, #'basic.ack'{delivery_tag = DTag}),
307 DTag;
308 maybe_ack(_Ch, _, DTag) ->
309 DTag.
310
311 arguments(none) -> [];
312 arguments(Max) -> [{<<"x-max-priority">>, byte, Max}].
313
314 priority2bin(undefined) -> <<"undefined">>;
315 priority2bin(Int) -> list_to_binary(integer_to_list(Int)).
316
317 %%----------------------------------------------------------------------------
318
319 wait_for_sync(Cfg, Q) ->
320 case synced(Cfg, Q) of
321 true -> ok;
322 false -> timer:sleep(100),
323 wait_for_sync(Cfg, Q)
324 end.
325
326 synced(Cfg, Q) ->
327 Info = rpc:call(pget(node, Cfg),
328 rabbit_amqqueue, info_all,
329 [<<"/">>, [name, synchronised_slave_pids]]),
330 [SSPids] = [Pids || [{name, Q1}, {synchronised_slave_pids, Pids}] <- Info,
331 Q =:= Q1],
332 length(SSPids) =:= 1.
333
334 %%----------------------------------------------------------------------------
3333 amqp_channel:call(Ch, #'queue.delete'{queue = Queue})
3434 end || _I <- lists:seq(1, 20)],
3535 ok.
36
37 %% Check that by the time we get a declare-ok back, the slaves are up
38 %% and in Mnesia.
39 declare_synchrony_with() -> [cluster_ab, ha_policy_all].
40 declare_synchrony([Rabbit, Hare]) ->
41 RabbitCh = pget(channel, Rabbit),
42 HareCh = pget(channel, Hare),
43 Q = <<"mirrored-queue">>,
44 declare(RabbitCh, Q),
45 amqp_channel:call(RabbitCh, #'confirm.select'{}),
46 amqp_channel:cast(RabbitCh, #'basic.publish'{routing_key = Q},
47 #amqp_msg{props = #'P_basic'{delivery_mode = 2}}),
48 amqp_channel:wait_for_confirms(RabbitCh),
49 _Rabbit2 = rabbit_test_configs:kill_node(Rabbit),
50
51 #'queue.declare_ok'{message_count = 1} = declare(HareCh, Q),
52 ok.
53
54 declare(Ch, Name) ->
55 amqp_channel:call(Ch, #'queue.declare'{durable = true, queue = Name}).
3656
3757 consume_survives_stop_with() -> ?CONFIG.
3858 consume_survives_sigkill_with() -> ?CONFIG.
0 ## Overview
1
2 RabbitMQ projects use pull requests to discuss, collaborate on and accept code contributions.
3 Pull requests are the primary place for discussing code changes.
4
5 ## How to Contribute
6
7 The process is fairly standard:
8
9 * Fork the repository or repositories you plan on contributing to
10 * Clone [RabbitMQ umbrella repository](https://github.com/rabbitmq/rabbitmq-public-umbrella)
11 * `cd umbrella`, `make co`
12 * Create a branch with a descriptive name in the relevant repositories
13 * Make your changes, run tests, commit with a [descriptive message](http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html), push to your fork
14 * Submit pull requests with an explanation of what has been changed and **why**
15 * Submit a filled out and signed [Contributor Agreement](https://github.com/rabbitmq/ca#how-to-submit) if needed (see below)
16 * Be patient. We will get to your pull request eventually
17
18 If what you are going to work on is a substantial change, please first ask the core team
19 for their opinion on the [RabbitMQ mailing list](https://groups.google.com/forum/#!forum/rabbitmq-users).
20
21
22 ## (Brief) Code of Conduct
23
24 In one line: don't be a dick.
25
26 Be respectful to the maintainers and other contributors. Open source
27 contributors put long hours into developing projects and doing user
28 support. Those projects and user support are available for free. We
29 believe this deserves some respect.
30
31 Be respectful to people of all races, genders, religious beliefs and
32 political views. Regardless of how brilliant a pull request is
33 technically, we will not tolerate disrespectful or aggressive
34 behaviour.
35
36 Contributors who violate this straightforward Code of Conduct will see
37 their pull requests closed and locked.
38
39
40 ## Contributor Agreement
41
42 If you want to contribute a non-trivial change, please submit a signed copy of our
43 [Contributor Agreement](https://github.com/rabbitmq/ca#how-to-submit) around the time
44 you submit your pull request. This will make it much easier (in some cases, possible)
45 for the RabbitMQ team at Pivotal to merge your contribution.
46
47
48 ## Where to Ask Questions
49
50 If something isn't clear, feel free to ask on our [mailing list](https://groups.google.com/forum/#!forum/rabbitmq-users).
3535
3636 $ curl -i -u guest:guest -H "content-type:application/json" -XPUT \
3737 http://localhost:55672/api/traces/%2f/my-trace \
38 -d'{"format":"text","pattern":"#"}'
38 -d'{"format":"text","pattern":"#", "max_payload_bytes":1000}'
3939
40 max_payload_bytes is optional (omit it to prevent payload truncation);
41 format and pattern are mandatory.
1515 <th>Name</th>
1616 <th>Pattern</th>
1717 <th>Format</th>
18 <th>Payload limit</th>
1819 <th>Rate</th>
1920 <th>Queued</th>
2021 <th></th>
3233 <td><%= fmt_string(trace.name) %></td>
3334 <td><%= fmt_string(trace.pattern) %></td>
3435 <td><%= fmt_string(trace.format) %></td>
36 <td class="c"><%= fmt_string(trace.max_payload_bytes, 'Unlimited') %></td>
3537 <% if (trace.queue) { %>
3638 <td class="r">
37 <%= fmt_rate(trace.queue.message_stats, 'ack', false) %>
39 <%= fmt_detail_rate(trace.queue.message_stats, 'deliver_no_ack') %>
3840 </td>
3941 <td class="r">
4042 <%= trace.queue.messages %>
131133 </td>
132134 </tr>
133135 <tr>
136 <th><label>Max payload bytes: <span class="help" id="tracing-max-payload"></span></label></th>
137 <td>
138 <input type="text" name="max_payload_bytes" value=""/>
139 </td>
140 </tr>
141 <tr>
134142 <th><label>Pattern:</label></th>
135143 <td>
136144 <input type="text" name="pattern" value="#"/>
1010 'trace', '#/traces');
1111 });
1212 sammy.put('#/traces', function() {
13 if (this.params['max_payload_bytes'] === '') {
14 delete this.params['max_payload_bytes'];
15 }
16 else {
17 this.params['max_payload_bytes'] =
18 parseInt(this.params['max_payload_bytes']);
19 }
1320 if (sync_put(this, '/traces/:vhost/:name'))
1421 update();
1522 return false;
2835
2936 NAVIGATION['Admin'][0]['Tracing'] = ['#/traces', 'administrator'];
3037
38 HELP['tracing-max-payload'] =
39 'Maximum size of payload to log, in bytes. Payloads larger than this limit will be truncated. Leave blank to prevent truncation. Set to 0 to prevent logging of payload altogether.';
40
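The help text above describes three cases: a blank value means no truncation, 0 suppresses the payload entirely, and any other value keeps only the first N bytes (matching the `truncate/2` clause in the tracing consumer). A minimal Python sketch, with `None` standing in for the blank/unlimited case:

```python
def truncate_payload(payload: bytes, max_payload_bytes=None) -> bytes:
    # None models the "unlimited" default (field left blank);
    # 0 drops the payload entirely; otherwise keep the first N bytes.
    if max_payload_bytes is None or len(payload) <= max_payload_bytes:
        return payload
    return payload[:max_payload_bytes]
```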
3141 function link_trace(name) {
3242 return _link_to(name, 'api/trace-files/' + esc(name));
3343 }
2121
2222 -import(rabbit_misc, [pget/2, pget/3, table_lookup/2]).
2323
24 -record(state, {conn, ch, vhost, queue, file, filename, format}).
24 -record(state, {conn, ch, vhost, queue, file, filename, format, buf, buf_cnt,
25 max_payload}).
2526 -record(log_record, {timestamp, type, exchange, queue, node, connection,
26 vhost, username, channel, routing_keys,
27 vhost, username, channel, routing_keys, routed_queues,
2728 properties, payload}).
2829
2930 -define(X, <<"amq.rabbitmq.trace">>).
31 -define(MAX_BUF, 100).
3032
3133 -export([start_link/1, info_all/1]).
3234 -export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2,
4446 process_flag(trap_exit, true),
4547 Name = pget(name, Args),
4648 VHost = pget(vhost, Args),
49 MaxPayload = pget(max_payload_bytes, Args, unlimited),
4750 {ok, Conn} = amqp_connection:start(
4851 #amqp_params_direct{virtual_host = VHost}),
4952 link(Conn),
5659 amqp_channel:call(
5760 Ch, #'queue.bind'{exchange = ?X, queue = Q,
5861 routing_key = pget(pattern, Args)}),
59 #'basic.qos_ok'{} =
60 amqp_channel:call(Ch, #'basic.qos'{prefetch_count = 10}),
62 amqp_channel:enable_delivery_flow_control(Ch),
6163 #'basic.consume_ok'{} =
6264 amqp_channel:subscribe(Ch, #'basic.consume'{queue = Q,
63 no_ack = false}, self()),
65 no_ack = true}, self()),
6466 {ok, Dir} = application:get_env(directory),
6567 Filename = Dir ++ "/" ++ binary_to_list(Name) ++ ".log",
6668 case filelib:ensure_dir(Filename) of
6769 ok ->
68 case file:open(Filename, [append]) of
70 case prim_file:open(Filename, [append]) of
6971 {ok, F} ->
7072 rabbit_tracing_traces:announce(VHost, Name, self()),
7173 Format = list_to_atom(binary_to_list(pget(format, Args))),
7375 "format ~p~n", [Filename, Format]),
7476 {ok, #state{conn = Conn, ch = Ch, vhost = VHost, queue = Q,
7577 file = F, filename = Filename,
76 format = Format}};
78 format = Format, buf = [], buf_cnt = 0,
79 max_payload = MaxPayload}};
7780 {error, E} ->
7881 {stop, {could_not_open, Filename, E}}
7982 end;
9396 handle_cast(_C, State) ->
9497 {noreply, State}.
9598
96 handle_info(Delivery = {#'basic.deliver'{delivery_tag = Seq}, #amqp_msg{}},
97 State = #state{ch = Ch, file = F, format = Format}) ->
98 Print = fun(Fmt, Args) -> io:format(F, Fmt, Args) end,
99 log(Format, Print, delivery_to_log_record(Delivery)),
100 amqp_channel:cast(Ch, #'basic.ack'{delivery_tag = Seq}),
101 {noreply, State};
99 handle_info({BasicDeliver, Msg, DeliveryCtx},
100 State = #state{format = Format}) ->
101 amqp_channel:notify_received(DeliveryCtx),
102 {noreply, log(Format, delivery_to_log_record({BasicDeliver, Msg}, State),
103 State),
104 0};
105
106 handle_info(timeout, State) ->
107 {noreply, flush(State)};
102108
103109 handle_info(_I, State) ->
104110 {noreply, State}.
105111
106 terminate(shutdown, #state{conn = Conn, ch = Ch,
107 file = F, filename = Filename}) ->
112 terminate(shutdown, State = #state{conn = Conn, ch = Ch,
113 file = F, filename = Filename}) ->
114 flush(State),
108115 catch amqp_channel:close(Ch),
109116 catch amqp_connection:close(Conn),
110 catch file:close(F),
117 catch prim_file:close(F),
111118 rabbit_log:info("Tracer closed log file ~p~n", [Filename]),
112119 ok;
113120
120127
121128 delivery_to_log_record({#'basic.deliver'{routing_key = Key},
122129 #amqp_msg{props = #'P_basic'{headers = H},
123 payload = Payload}}) ->
124 {Type, Q} = case Key of
125 <<"publish.", _Rest/binary>> -> {published, none};
126 <<"deliver.", Rest/binary>> -> {received, Rest}
127 end,
130 payload = Payload}}, State) ->
131 {Type, Q, RQs} = case Key of
132 <<"publish.", _Rest/binary>> ->
133 {array, Qs} = table_lookup(H, <<"routed_queues">>),
134 {published, none, [Q || {_, Q} <- Qs]};
135 <<"deliver.", Rest/binary>> ->
136 {received, Rest, none}
137 end,
128138 {longstr, Node} = table_lookup(H, <<"node">>),
129139 {longstr, X} = table_lookup(H, <<"exchange_name">>),
130140 {array, Keys} = table_lookup(H, <<"routing_keys">>),
143153 username = User,
144154 channel = Chan,
145155 routing_keys = [K || {_, K} <- Keys],
156 routed_queues= RQs,
146157 properties = Props,
147 payload = Payload}.
148
149 log(text, P, Record) ->
150 P("~n~s~n", [string:copies("=", 80)]),
151 P("~s: ", [Record#log_record.timestamp]),
152 case Record#log_record.type of
153 published -> P("Message published~n~n", []);
154 received -> P("Message received~n~n", [])
155 end,
156 P("Node: ~s~n", [Record#log_record.node]),
157 P("Connection: ~s~n", [Record#log_record.connection]),
158 P("Virtual host: ~s~n", [Record#log_record.vhost]),
159 P("User: ~s~n", [Record#log_record.username]),
160 P("Channel: ~p~n", [Record#log_record.channel]),
161 P("Exchange: ~s~n", [Record#log_record.exchange]),
162 case Record#log_record.queue of
163 none -> ok;
164 Q -> P("Queue: ~s~n", [Q])
165 end,
166 P("Routing keys: ~p~n", [Record#log_record.routing_keys]),
167 P("Properties: ~p~n", [Record#log_record.properties]),
168 P("Payload: ~n~s~n", [Record#log_record.payload]);
169
170 log(json, P, Record) ->
171 P("~s~n", [mochijson2:encode(
172 [{timestamp, Record#log_record.timestamp},
173 {type, Record#log_record.type},
174 {node, Record#log_record.node},
175 {connection, Record#log_record.connection},
176 {vhost, Record#log_record.vhost},
177 {user, Record#log_record.username},
178 {channel, Record#log_record.channel},
179 {exchange, Record#log_record.exchange},
180 {queue, Record#log_record.queue},
181 {routing_keys, Record#log_record.routing_keys},
182 {properties, rabbit_mgmt_format:amqp_table(
158 payload = truncate(Payload, State)}.
159
160 log(text, Record, State) ->
161 Fmt = "~n========================================"
162 "========================================~n~s: Message ~s~n~n"
163 "Node: ~s~nConnection: ~s~n"
164 "Virtual host: ~s~nUser: ~s~n"
165 "Channel: ~p~nExchange: ~s~n"
166 "Routing keys: ~p~n" ++
167 case Record#log_record.queue of
168 none -> "";
169 _ -> "Queue: ~s~n"
170 end ++
171 case Record#log_record.routed_queues of
172 none -> "";
173 _ -> "Routed queues: ~p~n"
174 end ++
175 "Properties: ~p~nPayload: ~n~s~n",
176 Args =
177 [Record#log_record.timestamp,
178 Record#log_record.type,
179 Record#log_record.node, Record#log_record.connection,
180 Record#log_record.vhost, Record#log_record.username,
181 Record#log_record.channel, Record#log_record.exchange,
182 Record#log_record.routing_keys] ++
183 case Record#log_record.queue of
184 none -> [];
185 Q -> [Q]
186 end ++
187 case Record#log_record.routed_queues of
188 none -> [];
189 RQs -> [RQs]
190 end ++
191 [Record#log_record.properties, Record#log_record.payload],
192 print_log(io_lib:format(Fmt, Args), State);
193
194 log(json, Record, State) ->
195 print_log(mochijson2:encode(
196 [{timestamp, Record#log_record.timestamp},
197 {type, Record#log_record.type},
198 {node, Record#log_record.node},
199 {connection, Record#log_record.connection},
200 {vhost, Record#log_record.vhost},
201 {user, Record#log_record.username},
202 {channel, Record#log_record.channel},
203 {exchange, Record#log_record.exchange},
204 {queue, Record#log_record.queue},
205 {routed_queues, Record#log_record.routed_queues},
206 {routing_keys, Record#log_record.routing_keys},
207 {properties, rabbit_mgmt_format:amqp_table(
183208 Record#log_record.properties)},
184 {payload, base64:encode(Record#log_record.payload)}])]).
209 {payload, base64:encode(Record#log_record.payload)}])
210 ++ "\n",
211 State).
212
213 print_log(LogMsg, State = #state{buf = Buf, buf_cnt = BufCnt}) ->
214 maybe_flush(State#state{buf = [LogMsg | Buf], buf_cnt = BufCnt + 1}).
215
216 maybe_flush(State = #state{buf_cnt = ?MAX_BUF}) ->
217 flush(State);
218 maybe_flush(State) ->
219 State.
220
221 flush(State = #state{file = F, buf = Buf}) ->
222 prim_file:write(F, lists:reverse(Buf)),
223 State#state{buf = [], buf_cnt = 0}.
224
225 truncate(Payload, #state{max_payload = Max}) ->
226 case Max =:= unlimited orelse size(Payload) =< Max of
227 true -> Payload;
228 false -> <<Trunc:Max/binary, _/binary>> = Payload,
229 Trunc
230 end.
2222 -define(ERR, <<"Something went wrong trying to start the trace - check the "
2323 "logs.">>).
2424
25 -import(rabbit_misc, [pget/2, pget/3]).
26
2527 -include_lib("rabbitmq_management/include/rabbit_mgmt.hrl").
2628 -include_lib("webmachine/include/webmachine.hrl").
2729
4648 to_json(ReqData, Context) ->
4749 rabbit_mgmt_util:reply(trace(ReqData), ReqData, Context).
4850
49 accept_content(ReqData, Context) ->
50 case rabbit_mgmt_util:vhost(ReqData) of
51 not_found -> not_found;
52 VHost -> Name = rabbit_mgmt_util:id(name, ReqData),
53 rabbit_mgmt_util:with_decode(
54 [format], ReqData, Context,
55 fun([_], Trace) ->
56 case rabbit_tracing_traces:create(
57 VHost, Name, Trace) of
58 {ok, _} -> {true, ReqData, Context};
59 _ -> rabbit_mgmt_util:bad_request(
60 ?ERR, ReqData, Context)
61 end
62 end)
51 accept_content(RD, Ctx) ->
52 case rabbit_mgmt_util:vhost(RD) of
53 not_found ->
54 not_found;
55 VHost ->
56 Name = rabbit_mgmt_util:id(name, RD),
57 rabbit_mgmt_util:with_decode(
58 [format, pattern], RD, Ctx,
59 fun([_, _], Trace) ->
60 Fs = [fun val_payload_bytes/3, fun val_format/3,
61 fun val_create/3],
62 case lists:foldl(fun (F, ok) -> F(VHost, Name, Trace);
63 (_F, Err) -> Err
64 end, ok, Fs) of
65 ok -> {true, RD, Ctx};
66 Err -> rabbit_mgmt_util:bad_request(Err, RD, Ctx)
67 end
68 end)
6369 end.
6470
6571 delete_resource(ReqData, Context) ->
7985 VHost -> rabbit_tracing_traces:lookup(
8086 VHost, rabbit_mgmt_util:id(name, ReqData))
8187 end.
88
89 val_payload_bytes(_VHost, _Name, Trace) ->
90 case is_integer(pget(max_payload_bytes, Trace, 0)) of
91 false -> <<"max_payload_bytes not integer">>;
92 true -> ok
93 end.
94
95 val_format(_VHost, _Name, Trace) ->
96 case lists:member(pget(format, Trace), [<<"json">>, <<"text">>]) of
97 false -> <<"format not json or text">>;
98 true -> ok
99 end.
100
101 val_create(VHost, Name, Trace) ->
102 case rabbit_tracing_traces:create(VHost, Name, Trace) of
103 {ok, _} -> ok;
104 _ -> ?ERR
105 end.
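`accept_content` folds over the validator funs with `lists:foldl`, short-circuiting on the first failure so only the first error message is reported. The same pattern in a Python sketch (function names are illustrative):

```python
def run_validators(validators, trace):
    # Run each check in order; the first non-ok result wins,
    # mirroring the lists:foldl accumulator in accept_content.
    for check in validators:
        result = check(trace)
        if result != "ok":
            return result
    return "ok"

def val_payload_bytes(trace):
    # Missing max_payload_bytes defaults to 0, which is a valid integer.
    ok = isinstance(trace.get("max_payload_bytes", 0), int)
    return "ok" if ok else "max_payload_bytes not integer"

def val_format(trace):
    ok = trace.get("format") in ("json", "text")
    return "ok" if ok else "format not json or text"
```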
6868 http_delete("/trace-files/test.log", ?NO_CONTENT),
6969 ok.
7070
71 tracing_validation_test() ->
72 Path = "/traces/%2f/test",
73 http_put(Path, [{pattern, <<"#">>}], ?BAD_REQUEST),
74 http_put(Path, [{format, <<"json">>}], ?BAD_REQUEST),
75 http_put(Path, [{format, <<"ebcdic">>},
76 {pattern, <<"#">>}], ?BAD_REQUEST),
77 http_put(Path, [{format, <<"text">>},
78 {pattern, <<"#">>},
79 {max_payload_bytes, <<"abc">>}], ?BAD_REQUEST),
80 http_put(Path, [{format, <<"json">>},
81 {pattern, <<"#">>},
82 {max_payload_bytes, 1000}], ?NO_CONTENT),
83 http_delete(Path, ?NO_CONTENT),
84 ok.
85
7186 %%---------------------------------------------------------------------------
7287 %% Below is copypasta from rabbit_mgmt_test_http, it's not obvious how
7388 %% to share that given the build system.
3131 <<"/stomp">>, fun service_stomp/3, {}, SockjsOpts),
3232 VhostRoutes = [{[<<"stomp">>, '...'], sockjs_cowboy_handler, SockjsState}],
3333 Routes = [{'_', VhostRoutes}], % any vhost
34 cowboy:start_listener(http, 100,
34 NbAcceptors = get_env(nb_acceptors, 100),
35 cowboy:start_listener(http, NbAcceptors,
3536 cowboy_tcp_transport, [{port, Port}],
3637 cowboy_http_protocol, [{dispatch, Routes}]),
3738 rabbit_log:info("rabbit_web_stomp: listening for HTTP connections on ~s:~w~n",
4243 Conf ->
4344 rabbit_networking:ensure_ssl(),
4445 TLSPort = proplists:get_value(port, Conf),
45 cowboy:start_listener(https, 100,
46 cowboy:start_listener(https, NbAcceptors,
4647 cowboy_ssl_transport, Conf,
4748 cowboy_http_protocol, [{dispatch, Routes}]),
4849 rabbit_log:info("rabbit_web_stomp: listening for HTTPS connections on ~s:~w~n",
0
10 RabbitMQ-Web-Stomp-Examples plugin
21 ==================================
32
99 SIGNING_USER_EMAIL=info@rabbitmq.com
1010 SIGNING_USER_ID=RabbitMQ Release Signing Key <info@rabbitmq.com>
1111
12 # Misc options to pass to hg commands
13 HG_OPTS=
12 # Misc options to pass to git commands
13 GIT_OPTS=
1414
1515 # Misc options to pass to ssh commands
1616 SSH_OPTS=
3434
3535 REPOS:=rabbitmq-codegen rabbitmq-server rabbitmq-java-client rabbitmq-dotnet-client rabbitmq-test
3636
37 HGREPOBASE:=$(shell dirname `hg paths default 2>/dev/null` 2>/dev/null)
38
39 ifeq ($(HGREPOBASE),)
40 HGREPOBASE=ssh://hg@hg.rabbitmq.com
37 GITREPOBASE:=$(shell dirname `git remote -v 2>/dev/null | awk '/^origin\t.+ \(fetch\)$$/ { print $$2; }'` 2>/dev/null)
38
39 ifeq ($(GITREPOBASE),)
40 GITREPOBASE=https://github.com/rabbitmq
4141 endif
4242
4343 .PHONY: all
129129
130130 .PHONY: rabbitmq-server-windows-exe-packaging
131131 rabbitmq-server-windows-exe-packaging: rabbitmq-server-windows-packaging
132 $(MAKE) -C rabbitmq-server/packaging/windows-exe clean
132133 $(MAKE) -C rabbitmq-server/packaging/windows-exe dist VERSION=$(VERSION)
133134 cp rabbitmq-server/packaging/windows-exe/rabbitmq-server-*.exe $(SERVER_PACKAGES_DIR)
134135
5959 [ "x" = "x$RABBITMQ_USE_LONGNAME" ] && RABBITMQ_USE_LONGNAME=${USE_LONGNAME}
6060 if [ "xtrue" = "x$RABBITMQ_USE_LONGNAME" ] ; then
6161 RABBITMQ_NAME_TYPE=-name
62 [ "x" = "x$HOSTNAME" ] && HOSTNAME=`env hostname --fqdn`
62 [ "x" = "x$HOSTNAME" ] && HOSTNAME=`env hostname -f`
6363 [ "x" = "x$NODENAME" ] && NODENAME=rabbit@${HOSTNAME}
6464 else
6565 RABBITMQ_NAME_TYPE=-sname
1818 # Non-empty defaults should be set in rabbitmq-env
1919 . `dirname $0`/rabbitmq-env
2020
21 RABBITMQ_USE_LONGNAME=${RABBITMQ_USE_LONGNAME} \
2122 exec ${ERL_DIR}erl \
2223 -pa "${RABBITMQ_HOME}/ebin" \
2324 -noinput \
2425 -hidden \
25 ${RABBITMQ_NAME_TYPE} rabbitmq-plugins$$ \
26 ${RABBITMQ_PLUGINS_ERL_ARGS} \
2627 -boot "${CLEAN_BOOT_FILE}" \
2728 -s rabbit_plugins_main \
2829 -enabled_plugins_file "$RABBITMQ_ENABLED_PLUGINS_FILE" \
2222 set STAR=%*
2323 setlocal enabledelayedexpansion
2424
25 if "!RABBITMQ_USE_LONGNAME!"=="" (
26 set RABBITMQ_NAME_TYPE="-sname"
27 )
28
29 if "!RABBITMQ_USE_LONGNAME!"=="true" (
30 set RABBITMQ_NAME_TYPE="-name"
31 )
32
3325 if "!RABBITMQ_SERVICENAME!"=="" (
3426 set RABBITMQ_SERVICENAME=RabbitMQ
3527 )
5143 echo Please either set ERLANG_HOME to point to your Erlang installation or place the
5244 echo RabbitMQ server distribution in the Erlang lib folder.
5345 echo.
54 exit /B
46 exit /B 1
5547 )
5648
5749 if "!RABBITMQ_ENABLED_PLUGINS_FILE!"=="" (
6254 set RABBITMQ_PLUGINS_DIR=!TDP0!..\plugins
6355 )
6456
65 "!ERLANG_HOME!\bin\erl.exe" -pa "!TDP0!..\ebin" -noinput -hidden !RABBITMQ_NAME_TYPE! rabbitmq-plugins!RANDOM!!TIME:~9! -s rabbit_plugins_main -enabled_plugins_file "!RABBITMQ_ENABLED_PLUGINS_FILE!" -plugins_dist_dir "!RABBITMQ_PLUGINS_DIR:\=/!" -nodename !RABBITMQ_NODENAME! -extra !STAR!
57 "!ERLANG_HOME!\bin\erl.exe" ^
58 -pa "!TDP0!..\ebin" ^
59 -noinput ^
60 -hidden ^
61 !RABBITMQ_CTL_ERL_ARGS! ^
62 -s rabbit_plugins_main ^
63 -enabled_plugins_file "!RABBITMQ_ENABLED_PLUGINS_FILE!" ^
64 -plugins_dist_dir "!RABBITMQ_PLUGINS_DIR:\=/!" ^
65 -nodename !RABBITMQ_NODENAME! ^
66 -extra !STAR!
6667
6768 endlocal
6869 endlocal
6969 echo Please either set ERLANG_HOME to point to your Erlang installation or place the
7070 echo RabbitMQ server distribution in the Erlang lib folder.
7171 echo.
72 exit /B
72 exit /B 1
7373 )
7474
7575 if "!RABBITMQ_MNESIA_BASE!"=="" (
141141 )
142142 )
143143
144 set RABBITMQ_START_RABBIT=
145 if "!RABBITMQ_NODE_ONLY!"=="" (
146 set RABBITMQ_START_RABBIT=-s rabbit boot
147 )
148
144149 "!ERLANG_HOME!\bin\erl.exe" ^
145150 -pa "!RABBITMQ_EBIN_ROOT!" ^
146151 -noinput ^
147152 -boot start_sasl ^
148 -s rabbit boot ^
153 !RABBITMQ_START_RABBIT! ^
149154 !RABBITMQ_CONFIG_ARG! ^
150155 !RABBITMQ_NAME_TYPE! !RABBITMQ_NODENAME! ^
151156 +W w ^
218218 )
219219 )
220220
221 set RABBITMQ_START_RABBIT=
222 if "!RABBITMQ_NODE_ONLY!"=="" (
223 set RABBITMQ_START_RABBIT=-s rabbit boot
224 )
225
221226 set ERLANG_SERVICE_ARGUMENTS= ^
222227 -pa "!RABBITMQ_EBIN_ROOT!" ^
223228 -boot start_sasl ^
224 -s rabbit boot ^
229 !RABBITMQ_START_RABBIT! ^
225230 !RABBITMQ_CONFIG_ARG! ^
226231 +W w ^
1818 # Non-empty defaults should be set in rabbitmq-env
1919 . `dirname $0`/rabbitmq-env
2020
21 # rabbitmqctl starts distribution itself, so we need to make sure epmd
22 # is running.
23 ${ERL_DIR}erl ${RABBITMQ_NAME_TYPE} rabbitmqctl-prelaunch-$$ -noinput \
24 -eval 'erlang:halt().' -boot "${CLEAN_BOOT_FILE}"
25
2621 # We specify Mnesia dir and sasl error logger since some actions
2722 # (e.g. forget_cluster_node --offline) require us to impersonate the
2823 # real node.
2626 set RABBITMQ_BASE=!APPDATA!\RabbitMQ
2727 )
2828
29 if "!RABBITMQ_USE_LONGNAME!"=="" (
30 set RABBITMQ_NAME_TYPE="-sname"
31 )
32
33 if "!RABBITMQ_USE_LONGNAME!"=="true" (
34 set RABBITMQ_NAME_TYPE="-name"
35 )
36
3729 if "!COMPUTERNAME!"=="" (
3830 set COMPUTERNAME=localhost
3931 )
5951 echo Please either set ERLANG_HOME to point to your Erlang installation or place the
6052 echo RabbitMQ server distribution in the Erlang lib folder.
6153 echo.
62 exit /B
54 exit /B 1
6355 )
64
65 rem rabbitmqctl starts distribution itself, so we need to make sure epmd
66 rem is running.
67 "!ERLANG_HOME!\bin\erl.exe" !RABBITMQ_NAME_TYPE! rabbitmqctl-prelaunch-!RANDOM!!TIME:~9! -noinput -eval "erlang:halt()."
6856
6957 "!ERLANG_HOME!\bin\erl.exe" ^
7058 -pa "!TDP0!..\ebin" ^
6161
6262 stop_applications(Apps, ErrorHandler) ->
6363 manage_applications(fun lists:foldr/3,
64 %% Mitigation for bug 26467. TODO remove when we fix it.
65 fun (mnesia) ->
66 timer:sleep(1000),
67 application:stop(mnesia);
68 (App) ->
69 application:stop(App)
70 end,
64 fun application:stop/1,
7165 fun application:start/1,
7266 not_started,
7367 ErrorHandler,
1414 %%
1515
1616 -module(delegate).
17
18 %% delegate is an alternative way of doing remote calls. Compared to
19 %% the rpc module, it reduces inter-node communication. For example,
20 %% if a message is routed to 1,000 queues on node A and needs to be
21 %% propagated to nodes B and C, it would be nice to avoid doing 2,000
22 %% remote casts to queue processes.
23 %%
24 %% An important issue here is preserving order - we need to make sure
25 %% that messages from a certain channel to a certain queue take a
26 %% consistent route, to prevent them being reordered. In fact all
27 %% AMQP-ish things (such as queue declaration results and basic.get)
28 %% must take the same route as well, to ensure that clients see causal
29 %% ordering correctly. Therefore we have a rather generic mechanism
30 %% here rather than just a message-reflector. That's also why we pick
31 %% the delegate process to use based on a hash of the source pid.
32 %%
33 %% When a function is invoked using delegate:invoke/2, delegate:call/2
34 %% or delegate:cast/2 on a group of pids, the pids are first split
35 %% into local and remote ones. Remote processes are then grouped by
36 %% node. The function is then invoked locally and on every node (using
37 %% gen_server2:multi/4) as many times as there are processes on that
38 %% node, sequentially.
39 %%
40 %% Errors returned when executing functions on remote nodes are re-raised
41 %% in the caller.
42 %%
43 %% RabbitMQ starts a pool of delegate processes on boot. The size of
44 %% the pool is configurable; the aim is to make sure we don't have too
45 %% few delegates and thus limit performance on many-CPU machines.
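The routing scheme these comments describe (split target pids into local and per-node groups, and pick a delegate by hashing the source pid so that all messages from one source take a consistent, order-preserving route) can be sketched in Python. The names and the use of `md5` are illustrative only; the real module uses Erlang's own hashing and process registry.

```python
import hashlib

def pick_delegate(source_pid, pool_size):
    """Choose a delegate index deterministically from the source pid, so
    every message from the same source takes the same route (order is kept)."""
    digest = hashlib.md5(source_pid.encode()).digest()
    return digest[0] % pool_size

def group_by_node(pids, local_node):
    """Split target (node, pid) pairs into local pids and per-remote-node
    groups, so each remote node gets one batched invocation, not N casts."""
    local, remote = [], {}
    for node, pid in pids:
        if node == local_node:
            local.append(pid)
        else:
            remote.setdefault(node, []).append(pid)
    return local, remote
```

Grouping is what turns 2,000 remote casts into one call per remote node; the hash is what keeps causal ordering intact.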
1746
1847 -behaviour(gen_server2).
1948
2929 %% may happen, especially for writes.
3030 %% 3) Writes are all appends. You cannot write to the middle of a
3131 %% file, although you can truncate and then append if you want.
32 %% 4) Although there is a write buffer, there is no read buffer. Feel
33 %% free to use the read_ahead mode, but beware of the interaction
34 %% between that buffer and the write buffer.
32 %% 4) There are read and write buffers. Feel free to use the read_ahead
33 %% mode, but beware of the interaction between that buffer and the write
34 %% buffer.
3535 %%
3636 %% Some benefits
3737 %% 1) You do not have to remember to call sync before close
177177 write_buffer_size,
178178 write_buffer_size_limit,
179179 write_buffer,
180 read_buffer,
181 read_buffer_pos,
182 read_buffer_rem, %% Num of bytes from pos to end
183 read_buffer_size, %% Next size of read buffer to use
184 read_buffer_size_limit, %% Max size of read buffer to use
185 read_buffer_usage, %% Bytes we have read from it, for tuning
180186 at_eof,
181187 path,
182188 mode,
236242 -spec(register_callback/3 :: (atom(), atom(), [any()]) -> 'ok').
237243 -spec(open/3 ::
238244 (file:filename(), [any()],
239 [{'write_buffer', (non_neg_integer() | 'infinity' | 'unbuffered')}])
245 [{'write_buffer', (non_neg_integer() | 'infinity' | 'unbuffered')} |
246 {'read_buffer', (non_neg_integer() | 'unbuffered')}])
240247 -> val_or_error(ref())).
241248 -spec(close/1 :: (ref()) -> ok_or_error()).
242249 -spec(read/2 :: (ref(), non_neg_integer()) ->
330337
331338 read(Ref, Count) ->
332339 with_flushed_handles(
333 [Ref],
340 [Ref], keep,
334341 fun ([#handle { is_read = false }]) ->
335342 {error, not_open_for_reading};
336 ([Handle = #handle { hdl = Hdl, offset = Offset }]) ->
337 case prim_file:read(Hdl, Count) of
338 {ok, Data} = Obj -> Offset1 = Offset + iolist_size(Data),
339 {Obj,
340 [Handle #handle { offset = Offset1 }]};
341 eof -> {eof, [Handle #handle { at_eof = true }]};
342 Error -> {Error, [Handle]}
343 ([Handle = #handle{read_buffer = Buf,
344 read_buffer_pos = BufPos,
345 read_buffer_rem = BufRem,
346 read_buffer_usage = BufUsg,
347 offset = Offset}])
348 when BufRem >= Count ->
349 <<_:BufPos/binary, Res:Count/binary, _/binary>> = Buf,
350 {{ok, Res}, [Handle#handle{offset = Offset + Count,
351 read_buffer_pos = BufPos + Count,
352 read_buffer_rem = BufRem - Count,
353 read_buffer_usage = BufUsg + Count }]};
354 ([Handle0]) ->
355 Handle = #handle{read_buffer = Buf,
356 read_buffer_pos = BufPos,
357 read_buffer_rem = BufRem,
358 read_buffer_size = BufSz,
359 hdl = Hdl,
360 offset = Offset}
361 = tune_read_buffer_limit(Handle0, Count),
362 WantedCount = Count - BufRem,
363 case prim_file_read(Hdl, lists:max([BufSz, WantedCount])) of
364 {ok, Data} ->
365 <<_:BufPos/binary, BufTl/binary>> = Buf,
366 ReadCount = size(Data),
367 case ReadCount < WantedCount of
368 true ->
369 OffSet1 = Offset + BufRem + ReadCount,
370 {{ok, <<BufTl/binary, Data/binary>>},
371 [reset_read_buffer(
372 Handle#handle{offset = OffSet1})]};
373 false ->
374 <<Hd:WantedCount/binary, _/binary>> = Data,
375 OffSet1 = Offset + BufRem + WantedCount,
376 BufRem1 = ReadCount - WantedCount,
377 {{ok, <<BufTl/binary, Hd/binary>>},
378 [Handle#handle{offset = OffSet1,
379 read_buffer = Data,
380 read_buffer_pos = WantedCount,
381 read_buffer_rem = BufRem1,
382 read_buffer_usage = WantedCount}]}
383 end;
384 eof ->
385 {eof, [Handle #handle { at_eof = true }]};
386 Error ->
387 {Error, [reset_read_buffer(Handle)]}
343388 end
344389 end).
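The two paths of the buffered read above (serve entirely from the in-memory buffer on a hit, otherwise drain the buffered tail and fetch the rest from the file) can be modelled with a small pure function; `read_fn` is a stand-in for `prim_file:read/2` and the tuple shape is simplified from the real handle record.

```python
def buffered_read(buf, pos, rem, want, read_fn):
    """Return (data, new_pos, new_rem). On a buffer hit, slice the buffer;
    on a miss, take the buffered tail and read the remainder via read_fn."""
    if rem >= want:                        # buffer hit: no file I/O at all
        return buf[pos:pos + want], pos + want, rem - want
    tail = buf[pos:pos + rem]              # drain what the buffer still holds
    data = read_fn(want - rem)             # then go to the file for the rest
    return tail + data, 0, 0               # buffer is reset after a miss
```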
345390
354399 write_buffer_size_limit = 0,
355400 at_eof = true } = Handle1} ->
356401 Offset1 = Offset + iolist_size(Data),
357 {prim_file:write(Hdl, Data),
402 {prim_file_write(Hdl, Data),
358403 [Handle1 #handle { is_dirty = true, offset = Offset1 }]};
359404 {{ok, _Offset}, #handle { write_buffer = WriteBuffer,
360405 write_buffer_size = Size,
376421
377422 sync(Ref) ->
378423 with_flushed_handles(
379 [Ref],
424 [Ref], keep,
380425 fun ([#handle { is_dirty = false, write_buffer = [] }]) ->
381426 ok;
382427 ([Handle = #handle { hdl = Hdl,
383428 is_dirty = true, write_buffer = [] }]) ->
384 case prim_file:sync(Hdl) of
429 case prim_file_sync(Hdl) of
385430 ok -> {ok, [Handle #handle { is_dirty = false }]};
386431 Error -> {Error, [Handle]}
387432 end
396441
397442 position(Ref, NewOffset) ->
398443 with_flushed_handles(
399 [Ref],
444 [Ref], keep,
400445 fun ([Handle]) -> {Result, Handle1} = maybe_seek(NewOffset, Handle),
401446 {Result, [Handle1]}
402447 end).
464509 fun ([#handle { at_eof = true, write_buffer_size = 0, offset = 0 }]) ->
465510 ok;
466511 ([Handle]) ->
467 case maybe_seek(bof, Handle #handle { write_buffer = [],
468 write_buffer_size = 0 }) of
512 case maybe_seek(bof, Handle#handle{write_buffer = [],
513 write_buffer_size = 0}) of
469514 {{ok, 0}, Handle1 = #handle { hdl = Hdl }} ->
470515 case prim_file:truncate(Hdl) of
471516 ok -> {ok, [Handle1 #handle { at_eof = true }]};
538583 %% Internal functions
539584 %%----------------------------------------------------------------------------
540585
586 prim_file_read(Hdl, Size) ->
587 file_handle_cache_stats:update(
588 io_read, Size, fun() -> prim_file:read(Hdl, Size) end).
589
590 prim_file_write(Hdl, Bytes) ->
591 file_handle_cache_stats:update(
592 io_write, iolist_size(Bytes), fun() -> prim_file:write(Hdl, Bytes) end).
593
594 prim_file_sync(Hdl) ->
595 file_handle_cache_stats:update(io_sync, fun() -> prim_file:sync(Hdl) end).
596
597 prim_file_position(Hdl, NewOffset) ->
598 file_handle_cache_stats:update(
599 io_seek, fun() -> prim_file:position(Hdl, NewOffset) end).
600
541601 is_reader(Mode) -> lists:member(read, Mode).
542602
543603 is_writer(Mode) -> lists:member(write, Mode).
549609 end.
550610
551611 with_handles(Refs, Fun) ->
612 with_handles(Refs, reset, Fun).
613
614 with_handles(Refs, ReadBuffer, Fun) ->
552615 case get_or_reopen([{Ref, reopen} || Ref <- Refs]) of
553 {ok, Handles} ->
616 {ok, Handles0} ->
617 Handles = case ReadBuffer of
618 reset -> [reset_read_buffer(H) || H <- Handles0];
619 keep -> Handles0
620 end,
554621 case Fun(Handles) of
555622 {Result, Handles1} when is_list(Handles1) ->
556623 lists:zipwith(fun put_handle/2, Refs, Handles1),
563630 end.
564631
565632 with_flushed_handles(Refs, Fun) ->
633 with_flushed_handles(Refs, reset, Fun).
634
635 with_flushed_handles(Refs, ReadBuffer, Fun) ->
566636 with_handles(
567 Refs,
637 Refs, ReadBuffer,
568638 fun (Handles) ->
569639 case lists:foldl(
570640 fun (Handle, {ok, HandlesAcc}) ->
610680 {ok, lists:reverse(RefHdls)};
611681 reopen([{Ref, NewOrReopen, Handle = #handle { hdl = closed,
612682 path = Path,
613 mode = Mode,
683 mode = Mode0,
614684 offset = Offset,
615685 last_used_at = undefined }} |
616686 RefNewOrReopenHdls] = ToOpen, Tree, RefHdls) ->
617 case prim_file:open(Path, case NewOrReopen of
618 new -> Mode;
619 reopen -> [read | Mode]
620 end) of
687 Mode = case NewOrReopen of
688 new -> Mode0;
689 reopen -> file_handle_cache_stats:update(io_reopen),
690 [read | Mode0]
691 end,
692 case prim_file:open(Path, Mode) of
621693 {ok, Hdl} ->
622694 Now = now(),
623695 {{ok, _Offset}, Handle1} =
624 maybe_seek(Offset, Handle #handle { hdl = Hdl,
625 offset = 0,
626 last_used_at = Now }),
696 maybe_seek(Offset, reset_read_buffer(
697 Handle#handle{hdl = Hdl,
698 offset = 0,
699 last_used_at = Now})),
627700 put({Ref, fhc_handle}, Handle1),
628701 reopen(RefNewOrReopenHdls, gb_trees:insert(Now, Ref, Tree),
629702 [{Ref, Handle1} | RefHdls]);
708781 infinity -> infinity;
709782 N when is_integer(N) -> N
710783 end,
784 ReadBufferSize =
785 case proplists:get_value(read_buffer, Options, unbuffered) of
786 unbuffered -> 0;
787 N2 when is_integer(N2) -> N2
788 end,
711789 Ref = make_ref(),
712790 put({Ref, fhc_handle}, #handle { hdl = closed,
713791 offset = 0,
715793 write_buffer_size = 0,
716794 write_buffer_size_limit = WriteBufferSize,
717795 write_buffer = [],
796 read_buffer = <<>>,
797 read_buffer_pos = 0,
798 read_buffer_rem = 0,
799 read_buffer_size = ReadBufferSize,
800 read_buffer_size_limit = ReadBufferSize,
801 read_buffer_usage = 0,
718802 at_eof = false,
719803 path = Path,
720804 mode = Mode,
741825 is_dirty = IsDirty,
742826 last_used_at = Then } = Handle1 } ->
743827 ok = case IsDirty of
744 true -> prim_file:sync(Hdl);
828 true -> prim_file_sync(Hdl);
745829 false -> ok
746830 end,
747831 ok = prim_file:close(Hdl),
775859 Result
776860 end.
777861
778 maybe_seek(NewOffset, Handle = #handle { hdl = Hdl, offset = Offset,
779 at_eof = AtEoF }) ->
780 {AtEoF1, NeedsSeek} = needs_seek(AtEoF, Offset, NewOffset),
781 case (case NeedsSeek of
782 true -> prim_file:position(Hdl, NewOffset);
783 false -> {ok, Offset}
784 end) of
785 {ok, Offset1} = Result ->
786 {Result, Handle #handle { offset = Offset1, at_eof = AtEoF1 }};
787 {error, _} = Error ->
788 {Error, Handle}
862 maybe_seek(New, Handle = #handle{hdl = Hdl,
863 offset = Old,
864 read_buffer_pos = BufPos,
865 read_buffer_rem = BufRem,
866 at_eof = AtEoF}) ->
867 {AtEoF1, NeedsSeek} = needs_seek(AtEoF, Old, New),
868 case NeedsSeek of
869 true when is_number(New) andalso
870 ((New >= Old andalso New =< BufRem + Old)
871 orelse (New < Old andalso Old - New =< BufPos)) ->
872 Diff = New - Old,
873 {{ok, New}, Handle#handle{offset = New,
874 at_eof = AtEoF1,
875 read_buffer_pos = BufPos + Diff,
876 read_buffer_rem = BufRem - Diff}};
877 true ->
878 case prim_file_position(Hdl, New) of
879 {ok, Offset1} = Result ->
880 {Result, reset_read_buffer(Handle#handle{offset = Offset1,
881 at_eof = AtEoF1})};
882 {error, _} = Error ->
883 {Error, Handle}
884 end;
885 false ->
886 {{ok, Old}, Handle}
789887 end.
790888
791889 needs_seek( AtEoF, _CurOffset, cur ) -> {AtEoF, false};
816914 write_buffer = WriteBuffer,
817915 write_buffer_size = DataSize,
818916 at_eof = true }) ->
819 case prim_file:write(Hdl, lists:reverse(WriteBuffer)) of
917 case prim_file_write(Hdl, lists:reverse(WriteBuffer)) of
820918 ok ->
821919 Offset1 = Offset + DataSize,
822920 {ok, Handle #handle { offset = Offset1, is_dirty = true,
825923 {Error, Handle}
826924 end.
827925
926 reset_read_buffer(Handle) ->
927 Handle#handle{read_buffer = <<>>,
928 read_buffer_pos = 0,
929 read_buffer_rem = 0}.
930
931 %% We come into this function whenever there's been a miss while
932 %% reading from the buffer - but note that when we first start with a
933 %% new handle the usage will be 0. Therefore in that case don't take
934 %% it as meaning the buffer was useless, we just haven't done anything
935 %% yet!
936 tune_read_buffer_limit(Handle = #handle{read_buffer_usage = 0}, _Count) ->
937 Handle;
938 %% In this head we have been using the buffer but now tried to read
939 %% outside it. So how did we do? If we used less than the size of the
940 %% buffer, make the new buffer the size of what we used before, but
941 %% add one byte (so that next time we can distinguish between getting
942 %% the buffer size exactly right and actually wanting more). If we
943 %% read 100% of what we had, then double it for next time, up to the
944 %% limit that was set when we were created.
945 tune_read_buffer_limit(Handle = #handle{read_buffer = Buf,
946 read_buffer_usage = Usg,
947 read_buffer_size = Sz,
948 read_buffer_size_limit = Lim}, Count) ->
949 %% If the buffer is <<>> then we are in the first read after a
950 %% reset, the read_buffer_usage is the total usage from before the
951 %% reset. But otherwise we are in a read which read off the end of
952 %% the buffer, so really the size of this read should be included
953 %% in the usage.
954 TotalUsg = case Buf of
955 <<>> -> Usg;
956 _ -> Usg + Count
957 end,
958 Handle#handle{read_buffer_usage = 0,
959 read_buffer_size = erlang:min(case TotalUsg < Sz of
960 true -> Usg + 1;
961 false -> Usg * 2
962 end, Lim)}.
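The tuning rule in the comments above reduces to a small pure function: shrink to usage plus one byte when the buffer was only partly used (the extra byte distinguishes "exactly right" from "wanted more"), double on full use, and cap at the configured limit. This is a simplification; the real code also folds pre-reset usage into the comparison.

```python
def tune_read_buffer(usage, size, limit):
    """Next read-buffer size given how much of the current one was used."""
    if usage < size:
        return min(usage + 1, limit)   # partly used: shrink to fit, +1 byte
    return min(usage * 2, limit)       # fully used: double, up to the cap
```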
963
828964 infos(Items, State) -> [{Item, i(Item, State)} || Item <- Items].
829965
830966 i(total_limit, #fhc_state{limit = Limit}) -> Limit;
842978 %%----------------------------------------------------------------------------
843979
844980 init([AlarmSet, AlarmClear]) ->
981 file_handle_cache_stats:init(),
845982 Limit = case application:get_env(file_handles_high_watermark) of
846983 {ok, Watermark} when (is_integer(Watermark) andalso
847984 Watermark > 0) ->
0 %% The contents of this file are subject to the Mozilla Public License
1 %% Version 1.1 (the "License"); you may not use this file except in
2 %% compliance with the License. You may obtain a copy of the License
3 %% at http://www.mozilla.org/MPL/
4 %%
5 %% Software distributed under the License is distributed on an "AS IS"
6 %% basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See
7 %% the License for the specific language governing rights and
8 %% limitations under the License.
9 %%
10 %% The Original Code is RabbitMQ.
11 %%
12 %% The Initial Developer of the Original Code is GoPivotal, Inc.
13 %% Copyright (c) 2007-2014 GoPivotal, Inc. All rights reserved.
14 %%
15
16 -module(file_handle_cache_stats).
17
18 %% stats about read / write operations that go through the fhc.
19
20 -export([init/0, update/3, update/2, update/1, get/0]).
21
22 -define(TABLE, ?MODULE).
23
24 -define(COUNT,
25 [io_reopen, mnesia_ram_tx, mnesia_disk_tx,
26 msg_store_read, msg_store_write,
27 queue_index_journal_write, queue_index_write, queue_index_read]).
28 -define(COUNT_TIME, [io_sync, io_seek]).
29 -define(COUNT_TIME_BYTES, [io_read, io_write]).
30
31 init() ->
32 ets:new(?TABLE, [public, named_table]),
33 [ets:insert(?TABLE, {{Op, Counter}, 0}) || Op <- ?COUNT_TIME_BYTES,
34 Counter <- [count, bytes, time]],
35 [ets:insert(?TABLE, {{Op, Counter}, 0}) || Op <- ?COUNT_TIME,
36 Counter <- [count, time]],
37 [ets:insert(?TABLE, {{Op, Counter}, 0}) || Op <- ?COUNT,
38 Counter <- [count]].
39
40 update(Op, Bytes, Thunk) ->
41 {Time, Res} = timer_tc(Thunk),
42 ets:update_counter(?TABLE, {Op, count}, 1),
43 ets:update_counter(?TABLE, {Op, bytes}, Bytes),
44 ets:update_counter(?TABLE, {Op, time}, Time),
45 Res.
46
47 update(Op, Thunk) ->
48 {Time, Res} = timer_tc(Thunk),
49 ets:update_counter(?TABLE, {Op, count}, 1),
50 ets:update_counter(?TABLE, {Op, time}, Time),
51 Res.
52
53 update(Op) ->
54 ets:update_counter(?TABLE, {Op, count}, 1),
55 ok.
56
57 get() ->
58 lists:sort(ets:tab2list(?TABLE)).
59
60 %% TODO timer:tc/1 was introduced in R14B03; use that function once we
61 %% require that version.
62 timer_tc(Thunk) ->
63 T1 = os:timestamp(),
64 Res = Thunk(),
65 T2 = os:timestamp(),
66 {timer:now_diff(T2, T1), Res}.
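The count/bytes/time bookkeeping done by `update/3` can be sketched in Python, with a plain dict standing in for the named ETS table; the keys mirror the `{Op, Counter}` tuples used above.

```python
import time
from collections import defaultdict

COUNTERS = defaultdict(int)

def update(op, nbytes, thunk):
    """Time a file operation and bump that op's count/bytes/time counters,
    returning the operation's own result unchanged."""
    t1 = time.monotonic()
    res = thunk()
    elapsed_us = int((time.monotonic() - t1) * 1_000_000)
    COUNTERS[(op, "count")] += 1
    COUNTERS[(op, "bytes")] += nbytes
    COUNTERS[(op, "time")] += elapsed_us
    return res
```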
1414 %%
1515
1616 -module(gatherer).
17
18 %% Gatherer is a queue which has producer and consumer processes. Before producers
19 %% push items to the queue using gatherer:in/2 they need to declare their intent
20 %% to do so with gatherer:fork/1. When a publisher's work is done, it states so
21 %% using gatherer:finish/1.
22 %%
23 %% Consumers pop messages off queues with gatherer:out/1. If a queue is empty
24 %% and there are producers that haven't finished working, the caller is blocked
25 %% until an item is available. If there are no active producers, gatherer:out/1
26 %% immediately returns 'empty'.
27 %%
28 %% This module is primarily used to collect results from asynchronous tasks
29 %% running in a worker pool, e.g. when recovering bindings or rebuilding
30 %% message store indices.
1731
1832 -behaviour(gen_server2).
1933
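The fork/in/finish/out protocol described in the new gatherer comment can be sketched as follows (gatherer:start_link/0 and the exact return shapes are assumptions; only the function names come from the comment):

```erlang
{ok, G} = gatherer:start_link(),
ok = gatherer:fork(G),                      %% declare intent to produce
spawn_link(fun () ->
               ok = gatherer:in(G, result), %% push an item
               ok = gatherer:finish(G)      %% done producing
           end),
{value, result} = gatherer:out(G),          %% blocks while producers are active
empty = gatherer:out(G).                    %% no active producers -> 'empty'
```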
1414 %%
1515
1616 -module(lqueue).
17
18 %% lqueue implements a subset of Erlang's queue module. lqueues
19 %% maintain their own length, so lqueue:len/1
20 %% is an O(1) operation, in contrast with queue:len/1 which is O(n).
1721
1822 -export([new/0, is_empty/1, len/1, in/2, in_r/2, out/1, out_r/1, join/2,
1923 foldl/3, foldr/3, from_list/1, to_list/1, peek/1, peek_r/1]).
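As a sketch of the length-tracking behaviour described above (return shapes assumed to mirror Erlang's queue module, of which lqueue implements a subset):

```erlang
Q0 = lqueue:new(),
Q1 = lqueue:in(a, Q0),
Q2 = lqueue:in(b, Q1),
2  = lqueue:len(Q2),               %% O(1): the length is carried alongside
{{value, a}, Q3} = lqueue:out(Q2),
1  = lqueue:len(Q3).
```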
1414 %%
1515
1616 -module(pmon).
17
18 %% Process Monitor
19 %% ================
20 %%
21 %% This module monitors processes so that every process has at most
22 %% one monitor.
23 %% Monitored processes can be dynamically added and removed.
24 %%
25 %% Unlike erlang:[de]monitor* functions, this module
26 %% provides basic querying capability and avoids contacting down nodes.
27 %%
28 %% It is used to monitor nodes, queue mirrors, and by
29 %% the queue collector, among other things.
1730
1831 -export([new/0, new/1, monitor/2, monitor_all/2, demonitor/2,
1932 is_monitored/2, erase/2, monitored/1, is_empty/1]).
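A sketch of the pmon API exported above (return values are assumptions based on the comment's semantics):

```erlang
M0 = pmon:new(),
M1 = pmon:monitor(Pid, M0),
M2 = pmon:monitor(Pid, M1),           %% no duplicate: still one monitor
true  = pmon:is_monitored(Pid, M2),
[Pid] = pmon:monitored(M2),
M3 = pmon:demonitor(Pid, M2),
false = pmon:is_monitored(Pid, M3).
```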
115115 {mfa, {rabbit_sup, start_restartable_child,
116116 [rabbit_node_monitor]}},
117117 {requires, [rabbit_alarm, guid_generator]},
118 {enables, core_initialized}]}).
119
120 -rabbit_boot_step({rabbit_epmd_monitor,
121 [{description, "epmd monitor"},
122 {mfa, {rabbit_sup, start_restartable_child,
123 [rabbit_epmd_monitor]}},
124 {requires, kernel_ready},
118125 {enables, core_initialized}]}).
119126
120127 -rabbit_boot_step({core_initialized,
242249 {ok, Want} = application:get_env(rabbit, hipe_compile),
243250 Can = code:which(hipe) =/= non_existing,
244251 case {Want, Can} of
245 {true, true} -> hipe_compile(),
246 true;
252 {true, true} -> hipe_compile();
247253 {true, false} -> false;
248 {false, _} -> true
249 end.
250
251 warn_if_hipe_compilation_failed(true) ->
254 {false, _} -> {ok, disabled}
255 end.
256
257 log_hipe_result({ok, disabled}) ->
252258 ok;
253 warn_if_hipe_compilation_failed(false) ->
259 log_hipe_result({ok, Count, Duration}) ->
260 rabbit_log:info(
261 "HiPE in use: compiled ~B modules in ~Bs.~n", [Count, Duration]);
262 log_hipe_result(false) ->
263 io:format(
264 "~nNot HiPE compiling: HiPE not found in this Erlang installation.~n"),
254265 rabbit_log:warning(
255266 "Not HiPE compiling: HiPE not found in this Erlang installation.~n").
256267
275286 {'DOWN', MRef, process, _, Reason} -> exit(Reason)
276287 end || {_Pid, MRef} <- PidMRefs],
277288 T2 = erlang:now(),
278 io:format("|~n~nCompiled ~B modules in ~Bs~n",
279 [Count, timer:now_diff(T2, T1) div 1000000]).
289 Duration = timer:now_diff(T2, T1) div 1000000,
290 io:format("|~n~nCompiled ~B modules in ~Bs~n", [Count, Duration]),
291 {ok, Count, Duration}.
280292
281293 split(L, N) -> split0(L, [[] || _ <- lists:seq(1, N)]).
282294
306318 boot() ->
307319 start_it(fun() ->
308320 ok = ensure_application_loaded(),
309 Success = maybe_hipe_compile(),
321 HipeResult = maybe_hipe_compile(),
310322 ok = ensure_working_log_handlers(),
311 warn_if_hipe_compilation_failed(Success),
323 log_hipe_result(HipeResult),
312324 rabbit_node_monitor:prepare_cluster_status_files(),
313325 ok = rabbit_upgrade:maybe_upgrade_mnesia(),
314326 %% It's important that the consistency check happens after
322334 Plugins = rabbit_plugins:setup(),
323335 ToBeLoaded = Plugins ++ ?APPS,
324336 start_apps(ToBeLoaded),
337 case code:load_file(sd_notify) of
338 {module, sd_notify} -> SDNotify = sd_notify,
339 SDNotify:sd_notify(0, "READY=1");
340 {error, _} -> ok
341 end,
325342 ok = log_broker_started(rabbit_plugins:active()).
326343
327344 start_it(StartFun) ->
333350 false -> StartFun()
334351 end
335352 catch
336 throw:{could_not_start, _App, _Reason}=Err ->
353 throw:{could_not_start, _App, _Reason} = Err ->
337354 boot_error(Err, not_available);
338355 _:Reason ->
339356 boot_error(Reason, erlang:get_stacktrace())
386403 ok.
387404
388405 handle_app_error(Term) ->
389 fun(App, {bad_return, {_MFA, {'EXIT', {ExitReason, _}}}}) ->
406 fun(App, {bad_return, {_MFA, {'EXIT', ExitReason}}}) ->
390407 throw({Term, App, ExitReason});
391408 (App, Reason) ->
392409 throw({Term, App, Reason})
393410 end.
394411
395412 run_cleanup_steps(Apps) ->
396 [run_step(Name, Attrs, cleanup) || {_, Name, Attrs} <- find_steps(Apps)],
413 [run_step(Attrs, cleanup) || Attrs <- find_steps(Apps)],
397414 ok.
398415
399416 await_startup() ->
521538 run_boot_steps([App || {App, _, _} <- application:loaded_applications()]).
522539
523540 run_boot_steps(Apps) ->
524 [ok = run_step(Step, Attrs, mfa) || {_, Step, Attrs} <- find_steps(Apps)],
541 [ok = run_step(Attrs, mfa) || Attrs <- find_steps(Apps)],
525542 ok.
526543
527544 find_steps(Apps) ->
528545 All = sort_boot_steps(rabbit_misc:all_module_attributes(rabbit_boot_step)),
529 [Step || {App, _, _} = Step <- All, lists:member(App, Apps)].
530
531 run_step(StepName, Attributes, AttributeName) ->
546 [Attrs || {App, _, Attrs} <- All, lists:member(App, Apps)].
547
548 run_step(Attributes, AttributeName) ->
532549 case [MFA || {Key, MFA} <- Attributes,
533550 Key =:= AttributeName] of
534551 [] ->
535552 ok;
536553 MFAs ->
537 [try
538 apply(M,F,A)
539 of
554 [case apply(M,F,A) of
540555 ok -> ok;
541 {error, Reason} -> boot_error({boot_step, StepName, Reason},
542 not_available)
543 catch
544 _:Reason -> boot_error({boot_step, StepName, Reason},
545 erlang:get_stacktrace())
556 {error, Reason} -> exit({error, Reason})
546557 end || {M,F,A} <- MFAs],
547558 ok
548559 end.
580591 {_App, StepName, Attributes} <- SortedSteps,
581592 {mfa, {M,F,A}} <- Attributes,
582593 not erlang:function_exported(M, F, length(A))] of
583 [] -> SortedSteps;
584 MissingFunctions -> basic_boot_error(
585 {missing_functions, MissingFunctions},
586 "Boot step functions not exported: ~p~n",
587 [MissingFunctions])
594 [] -> SortedSteps;
595 MissingFns -> exit({boot_functions_not_exported, MissingFns})
588596 end;
589597 {error, {vertex, duplicate, StepName}} ->
590 basic_boot_error({duplicate_boot_step, StepName},
591 "Duplicate boot step name: ~w~n", [StepName]);
598 exit({duplicate_boot_step, StepName});
592599 {error, {edge, Reason, From, To}} ->
593 basic_boot_error(
594 {invalid_boot_step_dependency, From, To},
595 "Could not add boot step dependency of ~w on ~w:~n~s",
596 [To, From,
597 case Reason of
598 {bad_vertex, V} ->
599 io_lib:format("Boot step not registered: ~w~n", [V]);
600 {bad_edge, [First | Rest]} ->
601 [io_lib:format("Cyclic dependency: ~w", [First]),
602 [io_lib:format(" depends on ~w", [Next]) ||
603 Next <- Rest],
604 io_lib:format(" depends on ~w~n", [First])]
605 end])
600 exit({invalid_boot_step_dependency, From, To, Reason})
606601 end.
607602
608603 -ifdef(use_specs).
609604 -spec(boot_error/2 :: (term(), not_available | [tuple()]) -> no_return()).
610605 -endif.
611 boot_error(Term={error, {timeout_waiting_for_tables, _}}, _Stacktrace) ->
606 boot_error({could_not_start, rabbit, {{timeout_waiting_for_tables, _}, _}},
607 _Stacktrace) ->
612608 AllNodes = rabbit_mnesia:cluster_nodes(all),
609 Suffix = "~nBACKGROUND~n==========~n~n"
610 "This cluster node was shut down while other nodes were still running.~n"
611 "To avoid losing data, you should start the other nodes first, then~n"
612 "start this one. To force this node to start, first invoke~n"
613 "\"rabbitmqctl force_boot\". If you do so, any changes made on other~n"
614 "cluster nodes after this one was shut down may be lost.~n",
613615 {Err, Nodes} =
614616 case AllNodes -- [node()] of
615617 [] -> {"Timeout contacting cluster nodes. Since RabbitMQ was"
616618 " shut down forcefully~nit cannot determine which nodes"
617 " are timing out.~n", []};
619 " are timing out.~n" ++ Suffix, []};
618620 Ns -> {rabbit_misc:format(
619 "Timeout contacting cluster nodes: ~p.~n", [Ns]),
621 "Timeout contacting cluster nodes: ~p.~n" ++ Suffix, [Ns]),
620622 Ns}
621623 end,
622 basic_boot_error(Term,
623 Err ++ rabbit_nodes:diagnostics(Nodes) ++ "~n~n", []);
624 log_boot_error_and_exit(
625 timeout_waiting_for_tables,
626 Err ++ rabbit_nodes:diagnostics(Nodes) ++ "~n~n", []);
624627 boot_error(Reason, Stacktrace) ->
625 Fmt = "Error description:~n ~p~n~n" ++
628 Fmt = "Error description:~n ~p~n~n"
626629 "Log files (may contain more information):~n ~s~n ~s~n~n",
627630 Args = [Reason, log_location(kernel), log_location(sasl)],
628631 boot_error(Reason, Fmt, Args, Stacktrace).
632635 -> no_return()).
633636 -endif.
634637 boot_error(Reason, Fmt, Args, not_available) ->
635 basic_boot_error(Reason, Fmt, Args);
638 log_boot_error_and_exit(Reason, Fmt, Args);
636639 boot_error(Reason, Fmt, Args, Stacktrace) ->
637 basic_boot_error(Reason, Fmt ++ "Stack trace:~n ~p~n~n",
638 Args ++ [Stacktrace]).
639
640 basic_boot_error(Reason, Format, Args) ->
640 log_boot_error_and_exit(Reason, Fmt ++ "Stack trace:~n ~p~n~n",
641 Args ++ [Stacktrace]).
642
643 log_boot_error_and_exit(Reason, Format, Args) ->
641644 io:format("~n~nBOOT FAILED~n===========~n~n" ++ Format, Args),
642645 rabbit_log:info(Format, Args),
643646 timer:sleep(1000),
644 exit({?MODULE, failure_during_boot, Reason}).
647 exit(Reason).
645648
646649 %%---------------------------------------------------------------------------
647650 %% boot step functions
1818 -include("rabbit.hrl").
1919
2020 -export([check_user_pass_login/2, check_user_login/2, check_user_loopback/2,
21 check_vhost_access/2, check_resource_access/3]).
21 check_vhost_access/3, check_resource_access/3]).
2222
2323 %%----------------------------------------------------------------------------
2424
3030
3131 -spec(check_user_pass_login/2 ::
3232 (rabbit_types:username(), rabbit_types:password())
33 -> {'ok', rabbit_types:user()} | {'refused', string(), [any()]}).
33 -> {'ok', rabbit_types:user()} |
34 {'refused', rabbit_types:username(), string(), [any()]}).
3435 -spec(check_user_login/2 ::
3536 (rabbit_types:username(), [{atom(), any()}])
36 -> {'ok', rabbit_types:user()} | {'refused', string(), [any()]}).
37 -> {'ok', rabbit_types:user()} |
38 {'refused', rabbit_types:username(), string(), [any()]}).
3739 -spec(check_user_loopback/2 :: (rabbit_types:username(),
3840 rabbit_net:socket() | inet:ip_address())
3941 -> 'ok' | 'not_allowed').
40 -spec(check_vhost_access/2 ::
41 (rabbit_types:user(), rabbit_types:vhost())
42 -spec(check_vhost_access/3 ::
43 (rabbit_types:user(), rabbit_types:vhost(), rabbit_net:socket())
4244 -> 'ok' | rabbit_types:channel_exit()).
4345 -spec(check_resource_access/3 ::
4446 (rabbit_types:user(), rabbit_types:r(atom()), permission_atom())
5456 check_user_login(Username, AuthProps) ->
5557 {ok, Modules} = application:get_env(rabbit, auth_backends),
5658 R = lists:foldl(
57 fun ({ModN, ModZ}, {refused, _, _}) ->
59 fun ({ModN, ModZs0}, {refused, _, _, _}) ->
60 ModZs = case ModZs0 of
61 A when is_atom(A) -> [A];
62 L when is_list(L) -> L
63 end,
5864 %% Different modules for authN vs authZ. So authenticate
5965 %% with authN module, then if that succeeds do
60 %% passwordless (i.e pre-authenticated) login with authZ
61 %% module, and use the #user{} the latter gives us.
62 case try_login(ModN, Username, AuthProps) of
63 {ok, _} -> try_login(ModZ, Username, []);
64 Else -> Else
66 %% passwordless (i.e. pre-authenticated) login with authZ.
67 case try_authenticate(ModN, Username, AuthProps) of
68 {ok, ModNUser = #auth_user{username = Username2}} ->
69 user(ModNUser, try_authorize(ModZs, Username2));
70 Else ->
71 Else
6572 end;
66 (Mod, {refused, _, _}) ->
73 (Mod, {refused, _, _, _}) ->
6774 %% Same module for authN and authZ. Just take the result
6875 %% it gives us
69 try_login(Mod, Username, AuthProps);
76 case try_authenticate(Mod, Username, AuthProps) of
77 {ok, ModNUser = #auth_user{impl = Impl}} ->
78 user(ModNUser, {ok, [{Mod, Impl}]});
79 Else ->
80 Else
81 end;
7082 (_, {ok, User}) ->
7183 %% We've successfully authenticated. Skip to the end...
7284 {ok, User}
73 end, {refused, "No modules checked '~s'", [Username]}, Modules),
74 rabbit_event:notify(case R of
75 {ok, _User} -> user_authentication_success;
76 _ -> user_authentication_failure
77 end, [{name, Username}]),
85 end,
86 {refused, Username, "No modules checked '~s'", [Username]}, Modules),
7887 R.
7988
80 try_login(Module, Username, AuthProps) ->
81 case Module:check_user_login(Username, AuthProps) of
82 {error, E} -> {refused, "~s failed authenticating ~s: ~p~n",
83 [Module, Username, E]};
84 Else -> Else
89 try_authenticate(Module, Username, AuthProps) ->
90 case Module:user_login_authentication(Username, AuthProps) of
91 {ok, AuthUser} -> {ok, AuthUser};
92 {error, E} -> {refused, Username,
93 "~s failed authenticating ~s: ~p~n",
94 [Module, Username, E]};
95 {refused, F, A} -> {refused, Username, F, A}
8596 end.
97
98 try_authorize(Modules, Username) ->
99 lists:foldr(
100 fun (Module, {ok, ModsImpls}) ->
101 case Module:user_login_authorization(Username) of
102 {ok, Impl} -> {ok, [{Module, Impl} | ModsImpls]};
103 {error, E} -> {refused, Username,
104 "~s failed authorizing ~s: ~p~n",
105 [Module, Username, E]};
106 {refused, F, A} -> {refused, Username, F, A}
107 end;
108 (_, {refused, F, A}) ->
109 {refused, Username, F, A}
110 end, {ok, []}, Modules).
111
112 user(#auth_user{username = Username, tags = Tags}, {ok, ModZImpls}) ->
113 {ok, #user{username = Username,
114 tags = Tags,
115 authz_backends = ModZImpls}};
116 user(_AuthUser, Error) ->
117 Error.
118
119 auth_user(#user{username = Username, tags = Tags}, Impl) ->
120 #auth_user{username = Username,
121 tags = Tags,
122 impl = Impl}.
86123
87124 check_user_loopback(Username, SockOrAddr) ->
88125 {ok, Users} = application:get_env(rabbit, loopback_users),
92129 false -> not_allowed
93130 end.
94131
95 check_vhost_access(User = #user{ username = Username,
96 auth_backend = Module }, VHostPath) ->
97 check_access(
98 fun() ->
99 %% TODO this could be an andalso shortcut under >R13A
100 case rabbit_vhost:exists(VHostPath) of
101 false -> false;
102 true -> Module:check_vhost_access(User, VHostPath)
103 end
104 end,
105 Module, "access to vhost '~s' refused for user '~s'",
106 [VHostPath, Username]).
132 check_vhost_access(User = #user{username = Username,
133 authz_backends = Modules}, VHostPath, Sock) ->
134 lists:foldl(
135 fun({Mod, Impl}, ok) ->
136 check_access(
137 fun() ->
138 rabbit_vhost:exists(VHostPath) andalso
139 Mod:check_vhost_access(
140 auth_user(User, Impl), VHostPath, Sock)
141 end,
142 Mod, "access to vhost '~s' refused for user '~s'",
143 [VHostPath, Username]);
144 (_, Else) ->
145 Else
146 end, ok, Modules).
107147
108148 check_resource_access(User, R = #resource{kind = exchange, name = <<"">>},
109149 Permission) ->
110150 check_resource_access(User, R#resource{name = <<"amq.default">>},
111151 Permission);
112 check_resource_access(User = #user{username = Username, auth_backend = Module},
152 check_resource_access(User = #user{username = Username,
153 authz_backends = Modules},
113154 Resource, Permission) ->
114 check_access(
115 fun() -> Module:check_resource_access(User, Resource, Permission) end,
116 Module, "access to ~s refused for user '~s'",
117 [rabbit_misc:rs(Resource), Username]).
155 lists:foldl(
156 fun({Module, Impl}, ok) ->
157 check_access(
158 fun() -> Module:check_resource_access(
159 auth_user(User, Impl), Resource, Permission) end,
160 Module, "access to ~s refused for user '~s'",
161 [rabbit_misc:rs(Resource), Username]);
162 (_, Else) -> Else
163 end, ok, Modules).
118164
119165 check_access(Fun, Module, ErrStr, ErrArgs) ->
120166 Allow = case Fun() of
2222 -export([lookup/1, not_found_or_absent/1, with/2, with/3, with_or_die/2,
2323 assert_equivalence/5,
2424 check_exclusive_access/2, with_exclusive_access_or_die/3,
25 stat/1, deliver/2, deliver_flow/2, requeue/3, ack/3, reject/4]).
25 stat/1, deliver/2, requeue/3, ack/3, reject/4]).
2626 -export([list/0, list/1, info_keys/0, info/1, info/2, info_all/1, info_all/2]).
2727 -export([list_down/1]).
2828 -export([force_event_refresh/1, notify_policy_changed/1]).
148148 -spec(forget_all_durable/1 :: (node()) -> 'ok').
149149 -spec(deliver/2 :: ([rabbit_types:amqqueue()], rabbit_types:delivery()) ->
150150 qpids()).
151 -spec(deliver_flow/2 :: ([rabbit_types:amqqueue()], rabbit_types:delivery()) ->
152 qpids()).
153151 -spec(requeue/3 :: (pid(), [msg_id()], pid()) -> 'ok').
154152 -spec(ack/3 :: (pid(), [msg_id()], pid()) -> 'ok').
155153 -spec(reject/4 :: (pid(), [msg_id()], boolean(), pid()) -> 'ok').
264262 declare(QueueName, Durable, AutoDelete, Args, Owner, Node) ->
265263 ok = check_declare_arguments(QueueName, Args),
266264 Q = rabbit_queue_decorator:set(
267 rabbit_policy:set(#amqqueue{name = QueueName,
268 durable = Durable,
269 auto_delete = AutoDelete,
270 arguments = Args,
271 exclusive_owner = Owner,
272 pid = none,
273 slave_pids = [],
274 sync_slave_pids = [],
275 down_slave_nodes = [],
276 gm_pids = [],
277 state = live})),
265 rabbit_policy:set(#amqqueue{name = QueueName,
266 durable = Durable,
267 auto_delete = AutoDelete,
268 arguments = Args,
269 exclusive_owner = Owner,
270 pid = none,
271 slave_pids = [],
272 sync_slave_pids = [],
273 recoverable_slaves = [],
274 gm_pids = [],
275 state = live})),
278276 Node = rabbit_mirror_queue_misc:initial_queue_node(Q, Node),
279277 gen_server2:call(
280278 rabbit_amqqueue_sup_sup:start_queue_process(Node, Q, declare),
468466 {<<"x-dead-letter-exchange">>, fun check_dlxname_arg/2},
469467 {<<"x-dead-letter-routing-key">>, fun check_dlxrk_arg/2},
470468 {<<"x-max-length">>, fun check_non_neg_int_arg/2},
471 {<<"x-max-length-bytes">>, fun check_non_neg_int_arg/2}].
469 {<<"x-max-length-bytes">>, fun check_non_neg_int_arg/2},
470 {<<"x-max-priority">>, fun check_non_neg_int_arg/2}].
472471
473472 consume_args() -> [{<<"x-priority">>, fun check_int_arg/2},
474473 {<<"x-cancel-on-ha-failover">>, fun check_bool_arg/2}].
559558 info_down(Q, Items, DownReason) ->
560559 [{Item, i_down(Item, Q, DownReason)} || Item <- Items].
561560
562 i_down(name, #amqqueue{name = Name}, _) -> Name;
563 i_down(durable, #amqqueue{durable = Durable},_) -> Durable;
564 i_down(auto_delete, #amqqueue{auto_delete = AD}, _) -> AD;
565 i_down(arguments, #amqqueue{arguments = Args}, _) -> Args;
566 i_down(pid, #amqqueue{pid = QPid}, _) -> QPid;
567 i_down(down_slave_nodes, #amqqueue{down_slave_nodes = DSN}, _) -> DSN;
561 i_down(name, #amqqueue{name = Name}, _) -> Name;
562 i_down(durable, #amqqueue{durable = Dur}, _) -> Dur;
563 i_down(auto_delete, #amqqueue{auto_delete = AD}, _) -> AD;
564 i_down(arguments, #amqqueue{arguments = Args}, _) -> Args;
565 i_down(pid, #amqqueue{pid = QPid}, _) -> QPid;
566 i_down(recoverable_slaves, #amqqueue{recoverable_slaves = RS}, _) -> RS;
568567 i_down(state, _Q, DownReason) -> DownReason;
569568 i_down(K, _Q, _DownReason) ->
570569 case lists:member(K, rabbit_amqqueue_process:info_keys()) of
621620 ok = internal_delete(QName).
622621
623622 purge(#amqqueue{ pid = QPid }) -> delegate:call(QPid, purge).
624
625 deliver(Qs, Delivery) -> deliver(Qs, Delivery, noflow).
626
627 deliver_flow(Qs, Delivery) -> deliver(Qs, Delivery, flow).
628623
629624 requeue(QPid, MsgIds, ChPid) -> delegate:call(QPid, {requeue, MsgIds, ChPid}).
630625
723718 fun () ->
724719 Qs = mnesia:match_object(rabbit_durable_queue,
725720 #amqqueue{_ = '_'}, write),
726 [forget_node_for_queue(Q) || #amqqueue{pid = Pid} = Q <- Qs,
721 [forget_node_for_queue(Node, Q) ||
722 #amqqueue{pid = Pid} = Q <- Qs,
727723 node(Pid) =:= Node],
728724 ok
729725 end),
730726 ok.
731727
732 forget_node_for_queue(#amqqueue{name = Name,
733 down_slave_nodes = []}) ->
728 %% Try to promote a slave while down - it should recover as a
729 %% master. We try to take the oldest slave here for best chance of
730 %% recovery.
731 forget_node_for_queue(DeadNode, Q = #amqqueue{recoverable_slaves = RS}) ->
732 forget_node_for_queue(DeadNode, RS, Q).
733
734 forget_node_for_queue(_DeadNode, [], #amqqueue{name = Name}) ->
734735 %% No slaves to recover from, queue is gone.
735736 %% Don't process_deletions since that just calls callbacks and we
736737 %% are not really up.
737738 internal_delete1(Name, true);
738739
739 forget_node_for_queue(Q = #amqqueue{down_slave_nodes = [H|T]}) ->
740 %% Promote a slave while down - it'll happily recover as a master
741 Q1 = Q#amqqueue{pid = rabbit_misc:node_to_fake_pid(H),
742 down_slave_nodes = T},
743 ok = mnesia:write(rabbit_durable_queue, Q1, write).
740 %% Should not happen, but let's be conservative.
741 forget_node_for_queue(DeadNode, [DeadNode | T], Q) ->
742 forget_node_for_queue(DeadNode, T, Q);
743
744 forget_node_for_queue(DeadNode, [H|T], Q) ->
745 case node_permits_offline_promotion(H) of
746 false -> forget_node_for_queue(DeadNode, T, Q);
747 true -> Q1 = Q#amqqueue{pid = rabbit_misc:node_to_fake_pid(H)},
748 ok = mnesia:write(rabbit_durable_queue, Q1, write)
749 end.
750
751 node_permits_offline_promotion(Node) ->
752 case node() of
753 Node -> not rabbit:is_running(); %% [1]
754 _ -> Running = rabbit_mnesia:cluster_nodes(running),
755 not lists:member(Node, Running) %% [2]
756 end.
757 %% [1] In this case if we are a real running node (i.e. rabbitmqctl
758 %% has RPCed into us) then we cannot allow promotion. If on the other
759 %% hand we *are* rabbitmqctl impersonating the node for offline
760 %% node-forgetting then we can.
761 %%
762 %% [2] This is simpler; as long as it's down that's OK
744763
745764 run_backing_queue(QPid, Mod, Fun) ->
746765 gen_server2:cast(QPid, {run_backing_queue, Mod, Fun}).
762781 fun () ->
763782 Qs = mnesia:match_object(rabbit_queue,
764783 #amqqueue{_ = '_'}, write),
765 [case lists:member(Node, DSNs) of
766 true -> DSNs1 = DSNs -- [Node],
784 [case lists:member(Node, RSs) of
785 true -> RSs1 = RSs -- [Node],
767786 store_queue(
768 Q#amqqueue{down_slave_nodes = DSNs1});
787 Q#amqqueue{recoverable_slaves = RSs1});
769788 false -> ok
770 end || #amqqueue{down_slave_nodes = DSNs} = Q <- Qs],
789 end || #amqqueue{recoverable_slaves = RSs} = Q <- Qs],
771790 ok
772791 end).
773792
806825 pid = Pid,
807826 slave_pids = []}.
808827
809 immutable(Q) -> Q#amqqueue{pid = none,
810 slave_pids = none,
811 sync_slave_pids = none,
812 down_slave_nodes = none,
813 gm_pids = none,
814 policy = none,
815 decorators = none,
816 state = none}.
817
818 deliver([], _Delivery, _Flow) ->
828 immutable(Q) -> Q#amqqueue{pid = none,
829 slave_pids = none,
830 sync_slave_pids = none,
831 recoverable_slaves = none,
832 gm_pids = none,
833 policy = none,
834 decorators = none,
835 state = none}.
836
837 deliver([], _Delivery) ->
819838 %% /dev/null optimisation
820839 [];
821840
822 deliver(Qs, Delivery, Flow) ->
841 deliver(Qs, Delivery = #delivery{flow = Flow}) ->
823842 {MPids, SPids} = qpids(Qs),
824843 QPids = MPids ++ SPids,
844 %% We use up two credits to send to a slave since the message
845 %% arrives at the slave from two directions. We will ack one when
846 %% the slave receives the message directly from the channel, and the
847 %% other when it receives it via GM.
825848 case Flow of
826 flow -> [credit_flow:send(QPid) || QPid <- QPids];
849 flow -> [credit_flow:send(QPid) || QPid <- QPids],
850 [credit_flow:send(QPid) || QPid <- SPids];
827851 noflow -> ok
828852 end,
829853
832856 %% after they have become master they should mark the message as
833857 %% 'delivered' since they do not know what the master may have
834858 %% done with it.
835 MMsg = {deliver, Delivery, false, Flow},
836 SMsg = {deliver, Delivery, true, Flow},
859 MMsg = {deliver, Delivery, false},
860 SMsg = {deliver, Delivery, true},
837861 delegate:cast(MPids, MMsg),
838862 delegate:cast(SPids, SMsg),
839863 QPids.
8282 memory,
8383 slave_pids,
8484 synchronised_slave_pids,
85 down_slave_nodes,
85 recoverable_slaves,
8686 state
8787 ]).
8888
152152 #amqqueue{} = Q1 ->
153153 case matches(Recover, Q, Q1) of
154154 true ->
155 send_reply(From, {new, Q}),
156155 ok = file_handle_cache:register_callback(
157156 rabbit_amqqueue, set_maximum_since_use, [self()]),
158157 ok = rabbit_memory_monitor:register(
160159 set_ram_duration_target, [self()]}),
161160 BQ = backing_queue_module(Q1),
162161 BQS = bq_init(BQ, Q, TermsOrNew),
162 send_reply(From, {new, Q}),
163163 recovery_barrier(Barrier),
164164 State1 = process_args_policy(
165165 State#q{backing_queue = BQ,
497497
498498 discard(#delivery{confirm = Confirm,
499499 sender = SenderPid,
500 flow = Flow,
500501 message = #basic_message{id = MsgId}}, BQ, BQS, MTC) ->
501502 MTC1 = case Confirm of
502503 true -> confirm_messages([MsgId], MTC);
503504 false -> MTC
504505 end,
505 BQS1 = BQ:discard(MsgId, SenderPid, BQS),
506 BQS1 = BQ:discard(MsgId, SenderPid, Flow, BQS),
506507 {BQS1, MTC1}.
507508
508509 run_message_queue(State) -> run_message_queue(false, State).
524525 end
525526 end.
526527
527 attempt_delivery(Delivery = #delivery{sender = SenderPid, message = Message},
528 attempt_delivery(Delivery = #delivery{sender = SenderPid,
529 flow = Flow,
530 message = Message},
528531 Props, Delivered, State = #q{backing_queue = BQ,
529532 backing_queue_state = BQS,
530533 msg_id_to_channel = MTC}) ->
531534 case rabbit_queue_consumers:deliver(
532535 fun (true) -> true = BQ:is_empty(BQS),
533 {AckTag, BQS1} = BQ:publish_delivered(
534 Message, Props, SenderPid, BQS),
536 {AckTag, BQS1} =
537 BQ:publish_delivered(
538 Message, Props, SenderPid, Flow, BQS),
535539 {{Message, Delivered, AckTag}, {BQS1, MTC}};
536540 (false) -> {{Message, Delivered, undefined},
537541 discard(Delivery, BQ, BQS, MTC)}
548552 State#q{consumers = Consumers})}
549553 end.
550554
551 deliver_or_enqueue(Delivery = #delivery{message = Message, sender = SenderPid},
555 deliver_or_enqueue(Delivery = #delivery{message = Message,
556 sender = SenderPid,
557 flow = Flow},
552558 Delivered, State = #q{backing_queue = BQ,
553559 backing_queue_state = BQS}) ->
554560 send_mandatory(Delivery), %% must do this before confirms
569575 {BQS3, MTC1} = discard(Delivery, BQ, BQS2, MTC),
570576 State3#q{backing_queue_state = BQS3, msg_id_to_channel = MTC1};
571577 {undelivered, State3 = #q{backing_queue_state = BQS2}} ->
572 BQS3 = BQ:publish(Message, Props, Delivered, SenderPid, BQS2),
578 BQS3 = BQ:publish(Message, Props, Delivered, SenderPid, Flow, BQS2),
573579 {Dropped, State4 = #q{backing_queue_state = BQS4}} =
574580 maybe_drop_head(State3#q{backing_queue_state = BQS3}),
575581 QLen = BQ:len(BQS4),
854860 false -> '';
855861 true -> SSPids
856862 end;
857 i(down_slave_nodes, #q{q = #amqqueue{name = Name,
858 durable = Durable}}) ->
859 {ok, Q = #amqqueue{down_slave_nodes = Nodes}} =
863 i(recoverable_slaves, #q{q = #amqqueue{name = Name,
864 durable = Durable}}) ->
865 {ok, Q = #amqqueue{recoverable_slaves = Nodes}} =
860866 rabbit_amqqueue:lookup(Name),
861867 case Durable andalso rabbit_mirror_queue_misc:is_mirrored(Q) of
862868 false -> '';
10991105 State = #q{backing_queue = BQ, backing_queue_state = BQS}) ->
11001106 noreply(State#q{backing_queue_state = BQ:invoke(Mod, Fun, BQS)});
11011107
1102 handle_cast({deliver, Delivery = #delivery{sender = Sender}, Delivered, Flow},
1108 handle_cast({deliver, Delivery = #delivery{sender = Sender,
1109 flow = Flow}, SlaveWhenPublished},
11031110 State = #q{senders = Senders}) ->
11041111 Senders1 = case Flow of
11051112 flow -> credit_flow:ack(Sender),
1113 case SlaveWhenPublished of
1114 true -> credit_flow:ack(Sender); %% [0]
1115 false -> ok
1116 end,
11061117 pmon:monitor(Sender, Senders);
11071118 noflow -> Senders
11081119 end,
11091120 State1 = State#q{senders = Senders1},
1110 noreply(deliver_or_enqueue(Delivery, Delivered, State1));
1121 noreply(deliver_or_enqueue(Delivery, SlaveWhenPublished, State1));
1122 %% [0] The second ack is since the channel thought we were a slave at
1123 %% the time it published this message, so it used two credits (see
1124 %% rabbit_amqqueue:deliver/2).
11111125
11121126 handle_cast({ack, AckTags, ChPid}, State) ->
11131127 noreply(ack(AckTags, ChPid, State));
+0
-72
src/rabbit_auth_backend.erl
0 %% The contents of this file are subject to the Mozilla Public License
1 %% Version 1.1 (the "License"); you may not use this file except in
2 %% compliance with the License. You may obtain a copy of the License
3 %% at http://www.mozilla.org/MPL/
4 %%
5 %% Software distributed under the License is distributed on an "AS IS"
6 %% basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See
7 %% the License for the specific language governing rights and
8 %% limitations under the License.
9 %%
10 %% The Original Code is RabbitMQ.
11 %%
12 %% The Initial Developer of the Original Code is GoPivotal, Inc.
13 %% Copyright (c) 2007-2014 GoPivotal, Inc. All rights reserved.
14 %%
15
16 -module(rabbit_auth_backend).
17
18 -ifdef(use_specs).
19
20 %% A description proplist as with auth mechanisms,
21 %% exchanges. Currently unused.
22 -callback description() -> [proplists:property()].
23
24 %% Check a user can log in, given a username and a proplist of
25 %% authentication information (e.g. [{password, Password}]).
26 %%
27 %% Possible responses:
28 %% {ok, User}
29 %% Authentication succeeded, and here's the user record.
30 %% {error, Error}
31 %% Something went wrong. Log and die.
32 %% {refused, Msg, Args}
33 %% Client failed authentication. Log and die.
34 -callback check_user_login(rabbit_types:username(), [term()]) ->
35 {'ok', rabbit_types:user()} |
36 {'refused', string(), [any()]} |
37 {'error', any()}.
38
39 %% Given #user and vhost, can a user log in to a vhost?
40 %% Possible responses:
41 %% true
42 %% false
43 %% {error, Error}
44 %% Something went wrong. Log and die.
45 -callback check_vhost_access(rabbit_types:user(), rabbit_types:vhost()) ->
46 boolean() | {'error', any()}.
47
48
49 %% Given #user, resource and permission, can a user access a resource?
50 %%
51 %% Possible responses:
52 %% true
53 %% false
54 %% {error, Error}
55 %% Something went wrong. Log and die.
56 -callback check_resource_access(rabbit_types:user(),
57 rabbit_types:r(atom()),
58 rabbit_access_control:permission_atom()) ->
59 boolean() | {'error', any()}.
60
61 -else.
62
63 -export([behaviour_info/1]).
64
65 behaviour_info(callbacks) ->
66 [{description, 0}, {check_user_login, 2}, {check_vhost_access, 2},
67 {check_resource_access, 3}];
68 behaviour_info(_Other) ->
69 undefined.
70
71 -endif.
1616 -module(rabbit_auth_backend_dummy).
1717 -include("rabbit.hrl").
1818
19 -behaviour(rabbit_auth_backend).
19 -behaviour(rabbit_authn_backend).
20 -behaviour(rabbit_authz_backend).
2021
21 -export([description/0]).
2222 -export([user/0]).
23 -export([check_user_login/2, check_vhost_access/2, check_resource_access/3]).
23 -export([user_login_authentication/2, user_login_authorization/1,
24 check_vhost_access/3, check_resource_access/3]).
2425
2526 -ifdef(use_specs).
2627
3031
3132 %% A user to be used by the direct client when permission checks are
3233 %% not needed. This user can do anything AMQPish.
33 user() -> #user{username = <<"none">>,
34 tags = [],
35 auth_backend = ?MODULE,
36 impl = none}.
34 user() -> #user{username = <<"none">>,
35 tags = [],
36 authz_backends = [{?MODULE, none}]}.
3737
3838 %% Implementation of rabbit_auth_backend
3939
40 description() ->
41 [{name, <<"Dummy">>},
42 {description, <<"Database for the dummy user">>}].
43
44 check_user_login(_, _) ->
40 user_login_authentication(_, _) ->
4541 {refused, "cannot log in conventionally as dummy user", []}.
4642
47 check_vhost_access(#user{}, _VHostPath) -> true.
48 check_resource_access(#user{}, #resource{}, _Permission) -> true.
43 user_login_authorization(_) ->
44 {refused, "cannot log in conventionally as dummy user", []}.
45
46 check_vhost_access(#auth_user{}, _VHostPath, _Sock) -> true.
47 check_resource_access(#auth_user{}, #resource{}, _Permission) -> true.
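The diff splits the old rabbit_auth_backend behaviour into rabbit_authn_backend and rabbit_authz_backend, passing #auth_user{} to the authZ callbacks. A hypothetical minimal backend against the new callbacks (the module name and permissive results are illustrative; callback names and arities are taken from the hunks above):

```erlang
-module(rabbit_auth_backend_example).
-include("rabbit.hrl").

-behaviour(rabbit_authn_backend).
-behaviour(rabbit_authz_backend).

-export([user_login_authentication/2, user_login_authorization/1,
         check_vhost_access/3, check_resource_access/3]).

%% authN: accept anyone, no tags, no backend-specific state.
user_login_authentication(Username, _AuthProps) ->
    {ok, #auth_user{username = Username, tags = [], impl = none}}.

%% authZ: the Impl term returned here is threaded back into the
%% check_* callbacks via the {Module, Impl} pairs in #user.authz_backends.
user_login_authorization(_Username) ->
    {ok, none}.

check_vhost_access(#auth_user{}, _VHostPath, _Sock) -> true.
check_resource_access(#auth_user{}, #resource{}, _Permission) -> true.
```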
1616 -module(rabbit_auth_backend_internal).
1717 -include("rabbit.hrl").
1818
19 -behaviour(rabbit_auth_backend).
20
21 -export([description/0]).
22 -export([check_user_login/2, check_vhost_access/2, check_resource_access/3]).
19 -behaviour(rabbit_authn_backend).
20 -behaviour(rabbit_authz_backend).
21
22 -export([user_login_authentication/2, user_login_authorization/1,
23 check_vhost_access/3, check_resource_access/3]).
2324
2425 -export([add_user/2, delete_user/1, lookup_user/1,
2526 change_password/2, clear_password/1,
7576 %%----------------------------------------------------------------------------
7677 %% Implementation of rabbit_auth_backend
7778
78 description() ->
79 [{name, <<"Internal">>},
80 {description, <<"Internal user / password database">>}].
81
82 check_user_login(Username, []) ->
79 user_login_authentication(Username, []) ->
8380 internal_check_user_login(Username, fun(_) -> true end);
84 check_user_login(Username, [{password, Cleartext}]) ->
81 user_login_authentication(Username, [{password, Cleartext}]) ->
8582 internal_check_user_login(
8683 Username,
8784 fun (#internal_user{password_hash = <<Salt:4/binary, Hash/binary>>}) ->
8986 (#internal_user{}) ->
9087 false
9188 end);
92 check_user_login(Username, AuthProps) ->
89 user_login_authentication(Username, AuthProps) ->
9390 exit({unknown_auth_props, Username, AuthProps}).
91
92 user_login_authorization(Username) ->
93 case user_login_authentication(Username, []) of
94 {ok, #auth_user{impl = Impl}} -> {ok, Impl};
95 Else -> Else
96 end.
9497
9598 internal_check_user_login(Username, Fun) ->
9699 Refused = {refused, "user '~s' - invalid credentials", [Username]},
97100 case lookup_user(Username) of
98101 {ok, User = #internal_user{tags = Tags}} ->
99102 case Fun(User) of
100 true -> {ok, #user{username = Username,
101 tags = Tags,
102 auth_backend = ?MODULE,
103 impl = User}};
103 true -> {ok, #auth_user{username = Username,
104 tags = Tags,
105 impl = none}};
104106 _ -> Refused
105107 end;
106108 {error, not_found} ->
107109 Refused
108110 end.
109111
110 check_vhost_access(#user{username = Username}, VHostPath) ->
112 check_vhost_access(#auth_user{username = Username}, VHostPath, _Sock) ->
111113 case mnesia:dirty_read({rabbit_user_permission,
112114 #user_vhost{username = Username,
113115 virtual_host = VHostPath}}) of
115117 [_R] -> true
116118 end.
117119
118 check_resource_access(#user{username = Username},
120 check_resource_access(#auth_user{username = Username},
119121 #resource{virtual_host = VHostPath, name = Name},
120122 Permission) ->
121123 case mnesia:dirty_read({rabbit_user_permission,
3535 %% Another round is needed. Here's the state I want next time.
3636 %% {protocol_error, Msg, Args}
3737 %% Client got the protocol wrong. Log and die.
38 %% {refused, Msg, Args}
38 %% {refused, Username, Msg, Args}
3939 %% Client failed authentication. Log and die.
4040 -callback handle_response(binary(), any()) ->
4141 {'ok', rabbit_types:user()} |
4242 {'challenge', binary(), any()} |
4343 {'protocol_error', string(), [any()]} |
44 {'refused', string(), [any()]}.
44 {'refused', rabbit_types:username() | none, string(), [any()]}.
4545
4646 -else.
4747
0 %% The contents of this file are subject to the Mozilla Public License
1 %% Version 1.1 (the "License"); you may not use this file except in
2 %% compliance with the License. You may obtain a copy of the License
3 %% at http://www.mozilla.org/MPL/
4 %%
5 %% Software distributed under the License is distributed on an "AS IS"
6 %% basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See
7 %% the License for the specific language governing rights and
8 %% limitations under the License.
9 %%
10 %% The Original Code is RabbitMQ.
11 %%
12 %% The Initial Developer of the Original Code is GoPivotal, Inc.
13 %% Copyright (c) 2007-2014 GoPivotal, Inc. All rights reserved.
14 %%
15
16 -module(rabbit_authn_backend).
17
18 -include("rabbit.hrl").
19
20 -ifdef(use_specs).
21
22 %% Check a user can log in, given a username and a proplist of
23 %% authentication information (e.g. [{password, Password}]). If your
24 %% backend is not to be used for authentication, this should always
25 %% refuse access.
26 %%
27 %% Possible responses:
28 %% {ok, User}
29 %% Authentication succeeded, and here's the user record.
30 %% {error, Error}
31 %% Something went wrong. Log and die.
32 %% {refused, Msg, Args}
33 %% Client failed authentication. Log and die.
34 -callback user_login_authentication(rabbit_types:username(), [term()]) ->
35 {'ok', rabbit_types:auth_user()} |
36 {'refused', string(), [any()]} |
37 {'error', any()}.
38
39 -else.
40
41 -export([behaviour_info/1]).
42
43 behaviour_info(callbacks) ->
44 [{user_login_authentication, 2}];
45 behaviour_info(_Other) ->
46 undefined.
47
48 -endif.
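A minimal sketch of a module implementing this behaviour might look as follows. The module name, password and include path are hypothetical; a real backend would check credentials against an actual store.

```erlang
%% Hypothetical example module; not part of the server.
-module(my_authn_backend).
-behaviour(rabbit_authn_backend).

-include_lib("rabbit_common/include/rabbit.hrl").

-export([user_login_authentication/2]).

%% Accept any user presenting the (obviously insecure) password
%% <<"s3cret">>; refuse everything else.
user_login_authentication(Username, AuthProps) ->
    case proplists:get_value(password, AuthProps) of
        <<"s3cret">> -> {ok, #auth_user{username = Username,
                                        tags     = [],
                                        impl     = none}};
        _            -> {refused, "denied for user '~s'", [Username]}
    end.
```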
0 %% The contents of this file are subject to the Mozilla Public License
1 %% Version 1.1 (the "License"); you may not use this file except in
2 %% compliance with the License. You may obtain a copy of the License
3 %% at http://www.mozilla.org/MPL/
4 %%
5 %% Software distributed under the License is distributed on an "AS IS"
6 %% basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See
7 %% the License for the specific language governing rights and
8 %% limitations under the License.
9 %%
10 %% The Original Code is RabbitMQ.
11 %%
12 %% The Initial Developer of the Original Code is GoPivotal, Inc.
13 %% Copyright (c) 2007-2014 GoPivotal, Inc. All rights reserved.
14 %%
15
16 -module(rabbit_authz_backend).
17
18 -include("rabbit.hrl").
19
20 -ifdef(use_specs).
21
22 %% Check that a user can log in when this backend is used for
23 %% authorisation only. Authentication has already taken place
24 %% successfully, but we need to check that the user exists in this
25 %% backend, and initialise any impl field we will want to have passed
26 %% back in future calls to check_vhost_access/3 and
27 %% check_resource_access/3.
28 %%
29 %% Possible responses:
30 %% {ok, Impl}
31 %% User authorisation succeeded, and here's the impl field.
32 %% {error, Error}
33 %% Something went wrong. Log and die.
34 %% {refused, Msg, Args}
35 %% User authorisation failed. Log and die.
36 -callback user_login_authorization(rabbit_types:username()) ->
37 {'ok', any()} |
38 {'refused', string(), [any()]} |
39 {'error', any()}.
40
41 %% Given #auth_user and vhost, can a user log in to a vhost?
42 %% Possible responses:
43 %% true
44 %% false
45 %% {error, Error}
46 %% Something went wrong. Log and die.
47 -callback check_vhost_access(rabbit_types:auth_user(),
48 rabbit_types:vhost(), rabbit_net:socket()) ->
49 boolean() | {'error', any()}.
50
51 %% Given #auth_user, resource and permission, can a user access a resource?
52 %%
53 %% Possible responses:
54 %% true
55 %% false
56 %% {error, Error}
57 %% Something went wrong. Log and die.
58 -callback check_resource_access(rabbit_types:auth_user(),
59 rabbit_types:r(atom()),
60 rabbit_access_control:permission_atom()) ->
61 boolean() | {'error', any()}.
62
63 -else.
64
65 -export([behaviour_info/1]).
66
67 behaviour_info(callbacks) ->
68 [{user_login_authorization, 1},
69 {check_vhost_access, 3}, {check_resource_access, 3}];
70 behaviour_info(_Other) ->
71 undefined.
72
73 -endif.
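For comparison, a sketch of an authorisation-only backend implementing these three callbacks (module name and policy are hypothetical) could be:

```erlang
%% Hypothetical example module; not part of the server.
-module(my_authz_backend).
-behaviour(rabbit_authz_backend).

-include_lib("rabbit_common/include/rabbit.hrl").

-export([user_login_authorization/1,
         check_vhost_access/3, check_resource_access/3]).

%% No per-user impl state is needed here.
user_login_authorization(_Username) ->
    {ok, none}.

%% Only allow access to the default vhost.
check_vhost_access(#auth_user{}, <<"/">>, _Sock) -> true;
check_vhost_access(#auth_user{}, _VHost, _Sock) -> false.

check_resource_access(#auth_user{}, #resource{}, _Permission) -> true.
```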
1515
1616 -module(rabbit_autoheal).
1717
18 -export([init/0, maybe_start/1, rabbit_down/2, node_down/2, handle_msg/3]).
18 -export([init/0, enabled/0, maybe_start/1, rabbit_down/2, node_down/2,
19 handle_msg/3]).
1920
2021 %% The named process we are running in.
2122 -define(SERVER, rabbit_node_monitor).
2223
2324 -define(MNESIA_STOPPED_PING_INTERNAL, 200).
25
26 -define(AUTOHEAL_STATE_AFTER_RESTART, rabbit_autoheal_state_after_restart).
2427
2528 %%----------------------------------------------------------------------------
2629
4447 %% stops - if a node stops for any other reason it just gets a message
4548 %% it will ignore, and otherwise we carry on.
4649 %%
50 %% Meanwhile, the leader may continue to receive new autoheal requests:
51 %% all of them are ignored. The winner notifies the leader when the
52 %% current autoheal process is finished (i.e. when all losers have
53 %% stopped and been asked to start again) or aborted. When the leader
54 %% receives the notification or loses contact with the winner, it can
55 %% accept new autoheal requests.
56 %%
4757 %% The winner and the leader are not necessarily the same node.
4858 %%
49 %% Possible states:
59 %% The leader can be a loser and will restart in this case. It remembers
60 %% there is an autoheal in progress by temporarily saving the autoheal
61 %% state to the application environment.
62 %%
63 %% == Possible states ==
5064 %%
5165 %% not_healing
5266 %% - the default
5569 %% - we are the winner and are waiting for all losing nodes to stop
5670 %% before telling them they can restart
5771 %%
58 %% about_to_heal
59 %% - we are the leader, and have already assigned the winner and
60 %% losers. We are part of the losers and we wait for the winner_is
61 %% announcement. This leader-specific state differs from not_healing
62 %% (the state other losers are in), because the leader could still
63 %% receive request_start messages: those subsequent requests must be
64 %% ignored.
65 %%
66 %% {leader_waiting, OutstandingStops}
72 %% {leader_waiting, Winner, Notify}
6773 %% - we are the leader, and have already assigned the winner and losers.
68 %% We are neither but need to ignore further requests to autoheal.
74 %% We are waiting for a confirmation from the winner that the autoheal
75 %% process has ended. Meanwhile we can ignore autoheal requests.
76 %% Because we may be a loser too, this state is saved to the application
77 %% environment and restored on startup.
6978 %%
7079 %% restarting
7180 %% - we are restarting. Of course the node monitor immediately dies
7281 %% then so this state does not last long. We therefore send the
7382 %% autoheal_safe_to_start message to the rabbit_outside_app_process
7483 %% instead.
84 %%
85 %% == Message flow ==
86 %%
87 %% 1. Any node (leader included) >> {request_start, node()} >> Leader
88 %% When Mnesia detects it is running partitioned or
89 %% when a remote node starts, rabbit_node_monitor calls
90 %% rabbit_autoheal:maybe_start/1. The message above is sent to the
91 %% leader so the leader can make a decision.
92 %%
93 %% 2. Leader >> {become_winner, Losers} >> Winner
94 %% The leader notifies the winner so the latter can proceed with
95 %% the autoheal.
96 %%
97 %% 3. Winner >> {winner_is, Winner} >> All losers
98 %% The winner notifies losers they must stop.
99 %%
100 %% 4. Winner >> autoheal_safe_to_start >> All losers
101 %% When either all losers have stopped or the autoheal process was
102 %% aborted, the winner notifies losers they can start again.
103 %%
104 %% 5. Leader >> report_autoheal_status >> Winner
105 %% The leader asks the winner for the autoheal status. This only
106 %% happens when the leader is a loser too. If this is not the case,
107 %% this message is never sent.
108 %%
109 %% 6. Winner >> {autoheal_finished, Winner} >> Leader
110 %% The winner notifies the leader that the autoheal process was
111 %% either finished or aborted (i.e. autoheal_safe_to_start was sent
112 %% to losers).
75113
76114 %%----------------------------------------------------------------------------
77115
78 init() -> not_healing.
116 init() ->
117 %% We check the application environment for an autoheal state saved
118 %% during a restart. If this node is the leader, the state is used
119 %% to determine whether it needs to ask the winner to report on the
120 %% autoheal progress.
121 State = case application:get_env(rabbit, ?AUTOHEAL_STATE_AFTER_RESTART) of
122 {ok, S} -> S;
123 undefined -> not_healing
124 end,
125 ok = application:unset_env(rabbit, ?AUTOHEAL_STATE_AFTER_RESTART),
126 case State of
127 {leader_waiting, Winner, _} ->
128 rabbit_log:info(
129 "Autoheal: in progress, requesting report from ~p~n", [Winner]),
130 send(Winner, report_autoheal_status);
131 _ ->
132 ok
133 end,
134 State.
79135
80136 maybe_start(not_healing) ->
81137 case enabled() of
82 true -> [Leader | _] = lists:usort(rabbit_mnesia:cluster_nodes(all)),
138 true -> Leader = leader(),
83139 send(Leader, {request_start, node()}),
84140 rabbit_log:info("Autoheal request sent to ~p~n", [Leader]),
85141 not_healing;
89145 State.
90146
91147 enabled() ->
92 {ok, autoheal} =:= application:get_env(rabbit, cluster_partition_handling).
93
148 case application:get_env(rabbit, cluster_partition_handling) of
149 {ok, autoheal} -> true;
150 {ok, {pause_if_all_down, _, autoheal}} -> true;
151 _ -> false
152 end.
153
154 leader() ->
155 [Leader | _] = lists:usort(rabbit_mnesia:cluster_nodes(all)),
156 Leader.
94157
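The new `{pause_if_all_down, Nodes, autoheal}` variant recognised by enabled/0 is set via the usual partition-handling key. A hypothetical rabbitmq.config fragment (node name invented) enabling autoheal as the fallback strategy:

```erlang
%% Hypothetical configuration sketch.
[{rabbit, [{cluster_partition_handling,
            {pause_if_all_down, ['rabbit@node1'], autoheal}}]}].
```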
95158 %% This is the winner receiving its last notification that a node has
96159 %% stopped - all nodes can now start again
101164 rabbit_down(Node, {winner_waiting, WaitFor, Notify}) ->
102165 {winner_waiting, WaitFor -- [Node], Notify};
103166
104 rabbit_down(Node, {leader_waiting, [Node]}) ->
105 not_healing;
106
107 rabbit_down(Node, {leader_waiting, WaitFor}) ->
108 {leader_waiting, WaitFor -- [Node]};
167 rabbit_down(Winner, {leader_waiting, Winner, Losers}) ->
168 abort([Winner], Losers);
109169
110170 rabbit_down(_Node, State) ->
111 %% ignore, we already cancelled the autoheal process
171 %% Ignore. Either:
172 %% o we already cancelled the autoheal process;
173 %%   o we are still waiting for the winner's report.
112174 State.
113175
114176 node_down(_Node, not_healing) ->
140202 case node() =:= Winner of
141203 true -> handle_msg({become_winner, Losers},
142204 not_healing, Partitions);
143 false -> send(Winner, {become_winner, Losers}), %% [0]
144 case lists:member(node(), Losers) of
145 true -> about_to_heal;
146 false -> {leader_waiting, Losers}
147 end
205 false -> send(Winner, {become_winner, Losers}),
206 {leader_waiting, Winner, Losers}
148207 end
149208 end;
150 %% [0] If we are a loser we will never receive this message - but it
151 %% won't stick in the mailbox as we are restarting anyway
152209
153210 handle_msg({request_start, Node},
154211 State, _Partitions) ->
169226 _ -> abort(Down, Losers)
170227 end;
171228
172 handle_msg({winner_is, Winner},
173 State, _Partitions)
174 when State =:= not_healing orelse State =:= about_to_heal ->
175 rabbit_log:warning(
176 "Autoheal: we were selected to restart; winner is ~p~n", [Winner]),
177 rabbit_node_monitor:run_outside_applications(
178 fun () ->
179 MRef = erlang:monitor(process, {?SERVER, Winner}),
180 rabbit:stop(),
181 receive
182 {'DOWN', MRef, process, {?SERVER, Winner}, _Reason} -> ok;
183 autoheal_safe_to_start -> ok
184 end,
185 erlang:demonitor(MRef, [flush]),
186 rabbit:start()
187 end),
229 handle_msg({winner_is, Winner}, State = not_healing,
230 _Partitions) ->
231 %% This node is a loser, nothing else.
232 restart_loser(State, Winner),
233 restarting;
234 handle_msg({winner_is, Winner}, State = {leader_waiting, Winner, _},
235 _Partitions) ->
236 %% This node is the leader and a loser at the same time.
237 restart_loser(State, Winner),
188238 restarting;
189239
190240 handle_msg(_, restarting, _Partitions) ->
191241 %% ignore, we can contribute no further
192 restarting.
242 restarting;
243
244 handle_msg(report_autoheal_status, not_healing, _Partitions) ->
245     %% The leader is asking us (the winner) about the autoheal status.
246     %% This happens when the leader is a loser and has just restarted.
247     %% We are in the "not_healing" state, so the previous autoheal
248     %% process has ended: let's tell this to the leader.
249 send(leader(), {autoheal_finished, node()}),
250 not_healing;
251
252 handle_msg(report_autoheal_status, State, _Partitions) ->
253     %% As above, the leader is asking us about the autoheal status. We
254 %% are not finished with it. There is no need to send anything yet
255 %% to the leader: we will send the notification when it is over.
256 State;
257
258 handle_msg({autoheal_finished, Winner},
259 {leader_waiting, Winner, _}, _Partitions) ->
260 %% The winner is finished with the autoheal process and notified us
261 %% (the leader). We can transition to the "not_healing" state and
262 %% accept new requests.
263 rabbit_log:info("Autoheal finished according to winner ~p~n", [Winner]),
264 not_healing;
265
266 handle_msg({autoheal_finished, Winner}, not_healing, _Partitions)
267 when Winner =:= node() ->
268 %% We are the leader and the winner. The state already transitioned
269 %% to "not_healing" at the end of the autoheal process.
270 rabbit_log:info("Autoheal finished according to winner ~p~n", [node()]),
271 not_healing.
193272
194273 %%----------------------------------------------------------------------------
195274
214293 %% losing nodes before sending the "autoheal_safe_to_start" signal.
215294 wait_for_mnesia_shutdown(Notify),
216295 [{rabbit_outside_app_process, N} ! autoheal_safe_to_start || N <- Notify],
296 send(leader(), {autoheal_finished, node()}),
217297 not_healing.
218298
219299 wait_for_mnesia_shutdown([Node | Rest] = AllNodes) ->
231311 end;
232312 wait_for_mnesia_shutdown([]) ->
233313 ok.
314
315 restart_loser(State, Winner) ->
316 rabbit_log:warning(
317 "Autoheal: we were selected to restart; winner is ~p~n", [Winner]),
318 rabbit_node_monitor:run_outside_applications(
319 fun () ->
320 MRef = erlang:monitor(process, {?SERVER, Winner}),
321 rabbit:stop(),
322 NextState = receive
323 {'DOWN', MRef, process, {?SERVER, Winner}, _Reason} ->
324 not_healing;
325 autoheal_safe_to_start ->
326 State
327 end,
328 erlang:demonitor(MRef, [flush]),
329               %% During the restart, the autoheal state is lost, so we
330               %% temporarily store it in the application environment so
331               %% that init/0 can pick it up.
332 %%
333               %% This is useful when the leader is also a loser: because
334               %% the leader is restarting, there is a good chance it misses
335               %% the "autoheal finished!"
336 %% notification from the winner. Thanks to the saved
337 %% state, it knows it needs to ask the winner if the
338 %% autoheal process is finished or not.
339 application:set_env(rabbit,
340 ?AUTOHEAL_STATE_AFTER_RESTART, NextState),
341 rabbit:start()
342 end, true).
234343
235344 make_decision(AllPartitions) ->
236345 Sorted = lists:sort([{partition_value(P), P} || P <- AllPartitions]),
2121 messages_unacknowledged_ram, messages_persistent,
2222 message_bytes, message_bytes_ready,
2323 message_bytes_unacknowledged, message_bytes_ram,
24 message_bytes_persistent, backing_queue_status]).
24 message_bytes_persistent,
25 disk_reads, disk_writes, backing_queue_status]).
2526
2627 -ifdef(use_specs).
2728
2930 -type(ack() :: any()).
3031 -type(state() :: any()).
3132
33 -type(flow() :: 'flow' | 'noflow').
3234 -type(msg_ids() :: [rabbit_types:msg_id()]).
3335 -type(fetch_result(Ack) ::
3436 ('empty' | {rabbit_types:basic_message(), boolean(), Ack})).
98100
99101 %% Publish a message.
100102 -callback publish(rabbit_types:basic_message(),
101 rabbit_types:message_properties(), boolean(), pid(),
103 rabbit_types:message_properties(), boolean(), pid(), flow(),
102104 state()) -> state().
103105
104106 %% Called for messages which have already been passed straight
105107 %% out to a client. The queue will be empty for these calls
106108 %% (i.e. saves the round trip through the backing queue).
107109 -callback publish_delivered(rabbit_types:basic_message(),
108 rabbit_types:message_properties(), pid(), state())
110 rabbit_types:message_properties(), pid(), flow(),
111 state())
109112 -> {ack(), state()}.
110113
111114 %% Called to inform the BQ about messages which have reached the
112115 %% queue, but are not going to be further passed to BQ.
113 -callback discard(rabbit_types:msg_id(), pid(), state()) -> state().
116 -callback discard(rabbit_types:msg_id(), pid(), flow(), state()) -> state().
114117
115118 %% Return ids of messages which have been confirmed since the last
116119 %% invocation of this function (or initialisation).
248251
249252 behaviour_info(callbacks) ->
250253 [{start, 1}, {stop, 0}, {init, 3}, {terminate, 2},
251 {delete_and_terminate, 2}, {purge, 1}, {purge_acks, 1}, {publish, 5},
252 {publish_delivered, 4}, {discard, 3}, {drain_confirmed, 1},
254 {delete_and_terminate, 2}, {purge, 1}, {purge_acks, 1}, {publish, 6},
255 {publish_delivered, 5}, {discard, 4}, {drain_confirmed, 1},
253256 {dropwhile, 2}, {fetchwhile, 4},
254257 {fetch, 2}, {ack, 2}, {requeue, 2}, {ackfold, 4}, {fold, 3}, {len, 1},
255258 {is_empty, 1}, {depth, 1}, {set_ram_duration_target, 2},
2020 -export([publish/4, publish/5, publish/1,
2121 message/3, message/4, properties/1, prepend_table_header/3,
2222 extract_headers/1, map_headers/2, delivery/4, header_routes/1,
23 parse_expiration/1]).
23 parse_expiration/1, header/2, header/3]).
2424 -export([build_content/2, from_content/1, msg_size/1, maybe_gc_large_msg/1]).
2525
2626 %%----------------------------------------------------------------------------
3131 (rabbit_framing:amqp_property_record() | [{atom(), any()}])).
3232 -type(publish_result() ::
3333 ({ok, [pid()]} | rabbit_types:error('not_found'))).
34 -type(header() :: any()).
3435 -type(headers() :: rabbit_framing:amqp_table() | 'undefined').
3536
3637 -type(exchange_input() :: (rabbit_types:exchange() | rabbit_exchange:name())).
6061 -spec(prepend_table_header/3 ::
6162 (binary(), rabbit_framing:amqp_table(), headers()) -> headers()).
6263
64 -spec(header/2 ::
65 (header(), headers()) -> 'undefined' | any()).
66 -spec(header/3 ::
67 (header(), headers(), any()) -> 'undefined' | any()).
68
6369 -spec(extract_headers/1 :: (rabbit_types:content()) -> headers()).
6470
6571 -spec(map_headers/2 :: (fun((headers()) -> headers()), rabbit_types:content())
113119
114120 delivery(Mandatory, Confirm, Message, MsgSeqNo) ->
115121 #delivery{mandatory = Mandatory, confirm = Confirm, sender = self(),
116 message = Message, msg_seq_no = MsgSeqNo}.
122 message = Message, msg_seq_no = MsgSeqNo, flow = noflow}.
117123
118124 build_content(Properties, BodyBin) when is_binary(BodyBin) ->
119125 build_content(Properties, [BodyBin]);
223229 end,
224230 NewHdr = rabbit_misc:set_table_value(ExistingHdr, Name, array, Values),
225231 set_invalid(NewHdr, Header).
232
233 header(_Header, undefined) ->
234 undefined;
235 header(_Header, []) ->
236 undefined;
237 header(Header, Headers) ->
238 header(Header, Headers, undefined).
239
240 header(Header, Headers, Default) ->
241 case lists:keysearch(Header, 1, Headers) of
242 false -> Default;
243 {value, Val} -> Val
244 end.
226245
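As a usage sketch (header names and values hypothetical): note that header/2 returns the whole `{Name, Type, Value}` entry found by lists:keysearch/3, not just the value, and that it short-circuits to `undefined` for absent or empty header tables.

```erlang
%% Hypothetical values illustrating rabbit_basic:header/2.
Headers = [{<<"x-match">>, longstr, <<"all">>}],
{<<"x-match">>, longstr, <<"all">>} =
    rabbit_basic:header(<<"x-match">>, Headers),
undefined = rabbit_basic:header(<<"x-absent">>, Headers),
undefined = rabbit_basic:header(<<"x-match">>, []).
```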
227246 extract_headers(Content) ->
228247 #content{properties = #'P_basic'{headers = Headers}} =
4040 %% parse_table supports the AMQP 0-8/0-9 standard types, S, I, D, T
4141 %% and F, as well as the QPid extensions b, d, f, l, s, t, x, and V.
4242
43 -define(SIMPLE_PARSE_TABLE(BType, Pattern, RType),
44 parse_table(<<NLen:8/unsigned, NameString:NLen/binary,
45 BType, Pattern, Rest/binary>>) ->
46 [{NameString, RType, Value} | parse_table(Rest)]).
47
48 %% Note that we try to put these in approximately the order we expect
49 %% to hit them, that's why the empty binary is half way through.
50
51 parse_table(<<NLen:8/unsigned, NameString:NLen/binary,
52 $S, VLen:32/unsigned, Value:VLen/binary, Rest/binary>>) ->
53 [{NameString, longstr, Value} | parse_table(Rest)];
54
55 ?SIMPLE_PARSE_TABLE($I, Value:32/signed, signedint);
56 ?SIMPLE_PARSE_TABLE($T, Value:64/unsigned, timestamp);
57
4358 parse_table(<<>>) ->
4459 [];
45 parse_table(<<NLen:8/unsigned, NameString:NLen/binary, ValueAndRest/binary>>) ->
46 {Type, Value, Rest} = parse_field_value(ValueAndRest),
47 [{NameString, Type, Value} | parse_table(Rest)].
60
61 ?SIMPLE_PARSE_TABLE($b, Value:8/signed, byte);
62 ?SIMPLE_PARSE_TABLE($d, Value:64/float, double);
63 ?SIMPLE_PARSE_TABLE($f, Value:32/float, float);
64 ?SIMPLE_PARSE_TABLE($l, Value:64/signed, long);
65 ?SIMPLE_PARSE_TABLE($s, Value:16/signed, short);
66
67 parse_table(<<NLen:8/unsigned, NameString:NLen/binary,
68 $t, Value:8/unsigned, Rest/binary>>) ->
69 [{NameString, bool, (Value /= 0)} | parse_table(Rest)];
70
71 parse_table(<<NLen:8/unsigned, NameString:NLen/binary,
72 $D, Before:8/unsigned, After:32/unsigned, Rest/binary>>) ->
73 [{NameString, decimal, {Before, After}} | parse_table(Rest)];
74
75 parse_table(<<NLen:8/unsigned, NameString:NLen/binary,
76 $F, VLen:32/unsigned, Value:VLen/binary, Rest/binary>>) ->
77 [{NameString, table, parse_table(Value)} | parse_table(Rest)];
78
79 parse_table(<<NLen:8/unsigned, NameString:NLen/binary,
80 $A, VLen:32/unsigned, Value:VLen/binary, Rest/binary>>) ->
81 [{NameString, array, parse_array(Value)} | parse_table(Rest)];
82
83 parse_table(<<NLen:8/unsigned, NameString:NLen/binary,
84 $x, VLen:32/unsigned, Value:VLen/binary, Rest/binary>>) ->
85 [{NameString, binary, Value} | parse_table(Rest)];
86
87 parse_table(<<NLen:8/unsigned, NameString:NLen/binary,
88 $V, Rest/binary>>) ->
89 [{NameString, void, undefined} | parse_table(Rest)].
90
91 -define(SIMPLE_PARSE_ARRAY(BType, Pattern, RType),
92 parse_array(<<BType, Pattern, Rest/binary>>) ->
93 [{RType, Value} | parse_array(Rest)]).
94
95 parse_array(<<$S, VLen:32/unsigned, Value:VLen/binary, Rest/binary>>) ->
96 [{longstr, Value} | parse_array(Rest)];
97
98 ?SIMPLE_PARSE_ARRAY($I, Value:32/signed, signedint);
99 ?SIMPLE_PARSE_ARRAY($T, Value:64/unsigned, timestamp);
48100
49101 parse_array(<<>>) ->
50102 [];
51 parse_array(<<ValueAndRest/binary>>) ->
52 {Type, Value, Rest} = parse_field_value(ValueAndRest),
53 [{Type, Value} | parse_array(Rest)].
54103
55 parse_field_value(<<$S, VLen:32/unsigned, V:VLen/binary, R/binary>>) ->
56 {longstr, V, R};
104 ?SIMPLE_PARSE_ARRAY($b, Value:8/signed, byte);
105 ?SIMPLE_PARSE_ARRAY($d, Value:64/float, double);
106 ?SIMPLE_PARSE_ARRAY($f, Value:32/float, float);
107 ?SIMPLE_PARSE_ARRAY($l, Value:64/signed, long);
108 ?SIMPLE_PARSE_ARRAY($s, Value:16/signed, short);
57109
58 parse_field_value(<<$I, V:32/signed, R/binary>>) ->
59 {signedint, V, R};
110 parse_array(<<$t, Value:8/unsigned, Rest/binary>>) ->
111 [{bool, (Value /= 0)} | parse_array(Rest)];
60112
61 parse_field_value(<<$D, Before:8/unsigned, After:32/unsigned, R/binary>>) ->
62 {decimal, {Before, After}, R};
113 parse_array(<<$D, Before:8/unsigned, After:32/unsigned, Rest/binary>>) ->
114 [{decimal, {Before, After}} | parse_array(Rest)];
63115
64 parse_field_value(<<$T, V:64/unsigned, R/binary>>) ->
65 {timestamp, V, R};
116 parse_array(<<$F, VLen:32/unsigned, Value:VLen/binary, Rest/binary>>) ->
117 [{table, parse_table(Value)} | parse_array(Rest)];
66118
67 parse_field_value(<<$F, VLen:32/unsigned, Table:VLen/binary, R/binary>>) ->
68 {table, parse_table(Table), R};
119 parse_array(<<$A, VLen:32/unsigned, Value:VLen/binary, Rest/binary>>) ->
120 [{array, parse_array(Value)} | parse_array(Rest)];
69121
70 parse_field_value(<<$A, VLen:32/unsigned, Array:VLen/binary, R/binary>>) ->
71 {array, parse_array(Array), R};
122 parse_array(<<$x, VLen:32/unsigned, Value:VLen/binary, Rest/binary>>) ->
123 [{binary, Value} | parse_array(Rest)];
72124
73 parse_field_value(<<$b, V:8/signed, R/binary>>) -> {byte, V, R};
74 parse_field_value(<<$d, V:64/float, R/binary>>) -> {double, V, R};
75 parse_field_value(<<$f, V:32/float, R/binary>>) -> {float, V, R};
76 parse_field_value(<<$l, V:64/signed, R/binary>>) -> {long, V, R};
77 parse_field_value(<<$s, V:16/signed, R/binary>>) -> {short, V, R};
78 parse_field_value(<<$t, V:8/unsigned, R/binary>>) -> {bool, (V /= 0), R};
79
80 parse_field_value(<<$x, VLen:32/unsigned, V:VLen/binary, R/binary>>) ->
81 {binary, V, R};
82
83 parse_field_value(<<$V, R/binary>>) ->
84 {void, undefined, R}.
125 parse_array(<<$V, Rest/binary>>) ->
126 [{void, undefined} | parse_array(Rest)].
85127
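As a sketch of the wire format these clauses decode (values hypothetical): a table entry is a name preceded by its one-octet length, then a type octet, then the value; array entries omit the name.

```erlang
%% One table entry: name <<"x">> (length 1), type $I, signed 32-bit 5.
[{<<"x">>, signedint, 5}] =
    rabbit_binary_parser:parse_table(<<1, "x", $I, 5:32/signed>>),
%% One boolean array entry: type $t, one octet, non-zero is true.
[{bool, true}] = rabbit_binary_parser:parse_array(<<$t, 1>>).
```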
86128 ensure_content_decoded(Content = #content{properties = Props})
87129 when Props =/= none ->
483483
484484 %%---------------------------------------------------------------------------
485485
486 log(Level, Fmt, Args) -> rabbit_log:log(channel, Level, Fmt, Args).
487
486488 reply(Reply, NewState) -> {reply, Reply, next_state(NewState), hibernate}.
487489
488490 noreply(NewState) -> {noreply, next_state(NewState), hibernate}.
519521 {_Result, State1} = notify_queues(State),
520522 case rabbit_binary_generator:map_exception(Channel, Reason, Protocol) of
521523 {Channel, CloseMethod} ->
522 rabbit_log:error("Channel error on connection ~p (~s, vhost: '~s',"
523 " user: '~s'), channel ~p:~n~p~n",
524 [ConnPid, ConnName, VHost, User#user.username,
525 Channel, Reason]),
524 log(error, "Channel error on connection ~p (~s, vhost: '~s',"
525 " user: '~s'), channel ~p:~n~p~n",
526 [ConnPid, ConnName, VHost, User#user.username,
527 Channel, Reason]),
526528 ok = rabbit_writer:send_command(WriterPid, CloseMethod),
527529 {noreply, State1};
528530 {0, _} ->
580582 #ch{user = #user{username = Username}}) ->
581583 ok;
582584 check_user_id_header(
583 #'P_basic'{}, #ch{user = #user{auth_backend = rabbit_auth_backend_dummy}}) ->
585 #'P_basic'{}, #ch{user = #user{authz_backends =
586 [{rabbit_auth_backend_dummy, _}]}}) ->
584587 ok;
585588 check_user_id_header(#'P_basic'{user_id = Claimed},
586589 #ch{user = #user{username = Actual,
659662 check_not_default_exchange(_) ->
660663 ok.
661664
662 check_exchange_deletion(XName = #resource{name = <<"amq.rabbitmq.", _/binary>>,
665 check_exchange_deletion(XName = #resource{name = <<"amq.", _/binary>>,
663666 kind = exchange}) ->
664667 rabbit_misc:protocol_error(
665668 access_refused, "deletion of system ~s not allowed",
788791 end,
789792 case rabbit_basic:message(ExchangeName, RoutingKey, DecodedContent) of
790793 {ok, Message} ->
791 rabbit_trace:tap_in(Message, ConnName, ChannelNum,
792 Username, TraceState),
793794 Delivery = rabbit_basic:delivery(
794795 Mandatory, DoConfirm, Message, MsgSeqNo),
795796 QNames = rabbit_exchange:route(Exchange, Delivery),
796 DQ = {Delivery, QNames},
797 rabbit_trace:tap_in(Message, QNames, ConnName, ChannelNum,
798 Username, TraceState),
799 DQ = {Delivery#delivery{flow = flow}, QNames},
797800 {noreply, case Tx of
798801 none -> deliver_to_queues(DQ, State1);
799802 {Msgs, Acks} -> Msgs1 = queue:in(DQ, Msgs),
16641667 DelQNames}, State = #ch{queue_names = QNames,
16651668 queue_monitors = QMons}) ->
16661669 Qs = rabbit_amqqueue:lookup(DelQNames),
1667 DeliveredQPids = rabbit_amqqueue:deliver_flow(Qs, Delivery),
1670 DeliveredQPids = rabbit_amqqueue:deliver(Qs, Delivery),
16681671 %% The pmon:monitor_all/2 monitors all queues to which we
16691672 %% delivered. But we want to monitor even queues we didn't deliver
16701673 %% to, since we need their 'DOWN' messages to clean
17341737 send_confirms(State = #ch{tx = none, confirmed = []}) ->
17351738 State;
17361739 send_confirms(State = #ch{tx = none, confirmed = C}) ->
1737 case rabbit_node_monitor:pause_minority_guard() of
1740 case rabbit_node_monitor:pause_partition_guard() of
17381741 ok -> MsgSeqNos =
17391742 lists:foldl(
17401743 fun ({MsgSeqNo, XName}, MSNs) ->
17461749 pausing -> State
17471750 end;
17481751 send_confirms(State) ->
1749 case rabbit_node_monitor:pause_minority_guard() of
1752 case rabbit_node_monitor:pause_partition_guard() of
17501753 ok -> maybe_complete_tx(State);
17511754 pausing -> State
17521755 end.
1616 -module(rabbit_cli).
1717 -include("rabbit_cli.hrl").
1818
19 -export([main/3, parse_arguments/4, rpc_call/4]).
19 -export([main/3, start_distribution/0, start_distribution/1,
20 parse_arguments/4, rpc_call/4]).
2021
2122 %%----------------------------------------------------------------------------
2223
3031 -spec(main/3 :: (fun (([string()], string()) -> parse_result()),
3132 fun ((atom(), atom(), [any()], [any()]) -> any()),
3233 atom()) -> no_return()).
34 -spec(start_distribution/0 :: () -> {'ok', pid()} | {'error', any()}).
35 -spec(start_distribution/1 :: (string()) -> {'ok', pid()} | {'error', any()}).
3336 -spec(usage/1 :: (atom()) -> no_return()).
3437 -spec(parse_arguments/4 ::
3538 ([{atom(), [{string(), optdef()}]} | atom()],
4144 %%----------------------------------------------------------------------------
4245
4346 main(ParseFun, DoFun, UsageMod) ->
47 error_logger:tty(false),
48 start_distribution(),
4449 {ok, [[NodeStr|_]|_]} = init:get_argument(nodename),
4550 {Command, Opts, Args} =
4651 case ParseFun(init:get_plain_arguments(), NodeStr) of
98103 Other ->
99104 print_error("~p", [Other]),
100105 rabbit_misc:quit(2)
106 end.
107
108 start_distribution() ->
109 start_distribution(list_to_atom(
110 rabbit_misc:format("rabbitmq-cli-~s", [os:getpid()]))).
111
112 start_distribution(Name) ->
113 rabbit_nodes:ensure_epmd(),
114 net_kernel:start([Name, name_type()]).
115
116 name_type() ->
117 case os:getenv("RABBITMQ_USE_LONGNAME") of
118 "true" -> longnames;
119 _ -> shortnames
101120 end.
102121
103122 usage(Mod) ->
1818 -include("rabbit_cli.hrl").
1919
2020 -export([start/0, stop/0, parse_arguments/2, action/5,
21 sync_queue/1, cancel_sync_queue/1]).
21 sync_queue/1, cancel_sync_queue/1, become/1]).
2222
2323 -import(rabbit_cli, [rpc_call/4]).
2424
3939 change_cluster_node_type,
4040 update_cluster_nodes,
4141 {forget_cluster_node, [?OFFLINE_DEF]},
42 rename_cluster_node,
4243 force_boot,
4344 cluster_status,
4445 {sync_queue, [?VHOST_DEF]},
103104 -define(COMMANDS_NOT_REQUIRING_APP,
104105 [stop, stop_app, start_app, wait, reset, force_reset, rotate_logs,
105106 join_cluster, change_cluster_node_type, update_cluster_nodes,
106 forget_cluster_node, cluster_status, status, environment, eval,
107 force_boot]).
107 forget_cluster_node, rename_cluster_node, cluster_status, status,
108 environment, eval, force_boot]).
108109
109110 %%----------------------------------------------------------------------------
110111
122123 %%----------------------------------------------------------------------------
123124
124125 start() ->
125 start_distribution(),
126126 rabbit_cli:main(
127127 fun (Args, NodeStr) ->
128128 parse_arguments(Args, NodeStr)
233233 [ClusterNode, false])
234234 end;
235235
236 action(rename_cluster_node, Node, NodesS, _Opts, Inform) ->
237 Nodes = split_list([list_to_atom(N) || N <- NodesS]),
238 Inform("Renaming cluster nodes:~n~s~n",
239 [lists:flatten([rabbit_misc:format(" ~s -> ~s~n", [F, T]) ||
240 {F, T} <- Nodes])]),
241 rabbit_mnesia_rename:rename(Node, Nodes);
242
236243 action(force_boot, Node, [], _Opts, Inform) ->
237244 Inform("Forcing boot for Mnesia dir ~s", [mnesia:system_info(directory)]),
238245 case rabbit:is_running(Node) of
517524 Node, Pid, fun() -> rpc:call(Node, rabbit, await_startup, []) =:= ok end).
518525
519526 while_process_is_alive(Node, Pid, Activity) ->
520 case process_up(Pid) of
527 case rabbit_misc:is_os_process_alive(Pid) of
521528 true -> case Activity() of
522529 true -> ok;
523530 false -> timer:sleep(?EXTERNAL_CHECK_INTERVAL),
527534 end.
528535
529536 wait_for_process_death(Pid) ->
530 case process_up(Pid) of
537 case rabbit_misc:is_os_process_alive(Pid) of
531538 true -> timer:sleep(?EXTERNAL_CHECK_INTERVAL),
532539 wait_for_process_death(Pid);
533540 false -> ok
551558 exit({error, {could_not_read_pid, E}})
552559 end.
553560
554 % Test using some OS clunkiness since we shouldn't trust
555 % rpc:call(os, getpid, []) at this point
556 process_up(Pid) ->
557 with_os([{unix, fun () ->
558 run_ps(Pid) =:= 0
559 end},
560 {win32, fun () ->
561 Cmd = "tasklist /nh /fi \"pid eq " ++ Pid ++ "\" ",
562 Res = rabbit_misc:os_cmd(Cmd ++ "2>&1"),
563 case re:run(Res, "erl\\.exe", [{capture, none}]) of
564 match -> true;
565 _ -> false
566 end
567 end}]).
568
569 with_os(Handlers) ->
570 {OsFamily, _} = os:type(),
571 case proplists:get_value(OsFamily, Handlers) of
572 undefined -> throw({unsupported_os, OsFamily});
573 Handler -> Handler()
574 end.
575
576 run_ps(Pid) ->
577 Port = erlang:open_port({spawn, "ps -p " ++ Pid},
578 [exit_status, {line, 16384},
579 use_stdio, stderr_to_stdout]),
580 exit_loop(Port).
581
582 exit_loop(Port) ->
583 receive
584 {Port, {exit_status, Rc}} -> Rc;
585 {Port, _} -> exit_loop(Port)
586 end.
587
588 start_distribution() ->
589 CtlNodeName = rabbit_misc:format("rabbitmqctl-~s", [os:getpid()]),
590 {ok, _} = net_kernel:start([list_to_atom(CtlNodeName), name_type()]).
591
592561 become(BecomeNode) ->
562 error_logger:tty(false),
563 ok = net_kernel:stop(),
593564 case net_adm:ping(BecomeNode) of
594565 pong -> exit({node_running, BecomeNode});
595566 pang -> io:format(" * Impersonating node: ~s...", [BecomeNode]),
596 error_logger:tty(false),
597 ok = net_kernel:stop(),
598 {ok, _} = net_kernel:start([BecomeNode, name_type()]),
567 {ok, _} = rabbit_cli:start_distribution(BecomeNode),
599568 io:format(" done~n", []),
600569 Dir = mnesia:system_info(directory),
601570 io:format(" * Mnesia directory : ~s~n", [Dir])
602 end.
603
604 name_type() ->
605 case os:getenv("RABBITMQ_USE_LONGNAME") of
606 "true" -> longnames;
607 _ -> shortnames
608571 end.
609572
610573 %%----------------------------------------------------------------------------
719682 prettify_typed_amqp_value(array, Value) -> [prettify_typed_amqp_value(T, V) ||
720683 {T, V} <- Value];
721684 prettify_typed_amqp_value(_Type, Value) -> Value.
685
686 split_list([]) -> [];
687 split_list([_]) -> exit(even_list_needed);
688 split_list([A, B | T]) -> [{A, B} | split_list(T)].
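The `rename_cluster_node` action above pairs its flat argument list into `{From, To}` tuples via `split_list/1`, exiting on an odd-length list. A minimal Python sketch (illustrative only, not part of the source) of that pairing logic:

```python
def split_list(items):
    """Pair a flat list into (from, to) tuples; an odd-length
    list is an error, mirroring exit(even_list_needed) above."""
    if len(items) % 2 != 0:
        raise ValueError("even_list_needed")
    return [(items[i], items[i + 1]) for i in range(0, len(items), 2)]
```

So `["old1", "new1", "old2", "new2"]` becomes `[("old1", "new1"), ("old2", "new2")]`, which is the shape the rename action formats as `old -> new` lines.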
6565 {<<"time">>, timestamp, TimeSec},
6666 {<<"exchange">>, longstr, Exchange#resource.name},
6767 {<<"routing-keys">>, array, RKs1}] ++ PerMsgTTL,
68 HeadersFun1(rabbit_basic:prepend_table_header(<<"x-death">>,
69 Info, Headers))
68 HeadersFun1(update_x_death_header(Info, Headers))
7069 end,
7170 Content1 = #content{properties = Props} =
7271 rabbit_basic:map_headers(HeadersFun2, Content),
7675 id = rabbit_guid:gen(),
7776 routing_keys = DeathRoutingKeys,
7877 content = Content2}.
78
79
80 x_death_event_key(Info, Key, KeyType) ->
81 case lists:keysearch(Key, 1, Info) of
82 false -> undefined;
83 {value, {Key, KeyType, Val}} -> Val
84 end.
85
86 update_x_death_header(Info, Headers) ->
87 Q = x_death_event_key(Info, <<"queue">>, longstr),
88 R = x_death_event_key(Info, <<"reason">>, longstr),
89 case rabbit_basic:header(<<"x-death">>, Headers) of
90 undefined ->
91 rabbit_basic:prepend_table_header(<<"x-death">>,
92 [{<<"count">>, long, 1} | Info], Headers);
93 {<<"x-death">>, array, Tables} ->
94 {Matches, Others} = lists:partition(
95 fun ({table, Info0}) ->
96 x_death_event_key(Info0, <<"queue">>, longstr) =:= Q
97 andalso x_death_event_key(Info0, <<"reason">>, longstr) =:= R
98 end, Tables),
99 Info1 = case Matches of
100 [] ->
101 [{<<"count">>, long, 1} | Info];
102 [{table, M}] ->
103 case x_death_event_key(M, <<"count">>, long) of
104 undefined ->
105 [{<<"count">>, long, 1} | M];
106 N ->
107 lists:keyreplace(
108 <<"count">>, 1, M,
109 {<<"count">>, long, N + 1})
110 end
111 end,
112 rabbit_misc:set_table_value(Headers, <<"x-death">>, array,
113 [{table, rabbit_misc:sort_field_table(Info1)} | Others])
114 end.
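The `update_x_death_header/2` logic above replaces the pre-3.5 behaviour of prepending a new `x-death` table per death: entries are now keyed by `(queue, reason)`, a matching entry gets its `count` bumped and moves to the front, and an entry without a `count` (written by an older broker) is treated as count 1. A Python sketch of that merge, using plain dicts in place of AMQP tables (illustrative only):

```python
def update_x_death(tables, queue, reason):
    """Merge one new death event into a list of x-death tables.
    A (queue, reason) match is re-counted and moved to the front;
    otherwise a fresh entry with count 1 is prepended."""
    same = lambda t: t["queue"] == queue and t["reason"] == reason
    matches = [t for t in tables if same(t)]
    others = [t for t in tables if not same(t)]
    if matches:
        entry = dict(matches[0])
        # a legacy entry with no count becomes count 1, as in the code above
        entry["count"] = entry.get("count", 0) + 1
    else:
        entry = {"queue": queue, "reason": reason, "count": 1}
    return [entry] + others
```

Note that, as in the Erlang above, a matched entry keeps its existing fields; only the count changes.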
79115
80116 per_msg_ttl_header(#'P_basic'{expiration = undefined}) ->
81117 [];
8282 connect0(AuthFun, VHost, Protocol, Pid, Infos) ->
8383 case rabbit:is_running() of
8484 true -> case AuthFun() of
85 {ok, User} ->
85 {ok, User = #user{username = Username}} ->
86 notify_auth_result(Username,
87 user_authentication_success, []),
8688 connect1(User, VHost, Protocol, Pid, Infos);
87 {refused, _M, _A} ->
89 {refused, Username, Msg, Args} ->
90 notify_auth_result(Username,
91 user_authentication_failure,
92 [{error, rabbit_misc:format(Msg, Args)}]),
8893 {error, {auth_failure, "Refused"}}
8994 end;
9095 false -> {error, broker_not_found_on_node}
9196 end.
9297
98 notify_auth_result(Username, AuthResult, ExtraProps) ->
99 EventProps = [{connection_type, direct},
100 {name, case Username of none -> ''; _ -> Username end}] ++
101 ExtraProps,
102 rabbit_event:notify(AuthResult, [P || {_, V} = P <- EventProps, V =/= '']).
103
93104 connect1(User, VHost, Protocol, Pid, Infos) ->
94 try rabbit_access_control:check_vhost_access(User, VHost) of
105 try rabbit_access_control:check_vhost_access(User, VHost, undefined) of
95106 ok -> ok = pg_local:join(rabbit_direct, Pid),
96107 rabbit_event:notify(connection_created, Infos),
97108 {ok, {User, rabbit_reader:server_properties(Protocol)}}
0 %% The contents of this file are subject to the Mozilla Public License
1 %% Version 1.1 (the "License"); you may not use this file except in
2 %% compliance with the License. You may obtain a copy of the License
3 %% at http://www.mozilla.org/MPL/
4 %%
5 %% Software distributed under the License is distributed on an "AS IS"
6 %% basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See
7 %% the License for the specific language governing rights and
8 %% limitations under the License.
9 %%
10 %% The Original Code is RabbitMQ.
11 %%
12 %% The Initial Developer of the Original Code is GoPivotal, Inc.
13 %% Copyright (c) 2007-2014 GoPivotal, Inc. All rights reserved.
14 %%
15
16 -module(rabbit_epmd_monitor).
17
18 -behaviour(gen_server).
19
20 -export([start_link/0]).
21
22 -export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2,
23 code_change/3]).
24
25 -record(state, {timer, mod, me, host, port}).
26
27 -define(SERVER, ?MODULE).
28 -define(CHECK_FREQUENCY, 60000).
29
30 %%----------------------------------------------------------------------------
31
32 -ifdef(use_specs).
33
34 -spec(start_link/0 :: () -> rabbit_types:ok_pid_or_error()).
35
36 -endif.
37
38 %%----------------------------------------------------------------------------
39 %% It's possible for epmd to be killed out from underneath us. If that
40 %% happens, then obviously clustering and rabbitmqctl stop
41 %% working. This process checks up on epmd and restarts it /
42 %% re-registers us with it if it has gone away.
43 %%
44 %% How could epmd be killed?
45 %%
46 %% 1) The most popular way for this to happen is when running as a
47 %% Windows service. The user starts rabbitmqctl first, and this starts
48 %% epmd under the user's account. When they log out epmd is killed.
49 %%
50 %% 2) Some packagings of (non-RabbitMQ?) Erlang apps might do "killall
51 %% epmd" as a shutdown or uninstall step.
52 %% ----------------------------------------------------------------------------
53
54 start_link() -> gen_server:start_link({local, ?SERVER}, ?MODULE, [], []).
55
56 init([]) ->
57 {Me, Host} = rabbit_nodes:parts(node()),
58 Mod = net_kernel:epmd_module(),
59 {port, Port, _Version} = Mod:port_please(Me, Host),
60 {ok, ensure_timer(#state{mod = Mod,
61 me = Me,
62 host = Host,
63 port = Port})}.
64
65 handle_call(_Request, _From, State) ->
66 {noreply, State}.
67
68 handle_cast(_Msg, State) ->
69 {noreply, State}.
70
71 handle_info(check, State) ->
72 check_epmd(State),
73 {noreply, ensure_timer(State#state{timer = undefined})};
74
75 handle_info(_Info, State) ->
76 {noreply, State}.
77
78 terminate(_Reason, _State) ->
79 ok.
80
81 code_change(_OldVsn, State, _Extra) ->
82 {ok, State}.
83
84 %%----------------------------------------------------------------------------
85
86 ensure_timer(State) ->
87 rabbit_misc:ensure_timer(State, #state.timer, ?CHECK_FREQUENCY, check).
88
89 check_epmd(#state{mod = Mod,
90 me = Me,
91 host = Host,
92 port = Port}) ->
93 case Mod:port_please(Me, Host) of
94 noport -> rabbit_log:warning(
95 "epmd does not know us, re-registering ~s at port ~b~n",
96 [Me, Port]),
97 rabbit_nodes:ensure_epmd(),
98 erl_epmd:register_node(Me, Port);
99 _ -> ok
100 end.
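The monitor's decision in `check_epmd/1` is simple: ask epmd for our port, and if it no longer knows us, re-register at the port cached at startup. A small Python sketch of just that decision, with the epmd lookup and registration passed in as stubs (the function names here are illustrative, not a real epmd client):

```python
def check_epmd(port_please, register, name, cached_port, log=print):
    """If epmd has lost our registration, re-register at the
    port we recorded when the monitor started."""
    if port_please(name) is None:  # epmd no longer knows this node
        log("epmd does not know us, re-registering %s at port %d"
            % (name, cached_port))
        register(name, cached_port)
        return "re-registered"
    return "ok"
```

This mirrors the `noport -> ... ; _ -> ok` clause above: only the "lost registration" branch does any work.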
2121 %%
2222 %% Each channel has an associated limiter process, created with
2323 %% start_link/1, which it passes to queues on consumer creation with
24 %% rabbit_amqqueue:basic_consume/9, and rabbit_amqqueue:basic_get/4.
24 %% rabbit_amqqueue:basic_consume/10, and rabbit_amqqueue:basic_get/4.
2525 %% The latter isn't strictly necessary, since basic.get is not
2626 %% subject to limiting, but it means that whenever a queue knows about
2727 %% a channel, it also knows about its limiter, which is less fiddly.
1515
1616 -module(rabbit_log).
1717
18 -export([log/3, log/4, info/1, info/2, warning/1, warning/2, error/1, error/2]).
18 -export([log/3, log/4, debug/1, debug/2, info/1, info/2, warning/1,
19 warning/2, error/1, error/2]).
1920 -export([with_local_io/1]).
2021
2122 %%----------------------------------------------------------------------------
2526 -export_type([level/0]).
2627
2728 -type(category() :: atom()).
28 -type(level() :: 'info' | 'warning' | 'error').
29 -type(level() :: 'debug' | 'info' | 'warning' | 'error').
2930
3031 -spec(log/3 :: (category(), level(), string()) -> 'ok').
3132 -spec(log/4 :: (category(), level(), string(), [any()]) -> 'ok').
3233
34 -spec(debug/1 :: (string()) -> 'ok').
35 -spec(debug/2 :: (string(), [any()]) -> 'ok').
3336 -spec(info/1 :: (string()) -> 'ok').
3437 -spec(info/2 :: (string(), [any()]) -> 'ok').
3538 -spec(warning/1 :: (string()) -> 'ok').
4952 case level(Level) =< catlevel(Category) of
5053 false -> ok;
5154 true -> F = case Level of
55 debug -> fun error_logger:info_msg/2;
5256 info -> fun error_logger:info_msg/2;
5357 warning -> fun error_logger:warning_msg/2;
5458 error -> fun error_logger:error_msg/2
5660 with_local_io(fun () -> F(Fmt, Args) end)
5761 end.
5862
63 debug(Fmt) -> log(default, debug, Fmt).
64 debug(Fmt, Args) -> log(default, debug, Fmt, Args).
5965 info(Fmt) -> log(default, info, Fmt).
6066 info(Fmt, Args) -> log(default, info, Fmt, Args).
6167 warning(Fmt) -> log(default, warning, Fmt).
7480
7581 %%--------------------------------------------------------------------
7682
83 level(debug) -> 4;
7784 level(info) -> 3;
7885 level(warning) -> 2;
7986 level(error) -> 1;
1818 -export([start_link/4, get_gm/1, ensure_monitoring/2]).
1919
2020 -export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2,
21 code_change/3]).
21 code_change/3, handle_pre_hibernate/1]).
2222
2323 -export([joined/2, members_changed/3, handle_msg/3, handle_terminate/2]).
2424
352352 when node(MPid) =:= node() ->
353353 case rabbit_mirror_queue_misc:remove_from_queue(
354354 QueueName, MPid, DeadGMPids) of
355 {ok, MPid, DeadPids} ->
355 {ok, MPid, DeadPids, ExtraNodes} ->
356356 rabbit_mirror_queue_misc:report_deaths(MPid, true, QueueName,
357357 DeadPids),
358 rabbit_mirror_queue_misc:add_mirrors(QueueName, ExtraNodes, async),
358359 noreply(State);
359360 {error, not_found} ->
360361 {stop, normal, State}
387388
388389 code_change(_OldVsn, State, _Extra) ->
389390 {ok, State}.
391
392 handle_pre_hibernate(State = #state { gm = GM }) ->
393 %% Since GM notifications of deaths are lazy we might not get a
394 %% timely notification of slave death if policy changes when
395 %% everything is idle. So cause some activity just before we
396 %% sleep. This won't cause us to go into perpetual motion as the
397 %% heartbeat does not wake up coordinator or slaves.
398 gm:broadcast(GM, hibernate_heartbeat),
399 {hibernate, State}.
390400
391401 %% ---------------------------------------------------------------------------
392402 %% GM
1616 -module(rabbit_mirror_queue_master).
1717
1818 -export([init/3, terminate/2, delete_and_terminate/2,
19 purge/1, purge_acks/1, publish/5, publish_delivered/4,
20 discard/3, fetch/2, drop/2, ack/2, requeue/2, ackfold/4, fold/3,
19 purge/1, purge_acks/1, publish/6, publish_delivered/5,
20 discard/4, fetch/2, drop/2, ack/2, requeue/2, ackfold/4, fold/3,
2121 len/1, is_empty/1, depth/1, drain_confirmed/1,
2222 dropwhile/2, fetchwhile/4, set_ram_duration_target/2, ram_duration/1,
2323 needs_timeout/1, timeout/1, handle_pre_hibernate/1, resume/1,
229229
230230 purge_acks(_State) -> exit({not_implemented, {?MODULE, purge_acks}}).
231231
232 publish(Msg = #basic_message { id = MsgId }, MsgProps, IsDelivered, ChPid,
232 publish(Msg = #basic_message { id = MsgId }, MsgProps, IsDelivered, ChPid, Flow,
233233 State = #state { gm = GM,
234234 seen_status = SS,
235235 backing_queue = BQ,
236236 backing_queue_state = BQS }) ->
237237 false = dict:is_key(MsgId, SS), %% ASSERTION
238 ok = gm:broadcast(GM, {publish, ChPid, MsgProps, Msg},
238 ok = gm:broadcast(GM, {publish, ChPid, Flow, MsgProps, Msg},
239239 rabbit_basic:msg_size(Msg)),
240 BQS1 = BQ:publish(Msg, MsgProps, IsDelivered, ChPid, BQS),
240 BQS1 = BQ:publish(Msg, MsgProps, IsDelivered, ChPid, Flow, BQS),
241241 ensure_monitoring(ChPid, State #state { backing_queue_state = BQS1 }).
242242
243243 publish_delivered(Msg = #basic_message { id = MsgId }, MsgProps,
244 ChPid, State = #state { gm = GM,
245 seen_status = SS,
246 backing_queue = BQ,
247 backing_queue_state = BQS }) ->
244 ChPid, Flow, State = #state { gm = GM,
245 seen_status = SS,
246 backing_queue = BQ,
247 backing_queue_state = BQS }) ->
248248 false = dict:is_key(MsgId, SS), %% ASSERTION
249 ok = gm:broadcast(GM, {publish_delivered, ChPid, MsgProps, Msg},
249 ok = gm:broadcast(GM, {publish_delivered, ChPid, Flow, MsgProps, Msg},
250250 rabbit_basic:msg_size(Msg)),
251 {AckTag, BQS1} = BQ:publish_delivered(Msg, MsgProps, ChPid, BQS),
251 {AckTag, BQS1} = BQ:publish_delivered(Msg, MsgProps, ChPid, Flow, BQS),
252252 State1 = State #state { backing_queue_state = BQS1 },
253253 {AckTag, ensure_monitoring(ChPid, State1)}.
254254
255 discard(MsgId, ChPid, State = #state { gm = GM,
256 backing_queue = BQ,
257 backing_queue_state = BQS,
258 seen_status = SS }) ->
255 discard(MsgId, ChPid, Flow, State = #state { gm = GM,
256 backing_queue = BQ,
257 backing_queue_state = BQS,
258 seen_status = SS }) ->
259259 false = dict:is_key(MsgId, SS), %% ASSERTION
260 ok = gm:broadcast(GM, {discard, ChPid, MsgId}),
261 ensure_monitoring(ChPid, State #state { backing_queue_state =
262 BQ:discard(MsgId, ChPid, BQS) }).
260 ok = gm:broadcast(GM, {discard, ChPid, Flow, MsgId}),
261 ensure_monitoring(ChPid,
262 State #state { backing_queue_state =
263 BQ:discard(MsgId, ChPid, Flow, BQS) }).
263264
264265 dropwhile(Pred, State = #state{backing_queue = BQ,
265266 backing_queue_state = BQS }) ->
4848
4949 -spec(remove_from_queue/3 ::
5050 (rabbit_amqqueue:name(), pid(), [pid()])
51 -> {'ok', pid(), [pid()]} | {'error', 'not_found'}).
51 -> {'ok', pid(), [pid()], [node()]} | {'error', 'not_found'}).
5252 -spec(on_node_up/0 :: () -> 'ok').
5353 -spec(add_mirrors/3 :: (rabbit_amqqueue:name(), [node()], 'sync' | 'async')
5454 -> 'ok').
6969
7070 %%----------------------------------------------------------------------------
7171
72 %% Returns {ok, NewMPid, DeadPids}
72 %% Returns {ok, NewMPid, DeadPids, ExtraNodes}
7373 remove_from_queue(QueueName, Self, DeadGMPids) ->
7474 rabbit_misc:execute_mnesia_transaction(
7575 fun () ->
7777 %% get here.
7878 case mnesia:read({rabbit_queue, QueueName}) of
7979 [] -> {error, not_found};
80 [Q = #amqqueue { pid = QPid,
81 slave_pids = SPids,
82 gm_pids = GMPids,
83 down_slave_nodes = DSNs}] ->
80 [Q = #amqqueue { pid = QPid,
81 slave_pids = SPids,
82 gm_pids = GMPids }] ->
8483 {DeadGM, AliveGM} = lists:partition(
8584 fun ({GM, _}) ->
8685 lists:member(GM, DeadGMPids)
8988 AlivePids = [Pid || {_GM, Pid} <- AliveGM],
9089 Alive = [Pid || Pid <- [QPid | SPids],
9190 lists:member(Pid, AlivePids)],
92 DSNs1 = [node(Pid) ||
93 Pid <- SPids,
94 not lists:member(Pid, AlivePids)] ++ DSNs,
9591 {QPid1, SPids1} = promote_slave(Alive),
96 case {{QPid, SPids}, {QPid1, SPids1}} of
97 {Same, Same} ->
98 ok;
99 _ when QPid =:= QPid1 orelse QPid1 =:= Self ->
100 %% Either master hasn't changed, so
101 %% we're ok to update mnesia; or we have
102 %% become the master.
103 Q1 = Q#amqqueue{pid = QPid1,
104 slave_pids = SPids1,
105 gm_pids = AliveGM,
106 down_slave_nodes = DSNs1},
107 store_updated_slaves(Q1),
108 %% If we add and remove nodes at the same time we
109 %% might tell the old master we need to sync and
110 %% then shut it down. So let's check if the new
111 %% master needs to sync.
112 maybe_auto_sync(Q1);
92 Extra =
93 case {{QPid, SPids}, {QPid1, SPids1}} of
94 {Same, Same} ->
95 [];
96 _ when QPid =:= QPid1 orelse QPid1 =:= Self ->
97 %% Either master hasn't changed, so
98 %% we're ok to update mnesia; or we have
99 %% become the master.
100 Q1 = Q#amqqueue{pid = QPid1,
101 slave_pids = SPids1,
102 gm_pids = AliveGM},
103 store_updated_slaves(Q1),
104 %% If we add and remove nodes at the
105 %% same time we might tell the old
106 %% master we need to sync and then
107 %% shut it down. So let's check if
108 %% the new master needs to sync.
109 maybe_auto_sync(Q1),
110 slaves_to_start_on_failure(Q1, DeadGMPids);
113111 _ ->
114 %% Master has changed, and we're not it.
115 %% [1].
116 Q1 = Q#amqqueue{slave_pids = Alive,
117 gm_pids = AliveGM,
118 down_slave_nodes = DSNs1},
119 store_updated_slaves(Q1)
120 end,
121 {ok, QPid1, DeadPids}
112 %% Master has changed, and we're not it.
113 %% [1].
114 Q1 = Q#amqqueue{slave_pids = Alive,
115 gm_pids = AliveGM},
116 store_updated_slaves(Q1),
117 []
118 end,
119 {ok, QPid1, DeadPids, Extra}
122120 end
123121 end).
124122 %% [1] We still update mnesia here in case the slave that is supposed
143141 %% corresponding entry in gm_pids. By contrast, due to the
144142 %% aforementioned restriction on updating the master pid, that pid may
145143 %% not be present in gm_pids, but only if said master has died.
144
145 %% Sometimes a slave dying means we need to start more on other
146 %% nodes - "exactly" mode can cause this to happen.
147 slaves_to_start_on_failure(Q, DeadGMPids) ->
148 %% In case Mnesia has not caught up yet, filter out nodes we know
149 %% to be dead.
150 ClusterNodes = rabbit_mnesia:cluster_nodes(running) --
151 [node(P) || P <- DeadGMPids],
152 {_, OldNodes, _} = actual_queue_nodes(Q),
153 {_, NewNodes} = suggested_queue_nodes(Q, ClusterNodes),
154 NewNodes -- OldNodes.
146155
147156 on_node_up() ->
148157 QNames =
233242 rabbit_log:log(mirroring, Level, "Mirrored ~s: " ++ Fmt,
234243 [rabbit_misc:rs(QName) | Args]).
235244
236 store_updated_slaves(Q = #amqqueue{pid = MPid,
237 slave_pids = SPids,
238 sync_slave_pids = SSPids,
239 down_slave_nodes = DSNs}) ->
245 store_updated_slaves(Q = #amqqueue{slave_pids = SPids,
246 sync_slave_pids = SSPids,
247 recoverable_slaves = RS}) ->
240248 %% TODO now that we clear sync_slave_pids in rabbit_durable_queue,
241249 %% do we still need this filtering?
242250 SSPids1 = [SSPid || SSPid <- SSPids, lists:member(SSPid, SPids)],
243 DSNs1 = DSNs -- [node(P) || P <- [MPid | SPids]],
244 Q1 = Q#amqqueue{sync_slave_pids = SSPids1,
245 down_slave_nodes = DSNs1,
246 state = live},
251 Q1 = Q#amqqueue{sync_slave_pids = SSPids1,
252 recoverable_slaves = update_recoverable(SPids, RS),
253 state = live},
247254 ok = rabbit_amqqueue:store_queue(Q1),
248255 %% Wake it up so that we emit a stats event
249256 rabbit_amqqueue:notify_policy_changed(Q1),
250257 Q1.
258
259 %% Recoverable nodes are those which we could promote if the whole
260 %% cluster were to suddenly stop and we then lose the master; i.e. all
261 %% nodes with running slaves, and all stopped nodes which had running
262 %% slaves when they were up.
263 %%
264 %% Therefore we aim here to add new nodes with slaves, and remove
265 %% running nodes without slaves. We also try to keep the order
266 %% constant, and similar to the live SPids field (i.e. oldest
267 %% first). That's not necessarily optimal if nodes spend a long time
268 %% down, but we don't have a good way to predict what the optimal is
269 %% in that case anyway, and we assume nodes will not just be down for
270 %% a long time without being removed.
271 update_recoverable(SPids, RS) ->
272 SNodes = [node(SPid) || SPid <- SPids],
273 RunningNodes = rabbit_mnesia:cluster_nodes(running),
274 AddNodes = SNodes -- RS,
275 DelNodes = RunningNodes -- SNodes, %% i.e. running with no slave
276 (RS -- DelNodes) ++ AddNodes.
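`update_recoverable/2` above is three list operations: add nodes that newly have slaves, drop running nodes that no longer do, and keep the remaining order intact (oldest first). A Python sketch of the same arithmetic, assuming node names are unique (Erlang's `--` removes only the first occurrence, so uniqueness makes the two equivalent):

```python
def update_recoverable(slave_nodes, recoverable, running_nodes):
    """(RS -- DelNodes) ++ AddNodes, where DelNodes are running
    nodes with no slave and AddNodes are newly-slaved nodes."""
    add = [n for n in slave_nodes if n not in recoverable]
    delete = [n for n in running_nodes if n not in slave_nodes]
    return [n for n in recoverable if n not in delete] + add
```

For example, if `b` keeps its slave, `a` is running without one, and `c` gains one, the recoverable list `[a, b]` becomes `[b, c]` — stopped nodes stay listed, which is exactly the point of the field.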
251277
252278 %%----------------------------------------------------------------------------
253279
343369 {NewMNode, NewSNodes} = suggested_queue_nodes(NewQ),
344370 OldNodes = [OldMNode | OldSNodes],
345371 NewNodes = [NewMNode | NewSNodes],
372 %% When a mirror dies, remove_from_queue/3 might have to add new
373 %% slaves (in "exactly" mode). It will check mnesia to see which
374 %% slaves there currently are. If drop_mirror/2 is invoked first
375 %% then when we end up in remove_from_queue/3 it will not see the
376 %% slaves that add_mirror/2 will add, and also want to add them
377 %% (even though we are not responding to the death of a
378 %% mirror). Breakage ensues.
346379 add_mirrors (QName, NewNodes -- OldNodes, async),
347380 drop_mirrors(QName, OldNodes -- NewNodes),
348381 %% This is for the case where no extra nodes were added but we changed to
205205 {error, not_found} ->
206206 gen_server2:reply(From, ok),
207207 {stop, normal, State};
208 {ok, Pid, DeadPids} ->
208 {ok, Pid, DeadPids, ExtraNodes} ->
209209 rabbit_mirror_queue_misc:report_deaths(Self, false, QName,
210210 DeadPids),
211211 case Pid of
212212 MPid ->
213213 %% master hasn't changed
214214 gen_server2:reply(From, ok),
215 rabbit_mirror_queue_misc:add_mirrors(
216 QName, ExtraNodes, async),
215217 noreply(State);
216218 Self ->
217219 %% we've become master
218220 QueueState = promote_me(From, State),
221 rabbit_mirror_queue_misc:add_mirrors(
222 QName, ExtraNodes, async),
219223 {become, rabbit_amqqueue_process, QueueState, hibernate};
220224 _ ->
221225 %% master has changed to not us
222226 gen_server2:reply(From, ok),
227 %% assertion, we don't need to add_mirrors/2 in this
228 %% branch, see last clause in remove_from_queue/2
229 [] = ExtraNodes,
223230 %% Since GM is by nature lazy we need to make sure
224231 %% there is some traffic when a master dies, to
225232 %% make sure all slaves get informed of the
245252 handle_cast({gm, Instruction}, State) ->
246253 handle_process_result(process_instruction(Instruction, State));
247254
248 handle_cast({deliver, Delivery = #delivery{sender = Sender}, true, Flow},
255 handle_cast({deliver, Delivery = #delivery{sender = Sender, flow = Flow}, true},
249256 State) ->
250257 %% Asynchronous, non-"mandatory", deliver mode.
251258 case Flow of
420427 {promote, CPid} -> {become, rabbit_mirror_queue_coordinator, [CPid]}
421428 end.
422429
430 handle_msg([_SPid], _From, hibernate_heartbeat) ->
431 %% See rabbit_mirror_queue_coordinator:handle_pre_hibernate/1
432 ok;
423433 handle_msg([_SPid], _From, request_depth) ->
424434 %% This is only of value to the master
425435 ok;
627637 (_Msgid, _Status, MTC0) ->
628638 MTC0
629639 end, gb_trees:empty(), MS),
630 Deliveries = [Delivery#delivery{mandatory = false} || %% [0]
640 Deliveries = [promote_delivery(Delivery) ||
631641 {_ChPid, {PubQ, _PendCh, _ChState}} <- dict:to_list(SQ),
632642 Delivery <- queue:to_list(PubQ)],
633643 AwaitGmDown = [ChPid || {ChPid, {_, _, down_from_ch}} <- dict:to_list(SQ)],
639649 Q1, rabbit_mirror_queue_master, MasterState, RateTRef, Deliveries, KS1,
640650 MTC).
641651
642 %% [0] We reset mandatory to false here because we will have sent the
643 %% mandatory_received already as soon as we got the message
652 %% We reset mandatory to false here because we will have sent the
653 %% mandatory_received already as soon as we got the message. We also
654 %% need to send an ack for these messages since the channel is waiting
655 %% for one for the via-GM case and we will not now receive one.
656 promote_delivery(Delivery = #delivery{sender = Sender, flow = Flow}) ->
657 case Flow of
658 flow -> credit_flow:ack(Sender);
659 noflow -> ok
660 end,
661 Delivery#delivery{mandatory = false}.
644662
645663 noreply(State) ->
646664 {NewState, Timeout} = next_state(State),
822840 State1 #state { sender_queues = SQ1, msg_id_status = MS1 }.
823841
824842
825 process_instruction({publish, ChPid, MsgProps,
843 process_instruction({publish, ChPid, Flow, MsgProps,
826844 Msg = #basic_message { id = MsgId }}, State) ->
845 maybe_flow_ack(ChPid, Flow),
827846 State1 = #state { backing_queue = BQ, backing_queue_state = BQS } =
828847 publish_or_discard(published, ChPid, MsgId, State),
829 BQS1 = BQ:publish(Msg, MsgProps, true, ChPid, BQS),
848 BQS1 = BQ:publish(Msg, MsgProps, true, ChPid, Flow, BQS),
830849 {ok, State1 #state { backing_queue_state = BQS1 }};
831 process_instruction({publish_delivered, ChPid, MsgProps,
850 process_instruction({publish_delivered, ChPid, Flow, MsgProps,
832851 Msg = #basic_message { id = MsgId }}, State) ->
852 maybe_flow_ack(ChPid, Flow),
833853 State1 = #state { backing_queue = BQ, backing_queue_state = BQS } =
834854 publish_or_discard(published, ChPid, MsgId, State),
835855 true = BQ:is_empty(BQS),
836 {AckTag, BQS1} = BQ:publish_delivered(Msg, MsgProps, ChPid, BQS),
856 {AckTag, BQS1} = BQ:publish_delivered(Msg, MsgProps, ChPid, Flow, BQS),
837857 {ok, maybe_store_ack(true, MsgId, AckTag,
838858 State1 #state { backing_queue_state = BQS1 })};
839 process_instruction({discard, ChPid, MsgId}, State) ->
859 process_instruction({discard, ChPid, Flow, MsgId}, State) ->
860 maybe_flow_ack(ChPid, Flow),
840861 State1 = #state { backing_queue = BQ, backing_queue_state = BQS } =
841862 publish_or_discard(discarded, ChPid, MsgId, State),
842 BQS1 = BQ:discard(MsgId, ChPid, BQS),
863 BQS1 = BQ:discard(MsgId, ChPid, Flow, BQS),
843864 {ok, State1 #state { backing_queue_state = BQS1 }};
844865 process_instruction({drop, Length, Dropped, AckRequired},
845866 State = #state { backing_queue = BQ,
898919 BQ:delete_and_terminate(Reason, BQS),
899920 {stop, State #state { backing_queue_state = undefined }}.
900921
922 maybe_flow_ack(ChPid, flow) -> credit_flow:ack(ChPid);
923 maybe_flow_ack(_ChPid, noflow) -> ok.
924
901925 msg_ids_to_acktags(MsgIds, MA) ->
902926 {AckTags, MA1} =
903927 lists:foldl(
262262 Props1 = Props#message_properties{needs_confirming = false},
263263 {MA1, BQS1} =
264264 case Unacked of
265 false -> {MA, BQ:publish(Msg, Props1, true, none, BQS)};
265 false -> {MA,
266 BQ:publish(Msg, Props1, true, none, noflow, BQS)};
266267 true -> {AckTag, BQS2} = BQ:publish_delivered(
267 Msg, Props1, none, BQS),
268 Msg, Props1, none, noflow, BQS),
268269 {[{Msg#basic_message.id, AckTag} | MA], BQS2}
269270 end,
270271 slave_sync_loop(Args, {MA1, TRef, BQS1});
4343 -export([format/2, format_many/1, format_stderr/2]).
4444 -export([unfold/2, ceil/1, queue_fold/3]).
4545 -export([sort_field_table/1]).
46 -export([pid_to_string/1, string_to_pid/1, node_to_fake_pid/1]).
46 -export([pid_to_string/1, string_to_pid/1,
47 pid_change_node/2, node_to_fake_pid/1]).
4748 -export([version_compare/2, version_compare/3]).
4849 -export([version_minor_equivalent/2]).
4950 -export([dict_cons/3, orddict_cons/3, gb_trees_cons/3]).
5758 -export([format_message_queue/2]).
5859 -export([append_rpc_all_nodes/4]).
5960 -export([os_cmd/1]).
61 -export([is_os_process_alive/1]).
6062 -export([gb_sets_difference/2]).
6163 -export([version/0, otp_release/0, which_applications/0]).
6264 -export([sequence_error/1]).
195197 (rabbit_framing:amqp_table()) -> rabbit_framing:amqp_table()).
196198 -spec(pid_to_string/1 :: (pid()) -> string()).
197199 -spec(string_to_pid/1 :: (string()) -> pid()).
200 -spec(pid_change_node/2 :: (pid(), node()) -> pid()).
198201 -spec(node_to_fake_pid/1 :: (atom()) -> pid()).
199202 -spec(version_compare/2 :: (string(), string()) -> 'lt' | 'eq' | 'gt').
200203 -spec(version_compare/3 ::
229232 -spec(format_message_queue/2 :: (any(), priority_queue:q()) -> term()).
230233 -spec(append_rpc_all_nodes/4 :: ([node()], atom(), atom(), [any()]) -> [any()]).
231234 -spec(os_cmd/1 :: (string()) -> string()).
235 -spec(is_os_process_alive/1 :: (non_neg_integer()) -> boolean()).
232236 -spec(gb_sets_difference/2 :: (gb_sets:set(), gb_sets:set()) -> gb_sets:set()).
233237 -spec(version/0 :: () -> string()).
234238 -spec(otp_release/0 :: () -> string()).
519523 Res = mnesia:sync_transaction(TxFun),
520524 DiskLogAfter = mnesia_dumper:get_log_writes(),
521525 case DiskLogAfter == DiskLogBefore of
522 true -> Res;
523 false -> {sync, Res}
526 true -> file_handle_cache_stats:update(
527 mnesia_ram_tx),
528 Res;
529 false -> file_handle_cache_stats:update(
530 mnesia_disk_tx),
531 {sync, Res}
524532 end;
525533 true -> mnesia:sync_transaction(TxFun)
526534 end
685693 %% regardless of what node we are running on. The representation also
686694 %% permits easy identification of the pid's node.
687695 pid_to_string(Pid) when is_pid(Pid) ->
688 %% see http://erlang.org/doc/apps/erts/erl_ext_dist.html (8.10 and
689 %% 8.7)
690 <<131,103,100,NodeLen:16,NodeBin:NodeLen/binary,Id:32,Ser:32,Cre:8>>
691 = term_to_binary(Pid),
692 Node = binary_to_term(<<131,100,NodeLen:16,NodeBin:NodeLen/binary>>),
696 {Node, Cre, Id, Ser} = decompose_pid(Pid),
693697 format("<~s.~B.~B.~B>", [Node, Cre, Id, Ser]).
694698
695699 %% inverse of above
700704 case re:run(Str, "^<(.*)\\.(\\d+)\\.(\\d+)\\.(\\d+)>\$",
701705 [{capture,all_but_first,list}]) of
702706 {match, [NodeStr, CreStr, IdStr, SerStr]} ->
703 <<131,NodeEnc/binary>> = term_to_binary(list_to_atom(NodeStr)),
704707 [Cre, Id, Ser] = lists:map(fun list_to_integer/1,
705708 [CreStr, IdStr, SerStr]),
706 binary_to_term(<<131,103,NodeEnc/binary,Id:32,Ser:32,Cre:8>>);
709 compose_pid(list_to_atom(NodeStr), Cre, Id, Ser);
707710 nomatch ->
708711 throw(Err)
709712 end.
710713
714 pid_change_node(Pid, NewNode) ->
715 {_OldNode, Cre, Id, Ser} = decompose_pid(Pid),
716 compose_pid(NewNode, Cre, Id, Ser).
717
711718 %% node(node_to_fake_pid(Node)) =:= Node.
712719 node_to_fake_pid(Node) ->
713 string_to_pid(format("<~s.0.0.0>", [Node])).
720 compose_pid(Node, 0, 0, 0).
721
722 decompose_pid(Pid) when is_pid(Pid) ->
723 %% see http://erlang.org/doc/apps/erts/erl_ext_dist.html (8.10 and
724 %% 8.7)
725 <<131,103,100,NodeLen:16,NodeBin:NodeLen/binary,Id:32,Ser:32,Cre:8>>
726 = term_to_binary(Pid),
727 Node = binary_to_term(<<131,100,NodeLen:16,NodeBin:NodeLen/binary>>),
728 {Node, Cre, Id, Ser}.
729
730 compose_pid(Node, Cre, Id, Ser) ->
731 <<131,NodeEnc/binary>> = term_to_binary(Node),
732 binary_to_term(<<131,103,NodeEnc/binary,Id:32,Ser:32,Cre:8>>).
714733
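The binary patterns in decompose_pid/1 and compose_pid/4 can be exercised in isolation. A minimal sketch (module name hypothetical); note that the layout matched here (PID_EXT = 103 with an 8-bit creation and ATOM_EXT = 100) is what the OTP releases contemporary with this code emit, while newer OTP uses NEW_PID_EXT with a 32-bit creation, so this pattern would not match there:

```erlang
-module(pid_codec_demo).
-export([roundtrip/1]).

%% Illustrative only: decompose a pid via the external term format
%% and rebuild an identical pid from the same fields, mirroring
%% decompose_pid/compose_pid above.
roundtrip(Pid) when is_pid(Pid) ->
    <<131,103,100,Len:16,NodeBin:Len/binary,Id:32,Ser:32,Cre:8>> =
        term_to_binary(Pid),
    Pid =:= binary_to_term(
              <<131,103,100,Len:16,NodeBin:Len/binary,Id:32,Ser:32,Cre:8>>).
```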
715734 version_compare(A, B, lte) ->
716735 case version_compare(A, B) of
914933 end
915934 end.
916935
936 is_os_process_alive(Pid) ->
937 with_os([{unix, fun () ->
938 run_ps(Pid) =:= 0
939 end},
940 {win32, fun () ->
941 Cmd = "tasklist /nh /fi \"pid eq " ++ Pid ++ "\" ",
942 Res = os_cmd(Cmd ++ "2>&1"),
943 case re:run(Res, "erl\\.exe", [{capture, none}]) of
944 match -> true;
945 _ -> false
946 end
947 end}]).
948
949 with_os(Handlers) ->
950 {OsFamily, _} = os:type(),
951 case proplists:get_value(OsFamily, Handlers) of
952 undefined -> throw({unsupported_os, OsFamily});
953 Handler -> Handler()
954 end.
955
956 run_ps(Pid) ->
957 Port = erlang:open_port({spawn, "ps -p " ++ Pid},
958 [exit_status, {line, 16384},
959 use_stdio, stderr_to_stdout]),
960 exit_loop(Port).
961
962 exit_loop(Port) ->
963 receive
964 {Port, {exit_status, Rc}} -> Rc;
965 {Port, _} -> exit_loop(Port)
966 end.
967
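run_ps/1 and exit_loop/1 above use the standard open_port pattern for harvesting an external command's exit status. The same shape, standalone (module name hypothetical, command arbitrary):

```erlang
-module(exit_status_demo).
-export([run/1]).

%% Illustrative only: spawn Cmd as a port and return its numeric
%% exit status, discarding any output it produces.
run(Cmd) ->
    Port = erlang:open_port({spawn, Cmd},
                            [exit_status, use_stdio, stderr_to_stdout]),
    loop(Port).

loop(Port) ->
    receive
        {Port, {exit_status, Rc}} -> Rc;   %% port closed, status known
        {Port, _Output}           -> loop(Port)
    end.
```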
917968 gb_sets_difference(S1, S2) ->
918969 gb_sets:fold(fun gb_sets:delete_any/2, S1, S2).
919970
108108 %% We intuitively expect the global name server to be synced when
109109 %% Mnesia is up. In fact that's not guaranteed to be the case -
110110 %% let's make it so.
111 ok = global:sync(),
111 ok = rabbit_node_monitor:global_sync(),
112112 ok.
113113
114114 init_from_config() ->
115 FindBadNodeNames = fun
116 (Name, BadNames) when is_atom(Name) -> BadNames;
117 (Name, BadNames) -> [Name | BadNames]
118 end,
115119 {TryNodes, NodeType} =
116120 case application:get_env(rabbit, cluster_nodes) of
121 {ok, {Nodes, Type} = Config}
122 when is_list(Nodes) andalso (Type == disc orelse Type == ram) ->
123 case lists:foldr(FindBadNodeNames, [], Nodes) of
124 [] -> Config;
125 BadNames -> e({invalid_cluster_node_names, BadNames})
126 end;
127 {ok, {_, BadType}} when BadType /= disc andalso BadType /= ram ->
128 e({invalid_cluster_node_type, BadType});
117129 {ok, Nodes} when is_list(Nodes) ->
118 Config = {Nodes -- [node()], case lists:member(node(), Nodes) of
119 true -> disc;
120 false -> ram
121 end},
122 rabbit_log:warning(
123 "Converting legacy 'cluster_nodes' configuration~n ~w~n"
124 "to~n ~w.~n~n"
125 "Please update the configuration to the new format "
126 "{Nodes, NodeType}, where Nodes contains the nodes that the "
127 "node will try to cluster with, and NodeType is either "
128 "'disc' or 'ram'~n", [Nodes, Config]),
129 Config;
130 {ok, Config} ->
131 Config
130 %% The legacy syntax (a nodes list without the node
131 %% type) is unsupported.
132 case lists:foldr(FindBadNodeNames, [], Nodes) of
133 [] -> e(cluster_node_type_mandatory);
134 _ -> e(invalid_cluster_nodes_conf)
135 end;
136 {ok, _} ->
137 e(invalid_cluster_nodes_conf)
132138 end,
133139 case TryNodes of
134140 [] -> init_db_and_upgrade([node()], disc, false);
849855
850856 e(Tag) -> throw({error, {Tag, error_description(Tag)}}).
851857
858 error_description({invalid_cluster_node_names, BadNames}) ->
859 "In the 'cluster_nodes' configuration key, the following node names "
860 "are invalid: " ++ lists:flatten(io_lib:format("~p", [BadNames]));
861 error_description({invalid_cluster_node_type, BadType}) ->
862 "In the 'cluster_nodes' configuration key, the node type is invalid "
863 "(expected 'disc' or 'ram'): " ++
864 lists:flatten(io_lib:format("~p", [BadType]));
865 error_description(cluster_node_type_mandatory) ->
866 "The 'cluster_nodes' configuration key must indicate the node type: "
867 "either {[...], disc} or {[...], ram}";
868 error_description(invalid_cluster_nodes_conf) ->
869 "The 'cluster_nodes' configuration key is invalid, it must be of the "
870 "form {[Nodes], Type}, where Nodes is a list of node names and "
871 "Type is either 'disc' or 'ram'";
852872 error_description(clustering_only_disc_node) ->
853873 "You cannot cluster a node if it is the only disc node in its existing "
854874 " cluster. If new nodes joined while this node was offline, use "
0 %% The contents of this file are subject to the Mozilla Public License
1 %% Version 1.1 (the "License"); you may not use this file except in
2 %% compliance with the License. You may obtain a copy of the License
3 %% at http://www.mozilla.org/MPL/
4 %%
5 %% Software distributed under the License is distributed on an "AS IS"
6 %% basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See
7 %% the License for the specific language governing rights and
8 %% limitations under the License.
9 %%
10 %% The Original Code is RabbitMQ.
11 %%
12 %% The Initial Developer of the Original Code is GoPivotal, Inc.
13 %% Copyright (c) 2007-2014 GoPivotal, Inc. All rights reserved.
14 %%
15
16 -module(rabbit_mnesia_rename).
17 -include("rabbit.hrl").
18
19 -export([rename/2]).
20 -export([maybe_finish/1]).
21
22 -define(CONVERT_TABLES, [schema, rabbit_durable_queue]).
23
24 %% Supports renaming the nodes in the Mnesia database. In order to do
25 %% this, we take a backup of the database, traverse the backup
26 %% changing node names and pids as we go, then restore it.
27 %%
28 %% That's enough for a standalone node, for clusters the story is more
29 %% complex. We can take pairs of nodes From and To, but backing up and
30 %% restoring the database changes schema cookies, so if we just do
31 %% this on all nodes the cluster will refuse to re-form with
32 %% "Incompatible schema cookies.". Therefore we do something similar
33 %% to what we do for upgrades - the first node in the cluster to
34 %% restart becomes the authority, and other nodes wipe their own
35 %% Mnesia state and rejoin. They also need to tell Mnesia the old node
36 %% is not coming back.
37 %%
38 %% If we are renaming nodes one at a time then the running cluster
39 %% might not be aware that a rename has taken place, so after we wipe
40 %% and rejoin we then update any tables (in practice just
41 %% rabbit_durable_queue) which should be aware that we have changed.
42
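The backup traversal described above ultimately rewrites node names and pids wherever they occur in a term, via update_term/2 later in this module. The recursive walk can be sketched standalone (module name hypothetical):

```erlang
-module(rename_walk_demo).
-export([rewrite/2]).

%% Illustrative only: recursively substitute atoms through an
%% arbitrarily nested term, the same shape as update_term/2.
rewrite(Map, L) when is_list(L)  ->
    [rewrite(Map, I) || I <- L];
rewrite(Map, T) when is_tuple(T) ->
    list_to_tuple(rewrite(Map, tuple_to_list(T)));
rewrite(Map, A) when is_atom(A)  ->
    case dict:find(A, Map) of
        {ok, New} -> New;
        error     -> A
    end;
rewrite(_Map, Other) ->
    Other.
```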
43 %%----------------------------------------------------------------------------
44
45 -ifdef(use_specs).
46
47 -spec(rename/2 :: (node(), [{node(), node()}]) -> 'ok').
48 -spec(maybe_finish/1 :: ([node()]) -> 'ok').
49
50 -endif.
51
52 %%----------------------------------------------------------------------------
53
54 rename(Node, NodeMapList) ->
55 try
56 %% Check everything is correct and figure out what we are
57 %% changing from and to.
58 {FromNode, ToNode, NodeMap} = prepare(Node, NodeMapList),
59
60 %% We backup and restore Mnesia even if other nodes are
61 %% running at the time, and defer the final decision about
62 %% whether to use our mutated copy or rejoin the cluster until
63 %% we restart. That means we might be mutating our copy of the
64 %% database while the cluster is running. *Do not* contact the
65 %% cluster while this is happening, we are likely to get
66 %% confused.
67 application:set_env(kernel, dist_auto_connect, never),
68
69 %% Take a copy we can restore from if we abandon the
70 %% rename. We don't restore from the "backup" since restoring
71 %% that changes schema cookies and might stop us rejoining the
72 %% cluster.
73 ok = rabbit_mnesia:copy_db(mnesia_copy_dir()),
74
75 %% And make the actual changes
76 rabbit_control_main:become(FromNode),
77 take_backup(before_backup_name()),
78 convert_backup(NodeMap, before_backup_name(), after_backup_name()),
79 ok = rabbit_file:write_term_file(rename_config_name(),
80 [{FromNode, ToNode}]),
81 convert_config_files(NodeMap),
82 rabbit_control_main:become(ToNode),
83 restore_backup(after_backup_name()),
84 ok
85 after
86 stop_mnesia()
87 end.
88
89 prepare(Node, NodeMapList) ->
90 %% If we have a previous rename and haven't started since, give up.
91 case rabbit_file:is_dir(dir()) of
92 true -> exit({rename_in_progress,
93 "Restart node under old name to roll back"});
94 false -> ok = rabbit_file:ensure_dir(mnesia_copy_dir())
95 end,
96
97 %% Check we don't have two nodes mapped to the same node
98 {FromNodes, ToNodes} = lists:unzip(NodeMapList),
99 case length(FromNodes) - length(lists:usort(ToNodes)) of
100 0 -> ok;
101 _ -> exit({duplicate_node, ToNodes})
102 end,
103
104 %% Figure out which node we are before and after the change
105 FromNode = case [From || {From, To} <- NodeMapList,
106 To =:= Node] of
107 [N] -> N;
108 [] -> Node
109 end,
110 NodeMap = dict:from_list(NodeMapList),
111 ToNode = case dict:find(FromNode, NodeMap) of
112 {ok, N2} -> N2;
113 error -> FromNode
114 end,
115
116 %% Check that we are in the cluster, all old nodes are in the
117 %% cluster, and no new nodes are.
118 Nodes = rabbit_mnesia:cluster_nodes(all),
119 case {FromNodes -- Nodes, ToNodes -- (ToNodes -- Nodes),
120 lists:member(Node, Nodes ++ ToNodes)} of
121 {[], [], true} -> ok;
122 {[], [], false} -> exit({i_am_not_involved, Node});
123 {F, [], _} -> exit({nodes_not_in_cluster, F});
124 {_, T, _} -> exit({nodes_already_in_cluster, T})
125 end,
126 {FromNode, ToNode, NodeMap}.
127
128 take_backup(Backup) ->
129 start_mnesia(),
130 ok = mnesia:backup(Backup),
131 stop_mnesia().
132
133 restore_backup(Backup) ->
134 ok = mnesia:install_fallback(Backup, [{scope, local}]),
135 start_mnesia(),
136 stop_mnesia(),
137 rabbit_mnesia:force_load_next_boot().
138
139 maybe_finish(AllNodes) ->
140 case rabbit_file:read_term_file(rename_config_name()) of
141 {ok, [{FromNode, ToNode}]} -> finish(FromNode, ToNode, AllNodes);
142 _ -> ok
143 end.
144
145 finish(FromNode, ToNode, AllNodes) ->
146 case node() of
147 ToNode ->
148 case rabbit_upgrade:nodes_running(AllNodes) of
149 [] -> finish_primary(FromNode, ToNode);
150 _ -> finish_secondary(FromNode, ToNode, AllNodes)
151 end;
152 FromNode ->
153 rabbit_log:info(
154 "Abandoning rename from ~s to ~s since we are still ~s~n",
155 [FromNode, ToNode, FromNode]),
156 [{ok, _} = file:copy(backup_of_conf(F), F) || F <- config_files()],
157 ok = rabbit_file:recursive_delete([rabbit_mnesia:dir()]),
158 ok = rabbit_file:recursive_copy(
159 mnesia_copy_dir(), rabbit_mnesia:dir()),
160 delete_rename_files();
161 _ ->
162 %% Boot will almost certainly fail but we might as
163 %% well just log this
164 rabbit_log:info(
165 "Rename attempted from ~s to ~s but we are ~s - ignoring.~n",
166 [FromNode, ToNode, node()])
167 end.
168
169 finish_primary(FromNode, ToNode) ->
170 rabbit_log:info("Restarting as primary after rename from ~s to ~s~n",
171 [FromNode, ToNode]),
172 delete_rename_files(),
173 ok.
174
175 finish_secondary(FromNode, ToNode, AllNodes) ->
176 rabbit_log:info("Restarting as secondary after rename from ~s to ~s~n",
177 [FromNode, ToNode]),
178 rabbit_upgrade:secondary_upgrade(AllNodes),
179 rename_in_running_mnesia(FromNode, ToNode),
180 delete_rename_files(),
181 ok.
182
183 dir() -> rabbit_mnesia:dir() ++ "-rename".
184 before_backup_name() -> dir() ++ "/backup-before".
185 after_backup_name() -> dir() ++ "/backup-after".
186 rename_config_name() -> dir() ++ "/pending.config".
187 mnesia_copy_dir() -> dir() ++ "/mnesia-copy".
188
189 delete_rename_files() -> ok = rabbit_file:recursive_delete([dir()]).
190
191 start_mnesia() -> rabbit_misc:ensure_ok(mnesia:start(), cannot_start_mnesia),
192 rabbit_table:force_load(),
193 rabbit_table:wait_for_replicated().
194 stop_mnesia() -> stopped = mnesia:stop().
195
196 convert_backup(NodeMap, FromBackup, ToBackup) ->
197 mnesia:traverse_backup(
198 FromBackup, ToBackup,
199 fun
200 (Row, Acc) ->
201 case lists:member(element(1, Row), ?CONVERT_TABLES) of
202 true -> {[update_term(NodeMap, Row)], Acc};
203 false -> {[Row], Acc}
204 end
205 end, switched).
206
207 config_files() ->
208 [rabbit_node_monitor:running_nodes_filename(),
209 rabbit_node_monitor:cluster_status_filename()].
210
211 backup_of_conf(Path) ->
212 filename:join([dir(), filename:basename(Path)]).
213
214 convert_config_files(NodeMap) ->
215 [convert_config_file(NodeMap, Path) || Path <- config_files()].
216
217 convert_config_file(NodeMap, Path) ->
218 {ok, Term} = rabbit_file:read_term_file(Path),
219 {ok, _} = file:copy(Path, backup_of_conf(Path)),
220 ok = rabbit_file:write_term_file(Path, update_term(NodeMap, Term)).
221
222 lookup_node(OldNode, NodeMap) ->
223 case dict:find(OldNode, NodeMap) of
224 {ok, NewNode} -> NewNode;
225 error -> OldNode
226 end.
227
228 mini_map(FromNode, ToNode) -> dict:from_list([{FromNode, ToNode}]).
229
230 update_term(NodeMap, L) when is_list(L) ->
231 [update_term(NodeMap, I) || I <- L];
232 update_term(NodeMap, T) when is_tuple(T) ->
233 list_to_tuple(update_term(NodeMap, tuple_to_list(T)));
234 update_term(NodeMap, Node) when is_atom(Node) ->
235 lookup_node(Node, NodeMap);
236 update_term(NodeMap, Pid) when is_pid(Pid) ->
237 rabbit_misc:pid_change_node(Pid, lookup_node(node(Pid), NodeMap));
238 update_term(_NodeMap, Term) ->
239 Term.
240
241 rename_in_running_mnesia(FromNode, ToNode) ->
242 All = rabbit_mnesia:cluster_nodes(all),
243 Running = rabbit_mnesia:cluster_nodes(running),
244 case {lists:member(FromNode, Running), lists:member(ToNode, All)} of
245 {false, true} -> ok;
246 {true, _} -> exit({old_node_running, FromNode});
247 {_, false} -> exit({new_node_not_in_cluster, ToNode})
248 end,
249 {atomic, ok} = mnesia:del_table_copy(schema, FromNode),
250 Map = mini_map(FromNode, ToNode),
251 {atomic, _} = transform_table(rabbit_durable_queue, Map),
252 ok.
253
254 transform_table(Table, Map) ->
255 mnesia:sync_transaction(
256 fun () ->
257 mnesia:lock({table, Table}, write),
258 transform_table(Table, Map, mnesia:first(Table))
259 end).
260
261 transform_table(_Table, _Map, '$end_of_table') ->
262 ok;
263 transform_table(Table, Map, Key) ->
264 [Term] = mnesia:read(Table, Key, write),
265 ok = mnesia:write(Table, update_term(Map, Term), write),
266 transform_table(Table, Map, mnesia:next(Table, Key)).
472472
473473 read(MsgId,
474474 CState = #client_msstate { cur_file_cache_ets = CurFileCacheEts }) ->
475 file_handle_cache_stats:update(msg_store_read),
475476 %% Check the cur file cache
476477 case ets:lookup(CurFileCacheEts, MsgId) of
477478 [] ->
506507 client_write(MsgId, Msg, Flow,
507508 CState = #client_msstate { cur_file_cache_ets = CurFileCacheEts,
508509 client_ref = CRef }) ->
510 file_handle_cache_stats:update(msg_store_write),
509511 ok = client_update_flying(+1, MsgId, CState),
510512 ok = update_msg_cache(CurFileCacheEts, MsgId, Msg),
511513 ok = server_cast(CState, {write, CRef, MsgId, Flow}).
12981300
12991301 open_file(Dir, FileName, Mode) ->
13001302 file_handle_cache:open(form_filename(Dir, FileName), ?BINARY_MODE ++ Mode,
1301 [{write_buffer, ?HANDLE_CACHE_BUFFER_SIZE}]).
1303 [{write_buffer, ?HANDLE_CACHE_BUFFER_SIZE},
1304 {read_buffer, ?HANDLE_CACHE_BUFFER_SIZE}]).
13021305
13031306 close_handle(Key, CState = #client_msstate { file_handle_cache = FHC }) ->
13041307 CState #client_msstate { file_handle_cache = close_handle(Key, FHC) };
393393 mnesia:dirty_read(rabbit_listener, Node).
394394
395395 on_node_down(Node) ->
396 ok = mnesia:dirty_delete(rabbit_listener, Node).
396 case lists:member(Node, nodes()) of
397 false -> ok = mnesia:dirty_delete(rabbit_listener, Node);
398 true -> rabbit_log:info(
399 "Keep ~s listeners: the node is already back~n", [Node])
400 end.
397401
398402 start_client(Sock, SockTransform) ->
399403 {ok, _Child, Reader} = supervisor:start_child(rabbit_tcp_client_sup, []),
2424 update_cluster_status/0, reset_cluster_status/0]).
2525 -export([notify_node_up/0, notify_joined_cluster/0, notify_left_cluster/1]).
2626 -export([partitions/0, partitions/1, status/1, subscribe/1]).
27 -export([pause_minority_guard/0]).
27 -export([pause_partition_guard/0]).
28 -export([global_sync/0]).
2829
2930 %% gen_server callbacks
3031 -export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2,
3132 code_change/3]).
3233
3334 %% Utils
34 -export([all_rabbit_nodes_up/0, run_outside_applications/1, ping_all/0,
35 -export([all_rabbit_nodes_up/0, run_outside_applications/2, ping_all/0,
3536 alive_nodes/1, alive_rabbit_nodes/1]).
3637
3738 -define(SERVER, ?MODULE).
6364 -spec(partitions/1 :: ([node()]) -> [{node(), [node()]}]).
6465 -spec(status/1 :: ([node()]) -> {[{node(), [node()]}], [node()]}).
6566 -spec(subscribe/1 :: (pid()) -> 'ok').
66 -spec(pause_minority_guard/0 :: () -> 'ok' | 'pausing').
67 -spec(pause_partition_guard/0 :: () -> 'ok' | 'pausing').
6768
6869 -spec(all_rabbit_nodes_up/0 :: () -> boolean()).
69 -spec(run_outside_applications/1 :: (fun (() -> any())) -> pid()).
70 -spec(run_outside_applications/2 :: (fun (() -> any()), boolean()) -> pid()).
7071 -spec(ping_all/0 :: () -> 'ok').
7172 -spec(alive_nodes/1 :: ([node()]) -> [node()]).
7273 -spec(alive_rabbit_nodes/1 :: ([node()]) -> [node()]).
193194 gen_server:cast(?SERVER, {subscribe, Pid}).
194195
195196 %%----------------------------------------------------------------------------
196 %% pause_minority safety
197 %% pause_minority/pause_if_all_down safety
197198 %%----------------------------------------------------------------------------
198199
199200 %% If we are in a minority and pause_minority mode then a) we are
200201 %% going to shut down imminently and b) we should not confirm anything
201202 %% until then, since anything we confirm is likely to be lost.
202203 %%
203 %% We could confirm something by having an HA queue see the minority
204 %% The same principles apply to a node which isn't part of the preferred
205 %% partition when we are in pause_if_all_down mode.
206 %%
207 %% We could confirm something by having an HA queue see the pausing
204208 %% state (and fail over into it) before the node monitor stops us, or
205209 %% by using unmirrored queues and just having them vanish (and
206210 %% confirming messages as thrown away).
207211 %%
208212 %% So we have channels call in here before issuing confirms, to do a
209 %% lightweight check that we have not entered a minority state.
210
211 pause_minority_guard() ->
212 case get(pause_minority_guard) of
213 not_minority_mode ->
213 %% lightweight check that we have not entered a pausing state.
214
215 pause_partition_guard() ->
216 case get(pause_partition_guard) of
217 not_pause_mode ->
214218 ok;
215219 undefined ->
216220 {ok, M} = application:get_env(rabbit, cluster_partition_handling),
217221 case M of
218 pause_minority -> pause_minority_guard([]);
219 _ -> put(pause_minority_guard, not_minority_mode),
220 ok
222 pause_minority ->
223 pause_minority_guard([], ok);
224 {pause_if_all_down, PreferredNodes, _} ->
225 pause_if_all_down_guard(PreferredNodes, [], ok);
226 _ ->
227 put(pause_partition_guard, not_pause_mode),
228 ok
221229 end;
222 {minority_mode, Nodes} ->
223 pause_minority_guard(Nodes)
224 end.
225
226 pause_minority_guard(LastNodes) ->
230 {minority_mode, Nodes, LastState} ->
231 pause_minority_guard(Nodes, LastState);
232 {pause_if_all_down_mode, PreferredNodes, Nodes, LastState} ->
233 pause_if_all_down_guard(PreferredNodes, Nodes, LastState)
234 end.
235
236 pause_minority_guard(LastNodes, LastState) ->
227237 case nodes() of
228 LastNodes -> ok;
229 _ -> put(pause_minority_guard, {minority_mode, nodes()}),
230 case majority() of
231 false -> pausing;
232 true -> ok
233 end
234 end.
238 LastNodes -> LastState;
239 _ -> NewState = case majority() of
240 false -> pausing;
241 true -> ok
242 end,
243 put(pause_partition_guard,
244 {minority_mode, nodes(), NewState}),
245 NewState
246 end.
247
248 pause_if_all_down_guard(PreferredNodes, LastNodes, LastState) ->
249 case nodes() of
250 LastNodes -> LastState;
251 _ -> NewState = case in_preferred_partition(PreferredNodes) of
252 false -> pausing;
253 true -> ok
254 end,
255 put(pause_partition_guard,
256 {pause_if_all_down_mode, PreferredNodes, nodes(),
257 NewState}),
258 NewState
259 end.
260
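pause_partition_guard/0 above caches its decision in the process dictionary so that the per-confirm check stays cheap. The idiom in isolation (all names hypothetical):

```erlang
-module(pd_cache_demo).
-export([mode/0]).

%% Illustrative only: compute a per-process value once, then serve
%% it from the process dictionary on subsequent calls.
mode() ->
    case get(cached_mode) of
        undefined ->
            Mode = lookup_mode(),      %% the expensive part, done once
            put(cached_mode, Mode),
            Mode;
        Mode ->
            Mode
    end.

lookup_mode() -> ignore.               %% hypothetical stand-in
```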
261 %%----------------------------------------------------------------------------
262 %% "global" hang workaround.
263 %%----------------------------------------------------------------------------
264
265 %% This code works around a possible inconsistency in the "global"
266 %% state, which can cause global:sync/0 to never return.
267 %%
268 %% 1. A process is spawned.
269 %% 2. If global:sync() has not returned after 15 seconds, the
270 %% "global" state is parsed.
271 %% 3. If it detects that a sync has been blocked for more than
272 %% 10 seconds, the process sends fake nodedown/nodeup events to
273 %% the two nodes involved (one local, one remote).
274 %% 4. Both "global" instances restart their synchronisation.
275 %% 5. global:sync() finally returns.
276 %%
277 %% FIXME: Remove this workaround once we get rid of the change to
278 %% "dist_auto_connect" and fix the bugs uncovered.
279
280 global_sync() ->
281 Pid = spawn(fun workaround_global_hang/0),
282 ok = global:sync(),
283 Pid ! global_sync_done,
284 ok.
285
286 workaround_global_hang() ->
287 receive
288 global_sync_done ->
289 ok
290 after 15000 ->
291 find_blocked_global_peers()
292 end.
293
294 find_blocked_global_peers() ->
295 {status, _, _, [Dict | _]} = sys:get_status(global_name_server),
296 find_blocked_global_peers1(Dict).
297
298 find_blocked_global_peers1([{{sync_tag_his, Peer}, Timestamp} | Rest]) ->
299 Diff = timer:now_diff(erlang:now(), Timestamp),
300 if
301 Diff >= 10000 -> unblock_global_peer(Peer);
302 true -> ok
303 end,
304 find_blocked_global_peers1(Rest);
305 find_blocked_global_peers1([_ | Rest]) ->
306 find_blocked_global_peers1(Rest);
307 find_blocked_global_peers1([]) ->
308 ok.
309
310 unblock_global_peer(PeerNode) ->
311 ThisNode = node(),
312 PeerState = rpc:call(PeerNode, sys, get_status, [global_name_server]),
313 error_logger:info_msg(
314 "Global hang workaround: global state on ~s seems broken~n"
315 " * Peer global state: ~p~n"
316 " * Local global state: ~p~n"
317 "Faking nodedown/nodeup between ~s and ~s~n",
318 [PeerNode, PeerState, sys:get_status(global_name_server),
319 PeerNode, ThisNode]),
320 {global_name_server, ThisNode} ! {nodedown, PeerNode},
321 {global_name_server, PeerNode} ! {nodedown, ThisNode},
322 {global_name_server, ThisNode} ! {nodeup, PeerNode},
323 {global_name_server, PeerNode} ! {nodeup, ThisNode},
324 ok.
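workaround_global_hang/0 above is an instance of the generic receive-after watchdog idiom: spawn a helper, let the main path signal completion, and have the helper act only if the signal never arrives. Reduced to a standalone sketch (names hypothetical):

```erlang
-module(watchdog_demo).
-export([with_watchdog/3]).

%% Illustrative only: run Fun(); if it has not finished within
%% TimeoutMs, the watchdog process runs OnHang().
with_watchdog(Fun, TimeoutMs, OnHang) ->
    Watchdog = spawn(fun () ->
                         receive
                             done -> ok
                         after TimeoutMs ->
                             OnHang()
                         end
                     end),
    Result = Fun(),
    Watchdog ! done,
    Result.
```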
235325
236326 %%----------------------------------------------------------------------------
237327 %% gen_server callbacks
288378 %% 'check_partial_partition' to all the nodes it still thinks are
289379 %% alive. If any of those (intermediate) nodes still see the "down"
290380 %% node as up, they inform it that this has happened. The original
291 %% node (in 'ignore' or 'autoheal' mode) will then disconnect from the
292 %% intermediate node to "upgrade" to a full partition.
381 %% node (in 'ignore', 'pause_if_all_down' or 'autoheal' mode) will then
382 %% disconnect from the intermediate node to "upgrade" to a full
383 %% partition.
293384 %%
294385 %% In pause_minority mode it will instead immediately pause until all
295386 %% nodes come back. This is because the contract for pause_minority is
354445 ArgsBase),
355446 await_cluster_recovery(fun all_nodes_up/0),
356447 {noreply, State};
448 {ok, {pause_if_all_down, PreferredNodes, _}} ->
449 case in_preferred_partition(PreferredNodes) of
450 true -> rabbit_log:error(
451 FmtBase ++ "We will therefore intentionally "
452 "disconnect from ~s~n", ArgsBase ++ [Proxy]),
453 upgrade_to_full_partition(Proxy);
454 false -> rabbit_log:info(
455 FmtBase ++ "We are about to pause, no need "
456 "for further actions~n", ArgsBase)
457 end,
458 {noreply, State};
357459 {ok, _} ->
358460 rabbit_log:error(
359461 FmtBase ++ "We will therefore intentionally disconnect from ~s~n",
360462 ArgsBase ++ [Proxy]),
361 cast(Proxy, {partial_partition_disconnect, node()}),
362 disconnect(Proxy),
463 upgrade_to_full_partition(Proxy),
363464 {noreply, State}
364465 end;
365466
524625 %% that we can respond in the same way to "rabbitmqctl stop_app"
525626 %% and "rabbitmqctl stop" as much as possible.
526627 %%
527 %% However, for pause_minority mode we can't do this, since we
528 %% depend on looking at whether other nodes are up to decide
529 %% whether to come back up ourselves - if we decide that based on
530 %% the rabbit application we would go down and never come back.
628 %% However, for pause_minority and pause_if_all_down modes we can't do
629 %% this, since we depend on looking at whether other nodes are up
630 %% to decide whether to come back up ourselves - if we decide that
631 %% based on the rabbit application we would go down and never come
632 %% back.
531633 case application:get_env(rabbit, cluster_partition_handling) of
532634 {ok, pause_minority} ->
533 case majority() of
635 case majority([Node]) of
534636 true -> ok;
535637 false -> await_cluster_recovery(fun majority/0)
536638 end,
537639 State;
640 {ok, {pause_if_all_down, PreferredNodes, HowToRecover}} ->
641 case in_preferred_partition(PreferredNodes, [Node]) of
642 true -> ok;
643 false -> await_cluster_recovery(
644 fun in_preferred_partition/0)
645 end,
646 case HowToRecover of
647 autoheal -> State#state{autoheal =
648 rabbit_autoheal:node_down(Node, Autoheal)};
649 _ -> State
650 end;
538651 {ok, ignore} ->
539652 State;
540653 {ok, autoheal} ->
546659 end.
547660
548661 await_cluster_recovery(Condition) ->
549 rabbit_log:warning("Cluster minority status detected - awaiting recovery~n",
550 []),
662 rabbit_log:warning("Cluster minority/secondary status detected - "
663 "awaiting recovery~n", []),
551664 run_outside_applications(fun () ->
552665 rabbit:stop(),
553666 wait_for_cluster_recovery(Condition)
554 end),
667 end, false),
555668 ok.
556669
557 run_outside_applications(Fun) ->
670 run_outside_applications(Fun, WaitForExistingProcess) ->
558671 spawn(fun () ->
559672 %% If our group leader is inside an application we are about
560673 %% to stop, application:stop/1 does not return.
561674 group_leader(whereis(init), self()),
562 %% Ensure only one such process at a time, the
563 %% exit(badarg) is harmless if one is already running
564 try register(rabbit_outside_app_process, self()) of
565 true ->
566 try
567 Fun()
568 catch _:E ->
569 rabbit_log:error(
570 "rabbit_outside_app_process:~n~p~n~p~n",
571 [E, erlang:get_stacktrace()])
572 end
573 catch error:badarg ->
574 ok
575 end
675 register_outside_app_process(Fun, WaitForExistingProcess)
576676 end).
677
678 register_outside_app_process(Fun, WaitForExistingProcess) ->
679 %% Ensure only one such process at a time, the exit(badarg) is
680 %% harmless if one is already running.
681 %%
682 %% If WaitForExistingProcess is false, the given fun is simply not
683 %% executed at all and the process exits.
684 %%
685 %% If WaitForExistingProcess is true, we wait for the end of the
686 %% currently running process before executing the given function.
687 try register(rabbit_outside_app_process, self()) of
688 true ->
689 do_run_outside_app_fun(Fun)
690 catch
691 error:badarg when WaitForExistingProcess ->
692 MRef = erlang:monitor(process, rabbit_outside_app_process),
693 receive
694 {'DOWN', MRef, _, _, _} ->
695 %% The existing process exited, let's try to
696 %% register again.
697 register_outside_app_process(Fun, WaitForExistingProcess)
698 end;
699 error:badarg ->
700 ok
701 end.
702
703 do_run_outside_app_fun(Fun) ->
704 try
705 Fun()
706 catch _:E ->
707 rabbit_log:error(
708 "rabbit_outside_app_process:~n~p~n~p~n",
709 [E, erlang:get_stacktrace()])
710 end.
577711
578712 wait_for_cluster_recovery(Condition) ->
579713 ping_all(),
597731 %% that we do not attempt to deal with individual (other) partitions
598732 %% going away. It's only safe to forget anything about partitions when
599733 %% there are no partitions.
600 Partitions1 = case Partitions -- (Partitions -- alive_rabbit_nodes()) of
734 Down = Partitions -- alive_rabbit_nodes(),
735 NoLongerPartitioned = rabbit_mnesia:cluster_nodes(running),
736 Partitions1 = case Partitions -- Down -- NoLongerPartitioned of
601737 [] -> [];
602738 _ -> Partitions
603739 end,
657793 del_node(Node, Nodes) -> Nodes -- [Node].
658794
659795 cast(Node, Msg) -> gen_server:cast({?SERVER, Node}, Msg).
796
797 upgrade_to_full_partition(Proxy) ->
798 cast(Proxy, {partial_partition_disconnect, node()}),
799 disconnect(Proxy).
660800
661801 %% When we call this, it's because we want to force Mnesia to detect a
662802 %% partition. But if we just disconnect_node/1 then Mnesia won't
680820 %% here. "rabbit" in a function's name implies we test if the rabbit
681821 %% application is up, not just the node.
682822
683 %% As we use these functions to decide what to do in pause_minority
684 %% state, they *must* be fast, even in the case where TCP connections
685 %% are timing out. So that means we should be careful about whether we
686 %% connect to nodes which are currently disconnected.
823 %% As we use these functions to decide what to do in pause_minority or
824 %% pause_if_all_down states, they *must* be fast, even in the case where
825 %% TCP connections are timing out. So that means we should be careful
826 %% about whether we connect to nodes which are currently disconnected.
687827
688828 majority() ->
829 majority([]).
830
831 majority(NodesDown) ->
689832 Nodes = rabbit_mnesia:cluster_nodes(all),
690 length(alive_nodes(Nodes)) / length(Nodes) > 0.5.
833 AliveNodes = alive_nodes(Nodes) -- NodesDown,
834 length(AliveNodes) / length(Nodes) > 0.5.
835
836 in_preferred_partition() ->
837 {ok, {pause_if_all_down, PreferredNodes, _}} =
838 application:get_env(rabbit, cluster_partition_handling),
839 in_preferred_partition(PreferredNodes).
840
841 in_preferred_partition(PreferredNodes) ->
842 in_preferred_partition(PreferredNodes, []).
843
844 in_preferred_partition(PreferredNodes, NodesDown) ->
845 Nodes = rabbit_mnesia:cluster_nodes(all),
846 RealPreferredNodes = [N || N <- PreferredNodes, lists:member(N, Nodes)],
847 AliveNodes = alive_nodes(RealPreferredNodes) -- NodesDown,
848 RealPreferredNodes =:= [] orelse AliveNodes =/= [].
691849
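The majority test above reduces to a single comparison over node lists. A standalone sketch (module name and node atoms hypothetical):

```erlang
-module(majority_demo).
-export([is_majority/2]).

%% Illustrative only: a strict majority of AllNodes must be alive,
%% matching the length(...) / length(...) > 0.5 test in majority/1.
is_majority(AliveNodes, AllNodes) ->
    length(AliveNodes) / length(AllNodes) > 0.5.
```

Because the comparison is strict, an even split fails on both sides: in a 4-node cluster, 2 alive nodes give 0.5, which is not greater than 0.5, so both halves of a symmetric partition pause under pause_minority.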
692850 all_nodes_up() ->
693851 Nodes = rabbit_mnesia:cluster_nodes(all),
1717
1818 -export([names/1, diagnostics/1, make/1, parts/1, cookie_hash/0,
1919 is_running/2, is_process_running/2,
20 cluster_name/0, set_cluster_name/1]).
20 cluster_name/0, set_cluster_name/1, ensure_epmd/0]).
2121
2222 -include_lib("kernel/include/inet.hrl").
2323
4040 -spec(is_process_running/2 :: (node(), atom()) -> boolean()).
4141 -spec(cluster_name/0 :: () -> binary()).
4242 -spec(set_cluster_name/1 :: (binary()) -> 'ok').
43 -spec(ensure_epmd/0 :: () -> 'ok').
4344
4445 -endif.
4546
196197
197198 set_cluster_name(Name) ->
198199 rabbit_runtime_parameters:set_global(cluster_name, Name).
200
201 ensure_epmd() ->
202 {ok, Prog} = init:get_argument(progname),
203 ID = random:uniform(1000000000),
204 Port = open_port(
205 {spawn_executable, os:find_executable(Prog)},
206 [{args, ["-sname", rabbit_misc:format("epmd-starter-~b", [ID]),
207 "-noshell", "-eval", "halt()."]},
208 exit_status, stderr_to_stdout, use_stdio]),
209 port_shutdown_loop(Port).
210
211 port_shutdown_loop(Port) ->
212 receive
213 {Port, {exit_status, _Rc}} -> ok;
214 {Port, _} -> port_shutdown_loop(Port)
215 end.
2121
2222 -include("rabbit.hrl").
2323
24 -define(DIST_PORT_NOT_CONFIGURED, 0).
24 -define(SET_DIST_PORT, 0).
2525 -define(ERROR_CODE, 1).
26 -define(DIST_PORT_CONFIGURED, 2).
26 -define(DO_NOT_SET_DIST_PORT, 2).
2727
2828 %%----------------------------------------------------------------------------
2929 %% Specs
4545 {NodeName, NodeHost} = rabbit_nodes:parts(Node),
4646 ok = duplicate_node_check(NodeName, NodeHost),
4747 ok = dist_port_set_check(),
48 ok = dist_port_range_check(),
4849 ok = dist_port_use_check(NodeHost);
4950 [] ->
5051 %% Ignore running node while installing windows service
5152 ok = dist_port_set_check(),
5253 ok
5354 end,
54 rabbit_misc:quit(?DIST_PORT_NOT_CONFIGURED),
55 rabbit_misc:quit(?SET_DIST_PORT),
5556 ok.
5657
5758 stop() ->
8788 case {pget(inet_dist_listen_min, Kernel, none),
8889 pget(inet_dist_listen_max, Kernel, none)} of
8990 {none, none} -> ok;
90 _ -> rabbit_misc:quit(?DIST_PORT_CONFIGURED)
91 _ -> rabbit_misc:quit(?DO_NOT_SET_DIST_PORT)
9192 end;
9293 {ok, _} ->
9394 ok;
9495 {error, _} ->
9596 ok
9697 end
98 end.
99
100 dist_port_range_check() ->
101 case os:getenv("RABBITMQ_DIST_PORT") of
102 false -> ok;
103 PortStr -> case catch list_to_integer(PortStr) of
104 Port when is_integer(Port) andalso Port > 65535 ->
105 rabbit_misc:quit(?DO_NOT_SET_DIST_PORT);
106 _ ->
107 ok
108 end
97109 end.
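dist_port_range_check/0 only rejects values that parse as integers above 65535; an unset variable or an unparsable string falls through to `ok` and is dealt with elsewhere. A rough sketch of that decision (hypothetical helper name):

```python
def dist_port_in_range(port_str):
    # None models os:getenv/1 returning false (variable unset)
    if port_str is None:
        return True
    try:
        port = int(port_str)
    except ValueError:
        return True  # mirrors the catch: non-integers are not this check's problem
    return port <= 65535
```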
98110
99111 dist_port_use_check(NodeHost) ->
0 %% The contents of this file are subject to the Mozilla Public License
1 %% Version 1.1 (the "License"); you may not use this file except in
2 %% compliance with the License. You may obtain a copy of the License
3 %% at http://www.mozilla.org/MPL/
4 %%
5 %% Software distributed under the License is distributed on an "AS IS"
6 %% basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See
7 %% the License for the specific language governing rights and
8 %% limitations under the License.
9 %%
10 %% The Original Code is RabbitMQ.
11 %%
12 %% The Initial Developer of the Original Code is GoPivotal, Inc.
13 %% Copyright (c) 2014 GoPivotal, Inc. All rights reserved.
14 %%
15
16 -module(rabbit_priority_queue).
17
18 -include_lib("rabbit.hrl").
19 -include_lib("rabbit_framing.hrl").
20 -behaviour(rabbit_backing_queue).
21
22 %% Priority queue support is enabled unconditionally. Disabling
23 %% priority queueing after it has been enabled is dangerous.
24 -rabbit_boot_step({?MODULE,
25 [{description, "enable priority queue"},
26 {mfa, {?MODULE, enable, []}},
27 {requires, pre_boot},
28 {enables, kernel_ready}]}).
29
30 -export([enable/0]).
31
32 -export([start/1, stop/0]).
33
34 -export([init/3, terminate/2, delete_and_terminate/2, delete_crashed/1,
35 purge/1, purge_acks/1,
36 publish/6, publish_delivered/5, discard/4, drain_confirmed/1,
37 dropwhile/2, fetchwhile/4, fetch/2, drop/2, ack/2, requeue/2,
38 ackfold/4, fold/3, len/1, is_empty/1, depth/1,
39 set_ram_duration_target/2, ram_duration/1, needs_timeout/1, timeout/1,
40 handle_pre_hibernate/1, resume/1, msg_rates/1,
41 info/2, invoke/3, is_duplicate/2]).
42
43 -record(state, {bq, bqss}).
44 -record(passthrough, {bq, bqs}).
45
46 %% See 'note on suffixes' below
47 -define(passthrough1(F), State#passthrough{bqs = BQ:F}).
48 -define(passthrough2(F),
49 {Res, BQS1} = BQ:F, {Res, State#passthrough{bqs = BQS1}}).
50 -define(passthrough3(F),
51 {Res1, Res2, BQS1} = BQ:F, {Res1, Res2, State#passthrough{bqs = BQS1}}).
52
53 %% This module adds support for priority queues.
54 %%
55 %% Priority queues have one backing queue per priority. Backing queue functions
56 %% then produce a list of results for each BQ and fold over them, sorting
57 %% by priority.
58 %%
59 %% For queues that do not
60 %% have priorities enabled, the functions in this module delegate to
61 %% their "regular" backing queue module counterparts. See the `passthrough`
62 %% record and passthrough{1,2,3} macros.
63 %%
64 %% Delivery to consumers happens by first "running" the queue with
65 %% the highest priority until there are no more messages to deliver,
66 %% then the next one, and so on. This offers good prioritisation
67 %% but may result in lower priority messages not being delivered
68 %% when there's a high ingress rate of messages with higher priority.
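In other words, delivery is strict priority scheduling over the per-priority sub-queues, which the module keeps in descending priority order. A sketch of the selection rule (invented names, not RabbitMQ code):

```python
def next_priority(bqss):
    # bqss: list of (priority, sub_queue) pairs, highest priority first.
    # Serve the highest-priority non-empty sub-queue; lower priorities
    # only get a turn once everything above them is drained.
    for priority, sub_queue in bqss:
        if sub_queue:
            return priority
    return None  # all sub-queues empty
```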
69
70 enable() ->
71 {ok, RealBQ} = application:get_env(rabbit, backing_queue_module),
72 case RealBQ of
73 ?MODULE -> ok;
74 _ -> rabbit_log:info("Priority queues enabled, real BQ is ~s~n",
75 [RealBQ]),
76 application:set_env(
77 rabbitmq_priority_queue, backing_queue_module, RealBQ),
78 application:set_env(rabbit, backing_queue_module, ?MODULE)
79 end.
80
81 %%----------------------------------------------------------------------------
82
83 start(QNames) ->
84 BQ = bq(),
85 %% TODO this expand-collapse dance is a bit ridiculous but it's what
86 %% rabbit_amqqueue:recover/0 expects. We could probably simplify
87 %% this if we rejigged recovery a bit.
88 {DupNames, ExpNames} = expand_queues(QNames),
89 case BQ:start(ExpNames) of
90 {ok, ExpRecovery} ->
91 {ok, collapse_recovery(QNames, DupNames, ExpRecovery)};
92 Else ->
93 Else
94 end.
95
96 stop() ->
97 BQ = bq(),
98 BQ:stop().
99
100 %%----------------------------------------------------------------------------
101
102 mutate_name(P, Q = #amqqueue{name = QName = #resource{name = QNameBin}}) ->
103 Q#amqqueue{name = QName#resource{name = mutate_name_bin(P, QNameBin)}}.
104
105 mutate_name_bin(P, NameBin) -> <<NameBin/binary, 0, P:8>>.
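mutate_name_bin/2 derives a per-priority queue name by appending a NUL byte and the priority byte to the original name binary, presumably so the mangled names cannot collide with ordinary queue names. In Python terms (a sketch):

```python
def mutate_name_bin(priority, name):
    # <<NameBin/binary, 0, P:8>>: NUL separator, then the priority byte
    return name + bytes([0, priority])
```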
106
107 expand_queues(QNames) ->
108 lists:unzip(
109 lists:append([expand_queue(QName) || QName <- QNames])).
110
111 expand_queue(QName = #resource{name = QNameBin}) ->
112 {ok, Q} = rabbit_misc:dirty_read({rabbit_durable_queue, QName}),
113 case priorities(Q) of
114 none -> [{QName, QName}];
115 Ps -> [{QName, QName#resource{name = mutate_name_bin(P, QNameBin)}}
116 || P <- Ps]
117 end.
118
119 collapse_recovery(QNames, DupNames, Recovery) ->
120 NameToTerms = lists:foldl(fun({Name, RecTerm}, Dict) ->
121 dict:append(Name, RecTerm, Dict)
122 end, dict:new(), lists:zip(DupNames, Recovery)),
123 [dict:fetch(Name, NameToTerms) || Name <- QNames].
124
125 priorities(#amqqueue{arguments = Args}) ->
126 Ints = [long, short, signedint, byte],
127 case rabbit_misc:table_lookup(Args, <<"x-max-priority">>) of
128 {Type, Max} -> case lists:member(Type, Ints) of
129 false -> none;
130 true -> lists:reverse(lists:seq(0, Max))
131 end;
132 _ -> none
133 end.
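priorities/1 turns the queue's `x-max-priority` argument into the descending list of priority levels, accepting only the integer AMQP field types. A hedged sketch of the same lookup (names invented; `args` stands in for the amqqueue arguments table):

```python
INT_TYPES = {"long", "short", "signedint", "byte"}

def priorities(args):
    # args: header name -> (amqp_type, value), like rabbit_misc:table_lookup/2
    entry = args.get("x-max-priority")
    if entry is not None and entry[0] in INT_TYPES:
        return list(range(entry[1], -1, -1))  # e.g. 3 -> [3, 2, 1, 0]
    return None  # not a priority queue
```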
134
135 %%----------------------------------------------------------------------------
136
137 init(Q, Recover, AsyncCallback) ->
138 BQ = bq(),
139 case priorities(Q) of
140 none -> RealRecover = case Recover of
141 [R] -> R; %% [0]
142 R -> R
143 end,
144 #passthrough{bq = BQ,
145 bqs = BQ:init(Q, RealRecover, AsyncCallback)};
146 Ps -> Init = fun (P, Term) ->
147 BQ:init(
148 mutate_name(P, Q), Term,
149 fun (M, F) -> AsyncCallback(M, {P, F}) end)
150 end,
151 BQSs = case have_recovery_terms(Recover) of
152 false -> [{P, Init(P, Recover)} || P <- Ps];
153 _ -> PsTerms = lists:zip(Ps, Recover),
154 [{P, Init(P, Term)} || {P, Term} <- PsTerms]
155 end,
156 #state{bq = BQ,
157 bqss = BQSs}
158 end.
159 %% [0] collapse_recovery has the effect of making a list of recovery
160 %% terms in priority order, even for non priority queues. It's easier
161 %% to do that and "unwrap" in init/3 than to have collapse_recovery be
162 %% aware of non-priority queues.
163
164 have_recovery_terms(new) -> false;
165 have_recovery_terms(non_clean_shutdown) -> false;
166 have_recovery_terms(_) -> true.
167
168 terminate(Reason, State = #state{bq = BQ}) ->
169 foreach1(fun (_P, BQSN) -> BQ:terminate(Reason, BQSN) end, State);
170 terminate(Reason, State = #passthrough{bq = BQ, bqs = BQS}) ->
171 ?passthrough1(terminate(Reason, BQS)).
172
173 delete_and_terminate(Reason, State = #state{bq = BQ}) ->
174 foreach1(fun (_P, BQSN) ->
175 BQ:delete_and_terminate(Reason, BQSN)
176 end, State);
177 delete_and_terminate(Reason, State = #passthrough{bq = BQ, bqs = BQS}) ->
178 ?passthrough1(delete_and_terminate(Reason, BQS)).
179
180 delete_crashed(Q) ->
181 BQ = bq(),
182 case priorities(Q) of
183 none -> BQ:delete_crashed(Q);
184 Ps -> [BQ:delete_crashed(mutate_name(P, Q)) || P <- Ps]
185 end.
186
187 purge(State = #state{bq = BQ}) ->
188 fold_add2(fun (_P, BQSN) -> BQ:purge(BQSN) end, State);
189 purge(State = #passthrough{bq = BQ, bqs = BQS}) ->
190 ?passthrough2(purge(BQS)).
191
192 purge_acks(State = #state{bq = BQ}) ->
193 foreach1(fun (_P, BQSN) -> BQ:purge_acks(BQSN) end, State);
194 purge_acks(State = #passthrough{bq = BQ, bqs = BQS}) ->
195 ?passthrough1(purge_acks(BQS)).
196
197 publish(Msg, MsgProps, IsDelivered, ChPid, Flow, State = #state{bq = BQ}) ->
198 pick1(fun (_P, BQSN) ->
199 BQ:publish(Msg, MsgProps, IsDelivered, ChPid, Flow, BQSN)
200 end, Msg, State);
201 publish(Msg, MsgProps, IsDelivered, ChPid, Flow,
202 State = #passthrough{bq = BQ, bqs = BQS}) ->
203 ?passthrough1(publish(Msg, MsgProps, IsDelivered, ChPid, Flow, BQS)).
204
205 publish_delivered(Msg, MsgProps, ChPid, Flow, State = #state{bq = BQ}) ->
206 pick2(fun (P, BQSN) ->
207 {AckTag, BQSN1} = BQ:publish_delivered(
208 Msg, MsgProps, ChPid, Flow, BQSN),
209 {{P, AckTag}, BQSN1}
210 end, Msg, State);
211 publish_delivered(Msg, MsgProps, ChPid, Flow,
212 State = #passthrough{bq = BQ, bqs = BQS}) ->
213 ?passthrough2(publish_delivered(Msg, MsgProps, ChPid, Flow, BQS)).
214
215 %% TODO this is a hack. The BQ API does not give us enough information
216 %% here - if we had the Msg we could look at its priority and forward
217 %% to the appropriate sub-BQ. But we don't so we are stuck.
218 %%
219 %% But fortunately VQ ignores discard/4, so we can too, *assuming we
220 %% are talking to VQ*. discard/4 is used by HA, but that's "above" us
221 %% (if in use) so we don't break that either, just some hypothetical
222 %% alternate BQ implementation.
223 discard(_MsgId, _ChPid, _Flow, State = #state{}) ->
224 State;
225 %% We should have something a bit like this here:
226 %% pick1(fun (_P, BQSN) ->
227 %% BQ:discard(MsgId, ChPid, Flow, BQSN)
228 %% end, Msg, State);
229 discard(MsgId, ChPid, Flow, State = #passthrough{bq = BQ, bqs = BQS}) ->
230 ?passthrough1(discard(MsgId, ChPid, Flow, BQS)).
231
232 drain_confirmed(State = #state{bq = BQ}) ->
233 fold_append2(fun (_P, BQSN) -> BQ:drain_confirmed(BQSN) end, State);
234 drain_confirmed(State = #passthrough{bq = BQ, bqs = BQS}) ->
235 ?passthrough2(drain_confirmed(BQS)).
236
237 dropwhile(Pred, State = #state{bq = BQ}) ->
238 find2(fun (_P, BQSN) -> BQ:dropwhile(Pred, BQSN) end, undefined, State);
239 dropwhile(Pred, State = #passthrough{bq = BQ, bqs = BQS}) ->
240 ?passthrough2(dropwhile(Pred, BQS)).
241
242 %% TODO this is a bit nasty. In the one place where fetchwhile/4 is
243 %% actually used the accumulator is a list of acktags, which of course
244 %% we need to mutate - so we do that although we are encoding an
245 %% assumption here.
246 fetchwhile(Pred, Fun, Acc, State = #state{bq = BQ}) ->
247 findfold3(
248 fun (P, BQSN, AccN) ->
249 {Res, AccN1, BQSN1} = BQ:fetchwhile(Pred, Fun, AccN, BQSN),
250 {Res, priority_on_acktags(P, AccN1), BQSN1}
251 end, Acc, undefined, State);
252 fetchwhile(Pred, Fun, Acc, State = #passthrough{bq = BQ, bqs = BQS}) ->
253 ?passthrough3(fetchwhile(Pred, Fun, Acc, BQS)).
254
255 fetch(AckRequired, State = #state{bq = BQ}) ->
256 find2(
257 fun (P, BQSN) ->
258 case BQ:fetch(AckRequired, BQSN) of
259 {empty, BQSN1} -> {empty, BQSN1};
260 {{Msg, Del, ATag}, BQSN1} -> {{Msg, Del, {P, ATag}}, BQSN1}
261 end
262 end, empty, State);
263 fetch(AckRequired, State = #passthrough{bq = BQ, bqs = BQS}) ->
264 ?passthrough2(fetch(AckRequired, BQS)).
265
266 drop(AckRequired, State = #state{bq = BQ}) ->
267 find2(fun (P, BQSN) ->
268 case BQ:drop(AckRequired, BQSN) of
269 {empty, BQSN1} -> {empty, BQSN1};
270 {{MsgId, AckTag}, BQSN1} -> {{MsgId, {P, AckTag}}, BQSN1}
271 end
272 end, empty, State);
273 drop(AckRequired, State = #passthrough{bq = BQ, bqs = BQS}) ->
274 ?passthrough2(drop(AckRequired, BQS)).
275
276 ack(AckTags, State = #state{bq = BQ}) ->
277 fold_by_acktags2(fun (AckTagsN, BQSN) ->
278 BQ:ack(AckTagsN, BQSN)
279 end, AckTags, State);
280 ack(AckTags, State = #passthrough{bq = BQ, bqs = BQS}) ->
281 ?passthrough2(ack(AckTags, BQS)).
282
283 requeue(AckTags, State = #state{bq = BQ}) ->
284 fold_by_acktags2(fun (AckTagsN, BQSN) ->
285 BQ:requeue(AckTagsN, BQSN)
286 end, AckTags, State);
287 requeue(AckTags, State = #passthrough{bq = BQ, bqs = BQS}) ->
288 ?passthrough2(requeue(AckTags, BQS)).
289
290 %% Similar problem to fetchwhile/4
291 ackfold(MsgFun, Acc, State = #state{bq = BQ}, AckTags) ->
292 AckTagsByPriority = partition_acktags(AckTags),
293 fold2(
294 fun (P, BQSN, AccN) ->
295 case orddict:find(P, AckTagsByPriority) of
296 {ok, ATagsN} -> {AccN1, BQSN1} =
297 BQ:ackfold(MsgFun, AccN, BQSN, ATagsN),
298 {priority_on_acktags(P, AccN1), BQSN1};
299 error -> {AccN, BQSN}
300 end
301 end, Acc, State);
302 ackfold(MsgFun, Acc, State = #passthrough{bq = BQ, bqs = BQS}, AckTags) ->
303 ?passthrough2(ackfold(MsgFun, Acc, BQS, AckTags)).
304
305 fold(Fun, Acc, State = #state{bq = BQ}) ->
306 fold2(fun (_P, BQSN, AccN) -> BQ:fold(Fun, AccN, BQSN) end, Acc, State);
307 fold(Fun, Acc, State = #passthrough{bq = BQ, bqs = BQS}) ->
308 ?passthrough2(fold(Fun, Acc, BQS)).
309
310 len(#state{bq = BQ, bqss = BQSs}) ->
311 add0(fun (_P, BQSN) -> BQ:len(BQSN) end, BQSs);
312 len(#passthrough{bq = BQ, bqs = BQS}) ->
313 BQ:len(BQS).
314
315 is_empty(#state{bq = BQ, bqss = BQSs}) ->
316 all0(fun (_P, BQSN) -> BQ:is_empty(BQSN) end, BQSs);
317 is_empty(#passthrough{bq = BQ, bqs = BQS}) ->
318 BQ:is_empty(BQS).
319
320 depth(#state{bq = BQ, bqss = BQSs}) ->
321 add0(fun (_P, BQSN) -> BQ:depth(BQSN) end, BQSs);
322 depth(#passthrough{bq = BQ, bqs = BQS}) ->
323 BQ:depth(BQS).
324
325 set_ram_duration_target(DurationTarget, State = #state{bq = BQ}) ->
326 foreach1(fun (_P, BQSN) ->
327 BQ:set_ram_duration_target(DurationTarget, BQSN)
328 end, State);
329 set_ram_duration_target(DurationTarget,
330 State = #passthrough{bq = BQ, bqs = BQS}) ->
331 ?passthrough1(set_ram_duration_target(DurationTarget, BQS)).
332
333 ram_duration(State = #state{bq = BQ}) ->
334 fold_min2(fun (_P, BQSN) -> BQ:ram_duration(BQSN) end, State);
335 ram_duration(State = #passthrough{bq = BQ, bqs = BQS}) ->
336 ?passthrough2(ram_duration(BQS)).
337
338 needs_timeout(#state{bq = BQ, bqss = BQSs}) ->
339 fold0(fun (_P, _BQSN, timed) -> timed;
340 (_P, BQSN, idle) -> case BQ:needs_timeout(BQSN) of
341 timed -> timed;
342 _ -> idle
343 end;
344 (_P, BQSN, false) -> BQ:needs_timeout(BQSN)
345 end, false, BQSs);
346 needs_timeout(#passthrough{bq = BQ, bqs = BQS}) ->
347 BQ:needs_timeout(BQS).
348
349 timeout(State = #state{bq = BQ}) ->
350 foreach1(fun (_P, BQSN) -> BQ:timeout(BQSN) end, State);
351 timeout(State = #passthrough{bq = BQ, bqs = BQS}) ->
352 ?passthrough1(timeout(BQS)).
353
354 handle_pre_hibernate(State = #state{bq = BQ}) ->
355 foreach1(fun (_P, BQSN) ->
356 BQ:handle_pre_hibernate(BQSN)
357 end, State);
358 handle_pre_hibernate(State = #passthrough{bq = BQ, bqs = BQS}) ->
359 ?passthrough1(handle_pre_hibernate(BQS)).
360
361 resume(State = #state{bq = BQ}) ->
362 foreach1(fun (_P, BQSN) -> BQ:resume(BQSN) end, State);
363 resume(State = #passthrough{bq = BQ, bqs = BQS}) ->
364 ?passthrough1(resume(BQS)).
365
366 msg_rates(#state{bq = BQ, bqss = BQSs}) ->
367 fold0(fun(_P, BQSN, {InN, OutN}) ->
368 {In, Out} = BQ:msg_rates(BQSN),
369 {InN + In, OutN + Out}
370 end, {0.0, 0.0}, BQSs);
371 msg_rates(#passthrough{bq = BQ, bqs = BQS}) ->
372 BQ:msg_rates(BQS).
373
374 info(backing_queue_status, #state{bq = BQ, bqss = BQSs}) ->
375 fold0(fun (P, BQSN, Acc) ->
376 combine_status(P, BQ:info(backing_queue_status, BQSN), Acc)
377 end, nothing, BQSs);
378 info(Item, #state{bq = BQ, bqss = BQSs}) ->
379 fold0(fun (_P, BQSN, Acc) ->
380 Acc + BQ:info(Item, BQSN)
381 end, 0, BQSs);
382 info(Item, #passthrough{bq = BQ, bqs = BQS}) ->
383 BQ:info(Item, BQS).
384
385 invoke(Mod, {P, Fun}, State = #state{bq = BQ}) ->
386 pick1(fun (_P, BQSN) -> BQ:invoke(Mod, Fun, BQSN) end, P, State);
387 invoke(Mod, Fun, State = #passthrough{bq = BQ, bqs = BQS}) ->
388 ?passthrough1(invoke(Mod, Fun, BQS)).
389
390 is_duplicate(Msg, State = #state{bq = BQ}) ->
391 pick2(fun (_P, BQSN) -> BQ:is_duplicate(Msg, BQSN) end, Msg, State);
392 is_duplicate(Msg, State = #passthrough{bq = BQ, bqs = BQS}) ->
393 ?passthrough2(is_duplicate(Msg, BQS)).
394
395 %%----------------------------------------------------------------------------
396
397 bq() ->
398 {ok, RealBQ} = application:get_env(
399 rabbitmq_priority_queue, backing_queue_module),
400 RealBQ.
401
402 %% Note on suffixes: Many utility functions here have suffixes telling
403 %% you the arity of the return type of the BQ function they are
404 %% designed to work with.
405 %%
406 %% 0 - BQ function returns a value and does not modify state
407 %% 1 - BQ function just returns a new state
408 %% 2 - BQ function returns a 2-tuple of {Result, NewState}
409 %% 3 - BQ function returns a 3-tuple of {Result1, Result2, NewState}
410
411 %% Fold over results
412 fold0(Fun, Acc, [{P, BQSN} | Rest]) -> fold0(Fun, Fun(P, BQSN, Acc), Rest);
413 fold0(_Fun, Acc, []) -> Acc.
414
415 %% Do all BQs match?
416 all0(Pred, BQSs) -> fold0(fun (_P, _BQSN, false) -> false;
417 (P, BQSN, true) -> Pred(P, BQSN)
418 end, true, BQSs).
419
420 %% Sum results
421 add0(Fun, BQSs) -> fold0(fun (P, BQSN, Acc) -> Acc + Fun(P, BQSN) end, 0, BQSs).
422
423 %% Apply for all states
424 foreach1(Fun, State = #state{bqss = BQSs}) ->
425 a(State#state{bqss = foreach1(Fun, BQSs, [])}).
426 foreach1(Fun, [{P, BQSN} | Rest], BQSAcc) ->
427 BQSN1 = Fun(P, BQSN),
428 foreach1(Fun, Rest, [{P, BQSN1} | BQSAcc]);
429 foreach1(_Fun, [], BQSAcc) ->
430 lists:reverse(BQSAcc).
431
432 %% For a given thing, just go to its BQ
433 pick1(Fun, Prioritisable, #state{bqss = BQSs} = State) ->
434 {P, BQSN} = priority(Prioritisable, BQSs),
435 a(State#state{bqss = bq_store(P, Fun(P, BQSN), BQSs)}).
436
437 %% Fold over results
438 fold2(Fun, Acc, State = #state{bqss = BQSs}) ->
439 {Res, BQSs1} = fold2(Fun, Acc, BQSs, []),
440 {Res, a(State#state{bqss = BQSs1})}.
441 fold2(Fun, Acc, [{P, BQSN} | Rest], BQSAcc) ->
442 {Acc1, BQSN1} = Fun(P, BQSN, Acc),
443 fold2(Fun, Acc1, Rest, [{P, BQSN1} | BQSAcc]);
444 fold2(_Fun, Acc, [], BQSAcc) ->
445 {Acc, lists:reverse(BQSAcc)}.
446
447 %% Fold over results assuming results are lists and we want to append them
448 fold_append2(Fun, State) ->
449 fold2(fun (P, BQSN, Acc) ->
450 {Res, BQSN1} = Fun(P, BQSN),
451 {Res ++ Acc, BQSN1}
452 end, [], State).
453
454 %% Fold over results assuming results are numbers and we want to sum them
455 fold_add2(Fun, State) ->
456 fold2(fun (P, BQSN, Acc) ->
457 {Res, BQSN1} = Fun(P, BQSN),
458 {add_maybe_infinity(Res, Acc), BQSN1}
459 end, 0, State).
460
461 %% Fold over results assuming results are numbers and we want the minimum
462 fold_min2(Fun, State) ->
463 fold2(fun (P, BQSN, Acc) ->
464 {Res, BQSN1} = Fun(P, BQSN),
465 {erlang:min(Res, Acc), BQSN1}
466 end, infinity, State).
467
468 %% Fold over results assuming results are lists and we want to append
469 %% them, and also that we have some AckTags we want to pass in to each
470 %% invocation.
471 fold_by_acktags2(Fun, AckTags, State) ->
472 AckTagsByPriority = partition_acktags(AckTags),
473 fold_append2(fun (P, BQSN) ->
474 case orddict:find(P, AckTagsByPriority) of
475 {ok, AckTagsN} -> Fun(AckTagsN, BQSN);
476 error -> {[], BQSN}
477 end
478 end, State).
479
480 %% For a given thing, just go to its BQ
481 pick2(Fun, Prioritisable, #state{bqss = BQSs} = State) ->
482 {P, BQSN} = priority(Prioritisable, BQSs),
483 {Res, BQSN1} = Fun(P, BQSN),
484 {Res, a(State#state{bqss = bq_store(P, BQSN1, BQSs)})}.
485
486 %% Run through BQs in priority order until one does not return
487 %% {NotFound, NewState} or we have gone through them all.
488 find2(Fun, NotFound, State = #state{bqss = BQSs}) ->
489 {Res, BQSs1} = find2(Fun, NotFound, BQSs, []),
490 {Res, a(State#state{bqss = BQSs1})}.
491 find2(Fun, NotFound, [{P, BQSN} | Rest], BQSAcc) ->
492 case Fun(P, BQSN) of
493 {NotFound, BQSN1} -> find2(Fun, NotFound, Rest, [{P, BQSN1} | BQSAcc]);
494 {Res, BQSN1} -> {Res, lists:reverse([{P, BQSN1} | BQSAcc]) ++ Rest}
495 end;
496 find2(_Fun, NotFound, [], BQSAcc) ->
497 {NotFound, lists:reverse(BQSAcc)}.
498
499 %% Run through BQs in priority order like find2 but also folding as we go.
500 findfold3(Fun, Acc, NotFound, State = #state{bqss = BQSs}) ->
501 {Res, Acc1, BQSs1} = findfold3(Fun, Acc, NotFound, BQSs, []),
502 {Res, Acc1, a(State#state{bqss = BQSs1})}.
503 findfold3(Fun, Acc, NotFound, [{P, BQSN} | Rest], BQSAcc) ->
504 case Fun(P, BQSN, Acc) of
505 {NotFound, Acc1, BQSN1} ->
506 findfold3(Fun, Acc1, NotFound, Rest, [{P, BQSN1} | BQSAcc]);
507 {Res, Acc1, BQSN1} ->
508 {Res, Acc1, lists:reverse([{P, BQSN1} | BQSAcc]) ++ Rest}
509 end;
510 findfold3(_Fun, Acc, NotFound, [], BQSAcc) ->
511 {NotFound, Acc, lists:reverse(BQSAcc)}.
512
513 bq_fetch(P, []) -> exit({not_found, P});
514 bq_fetch(P, [{P, BQSN} | _]) -> BQSN;
515 bq_fetch(P, [{_, _BQSN} | T]) -> bq_fetch(P, T).
516
517 bq_store(P, BQS, BQSs) ->
518 [{PN, case PN of
519 P -> BQS;
520 _ -> BQSN
521 end} || {PN, BQSN} <- BQSs].
522
523 %% Assert that priorities are distinct and in descending order
524 a(State = #state{bqss = BQSs}) ->
525 Ps = [P || {P, _} <- BQSs],
526 case lists:reverse(lists:usort(Ps)) of
527 Ps -> State;
528 _ -> exit({bad_order, Ps})
529 end.
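a/1 is a cheap invariant check: sorting the priorities ascending without duplicates (usort) and reversing must reproduce the original list, i.e. the sub-queues are distinct and strictly descending. The equivalent check (a sketch):

```python
def priorities_well_ordered(ps):
    # usort ascending + reverse == strictly descending, no duplicates
    return sorted(set(ps), reverse=True) == list(ps)
```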
530
531 %%----------------------------------------------------------------------------
532
533 priority(P, BQSs) when is_integer(P) ->
534 {P, bq_fetch(P, BQSs)};
535 priority(#basic_message{content = Content}, BQSs) ->
536 priority1(rabbit_binary_parser:ensure_content_decoded(Content), BQSs).
537
538 priority1(_Content, [{P, BQSN}]) ->
539 {P, BQSN};
540 priority1(Content = #content{properties = Props},
541 [{P, BQSN} | Rest]) ->
542 #'P_basic'{priority = Priority0} = Props,
543 Priority = case Priority0 of
544 undefined -> 0;
545 _ when is_integer(Priority0) -> Priority0
546 end,
547 case Priority >= P of
548 true -> {P, BQSN};
549 false -> priority1(Content, Rest)
550 end.
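priority1/2 effectively clamps the message priority into the configured range: the first configured level that is <= the message priority wins, and the lowest level is the unconditional fallback (which also covers `undefined`, treated as 0). As a sketch (invented name):

```python
def clamp_priority(msg_priority, levels):
    # levels are descending, e.g. [3, 2, 1, 0]; None models 'undefined'
    p = 0 if msg_priority is None else msg_priority
    for level in levels[:-1]:
        if p >= level:
            return level
    return levels[-1]  # last level is returned unconditionally, as in priority1/2
```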
551
552 add_maybe_infinity(infinity, _) -> infinity;
553 add_maybe_infinity(_, infinity) -> infinity;
554 add_maybe_infinity(A, B) -> A + B.
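add_maybe_infinity/2 makes the atom `infinity` absorbing under addition, which Python can model with `float("inf")` (sketch):

```python
INFINITY = float("inf")

def add_maybe_infinity(a, b):
    # infinity absorbs, exactly as in the three Erlang clauses
    return INFINITY if INFINITY in (a, b) else a + b
```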
555
556 partition_acktags(AckTags) -> partition_acktags(AckTags, orddict:new()).
557
558 partition_acktags([], Partitioned) ->
559 orddict:map(fun (_P, RevAckTags) ->
560 lists:reverse(RevAckTags)
561 end, Partitioned);
562 partition_acktags([{P, AckTag} | Rest], Partitioned) ->
563 partition_acktags(Rest, rabbit_misc:orddict_cons(P, AckTag, Partitioned)).
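partition_acktags/1 groups `{Priority, AckTag}` pairs into a per-priority dictionary, reversing each bucket at the end so tags come out in their original order. The same grouping in Python (a sketch; Python list appends make the final reverse unnecessary):

```python
def partition_acktags(acktags):
    # group (priority, tag) pairs, preserving order within each priority
    out = {}
    for p, tag in acktags:
        out.setdefault(p, []).append(tag)
    return out
```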
564
565 priority_on_acktags(P, AckTags) ->
566 [case Tag of
567 _ when is_integer(Tag) -> {P, Tag};
568 _ -> Tag
569 end || Tag <- AckTags].
570
571 combine_status(P, New, nothing) ->
572 [{priority_lengths, [{P, proplists:get_value(len, New)}]} | New];
573 combine_status(P, New, Old) ->
574 Combined = [{K, cse(V, proplists:get_value(K, Old))} || {K, V} <- New],
575 Lens = [{P, proplists:get_value(len, New)} |
576 proplists:get_value(priority_lengths, Old)],
577 [{priority_lengths, Lens} | Combined].
578
579 cse(infinity, _) -> infinity;
580 cse(_, infinity) -> infinity;
581 cse(A, B) when is_number(A) -> A + B;
582 cse({delta, _, _, _}, _) -> {delta, todo, todo, todo};
583 cse(A, B) -> exit({A, B}).
174174 C = #cr{ch_pid = ChPid,
175175 acktags = ChAckTags,
176176 blocked_consumers = BlockedQ} ->
177 AllConsumers = priority_queue:join(Consumers, BlockedQ),
177 All = priority_queue:join(Consumers, BlockedQ),
178178 ok = erase_ch_record(C),
179 Filtered = priority_queue:filter(chan_pred(ChPid, true), All),
179180 {[AckTag || {AckTag, _CTag} <- queue:to_list(ChAckTags)],
180 tags(priority_queue:to_list(AllConsumers)),
181 tags(priority_queue:to_list(Filtered)),
181182 State#state{consumers = remove_consumers(ChPid, Consumers)}}
182183 end.
183184
441442 end, Queue).
442443
443444 remove_consumers(ChPid, Queue) ->
444 priority_queue:filter(fun ({CP, _Consumer}) when CP =:= ChPid -> false;
445 (_) -> true
446 end, Queue).
445 priority_queue:filter(chan_pred(ChPid, false), Queue).
446
447 chan_pred(ChPid, Want) ->
448 fun ({CP, _Consumer}) when CP =:= ChPid -> Want;
449 (_) -> not Want
450 end.
447451
448452 update_use({inactive, _, _, _} = CUInfo, inactive) ->
449453 CUInfo;
1515
1616 -module(rabbit_queue_index).
1717
18 -export([erase/1, init/2, recover/5,
18 -export([erase/1, init/3, recover/6,
1919 terminate/2, delete_and_terminate/1,
20 publish/5, deliver/2, ack/2, sync/1, needs_sync/1, flush/1,
20 publish/6, deliver/2, ack/2, sync/1, needs_sync/1, flush/1,
2121 read/3, next_segment_boundary/1, bounds/1, start/1, stop/0]).
2222
23 -export([add_queue_ttl/0, avoid_zeroes/0, store_msg_size/0]).
23 -export([add_queue_ttl/0, avoid_zeroes/0, store_msg_size/0, store_msg/0]).
2424
2525 -define(CLEAN_FILENAME, "clean.dot").
2626
2727 %%----------------------------------------------------------------------------
2828
2929 %% The queue index is responsible for recording the order of messages
30 %% within a queue on disk.
30 %% within a queue on disk. As such it contains records of messages
31 %% being published, delivered and acknowledged. The publish record
32 %% includes the sequence ID, message ID and a small quantity of
33 %% metadata about the message; the delivery and acknowledgement
34 %% records just contain the sequence ID. A publish record may also
35 %% contain the complete message if provided to publish/6; this allows
36 %% the message store to be avoided altogether for small messages. In
37 %% either case the publish record is stored in memory in the same
38 %% serialised format it will take on disk.
3139 %%
3240 %% Because of the fact that the queue can decide at any point to send
3341 %% a queue entry to disk, you can not rely on publishes appearing in
3543 %% then delivered, then ack'd.
3644 %%
3745 %% In order to be able to clean up ack'd messages, we write to segment
38 %% files. These files have a fixed maximum size: ?SEGMENT_ENTRY_COUNT
46 %% files. These files have a fixed number of entries: ?SEGMENT_ENTRY_COUNT
3947 %% publishes, delivers and acknowledgements. They are numbered, and so
4048 %% it is known that the 0th segment contains messages 0 ->
4149 %% ?SEGMENT_ENTRY_COUNT - 1, the 1st segment contains messages
8492 %% and seeding the message store on start up.
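Because segment files hold a fixed number of entries, mapping a sequence ID to its segment file and relative offset is plain integer division. A sketch, assuming ?SEGMENT_ENTRY_COUNT is 16384 (an assumption; the define itself is not shown in this hunk):

```python
SEGMENT_ENTRY_COUNT = 16384  # assumed value of ?SEGMENT_ENTRY_COUNT

def seq_id_to_segment(seq_id):
    # (segment number, relative sequence id within that segment)
    return divmod(seq_id, SEGMENT_ENTRY_COUNT)
```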
8593 %%
8694 %% Note that in general, the representation of a message's state as
87 %% the tuple: {('no_pub'|{MsgId, MsgProps, IsPersistent}),
95 %% the tuple: {('no_pub'|{IsPersistent, Bin, MsgBin}),
8896 %% ('del'|'no_del'), ('ack'|'no_ack')} is richer than strictly
8997 %% necessary for most operations. However, for startup, and to ensure
9098 %% the safe and correct combination of journal entries with entries
127135 -define(REL_SEQ_ONLY_RECORD_BYTES, 2).
128136
129137 %% publish record is binary 1 followed by a bit for is_persistent,
130 %% then 14 bits of rel seq id, 64 bits for message expiry and 128 bits
131 %% of md5sum msg id
138 %% then 14 bits of rel seq id, 64 bits for message expiry, 32 bits of
139 %% size and then 128 bits of md5sum msg id.
132140 -define(PUB_PREFIX, 1).
133141 -define(PUB_PREFIX_BITS, 1).
134142
139147 -define(MSG_ID_BYTES, 16). %% md5sum is 128 bit or 16 bytes
140148 -define(MSG_ID_BITS, (?MSG_ID_BYTES * 8)).
141149
150 %% This is the size of the message body content, for stats
142151 -define(SIZE_BYTES, 4).
143152 -define(SIZE_BITS, (?SIZE_BYTES * 8)).
144153
145 %% 16 bytes for md5sum + 8 for expiry + 4 for size
154 %% This is the size of the message record embedded in the queue
155 %% index. If 0, the message can be found in the message store.
156 -define(EMBEDDED_SIZE_BYTES, 4).
157 -define(EMBEDDED_SIZE_BITS, (?EMBEDDED_SIZE_BYTES * 8)).
158
159 %% 16 bytes for md5sum + 8 for expiry
146160 -define(PUB_RECORD_BODY_BYTES, (?MSG_ID_BYTES + ?EXPIRY_BYTES + ?SIZE_BYTES)).
161 %% + 4 for size
162 -define(PUB_RECORD_SIZE_BYTES, (?PUB_RECORD_BODY_BYTES + ?EMBEDDED_SIZE_BYTES)).
163
147164 %% + 2 for seq, bits and prefix
148 -define(PUB_RECORD_BYTES, (?PUB_RECORD_BODY_BYTES + 2)).
149
150 %% 1 publish, 1 deliver, 1 ack per msg
151 -define(SEGMENT_TOTAL_SIZE, ?SEGMENT_ENTRY_COUNT *
152 (?PUB_RECORD_BYTES + (2 * ?REL_SEQ_ONLY_RECORD_BYTES))).
165 -define(PUB_RECORD_PREFIX_BYTES, 2).
153166
154167 %% ---- misc ----
155168
156 -define(PUB, {_, _, _}). %% {MsgId, MsgProps, IsPersistent}
169 -define(PUB, {_, _, _}). %% {IsPersistent, Bin, MsgBin}
157170
158171 -define(READ_MODE, [binary, raw, read]).
159 -define(READ_AHEAD_MODE, [{read_ahead, ?SEGMENT_TOTAL_SIZE} | ?READ_MODE]).
160172 -define(WRITE_MODE, [write | ?READ_MODE]).
161173
162174 %%----------------------------------------------------------------------------
163175
164 -record(qistate, { dir, segments, journal_handle, dirty_count,
165 max_journal_entries, on_sync, unconfirmed }).
166
167 -record(segment, { num, path, journal_entries, unacked }).
176 -record(qistate, {dir, segments, journal_handle, dirty_count,
177 max_journal_entries, on_sync, on_sync_msg,
178 unconfirmed, unconfirmed_msg}).
179
180 -record(segment, {num, path, journal_entries, unacked}).
168181
169182 -include("rabbit.hrl").
170183
173186 -rabbit_upgrade({add_queue_ttl, local, []}).
174187 -rabbit_upgrade({avoid_zeroes, local, [add_queue_ttl]}).
175188 -rabbit_upgrade({store_msg_size, local, [avoid_zeroes]}).
189 -rabbit_upgrade({store_msg, local, [store_msg_size]}).
176190
177191 -ifdef(use_specs).
178192
192206 dirty_count :: integer(),
193207 max_journal_entries :: non_neg_integer(),
194208 on_sync :: on_sync_fun(),
195 unconfirmed :: gb_sets:set()
209 on_sync_msg :: on_sync_fun(),
210 unconfirmed :: gb_sets:set(),
211 unconfirmed_msg :: gb_sets:set()
196212 }).
197213 -type(contains_predicate() :: fun ((rabbit_types:msg_id()) -> boolean())).
198214 -type(walker(A) :: fun ((A) -> 'finished' |
200216 -type(shutdown_terms() :: [term()] | 'non_clean_shutdown').
201217
202218 -spec(erase/1 :: (rabbit_amqqueue:name()) -> 'ok').
203 -spec(init/2 :: (rabbit_amqqueue:name(), on_sync_fun()) -> qistate()).
204 -spec(recover/5 :: (rabbit_amqqueue:name(), shutdown_terms(), boolean(),
205 contains_predicate(), on_sync_fun()) ->
219 -spec(init/3 :: (rabbit_amqqueue:name(),
220 on_sync_fun(), on_sync_fun()) -> qistate()).
221 -spec(recover/6 :: (rabbit_amqqueue:name(), shutdown_terms(), boolean(),
222 contains_predicate(),
223 on_sync_fun(), on_sync_fun()) ->
206224 {'undefined' | non_neg_integer(),
207225 'undefined' | non_neg_integer(), qistate()}).
208226 -spec(terminate/2 :: ([any()], qistate()) -> qistate()).
209227 -spec(delete_and_terminate/1 :: (qistate()) -> qistate()).
210 -spec(publish/5 :: (rabbit_types:msg_id(), seq_id(),
211 rabbit_types:message_properties(), boolean(), qistate())
212 -> qistate()).
228 -spec(publish/6 :: (rabbit_types:msg_id(), seq_id(),
229 rabbit_types:message_properties(), boolean(),
230 non_neg_integer(), qistate()) -> qistate()).
213231 -spec(deliver/2 :: ([seq_id()], qistate()) -> qistate()).
214232 -spec(ack/2 :: ([seq_id()], qistate()) -> qistate()).
215233 -spec(sync/1 :: (qistate()) -> qistate()).
240258 false -> ok
241259 end.
242260
243 init(Name, OnSyncFun) ->
261 init(Name, OnSyncFun, OnSyncMsgFun) ->
244262 State = #qistate { dir = Dir } = blank_state(Name),
245263 false = rabbit_file:is_file(Dir), %% is_file == is file or dir
246 State #qistate { on_sync = OnSyncFun }.
247
248 recover(Name, Terms, MsgStoreRecovered, ContainsCheckFun, OnSyncFun) ->
264 State#qistate{on_sync = OnSyncFun,
265 on_sync_msg = OnSyncMsgFun}.
266
267 recover(Name, Terms, MsgStoreRecovered, ContainsCheckFun,
268 OnSyncFun, OnSyncMsgFun) ->
249269 State = blank_state(Name),
250 State1 = State #qistate { on_sync = OnSyncFun },
270 State1 = State #qistate{on_sync = OnSyncFun,
271 on_sync_msg = OnSyncMsgFun},
251272 CleanShutdown = Terms /= non_clean_shutdown,
252273 case CleanShutdown andalso MsgStoreRecovered of
253274 true -> RecoveredCounts = proplists:get_value(segments, Terms, []),
266287 ok = rabbit_file:recursive_delete([Dir]),
267288 State1.
268289
269 publish(MsgId, SeqId, MsgProps, IsPersistent,
270 State = #qistate { unconfirmed = Unconfirmed })
271 when is_binary(MsgId) ->
290 publish(MsgOrId, SeqId, MsgProps, IsPersistent, JournalSizeHint,
291 State = #qistate{unconfirmed = UC,
292 unconfirmed_msg = UCM}) ->
293 MsgId = case MsgOrId of
294 #basic_message{id = Id} -> Id;
295 Id when is_binary(Id) -> Id
296 end,
272297 ?MSG_ID_BYTES = size(MsgId),
273298 {JournalHdl, State1} =
274299 get_journal_handle(
275 case MsgProps#message_properties.needs_confirming of
276 true -> Unconfirmed1 = gb_sets:add_element(MsgId, Unconfirmed),
277 State #qistate { unconfirmed = Unconfirmed1 };
278 false -> State
300 case {MsgProps#message_properties.needs_confirming, MsgOrId} of
301 {true, MsgId} -> UC1 = gb_sets:add_element(MsgId, UC),
302 State#qistate{unconfirmed = UC1};
303 {true, _} -> UCM1 = gb_sets:add_element(MsgId, UCM),
304 State#qistate{unconfirmed_msg = UCM1};
305 {false, _} -> State
279306 end),
307 file_handle_cache_stats:update(queue_index_journal_write),
308 {Bin, MsgBin} = create_pub_record_body(MsgOrId, MsgProps),
280309 ok = file_handle_cache:append(
281310 JournalHdl, [<<(case IsPersistent of
282311 true -> ?PUB_PERSIST_JPREFIX;
283312 false -> ?PUB_TRANS_JPREFIX
284313 end):?JPREFIX_BITS,
285 SeqId:?SEQ_BITS>>,
286 create_pub_record_body(MsgId, MsgProps)]),
314 SeqId:?SEQ_BITS, Bin/binary,
315 (size(MsgBin)):?EMBEDDED_SIZE_BITS>>, MsgBin]),
287316 maybe_flush_journal(
288 add_to_journal(SeqId, {MsgId, MsgProps, IsPersistent}, State1)).
317 JournalSizeHint,
318 add_to_journal(SeqId, {IsPersistent, Bin, MsgBin}, State1)).
289319
290320 deliver(SeqIds, State) ->
291321 deliver_or_ack(del, SeqIds, State).
301331 ok = file_handle_cache:sync(JournalHdl),
302332 notify_sync(State).
303333
304 needs_sync(#qistate { journal_handle = undefined }) ->
334 needs_sync(#qistate{journal_handle = undefined}) ->
305335 false;
306 needs_sync(#qistate { journal_handle = JournalHdl, unconfirmed = UC }) ->
307 case gb_sets:is_empty(UC) of
336 needs_sync(#qistate{journal_handle = JournalHdl,
337 unconfirmed = UC,
338 unconfirmed_msg = UCM}) ->
339 case gb_sets:is_empty(UC) andalso gb_sets:is_empty(UCM) of
308340 true -> case file_handle_cache:needs_sync(JournalHdl) of
309341 true -> other;
310342 false -> false
408440 dirty_count = 0,
409441 max_journal_entries = MaxJournal,
410442 on_sync = fun (_) -> ok end,
411 unconfirmed = gb_sets:new() }.
443 on_sync_msg = fun (_) -> ok end,
444 unconfirmed = gb_sets:new(),
445 unconfirmed_msg = gb_sets:new() }.
412446
413447 init_clean(RecoveredCounts, State) ->
414448 %% Load the journal. Since this is a clean recovery this (almost)
478512 {SegEntries1, UnackedCountDelta} =
479513 segment_plus_journal(SegEntries, JEntries),
480514 array:sparse_foldl(
481 fun (RelSeq, {{MsgId, MsgProps, IsPersistent}, Del, no_ack},
515 fun (RelSeq, {{IsPersistent, Bin, MsgBin}, Del, no_ack},
482516 {SegmentAndDirtyCount, Bytes}) ->
483 {recover_message(ContainsCheckFun(MsgId), CleanShutdown,
517 {MsgOrId, MsgProps} = parse_pub_record_body(Bin, MsgBin),
518 {recover_message(ContainsCheckFun(MsgOrId), CleanShutdown,
484519 Del, RelSeq, SegmentAndDirtyCount),
485520 Bytes + case IsPersistent of
486521 true -> MsgProps#message_properties.size;
540575 queue_index_walker_reader(QueueName, Gatherer) ->
541576 State = blank_state(QueueName),
542577 ok = scan_segments(
543 fun (_SeqId, MsgId, _MsgProps, true, _IsDelivered, no_ack, ok) ->
578 fun (_SeqId, MsgId, _MsgProps, true, _IsDelivered, no_ack, ok)
579 when is_binary(MsgId) ->
544580 gatherer:sync_in(Gatherer, {MsgId, 1});
545581 (_SeqId, _MsgId, _MsgProps, _IsPersistent, _IsDelivered,
546582 _IsAcked, Acc) ->
554590 Result = lists:foldr(
555591 fun (Seg, AccN) ->
556592 segment_entries_foldr(
557 fun (RelSeq, {{MsgId, MsgProps, IsPersistent},
593 fun (RelSeq, {{MsgOrId, MsgProps, IsPersistent},
558594 IsDelivered, IsAcked}, AccM) ->
559 Fun(reconstruct_seq_id(Seg, RelSeq), MsgId, MsgProps,
595 Fun(reconstruct_seq_id(Seg, RelSeq), MsgOrId, MsgProps,
560596 IsPersistent, IsDelivered, IsAcked, AccM)
561597 end, AccN, segment_find_or_new(Seg, Dir, Segments))
562598 end, Acc, all_segment_nums(State1)),
567603 %% expiry/binary manipulation
568604 %%----------------------------------------------------------------------------
569605
570 create_pub_record_body(MsgId, #message_properties { expiry = Expiry,
571 size = Size }) ->
572 [MsgId, expiry_to_binary(Expiry), <<Size:?SIZE_BITS>>].
606 create_pub_record_body(MsgOrId, #message_properties { expiry = Expiry,
607 size = Size }) ->
608 ExpiryBin = expiry_to_binary(Expiry),
609 case MsgOrId of
610 MsgId when is_binary(MsgId) ->
611 {<<MsgId/binary, ExpiryBin/binary, Size:?SIZE_BITS>>, <<>>};
612 #basic_message{id = MsgId} ->
613 MsgBin = term_to_binary(MsgOrId),
614 {<<MsgId/binary, ExpiryBin/binary, Size:?SIZE_BITS>>, MsgBin}
615 end.
573616
574617 expiry_to_binary(undefined) -> <<?NO_EXPIRY:?EXPIRY_BITS>>;
575618 expiry_to_binary(Expiry) -> <<Expiry:?EXPIRY_BITS>>.
576619
577620 parse_pub_record_body(<<MsgIdNum:?MSG_ID_BITS, Expiry:?EXPIRY_BITS,
578 Size:?SIZE_BITS>>) ->
621 Size:?SIZE_BITS>>, MsgBin) ->
579622 %% work around for binary data fragmentation. See
580623 %% rabbit_msg_file:read_next/2
581624 <<MsgId:?MSG_ID_BYTES/binary>> = <<MsgIdNum:?MSG_ID_BITS>>,
582 Exp = case Expiry of
583 ?NO_EXPIRY -> undefined;
584 X -> X
585 end,
586 {MsgId, #message_properties { expiry = Exp,
587 size = Size }}.
625 Props = #message_properties{expiry = case Expiry of
626 ?NO_EXPIRY -> undefined;
627 X -> X
628 end,
629 size = Size},
630 case MsgBin of
631 <<>> -> {MsgId, Props};
632 _ -> Msg = #basic_message{id = MsgId} = binary_to_term(MsgBin),
633 {Msg, Props}
634 end.
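The hunk above changes the publish record so that a small message can be embedded in the queue index next to its ID, with an empty blob meaning "body lives in the message store". A minimal Python sketch of the round trip; the field widths (16-byte msg id, 64-bit expiry, 32-bit size) and the `NO_EXPIRY` sentinel are assumptions mirroring the macros used in this file:

```python
import struct

NO_EXPIRY = 0   # assumed sentinel written when a message has no TTL

def create_pub_record_body(msg_id, expiry, size, msg_body=None):
    """Mirror of create_pub_record_body/2: a fixed-width header (msg id,
    expiry, size) plus an optional embedded message blob; the blob is
    empty when the message store holds the body."""
    assert len(msg_id) == 16                         # ?MSG_ID_BYTES
    exp = NO_EXPIRY if expiry is None else expiry
    header = msg_id + struct.pack(">QI", exp, size)  # expiry:64, size:32 assumed
    return header, (msg_body if msg_body is not None else b"")

def parse_pub_record_body(header, msg_bin):
    """Mirror of parse_pub_record_body/2: an empty blob yields an ID-only
    record; otherwise the embedded message is returned alongside the id."""
    msg_id = header[:16]
    exp, size = struct.unpack(">QI", header[16:])
    props = {"expiry": None if exp == NO_EXPIRY else exp, "size": size}
    if msg_bin == b"":
        return msg_id, props            # message lives in the message store
    return (msg_id, msg_bin), props     # message embedded in the index
```

Keeping the header fixed-width means all existing records stay parseable with one extra length-prefixed field appended, which is exactly what the `store_msg` upgrade below exploits.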
588635
589636 %%----------------------------------------------------------------------------
590637 %% journal manipulation
627674 array:reset(RelSeq, JEntries)
628675 end.
629676
630 maybe_flush_journal(State = #qistate { dirty_count = DCount,
631 max_journal_entries = MaxJournal })
632 when DCount > MaxJournal ->
677 maybe_flush_journal(State) ->
678 maybe_flush_journal(infinity, State).
679
680 maybe_flush_journal(Hint, State = #qistate { dirty_count = DCount,
681 max_journal_entries = MaxJournal })
682 when DCount > MaxJournal orelse (Hint =/= infinity andalso DCount > Hint) ->
633683 flush_journal(State);
634 maybe_flush_journal(State) ->
684 maybe_flush_journal(_Hint, State) ->
635685 State.
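The new `maybe_flush_journal/2` lets a caller pass a size hint that forces a flush before the configured `max_journal_entries` is reached, while `infinity` preserves the old behaviour. The guard condition, as a Python sketch:

```python
def should_flush_journal(dirty_count, max_journal_entries, hint="infinity"):
    """Mirror of the maybe_flush_journal/2 guard: flush once the journal
    holds more dirty entries than the configured maximum, or more than a
    caller-supplied (typically smaller) hint."""
    return (dirty_count > max_journal_entries
            or (hint != "infinity" and dirty_count > hint))
```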
636686
637687 flush_journal(State = #qistate { segments = Segments }) ->
655705 path = Path } = Segment) ->
656706 case array:sparse_size(JEntries) of
657707 0 -> Segment;
658 _ -> {ok, Hdl} = file_handle_cache:open(Path, ?WRITE_MODE,
708 _ -> Seg = array:sparse_foldr(
709 fun entry_to_segment/3, [], JEntries),
710 file_handle_cache_stats:update(queue_index_write),
711
712 {ok, Hdl} = file_handle_cache:open(Path, ?WRITE_MODE,
659713 [{write_buffer, infinity}]),
660 array:sparse_foldl(fun write_entry_to_segment/3, Hdl, JEntries),
714 file_handle_cache:append(Hdl, Seg),
661715 ok = file_handle_cache:close(Hdl),
662716 Segment #segment { journal_entries = array_new() }
663717 end.
676730 %% if you call it more than once on the same state. Assumes the counts
677731 %% are 0 to start with.
678732 load_journal(State = #qistate { dir = Dir }) ->
679 case rabbit_file:is_file(filename:join(Dir, ?JOURNAL_FILENAME)) of
733 Path = filename:join(Dir, ?JOURNAL_FILENAME),
734 case rabbit_file:is_file(Path) of
680735 true -> {JournalHdl, State1} = get_journal_handle(State),
736 Size = rabbit_file:file_size(Path),
681737 {ok, 0} = file_handle_cache:position(JournalHdl, 0),
682 load_journal_entries(State1);
738 {ok, JournalBin} = file_handle_cache:read(JournalHdl, Size),
739 parse_journal_entries(JournalBin, State1);
683740 false -> State
684741 end.
685742
703760 end, Segments),
704761 State1 #qistate { segments = Segments1 }.
705762
706 load_journal_entries(State = #qistate { journal_handle = Hdl }) ->
707 case file_handle_cache:read(Hdl, ?SEQ_BYTES) of
708 {ok, <<Prefix:?JPREFIX_BITS, SeqId:?SEQ_BITS>>} ->
709 case Prefix of
710 ?DEL_JPREFIX ->
711 load_journal_entries(add_to_journal(SeqId, del, State));
712 ?ACK_JPREFIX ->
713 load_journal_entries(add_to_journal(SeqId, ack, State));
714 _ ->
715 case file_handle_cache:read(Hdl, ?PUB_RECORD_BODY_BYTES) of
716 %% Journal entry composed only of zeroes was probably
717 %% produced during a dirty shutdown so stop reading
718 {ok, <<0:?PUB_RECORD_BODY_BYTES/unit:8>>} ->
719 State;
720 {ok, <<Bin:?PUB_RECORD_BODY_BYTES/binary>>} ->
721 {MsgId, MsgProps} = parse_pub_record_body(Bin),
722 IsPersistent = case Prefix of
723 ?PUB_PERSIST_JPREFIX -> true;
724 ?PUB_TRANS_JPREFIX -> false
725 end,
726 load_journal_entries(
727 add_to_journal(
728 SeqId, {MsgId, MsgProps, IsPersistent}, State));
729 _ErrOrEoF -> %% err, we've lost at least a publish
730 State
731 end
732 end;
733 _ErrOrEoF -> State
734 end.
763 parse_journal_entries(<<?DEL_JPREFIX:?JPREFIX_BITS, SeqId:?SEQ_BITS,
764 Rest/binary>>, State) ->
765 parse_journal_entries(Rest, add_to_journal(SeqId, del, State));
766
767 parse_journal_entries(<<?ACK_JPREFIX:?JPREFIX_BITS, SeqId:?SEQ_BITS,
768 Rest/binary>>, State) ->
769 parse_journal_entries(Rest, add_to_journal(SeqId, ack, State));
770 parse_journal_entries(<<0:?JPREFIX_BITS, 0:?SEQ_BITS,
771 0:?PUB_RECORD_SIZE_BYTES/unit:8, _/binary>>, State) ->
772 %% Journal entry composed only of zeroes was probably
773 %% produced during a dirty shutdown so stop reading
774 State;
775 parse_journal_entries(<<Prefix:?JPREFIX_BITS, SeqId:?SEQ_BITS,
776 Bin:?PUB_RECORD_BODY_BYTES/binary,
777 MsgSize:?EMBEDDED_SIZE_BITS, MsgBin:MsgSize/binary,
778 Rest/binary>>, State) ->
779 IsPersistent = case Prefix of
780 ?PUB_PERSIST_JPREFIX -> true;
781 ?PUB_TRANS_JPREFIX -> false
782 end,
783 parse_journal_entries(
784 Rest, add_to_journal(SeqId, {IsPersistent, Bin, MsgBin}, State));
785 parse_journal_entries(_ErrOrEoF, State) ->
786 State.
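This rewrite replaces the old incremental `file_handle_cache:read` loop with one bulk read followed by pure binary pattern matching. A Python sketch of the same dispatch on the record prefix; the 4-bit prefix values, the 60-bit sequence id, and the 28-byte publish record body are assumptions standing in for the `?JPREFIX_BITS`/`?SEQ_BITS`/`?PUB_RECORD_BODY_BYTES` macros:

```python
import struct

# Assumed values for ?DEL_JPREFIX / ?ACK_JPREFIX / ?PUB_PERSIST_JPREFIX;
# the real macros are defined elsewhere in rabbit_queue_index.erl.
DEL, ACK, PUB_PERSIST = 0x4, 0x8, 0x1
PUB_RECORD_BODY_BYTES = 28  # msg id (16) + expiry (8) + size (4), assumed

def parse_journal_entries(buf):
    """Mirror of parse_journal_entries/2: dispatch on the 4-bit prefix of
    each 8-byte word, stopping at a zero-filled entry (dirty shutdown)
    or a truncated publish record."""
    journal = []
    while len(buf) >= 8:
        word, = struct.unpack(">Q", buf[:8])
        prefix, seq_id = word >> 60, word & ((1 << 60) - 1)
        if prefix == DEL:
            journal.append((seq_id, "del")); buf = buf[8:]
        elif prefix == ACK:
            journal.append((seq_id, "ack")); buf = buf[8:]
        elif word == 0:
            break  # all-zero entry: probably a dirty shutdown, stop reading
        else:
            off = 8 + PUB_RECORD_BODY_BYTES
            if len(buf) < off + 4:
                break  # truncated: we've lost at least one publish
            body = buf[8:off]
            msg_size, = struct.unpack(">I", buf[off:off + 4])
            if len(buf) < off + 4 + msg_size:
                break
            msg_bin = buf[off + 4:off + 4 + msg_size]
            journal.append((seq_id, (prefix == PUB_PERSIST, body, msg_bin)))
            buf = buf[off + 4 + msg_size:]
    return journal
```

Matching against an in-memory binary avoids a file-handle round trip per record, which matters because the journal is re-parsed on every recovery.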
735787
736788 deliver_or_ack(_Kind, [], State) ->
737789 State;
738790 deliver_or_ack(Kind, SeqIds, State) ->
739791 JPrefix = case Kind of ack -> ?ACK_JPREFIX; del -> ?DEL_JPREFIX end,
740792 {JournalHdl, State1} = get_journal_handle(State),
793 file_handle_cache_stats:update(queue_index_journal_write),
741794 ok = file_handle_cache:append(
742795 JournalHdl,
743796 [<<JPrefix:?JPREFIX_BITS, SeqId:?SEQ_BITS>> || SeqId <- SeqIds]),
745798 add_to_journal(SeqId, Kind, StateN)
746799 end, State1, SeqIds)).
747800
748 notify_sync(State = #qistate { unconfirmed = UC, on_sync = OnSyncFun }) ->
749 case gb_sets:is_empty(UC) of
750 true -> State;
751 false -> OnSyncFun(UC),
752 State #qistate { unconfirmed = gb_sets:new() }
801 notify_sync(State = #qistate{unconfirmed = UC,
802 unconfirmed_msg = UCM,
803 on_sync = OnSyncFun,
804 on_sync_msg = OnSyncMsgFun}) ->
805 State1 = case gb_sets:is_empty(UC) of
806 true -> State;
807 false -> OnSyncFun(UC),
808 State#qistate{unconfirmed = gb_sets:new()}
809 end,
810 case gb_sets:is_empty(UCM) of
811 true -> State1;
812 false -> OnSyncMsgFun(UCM),
813 State1#qistate{unconfirmed_msg = gb_sets:new()}
753814 end.
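Confirms are now tracked in two sets: message IDs whose bodies sit in the message store (`unconfirmed`) and those embedded in the index (`unconfirmed_msg`), each flushed through its own callback on sync. A Python sketch, with a dict standing in for `#qistate{}`:

```python
def notify_sync(state):
    """Mirror of notify_sync/1: fire each confirm callback only when its
    set is non-empty, then reset that set."""
    if state["unconfirmed"]:
        state["on_sync"](frozenset(state["unconfirmed"]))
        state["unconfirmed"] = set()
    if state["unconfirmed_msg"]:
        state["on_sync_msg"](frozenset(state["unconfirmed_msg"]))
        state["unconfirmed_msg"] = set()
    return state
```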
754815
755816 %%----------------------------------------------------------------------------
822883 segments_new() ->
823884 {dict:new(), []}.
824885
825 write_entry_to_segment(_RelSeq, {?PUB, del, ack}, Hdl) ->
826 Hdl;
827 write_entry_to_segment(RelSeq, {Pub, Del, Ack}, Hdl) ->
828 ok = case Pub of
829 no_pub ->
830 ok;
831 {MsgId, MsgProps, IsPersistent} ->
832 file_handle_cache:append(
833 Hdl, [<<?PUB_PREFIX:?PUB_PREFIX_BITS,
834 (bool_to_int(IsPersistent)):1,
835 RelSeq:?REL_SEQ_BITS>>,
836 create_pub_record_body(MsgId, MsgProps)])
837 end,
838 ok = case {Del, Ack} of
839 {no_del, no_ack} ->
840 ok;
841 _ ->
842 Binary = <<?REL_SEQ_ONLY_PREFIX:?REL_SEQ_ONLY_PREFIX_BITS,
843 RelSeq:?REL_SEQ_BITS>>,
844 file_handle_cache:append(
845 Hdl, case {Del, Ack} of
846 {del, ack} -> [Binary, Binary];
847 _ -> Binary
848 end)
849 end,
850 Hdl.
886 entry_to_segment(_RelSeq, {?PUB, del, ack}, Buf) ->
887 Buf;
888 entry_to_segment(RelSeq, {Pub, Del, Ack}, Buf) ->
889 %% NB: we are assembling the segment in reverse order here, so
890 %% del/ack comes first.
891 Buf1 = case {Del, Ack} of
892 {no_del, no_ack} ->
893 Buf;
894 _ ->
895 Binary = <<?REL_SEQ_ONLY_PREFIX:?REL_SEQ_ONLY_PREFIX_BITS,
896 RelSeq:?REL_SEQ_BITS>>,
897 case {Del, Ack} of
898 {del, ack} -> [[Binary, Binary] | Buf];
899 _ -> [Binary | Buf]
900 end
901 end,
902 case Pub of
903 no_pub ->
904 Buf1;
905 {IsPersistent, Bin, MsgBin} ->
906 [[<<?PUB_PREFIX:?PUB_PREFIX_BITS,
907 (bool_to_int(IsPersistent)):1,
908 RelSeq:?REL_SEQ_BITS, Bin/binary,
909 (size(MsgBin)):?EMBEDDED_SIZE_BITS>>, MsgBin] | Buf1]
910 end.
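`entry_to_segment/3` now assembles the whole segment as an iolist and writes it with a single append, instead of one `file_handle_cache:append` per entry as `write_entry_to_segment/3` did. Because the fold runs right-to-left, each entry is prepended, and within an entry the del/ack records are prepended first so the publish record still comes first on disk. A Python sketch with byte-string stand-ins for the binary records:

```python
def entry_to_segment(rel_seq, pub, delivered, acked, buf):
    """Mirror of entry_to_segment/3: prepend each entry (the caller folds
    from the right); within an entry, prepend del/ack before the publish
    record so the publish record ends up first on disk."""
    if pub is not None and delivered and acked:
        return buf                       # pub+del+ack cancels out entirely
    rel = b"R%d;" % rel_seq              # stand-in for a rel-seq-only record
    if delivered or acked:
        buf = [rel * (2 if (delivered and acked) else 1)] + buf
    if pub is not None:
        buf = [b"P%d:%s;" % (rel_seq, pub)] + buf
    return buf
```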
851911
852912 read_bounded_segment(Seg, {StartSeg, StartRelSeq}, {EndSeg, EndRelSeq},
853913 {Messages, Segments}, Dir) ->
854914 Segment = segment_find_or_new(Seg, Dir, Segments),
855915 {segment_entries_foldr(
856 fun (RelSeq, {{MsgId, MsgProps, IsPersistent}, IsDelivered, no_ack}, Acc)
916 fun (RelSeq, {{MsgOrId, MsgProps, IsPersistent}, IsDelivered, no_ack},
917 Acc)
857918 when (Seg > StartSeg orelse StartRelSeq =< RelSeq) andalso
858919 (Seg < EndSeg orelse EndRelSeq >= RelSeq) ->
859 [ {MsgId, reconstruct_seq_id(StartSeg, RelSeq), MsgProps,
860 IsPersistent, IsDelivered == del} | Acc ];
920 [{MsgOrId, reconstruct_seq_id(StartSeg, RelSeq), MsgProps,
921 IsPersistent, IsDelivered == del} | Acc];
861922 (_RelSeq, _Value, Acc) ->
862923 Acc
863924 end, Messages, Segment),
867928 Segment = #segment { journal_entries = JEntries }) ->
868929 {SegEntries, _UnackedCount} = load_segment(false, Segment),
869930 {SegEntries1, _UnackedCountD} = segment_plus_journal(SegEntries, JEntries),
870 array:sparse_foldr(Fun, Init, SegEntries1).
931 array:sparse_foldr(
932 fun (RelSeq, {{IsPersistent, Bin, MsgBin}, Del, Ack}, Acc) ->
933 {MsgOrId, MsgProps} = parse_pub_record_body(Bin, MsgBin),
934 Fun(RelSeq, {{MsgOrId, MsgProps, IsPersistent}, Del, Ack}, Acc)
935 end, Init, SegEntries1).
871936
872937 %% Loading segments
873938 %%
876941 Empty = {array_new(), 0},
877942 case rabbit_file:is_file(Path) of
878943 false -> Empty;
879 true -> {ok, Hdl} = file_handle_cache:open(Path, ?READ_AHEAD_MODE, []),
944 true -> Size = rabbit_file:file_size(Path),
945 file_handle_cache_stats:update(queue_index_read),
946 {ok, Hdl} = file_handle_cache:open(Path, ?READ_MODE, []),
880947 {ok, 0} = file_handle_cache:position(Hdl, bof),
881 Res = case file_handle_cache:read(Hdl, ?SEGMENT_TOTAL_SIZE) of
882 {ok, SegData} -> load_segment_entries(
883 KeepAcked, SegData, Empty);
884 eof -> Empty
885 end,
948 {ok, SegBin} = file_handle_cache:read(Hdl, Size),
886949 ok = file_handle_cache:close(Hdl),
950 Res = parse_segment_entries(SegBin, KeepAcked, Empty),
887951 Res
888952 end.
889953
890 load_segment_entries(KeepAcked,
891 <<?PUB_PREFIX:?PUB_PREFIX_BITS,
892 IsPersistentNum:1, RelSeq:?REL_SEQ_BITS,
893 PubRecordBody:?PUB_RECORD_BODY_BYTES/binary,
894 SegData/binary>>,
895 {SegEntries, UnackedCount}) ->
896 {MsgId, MsgProps} = parse_pub_record_body(PubRecordBody),
897 Obj = {{MsgId, MsgProps, 1 == IsPersistentNum}, no_del, no_ack},
954 parse_segment_entries(<<?PUB_PREFIX:?PUB_PREFIX_BITS,
955 IsPersistNum:1, RelSeq:?REL_SEQ_BITS, Rest/binary>>,
956 KeepAcked, Acc) ->
957 parse_segment_publish_entry(
958 Rest, 1 == IsPersistNum, RelSeq, KeepAcked, Acc);
959 parse_segment_entries(<<?REL_SEQ_ONLY_PREFIX:?REL_SEQ_ONLY_PREFIX_BITS,
960 RelSeq:?REL_SEQ_BITS, Rest/binary>>, KeepAcked, Acc) ->
961 parse_segment_entries(
962 Rest, KeepAcked, add_segment_relseq_entry(KeepAcked, RelSeq, Acc));
963 parse_segment_entries(<<>>, _KeepAcked, Acc) ->
964 Acc.
965
966 parse_segment_publish_entry(<<Bin:?PUB_RECORD_BODY_BYTES/binary,
967 MsgSize:?EMBEDDED_SIZE_BITS,
968 MsgBin:MsgSize/binary, Rest/binary>>,
969 IsPersistent, RelSeq, KeepAcked,
970 {SegEntries, Unacked}) ->
971 Obj = {{IsPersistent, Bin, MsgBin}, no_del, no_ack},
898972 SegEntries1 = array:set(RelSeq, Obj, SegEntries),
899 load_segment_entries(KeepAcked, SegData, {SegEntries1, UnackedCount + 1});
900 load_segment_entries(KeepAcked,
901 <<?REL_SEQ_ONLY_PREFIX:?REL_SEQ_ONLY_PREFIX_BITS,
902 RelSeq:?REL_SEQ_BITS, SegData/binary>>,
903 {SegEntries, UnackedCount}) ->
904 {UnackedCountDelta, SegEntries1} =
905 case array:get(RelSeq, SegEntries) of
906 {Pub, no_del, no_ack} ->
907 { 0, array:set(RelSeq, {Pub, del, no_ack}, SegEntries)};
908 {Pub, del, no_ack} when KeepAcked ->
909 {-1, array:set(RelSeq, {Pub, del, ack}, SegEntries)};
910 {_Pub, del, no_ack} ->
911 {-1, array:reset(RelSeq, SegEntries)}
912 end,
913 load_segment_entries(KeepAcked, SegData,
914 {SegEntries1, UnackedCount + UnackedCountDelta});
915 load_segment_entries(_KeepAcked, _SegData, Res) ->
916 Res.
973 parse_segment_entries(Rest, KeepAcked, {SegEntries1, Unacked + 1});
974 parse_segment_publish_entry(Rest, _IsPersistent, _RelSeq, KeepAcked, Acc) ->
975 parse_segment_entries(Rest, KeepAcked, Acc).
976
977 add_segment_relseq_entry(KeepAcked, RelSeq, {SegEntries, Unacked}) ->
978 case array:get(RelSeq, SegEntries) of
979 {Pub, no_del, no_ack} ->
980 {array:set(RelSeq, {Pub, del, no_ack}, SegEntries), Unacked};
981 {Pub, del, no_ack} when KeepAcked ->
982 {array:set(RelSeq, {Pub, del, ack}, SegEntries), Unacked - 1};
983 {_Pub, del, no_ack} ->
984 {array:reset(RelSeq, SegEntries), Unacked - 1}
985 end.
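Each rel-seq-only record advances a segment entry through the states pub → del → ack; when `KeepAcked` is false the fully acked entry is dropped outright, and in either ack case the unacked count is decremented. A Python sketch over a dict keyed by relative sequence number:

```python
def add_segment_relseq_entry(keep_acked, rel_seq, seg_entries, unacked):
    """Mirror of add_segment_relseq_entry/3: each rel-seq-only record
    moves an entry pub -> del -> ack."""
    pub, delivered, acked = seg_entries[rel_seq]
    assert not acked, "an acked entry should already have been dropped"
    if not delivered:
        seg_entries[rel_seq] = (pub, True, False)   # pub -> del
        return unacked
    if keep_acked:
        seg_entries[rel_seq] = (pub, True, True)    # del -> ack, kept
    else:
        del seg_entries[rel_seq]                    # del -> ack, dropped
    return unacked - 1
```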
917986
918987 array_new() ->
919988 array:new([{default, undefined}, fixed, {size, ?SEGMENT_ENTRY_COUNT}]).
11201189 store_msg_size_segment(_) ->
11211190 stop.
11221191
1192 store_msg() ->
1193 foreach_queue_index({fun store_msg_journal/1,
1194 fun store_msg_segment/1}).
1195
1196 store_msg_journal(<<?DEL_JPREFIX:?JPREFIX_BITS, SeqId:?SEQ_BITS,
1197 Rest/binary>>) ->
1198 {<<?DEL_JPREFIX:?JPREFIX_BITS, SeqId:?SEQ_BITS>>, Rest};
1199 store_msg_journal(<<?ACK_JPREFIX:?JPREFIX_BITS, SeqId:?SEQ_BITS,
1200 Rest/binary>>) ->
1201 {<<?ACK_JPREFIX:?JPREFIX_BITS, SeqId:?SEQ_BITS>>, Rest};
1202 store_msg_journal(<<Prefix:?JPREFIX_BITS, SeqId:?SEQ_BITS,
1203 MsgId:?MSG_ID_BITS, Expiry:?EXPIRY_BITS, Size:?SIZE_BITS,
1204 Rest/binary>>) ->
1205 {<<Prefix:?JPREFIX_BITS, SeqId:?SEQ_BITS, MsgId:?MSG_ID_BITS,
1206 Expiry:?EXPIRY_BITS, Size:?SIZE_BITS,
1207 0:?EMBEDDED_SIZE_BITS>>, Rest};
1208 store_msg_journal(_) ->
1209 stop.
1210
1211 store_msg_segment(<<?PUB_PREFIX:?PUB_PREFIX_BITS, IsPersistentNum:1,
1212 RelSeq:?REL_SEQ_BITS, MsgId:?MSG_ID_BITS,
1213 Expiry:?EXPIRY_BITS, Size:?SIZE_BITS, Rest/binary>>) ->
1214 {<<?PUB_PREFIX:?PUB_PREFIX_BITS, IsPersistentNum:1, RelSeq:?REL_SEQ_BITS,
1215 MsgId:?MSG_ID_BITS, Expiry:?EXPIRY_BITS, Size:?SIZE_BITS,
1216 0:?EMBEDDED_SIZE_BITS>>, Rest};
1217 store_msg_segment(<<?REL_SEQ_ONLY_PREFIX:?REL_SEQ_ONLY_PREFIX_BITS,
1218 RelSeq:?REL_SEQ_BITS, Rest/binary>>) ->
1219 {<<?REL_SEQ_ONLY_PREFIX:?REL_SEQ_ONLY_PREFIX_BITS, RelSeq:?REL_SEQ_BITS>>,
1220 Rest};
1221 store_msg_segment(_) ->
1222 stop.
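The `store_msg` upgrade step rewrites every on-disk publish record by appending a zero `?EMBEDDED_SIZE_BITS` field, so records written by older versions parse under the new embedded-message layout; del/ack and rel-seq-only records pass through unchanged. A sketch of the publish case (the 32-bit width is an assumption):

```python
import struct

def upgrade_pub_record(record):
    """Mirror of the publish clauses of store_msg_journal/1 and
    store_msg_segment/1: append a 32-bit zero embedded-message size,
    marking 'no embedded message' for a pre-upgrade record."""
    return record + struct.pack(">I", 0)
```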
1223
1224
1225
11231226
11241227 %%----------------------------------------------------------------------------
11251228
11561259 [{write_buffer, infinity}]),
11571260
11581261 {ok, PathHdl} = file_handle_cache:open(
1159 Path, [{read_ahead, Size} | ?READ_MODE], []),
1262 Path, ?READ_MODE, [{read_buffer, Size}]),
11601263 {ok, Content} = file_handle_cache:read(PathHdl, Size),
11611264 ok = file_handle_cache:close(PathHdl),
11621265
5656 timeout, frame_max, channel_max, client_properties, connected_at]).
5757
5858 -define(INFO_KEYS, ?CREATION_EVENT_KEYS ++ ?STATISTICS_KEYS -- [pid]).
59
60 -define(AUTH_NOTIFICATION_INFO_KEYS,
61 [host, vhost, name, peer_host, peer_port, protocol, auth_mechanism,
62 ssl, ssl_protocol, ssl_cipher, peer_cert_issuer, peer_cert_subject,
63 peer_cert_validity]).
5964
6065 -define(IS_RUNNING(State),
6166 (State#v1.connection_state =:= running orelse
213218 rabbit_net:fast_close(Sock),
214219 exit(normal)
215220 end,
216 log(info, "accepting AMQP connection ~p (~s)~n", [self(), Name]),
217221 {ok, HandshakeTimeout} = application:get_env(rabbit, handshake_timeout),
218222 ClientSock = socket_op(Sock, SockTransform),
219223 erlang:send_after(HandshakeTimeout, self(), handshake_timeout),
259263 log(info, "closing AMQP connection ~p (~s)~n", [self(), Name])
260264 catch
261265 Ex -> log(case Ex of
262 connection_closed_abruptly -> warning;
263 _ -> error
266 connection_closed_with_no_data_received -> debug;
267 connection_closed_abruptly -> warning;
268 _ -> error
264269 end, "closing AMQP connection ~p (~s):~n~p~n",
265270 [self(), Name, Ex])
266271 after
312317 binlist_split(Len, [H|T], Acc) ->
313318 binlist_split(Len - size(H), T, [H|Acc]).
314319
315 mainloop(Deb, Buf, BufLen, State = #v1{sock = Sock}) ->
316 case rabbit_net:recv(Sock) of
320 mainloop(Deb, Buf, BufLen, State = #v1{sock = Sock,
321 connection_state = CS,
322 connection = #connection{
323 name = ConnName}}) ->
324 Recv = rabbit_net:recv(Sock),
325 case CS of
326 pre_init when Buf =:= [] ->
327 %% We only log incoming connections when either the
328 %% first byte was received or there was an error (e.g. a
329 %% timeout).
330 %%
331 %% The goal is to not log TCP healthchecks (a connection
332 %% with no data received) unless specified otherwise.
333 log(case Recv of
334 closed -> debug;
335 _ -> info
336 end, "accepting AMQP connection ~p (~s)~n",
337 [self(), ConnName]);
338 _ ->
339 ok
340 end,
341 case Recv of
317342 {data, Data} ->
318343 recvloop(Deb, [Data | Buf], BufLen + size(Data),
319344 State#v1{pending_recv = false});
333358 end
334359 end.
335360
336 stop(closed, State) -> maybe_emit_stats(State),
337 throw(connection_closed_abruptly);
338 stop(Reason, State) -> maybe_emit_stats(State),
339 throw({inet_error, Reason}).
361 stop(closed, #v1{connection_state = pre_init} = State) ->
362 %% The connection was closed before any packet was received. It's
363 %% probably a load-balancer healthcheck: don't consider this a
364 %% failure.
365 maybe_emit_stats(State),
366 throw(connection_closed_with_no_data_received);
367 stop(closed, State) ->
368 maybe_emit_stats(State),
369 throw(connection_closed_abruptly);
370 stop(Reason, State) ->
371 maybe_emit_stats(State),
372 throw({inet_error, Reason}).
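Together with the deferred "accepting AMQP connection" log in `mainloop/4`, the new `pre_init` clause means a socket that closes before sending any byte (typically a load-balancer healthcheck) is logged at debug rather than warning. The severity selection, as a Python sketch:

```python
def close_log_level(connection_state, inet_error=None):
    """Mirror of the severity choice in stop/2 and the reader's exception
    handler: no data received -> debug, abrupt close -> warning, socket
    error -> error."""
    if inet_error is not None:
        return "error"      # {inet_error, Reason}
    if connection_state == "pre_init":
        return "debug"      # likely a load-balancer healthcheck
    return "warning"        # connection_closed_abruptly
```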
340373
341374 handle_other({conserve_resources, Source, Conserve},
342375 State = #v1{throttle = Throttle = #throttle{alarmed_by = CR}}) ->
943976 helper_sup = SupPid,
944977 sock = Sock,
945978 throttle = Throttle}) ->
946 ok = rabbit_access_control:check_vhost_access(User, VHostPath),
979 ok = rabbit_access_control:check_vhost_access(User, VHostPath, Sock),
947980 NewConnection = Connection#connection{vhost = VHostPath},
948981 ok = send_on_channel0(Sock, #'connection.open_ok'{}, Protocol),
949982 Conserve = rabbit_alarm:register(self(), {?MODULE, conserve_resources, []}),
10451078 auth_state = AuthState},
10461079 sock = Sock}) ->
10471080 case AuthMechanism:handle_response(Response, AuthState) of
1048 {refused, Msg, Args} ->
1049 auth_fail(Msg, Args, Name, State);
1081 {refused, Username, Msg, Args} ->
1082 auth_fail(Username, Msg, Args, Name, State);
10501083 {protocol_error, Msg, Args} ->
1084 notify_auth_result(none, user_authentication_failure,
1085 [{error, rabbit_misc:format(Msg, Args)}],
1086 State),
10511087 rabbit_misc:protocol_error(syntax_error, Msg, Args);
10521088 {challenge, Challenge, AuthState1} ->
10531089 Secure = #'connection.secure'{challenge = Challenge},
10561092 auth_state = AuthState1}};
10571093 {ok, User = #user{username = Username}} ->
10581094 case rabbit_access_control:check_user_loopback(Username, Sock) of
1059 ok -> ok;
1060 not_allowed -> auth_fail("user '~s' can only connect via "
1061 "localhost", [Username], Name, State)
1095 ok ->
1096 notify_auth_result(Username, user_authentication_success,
1097 [], State);
1098 not_allowed ->
1099 auth_fail(Username, "user '~s' can only connect via "
1100 "localhost", [Username], Name, State)
10621101 end,
10631102 Tune = #'connection.tune'{frame_max = get_env(frame_max),
10641103 channel_max = get_env(channel_max),
10701109 end.
10711110
10721111 -ifdef(use_specs).
1073 -spec(auth_fail/4 :: (string(), [any()], binary(), #v1{}) -> no_return()).
1112 -spec(auth_fail/5 ::
1113 (rabbit_types:username() | none, string(), [any()], binary(), #v1{}) ->
1114 no_return()).
10741115 -endif.
1075 auth_fail(Msg, Args, AuthName,
1116 auth_fail(Username, Msg, Args, AuthName,
10761117 State = #v1{connection = #connection{protocol = Protocol,
10771118 capabilities = Capabilities}}) ->
1119 notify_auth_result(Username, user_authentication_failure,
1120 [{error, rabbit_misc:format(Msg, Args)}], State),
10781121 AmqpError = rabbit_misc:amqp_error(
10791122 access_refused, "~s login refused: ~s",
10801123 [AuthName, io_lib:format(Msg, Args)], none),
10921135 _ -> ok
10931136 end,
10941137 rabbit_misc:protocol_error(AmqpError).
1138
1139 notify_auth_result(Username, AuthResult, ExtraProps, State) ->
1140 EventProps = [{connection_type, network},
1141 {name, case Username of none -> ''; _ -> Username end}] ++
1142 [case Item of
1143 name -> {connection_name, i(name, State)};
1144 _ -> {Item, i(Item, State)}
1145 end || Item <- ?AUTH_NOTIFICATION_INFO_KEYS] ++
1146 ExtraProps,
1147 rabbit_event:notify(AuthResult, [P || {_, V} = P <- EventProps, V =/= '']).
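`notify_auth_result/4` assembles the event from a fixed key list (renaming the connection's own `name` to `connection_name` so it doesn't clash with the username), appends any extra properties, and drops blank values before emitting. A Python sketch with a plain dict standing in for the `i/2` lookups:

```python
def notify_auth_result(username, auth_result, extra_props, info):
    """Mirror of notify_auth_result/4: assemble event properties and drop
    blank values; `info` stands in for the i/2 connection-state lookups."""
    props = {"connection_type": "network",
             "name": username if username is not None else ""}
    for key, value in info.items():
        # the connection's own "name" key is renamed to avoid clashing
        props["connection_name" if key == "name" else key] = value
    props.update(extra_props)
    return auth_result, {k: v for k, v in props.items() if v != ""}
```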
10951148
10961149 %%--------------------------------------------------------------------------
10971150
11661219
11671220 cert_info(F, #v1{sock = Sock}) ->
11681221 case rabbit_net:peercert(Sock) of
1169 nossl -> '';
1170 {error, no_peercert} -> '';
1171 {ok, Cert} -> list_to_binary(F(Cert))
1222 nossl -> '';
1223 {error, _} -> '';
1224 {ok, Cert} -> list_to_binary(F(Cert))
11721225 end.
11731226
11741227 maybe_emit_stats(State) ->
1515
1616 -module(rabbit_trace).
1717
18 -export([init/1, enabled/1, tap_in/5, tap_out/5, start/1, stop/1]).
18 -export([init/1, enabled/1, tap_in/6, tap_out/5, start/1, stop/1]).
1919
2020 -include("rabbit.hrl").
2121 -include("rabbit_framing.hrl").
3131
3232 -spec(init/1 :: (rabbit_types:vhost()) -> state()).
3333 -spec(enabled/1 :: (rabbit_types:vhost()) -> boolean()).
34 -spec(tap_in/5 :: (rabbit_types:basic_message(), binary(),
35 rabbit_channel:channel_number(),
34 -spec(tap_in/6 :: (rabbit_types:basic_message(), [rabbit_amqqueue:name()],
35 binary(), rabbit_channel:channel_number(),
3636 rabbit_types:username(), state()) -> 'ok').
3737 -spec(tap_out/5 :: (rabbit_amqqueue:qmsg(), binary(),
3838 rabbit_channel:channel_number(),
5757 {ok, VHosts} = application:get_env(rabbit, ?TRACE_VHOSTS),
5858 lists:member(VHost, VHosts).
5959
60 tap_in(_Msg, _ConnName, _ChannelNum, _Username, none) -> ok;
60 tap_in(_Msg, _QNames, _ConnName, _ChannelNum, _Username, none) -> ok;
6161 tap_in(Msg = #basic_message{exchange_name = #resource{name = XName,
6262 virtual_host = VHost}},
63 ConnName, ChannelNum, Username, TraceX) ->
63 QNames, ConnName, ChannelNum, Username, TraceX) ->
6464 trace(TraceX, Msg, <<"publish">>, XName,
65 [{<<"vhost">>, longstr, VHost},
66 {<<"connection">>, longstr, ConnName},
67 {<<"channel">>, signedint, ChannelNum},
68 {<<"user">>, longstr, Username}]).
65 [{<<"vhost">>, longstr, VHost},
66 {<<"connection">>, longstr, ConnName},
67 {<<"channel">>, signedint, ChannelNum},
68 {<<"user">>, longstr, Username},
69 {<<"routed_queues">>, array,
70 [{longstr, QName#resource.name} || QName <- QNames]}]).
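`tap_in/6` now also records which queues the message was routed to, encoded as an AMQP array of longstrs. A Python sketch of the header assembly, using (key, type, value) triples for the AMQP table entries:

```python
def trace_headers(vhost, conn_name, channel_num, username, queue_names):
    """Mirror of the tap_in/6 header list as (key, type, value) triples."""
    return [("vhost", "longstr", vhost),
            ("connection", "longstr", conn_name),
            ("channel", "signedint", channel_num),
            ("user", "longstr", username),
            ("routed_queues", "array",
             [("longstr", q) for q in queue_names])]
```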
6971
7072 tap_out(_Msg, _ConnName, _ChannelNum, _Username, none) -> ok;
7173 tap_out({#resource{name = QName, virtual_host = VHost},
2626 vhost/0, ctag/0, amqp_error/0, r/1, r2/2, r3/3, listener/0,
2727 binding/0, binding_source/0, binding_destination/0,
2828 amqqueue/0, exchange/0,
29 connection/0, protocol/0, user/0, internal_user/0,
29 connection/0, protocol/0, auth_user/0, user/0, internal_user/0,
3030 username/0, password/0, password_hash/0,
3131 ok/1, error/1, ok_or_error/1, ok_or_error2/2, ok_pid_or_error/0,
3232 channel_exit/0, connection_exit/0, mfargs/0, proc_name/0,
130130
131131 -type(protocol() :: rabbit_framing:protocol()).
132132
133 -type(auth_user() ::
134 #auth_user{username :: username(),
135 tags :: [atom()],
136 impl :: any()}).
137
133138 -type(user() ::
134 #user{username :: username(),
135 tags :: [atom()],
136 auth_backend :: atom(),
137 impl :: any()}).
139 #user{username :: username(),
140 tags :: [atom()],
141 authz_backends :: [{atom(), any()}]}).
138142
139143 -type(internal_user() ::
140144 #internal_user{username :: username(),
1515
1616 -module(rabbit_upgrade).
1717
18 -export([maybe_upgrade_mnesia/0, maybe_upgrade_local/0]).
18 -export([maybe_upgrade_mnesia/0, maybe_upgrade_local/0,
19 nodes_running/1, secondary_upgrade/1]).
1920
2021 -include("rabbit.hrl").
2122
121122
122123 maybe_upgrade_mnesia() ->
123124 AllNodes = rabbit_mnesia:cluster_nodes(all),
125 ok = rabbit_mnesia_rename:maybe_finish(AllNodes),
124126 case rabbit_version:upgrades_required(mnesia) of
125127 {error, starting_from_scratch} ->
126128 ok;
4949 -rabbit_upgrade({cluster_name, mnesia, [runtime_parameters]}).
5050 -rabbit_upgrade({down_slave_nodes, mnesia, [queue_decorators]}).
5151 -rabbit_upgrade({queue_state, mnesia, [down_slave_nodes]}).
52 -rabbit_upgrade({recoverable_slaves, mnesia, [queue_state]}).
5253
5354 %% -------------------------------------------------------------------
5455
8182 -spec(cluster_name/0 :: () -> 'ok').
8283 -spec(down_slave_nodes/0 :: () -> 'ok').
8384 -spec(queue_state/0 :: () -> 'ok').
85 -spec(recoverable_slaves/0 :: () -> 'ok').
8486
8587 -endif.
8688
417419 [name, durable, auto_delete, exclusive_owner, arguments, pid, slave_pids,
418420 sync_slave_pids, down_slave_nodes, policy, gm_pids, decorators, state]).
419421
422 recoverable_slaves() ->
423 ok = recoverable_slaves(rabbit_queue),
424 ok = recoverable_slaves(rabbit_durable_queue).
425
426 recoverable_slaves(Table) ->
427 transform(
428 Table, fun (Q) -> Q end, %% Don't change shape of record
429 [name, durable, auto_delete, exclusive_owner, arguments, pid, slave_pids,
430 sync_slave_pids, recoverable_slaves, policy, gm_pids, decorators,
431 state]).
432
433
420434 %%--------------------------------------------------------------------
421435
422436 transform(TableName, Fun, FieldList) ->
1717
1818 -export([init/3, terminate/2, delete_and_terminate/2, delete_crashed/1,
1919 purge/1, purge_acks/1,
20 publish/5, publish_delivered/4, discard/3, drain_confirmed/1,
20 publish/6, publish_delivered/5, discard/4, drain_confirmed/1,
2121 dropwhile/2, fetchwhile/4, fetch/2, drop/2, ack/2, requeue/2,
2222 ackfold/4, fold/3, len/1, is_empty/1, depth/1,
2323 set_ram_duration_target/2, ram_duration/1, needs_timeout/1, timeout/1,
2727 -export([start/1, stop/0]).
2828
2929 %% exported for testing only
30 -export([start_msg_store/2, stop_msg_store/0, init/5]).
31
32 %%----------------------------------------------------------------------------
30 -export([start_msg_store/2, stop_msg_store/0, init/6]).
31
32 %%----------------------------------------------------------------------------
33 %% Messages, and their position in the queue, can be in memory or on
34 %% disk, or both. Persistent messages will have both message and
35 %% position pushed to disk as soon as they arrive; transient messages
36 %% can be written to disk (and thus both types can be evicted from
37 %% memory) under memory pressure. Whether a message is in RAM and
38 %% whether it is persistent are orthogonal questions.
39 %%
40 %% Messages are persisted using the queue index and the message
41 %% store. Normally the queue index holds the position of the message
42 %% *within this queue* along with a couple of small bits of metadata,
43 %% while the message store holds the message itself (including headers
44 %% and other properties).
45 %%
46 %% However, as an optimisation, small messages can be embedded
47 %% directly in the queue index and bypass the message store
48 %% altogether.
49 %%
3350 %% Definitions:
34
51 %%
3552 %% alpha: this is a message where both the message itself, and its
3653 %% position within the queue are held in RAM
3754 %%
38 %% beta: this is a message where the message itself is only held on
39 %% disk, but its position within the queue is held in RAM.
55 %% beta: this is a message where the message itself is only held on
56 %% disk (if persisted to the message store) but its position
57 %% within the queue is held in RAM.
4058 %%
4159 %% gamma: this is a message where the message itself is only held on
4260 %% disk, but its position is both in RAM and on disk.
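As an aside, the alpha/beta/gamma definitions above reduce to a small residency table. This Python sketch (a hypothetical helper, not part of the codebase; the fourth state, delta, is defined further down in the file as fully on disk) maps where the message and its queue position live to the named states:

```python
def classify(msg_in_ram, pos_in_ram, pos_on_disk):
    """Map message/position residency to the alpha/beta/gamma states
    described in the comments above; anything fully paged out is delta."""
    if msg_in_ram and pos_in_ram:
        return "alpha"          # message and position both in RAM
    if not msg_in_ram and pos_in_ram and not pos_on_disk:
        return "beta"           # message on disk, position only in RAM
    if not msg_in_ram and pos_in_ram and pos_on_disk:
        return "gamma"          # message on disk, position in RAM and on disk
    return "delta"              # neither message nor position in RAM
```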
247265 q3,
248266 q4,
249267 next_seq_id,
250 ram_pending_ack,
251 disk_pending_ack,
268 ram_pending_ack, %% msgs using store, still in RAM
269 disk_pending_ack, %% msgs in store, paged out
270 qi_pending_ack, %% msgs using qi, *can't* be paged out
252271 index_state,
253272 msg_store_clients,
254273 durable,
273292 unconfirmed,
274293 confirmed,
275294 ack_out_counter,
276 ack_in_counter
295 ack_in_counter,
296 %% Unlike the other counters these two do not feed into
297 %% #rates{} and get reset
298 disk_read_count,
299 disk_write_count
277300 }).
278301
279302 -record(rates, { in, out, ack_in, ack_out, timestamp }).
284307 msg,
285308 is_persistent,
286309 is_delivered,
287 msg_on_disk,
310 msg_in_store,
288311 index_on_disk,
312 persist_to,
289313 msg_props
290314 }).
291315
299323 %% betas, the IO_BATCH_SIZE sets the number of betas that we must be
300324 %% due to write indices for before we do any work at all.
301325 -define(IO_BATCH_SIZE, 2048). %% next power-of-2 after ?CREDIT_DISC_BOUND
326 -define(HEADER_GUESS_SIZE, 100). %% see determine_persist_to/2
302327 -define(PERSISTENT_MSG_STORE, msg_store_persistent).
303328 -define(TRANSIENT_MSG_STORE, msg_store_transient).
304329 -define(QUEUE, lqueue).
305330
306331 -include("rabbit.hrl").
332 -include("rabbit_framing.hrl").
307333
308334 %%----------------------------------------------------------------------------
309335
340366 next_seq_id :: seq_id(),
341367 ram_pending_ack :: gb_trees:tree(),
342368 disk_pending_ack :: gb_trees:tree(),
369 qi_pending_ack :: gb_trees:tree(),
343370 index_state :: any(),
344371 msg_store_clients :: 'undefined' | {{any(), binary()},
345372 {any(), binary()}},
366393 unconfirmed :: gb_sets:set(),
367394 confirmed :: gb_sets:set(),
368395 ack_out_counter :: non_neg_integer(),
369 ack_in_counter :: non_neg_integer() }).
396 ack_in_counter :: non_neg_integer(),
397 disk_read_count :: non_neg_integer(),
398 disk_write_count :: non_neg_integer() }).
370399 %% Duplicated from rabbit_backing_queue
371400 -spec(ack/2 :: ([ack()], state()) -> {[rabbit_guid:guid()], state()}).
372401
425454 ok = rabbit_sup:stop_child(?PERSISTENT_MSG_STORE),
426455 ok = rabbit_sup:stop_child(?TRANSIENT_MSG_STORE).
427456
428 init(Queue, Recover, AsyncCallback) ->
429 init(Queue, Recover, AsyncCallback,
430 fun (MsgIds, ActionTaken) ->
431 msgs_written_to_disk(AsyncCallback, MsgIds, ActionTaken)
432 end,
433 fun (MsgIds) -> msg_indices_written_to_disk(AsyncCallback, MsgIds) end).
457 init(Queue, Recover, Callback) ->
458 init(
459 Queue, Recover, Callback,
460 fun (MsgIds, ActionTaken) ->
461 msgs_written_to_disk(Callback, MsgIds, ActionTaken)
462 end,
463 fun (MsgIds) -> msg_indices_written_to_disk(Callback, MsgIds) end,
464 fun (MsgIds) -> msgs_and_indices_written_to_disk(Callback, MsgIds) end).
434465
435466 init(#amqqueue { name = QueueName, durable = IsDurable }, new,
436 AsyncCallback, MsgOnDiskFun, MsgIdxOnDiskFun) ->
437 IndexState = rabbit_queue_index:init(QueueName, MsgIdxOnDiskFun),
467 AsyncCallback, MsgOnDiskFun, MsgIdxOnDiskFun, MsgAndIdxOnDiskFun) ->
468 IndexState = rabbit_queue_index:init(QueueName,
469 MsgIdxOnDiskFun, MsgAndIdxOnDiskFun),
438470 init(IsDurable, IndexState, 0, 0, [],
439471 case IsDurable of
440472 true -> msg_store_client_init(?PERSISTENT_MSG_STORE,
445477
446478 %% We can be recovering a transient queue if it crashed
447479 init(#amqqueue { name = QueueName, durable = IsDurable }, Terms,
448 AsyncCallback, MsgOnDiskFun, MsgIdxOnDiskFun) ->
480 AsyncCallback, MsgOnDiskFun, MsgIdxOnDiskFun, MsgAndIdxOnDiskFun) ->
449481 {PRef, RecoveryTerms} = process_recovery_terms(Terms),
450482 {PersistentClient, ContainsCheckFun} =
451483 case IsDurable of
452484 true -> C = msg_store_client_init(?PERSISTENT_MSG_STORE, PRef,
453485 MsgOnDiskFun, AsyncCallback),
454 {C, fun (MId) -> rabbit_msg_store:contains(MId, C) end};
486 {C, fun (MsgId) when is_binary(MsgId) ->
487 rabbit_msg_store:contains(MsgId, C);
488 (#basic_message{is_persistent = Persistent}) ->
489 Persistent
490 end};
455491 false -> {undefined, fun(_MsgId) -> false end}
456492 end,
457493 TransientClient = msg_store_client_init(?TRANSIENT_MSG_STORE,
460496 rabbit_queue_index:recover(
461497 QueueName, RecoveryTerms,
462498 rabbit_msg_store:successfully_recovered_state(?PERSISTENT_MSG_STORE),
463 ContainsCheckFun, MsgIdxOnDiskFun),
499 ContainsCheckFun, MsgIdxOnDiskFun, MsgAndIdxOnDiskFun),
464500 init(IsDurable, IndexState, DeltaCount, DeltaBytes, RecoveryTerms,
465501 PersistentClient, TransientClient).
466502
513549 delete_crashed(#amqqueue{name = QName}) ->
514550 ok = rabbit_queue_index:erase(QName).
515551
516 purge(State = #vqstate { q4 = Q4,
517 index_state = IndexState,
518 msg_store_clients = MSCState,
519 len = Len,
520 ram_bytes = RamBytes,
521 persistent_count = PCount,
522 persistent_bytes = PBytes }) ->
552 purge(State = #vqstate { q4 = Q4,
553 len = Len }) ->
523554 %% TODO: when there are no pending acks, which is a common case,
524555 %% we could simply wipe the qi instead of issuing delivers and
525556 %% acks for all the messages.
526 Stats = {RamBytes, PCount, PBytes},
527 {Stats1, IndexState1} =
528 remove_queue_entries(Q4, Stats, IndexState, MSCState),
529
530 {Stats2, State1 = #vqstate { q1 = Q1,
531 index_state = IndexState2,
532 msg_store_clients = MSCState1 }} =
533
534 purge_betas_and_deltas(
535 Stats1, State #vqstate { q4 = ?QUEUE:new(),
536 index_state = IndexState1 }),
537
538 {{RamBytes3, PCount3, PBytes3}, IndexState3} =
539 remove_queue_entries(Q1, Stats2, IndexState2, MSCState1),
540
541 {Len, a(State1 #vqstate { q1 = ?QUEUE:new(),
542 index_state = IndexState3,
543 len = 0,
544 bytes = 0,
545 ram_msg_count = 0,
546 ram_bytes = RamBytes3,
547 persistent_count = PCount3,
548 persistent_bytes = PBytes3 })}.
557 State1 = remove_queue_entries(Q4, State),
558
559 State2 = #vqstate { q1 = Q1 } =
560 purge_betas_and_deltas(State1 #vqstate { q4 = ?QUEUE:new() }),
561
562 State3 = remove_queue_entries(Q1, State2),
563
564 {Len, a(State3 #vqstate { q1 = ?QUEUE:new() })}.
549565
550566 purge_acks(State) -> a(purge_pending_ack(false, State)).
551567
552568 publish(Msg = #basic_message { is_persistent = IsPersistent, id = MsgId },
553569 MsgProps = #message_properties { needs_confirming = NeedsConfirming },
554 IsDelivered, _ChPid, State = #vqstate { q1 = Q1, q3 = Q3, q4 = Q4,
555 next_seq_id = SeqId,
556 len = Len,
557 in_counter = InCount,
558 persistent_count = PCount,
559 durable = IsDurable,
560 unconfirmed = UC }) ->
570 IsDelivered, _ChPid, _Flow,
571 State = #vqstate { q1 = Q1, q3 = Q3, q4 = Q4,
572 next_seq_id = SeqId,
573 in_counter = InCount,
574 durable = IsDurable,
575 unconfirmed = UC }) ->
561576 IsPersistent1 = IsDurable andalso IsPersistent,
562577 MsgStatus = msg_status(IsPersistent1, IsDelivered, SeqId, Msg, MsgProps),
563578 {MsgStatus1, State1} = maybe_write_to_disk(false, false, MsgStatus, State),
566581 true -> State1 #vqstate { q4 = ?QUEUE:in(m(MsgStatus1), Q4) }
567582 end,
568583 InCount1 = InCount + 1,
569 PCount1 = PCount + one_if(IsPersistent1),
570584 UC1 = gb_sets_maybe_insert(NeedsConfirming, MsgId, UC),
571 State3 = upd_bytes(
572 1, 0, MsgStatus1,
573 inc_ram_msg_count(State2 #vqstate { next_seq_id = SeqId + 1,
574 len = Len + 1,
575 in_counter = InCount1,
576 persistent_count = PCount1,
577 unconfirmed = UC1 })),
585 State3 = stats({1, 0}, {none, MsgStatus1},
586 State2#vqstate{ next_seq_id = SeqId + 1,
587 in_counter = InCount1,
588 unconfirmed = UC1 }),
578589 a(reduce_memory_use(maybe_update_rates(State3))).
579590
580591 publish_delivered(Msg = #basic_message { is_persistent = IsPersistent,
581592 id = MsgId },
582593 MsgProps = #message_properties {
583594 needs_confirming = NeedsConfirming },
584 _ChPid, State = #vqstate { next_seq_id = SeqId,
585 out_counter = OutCount,
586 in_counter = InCount,
587 persistent_count = PCount,
588 durable = IsDurable,
589 unconfirmed = UC }) ->
595 _ChPid, _Flow,
596 State = #vqstate { next_seq_id = SeqId,
597 out_counter = OutCount,
598 in_counter = InCount,
599 durable = IsDurable,
600 unconfirmed = UC }) ->
590601 IsPersistent1 = IsDurable andalso IsPersistent,
591602 MsgStatus = msg_status(IsPersistent1, true, SeqId, Msg, MsgProps),
592603 {MsgStatus1, State1} = maybe_write_to_disk(false, false, MsgStatus, State),
593604 State2 = record_pending_ack(m(MsgStatus1), State1),
594 PCount1 = PCount + one_if(IsPersistent1),
595605 UC1 = gb_sets_maybe_insert(NeedsConfirming, MsgId, UC),
596 State3 = upd_bytes(0, 1, MsgStatus,
597 State2 #vqstate { next_seq_id = SeqId + 1,
598 out_counter = OutCount + 1,
599 in_counter = InCount + 1,
600 persistent_count = PCount1,
601 unconfirmed = UC1 }),
606 State3 = stats({0, 1}, {none, MsgStatus1},
607 State2 #vqstate { next_seq_id = SeqId + 1,
608 out_counter = OutCount + 1,
609 in_counter = InCount + 1,
610 unconfirmed = UC1 }),
602611 {SeqId, a(reduce_memory_use(maybe_update_rates(State3)))}.
603612
604 discard(_MsgId, _ChPid, State) -> State.
613 discard(_MsgId, _ChPid, _Flow, State) -> State.
605614
606615 drain_confirmed(State = #vqstate { confirmed = C }) ->
607616 case gb_sets:is_empty(C) of
663672 ack([SeqId], State) ->
664673 {#msg_status { msg_id = MsgId,
665674 is_persistent = IsPersistent,
666 msg_on_disk = MsgOnDisk,
675 msg_in_store = MsgInStore,
667676 index_on_disk = IndexOnDisk },
668677 State1 = #vqstate { index_state = IndexState,
669678 msg_store_clients = MSCState,
673682 true -> rabbit_queue_index:ack([SeqId], IndexState);
674683 false -> IndexState
675684 end,
676 case MsgOnDisk of
685 case MsgInStore of
677686 true -> ok = msg_store_remove(MSCState, IsPersistent, [MsgId]);
678687 false -> ok
679688 end,
732741 {Its, IndexState1} = lists:foldl(fun inext/2, {[], IndexState},
733742 [msg_iterator(State),
734743 disk_ack_iterator(State),
735 ram_ack_iterator(State)]),
744 ram_ack_iterator(State),
745 qi_ack_iterator(State)]),
736746 ifold(Fun, Acc, Its, State#vqstate{index_state = IndexState1}).
737747
738748 len(#vqstate { len = Len }) -> Len.
739749
740750 is_empty(State) -> 0 == len(State).
741751
742 depth(State = #vqstate { ram_pending_ack = RPA, disk_pending_ack = DPA }) ->
743 len(State) + gb_trees:size(RPA) + gb_trees:size(DPA).
752 depth(State = #vqstate { ram_pending_ack = RPA,
753 disk_pending_ack = DPA,
754 qi_pending_ack = QPA }) ->
755 len(State) + gb_trees:size(RPA) + gb_trees:size(DPA) + gb_trees:size(QPA).
744756
745757 set_ram_duration_target(
746758 DurationTarget, State = #vqstate {
806818 ram_msg_count = RamMsgCount,
807819 ram_msg_count_prev = RamMsgCountPrev,
808820 ram_pending_ack = RPA,
821 qi_pending_ack = QPA,
809822 ram_ack_count_prev = RamAckCountPrev } =
810823 update_rates(State),
811824
812 RamAckCount = gb_trees:size(RPA),
825 RamAckCount = gb_trees:size(RPA) + gb_trees:size(QPA),
813826
814827 Duration = %% msgs+acks / (msgs+acks/sec) == sec
815828 case lists:all(fun (X) -> X < 0.01 end,
845858
846859 info(messages_ready_ram, #vqstate{ram_msg_count = RamMsgCount}) ->
847860 RamMsgCount;
848 info(messages_unacknowledged_ram, #vqstate{ram_pending_ack = RPA}) ->
849 gb_trees:size(RPA);
861 info(messages_unacknowledged_ram, #vqstate{ram_pending_ack = RPA,
862 qi_pending_ack = QPA}) ->
863 gb_trees:size(RPA) + gb_trees:size(QPA);
850864 info(messages_ram, State) ->
851865 info(messages_ready_ram, State) + info(messages_unacknowledged_ram, State);
852866 info(messages_persistent, #vqstate{persistent_count = PersistentCount}) ->
862876 RamBytes;
863877 info(message_bytes_persistent, #vqstate{persistent_bytes = PersistentBytes}) ->
864878 PersistentBytes;
879 info(disk_reads, #vqstate{disk_read_count = Count}) ->
880 Count;
881 info(disk_writes, #vqstate{disk_write_count = Count}) ->
882 Count;
865883 info(backing_queue_status, #vqstate {
866884 q1 = Q1, q2 = Q2, delta = Delta, q3 = Q3, q4 = Q4,
867885 len = Len,
932950 when Start + Count =< End ->
933951 Delta.
934952
935 m(MsgStatus = #msg_status { msg = Msg,
936 is_persistent = IsPersistent,
937 msg_on_disk = MsgOnDisk,
953 m(MsgStatus = #msg_status { is_persistent = IsPersistent,
954 msg_in_store = MsgInStore,
938955 index_on_disk = IndexOnDisk }) ->
939956 true = (not IsPersistent) or IndexOnDisk,
940 true = (not IndexOnDisk) or MsgOnDisk,
941 true = (Msg =/= undefined) or MsgOnDisk,
942
957 true = msg_in_ram(MsgStatus) or MsgInStore,
943958 MsgStatus.
944959
945960 one_if(true ) -> 1;
958973 msg = Msg,
959974 is_persistent = IsPersistent,
960975 is_delivered = IsDelivered,
961 msg_on_disk = false,
976 msg_in_store = false,
962977 index_on_disk = false,
978 persist_to = determine_persist_to(Msg, MsgProps),
963979 msg_props = MsgProps}.
964980
981 beta_msg_status({Msg = #basic_message{id = MsgId},
982 SeqId, MsgProps, IsPersistent, IsDelivered}) ->
983 MS0 = beta_msg_status0(SeqId, MsgProps, IsPersistent, IsDelivered),
984 MS0#msg_status{msg_id = MsgId,
985 msg = Msg,
986 persist_to = queue_index,
987 msg_in_store = false};
988
965989 beta_msg_status({MsgId, SeqId, MsgProps, IsPersistent, IsDelivered}) ->
990 MS0 = beta_msg_status0(SeqId, MsgProps, IsPersistent, IsDelivered),
991 MS0#msg_status{msg_id = MsgId,
992 msg = undefined,
993 persist_to = msg_store,
994 msg_in_store = true}.
995
996 beta_msg_status0(SeqId, MsgProps, IsPersistent, IsDelivered) ->
966997 #msg_status{seq_id = SeqId,
967 msg_id = MsgId,
968998 msg = undefined,
969999 is_persistent = IsPersistent,
9701000 is_delivered = IsDelivered,
971 msg_on_disk = true,
9721001 index_on_disk = true,
9731002 msg_props = MsgProps}.
9741003
975 trim_msg_status(MsgStatus) -> MsgStatus #msg_status { msg = undefined }.
1004 trim_msg_status(MsgStatus) ->
1005 case persist_to(MsgStatus) of
1006 msg_store -> MsgStatus#msg_status{msg = undefined};
1007 queue_index -> MsgStatus
1008 end.
9761009
9771010 with_msg_store_state({MSCStateP, MSCStateT}, true, Fun) ->
9781011 {Result, MSCStateP1} = Fun(MSCStateP),
10341067 maybe_write_delivered(true, SeqId, IndexState) ->
10351068 rabbit_queue_index:deliver([SeqId], IndexState).
10361069
1037 betas_from_index_entries(List, TransientThreshold, RPA, DPA, IndexState) ->
1038 {Filtered, Delivers, Acks} =
1070 betas_from_index_entries(List, TransientThreshold, RPA, DPA, QPA, IndexState) ->
1071 {Filtered, Delivers, Acks, RamReadyCount, RamBytes} =
10391072 lists:foldr(
1040 fun ({_MsgId, SeqId, _MsgProps, IsPersistent, IsDelivered} = M,
1041 {Filtered1, Delivers1, Acks1} = Acc) ->
1073 fun ({_MsgOrId, SeqId, _MsgProps, IsPersistent, IsDelivered} = M,
1074 {Filtered1, Delivers1, Acks1, RRC, RB} = Acc) ->
10421075 case SeqId < TransientThreshold andalso not IsPersistent of
10431076 true -> {Filtered1,
10441077 cons_if(not IsDelivered, SeqId, Delivers1),
1045 [SeqId | Acks1]};
1046 false -> case (gb_trees:is_defined(SeqId, RPA) orelse
1047 gb_trees:is_defined(SeqId, DPA)) of
1048 false -> {?QUEUE:in_r(m(beta_msg_status(M)),
1049 Filtered1),
1050 Delivers1, Acks1};
1051 true -> Acc
1052 end
1078 [SeqId | Acks1], RRC, RB};
1079 false -> MsgStatus = m(beta_msg_status(M)),
1080 HaveMsg = msg_in_ram(MsgStatus),
1081 Size = msg_size(MsgStatus),
1082 case (gb_trees:is_defined(SeqId, RPA) orelse
1083 gb_trees:is_defined(SeqId, DPA) orelse
1084 gb_trees:is_defined(SeqId, QPA)) of
1085 false -> {?QUEUE:in_r(MsgStatus, Filtered1),
1086 Delivers1, Acks1,
1087 RRC + one_if(HaveMsg),
1088 RB + one_if(HaveMsg) * Size};
1089 true -> Acc %% [0]
1090 end
10531091 end
1054 end, {?QUEUE:new(), [], []}, List),
1055 {Filtered, rabbit_queue_index:ack(
1056 Acks, rabbit_queue_index:deliver(Delivers, IndexState))}.
1092 end, {?QUEUE:new(), [], [], 0, 0}, List),
1093 {Filtered, RamReadyCount, RamBytes,
1094 rabbit_queue_index:ack(
1095 Acks, rabbit_queue_index:deliver(Delivers, IndexState))}.
1096 %% [0] We don't increase RamBytes here, even though it pertains to
1097 %% unacked messages too, since if HaveMsg then the message must have
1098 %% been stored in the QI, thus the message must have been in
1099 %% qi_pending_ack, thus it must already have been in RAM.
10571100
10581101 expand_delta(SeqId, ?BLANK_DELTA_PATTERN(X)) ->
10591102 d(#delta { start_seq_id = SeqId, count = 1, end_seq_id = SeqId + 1 });
11001143 next_seq_id = NextSeqId,
11011144 ram_pending_ack = gb_trees:empty(),
11021145 disk_pending_ack = gb_trees:empty(),
1146 qi_pending_ack = gb_trees:empty(),
11031147 index_state = IndexState1,
11041148 msg_store_clients = {PersistentClient, TransientClient},
11051149 durable = IsDurable,
11241168 unconfirmed = gb_sets:new(),
11251169 confirmed = gb_sets:new(),
11261170 ack_out_counter = 0,
1127 ack_in_counter = 0 },
1171 ack_in_counter = 0,
1172 disk_read_count = 0,
1173 disk_write_count = 0 },
11281174 a(maybe_deltas_to_betas(State)).
11291175
11301176 blank_rates(Now) ->
11401186 true -> State #vqstate { q3 = ?QUEUE:in_r(MsgStatus, Q3) };
11411187 false -> {Msg, State1 = #vqstate { q4 = Q4a }} =
11421188 read_msg(MsgStatus, State),
1143 upd_ram_bytes(
1144 1, MsgStatus,
1145 inc_ram_msg_count(
1146 State1 #vqstate { q4 = ?QUEUE:in_r(MsgStatus#msg_status {
1147 msg = Msg }, Q4a) }))
1189 MsgStatus1 = MsgStatus#msg_status{msg = Msg},
1190 stats(ready0, {MsgStatus, MsgStatus1},
1191 State1 #vqstate { q4 = ?QUEUE:in_r(MsgStatus1, Q4a) })
11481192 end;
11491193 in_r(MsgStatus, State = #vqstate { q4 = Q4 }) ->
11501194 State #vqstate { q4 = ?QUEUE:in_r(MsgStatus, Q4) }.
11671211 read_msg(#msg_status{msg = Msg}, State) ->
11681212 {Msg, State}.
11691213
1170 read_msg(MsgId, IsPersistent, State = #vqstate{msg_store_clients = MSCState}) ->
1214 read_msg(MsgId, IsPersistent, State = #vqstate{msg_store_clients = MSCState,
1215 disk_read_count = Count}) ->
11711216 {{ok, Msg = #basic_message {}}, MSCState1} =
11721217 msg_store_read(MSCState, IsPersistent, MsgId),
1173 {Msg, State #vqstate {msg_store_clients = MSCState1}}.
1174
1175 inc_ram_msg_count(State = #vqstate{ram_msg_count = RamMsgCount}) ->
1176 State#vqstate{ram_msg_count = RamMsgCount + 1}.
1177
1178 upd_bytes(SignReady, SignUnacked,
1179 MsgStatus = #msg_status{msg = undefined}, State) ->
1180 upd_bytes0(SignReady, SignUnacked, MsgStatus, State);
1181 upd_bytes(SignReady, SignUnacked, MsgStatus = #msg_status{msg = _}, State) ->
1182 upd_ram_bytes(SignReady + SignUnacked, MsgStatus,
1183 upd_bytes0(SignReady, SignUnacked, MsgStatus, State)).
1184
1185 upd_bytes0(SignReady, SignUnacked, MsgStatus = #msg_status{is_persistent = IsP},
1186 State = #vqstate{bytes = Bytes,
1187 unacked_bytes = UBytes,
1188 persistent_bytes = PBytes}) ->
1218 {Msg, State #vqstate {msg_store_clients = MSCState1,
1219 disk_read_count = Count + 1}}.
1220
1221 stats(Signs, Statuses, State) ->
1222 stats0(expand_signs(Signs), expand_statuses(Statuses), State).
1223
1224 expand_signs(ready0) -> {0, 0, true};
1225 expand_signs({A, B}) -> {A, B, false}.
1226
1227 expand_statuses({none, A}) -> {false, msg_in_ram(A), A};
1228 expand_statuses({B, none}) -> {msg_in_ram(B), false, B};
1229 expand_statuses({B, A}) -> {msg_in_ram(B), msg_in_ram(A), B}.
1230
1231 %% In this function at least, we are religious: the variable name
1232 %% contains "Ready" or "Unacked" iff that is what it counts. If
1233 %% neither is present it counts both.
1234 stats0({DeltaReady, DeltaUnacked, ReadyMsgPaged},
1235 {InRamBefore, InRamAfter, MsgStatus},
1236 State = #vqstate{len = ReadyCount,
1237 bytes = ReadyBytes,
1238 ram_msg_count = RamReadyCount,
1239 persistent_count = PersistentCount,
1240 unacked_bytes = UnackedBytes,
1241 ram_bytes = RamBytes,
1242 persistent_bytes = PersistentBytes}) ->
11891243 S = msg_size(MsgStatus),
1190 SignTotal = SignReady + SignUnacked,
1191 State#vqstate{bytes = Bytes + SignReady * S,
1192 unacked_bytes = UBytes + SignUnacked * S,
1193 persistent_bytes = PBytes + one_if(IsP) * S * SignTotal}.
1194
1195 upd_ram_bytes(Sign, MsgStatus, State = #vqstate{ram_bytes = RamBytes}) ->
1196 State#vqstate{ram_bytes = RamBytes + Sign * msg_size(MsgStatus)}.
1244 DeltaTotal = DeltaReady + DeltaUnacked,
1245 DeltaRam = case {InRamBefore, InRamAfter} of
1246 {false, false} -> 0;
1247 {false, true} -> 1;
1248 {true, false} -> -1;
1249 {true, true} -> 0
1250 end,
1251 DeltaRamReady = case DeltaReady of
1252 1 -> one_if(InRamAfter);
1253 -1 -> -one_if(InRamBefore);
1254 0 when ReadyMsgPaged -> DeltaRam;
1255 0 -> 0
1256 end,
1257 DeltaPersistent = DeltaTotal * one_if(MsgStatus#msg_status.is_persistent),
1258 State#vqstate{len = ReadyCount + DeltaReady,
1259 ram_msg_count = RamReadyCount + DeltaRamReady,
1260 persistent_count = PersistentCount + DeltaPersistent,
1261 bytes = ReadyBytes + DeltaReady * S,
1262 unacked_bytes = UnackedBytes + DeltaUnacked * S,
1263 ram_bytes = RamBytes + DeltaRam * S,
1264 persistent_bytes = PersistentBytes + DeltaPersistent * S}.
11971265
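The RAM bookkeeping in `stats0/3` above boils down to a small truth table over the before/after residency of the message. A Python sketch of just the `DeltaRam`/`DeltaRamReady` computation (argument names invented for illustration, mirroring the Erlang case expressions):

```python
def ram_deltas(in_ram_before, in_ram_after, delta_ready, ready_msg_paged):
    """Compute the adjustments applied to ram_bytes (delta_ram) and
    ram_msg_count (delta_ram_ready), as in stats0/3."""
    # A message entering RAM adds one; one leaving RAM removes one.
    delta_ram = {(False, False): 0, (False, True): 1,
                 (True, False): -1, (True, True): 0}[
                     (in_ram_before, in_ram_after)]
    # The ready-message RAM count only moves when the message counts as
    # ready, or when a ready message is paged in/out in place.
    if delta_ready == 1:
        delta_ram_ready = 1 if in_ram_after else 0
    elif delta_ready == -1:
        delta_ram_ready = -1 if in_ram_before else 0
    else:  # delta_ready == 0
        delta_ram_ready = delta_ram if ready_msg_paged else 0
    return delta_ram, delta_ram_ready
```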
11981266 msg_size(#msg_status{msg_props = #message_properties{size = Size}}) -> Size.
11991267
12021270 remove(AckRequired, MsgStatus = #msg_status {
12031271 seq_id = SeqId,
12041272 msg_id = MsgId,
1205 msg = Msg,
12061273 is_persistent = IsPersistent,
12071274 is_delivered = IsDelivered,
1208 msg_on_disk = MsgOnDisk,
1275 msg_in_store = MsgInStore,
12091276 index_on_disk = IndexOnDisk },
1210 State = #vqstate {ram_msg_count = RamMsgCount,
1211 out_counter = OutCount,
1277 State = #vqstate {out_counter = OutCount,
12121278 index_state = IndexState,
1213 msg_store_clients = MSCState,
1214 len = Len,
1215 persistent_count = PCount}) ->
1279 msg_store_clients = MSCState}) ->
12161280 %% 1. Mark it delivered if necessary
12171281 IndexState1 = maybe_write_delivered(
12181282 IndexOnDisk andalso not IsDelivered,
12231287 ok = msg_store_remove(MSCState, IsPersistent, [MsgId])
12241288 end,
12251289 Ack = fun () -> rabbit_queue_index:ack([SeqId], IndexState1) end,
1226 IndexState2 = case {AckRequired, MsgOnDisk, IndexOnDisk} of
1227 {false, true, false} -> Rem(), IndexState1;
1228 {false, true, true} -> Rem(), Ack();
1229 _ -> IndexState1
1290 IndexState2 = case {AckRequired, MsgInStore, IndexOnDisk} of
1291 {false, true, false} -> Rem(), IndexState1;
1292 {false, true, true} -> Rem(), Ack();
1293 {false, false, true} -> Ack();
1294 _ -> IndexState1
12301295 end,
12311296
12321297 %% 3. If an ack is required, add something sensible to PA
12371302 {SeqId, StateN};
12381303 false -> {undefined, State}
12391304 end,
1240
1241 PCount1 = PCount - one_if(IsPersistent andalso not AckRequired),
1242 RamMsgCount1 = RamMsgCount - one_if(Msg =/= undefined),
12431305 State2 = case AckRequired of
1244 false -> upd_bytes(-1, 0, MsgStatus, State1);
1245 true -> upd_bytes(-1, 1, MsgStatus, State1)
1306 false -> stats({-1, 0}, {MsgStatus, none}, State1);
1307 true -> stats({-1, 1}, {MsgStatus, MsgStatus}, State1)
12461308 end,
12471309 {AckTag, maybe_update_rates(
1248 State2 #vqstate {ram_msg_count = RamMsgCount1,
1249 out_counter = OutCount + 1,
1250 index_state = IndexState2,
1251 len = Len - 1,
1252 persistent_count = PCount1})}.
1253
1254 purge_betas_and_deltas(Stats,
1255 State = #vqstate { q3 = Q3,
1256 index_state = IndexState,
1257 msg_store_clients = MSCState }) ->
1310 State2 #vqstate {out_counter = OutCount + 1,
1311 index_state = IndexState2})}.
1312
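The `{AckRequired, MsgInStore, IndexOnDisk}` case in `remove` above decides which cleanup runs when no ack is required; a minimal Python sketch of that decision table (names hypothetical, returning action labels rather than performing the I/O):

```python
def index_actions(ack_required, msg_in_store, index_on_disk):
    """Which cleanup steps fire on remove, per the
    {AckRequired, MsgInStore, IndexOnDisk} case: nothing happens when an
    ack is required, since the pending-ack machinery takes over."""
    if ack_required:
        return []
    actions = []
    if msg_in_store:
        actions.append("remove_from_store")   # Rem()
    if index_on_disk:
        actions.append("ack_index")           # Ack()
    return actions
```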
1313 purge_betas_and_deltas(State = #vqstate { q3 = Q3 }) ->
12581314 case ?QUEUE:is_empty(Q3) of
1259 true -> {Stats, State};
1260 false -> {Stats1, IndexState1} = remove_queue_entries(
1261 Q3, Stats, IndexState, MSCState),
1262 purge_betas_and_deltas(Stats1,
1263 maybe_deltas_to_betas(
1264 State #vqstate {
1265 q3 = ?QUEUE:new(),
1266 index_state = IndexState1 }))
1267 end.
1268
1269 remove_queue_entries(Q, {RamBytes, PCount, PBytes},
1270 IndexState, MSCState) ->
1271 {MsgIdsByStore, RamBytes1, PBytes1, Delivers, Acks} =
1315 true -> State;
1316 false -> State1 = remove_queue_entries(Q3, State),
1317 purge_betas_and_deltas(maybe_deltas_to_betas(
1318 State1#vqstate{q3 = ?QUEUE:new()}))
1319 end.
1320
1321 remove_queue_entries(Q, State = #vqstate{index_state = IndexState,
1322 msg_store_clients = MSCState}) ->
1323 {MsgIdsByStore, Delivers, Acks, State1} =
12721324 ?QUEUE:foldl(fun remove_queue_entries1/2,
1273 {orddict:new(), RamBytes, PBytes, [], []}, Q),
1325 {orddict:new(), [], [], State}, Q),
12741326 ok = orddict:fold(fun (IsPersistent, MsgIds, ok) ->
12751327 msg_store_remove(MSCState, IsPersistent, MsgIds)
12761328 end, ok, MsgIdsByStore),
1277 {{RamBytes1,
1278 PCount - case orddict:find(true, MsgIdsByStore) of
1279 error -> 0;
1280 {ok, Ids} -> length(Ids)
1281 end,
1282 PBytes1},
1283 rabbit_queue_index:ack(Acks,
1284 rabbit_queue_index:deliver(Delivers, IndexState))}.
1329 IndexState1 = rabbit_queue_index:ack(
1330 Acks, rabbit_queue_index:deliver(Delivers, IndexState)),
1331 State1#vqstate{index_state = IndexState1}.
12851332
12861333 remove_queue_entries1(
1287 #msg_status { msg_id = MsgId, seq_id = SeqId, msg = Msg,
1288 is_delivered = IsDelivered, msg_on_disk = MsgOnDisk,
1289 index_on_disk = IndexOnDisk, is_persistent = IsPersistent,
1290 msg_props = #message_properties { size = Size } },
1291 {MsgIdsByStore, RamBytes, PBytes, Delivers, Acks}) ->
1292 {case MsgOnDisk of
1334 #msg_status { msg_id = MsgId, seq_id = SeqId, is_delivered = IsDelivered,
1335 msg_in_store = MsgInStore, index_on_disk = IndexOnDisk,
1336 is_persistent = IsPersistent} = MsgStatus,
1337 {MsgIdsByStore, Delivers, Acks, State}) ->
1338 {case MsgInStore of
12931339 true -> rabbit_misc:orddict_cons(IsPersistent, MsgId, MsgIdsByStore);
12941340 false -> MsgIdsByStore
12951341 end,
1296 RamBytes - Size * one_if(Msg =/= undefined),
1297 PBytes - Size * one_if(IsPersistent),
12981342 cons_if(IndexOnDisk andalso not IsDelivered, SeqId, Delivers),
1299 cons_if(IndexOnDisk, SeqId, Acks)}.
1343 cons_if(IndexOnDisk, SeqId, Acks),
1344 stats({-1, 0}, {MsgStatus, none}, State)}.
13001345
13011346 %%----------------------------------------------------------------------------
13021347 %% Internal gubbins for publishing
13031348 %%----------------------------------------------------------------------------
13041349
13051350 maybe_write_msg_to_disk(_Force, MsgStatus = #msg_status {
1306 msg_on_disk = true }, _MSCState) ->
1307 MsgStatus;
1351 msg_in_store = true }, State) ->
1352 {MsgStatus, State};
13081353 maybe_write_msg_to_disk(Force, MsgStatus = #msg_status {
13091354 msg = Msg, msg_id = MsgId,
1310 is_persistent = IsPersistent }, MSCState)
1355 is_persistent = IsPersistent },
1356 State = #vqstate{ msg_store_clients = MSCState,
1357 disk_write_count = Count})
13111358 when Force orelse IsPersistent ->
1312 Msg1 = Msg #basic_message {
1313 %% don't persist any recoverable decoded properties
1314 content = rabbit_binary_parser:clear_decoded_content(
1315 Msg #basic_message.content)},
1316 ok = msg_store_write(MSCState, IsPersistent, MsgId, Msg1),
1317 MsgStatus #msg_status { msg_on_disk = true };
1318 maybe_write_msg_to_disk(_Force, MsgStatus, _MSCState) ->
1319 MsgStatus.
1359 case persist_to(MsgStatus) of
1360 msg_store -> ok = msg_store_write(MSCState, IsPersistent, MsgId,
1361 prepare_to_store(Msg)),
1362 {MsgStatus#msg_status{msg_in_store = true},
1363 State#vqstate{disk_write_count = Count + 1}};
1364 queue_index -> {MsgStatus, State}
1365 end;
1366 maybe_write_msg_to_disk(_Force, MsgStatus, State) ->
1367 {MsgStatus, State}.
13201368
13211369 maybe_write_index_to_disk(_Force, MsgStatus = #msg_status {
1322 index_on_disk = true }, IndexState) ->
1323 true = MsgStatus #msg_status.msg_on_disk, %% ASSERTION
1324 {MsgStatus, IndexState};
1370 index_on_disk = true }, State) ->
1371 {MsgStatus, State};
13251372 maybe_write_index_to_disk(Force, MsgStatus = #msg_status {
1373 msg = Msg,
13261374 msg_id = MsgId,
13271375 seq_id = SeqId,
13281376 is_persistent = IsPersistent,
13291377 is_delivered = IsDelivered,
1330 msg_props = MsgProps}, IndexState)
1378 msg_props = MsgProps},
1379 State = #vqstate{target_ram_count = TargetRamCount,
1380 disk_write_count = DiskWriteCount,
1381 index_state = IndexState})
13311382 when Force orelse IsPersistent ->
1332 true = MsgStatus #msg_status.msg_on_disk, %% ASSERTION
1383 {MsgOrId, DiskWriteCount1} =
1384 case persist_to(MsgStatus) of
1385 msg_store -> {MsgId, DiskWriteCount};
1386 queue_index -> {prepare_to_store(Msg), DiskWriteCount + 1}
1387 end,
13331388 IndexState1 = rabbit_queue_index:publish(
1334 MsgId, SeqId, MsgProps, IsPersistent, IndexState),
1335 {MsgStatus #msg_status { index_on_disk = true },
1336 maybe_write_delivered(IsDelivered, SeqId, IndexState1)};
1337 maybe_write_index_to_disk(_Force, MsgStatus, IndexState) ->
1338 {MsgStatus, IndexState}.
1339
1340 maybe_write_to_disk(ForceMsg, ForceIndex, MsgStatus,
1341 State = #vqstate { index_state = IndexState,
1342 msg_store_clients = MSCState }) ->
1343 MsgStatus1 = maybe_write_msg_to_disk(ForceMsg, MsgStatus, MSCState),
1344 {MsgStatus2, IndexState1} =
1345 maybe_write_index_to_disk(ForceIndex, MsgStatus1, IndexState),
1346 {MsgStatus2, State #vqstate { index_state = IndexState1 }}.
1389 MsgOrId, SeqId, MsgProps, IsPersistent, TargetRamCount,
1390 IndexState),
1391 IndexState2 = maybe_write_delivered(IsDelivered, SeqId, IndexState1),
1392 {MsgStatus#msg_status{index_on_disk = true},
1393 State#vqstate{index_state = IndexState2,
1394 disk_write_count = DiskWriteCount1}};
1395
1396 maybe_write_index_to_disk(_Force, MsgStatus, State) ->
1397 {MsgStatus, State}.
1398
1399 maybe_write_to_disk(ForceMsg, ForceIndex, MsgStatus, State) ->
1400 {MsgStatus1, State1} = maybe_write_msg_to_disk(ForceMsg, MsgStatus, State),
1401 maybe_write_index_to_disk(ForceIndex, MsgStatus1, State1).
1402
1403 determine_persist_to(#basic_message{
1404 content = #content{properties = Props,
1405 properties_bin = PropsBin}},
1406 #message_properties{size = BodySize}) ->
1407 {ok, IndexMaxSize} = application:get_env(
1408 rabbit, queue_index_embed_msgs_below),
1409 %% The >= is so that you can set the env to 0 and never persist
1410 %% to the index.
1411 %%
1412 %% We want this to be fast, so we avoid size(term_to_binary())
1413 %% here, or using the term size estimation from truncate.erl, both
1414 %% of which are too slow. So instead, if the message body size
1415 %% goes over the limit then we avoid any other checks.
1416 %%
1417 %% If it doesn't we need to decide if the properties will push
1418 %% it past the limit. If we have the encoded properties (usual
1419 %% case) we can just check their size. If we don't (message came
1420 %% via the direct client), we make a guess based on the number of
1421 %% headers.
1422 case BodySize >= IndexMaxSize of
1423 true -> msg_store;
1424 false -> Est = case is_binary(PropsBin) of
1425 true -> BodySize + size(PropsBin);
1426 false -> #'P_basic'{headers = Hs} = Props,
1427 case Hs of
1428 undefined -> 0;
1429 _ -> length(Hs)
1430 end * ?HEADER_GUESS_SIZE + BodySize
1431 end,
1432 case Est >= IndexMaxSize of
1433 true -> msg_store;
1434 false -> queue_index
1435 end
1436 end.
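The size heuristic in `determine_persist_to` above can be sketched outside Erlang. A Python rendering of the same decision (the 100-byte-per-header guess mirrors `?HEADER_GUESS_SIZE`, and `index_max_size` stands in for the `queue_index_embed_msgs_below` setting; an illustration, not the shipped code):

```python
HEADER_GUESS_SIZE = 100  # mirrors ?HEADER_GUESS_SIZE in the Erlang source

def determine_persist_to(body_size, index_max_size,
                         props_bin=None, headers=None):
    """Decide whether a message goes to the message store or is embedded
    in the queue index, following the heuristic above."""
    # Fast path: the body alone already reaches the limit (>= so that a
    # limit of 0 never embeds anything in the index).
    if body_size >= index_max_size:
        return "msg_store"
    # Otherwise estimate the total: use the encoded properties when
    # available, else guess ~100 bytes per header.
    if props_bin is not None:
        est = body_size + len(props_bin)
    else:
        est = body_size + len(headers or []) * HEADER_GUESS_SIZE
    return "msg_store" if est >= index_max_size else "queue_index"
```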
1437
1438 persist_to(#msg_status{persist_to = To}) -> To.
1439
1440 prepare_to_store(Msg) ->
1441 Msg#basic_message{
1442 %% don't persist any recoverable decoded properties
1443 content = rabbit_binary_parser:clear_decoded_content(
1444 Msg #basic_message.content)}.
13471445
13481446 %%----------------------------------------------------------------------------
13491447 %% Internal gubbins for acks
13501448 %%----------------------------------------------------------------------------
13511449
1352 record_pending_ack(#msg_status { seq_id = SeqId, msg = Msg } = MsgStatus,
1450 record_pending_ack(#msg_status { seq_id = SeqId } = MsgStatus,
13531451 State = #vqstate { ram_pending_ack = RPA,
13541452 disk_pending_ack = DPA,
1453 qi_pending_ack = QPA,
13551454 ack_in_counter = AckInCount}) ->
1356 {RPA1, DPA1} =
1357 case Msg of
1358 undefined -> {RPA, gb_trees:insert(SeqId, MsgStatus, DPA)};
1359 _ -> {gb_trees:insert(SeqId, MsgStatus, RPA), DPA}
1455 Insert = fun (Tree) -> gb_trees:insert(SeqId, MsgStatus, Tree) end,
1456 {RPA1, DPA1, QPA1} =
1457 case {msg_in_ram(MsgStatus), persist_to(MsgStatus)} of
1458 {false, _} -> {RPA, Insert(DPA), QPA};
1459 {_, queue_index} -> {RPA, DPA, Insert(QPA)};
1460 {_, msg_store} -> {Insert(RPA), DPA, QPA}
13601461 end,
13611462 State #vqstate { ram_pending_ack = RPA1,
13621463 disk_pending_ack = DPA1,
1464 qi_pending_ack = QPA1,
13631465 ack_in_counter = AckInCount + 1}.
13641466
13651467 lookup_pending_ack(SeqId, #vqstate { ram_pending_ack = RPA,
1366 disk_pending_ack = DPA }) ->
1468 disk_pending_ack = DPA,
1469 qi_pending_ack = QPA}) ->
13671470 case gb_trees:lookup(SeqId, RPA) of
13681471 {value, V} -> V;
1369 none -> gb_trees:get(SeqId, DPA)
1370 end.
1371
1372 %% First parameter = UpdatePersistentCount
1472 none -> case gb_trees:lookup(SeqId, DPA) of
1473 {value, V} -> V;
1474 none -> gb_trees:get(SeqId, QPA)
1475 end
1476 end.
1477
1478 %% First parameter = UpdateStats
13731479 remove_pending_ack(true, SeqId, State) ->
1374 {MsgStatus, State1 = #vqstate { persistent_count = PCount }} =
1375 remove_pending_ack(false, SeqId, State),
1376 PCount1 = PCount - one_if(MsgStatus#msg_status.is_persistent),
1377 {MsgStatus, upd_bytes(0, -1, MsgStatus,
1378 State1 # vqstate{ persistent_count = PCount1 })};
1379 remove_pending_ack(false, SeqId, State = #vqstate { ram_pending_ack = RPA,
1380 disk_pending_ack = DPA }) ->
1480 {MsgStatus, State1} = remove_pending_ack(false, SeqId, State),
1481 {MsgStatus, stats({0, -1}, {MsgStatus, none}, State1)};
1482 remove_pending_ack(false, SeqId, State = #vqstate{ram_pending_ack = RPA,
1483 disk_pending_ack = DPA,
1484 qi_pending_ack = QPA}) ->
13811485 case gb_trees:lookup(SeqId, RPA) of
13821486 {value, V} -> RPA1 = gb_trees:delete(SeqId, RPA),
13831487 {V, State #vqstate { ram_pending_ack = RPA1 }};
1384 none -> DPA1 = gb_trees:delete(SeqId, DPA),
1385 {gb_trees:get(SeqId, DPA),
1386 State #vqstate { disk_pending_ack = DPA1 }}
1488 none -> case gb_trees:lookup(SeqId, DPA) of
1489 {value, V} ->
1490 DPA1 = gb_trees:delete(SeqId, DPA),
1491 {V, State#vqstate{disk_pending_ack = DPA1}};
1492 none ->
1493 QPA1 = gb_trees:delete(SeqId, QPA),
1494 {gb_trees:get(SeqId, QPA),
1495 State#vqstate{qi_pending_ack = QPA1}}
1496 end
13871497 end.
13881498
13891499 purge_pending_ack(KeepPersistent,
13901500 State = #vqstate { ram_pending_ack = RPA,
13911501 disk_pending_ack = DPA,
1502 qi_pending_ack = QPA,
13921503 index_state = IndexState,
13931504 msg_store_clients = MSCState }) ->
13941505 F = fun (_SeqId, MsgStatus, Acc) -> accumulate_ack(MsgStatus, Acc) end,
13951506 {IndexOnDiskSeqIds, MsgIdsByStore, _AllMsgIds} =
13961507 rabbit_misc:gb_trees_fold(
1397 F, rabbit_misc:gb_trees_fold(F, accumulate_ack_init(), RPA), DPA),
1508 F, rabbit_misc:gb_trees_fold(
1509 F, rabbit_misc:gb_trees_fold(
1510 F, accumulate_ack_init(), RPA), DPA), QPA),
13981511 State1 = State #vqstate { ram_pending_ack = gb_trees:empty(),
1399 disk_pending_ack = gb_trees:empty() },
1512 disk_pending_ack = gb_trees:empty(),
1513 qi_pending_ack = gb_trees:empty()},
14001514
14011515 case KeepPersistent of
14021516 true -> case orddict:find(false, MsgIdsByStore) of
14171531 accumulate_ack(#msg_status { seq_id = SeqId,
14181532 msg_id = MsgId,
14191533 is_persistent = IsPersistent,
1420 msg_on_disk = MsgOnDisk,
1534 msg_in_store = MsgInStore,
14211535 index_on_disk = IndexOnDisk },
14221536 {IndexOnDiskSeqIdsAcc, MsgIdsByStore, AllMsgIds}) ->
14231537 {cons_if(IndexOnDisk, SeqId, IndexOnDiskSeqIdsAcc),
1424 case MsgOnDisk of
1538 case MsgInStore of
14251539 true -> rabbit_misc:orddict_cons(IsPersistent, MsgId, MsgIdsByStore);
14261540 false -> MsgIdsByStore
14271541 end,
14681582 gb_sets:union(MIOD, Confirmed) })
14691583 end).
14701584
1585 msgs_and_indices_written_to_disk(Callback, MsgIdSet) ->
1586 Callback(?MODULE,
1587 fun (?MODULE, State) -> record_confirms(MsgIdSet, State) end).
1588
14711589 %%----------------------------------------------------------------------------
14721590 %% Internal plumbing for requeue
14731591 %%----------------------------------------------------------------------------
14741592
14751593 publish_alpha(#msg_status { msg = undefined } = MsgStatus, State) ->
14761594 {Msg, State1} = read_msg(MsgStatus, State),
1477 {MsgStatus#msg_status { msg = Msg },
1478 upd_ram_bytes(1, MsgStatus, inc_ram_msg_count(State1))}; %% [1]
1595 MsgStatus1 = MsgStatus#msg_status { msg = Msg },
1596 {MsgStatus1, stats({1, -1}, {MsgStatus, MsgStatus1}, State1)};
14791597 publish_alpha(MsgStatus, State) ->
1480 {MsgStatus, inc_ram_msg_count(State)}.
1481 %% [1] We increase the ram_bytes here because we paged the message in
1482 %% to requeue it, not purely because we requeued it. Hence in the
1483 %% second head it's already accounted for as already in memory. OTOH
1484 %% ram_msg_count does not include unacked messages, so it needs
1485 %% incrementing in both heads.
1598 {MsgStatus, stats({1, -1}, {MsgStatus, MsgStatus}, State)}.
14861599
14871600 publish_beta(MsgStatus, State) ->
14881601 {MsgStatus1, State1} = maybe_write_to_disk(true, false, MsgStatus, State),
14891602 MsgStatus2 = m(trim_msg_status(MsgStatus1)),
1490 case msg_in_ram(MsgStatus1) andalso not msg_in_ram(MsgStatus2) of
1491 true -> {MsgStatus2, upd_ram_bytes(-1, MsgStatus, State1)};
1492 _ -> {MsgStatus2, State1}
1493 end.
1603 {MsgStatus2, stats({1, -1}, {MsgStatus, MsgStatus2}, State1)}.
14941604
14951605 %% Rebuild queue, inserting sequence ids to maintain ordering
14961606 queue_merge(SeqIds, Q, MsgIds, Limit, PubFun, State) ->
15121622 {#msg_status { msg_id = MsgId } = MsgStatus1, State2} =
15131623 PubFun(MsgStatus, State1),
15141624 queue_merge(Rest, Q, ?QUEUE:in(MsgStatus1, Front), [MsgId | MsgIds],
1515 Limit, PubFun, upd_bytes(1, -1, MsgStatus, State2))
1625 Limit, PubFun, State2)
15161626 end;
15171627 queue_merge(SeqIds, Q, Front, MsgIds,
15181628 _Limit, _PubFun, State) ->
15261636 msg_from_pending_ack(SeqId, State0),
15271637 {_MsgStatus, State2} =
15281638 maybe_write_to_disk(true, true, MsgStatus, State1),
1529 State3 =
1530 case msg_in_ram(MsgStatus) of
1531 false -> State2;
1532 true -> upd_ram_bytes(-1, MsgStatus, State2)
1533 end,
15341639 {expand_delta(SeqId, Delta0), [MsgId | MsgIds0],
1535 upd_bytes(1, -1, MsgStatus, State3)}
1640 stats({1, -1}, {MsgStatus, none}, State2)}
15361641 end, {Delta, MsgIds, State}, SeqIds).
15371642
15381643 %% Mostly opposite of record_pending_ack/2
15611666
15621667 disk_ack_iterator(State) ->
15631668 {ack, gb_trees:iterator(State#vqstate.disk_pending_ack)}.
1669
1670 qi_ack_iterator(State) ->
1671 {ack, gb_trees:iterator(State#vqstate.qi_pending_ack)}.
15641672
15651673 msg_iterator(State) -> istate(start, State).
15661674
15911699 next({delta, Delta, State}, IndexState);
15921700 next({delta, Delta, [{_, SeqId, _, _, _} = M | Rest], State}, IndexState) ->
15931701 case (gb_trees:is_defined(SeqId, State#vqstate.ram_pending_ack) orelse
1594 gb_trees:is_defined(SeqId, State#vqstate.disk_pending_ack)) of
1702 gb_trees:is_defined(SeqId, State#vqstate.disk_pending_ack) orelse
1703 gb_trees:is_defined(SeqId, State#vqstate.qi_pending_ack)) of
15951704 false -> Next = {delta, Delta, Rest, State},
15961705 {value, beta_msg_status(M), false, Next, IndexState};
15971706 true -> next({delta, Delta, Rest, State}, IndexState)
16881797 {SeqId, MsgStatus, RPA1} = gb_trees:take_largest(RPA),
16891798 {MsgStatus1, State1} =
16901799 maybe_write_to_disk(true, false, MsgStatus, State),
1691 DPA1 = gb_trees:insert(SeqId, m(trim_msg_status(MsgStatus1)), DPA),
1800 MsgStatus2 = m(trim_msg_status(MsgStatus1)),
1801 DPA1 = gb_trees:insert(SeqId, MsgStatus2, DPA),
16921802 limit_ram_acks(Quota - 1,
1693 upd_ram_bytes(
1694 -1, MsgStatus1,
1695 State1 #vqstate { ram_pending_ack = RPA1,
1696 disk_pending_ack = DPA1 }))
1803 stats({0, 0}, {MsgStatus, MsgStatus2},
1804 State1 #vqstate { ram_pending_ack = RPA1,
1805 disk_pending_ack = DPA1 }))
16971806 end.
16981807
16991808 permitted_beta_count(#vqstate { len = 0 }) ->
17541863 delta = Delta,
17551864 q3 = Q3,
17561865 index_state = IndexState,
1866 ram_msg_count = RamMsgCount,
1867 ram_bytes = RamBytes,
17571868 ram_pending_ack = RPA,
17581869 disk_pending_ack = DPA,
1870 qi_pending_ack = QPA,
1871 disk_read_count = DiskReadCount,
17591872 transient_threshold = TransientThreshold }) ->
17601873 #delta { start_seq_id = DeltaSeqId,
17611874 count = DeltaCount,
17651878 DeltaSeqIdEnd]),
17661879 {List, IndexState1} = rabbit_queue_index:read(DeltaSeqId, DeltaSeqId1,
17671880 IndexState),
1768 {Q3a, IndexState2} = betas_from_index_entries(List, TransientThreshold,
1769 RPA, DPA, IndexState1),
1770 State1 = State #vqstate { index_state = IndexState2 },
1881 {Q3a, RamCountsInc, RamBytesInc, IndexState2} =
1882 betas_from_index_entries(List, TransientThreshold,
1883 RPA, DPA, QPA, IndexState1),
1884 State1 = State #vqstate { index_state = IndexState2,
1885 ram_msg_count = RamMsgCount + RamCountsInc,
1886 ram_bytes = RamBytes + RamBytesInc,
1887 disk_read_count = DiskReadCount + RamCountsInc},
17711888 case ?QUEUE:len(Q3a) of
17721889 0 ->
17731890 %% we ignored every message in the segment due to it being
18251942 {empty, _Q} ->
18261943 {Quota, State};
18271944 {{value, MsgStatus}, Qa} ->
1828 {MsgStatus1 = #msg_status { msg_on_disk = true },
1829 State1 = #vqstate { ram_msg_count = RamMsgCount }} =
1945 {MsgStatus1, State1} =
18301946 maybe_write_to_disk(true, false, MsgStatus, State),
18311947 MsgStatus2 = m(trim_msg_status(MsgStatus1)),
1832 State2 = Consumer(
1833 MsgStatus2, Qa,
1834 upd_ram_bytes(
1835 -1, MsgStatus2,
1836 State1 #vqstate {
1837 ram_msg_count = RamMsgCount - 1})),
1948 State2 = stats(
1949 ready0, {MsgStatus, MsgStatus2}, State1),
1950 State3 = Consumer(MsgStatus2, Qa, State2),
18381951 push_alphas_to_betas(Generator, Consumer, Quota - 1,
1839 Qa, State2)
1952 Qa, State3)
18401953 end
18411954 end.
18421955
1843 push_betas_to_deltas(Quota, State = #vqstate { q2 = Q2,
1844 delta = Delta,
1845 q3 = Q3,
1846 index_state = IndexState }) ->
1847 PushState = {Quota, Delta, IndexState},
1956 push_betas_to_deltas(Quota, State = #vqstate { q2 = Q2,
1957 delta = Delta,
1958 q3 = Q3}) ->
1959 PushState = {Quota, Delta, State},
18481960 {Q3a, PushState1} = push_betas_to_deltas(
18491961 fun ?QUEUE:out_r/1,
18501962 fun rabbit_queue_index:next_segment_boundary/1,
18531965 fun ?QUEUE:out/1,
18541966 fun (Q2MinSeqId) -> Q2MinSeqId end,
18551967 Q2, PushState1),
1856 {_, Delta1, IndexState1} = PushState2,
1857 State #vqstate { q2 = Q2a,
1858 delta = Delta1,
1859 q3 = Q3a,
1860 index_state = IndexState1 }.
1968 {_, Delta1, State1} = PushState2,
1969 State1 #vqstate { q2 = Q2a,
1970 delta = Delta1,
1971 q3 = Q3a }.
18611972
18621973 push_betas_to_deltas(Generator, LimitFun, Q, PushState) ->
18631974 case ?QUEUE:is_empty(Q) of
18731984 end
18741985 end.
18751986
1876 push_betas_to_deltas1(_Generator, _Limit, Q,
1877 {0, _Delta, _IndexState} = PushState) ->
1987 push_betas_to_deltas1(_Generator, _Limit, Q, {0, _Delta, _State} = PushState) ->
18781988 {Q, PushState};
1879 push_betas_to_deltas1(Generator, Limit, Q,
1880 {Quota, Delta, IndexState} = PushState) ->
1989 push_betas_to_deltas1(Generator, Limit, Q, {Quota, Delta, State} = PushState) ->
18811990 case Generator(Q) of
18821991 {empty, _Q} ->
18831992 {Q, PushState};
18851994 when SeqId < Limit ->
18861995 {Q, PushState};
18871996 {{value, MsgStatus = #msg_status { seq_id = SeqId }}, Qa} ->
1888 {#msg_status { index_on_disk = true }, IndexState1} =
1889 maybe_write_index_to_disk(true, MsgStatus, IndexState),
1997 {#msg_status { index_on_disk = true }, State1} =
1998 maybe_write_index_to_disk(true, MsgStatus, State),
1999 State2 = stats(ready0, {MsgStatus, none}, State1),
18902000 Delta1 = expand_delta(SeqId, Delta),
18912001 push_betas_to_deltas1(Generator, Limit, Qa,
1892 {Quota - 1, Delta1, IndexState1})
2002 {Quota - 1, Delta1, State2})
18932003 end.
18942004
18952005 %%----------------------------------------------------------------------------
4444 report(Other, Params) -> term(Other, Params).
4545
4646 term(Thing, {Max, {Content, Struct, ContentDec, StructDec}}) ->
47 case term_limit(Thing, Max) of
47 case exceeds_size(Thing, Max) of
4848 true -> term(Thing, true, #params{content = Content,
4949 struct = Struct,
5050 content_dec = ContentDec,
9292 %% sizes. This is all rather approximate: these sizes are probably
9393 %% not very "fair", but we are only trying to see whether we reach a
9494 %% fairly arbitrary limit anyway.
95 term_limit(Thing, Max) ->
95 exceeds_size(Thing, Max) ->
9696 case term_size(Thing, Max, erlang:system_info(wordsize)) of
9797 limit_exceeded -> true;
9898 _ -> false
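The early-exiting, approximate size check that `exceeds_size/2` relies on can be sketched as follows. This is a Python illustration only; the per-term word costs below are invented for the sketch and are not the Erlang accounting:

```python
# Illustrative model of an approximate term-size check that bails out
# early, so very large terms never have to be fully traversed.
LIMIT_EXCEEDED = "limit_exceeded"

def term_size(thing, max_words, wordsize):
    # Charge a rough word count per element and stop as soon as the
    # running total passes the limit.
    words, stack = 0, [thing]
    while stack:
        t = stack.pop()
        if isinstance(t, (list, tuple)):
            words += 2 * len(t)            # rough per-cell overhead
            stack.extend(t)
        elif isinstance(t, (bytes, str)):
            words += 1 + len(t) // wordsize
        else:
            words += 2                     # flat term: small fixed cost
        if words > max_words:
            return LIMIT_EXCEEDED
    return words

def exceeds_size(thing, max_words, wordsize=8):
    return term_size(thing, max_words, wordsize) == LIMIT_EXCEEDED
```

The point of the design is the early return: a "fair" size is unnecessary because the caller only needs to know whether an arbitrary limit was crossed.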
1717
1818 %% Generic worker pool manager.
1919 %%
20 %% Supports nested submission of jobs (nested jobs always run
21 %% immediately in current worker process).
20 %% Submitted jobs are functions. They can be executed synchronously
21 %% (using worker_pool:submit/1, worker_pool:submit/2) or asynchronously
22 %% (using worker_pool:submit_async/1).
2223 %%
23 %% Possible future enhancements:
24 %% We typically use the worker pool if we want to limit the maximum
25 %% parallelism of some job. We are not trying to dodge the cost of
26 %% creating Erlang processes.
2427 %%
25 %% 1. Allow priorities (basically, change the pending queue to a
26 %% priority_queue).
28 %% Supports nested submission of jobs and two execution modes:
29 %% 'single' and 'reuse'. Jobs executed in 'single' mode are invoked in
30 %% a one-off process. Those executed in 'reuse' mode are invoked in a
31 %% worker process out of the pool. Nested jobs are always executed
32 %% immediately in the current worker process.
33 %%
34 %% 'single' mode is offered to work around a bug in Mnesia: after a
35 %% network partition, reply messages for prior failed requests can be
36 %% sent to Mnesia clients, and a reused worker pool process can crash
37 %% on receiving one.
38 %%
39 %% Caller submissions are enqueued internally. When a worker process
40 %% becomes available, it notifies the pool and is assigned the next
41 %% job to execute. If job execution fails with an error, no
42 %% response is returned to the caller.
43 %%
44 %% Worker processes prioritise certain command-and-control messages
45 %% from the pool.
46 %%
47 %% Future improvement points: job prioritisation.
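The semantics described above (parallelism capped at the pool size, submissions queued internally, nested jobs running inline in the current worker) can be modelled in a few lines. This is a Python toy sketch, not the gen_server2-based implementation; the 'single'/'reuse' distinction and command-and-control prioritisation are omitted, and error propagation is simplified (a failing job just leaves its result empty):

```python
# Toy model of the worker pool semantics; not the Erlang implementation.
import queue
import threading

class WorkerPool:
    def __init__(self, size):
        self._jobs = queue.Queue()
        self._workers = set()
        for _ in range(size):
            t = threading.Thread(target=self._loop, daemon=True)
            self._workers.add(t)  # register before starting
            t.start()

    def _loop(self):
        while True:
            fun, result, done = self._jobs.get()
            try:
                result.append(fun())
            finally:
                done.set()

    def submit(self, fun):
        # Nested submission: already inside a worker, run immediately
        # in the current worker rather than queueing.
        if threading.current_thread() in self._workers:
            return fun()
        result, done = [], threading.Event()
        self._jobs.put((fun, result, done))
        done.wait()
        return result[0]

    def submit_async(self, fun):
        # Fire-and-forget: no reply is sent back to the caller.
        self._jobs.put((fun, [], threading.Event()))
```

Running nested jobs inline avoids a deadlock that queueing them would cause: a worker waiting on a job that can only be served once that same worker is free.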
2748
2849 -behaviour(gen_server2).
2950
1414 %%
1515
1616 -module(worker_pool_worker).
17
18 %% Executes jobs (functions) submitted to a worker pool with worker_pool:submit/1,
19 %% worker_pool:submit/2 or worker_pool:submit_async/1.
20 %%
21 %% See worker_pool for an overview.
1722
1823 -behaviour(gen_server2).
1924
0 VERSION?=3.4.3
0 VERSION?=3.5.1