Merge tag '1.11.0' into debian/pike
osprofiler 1.11.0 release
meta:version: 1.11.0
meta:diff-start: -
meta:series: pike
meta:release-type: release
meta:pypi: yes
meta:first: no
meta:release:Author: ChangBo Guo(gcb) <eric.guo@easystack.cn>
meta:release:Commit: ChangBo Guo(gcb) <eric.guo@easystack.cn>
meta:release:Change-Id: I8bea7bf3bea2f048a13ed1e714f87835dfbcb265
meta:release:Code-Review+2: Davanum Srinivas (dims) <davanum@gmail.com>
meta:release:Workflow+1: Davanum Srinivas (dims) <davanum@gmail.com>
Thomas Goirand
6 years ago
0 | AUTHORS | |
1 | ChangeLog | |
0 | 2 | *.py[cod] |
1 | 3 | |
2 | 4 | # C extensions |
3 | 5 | *.so |
4 | 6 | |
5 | 7 | # Packages |
6 | *.egg | |
7 | *.egg-info | |
8 | *.egg* | |
8 | 9 | dist |
9 | 10 | build |
10 | 11 | _build |
36 | 37 | .mr.developer.cfg |
37 | 38 | .project |
38 | 39 | .pydevproject |
40 | .idea | |
41 | ||
42 | # reno build | |
43 | releasenotes/build |
0 | 0 | [DEFAULT] |
1 | test_command=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 ${PYTHON:-python} -m subunit.run discover -t ./ ./osprofiler/tests $LISTOPT $IDOPTION | |
1 | test_command=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 ${PYTHON:-python} -m subunit.run discover -t ./ ${OS_TEST_PATH:-./osprofiler/tests/unit} $LISTOPT $IDOPTION | |
2 | 2 | test_id_option=--load-list $IDFILE |
3 | 3 | test_list_option=--list |
0 | If you would like to contribute to the development of OpenStack, | |
1 | you must follow the steps in this page: | |
2 | ||
3 | http://docs.openstack.org/infra/manual/developers.html | |
4 | ||
5 | Once those steps have been completed, changes to OpenStack | |
6 | should be submitted for review via the Gerrit tool, following | |
7 | the workflow documented at: | |
8 | ||
9 | http://docs.openstack.org/infra/manual/developers.html#development-workflow | |
10 | ||
11 | Pull requests submitted through GitHub will be ignored. | |
12 | ||
13 | Bugs should be filed on Launchpad, not GitHub: | |
14 | ||
15 | https://bugs.launchpad.net/osprofiler |
0 | OSProfiler | |
1 | ========== | |
0 | ======================== | |
1 | Team and repository tags | |
2 | ======================== | |
2 | 3 | |
3 | OSProfiler is an OpenStack cross-project profiling library. | |
4 | .. image:: http://governance.openstack.org/badges/osprofiler.svg | |
5 | :target: http://governance.openstack.org/reference/tags/index.html | |
4 | 6 | |
7 | .. Change things from this point on | |
5 | 8 | |
6 | Background | |
7 | ---------- | |
9 | =========================================================== | |
10 | OSProfiler -- Cross-project profiling library | |
11 | =========================================================== | |
8 | 12 | |
9 | OpenStack consists of multiple projects. Each project, in turn, is composed of | |
10 | multiple services. To process some request, e.g. to boot a virtual machine, | |
11 | OpenStack uses multiple services from different projects. In the case something | |
12 | works too slowly, it's extremely complicated to understand what exactly goes | |
13 | wrong and to locate the bottleneck. | |
13 | .. image:: https://img.shields.io/pypi/v/osprofiler.svg | |
14 | :target: https://pypi.python.org/pypi/osprofiler/ | |
15 | :alt: Latest Version | |
14 | 16 | |
15 | To resolve this issue, we introduce a tiny but powerful library, | |
16 | **osprofiler**, that is going to be used by all OpenStack projects and their | |
17 | python clients. It makes it possible to generate 1 trace per request that | |
18 | goes through all involved services, and to build a tree of calls (see an | |
19 | `example <http://pavlovic.me/rally/profiler/>`_). | |
17 | .. image:: https://img.shields.io/pypi/dm/osprofiler.svg | |
18 | :target: https://pypi.python.org/pypi/osprofiler/ | |
19 | :alt: Downloads | |
20 | 20 | |
21 | OSProfiler provides a tiny but powerful library that is used by | |
22 | most (soon to be all) OpenStack projects and their python clients. It | |
23 | provides functionality to generate 1 trace per request, that goes | |
24 | through all involved services. This trace can then be extracted and used | |
25 | to build a tree of calls which can be quite handy for a variety of | |
26 | reasons (for example in isolating cross-project performance issues). | |
21 | 27 | |
22 | Why not cProfile etc.? | |
23 | ------------------------- | |
24 | ||
25 | **The scope of this library is quite different:** | |
26 | ||
27 | * We are interested in getting one trace of points from different services, | |
28 | not tracing all python calls inside one process. | |
29 | ||
30 | * This library should be easy to integrate into OpenStack. This means that: | |
31 | ||
32 | * It shouldn't require too many changes in code bases of integrating | |
33 | projects. | |
34 | ||
35 | * We should be able to turn it off fully. | |
36 | ||
37 | * We should be able to keep it turned on in lazy mode in production | |
38 | (e.g. an admin should be able to turn tracing on per request). | |
39 | ||
40 | ||
41 | OSprofiler API version 0.3.0 | |
42 | ---------------------------- | |
43 | ||
44 | There are a couple of things that you should know about the API before using it. | |
45 | ||
46 | ||
47 | * **4 ways to add a new trace point** | |
48 | ||
49 | .. parsed-literal:: | |
50 | ||
51 | from osprofiler import profiler | |
52 | ||
53 | def some_func(): | |
54 | profiler.start("point_name", {"any_key": "with_any_value"}) | |
55 | # your code | |
56 | profiler.stop({"any_info_about_point": "in_this_dict"}) | |
57 | ||
58 | ||
59 | @profiler.trace("point_name", | |
60 | info={"any_info_about_point": "in_this_dict"}, | |
61 | hide_args=False) | |
62 | def some_func2(*args, **kwargs): | |
63 | # If you need to hide args in profile info, put hide_args=True | |
64 | pass | |
65 | ||
66 | def some_func3(): | |
67 | with profiler.Trace("point_name", | |
68 | info={"any_key": "with_any_value"}): | |
69 | pass  # some code here | |
70 | ||
71 | @profiler.trace_cls("point_name", info={}, hide_args=False, | |
72 | trace_private=False) | |
73 | class TracedClass(object): | |
74 | ||
75 | def traced_method(self): | |
76 | pass | |
77 | ||
78 | def _traced_only_if_trace_private_true(self): | |
79 | pass | |
80 | ||
81 | * **How does the profiler work?** | |
82 | ||
83 | * **profiler.Trace()** and **@profiler.trace()** are just syntactic sugar | |
84 | that calls the **profiler.start()** & **profiler.stop()** methods. | |
85 | ||
86 | * Every call of **profiler.start()** & **profiler.stop()** sends one | |
87 | message to the **collector**. It means that every trace point creates 2 | |
88 | records in the collector. *(more about the collector & records later)* | |
89 | ||
90 | * Nested trace points are supported. The sample below produces 2 trace points: | |
91 | ||
92 | .. parsed-literal:: | |
93 | ||
94 | profiler.start("parent_point") | |
95 | profiler.start("child_point") | |
96 | profiler.stop() | |
97 | profiler.stop() | |
98 | ||
99 | The implementation is quite simple. Profiler has one stack that contains | |
100 | ids of all trace points. E.g.: | |
101 | ||
102 | .. parsed-literal:: | |
103 | ||
104 | profiler.start("parent_point") # trace_stack.push(<new_uuid>) | |
105 | # send to collector -> trace_stack[-2:] | |
106 | ||
107 | profiler.start("child_point") # trace_stack.push(<new_uuid>) | |
108 | # send to collector -> trace_stack[-2:] | |
109 | profiler.stop() # send to collector -> trace_stack[-2:] | |
110 | # trace_stack.pop() | |
111 | ||
112 | profiler.stop() # send to collector -> trace_stack[-2:] | |
113 | # trace_stack.pop() | |
114 | ||
115 | Given the **(parent_id, trace_id)** pair of every trace point, it is | |
116 | simple to build the tree of nested trace points. | |
117 | ||
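The tree-building step can be sketched in a few lines of Python. This is a hypothetical helper, not part of OSProfiler; each message dict is assumed to carry the ``parent_id`` and ``trace_id`` fields described in the next section:

```python
# Hypothetical sketch, not part of OSProfiler: rebuild the call tree from
# collected messages using their (parent_id, trace_id) pairs.
def build_tree(messages):
    # One node per trace point, keyed by its trace_id.
    nodes = {m["trace_id"]: {"info": m, "children": []} for m in messages}
    roots = []
    for node in nodes.values():
        parent = nodes.get(node["info"]["parent_id"])
        if parent is not None:
            parent["children"].append(node)
        else:
            # No known parent: this point starts a trace.
            roots.append(node)
    return roots
```

A real implementation would additionally merge the start/stop message pair of each trace point; the sketch assumes one dict per point.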
118 | * **Process of sending to collector** | |
119 | ||
120 | Each trace point produces 2 messages (start and stop). Messages like the | |
121 | one below are sent to the collector: | |
122 | ||
123 | .. parsed-literal:: | |
124 | { | |
125 | "name": <point_name>-(start|stop), | |
126 | "base_id": <uuid>, | |
127 | "parent_id": <uuid>, | |
128 | "trace_id": <uuid>, | |
129 | "info": <dict> | |
130 | } | |
131 | ||
132 | * base_id - <uuid> that is equal for all trace points that belong | |
133 | to one trace, this is done to simplify the process of retrieving | |
134 | all trace points related to one trace from collector | |
135 | * parent_id - <uuid> of parent trace point | |
136 | * trace_id - <uuid> of current trace point | |
137 | * info - the dictionary that contains user information passed when calling | |
138 | profiler **start()** & **stop()** methods. | |
139 | ||
140 | ||
141 | ||
142 | * **Setting up the collector.** | |
143 | ||
144 | The profiler doesn't include a trace point collector. The user/developer | |
145 | should instead provide a method that sends messages to a collector. Let's | |
146 | take a look at a trivial sample, where the collector is just a file: | |
147 | ||
148 | .. parsed-literal:: | |
149 | ||
150 | import json | |
151 | ||
152 | from osprofiler import notifier | |
153 | ||
154 | def send_info_to_file_collector(info, context=None): | |
155 | with open("traces", "a") as f: | |
156 | f.write(json.dumps(info)) | |
157 | ||
158 | notifier.set(send_info_to_file_collector) | |
159 | ||
160 | So now on every **profiler.start()** and **profiler.stop()** call we will | |
161 | write info about the trace point to the end of the **traces** file. | |
162 | ||
163 | ||
164 | * **Initialization of profiler.** | |
165 | ||
166 | If profiler is not initialized, all calls to **profiler.start()** and | |
167 | **profiler.stop()** will be ignored. | |
168 | ||
169 | Initialization is quite a simple procedure. | |
170 | ||
171 | .. parsed-literal:: | |
172 | ||
173 | from osprofiler import profiler | |
174 | ||
175 | profiler.init("SECRET_HMAC_KEY", base_id=<uuid>, parent_id=<uuid>) | |
176 | ||
177 | ``SECRET_HMAC_KEY`` - will be discussed later, because it's related to the | |
178 | integration of OSprofiler & OpenStack. | |
179 | ||
180 | **base_id** and **parent_id** will be used to initialize the trace stack | |
181 | in the profiler, e.g. trace_stack = [base_id, parent_id]. | |
182 | ||
183 | ||
184 | * **OSProfiler CLI.** | |
185 | ||
186 | To make it easier for end users to work with the profiler from the CLI, | |
187 | osprofiler has an entry point that allows them to retrieve information | |
188 | about traces and present it in human-readable form. | |
189 | ||
190 | Available commands: | |
191 | ||
192 | * Help message with all available commands and their arguments: | |
193 | ||
194 | .. parsed-literal:: | |
195 | ||
196 | $ osprofiler -h/--help | |
197 | ||
198 | * OSProfiler version: | |
199 | ||
200 | .. parsed-literal:: | |
201 | ||
202 | $ osprofiler -v/--version | |
203 | ||
204 | * Results of profiling can be obtained in JSON (option: ``--json``) and HTML | |
205 | (option: ``--html``) formats: | |
206 | ||
207 | .. parsed-literal:: | |
208 | ||
209 | $ osprofiler trace show <trace_id> --json/--html | |
210 | ||
211 | hint: the ``--out`` option will redirect the result of | |
212 | ``osprofiler trace show`` to the specified file: | |
213 | ||
214 | .. parsed-literal:: | |
215 | ||
216 | $ osprofiler trace show <trace_id> --json/--html --out /path/to/file | |
217 | ||
218 | Integration with OpenStack | |
219 | -------------------------- | |
220 | ||
221 | There are 4 topics related to integrating OSprofiler with `OpenStack`_: | |
222 | ||
223 | * **What should we use as a centralized collector?** | |
224 | ||
225 | We decided to use `Ceilometer`_, because: | |
226 | ||
227 | * It's already integrated in OpenStack, so it's quite simple to send | |
228 | notifications to it from all projects. | |
229 | ||
230 | * There is an OpenStack API in Ceilometer that allows us to retrieve all | |
231 | messages related to one trace. Take a look at | |
232 | *osprofiler.parsers.ceilometer:get_notifications* | |
233 | ||
234 | ||
235 | * **How to setup profiler notifier?** | |
236 | ||
237 | We decided to use the oslo.messaging Notifier API, because: | |
238 | ||
239 | * `oslo.messaging`_ is integrated in all projects | |
240 | ||
241 | * It's the simplest way to send notifications to Ceilometer; take a | |
242 | look at: *osprofiler.notifiers.messaging.Messaging:notify* method | |
243 | ||
244 | * We don't need to add any new `CONF`_ options in projects | |
245 | ||
246 | ||
247 | * **How to initialize the profiler to get one trace across all services?** | |
248 | ||
249 | To enable cross-service profiling we need to send the trace info | |
250 | (base_id & trace_id) from caller to callee, so the callee can init its | |
251 | profiler with these values. | |
252 | ||
253 | In case of OpenStack there are 2 kinds of interaction between 2 services: | |
254 | ||
255 | * REST API | |
256 | ||
257 | It's well known that there are python clients for every project, | |
258 | which generate proper HTTP requests and parse responses into objects. | |
259 | ||
260 | These python clients are used in 2 cases: | |
261 | ||
262 | * User access -> OpenStack | |
263 | ||
264 | * Service from Project 1 would like to access Service from Project 2 | |
265 | ||
266 | ||
267 | So what we need is to: | |
268 | ||
269 | * Make the python clients send headers with trace info (if the profiler is initialized) | |
270 | ||
271 | * Add `OSprofiler WSGI middleware`_ to your service; this initializes | |
272 | the profiler, if and only if there are special trace headers, that | |
273 | are signed by one of the HMAC keys from api-paste.ini (if multiple | |
274 | keys exist the signing process will continue to use the key that was | |
275 | accepted during validation). | |
276 | ||
277 | * The common items that are used to configure the middleware are the | |
278 | following (these can be provided when initializing the middleware | |
279 | object or when setting up the api-paste.ini file):: | |
280 | ||
281 | hmac_keys = KEY1, KEY2 (can be a single key as well) | |
282 | ||
283 | Actually the algorithm is a bit more complex. The Python client will | |
284 | also sign the trace info with an `HMAC`_ key (let's call that key ``A``) | |
285 | passed to profiler.init, and on reception the WSGI middleware will | |
286 | check that it's signed with *one of* the HMAC keys (the wsgi | |
287 | server should have key ``A`` as well, but may also have keys ``B`` | |
288 | and ``C``) that are specified in api-paste.ini. This ensures that only | |
289 | a user that knows the HMAC key ``A`` in api-paste.ini can init a | |
290 | profiler properly and send trace info that will actually be | |
291 | processed; trace info that does **not** pass the HMAC validation | |
292 | will be discarded. **NOTE:** The | |
293 | application of many possible *validation* keys makes it possible to | |
294 | roll out a key upgrade in a non-impactful manner (by adding a key into | |
295 | the list and rolling out that change and then removing the older key at | |
296 | some time in the future). | |
297 | ||
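The key-list validation described above can be sketched as follows. This is a hypothetical illustration, not the middleware's actual code; function names are made up for the example:

```python
import hashlib
import hmac

# Hypothetical sketch of the scheme described above: the client signs the
# trace info with its key ``A``; the server accepts the message if *any* of
# its configured keys produces a matching signature. Accepting any key from
# the list is what makes non-impactful key rollover possible.
def sign(trace_info, key):
    return hmac.new(key.encode(), trace_info.encode(), hashlib.sha1).hexdigest()

def validate(trace_info, signature, hmac_keys):
    return any(hmac.compare_digest(sign(trace_info, k), signature)
               for k in hmac_keys)
```

With ``hmac_keys = ["B", "A", "C"]`` on the server, a message signed with ``A`` validates; one signed with an unknown key is discarded.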
298 | * RPC API | |
299 | ||
300 | RPC calls are used for interaction between services of one project. | |
301 | It's well known that projects are using `oslo.messaging`_ to deal with | |
302 | RPC. It's very good, because projects deal with RPC in similar way. | |
303 | ||
304 | So there are 2 required changes: | |
305 | ||
306 | * On the caller side, put trace info into the request context (if the | |
307 | profiler was initialized) | |
308 | ||
309 | * On the callee side, initialize the profiler if there is trace info in | |
310 | the request context. | |
311 | ||
312 | * Trace all methods of callee API (can be done via profiler.trace_cls). | |
313 | ||
314 | ||
315 | * **What points should be tracked by default?** | |
316 | ||
317 | For all projects, we should include 5 kinds of points by default: | |
318 | ||
319 | * All HTTP calls - helps to get information about what HTTP requests were | |
320 | made, the duration of calls (latency of a service), and which projects | |
321 | were involved in the request. | |
322 | ||
323 | * All RPC calls - helps to understand the duration of the parts of a | |
324 | request related to different services in one project. This information | |
325 | is essential to understand which service produces the bottleneck. | |
326 | ||
327 | * All DB API calls - in some cases a slow DB query can be the bottleneck, | |
328 | so it's useful to track how much time a request spends in the DB layer. | |
329 | ||
330 | * All driver calls - in case of nova, cinder and others we have vendor | |
331 | drivers; tracking the duration of driver calls helps to isolate slow | |
vendor drivers. | |
332 | ||
333 | * All SQL requests (turned off by default, because they produce a lot of | |
334 | traffic) | |
335 | ||
336 | .. _CONF: http://docs.openstack.org/developer/oslo.config/ | |
337 | .. _HMAC: http://en.wikipedia.org/wiki/Hash-based_message_authentication_code | |
338 | .. _OpenStack: http://openstack.org/ | |
339 | .. _Ceilometer: https://wiki.openstack.org/wiki/Ceilometer | |
340 | .. _oslo.messaging: https://pypi.python.org/pypi/oslo.messaging | |
341 | .. _OSprofiler WSGI middleware: https://github.com/stackforge/osprofiler/blob/master/osprofiler/web.py | |
28 | * Free software: Apache license | |
29 | * Documentation: https://docs.openstack.org/osprofiler/latest/ | |
30 | * Source: https://git.openstack.org/cgit/openstack/osprofiler | |
31 | * Bugs: https://bugs.launchpad.net/osprofiler |
0 | 0 | ================================== |
1 | Enabling OSprofiler using DevStack | |
1 | Enabling OSProfiler using DevStack | |
2 | 2 | ================================== |
3 | 3 | |
4 | 4 | This directory contains the files necessary to run OpenStack with enabled |
5 | OSprofiler in DevStack. | |
5 | OSProfiler in DevStack. | |
6 | 6 | |
7 | To configure DevStack to enable OSprofiler edit | |
8 | ``${DEVSTACK_DIR}/local.conf`` file and add:: | |
7 | OSProfiler has different drivers for trace processing. The default driver uses | |
8 | Ceilometer to process and store trace events. Other drivers may connect | |
9 | to databases directly and do not require Ceilometer. | |
9 | 10 | |
10 | enable_plugin ceilometer https://github.com/openstack/ceilometer master | |
11 | enable_plugin osprofiler https://github.com/openstack/osprofiler master | |
11 | To configure DevStack and enable OSProfiler edit ``${DEVSTACK_DIR}/local.conf`` | |
12 | file and add the following to ``[[local|localrc]]`` section: | |
12 | 13 | |
13 | to the ``[[local|localrc]]`` section. | |
14 | * to use specified driver:: | |
15 | ||
16 | enable_plugin osprofiler https://git.openstack.org/openstack/osprofiler master | |
17 | OSPROFILER_CONNECTION_STRING=<connection string value> | |
18 | ||
19 | The driver is chosen depending on the value of the | |
20 | ``OSPROFILER_CONNECTION_STRING`` variable (refer to the next section for | |
21 | details). | |
22 | ||
23 | * to use default Ceilometer driver:: | |
24 | ||
25 | enable_plugin panko https://git.openstack.org/openstack/panko master | |
26 | enable_plugin ceilometer https://git.openstack.org/openstack/ceilometer master | |
27 | enable_plugin osprofiler https://git.openstack.org/openstack/osprofiler master | |
28 | ||
29 | .. note:: The order of enabling plugins matters. | |
14 | 30 | |
15 | 31 | Run DevStack as normal:: |
16 | 32 | |
17 | 33 | $ ./stack.sh |
34 | ||
35 | ||
36 | Config variables | |
37 | ---------------- | |
38 | ||
39 | **OSPROFILER_HMAC_KEYS** - a set of HMAC secrets that are used to trigger | |
40 | profiling in OpenStack services: only the requests that specify one of these | |
41 | keys in HTTP headers will be profiled. Multiple secrets are specified as | |
42 | a comma-separated list of string values:: | |
43 | ||
44 | OSPROFILER_HMAC_KEYS=swordfish,foxtrot,charlie | |
45 | ||
46 | **OSPROFILER_CONNECTION_STRING** - connection string that identifies the driver. | |
47 | The default value, ``messaging://``, refers to the Ceilometer driver. For a full | |
48 | list of drivers please refer to | |
49 | ``http://git.openstack.org/cgit/openstack/osprofiler/tree/osprofiler/drivers``. | |
50 | Example: enable ElasticSearch driver with the server running on localhost:: | |
51 | ||
52 | OSPROFILER_CONNECTION_STRING=elasticsearch://127.0.0.1:9200 | |
53 |
0 | # lib/rally | |
1 | # Functions to control the configuration and operation of the **Rally** | |
0 | # lib/osprofiler | |
1 | # Functions to control the configuration and operation of the **osprofiler** | |
2 | 2 | |
3 | 3 | # Dependencies: |
4 | 4 | # |
19 | 19 | # Defaults |
20 | 20 | # -------- |
21 | 21 | |
22 | OLD_STYLE_CONF_FILES=( | |
23 | /etc/cinder/cinder.conf | |
24 | /etc/heat/heat.conf | |
25 | ) | |
26 | ||
27 | NEW_STYLE_CONF_FILES=( | |
28 | /etc/keystone/keystone.conf | |
29 | /etc/nova/nova.conf | |
30 | /etc/neutron/neutron.conf | |
31 | /etc/glance/glance-api.conf | |
32 | /etc/glance/glance-registry.conf | |
33 | /etc/trove/trove.conf | |
34 | /etc/trove/trove-conductor.conf | |
35 | /etc/trove/trove-guestagent.conf | |
36 | /etc/trove/trove-taskmanager.conf | |
22 | CONF_FILES=( | |
23 | $CINDER_CONF | |
24 | $HEAT_CONF | |
25 | $KEYSTONE_CONF | |
26 | $NOVA_CONF | |
27 | $NEUTRON_CONF | |
28 | $GLANCE_API_CONF | |
29 | $GLANCE_REGISTRY_CONF | |
30 | $TROVE_CONF | |
31 | $TROVE_CONDUCTOR_CONF | |
32 | $TROVE_GUESTAGENT_CONF | |
33 | $TROVE_TASKMANAGER_CONF | |
34 | $SENLIN_CONF | |
35 | $MAGNUM_CONF | |
36 | $ZUN_CONF | |
37 | 37 | ) |
38 | 38 | |
39 | 39 | # This will update CEILOMETER_NOTIFICATION_TOPICS in ceilometer.conf file |
46 | 46 | # configure_osprofiler() - Nothing for now |
47 | 47 | function configure_osprofiler() { |
48 | 48 | |
49 | for conf in ${OLD_STYLE_CONF_FILES[@]}; do | |
50 | if [ -f $conf ] | |
51 | then | |
52 | iniset $conf profiler profiler_enabled True | |
53 | iniset $conf profiler trace_sqlalchemy True | |
54 | iniset $conf profiler hmac_keys SECRET_KEY | |
55 | fi | |
56 | done | |
57 | ||
58 | for conf in ${NEW_STYLE_CONF_FILES[@]}; do | |
49 | for conf in ${CONF_FILES[@]}; do | |
59 | 50 | if [ -f $conf ] |
60 | 51 | then |
61 | 52 | iniset $conf profiler enabled True |
62 | 53 | iniset $conf profiler trace_sqlalchemy True |
63 | iniset $conf profiler hmac_keys SECRET_KEY | |
54 | iniset $conf profiler hmac_keys $OSPROFILER_HMAC_KEYS | |
55 | iniset $conf profiler connection_string $OSPROFILER_CONNECTION_STRING | |
64 | 56 | fi |
65 | 57 | done |
66 | CEILOMETER_CONF=/etc/ceilometer/ceilometer.conf | |
67 | iniset $CEILOMETER_CONF event store_raw info | |
58 | if [ -f $CEILOMETER_CONF ] | |
59 | then | |
60 | iniset $CEILOMETER_CONF event store_raw info | |
61 | fi | |
68 | 62 | } |
69 | 63 | |
70 | ||
71 | # init_rally() - Initialize databases, etc. | |
72 | function init_osprofiler() { | |
73 | ||
74 | echo "Do nothing here for now" | |
75 | } | |
76 | 64 | |
77 | 65 | # Restore xtrace |
78 | 66 | $XTRACE |
0 | # DevStack extras script to install Rally | |
0 | # DevStack extras script to install osprofiler | |
1 | 1 | |
2 | 2 | # Save trace setting |
3 | 3 | XTRACE=$(set +o | grep xtrace) |
5 | 5 | |
6 | 6 | source $DEST/osprofiler/devstack/lib/osprofiler |
7 | 7 | |
8 | if [[ "$1" == "source" ]]; then | |
9 | # Initial source | |
10 | source $TOP_DIR/lib/rally | |
11 | # elif [[ "$1" == "stack" && "$2" == "install" ]]; then | |
12 | # echo_summary "Installing OSprofiler" | |
13 | # install_rally | |
14 | elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then | |
8 | if [[ "$1" == "stack" && "$2" == "post-config" ]]; then | |
15 | 9 | echo_summary "Configuring OSprofiler" |
16 | 10 | configure_osprofiler |
17 | elif [[ "$1" == "stack" && "$2" == "extra" ]]; then | |
18 | echo_summary "Initializing OSprofiler" | |
19 | init_osprofiler | |
20 | 11 | fi |
21 | 12 | |
22 | 13 | # Restore xtrace |
0 | 0 | # Devstack settings |
1 | 1 | |
2 | # A comma-separated list of secrets, that will be used for triggering | |
3 | # of profiling in OpenStack services: profiling is only performed for | |
4 | # requests that specify one of these keys in HTTP headers. | |
5 | OSPROFILER_HMAC_KEYS=${OSPROFILER_HMAC_KEYS:-"SECRET_KEY"} | |
6 | OSPROFILER_CONNECTION_STRING=${OSPROFILER_CONNECTION_STRING:-"messaging://"} | |
7 | ||
2 | 8 | enable_service osprofiler |
10 | 10 | # All configuration values have a default; values that are commented out |
11 | 11 | # serve to show the default. |
12 | 12 | |
13 | import datetime | |
14 | 13 | import os |
15 | import subprocess | |
16 | 14 | import sys |
17 | 15 | |
18 | 16 | # If extensions (or modules to document with autodoc) are in another directory, |
37 | 35 | 'sphinx.ext.coverage', |
38 | 36 | 'sphinx.ext.ifconfig', |
39 | 37 | 'sphinx.ext.viewcode', |
38 | 'openstackdocstheme', | |
40 | 39 | ] |
40 | ||
41 | # openstackdocstheme options | |
42 | repository_name = 'openstack/osprofiler' | |
43 | bug_project = 'osprofiler' | |
44 | bug_tag = '' | |
45 | ||
41 | 46 | todo_include_todos = True |
42 | 47 | |
43 | 48 | # Add any paths that contain templates here, relative to this directory. |
54 | 59 | |
55 | 60 | # General information about the project. |
56 | 61 | project = u'OSprofiler' |
57 | copyright = u'%d, Mirantis Inc.' % datetime.datetime.now().year | |
58 | ||
59 | # The version info for the project you're documenting, acts as replacement for | |
60 | # |version| and |release|, also used in various other places throughout the | |
61 | # built documents. | |
62 | # | |
63 | # The short X.Y version. | |
64 | version = '0.2.5' | |
65 | # The full version, including alpha/beta/rc tags. | |
66 | release = '0.2.5' | |
62 | copyright = u'2016, OpenStack Foundation' | |
67 | 63 | |
68 | 64 | # The language for content autogenerated by Sphinx. Refer to documentation |
69 | 65 | # for a list of supported languages. |
104 | 100 | |
105 | 101 | # The theme to use for HTML and HTML Help pages. See the documentation for |
106 | 102 | # a list of builtin themes. |
107 | html_theme = 'default' | |
103 | html_theme = 'openstackdocs' | |
108 | 104 | |
109 | 105 | # Theme options are theme-specific and customize the look and feel of a theme |
110 | 106 | # further. For a list of options available for each theme, see the |
137 | 133 | |
138 | 134 | # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, |
139 | 135 | # using the given strftime format. |
140 | git_cmd = ["git", "log", "--pretty=format:'%ad, commit %h'", "--date=local", | |
141 | "-n1"] | |
142 | html_last_updated_fmt = subprocess.Popen( | |
143 | git_cmd, stdout=subprocess.PIPE).communicate()[0] | |
136 | html_last_updated_fmt = '%Y-%m-%d %H:%M' | |
144 | 137 | |
145 | 138 | # If true, SmartyPants will be used to convert quotes and dashes to |
146 | 139 | # typographically correct entities. |
0 | ============================================= | |
1 | OSProfiler -- Cross-project profiling library | |
2 | ============================================= | |
3 | ||
4 | OSProfiler provides a tiny but powerful library that is used by | |
5 | most (soon to be all) OpenStack projects and their python clients. It | |
6 | provides functionality to generate 1 trace per request, that goes | |
7 | through all involved services. This trace can then be extracted and used | |
8 | to build a tree of calls which can be quite handy for a variety of | |
9 | reasons (for example in isolating cross-project performance issues). | |
10 | ||
11 | .. toctree:: | |
12 | :maxdepth: 2 | |
13 | ||
14 | user/index | |
15 | ||
16 | .. rubric:: Indices and tables | |
17 | ||
18 | * :ref:`genindex` | |
19 | * :ref:`modindex` | |
20 | * :ref:`search` | |
21 |
0 | === | |
1 | API | |
2 | === | |
3 | ||
4 | There are a few things that you should know about the API before using it. | |
5 | ||
6 | Five ways to add a new trace point. | |
7 | ----------------------------------- | |
8 | ||
9 | .. code-block:: python | |
10 | ||
11 | import six | |
from osprofiler import profiler | |
12 | ||
13 | def some_func(): | |
14 | profiler.start("point_name", {"any_key": "with_any_value"}) | |
15 | # your code | |
16 | profiler.stop({"any_info_about_point": "in_this_dict"}) | |
17 | ||
18 | ||
19 | @profiler.trace("point_name", | |
20 | info={"any_info_about_point": "in_this_dict"}, | |
21 | hide_args=False) | |
22 | def some_func2(*args, **kwargs): | |
23 | # If you need to hide args in profile info, put hide_args=True | |
24 | pass | |
25 | ||
26 | def some_func3(): | |
27 | with profiler.Trace("point_name", | |
28 | info={"any_key": "with_any_value"}): | |
29 | pass  # some code here | |
30 | ||
31 | @profiler.trace_cls("point_name", info={}, hide_args=False, | |
32 | trace_private=False) | |
33 | class TracedClass(object): | |
34 | ||
35 | def traced_method(self): | |
36 | pass | |
37 | ||
38 | def _traced_only_if_trace_private_true(self): | |
39 | pass | |
40 | ||
41 | @six.add_metaclass(profiler.TracedMeta) | |
42 | class RpcManagerClass(object): | |
43 | __trace_args__ = {'name': 'rpc', | |
44 | 'info': None, | |
45 | 'hide_args': False, | |
46 | 'trace_private': False} | |
47 | ||
48 | def my_method(self, some_args): | |
49 | pass | |
50 | ||
51 | def my_method2(self, some_arg1, some_arg2, kw=None, kw2=None): | |
52 | pass | |
53 | ||
54 | How does the profiler work? | |
55 | --------------------------- | |
56 | ||
57 | * **profiler.Trace()** and **@profiler.trace()** are just syntactic sugar | |
58 | that calls the **profiler.start()** & **profiler.stop()** methods. | |
59 | ||
60 | * Every call of **profiler.start()** & **profiler.stop()** sends one | |
61 | message to the **collector**. It means that every trace point creates 2 | |
62 | records in the collector. *(more about the collector & records later)* | |
63 | ||
64 | * Nested trace points are supported. The sample below produces 2 trace points: | |
65 | ||
66 | .. code-block:: python | |
67 | ||
68 | profiler.start("parent_point") | |
69 | profiler.start("child_point") | |
70 | profiler.stop() | |
71 | profiler.stop() | |
72 | ||
73 | The implementation is quite simple. Profiler has one stack that contains | |
74 | ids of all trace points. E.g.: | |
75 | ||
76 | .. code-block:: python | |
77 | ||
78 | profiler.start("parent_point") # trace_stack.push(<new_uuid>) | |
79 | # send to collector -> trace_stack[-2:] | |
80 | ||
81 | profiler.start("child_point") # trace_stack.push(<new_uuid>) | |
82 | # send to collector -> trace_stack[-2:] | |
83 | profiler.stop() # send to collector -> trace_stack[-2:] | |
84 | # trace_stack.pop() | |
85 | ||
86 | profiler.stop() # send to collector -> trace_stack[-2:] | |
87 | # trace_stack.pop() | |
88 | ||
89 | Given the **(parent_id, trace_id)** pair of every trace point, it is | |
90 | simple to build the tree of nested trace points. | |
91 | ||
92 | Process of sending to collector. | |
93 | -------------------------------- | |
94 | ||
95 | Each trace point produces 2 messages (start and stop). Messages like the | |
96 | one below are sent to the collector: | |
97 | ||
98 | .. parsed-literal:: | |
99 | ||
100 | { | |
101 | "name": <point_name>-(start|stop), | |
102 | "base_id": <uuid>, | |
103 | "parent_id": <uuid>, | |
104 | "trace_id": <uuid>, | |
105 | "info": <dict> | |
106 | } | |
107 | ||
108 | The fields are defined as the following: | |
109 | ||
110 | * base_id - ``<uuid>`` that is equal for all trace points that belong | |
111 | to one trace, this is done to simplify the process of retrieving | |
112 | all trace points related to one trace from collector | |
113 | * parent_id - ``<uuid>`` of parent trace point | |
114 | * trace_id - ``<uuid>`` of current trace point | |
115 | * info - the dictionary that contains user information passed when calling | |
116 | profiler **start()** & **stop()** methods. | |
117 | ||
118 | Setting up the collector. | |
119 | ------------------------- | |
120 | ||
121 | Using OSProfiler notifier. | |
122 | ^^^^^^^^^^^^^^^^^^^^^^^^^^ | |
123 | ||
124 | .. note:: The following way of configuring OSProfiler is deprecated. The new | |
125 | approach is described below - `Using OSProfiler initializer.`_. | |
126 | Don't use the OSProfiler notifier directly! Its support will soon be | |
127 | removed from OSProfiler. | |
128 | ||
129 | The profiler doesn't include a trace point collector. The user/developer | |
130 | should instead provide a method that sends messages to a collector. Let's | |
131 | take a look at a trivial sample, where the collector is just a file: | |
132 | ||
133 | .. code-block:: python | |
134 | ||
135 | import json | |
136 | ||
137 | from osprofiler import notifier | |
138 | ||
139 | def send_info_to_file_collector(info, context=None): | |
140 | with open("traces", "a") as f: | |
141 | f.write(json.dumps(info)) | |
142 | ||
143 | notifier.set(send_info_to_file_collector) | |
144 | ||
145 | So now on every **profiler.start()** and **profiler.stop()** call we will | |
146 | write info about the trace point to the end of the **traces** file. | |
147 | ||
148 | Using OSProfiler initializer. | |
149 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | |
150 | ||
151 | OSProfiler now contains various storage drivers to collect tracing data. | |
152 | Information about what driver to use and what options to pass to OSProfiler | |
153 | are now stored in OpenStack services configuration files. Example of such | |
154 | configuration can be found below: | |
155 | ||
156 | .. code-block:: bash | |
157 | ||
158 | [profiler] | |
159 | enabled = True | |
160 | trace_sqlalchemy = True | |
161 | hmac_keys = SECRET_KEY | |
162 | connection_string = messaging:// | |
163 | ||
164 | If such a configuration is provided, OSProfiler can be set up in the | |
165 | following way: | |
166 | ||
167 | .. code-block:: python | |
168 | ||
169 | if CONF.profiler.enabled: | |
170 | osprofiler_initializer.init_from_conf( | |
171 | conf=CONF, | |
172 | context=context.get_admin_context().to_dict(), | |
173 | project="cinder", | |
174 | service=binary, | |
175 | host=host | |
176 | ) | |
177 | ||
178 | Initialization of profiler. | |
179 | --------------------------- | |
180 | ||
181 | If the profiler is not initialized, all calls to **profiler.start()** and | |
182 | **profiler.stop()** will be ignored. | |
183 | ||
184 | Initialization is quite a simple procedure. | |
185 | ||
186 | .. code-block:: python | |
187 | ||
188 | from osprofiler import profiler | |
189 | ||
190 | profiler.init("SECRET_HMAC_KEY", base_id=<uuid>, trace_id=<uuid>) | |
191 | ||
192 | ``SECRET_HMAC_KEY`` will be discussed later, because it's related to the | |
193 | integration of OSprofiler & OpenStack. | |
194 | ||
195 | **base_id** and **trace_id** will be used to initialize stack_trace in | |
196 | profiler, e.g. ``stack_trace = [base_id, trace_id]``. | |
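
The stack behavior described above (and in the earlier start/stop comments) can be simulated in plain Python; this is a toy model, not the actual OSProfiler internals:

```python
import uuid

class TraceStack:
    """Toy model of the profiler's trace stack, seeded by init()."""

    def __init__(self, base_id, trace_id):
        self.stack = [base_id, trace_id]

    def start(self):
        self.stack.append(str(uuid.uuid4()))  # new uuid for the new point
        return tuple(self.stack[-2:])         # (parent_id, trace_id) sent out

    def stop(self):
        ids = tuple(self.stack[-2:])          # same pair sent for the stop msg
        self.stack.pop()
        return ids

ts = TraceStack("base", "root")
start_ids = ts.start()   # the new point's parent is the "root" trace_id
```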
197 | ||
198 | OSProfiler CLI. | |
199 | --------------- | |
200 | ||
201 | To make it easier for end users to work with the profiler from the CLI, | |
202 | OSProfiler has an entry point that allows them to retrieve information about | |
203 | traces and present it in human readable form. | |
204 | ||
205 | Available commands: | |
206 | ||
207 | * Help message with all available commands and their arguments: | |
208 | ||
209 | .. parsed-literal:: | |
210 | ||
211 | $ osprofiler -h/--help | |
212 | ||
213 | * OSProfiler version: | |
214 | ||
215 | .. parsed-literal:: | |
216 | ||
217 | $ osprofiler -v/--version | |
218 | ||
219 | * Results of profiling can be obtained in JSON (option: ``--json``) and HTML | |
220 | (option: ``--html``) formats: | |
221 | ||
222 | .. parsed-literal:: | |
223 | ||
224 | $ osprofiler trace show <trace_id> --json/--html | |
225 | ||
226 | hint: the ``--out`` option will redirect the result of ``osprofiler trace show`` | |
227 | to the specified file: | |
228 | ||
229 | .. parsed-literal:: | |
230 | ||
231 | $ osprofiler trace show <trace_id> --json/--html --out /path/to/file | |
232 | ||
233 | * In the latest versions of OSProfiler with storage drivers (e.g. MongoDB (URI: | |
234 | ``mongodb://``), Messaging (URI: ``messaging://``), and Ceilometer | |
235 | (URI: ``ceilometer://``)) the ``--connection-string`` parameter should be set: | |
236 | ||
237 | .. parsed-literal:: | |
238 | ||
239 | $ osprofiler trace show <trace_id> --connection-string=<URI> --json/--html |
0 | ========== | |
1 | Background | |
2 | ========== | |
3 | ||
4 | OpenStack consists of multiple projects. Each project, in turn, is composed of | |
5 | multiple services. To process some request, e.g. to boot a virtual machine, | |
6 | OpenStack uses multiple services from different projects. In case something | |
7 | works too slowly, it's extremely complicated to understand what exactly goes | |
8 | wrong and to locate the bottleneck. | |
9 | ||
10 | To resolve this issue, we introduce a tiny but powerful library, | |
11 | **osprofiler**, that is going to be used by all OpenStack projects and their | |
12 | python clients. It generates 1 trace per request, which goes through | |
13 | all involved services, and builds a tree of calls. | |
14 | ||
15 | Why not cProfile and etc? | |
16 | ------------------------- | |
17 | ||
18 | **The scope of this library is quite different:** | |
19 | ||
20 | * We are interested in getting one trace of points from different services, | |
21 | not tracing all Python calls inside one process. | |
22 | ||
23 | * This library should be easily integrable into OpenStack. This means that: | |
24 | ||
25 | * It shouldn't require too many changes in code bases of projects it's | |
26 | integrated with. | |
27 | ||
28 | * We should be able to fully turn it off. | |
29 | ||
30 | * We should be able to keep it turned on in lazy mode in production | |
31 | (e.g. admin should be able to "trace" on request). |
0 | ========== | |
1 | Collectors | |
2 | ========== | |
3 | ||
4 | There are a number of drivers to support different collector backends: | |
5 | ||
6 | Redis | |
7 | ----- | |
8 | ||
9 | * Overview | |
10 | ||
11 | The Redis driver allows profiling data to be collected into a redis | |
12 | database instance. The traces are stored as key-value pairs where the | |
13 | key is a string built using trace ids and timestamps and the values | |
14 | are JSON strings containing the trace information. A second driver is | |
15 | included to use Redis Sentinel in addition to single node Redis. | |
16 | ||
17 | * Capabilities | |
18 | ||
19 | * Write trace data to the database. | |
20 | * Query traces in the database: this allows pulling trace data by | |
21 | querying on the keys used to save the data in the database. | |
22 | * Generate a report based on the traces stored in the database. | |
23 | * Supports use of Redis Sentinel for robustness. | |
24 | ||
25 | * Usage | |
26 | ||
27 | The driver is used by OSProfiler when using a connection-string URL | |
28 | of the form ``redis://<hostname>:<port>``. To use the Sentinel version, | |
29 | use a connection-string of the form ``redissentinel://<hostname>:<port>``. | |
30 | ||
31 | * Configuration | |
32 | ||
33 | * No config changes are required for the base Redis driver. | |
34 | * There are two configuration options for the Redis Sentinel driver: | |
35 | ||
36 | * socket_timeout: specifies the sentinel connection socket timeout | |
37 | value. Defaults to: 0.1 seconds | |
38 | * sentinel_service_name: The name of the Sentinel service to use. | |
39 | Defaults to: "mymaster" |
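
A rough sketch of the key/value layout described above; the exact key format is an assumption, and a plain dict stands in for a real Redis client (which would use ``client.set(key, value)`` instead of item assignment):

```python
import json
import time

def store_trace_point(db, info):
    # Key built from trace ids and a timestamp; value is the JSON-encoded
    # trace information, as described for the Redis driver.
    key = "osprofiler:%s:%s:%s" % (info["base_id"], info["trace_id"],
                                   time.time())
    db[key] = json.dumps(info)
    return key

db = {}
key = store_trace_point(db, {"base_id": "b1", "trace_id": "t1",
                             "name": "wsgi-start"})
```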
0 | .. include:: ../../../ChangeLog |
0 | ================ | |
1 | Using OSProfiler | |
2 | ================ | |
3 | ||
4 | OSProfiler provides a tiny but powerful library that is used by | |
5 | most (soon to be all) OpenStack projects and their python clients. It | |
6 | provides functionality to generate 1 trace per request, which goes | |
7 | through all involved services. This trace can then be extracted and used | |
8 | to build a tree of calls which can be quite handy for a variety of | |
9 | reasons (for example in isolating cross-project performance issues). | |
10 | ||
11 | .. toctree:: | |
12 | :maxdepth: 2 | |
13 | ||
14 | background | |
15 | api | |
16 | integration | |
17 | collectors | |
18 | similar_projects | |
19 | ||
20 | Release Notes | |
21 | ============= | |
22 | ||
23 | .. toctree:: | |
24 | :maxdepth: 1 | |
25 | ||
26 | history |
0 | =========== | |
1 | Integration | |
2 | =========== | |
3 | ||
4 | There are 4 topics related to the integration of OSprofiler & `OpenStack`_: | |
5 | ||
6 | What should we use as a centralized collector? | |
7 | ---------------------------------------------- | |
8 | ||
9 | We primarily decided to use `Ceilometer`_, because: | |
10 | ||
11 | * It's already integrated in OpenStack, so it's quite simple to send | |
12 | notifications to it from all projects. | |
13 | ||
14 | * There is an OpenStack API in Ceilometer that allows us to retrieve all | |
15 | messages related to one trace. Take a look at | |
16 | *osprofiler.drivers.ceilometer.Ceilometer:get_report* | |
17 | ||
18 | Starting with OSProfiler 1.4.0, other options (MongoDB driver in the | |
19 | 1.4.0 release, Elasticsearch driver added later, etc.) are also available. | |
20 | ||
21 | ||
22 | How to set up the profiler notifier? | |
23 | -------------------------------------- | |
24 | ||
25 | We primarily decided to use oslo.messaging Notifier API, because: | |
26 | ||
27 | * `oslo.messaging`_ is integrated in all projects | |
28 | ||
29 | * It's the simplest way to send notification to Ceilometer, take a | |
30 | look at: *osprofiler.drivers.messaging.Messaging:notify* method | |
31 | ||
32 | * We don't need to add any new `CONF`_ options in projects | |
33 | ||
34 | Starting with OSProfiler 1.4.0, other options (MongoDB driver in the | |
35 | 1.4.0 release, Elasticsearch driver added later, etc.) are also available. | |
36 | ||
37 | How to initialize profiler, to get one trace across all services? | |
38 | ----------------------------------------------------------------- | |
39 | ||
40 | To enable cross-service profiling we need to pass trace info (base_id & | |
41 | trace_id) from caller to callee, so the callee will be able to init its | |
42 | profiler with these values. | |
43 | ||
44 | In case of OpenStack there are 2 kinds of interaction between 2 services: | |
45 | ||
46 | * REST API | |
47 | ||
48 | It's well known that there are python clients for every project, | |
49 | that generate proper HTTP requests, and parse responses to objects. | |
50 | ||
51 | These python clients are used in 2 cases: | |
52 | ||
53 | * User access -> OpenStack | |
54 | ||
55 | * Service from Project 1 would like to access Service from Project 2 | |
56 | ||
57 | ||
58 | So what we need is to: | |
59 | ||
60 | * Make python clients send headers with trace info (if profiler is inited) | |
61 | ||
62 | * Add `OSprofiler WSGI middleware`_ to your service; this initializes | |
63 | the profiler, if and only if there are special trace headers, that | |
64 | are signed by one of the HMAC keys from api-paste.ini (if multiple | |
65 | keys exist the signing process will continue to use the key that was | |
66 | accepted during validation). | |
67 | ||
68 | * The common items that are used to configure the middleware are the | |
69 | following (these can be provided when initializing the middleware | |
70 | object or when setting up the api-paste.ini file):: | |
71 | ||
72 | hmac_keys = KEY1, KEY2 (can be a single key as well) | |
73 | ||
74 | Actually the algorithm is a bit more complex. The Python client will | |
75 | also sign the trace info with an `HMAC`_ key (let's call that key ``A``) | |
76 | passed to profiler.init, and on reception the WSGI middleware will | |
77 | check that it's signed with *one of* the HMAC keys (the wsgi | |
78 | server should have key ``A`` as well, but may also have keys ``B`` | |
79 | and ``C``) that are specified in api-paste.ini. This ensures that only | |
80 | the user that knows the HMAC key ``A`` in api-paste.ini can init a | |
81 | profiler properly and send trace info that will be actually | |
82 | processed. Trace info that is sent in but | |
83 | does **not** pass the HMAC validation will be discarded. **NOTE:** The | |
84 | application of many possible *validation* keys makes it possible to | |
85 | roll out a key upgrade in a non-impactful manner (by adding a key into | |
86 | the list and rolling out that change and then removing the older key at | |
87 | some time in the future). | |
88 | ||
89 | * RPC API | |
90 | ||
91 | RPC calls are used for interaction between services of one project. | |
92 | It's well known that projects are using `oslo.messaging`_ to deal with | |
93 | RPC. This is very good, because all projects deal with RPC in a similar way. | |
94 | ||
95 | So there are 2 required changes: | |
96 | ||
97 | * On the caller side, put trace info into the request context (if the | |
98 | profiler was initialized) | |
99 | ||
100 | * On the callee side, initialize the profiler if there is trace info in | |
101 | the request context. | |
102 | ||
103 | * Trace all methods of callee API (can be done via profiler.trace_cls). | |
104 | ||
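The sign-with-one-key, validate-against-many scheme described for the WSGI middleware can be sketched with the standard library (helper names are hypothetical; OSProfiler's real signing lives in osprofiler._utils):

```python
import hashlib
import hmac

def sign(trace_info, hmac_key):
    # Deterministically serialize the trace info, then HMAC it.
    raw = repr(sorted(trace_info.items())).encode("utf-8")
    return hmac.new(hmac_key.encode("utf-8"), raw, hashlib.sha1).hexdigest()

def validate(trace_info, signature, hmac_keys):
    # Accept the trace if ANY configured key reproduces the signature; the
    # accepted key is returned so later requests keep signing with it.
    # Checking several keys is what makes key rotation non-impactful.
    for key in hmac_keys:
        if hmac.compare_digest(sign(trace_info, key), signature):
            return key
    return None

info = {"base_id": "b1", "trace_id": "t1"}
sig = sign(info, "A")
accepted = validate(info, sig, ["B", "A", "C"])  # server knows extra keys
```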
105 | ||
106 | What points should be tracked by default? | |
107 | ----------------------------------------- | |
108 | ||
109 | I think that for all projects we should include by default 5 kinds of points: | |
110 | ||
111 | * All HTTP calls - helps to get information about: what HTTP requests were | |
112 | done, duration of calls (latency of service), information about projects | |
113 | involved in request. | |
114 | ||
115 | * All RPC calls - helps to understand duration of parts of request related | |
116 | to different services in one project. This information is essential to | |
117 | understand which service produces the bottleneck. | |
118 | ||
119 | * All DB API calls - in some cases a slow DB query can produce a bottleneck. | |
120 | So it's quite useful to track how much time a request spends in the DB layer. | |
121 | ||
122 | * All driver calls - in case of nova, cinder and others we have vendor | |
123 | drivers. Tracking their duration helps to find slow vendor backends. | |
124 | ||
125 | * ALL SQL requests (turned off by default, because it produces a lot of | |
126 | traffic) | |
127 | ||
128 | .. _CONF: http://docs.openstack.org/developer/oslo.config/ | |
129 | .. _HMAC: http://en.wikipedia.org/wiki/Hash-based_message_authentication_code | |
130 | .. _OpenStack: http://openstack.org/ | |
131 | .. _Ceilometer: https://wiki.openstack.org/wiki/Ceilometer | |
132 | .. _oslo.messaging: https://pypi.python.org/pypi/oslo.messaging | |
133 | .. _OSprofiler WSGI middleware: https://github.com/openstack/osprofiler/blob/master/osprofiler/web.py |
0 | ================ | |
1 | Similar projects | |
2 | ================ | |
3 | ||
4 | Other projects (some alive, some abandoned, some research prototypes) | |
5 | that are similar (in idea and ideals) to OSprofiler. | |
6 | ||
7 | * `Zipkin`_ | |
8 | * `Dapper`_ | |
9 | * `Tomograph`_ | |
10 | * `HTrace`_ | |
11 | * `Jaeger`_ | |
12 | * `OpenTracing`_ | |
13 | ||
14 | .. _Zipkin: http://zipkin.io/ | |
15 | .. _Dapper: http://research.google.com/pubs/pub36356.html | |
16 | .. _Tomograph: https://github.com/stackforge/tomograph | |
17 | .. _HTrace: https://htrace.incubator.apache.org/ | |
18 | .. _Jaeger: https://uber.github.io/jaeger/ | |
19 | .. _OpenTracing: http://opentracing.io/ |
0 | .. | |
1 | This work is licensed under a Creative Commons Attribution 3.0 Unported | |
2 | License. | |
3 | ||
4 | http://creativecommons.org/licenses/by/3.0/legalcode | |
5 | ||
6 | .. | |
7 | This template should be in ReSTructured text. The filename in the git | |
8 | repository should match the launchpad URL, for example a URL of | |
9 | https://blueprints.launchpad.net/heat/+spec/awesome-thing should be named | |
10 | awesome-thing.rst . Please do not delete any of the sections in this | |
11 | template. If you have nothing to say for a whole section, just write: None | |
12 | For help with syntax, see http://sphinx-doc.org/rest.html | |
13 | To test out your formatting, see http://www.tele3.cz/jbar/rest/rest.html | |
14 | ||
15 | ====================================== | |
16 | Make api-paste.ini Arguments Optional | |
17 | ====================================== | |
18 | ||
19 | Problem description | |
20 | =================== | |
21 | ||
22 | Integration of OSprofiler with OpenStack projects is harder than it should be: | |
23 | it requires keeping some arguments inside api-paste.ini files and others in | |
24 | the project's .conf file. | |
25 | ||
26 | We should make all configuration options from the api-paste.ini file optional | |
27 | and add an alternative way to configure osprofiler.web.WsgiMiddleware. | |
28 | ||
29 | ||
30 | Proposed change | |
31 | =============== | |
32 | ||
33 | Integration of OSprofiler requires 2 changes in api-paste.ini file: | |
34 | ||
35 | - One is adding osprofiler.web.WsgiMiddleware to pipelines: | |
36 | https://github.com/openstack/cinder/blob/master/etc/cinder/api-paste.ini#L13 | |
37 | ||
38 | - Another is to add its arguments: | |
39 | https://github.com/openstack/cinder/blob/master/etc/cinder/api-paste.ini#L31-L32 | |
40 | ||
41 | so WsgiMiddleware will be correctly initialized here: | |
42 | https://github.com/openstack/osprofiler/blob/51761f375189bdc03b7e72a266ad0950777f32b1/osprofiler/web.py#L64 | |
43 | ||
44 | We should make the ``hmac_keys`` and ``enabled`` variables optional, create a | |
45 | separate method for initialization of the wsgi middleware, and cut a new | |
46 | release. After that, remove the api-paste.ini arguments from the projects. | |
47 | ||
48 | ||
49 | Alternatives | |
50 | ------------ | |
51 | ||
52 | None. | |
53 | ||
54 | ||
55 | Implementation | |
56 | ============== | |
57 | ||
58 | Assignee(s) | |
59 | ----------- | |
60 | ||
61 | Primary assignee: | |
62 | dbelova | |
63 | ||
64 | Work Items | |
65 | ---------- | |
66 | ||
67 | - Modify osprofiler.web.WsgiMiddleware to make ``hmac_keys`` optional (done) | |
68 | ||
69 | - Add alternative way to setup osprofiler.web.WsgiMiddleware, e.g. extra | |
70 | argument hmac_keys to enable() method (done) | |
71 | ||
72 | - Cut new release 0.3.1 (tbd) | |
73 | ||
74 | - Fix the code in all projects: remove api-paste.ini arguments and use | |
75 | osprofiler.web.enable with extra argument (tbd) | |
76 | ||
77 | ||
78 | Dependencies | |
79 | ============ | |
80 | ||
81 | - Cinder, Glance, Trove - projects should be fixed |
0 | .. | |
1 | This work is licensed under a Creative Commons Attribution 3.0 Unported | |
2 | License. | |
3 | ||
4 | http://creativecommons.org/licenses/by/3.0/legalcode | |
5 | ||
6 | .. | |
7 | This template should be in ReSTructured text. The filename in the git | |
8 | repository should match the launchpad URL, for example a URL of | |
9 | https://blueprints.launchpad.net/heat/+spec/awesome-thing should be named | |
10 | awesome-thing.rst . Please do not delete any of the sections in this | |
11 | template. If you have nothing to say for a whole section, just write: None | |
12 | For help with syntax, see http://sphinx-doc.org/rest.html | |
13 | To test out your formatting, see http://www.tele3.cz/jbar/rest/rest.html | |
14 | ||
15 | ===================== | |
16 | Multi backend support | |
17 | ===================== | |
18 | ||
19 | Make OSProfiler more flexible and production ready. | |
20 | ||
21 | Problem description | |
22 | =================== | |
23 | ||
24 | Currently OSprofiler works only with one backend, Ceilometer, which actually | |
25 | doesn't work well and adds huge overhead. Moreover, Ceilometer is often not | |
26 | installed/used at all. To resolve this we should add support for different | |
27 | backends like: MongoDB, InfluxDB, ElasticSearch, ... | |
28 | ||
29 | ||
30 | Proposed change | |
31 | =============== | |
32 | ||
33 | Add a new osprofiler.drivers mechanism; each driver will do 2 things: | |
34 | send notifications and parse all notifications into a unified tree structure | |
35 | that can be processed by the REST lib. | |
36 | ||
37 | Deprecate osprofiler.notifiers and osprofiler.parsers | |
38 | ||
39 | Change all projects that are using OSprofiler to new model | |
40 | ||
41 | Alternatives | |
42 | ------------ | |
43 | ||
44 | I don't know any good alternative. | |
45 | ||
46 | Implementation | |
47 | ============== | |
48 | ||
49 | Assignee(s) | |
50 | ----------- | |
51 | ||
52 | Primary assignees: | |
53 | dbelova | |
54 | ayelistratov | |
55 | ||
56 | ||
57 | Work Items | |
58 | ---------- | |
59 | ||
60 | To add support of multiple backends we should change a few places in | |
61 | osprofiler that are hardcoded to Ceilometer: | |
62 | ||
63 | - CLI command ``show``: | |
64 | ||
65 | I believe we should add an extra argument ``connection_string`` which will | |
66 | allow people to specify where the backend is. It will look like: | |
67 | <backend_type>://[[user[:password]]@[address][:port][/database]] | |
68 | ||
69 | - Merge osprofiler.notifiers and osprofiler.parsers to osprofiler.drivers | |
70 | ||
71 | Notifiers and parsers are tightly related; e.g. for the MongoDB notifier you | |
72 | should use MongoDB parsers, so it is better to keep both | |
73 | in the same place. | |
74 | ||
75 | This change should be done while keeping backward compatibility; | |
76 | in other words, | |
77 | we should create a separate directory osprofiler.drivers, put | |
78 | Ceilometer there first, and then start working on other backends. | |
79 | ||
80 | These drivers will be chosen based on the connection string. | |
81 | ||
82 | - Deprecate osprofiler.notifiers and osprofiler.parsers | |
83 | ||
84 | - Switch all projects to new model with connection string | |
85 | ||
86 | ||
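The proposed connection-string dispatch can be sketched with the standard library (the driver mapping below is purely illustrative, not the real registry):

```python
from urllib.parse import urlparse

# Illustrative mapping from backend type to a driver module name.
DRIVERS = {"ceilometer": "osprofiler.drivers.ceilometer",
           "mongodb": "osprofiler.drivers.mongodb"}

def choose_driver(connection_string):
    # <backend_type>://[[user[:password]]@[address][:port][/database]]
    parsed = urlparse(connection_string)
    driver = DRIVERS.get(parsed.scheme)
    if driver is None:
        raise ValueError("No driver for backend: %s" % parsed.scheme)
    return driver, parsed.hostname, parsed.port

driver, host, port = choose_driver(
    "mongodb://user:secret@db.example.org:27017/profiler")
```
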
87 | Dependencies | |
88 | ============ | |
89 | ||
90 | - Cinder, Glance, Trove, Heat should be changed |
51 | 51 | |
52 | 52 | - Make DevStack plugin for OSprofiler |
53 | 53 | |
54 | - Configure Celiometer | |
54 | - Configure Ceilometer | |
55 | 55 | |
56 | 56 | - Configure services that support OSprofiler |
57 | 57 |
0 | .. | |
1 | This work is licensed under a Creative Commons Attribution 3.0 Unported | |
2 | License. | |
3 | ||
4 | http://creativecommons.org/licenses/by/3.0/legalcode | |
5 | ||
6 | .. | |
7 | This template should be in ReSTructured text. The filename in the git | |
8 | repository should match the launchpad URL, for example a URL of | |
9 | https://blueprints.launchpad.net/heat/+spec/awesome-thing should be named | |
10 | awesome-thing.rst . Please do not delete any of the sections in this | |
11 | template. If you have nothing to say for a whole section, just write: None | |
12 | For help with syntax, see http://sphinx-doc.org/rest.html | |
13 | To test out your formatting, see http://www.tele3.cz/jbar/rest/rest.html | |
14 | ||
15 | ====================== | |
16 | Multi backend support | |
17 | ====================== | |
18 | ||
19 | Make OSProfiler more flexible and production ready. | |
20 | ||
21 | Problem description | |
22 | =================== | |
23 | ||
24 | Currently OSprofiler works only with one backend, Ceilometer, which actually | |
25 | doesn't work well and adds huge overhead. Moreover, Ceilometer is often not | |
26 | installed/used at all. To resolve this we should add support for different | |
27 | backends like: MongoDB, InfluxDB, ElasticSearch, ... | |
28 | ||
29 | ||
30 | Proposed change | |
31 | =============== | |
32 | ||
33 | Add a new osprofiler.drivers mechanism; each driver will do 2 things: | |
34 | send notifications and parse all notifications into a unified tree structure | |
35 | that can be processed by the REST lib. | |
36 | ||
37 | Deprecate osprofiler.notifiers and osprofiler.parsers | |
38 | ||
39 | Change all projects that are using OSprofiler to new model | |
40 | ||
41 | Alternatives | |
42 | ------------ | |
43 | ||
44 | I don't know any good alternative. | |
45 | ||
46 | Implementation | |
47 | ============== | |
48 | ||
49 | Assignee(s) | |
50 | ----------- | |
51 | ||
52 | Primary assignee: | |
53 | <launchpad-id or None> | |
54 | ||
55 | ||
56 | Work Items | |
57 | ---------- | |
58 | ||
59 | To add support of multiple backends we should change a few places in | |
60 | osprofiler that are hardcoded to Ceilometer: | |
61 | ||
62 | - CLI command ``show``: | |
63 | ||
64 | I believe we should add an extra argument ``connection_string`` which will | |
65 | allow people to specify where the backend is. It will look like: | |
66 | <backend_type>://[[user[:password]]@[address][:port][/database]] | |
67 | ||
68 | - Merge osprofiler.notifiers and osprofiler.parsers to osprofiler.drivers | |
69 | ||
70 | Notifiers and parsers are tightly related; e.g. for the MongoDB notifier you | |
71 | should use MongoDB parsers, so it is better to keep both | |
72 | in the same place. | |
73 | ||
74 | This change should be done while keeping backward compatibility; in other words | |
75 | we should create a separate directory osprofiler.drivers and put | |
76 | Ceilometer there first, then start working on other backends. | |
77 | ||
78 | These drivers will be chosen based on the connection string. | |
79 | ||
80 | - Deprecate osprofiler.notifiers and osprofiler.parsers | |
81 | ||
82 | - Cut new release 0.4.2 | |
83 | ||
84 | - Switch all projects to new model with connection string | |
85 | ||
86 | ||
87 | Dependencies | |
88 | ============ | |
89 | ||
90 | - Cinder, Glance, Trove should be changed |
12 | 12 | # License for the specific language governing permissions and limitations |
13 | 13 | # under the License. |
14 | 14 | |
15 | import os | |
15 | import pkg_resources | |
16 | 16 | |
17 | from six.moves import configparser | |
18 | ||
19 | from osprofiler import _utils as utils | |
20 | ||
21 | ||
22 | utils.import_modules_from_package("osprofiler._notifiers") | |
23 | ||
24 | _conf = configparser.ConfigParser() | |
25 | _conf.read(os.path.join( | |
26 | os.path.dirname(os.path.dirname(__file__)), "setup.cfg")) | |
27 | try: | |
28 | __version__ = _conf.get("metadata", "version") | |
29 | except (configparser.NoOptionError, configparser.NoSectionError): | |
30 | __version__ = None | |
17 | __version__ = pkg_resources.get_distribution("osprofiler").version |
0 | # Copyright 2014 Mirantis Inc. | |
1 | # All Rights Reserved. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
4 | # not use this file except in compliance with the License. You may obtain | |
5 | # a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
12 | # License for the specific language governing permissions and limitations | |
13 | # under the License. | |
14 | ||
15 | from osprofiler import _utils as utils | |
16 | ||
17 | ||
18 | class Notifier(object): | |
19 | ||
20 | def notify(self, info, context=None): | |
21 | """This method will be called on each notifier.notify() call. | |
22 | ||
23 | To add new drivers you should create a new subclass of this class and | |
24 | implement the notify method. | |
25 | ||
26 | :param info: Contains information about trace element. | |
27 | In payload dict there are always 3 ids: | |
28 | "base_id" - uuid that is common for all notifications | |
29 | related to one trace. Used to simplify | |
30 | retrieving of all trace elements from | |
31 | Ceilometer. | |
32 | "parent_id" - uuid of parent element in trace | |
33 | "trace_id" - uuid of current element in trace | |
34 | ||
35 | With parent_id and trace_id it's quite simple to build a | |
36 | tree of trace elements, which simplifies analysis of the trace. | |
37 | ||
38 | :param context: request context that is mostly used to specify | |
39 | current active user and tenant. | |
40 | """ | |
41 | ||
42 | @staticmethod | |
43 | def factory(name, *args, **kwargs): | |
44 | for driver in utils.itersubclasses(Notifier): | |
45 | if name == driver.__name__: | |
46 | return driver(*args, **kwargs).notify | |
47 | ||
48 | raise TypeError("There is no driver with name: %s" % name) |
0 | # Copyright 2014 Mirantis Inc. | |
1 | # All Rights Reserved. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
4 | # not use this file except in compliance with the License. You may obtain | |
5 | # a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
12 | # License for the specific language governing permissions and limitations | |
13 | # under the License. | |
14 | ||
15 | from osprofiler._notifiers import base | |
16 | ||
17 | ||
18 | class Messaging(base.Notifier): | |
19 | ||
20 | def __init__(self, messaging, context, transport, project, service, host): | |
21 | """Init Messaging notify driver. | |
22 | ||
23 | """ | |
24 | super(Messaging, self).__init__() | |
25 | self.messaging = messaging | |
26 | self.context = context | |
27 | self.project = project | |
28 | self.service = service | |
29 | ||
30 | self.notifier = messaging.Notifier( | |
31 | transport, publisher_id=host, driver="messaging", | |
32 | topic="profiler", retry=0) | |
33 | ||
34 | def notify(self, info, context=None): | |
35 | """Send notifications to Ceilometer via oslo.messaging notifier API. | |
36 | ||
37 | :param info: Contains information about trace element. | |
38 | In payload dict there are always 3 ids: | |
39 | "base_id" - uuid that is common for all notifications | |
40 | related to one trace. Used to simplify | |
41 | retrieving of all trace elements from | |
42 | Ceilometer. | |
43 | "parent_id" - uuid of parent element in trace | |
44 | "trace_id" - uuid of current element in trace | |
45 | ||
46 | With parent_id and trace_id it is straightforward to build | |
47 | a tree of trace elements, which simplifies trace analysis. | |
48 | ||
49 | :param context: request context that is mostly used to specify | |
50 | current active user and tenant. | |
51 | """ | |
52 | ||
53 | info["project"] = self.project | |
54 | info["service"] = self.service | |
55 | self.notifier.info(context or self.context, | |
56 | "profiler.%s" % self.service, info) |
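Each notification payload carries the three ids described in the docstring. A hedged sketch of the kind of `info` dict a driver receives (field values are made up for illustration; the real payload contains additional timing and metadata fields):

```python
import uuid


def build_payload(base_id, parent_id, name, project, service):
    # base_id is shared by all elements of one trace; parent_id and
    # trace_id link the elements into a tree.
    return {
        "base_id": base_id,
        "parent_id": parent_id,
        "trace_id": str(uuid.uuid4()),
        "name": name,
        "project": project,
        "service": service,
    }


payload = build_payload(str(uuid.uuid4()), None, "wsgi-start", "nova", "api")
```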
18 | 18 | import json |
19 | 19 | import os |
20 | 20 | |
21 | from oslo_utils import secretutils | |
21 | 22 | import six |
22 | ||
23 | try: | |
24 | # Only in python 2.7.7+ (and python 3.3+) | |
25 | # https://docs.python.org/2/library/hmac.html#hmac.compare_digest | |
26 | from hmac import compare_digest # noqa | |
27 | except (AttributeError, ImportError): | |
28 | # Taken/slightly modified from: | |
29 | # https://mail.python.org/pipermail/python-checkins/2012-June/114532.html | |
30 | def compare_digest(a, b): | |
31 | """Returns the equivalent of 'a == b'. | |
32 | ||
33 | This method avoids content based short circuiting to reduce the | |
34 | vulnerability to timing attacks. | |
35 | """ | |
36 | # We assume the length of the expected digest is public knowledge, | |
37 | # thus this early return isn't leaking anything an attacker wouldn't | |
38 | # already know | |
39 | if len(a) != len(b): | |
40 | return False | |
41 | ||
42 | # We assume that integers in the bytes range are all cached, | |
43 | # thus timing shouldn't vary much due to integer object creation | |
44 | result = 0 | |
45 | for x, y in zip(a, b): | |
46 | result |= ord(x) ^ ord(y) | |
47 | return result == 0 | |
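The removed fallback above implements a constant-time comparison by XOR-ing every byte pair instead of returning at the first mismatch. Modern code can delegate to the stdlib `hmac.compare_digest`, which provides the same guarantee:

```python
import hmac


def safe_equals(expected, received):
    # Compare two digests without content-based short-circuiting, so
    # the comparison time does not leak how many leading bytes matched.
    return hmac.compare_digest(expected, received)
```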
48 | 23 | |
49 | 24 | |
50 | 25 | def split(text, strip=True): |
127 | 102 | for hmac_key in hmac_keys: |
128 | 103 | try: |
129 | 104 | user_hmac_data = generate_hmac(data, hmac_key) |
130 | except Exception: | |
105 | except Exception: # nosec | |
131 | 106 | pass |
132 | 107 | else: |
133 | if compare_digest(hmac_data, user_hmac_data): | |
108 | if secretutils.constant_time_compare(hmac_data, user_hmac_data): | |
134 | 109 | try: |
135 | 110 | contents = json.loads( |
136 | 111 | binary_decode(base64.urlsafe_b64decode(data))) |
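The validation loop above regenerates the HMAC for each configured key and accepts the data if any key reproduces the received digest. A self-contained sketch of that sign/verify roundtrip (the `generate_hmac` helper and the SHA-256 digest choice here are assumptions for illustration, not OSProfiler's exact implementation):

```python
import hashlib
import hmac


def generate_hmac(data, hmac_key):
    # Sign the payload with the given key; digest choice is illustrative.
    h = hmac.new(hmac_key.encode("utf-8"), digestmod=hashlib.sha256)
    h.update(data.encode("utf-8"))
    return h.hexdigest()


def validate(data, hmac_data, hmac_keys):
    # Accept if any configured key reproduces the received digest,
    # using a constant-time comparison for each candidate.
    for key in hmac_keys:
        candidate = generate_hmac(data, key)
        if hmac.compare_digest(hmac_data, candidate):
            return True
    return False
```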
15 | 15 | import json |
16 | 16 | import os |
17 | 17 | |
18 | from oslo_utils import uuidutils | |
19 | ||
18 | 20 | from osprofiler.cmd import cliutils |
19 | from osprofiler.cmd import exc | |
20 | from osprofiler.parsers import ceilometer as ceiloparser | |
21 | from osprofiler.drivers import base | |
22 | from osprofiler import exc | |
21 | 23 | |
22 | 24 | |
23 | 25 | class BaseCommand(object): |
28 | 30 | group_name = "trace" |
29 | 31 | |
30 | 32 | @cliutils.arg("trace", help="File with trace or trace id") |
33 | @cliutils.arg("--connection-string", dest="conn_str", | |
34 | default=(cliutils.env("OSPROFILER_CONNECTION_STRING") or | |
35 | "ceilometer://"), | |
36 | help="Storage driver's connection string. Defaults to " | |
37 | "env[OSPROFILER_CONNECTION_STRING] if set, else " | |
38 | "ceilometer://") | |
31 | 39 | @cliutils.arg("--json", dest="use_json", action="store_true", |
32 | 40 | help="show trace in JSON") |
33 | 41 | @cliutils.arg("--html", dest="use_html", action="store_true", |
34 | 42 | help="show trace in HTML") |
43 | @cliutils.arg("--dot", dest="use_dot", action="store_true", | |
44 | help="show trace in DOT language") | |
45 | @cliutils.arg("--render-dot", dest="render_dot_filename", | |
46 | help="filename for rendering the dot graph in pdf format") | |
35 | 47 | @cliutils.arg("--out", dest="file_name", help="save output in file") |
36 | 48 | def show(self, args): |
37 | """Displays trace-results by given trace id in HTML or JSON format.""" | |
49 | """Display trace results in HTML, JSON or DOT format.""" | |
38 | 50 | |
39 | 51 | trace = None |
40 | 52 | |
41 | if os.path.exists(args.trace): | |
53 | if not uuidutils.is_uuid_like(args.trace): | |
42 | 54 | trace = json.load(open(args.trace)) |
43 | 55 | else: |
44 | 56 | try: |
45 | import ceilometerclient.client | |
46 | import ceilometerclient.exc | |
47 | import ceilometerclient.shell | |
48 | except ImportError: | |
49 | raise ImportError( | |
50 | "To use this command, you should install " | |
51 | "'ceilometerclient' manually. Use command:\n " | |
52 | "'pip install ceilometerclient'.") | |
53 | try: | |
54 | client = ceilometerclient.client.get_client( | |
55 | args.ceilometer_api_version, **args.__dict__) | |
56 | notifications = ceiloparser.get_notifications( | |
57 | client, args.trace) | |
57 | engine = base.get_driver(args.conn_str, **args.__dict__) | |
58 | 58 | except Exception as e: |
59 | if hasattr(e, "http_status") and e.http_status == 401: | |
60 | msg = "Invalid OpenStack Identity credentials." | |
61 | else: | |
62 | msg = "Something has gone wrong. See logs for more details" | |
63 | raise exc.CommandError(msg) | |
59 | raise exc.CommandError(e.message) | |
64 | 60 | |
65 | if notifications: | |
66 | trace = ceiloparser.parse_notifications(notifications) | |
61 | trace = engine.get_report(args.trace) | |
67 | 62 | |
68 | if not trace: | |
69 | msg = ("Trace with UUID %s not found. " | |
70 | "There are 3 possible reasons: \n" | |
71 | " 1) You are using not admin credentials\n" | |
72 | " 2) You specified wrong trace id\n" | |
73 | " 3) You specified wrong HMAC Key in original calling" | |
74 | % args.trace) | |
63 | if not trace or not trace.get("children"): | |
64 | msg = ("Trace with UUID %s not found. Please check the HMAC key " | |
65 | "used in the command." % args.trace) | |
75 | 66 | raise exc.CommandError(msg) |
76 | 67 | |
68 | # NOTE(ayelistratov): Ceilometer translates datetime objects to | |
69 | # strings, other drivers store this data in ISO Date format. | |
70 | # Since datetime.datetime is not JSON serializable by default, | |
71 | # this method will handle that. | |
72 | def datetime_json_serialize(obj): | |
73 | if hasattr(obj, "isoformat"): | |
74 | return obj.isoformat() | |
75 | else: | |
76 | return obj | |
77 | ||
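The `default=` hook used below lets `json.dumps` fall back to `isoformat()` for datetime objects that are not JSON serializable by themselves. In isolation:

```python
import datetime
import json


def datetime_json_serialize(obj):
    # json.dumps calls this only for objects it cannot serialize itself.
    if hasattr(obj, "isoformat"):
        return obj.isoformat()
    return obj


trace = {"started": datetime.datetime(2017, 1, 1, 12, 0, 0)}
output = json.dumps(trace, default=datetime_json_serialize)
```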
77 | 78 | if args.use_json: |
78 | output = json.dumps(trace) | |
79 | output = json.dumps(trace, default=datetime_json_serialize, | |
80 | separators=(",", ": "), | |
81 | indent=2) | |
79 | 82 | elif args.use_html: |
80 | 83 | with open(os.path.join(os.path.dirname(__file__), |
81 | 84 | "template.html")) as html_template: |
82 | 85 | output = html_template.read().replace( |
83 | "$DATA", json.dumps(trace, indent=2)) | |
86 | "$DATA", json.dumps(trace, indent=4, | |
87 | separators=(",", ": "), | |
88 | default=datetime_json_serialize)) | |
89 | elif args.use_dot: | |
90 | dot_graph = self._create_dot_graph(trace) | |
91 | output = dot_graph.source | |
92 | if args.render_dot_filename: | |
93 | dot_graph.render(args.render_dot_filename, cleanup=True) | |
84 | 94 | else: |
85 | 95 | raise exc.CommandError("You should choose one of the following " |
86 | "output-formats: --json or --html.") | |
96 | "output formats: json, html or dot.") | |
87 | 97 | |
88 | 98 | if args.file_name: |
89 | 99 | with open(args.file_name, "w+") as output_file: |
90 | 100 | output_file.write(output) |
91 | 101 | else: |
92 | print (output) | |
102 | print(output) | |
103 | ||
104 | def _create_dot_graph(self, trace): | |
105 | try: | |
106 | import graphviz | |
107 | except ImportError: | |
108 | raise exc.CommandError( | |
109 | "graphviz library is required to use this option.") | |
110 | ||
111 | dot = graphviz.Digraph(format="pdf") | |
112 | next_id = [0] | |
113 | ||
114 | def _create_node(info): | |
115 | time_taken = info["finished"] - info["started"] | |
116 | service = info["service"] + ":" if "service" in info else "" | |
117 | name = info["name"] | |
118 | label = "%s%s - %d ms" % (service, name, time_taken) | |
119 | ||
120 | if name == "wsgi": | |
121 | req = info["meta.raw_payload.wsgi-start"]["info"]["request"] | |
122 | label = "%s\\n%s %s.." % (label, req["method"], | |
123 | req["path"][:30]) | |
124 | elif name == "rpc" or name == "driver": | |
125 | raw = info["meta.raw_payload.%s-start" % name] | |
126 | fn_name = raw["info"]["function"]["name"] | |
127 | label = "%s\\n%s" % (label, fn_name.split(".")[-1]) | |
128 | ||
129 | node_id = str(next_id[0]) | |
130 | next_id[0] += 1 | |
131 | dot.node(node_id, label) | |
132 | return node_id | |
133 | ||
134 | def _create_sub_graph(root): | |
135 | rid = _create_node(root["info"]) | |
136 | for child in root["children"]: | |
137 | cid = _create_sub_graph(child) | |
138 | dot.edge(rid, cid) | |
139 | return rid | |
140 | ||
141 | _create_sub_graph(trace) | |
142 | return dot |
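`_create_dot_graph` above requires the third-party `graphviz` package, but the node/edge recursion itself can be shown without it by emitting DOT text directly. A simplified sketch (label formatting reduced to name and duration):

```python
def trace_to_dot(trace):
    # Walk the trace tree, assigning integer node ids and emitting one
    # node line plus one edge line per parent/child pair.
    lines = ["digraph {"]
    next_id = [0]

    def visit(node):
        node_id = str(next_id[0])
        next_id[0] += 1
        info = node["info"]
        label = "%s - %d ms" % (info["name"],
                                info["finished"] - info["started"])
        lines.append('  %s [label="%s"];' % (node_id, label))
        for child in node.get("children", []):
            child_id = visit(child)
            lines.append("  %s -> %s;" % (node_id, child_id))
        return node_id

    visit(trace)
    lines.append("}")
    return "\n".join(lines)
```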
0 | # Copyright 2014 Mirantis Inc. | |
1 | # All Rights Reserved. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
4 | # not use this file except in compliance with the License. You may obtain | |
5 | # a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
12 | # License for the specific language governing permissions and limitations | |
13 | # under the License. | |
14 | ||
15 | ||
16 | class CommandError(Exception): | |
17 | """Invalid usage of CLI.""" | |
18 | ||
19 | def __init__(self, message=None): | |
20 | self.message = message | |
21 | ||
22 | def __str__(self): | |
23 | return self.message or self.__class__.__doc__ |
17 | 17 | Command-line interface to the OpenStack Profiler. |
18 | 18 | """ |
19 | 19 | |
20 | import argparse | |
20 | 21 | import inspect |
21 | 22 | import sys |
22 | 23 | |
23 | import argparse | |
24 | from oslo_config import cfg | |
24 | 25 | |
25 | 26 | import osprofiler |
26 | 27 | from osprofiler.cmd import cliutils |
27 | 28 | from osprofiler.cmd import commands |
28 | from osprofiler.cmd import exc | |
29 | from osprofiler import exc | |
30 | from osprofiler import opts | |
29 | 31 | |
30 | 32 | |
31 | 33 | class OSProfilerShell(object): |
32 | 34 | |
33 | 35 | def __init__(self, argv): |
34 | 36 | args = self._get_base_parser().parse_args(argv) |
37 | opts.set_defaults(cfg.CONF) | |
35 | 38 | |
36 | 39 | if not (args.os_auth_token and args.ceilometer_url): |
37 | 40 | if not args.os_username: |
234 | 237 | try: |
235 | 238 | OSProfilerShell(args) |
236 | 239 | except exc.CommandError as e: |
237 | print (e.message) | |
240 | print(e.message) | |
238 | 241 | return 1 |
239 | 242 | |
240 | 243 |
0 | 0 | <!doctype html> |
1 | <html ng-app="Application"> | |
2 | ||
3 | <head> | |
1 | <html ng-app="app"> | |
2 | ||
3 | <head> | |
4 | <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css"> | |
5 | <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/9.8.0/styles/github.min.css"> | |
6 | <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.6.0/angular.min.js"></script> | |
7 | <script src="https://angular-ui.github.io/bootstrap/ui-bootstrap-tpls-2.3.1.min.js"></script> | |
8 | <script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/9.8.0/highlight.min.js"></script> | |
9 | <script src="https://pc035860.github.io/angular-highlightjs/angular-highlightjs.min.js"></script> | |
10 | <style> | |
11 | .trace { | |
12 | min-width: 900px; | |
13 | width: 100%; | |
14 | } | |
15 | ||
16 | .trace tr:hover { | |
17 | background-color: #D9EDF7 !important; | |
18 | } | |
19 | ||
20 | .trace tr td { | |
21 | width: 14%; | |
22 | white-space: nowrap; | |
23 | padding: 2px; | |
24 | border-right: 1px solid #EEE; | |
25 | } | |
26 | ||
27 | .trace tr td.details { | |
28 | width: 10%; | |
29 | padding-right: 20px; | |
30 | } | |
31 | ||
32 | .trace.cursor_pointer_on_hover { | |
33 | cursor: pointer; | |
34 | } | |
35 | ||
36 | .trace .level { | |
37 | width: 10%; | |
38 | font-weight: bold; | |
39 | } | |
40 | ||
41 | .bold { | |
42 | font-weight: bold; | |
43 | } | |
44 | ||
45 | .duration { | |
46 | width: 25px; | |
47 | margin: 0px; | |
48 | padding: 0px; | |
49 | background-color: #C6EFF3; | |
50 | border-radius: 4px; | |
51 | font-size: 10px; | |
52 | } | |
53 | ||
54 | .duration div { | |
55 | padding-top: 4px; | |
56 | padding-bottom: 4px; | |
57 | text-align: center; | |
58 | } | |
59 | ||
60 | dl { | |
61 | margin: 5px 0px; | |
62 | } | |
63 | ||
64 | .hljs { | |
65 | white-space: pre; | |
66 | word-wrap: normal; | |
67 | } | |
68 | ||
69 | .hljs.wrapped { | |
70 | white-space: pre-wrap; | |
71 | } | |
72 | ||
73 | .toggle-button { | |
74 | margin-top: -7px; | |
75 | } | |
76 | ||
77 | </style> | |
4 | 78 | <script> |
5 | var OSProfilerData = $DATA | |
79 | (function(angular) { | |
80 | 'use strict'; | |
81 | ||
82 | var OSProfilerData = $DATA; | |
83 | ||
84 | angular | |
85 | .module('app', ['ui.bootstrap', 'hljs']) | |
86 | .config(['$rootScopeProvider', function ($rootScopeProvider) { | |
87 | $rootScopeProvider.digestTtl(50); | |
88 | }]) | |
89 | .config(['hljsServiceProvider', function (hljsServiceProvider) { | |
90 | hljsServiceProvider.setOptions({ | |
91 | // replace tab with 4 spaces | |
92 | tabReplace: ' ' | |
93 | }); | |
94 | }]) | |
95 | .controller('ProfilerController', ProfilerController) | |
96 | .controller('ModalInstanceController', ModalInstanceController); | |
97 | ||
98 | // Inject services | |
99 | ProfilerController.$inject = ['$uibModal']; | |
100 | ModalInstanceController.$inject = ['$uibModalInstance', 'info']; | |
101 | ||
102 | function ProfilerController($uibModal) { | |
103 | // NOTE(tovin07): Bind this to vm. This is controller as and vm syntax. | |
104 | // This style is mainstream now. It replaces $scope style. | |
105 | // Ref: https://johnpapa.net/angularjss-controller-as-and-the-vm-variable/ | |
106 | var vm = this; | |
107 | ||
108 | var converInput = function(input, level) { | |
109 | level = (level) ? level : 0; | |
110 | input.level = level; | |
111 | input.is_leaf = !input.children.length; | |
112 | ||
113 | for (var i = 0; i < input.children.length; i++) { | |
114 | converInput(input.children[i], level + 1); | |
115 | } | |
116 | return input; | |
117 | }; | |
118 | ||
119 | vm.isLastTrace = function(started) { | |
120 | if (started >=0 && started == vm.tree[0].info.last_trace_started) { | |
121 | return true; | |
122 | } | |
123 | return false; | |
124 | }; | |
125 | ||
126 | vm.getWidth = function(data) { | |
127 | var fullDuration = vm.tree[0].info.finished; | |
128 | var duration = (data.info.finished - data.info.started) * 100.0 / fullDuration; | |
129 | return (duration >= 0.5) ? duration : 0.5; | |
130 | }; | |
131 | ||
132 | vm.getStarted = function(data) { | |
133 | var fullDuration = vm.tree[0].info.finished; | |
134 | return data.info.started * 100.0 / fullDuration; | |
135 | }; | |
136 | ||
137 | vm.isImportance = function(data) { | |
138 | return ['total', 'wsgi', 'rpc'].indexOf(data.info.name) != -1; | |
139 | }; | |
140 | ||
141 | vm.display = function(data) { | |
142 | $uibModal.open({ | |
143 | animation: true, | |
144 | templateUrl: 'modal_renderer.html', | |
145 | controller: 'ModalInstanceController', | |
146 | controllerAs: 'modal', | |
147 | size: 'lg', | |
148 | resolve: { | |
149 | info: function() { | |
150 | return angular.copy(data.info); | |
151 | } | |
152 | } | |
153 | }); | |
154 | }; | |
155 | ||
156 | vm.tree = [converInput(OSProfilerData)]; | |
157 | } | |
158 | ||
159 | function ModalInstanceController($uibModalInstance, info){ | |
160 | var modal = this; | |
161 | var metadata = {}; | |
162 | angular.forEach(info, function(value, key) { | |
163 | var parts = key.split('.'); | |
164 | var metaText = 'meta'; | |
165 | if (parts[0] == metaText) { | |
166 | if (parts.length == 2) { | |
167 | this[parts[1]] = value; | |
168 | } else { | |
169 | var groupName = parts[1]; | |
170 | if (!(groupName in this)) { | |
171 | this[groupName] = {}; | |
172 | } | |
173 | // Plus 2 for 2 dots such as: meta.raw_payload.heat.wsgi-start | |
174 | var index = metaText.length + parts[1].length + 2; | |
175 | this[groupName][key.slice(index)] = value; | |
176 | } | |
177 | }; | |
178 | }, metadata); | |
179 | ||
180 | info.duration = info.finished - info.started; | |
181 | // Escape single-quotes to prevent angular parse lexerr | |
182 | info.metadata = JSON.stringify(metadata, null, 4).replace(/'/g, "\\'"); | |
183 | ||
184 | // Bind to view model | |
185 | modal.info = info; | |
186 | ||
187 | modal.columns = [ | |
188 | ['name', 'project', 'service', 'host'], | |
189 | ['started', 'finished', 'duration', 'exception'] | |
190 | ]; | |
191 | ||
192 | modal.toggleWrap = function() { | |
193 | var element = angular.element(document.querySelector('code.hljs')); | |
194 | ||
195 | var wrappedClass = 'wrapped'; | |
196 | var isWrapped = element.hasClass(wrappedClass); | |
197 | if (isWrapped) { | |
198 | element.removeClass(wrappedClass); | |
199 | } else { | |
200 | element.addClass(wrappedClass); | |
201 | } | |
202 | }; | |
203 | ||
204 | modal.close = function() { | |
205 | $uibModalInstance.dismiss('close'); | |
206 | }; | |
207 | } | |
208 | })(window.angular); | |
6 | 209 | </script> |
7 | ||
8 | <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.2.0/css/bootstrap.min.css"> | |
9 | <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.2.0/css/bootstrap-theme.min.css"> | |
10 | ||
11 | <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.10/angular.min.js"></script> | |
12 | <script src="https://angular-ui.github.io/bootstrap/ui-bootstrap-tpls-0.11.0.js"></script> | |
13 | ||
14 | <style> | |
15 | .trace { | |
16 | min-width: 900px; | |
17 | width: 100%; | |
18 | } | |
19 | .trace tr:hover { | |
20 | background-color: #D9EDF7!important; | |
21 | } | |
22 | .trace tr td { | |
23 | width: 14%; | |
24 | white-space: nowrap; | |
25 | padding: 2px; | |
26 | border-right: 1px solid #eee; | |
27 | } | |
28 | .trace tr td.details { | |
29 | width: 10%; | |
30 | padding-right: 20px; | |
31 | } | |
32 | .trace.cursor_pointer_on_hover { | |
33 | cursor: pointer; | |
34 | } | |
35 | .trace .level { | |
36 | width: 10%; | |
37 | font-weight: bold; | |
38 | } | |
39 | .bold { | |
40 | font-weight: bold; | |
41 | } | |
42 | .duration { | |
43 | width: 25px; | |
44 | margin: 0px; | |
45 | padding: 0px; | |
46 | background-color: #c6eff3; | |
47 | border-radius: 4px; | |
48 | font-size: 10px; | |
49 | } | |
50 | .duration div{ | |
51 | padding-top: 4px; | |
52 | padding-bottom: 4px; | |
53 | text-align: center; | |
54 | } | |
55 | </style> | |
56 | ||
57 | <script type="text/ng-template" id="tree_item_renderer.html"> | |
58 | ||
210 | </head> | |
211 | ||
212 | <body> | |
213 | <!--Tree item template--> | |
214 | <script type="text/ng-template" id="tree_item_renderer.html"> | |
59 | 215 | <div ng-init="hide_children=false"> |
60 | <table class="trace cursor_pointer_on_hover"> | |
61 | <tr> | |
62 | <td class="level" style="padding-left:{{data.level * 5}}px;"> | |
63 | <button type="button" class="btn btn-default btn-xs" ng-disabled="data.is_leaf" ng-click="hide_children=!hide_children"> | |
64 | <span class="glyphicon glyphicon-{{ (data.is_leaf) ? 'cloud' : ((hide_children) ? 'plus': 'minus')}}"></span> | |
65 | {{data.level || 0}} | |
66 | </button> | |
67 | </td> | |
68 | <td ng-click="display(data);" class="text-center"> | |
69 | <div class="duration" style="width: {{get_width(data)}}%; margin-left: {{get_started(data)}}%"> | |
70 | <div>{{data.info.finished - data.info.started}} ms</div> | |
216 | <table class="trace cursor_pointer_on_hover"> | |
217 | <tr ng-class="{'bg-success': vm.isLastTrace(data.info.started)}"> | |
218 | <td class="level" style="padding-left: {{::data.level * 5}}px;"> | |
219 | <button type="button" class="btn btn-default btn-xs" ng-disabled="data.is_leaf" ng-click="hide_children=!hide_children"> | |
220 | <span class="glyphicon glyphicon-{{(data.is_leaf) ? 'cloud' : ((hide_children) ? 'plus': 'minus')}}"></span> | |
221 | {{::data.level || 0}} | |
222 | </button> | |
223 | </td> | |
224 | <td ng-click="vm.display(data)" class="text-center"> | |
225 | <div class="duration" style="width: {{vm.getWidth(data)}}%; margin-left: {{vm.getStarted(data)}}%;"> | |
226 | <div>{{data.info.finished - data.info.started}} ms</div> | |
227 | </div> | |
228 | </td> | |
229 | <td ng-click="vm.display(data)" class="{{vm.isImportance(data) ? 'bold' : ''}} text-right">{{::data.info.name}}</td> | |
230 | <td ng-click="vm.display(data)">{{::data.info.project || "n/a"}}</td> | |
231 | <td ng-click="vm.display(data)">{{::data.info.service || "n/a"}}</td> | |
232 | <td ng-click="vm.display(data)">{{::data.info.host || "n/a"}}</td> | |
233 | <td class="details"> | |
234 | <a href="#" ng-click="vm.display(data)">Details</a> | |
235 | </td> | |
236 | </tr> | |
237 | </table> | |
238 | <div ng-hide="hide_children"> | |
239 | <div ng-repeat="data in data.children" ng-include="'tree_item_renderer.html'"></div> | |
240 | </div> | |
241 | </div> | |
242 | </script> | |
243 | <!--Modal template--> | |
244 | <script type="text/ng-template" id="modal_renderer.html"> | |
245 | <div class="modal-header"> | |
246 | <h3 class="text-center"> | |
247 | <strong>Trace Point Details</strong> | |
248 | <span class="btn btn-default pull-right toggle-button" ng-click="modal.toggleWrap()">Toggle wrap-text</span> | |
249 | </h3> | |
250 | </div> | |
251 | <div class="modal-body"> | |
252 | <div class="row"> | |
253 | <div class="col-md-6" ng-repeat="cols in modal.columns"> | |
254 | <dl class="dl-horizontal" ng-repeat="column in cols"> | |
255 | <dt class="text-capitalize">{{::column}}</dt> | |
256 | <dd>{{::modal.info[column]}}</dd> | |
257 | </dl> | |
71 | 258 | </div> |
72 | </td> | |
73 | <td ng-click="display(data);" class="{{ is_important(data) ? 'bold' : ''}} text-right" > {{data.info.name}} </td> | |
74 | <td ng-click="display(data);"> {{data.info.project || "n/a"}}</td> | |
75 | <td ng-click="display(data);"> {{data.info.service || "n/a" }} </td> | |
76 | <td ng-click="display(data);"> {{data.info.host || "n/a"}} </td> | |
77 | <td class="details"> | |
78 | <a href="#" ng-click="display(data);"> Details </a> | |
79 | </td> | |
259 | <div class="col-md-12"> | |
260 | <!--For metadata only--> | |
261 | <dl class="dl-horizontal"> | |
262 | <dt class="text-capitalize">metadata</dt> | |
263 | <dd hljs hljs-language="json" hljs-source="'{{::modal.info.metadata}}'"></dd> | |
264 | </dl> | |
265 | </div> | |
266 | </div> | |
267 | </div> | |
268 | <div class="modal-footer"> | |
269 | <span class="btn btn-default" ng-click="modal.close()">Close</span> | |
270 | </div> | |
271 | </script> | |
272 | <!--Body--> | |
273 | <div ng-controller="ProfilerController as vm"> | |
274 | <table class="trace"> | |
275 | <tr class="bold text-left" style="border-bottom: solid 1px gray;"> | |
276 | <td class="level">Levels</td> | |
277 | <td>Duration</td> | |
278 | <td class="text-right">Type</td> | |
279 | <td>Project</td> | |
280 | <td>Service</td> | |
281 | <td>Host</td> | |
282 | <td class="details">Details</td> | |
80 | 283 | </tr> |
81 | </table> | |
82 | ||
83 | <div ng-hide="hide_children"> | |
84 | <div ng-repeat="data in data.children" ng-include="'tree_item_renderer.html'"> </div> | |
85 | </div> | |
86 | </div> | |
87 | ||
88 | </script> | |
89 | ||
90 | <script> | |
91 | angular.module("Application", ['ui.bootstrap']); | |
92 | ||
93 | function ProfilerCtlr($scope, $modal) { | |
94 | ||
95 | var convert_input = function(input, level){ | |
96 | level = (level) ? level : 0; | |
97 | input.level = level; | |
98 | input.is_leaf = !input.children.length | |
99 | ||
100 | for (var i=0; i < input.children.length; i++) | |
101 | convert_input(input.children[i], level + 1); | |
102 | return input; | |
103 | } | |
104 | ||
105 | $scope.get_width = function(data){ | |
106 | ||
107 | var full_duration = $scope.tree[0].info.finished; | |
108 | var duration = (data.info.finished - data.info.started) * 100.0 / full_duration; | |
109 | return (duration >= 0.5) ? duration : 0.5; | |
110 | } | |
111 | ||
112 | $scope.get_started = function(data) { | |
113 | var full_duration = $scope.tree[0].info.finished; | |
114 | return data.info.started * 100.0 / full_duration; | |
115 | } | |
116 | ||
117 | $scope.is_important = function(data) { | |
118 | return ["total", "wsgi", "rpc"].indexOf(data.info.name) != -1; | |
119 | } | |
120 | ||
121 | $scope.display = function(data){ | |
122 | var info = angular.copy(data.info); | |
123 | ||
124 | var metadata = {}; | |
125 | angular.forEach(info, function(value, key) { | |
126 | var parts = key.split("."); | |
127 | if (parts[0] == "meta"){ | |
128 | ||
129 | if (parts.length == 2){ | |
130 | this[parts[1]] = value; | |
131 | } | |
132 | else{ | |
133 | var group_name = parts[1]; | |
134 | if (!(group_name in this)) | |
135 | this[group_name] = {}; | |
136 | ||
137 | this[group_name][parts[2]] = value; | |
138 | } | |
139 | }; | |
140 | }, metadata); | |
141 | ||
142 | info["duration"] = info["finished"] - info["started"] | |
143 | info["metadata"] = "<pre>" + JSON.stringify(metadata, "", 4) + "</pre>" | |
144 | ||
145 | var trace_data = "<div class='row'>" | |
146 | columns = ["name", "project", "service", "host", "started", | |
147 | "finished", "duration", "metadata"]; | |
148 | for (var i = 0; i < columns.length; i++){ | |
149 | trace_data += "<div class='col-md-2 text-right text-capitalize'><strong>" + columns[i] + " </strong></div>"; | |
150 | trace_data += "<div class='col-md-10 text-left'>" + info[columns[i]] + "</div>"; | |
151 | } | |
152 | trace_data += "</div>"; | |
153 | ||
154 | var output = ( | |
155 | '<div class="modal-header"> Trace Point Details </div>' + | |
156 | '<div class="modal-body">' + trace_data + '</div>' + | |
157 | '<div class="modal-footer"> <span class="glyphicon glyphicon-cloud </div>' | |
158 | ); | |
159 | ||
160 | var modal_instance = $modal.open({ | |
161 | "template": output, | |
162 | "size": "lg" | |
163 | }); | |
164 | } | |
165 | ||
166 | $scope.tree = [convert_input(OSProfilerData)]; | |
167 | } | |
168 | ||
169 | </script> | |
170 | </head> | |
171 | ||
172 | <body> | |
173 | <div ng-controller="ProfilerCtlr"> | |
174 | <table class="trace"> | |
175 | <tr class="bold text-left" style="border-bottom: solid 1px gray"> | |
176 | <td class="level">Levels</td> | |
177 | <td>Duration</td> | |
178 | <td class="text-right">Type</td> | |
179 | <td>Project</td> | |
180 | <td>Service</td> | |
181 | <td>Host</td> | |
182 | <td class="details">Details</td> | |
183 | </tr> | |
184 | </table> | |
185 | <div ng-repeat="data in tree" ng-include="'tree_item_renderer.html'"></div> | |
284 | </table> | |
285 | <div ng-repeat="data in vm.tree" ng-include="'tree_item_renderer.html'"></div> | |
186 | 286 | </div> |
187 | ||
188 | </body> | |
287 | </body> | |
189 | 288 | |
190 | 289 | </html> |
0 | from osprofiler.drivers import base # noqa | |
1 | from osprofiler.drivers import ceilometer # noqa | |
2 | from osprofiler.drivers import elasticsearch_driver # noqa | |
3 | from osprofiler.drivers import loginsight # noqa | |
4 | from osprofiler.drivers import messaging # noqa | |
5 | from osprofiler.drivers import mongodb # noqa | |
6 | from osprofiler.drivers import redis_driver # noqa |
0 | # Copyright 2016 Mirantis Inc. | |
1 | # All Rights Reserved. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
4 | # not use this file except in compliance with the License. You may obtain | |
5 | # a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
12 | # License for the specific language governing permissions and limitations | |
13 | # under the License. | |
14 | ||
15 | import datetime | |
16 | ||
17 | from oslo_log import log | |
18 | import six.moves.urllib.parse as urlparse | |
19 | ||
20 | from osprofiler import _utils | |
21 | ||
22 | LOG = log.getLogger(__name__) | |
23 | ||
24 | ||
25 | def get_driver(connection_string, *args, **kwargs): | |
26 | """Create a driver instance according to the specified connection string.""" | |
27 | # NOTE(ayelistratov) Backward compatibility with old Messaging notation | |
28 | # Remove after patching all OS services | |
29 | # NOTE(ishakhat) Raise exception when ParsedResult.scheme is empty | |
30 | if "://" not in connection_string: | |
31 | connection_string += "://" | |
32 | ||
33 | parsed_connection = urlparse.urlparse(connection_string) | |
34 | LOG.debug("String %s looks like a connection string, trying it.", | |
35 | connection_string) | |
36 | ||
37 | backend = parsed_connection.scheme | |
38 | for driver in _utils.itersubclasses(Driver): | |
39 | if backend == driver.get_name(): | |
40 | return driver(connection_string, *args, **kwargs) | |
41 | ||
42 | raise ValueError("Driver not found for connection string: " | |
43 | "%s" % connection_string) | |
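Scheme extraction with `urlparse` behaves as the note above expects only when the string contains `://`, which is why the backward-compatibility shim appends it for bare driver names. Using the stdlib directly (`six.moves.urllib.parse` maps to the same module on Python 3):

```python
import urllib.parse


def scheme_of(connection_string):
    # Bare driver names like "messaging" parse with an empty scheme,
    # so append "://" first, as get_driver does.
    if "://" not in connection_string:
        connection_string += "://"
    return urllib.parse.urlparse(connection_string).scheme
```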
44 | ||
45 | ||
46 | class Driver(object): | |
47 | """Base Driver class. | |
48 | ||
49 | This class provides protected common methods that | |
50 | do not rely on a specific storage backend. Public methods notify() and/or | |
51 | get_report(), which require using storage backend API, must be overridden | |
52 | and implemented by any class derived from this class. | |
53 | """ | |
54 | ||
55 | def __init__(self, connection_str, project=None, service=None, host=None): | |
56 | self.connection_str = connection_str | |
57 | self.project = project | |
58 | self.service = service | |
59 | self.host = host | |
60 | self.result = {} | |
61 | self.started_at = None | |
62 | self.finished_at = None | |
63 | # Last trace started time | |
64 | self.last_started_at = None | |
65 | ||
66 | def notify(self, info, **kwargs): | |
67 | """This method will be called on each notifier.notify() call. | |
68 | ||
69 | To add a new driver, create a subclass of this class and | |
70 | implement the notify method. | |
71 | ||
72 | :param info: Contains information about trace element. | |
73 | In payload dict there are always 3 ids: | |
74 | "base_id" - uuid that is common for all notifications | |
75 | related to one trace. Used to simplify | |
76 | retrieving of all trace elements from | |
77 | the backend. | |
78 | "parent_id" - uuid of parent element in trace | |
79 | "trace_id" - uuid of current element in trace | |
80 | ||
81 | With parent_id and trace_id it is straightforward to build | |
82 | a tree of trace elements, which simplifies trace analysis. | |
83 | ||
84 | """ | |
85 | raise NotImplementedError("{0}: This method is either not supported " | |
86 | "or has to be overridden".format( | |
87 | self.get_name())) | |
88 | ||
89 | def get_report(self, base_id): | |
90 | """Forms and returns report composed from the stored notifications. | |
91 | ||
92 | :param base_id: Base id of trace elements. | |
93 | """ | |
94 | raise NotImplementedError("{0}: This method is either not supported " | |
95 | "or has to be overridden".format( | |
96 | self.get_name())) | |
97 | ||
98 | @classmethod | |
99 | def get_name(cls): | |
100 | """Returns backend specific name for the driver.""" | |
101 | return cls.__name__ | |
102 | ||
103 | def list_traces(self, query, fields): | |
104 | """Returns array of all base_id fields that match the given criteria | |
105 | ||
106 | :param query: dict that specifies the query criteria | |
107 | :param fields: iterable of strings that specifies the output fields | |
108 | """ | |
109 | raise NotImplementedError("{0}: This method is either not supported " | |
110 | "or has to be overridden".format( | |
111 | self.get_name())) | |
112 | ||
113 | @staticmethod | |
114 | def _build_tree(nodes): | |
115 | """Builds the tree (forest) data structure based on the list of nodes. | |
116 | ||
117 | Tree building works in O(n*log(n)). | |
118 | ||
119 | :param nodes: dict of nodes, where each node is a dictionary with fields | |
120 | "parent_id", "trace_id", "info" | |
121 | :returns: list of top level ("root") nodes in form of dictionaries, | |
122 | each containing the "info" and "children" fields, where | |
123 | "children" is the list of child nodes ("children" will be | |
124 | empty for leafs) | |
125 | """ | |
126 | ||
127 | tree = [] | |
128 | ||
129 | for trace_id in nodes: | |
130 | node = nodes[trace_id] | |
131 | node.setdefault("children", []) | |
132 | parent_id = node["parent_id"] | |
133 | if parent_id in nodes: | |
134 | nodes[parent_id].setdefault("children", []) | |
135 | nodes[parent_id]["children"].append(node) | |
136 | else: | |
137 | tree.append(node) # no parent => top-level node | |
138 | ||
139 | for trace_id in nodes: | |
140 | nodes[trace_id]["children"].sort( | |
141 | key=lambda x: x["info"]["started"]) | |
142 | ||
143 | return sorted(tree, key=lambda x: x["info"]["started"]) | |
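The linking-and-sorting pass above can be exercised standalone. The sketch below re-implements the same logic as a free function; the function name and the sample nodes are illustrative, not part of the driver API:

```python
def build_tree(nodes):
    # Link every node to its parent; a node whose parent_id is absent
    # from the dict becomes a root of the resulting forest.
    tree = []
    for trace_id in nodes:
        node = nodes[trace_id]
        node.setdefault("children", [])
        parent_id = node["parent_id"]
        if parent_id in nodes:
            nodes[parent_id].setdefault("children", []).append(node)
        else:
            tree.append(node)
    # Order siblings (and the roots themselves) by start time.
    for trace_id in nodes:
        nodes[trace_id]["children"].sort(key=lambda x: x["info"]["started"])
    return sorted(tree, key=lambda x: x["info"]["started"])


nodes = {
    "a": {"trace_id": "a", "parent_id": None, "info": {"started": 0}},
    "b": {"trace_id": "b", "parent_id": "a", "info": {"started": 5}},
    "c": {"trace_id": "c", "parent_id": "a", "info": {"started": 2}},
}
roots = build_tree(nodes)
```

Because nodes whose parent is missing simply become roots, a partially consumed trace still renders as a forest rather than being dropped.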
144 | ||
145 | def _append_results(self, trace_id, parent_id, name, project, service, | |
146 | host, timestamp, raw_payload=None): | |
147 | """Appends the notification to the dictionary of notifications. | |
148 | ||
149 | :param trace_id: UUID of current trace point | |
150 | :param parent_id: UUID of parent trace point | |
151 | :param name: name of operation | |
152 | :param project: project name | |
153 | :param service: service name | |
154 | :param host: host name or FQDN | |
155 | :param timestamp: timestamp string matching the pattern | |
156 | "%Y-%m-%dT%H:%M:%S.%f", e.g. 2016-04-18T17:42:10.77 | |
157 | :param raw_payload: raw notification without any filtering, with all | |
158 | fields included | |
159 | """ | |
160 | timestamp = datetime.datetime.strptime(timestamp, | |
161 | "%Y-%m-%dT%H:%M:%S.%f") | |
162 | if trace_id not in self.result: | |
163 | self.result[trace_id] = { | |
164 | "info": { | |
165 | "name": name.split("-")[0], | |
166 | "project": project, | |
167 | "service": service, | |
168 | "host": host, | |
169 | }, | |
170 | "trace_id": trace_id, | |
171 | "parent_id": parent_id, | |
172 | } | |
173 | ||
174 | self.result[trace_id]["info"]["meta.raw_payload.%s" | |
175 | % name] = raw_payload | |
176 | ||
177 | if name.endswith("stop"): | |
178 | self.result[trace_id]["info"]["finished"] = timestamp | |
179 | self.result[trace_id]["info"]["exception"] = "None" | |
180 | if raw_payload and "info" in raw_payload: | |
181 | exc = raw_payload["info"].get("etype", "None") | |
182 | self.result[trace_id]["info"]["exception"] = exc | |
183 | else: | |
184 | self.result[trace_id]["info"]["started"] = timestamp | |
185 | if not self.last_started_at or self.last_started_at < timestamp: | |
186 | self.last_started_at = timestamp | |
187 | ||
188 | if not self.started_at or self.started_at > timestamp: | |
189 | self.started_at = timestamp | |
190 | ||
191 | if not self.finished_at or self.finished_at < timestamp: | |
192 | self.finished_at = timestamp | |
193 | ||
194 | def _parse_results(self): | |
195 | """Parses the driver's notifications stored by _append_results(). | |
196 | ||
197 | :returns: full profiling report | |
198 | """ | |
199 | ||
200 | def msec(dt): | |
201 | # NOTE(boris-42): Unfortunately this is the simplest way that works | |
202 | # in py26 and py27 | |
203 | microsec = (dt.microseconds + (dt.seconds + dt.days * 24 * 3600) * | |
204 | 1e6) | |
205 | return int(microsec / 1000.0) | |
206 | ||
207 | stats = {} | |
208 | ||
209 | for r in self.result.values(): | |
210 | # NOTE(boris-42): We are not able to guarantee that the backend | |
211 | # consumed all messages, so in that case set the duration to 0 ms. | |
212 | ||
213 | if "started" not in r["info"]: | |
214 | r["info"]["started"] = r["info"]["finished"] | |
215 | if "finished" not in r["info"]: | |
216 | r["info"]["finished"] = r["info"]["started"] | |
217 | ||
218 | op_type = r["info"]["name"] | |
219 | op_started = msec(r["info"]["started"] - self.started_at) | |
220 | op_finished = msec(r["info"]["finished"] - | |
221 | self.started_at) | |
222 | duration = op_finished - op_started | |
223 | ||
224 | r["info"]["started"] = op_started | |
225 | r["info"]["finished"] = op_finished | |
226 | ||
227 | if op_type not in stats: | |
228 | stats[op_type] = { | |
229 | "count": 1, | |
230 | "duration": duration | |
231 | } | |
232 | else: | |
233 | stats[op_type]["count"] += 1 | |
234 | stats[op_type]["duration"] += duration | |
235 | ||
236 | return { | |
237 | "info": { | |
238 | "name": "total", | |
239 | "started": 0, | |
240 | "finished": msec(self.finished_at - | |
241 | self.started_at) if self.started_at else None, | |
242 | "last_trace_started": msec( | |
243 | self.last_started_at - self.started_at | |
244 | ) if self.started_at else None | |
245 | }, | |
246 | "children": self._build_tree(self.result), | |
247 | "stats": stats | |
248 | } |
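The msec helper above avoids timedelta.total_seconds() because that method does not exist on Python 2.6. A standalone sketch of the same arithmetic:

```python
import datetime


def msec(dt):
    # Equivalent to int(dt.total_seconds() * 1000), spelled out so it
    # also runs on Python 2.6.
    microsec = dt.microseconds + (dt.seconds + dt.days * 24 * 3600) * 1e6
    return int(microsec / 1000.0)


one_and_a_quarter = datetime.timedelta(seconds=1, microseconds=250000)
one_day = datetime.timedelta(days=1)
```

All report timings ("started", "finished", "duration") are expressed in these whole milliseconds relative to the first notification.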
0 | # Copyright 2016 Mirantis Inc. | |
1 | # All Rights Reserved. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
4 | # not use this file except in compliance with the License. You may obtain | |
5 | # a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
12 | # License for the specific language governing permissions and limitations | |
13 | # under the License. | |
14 | ||
15 | from osprofiler.drivers import base | |
16 | from osprofiler import exc | |
17 | ||
18 | ||
19 | class Ceilometer(base.Driver): | |
20 | def __init__(self, connection_str, **kwargs): | |
21 | """Driver receiving profiled information from ceilometer.""" | |
22 | super(Ceilometer, self).__init__(connection_str) | |
23 | try: | |
24 | import ceilometerclient.client | |
25 | except ImportError: | |
26 | raise exc.CommandError( | |
27 | "To use this command, you should install " | |
28 | "'ceilometerclient' manually. Use command:\n " | |
29 | "'pip install python-ceilometerclient'.") | |
30 | ||
31 | try: | |
32 | self.client = ceilometerclient.client.get_client( | |
33 | kwargs["ceilometer_api_version"], **kwargs) | |
34 | except Exception as e: | |
35 | if hasattr(e, "http_status") and e.http_status == 401: | |
36 | msg = "Invalid OpenStack Identity credentials." | |
37 | else: | |
38 | msg = "Error occurred while connecting to Ceilometer: %s." % e | |
39 | raise exc.CommandError(msg) | |
40 | ||
41 | @classmethod | |
42 | def get_name(cls): | |
43 | return "ceilometer" | |
44 | ||
45 | def get_report(self, base_id): | |
46 | """Retrieves and parses notification from ceilometer. | |
47 | ||
48 | :param base_id: Base id of trace elements. | |
49 | """ | |
50 | ||
51 | _filter = [{"field": "base_id", "op": "eq", "value": base_id}] | |
52 | ||
53 | # The limit is currently hardcoded. Later it will be configurable | |
54 | # via the connection string. | |
55 | notifications = [n.to_dict() | |
56 | for n in self.client.events.list(_filter, | |
57 | limit=100000)] | |
58 | ||
59 | for n in notifications: | |
60 | traits = n["traits"] | |
61 | ||
62 | def find_field(f_name): | |
63 | return [t["value"] for t in traits if t["name"] == f_name][0] | |
64 | ||
65 | trace_id = find_field("trace_id") | |
66 | parent_id = find_field("parent_id") | |
67 | name = find_field("name") | |
68 | project = find_field("project") | |
69 | service = find_field("service") | |
70 | host = find_field("host") | |
71 | timestamp = find_field("timestamp") | |
72 | ||
73 | payload = n.get("raw", {}).get("payload", {}) | |
74 | ||
75 | self._append_results(trace_id, parent_id, name, project, service, | |
76 | host, timestamp, payload) | |
77 | ||
78 | return self._parse_results() |
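Each Ceilometer event carries its payload as a list of {name, value} traits, so the nested find_field above is just a first-match lookup. A standalone sketch (the sample traits are made up):

```python
def find_field(traits, f_name):
    # Return the value of the first trait whose name matches; mirrors
    # the nested helper in Ceilometer.get_report, including raising
    # IndexError when the trait is absent.
    return [t["value"] for t in traits if t["name"] == f_name][0]


traits = [{"name": "trace_id", "value": "t-1"},
          {"name": "host", "value": "compute-1"}]
```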
0 | # Copyright 2016 Mirantis Inc. | |
1 | # All Rights Reserved. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
4 | # not use this file except in compliance with the License. You may obtain | |
5 | # a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
12 | # License for the specific language governing permissions and limitations | |
13 | # under the License. | |
14 | ||
15 | import six.moves.urllib.parse as parser | |
16 | ||
17 | from oslo_config import cfg | |
18 | from osprofiler.drivers import base | |
19 | from osprofiler import exc | |
20 | ||
21 | ||
22 | class ElasticsearchDriver(base.Driver): | |
23 | def __init__(self, connection_str, index_name="osprofiler-notifications", | |
24 | project=None, service=None, host=None, conf=cfg.CONF, | |
25 | **kwargs): | |
26 | """Elasticsearch driver for OSProfiler.""" | |
27 | ||
28 | super(ElasticsearchDriver, self).__init__(connection_str, | |
29 | project=project, | |
30 | service=service, host=host) | |
31 | try: | |
32 | from elasticsearch import Elasticsearch | |
33 | except ImportError: | |
34 | raise exc.CommandError( | |
35 | "To use this command, you should install " | |
36 | "'elasticsearch' manually. Use command:\n " | |
37 | "'pip install elasticsearch'.") | |
38 | ||
39 | client_url = parser.urlunparse(parser.urlparse(self.connection_str) | |
40 | ._replace(scheme="http")) | |
41 | self.conf = conf | |
42 | self.client = Elasticsearch(client_url) | |
43 | self.index_name = index_name | |
44 | ||
45 | @classmethod | |
46 | def get_name(cls): | |
47 | return "elasticsearch" | |
48 | ||
49 | def notify(self, info): | |
50 | """Send notifications to Elasticsearch. | |
51 | ||
52 | :param info: Contains information about trace element. | |
53 | In payload dict there are always 3 ids: | |
54 | "base_id" - uuid that is common for all notifications | |
55 | related to one trace. Used to simplify | |
56 | retrieving of all trace elements from | |
57 | Elasticsearch. | |
58 | "parent_id" - uuid of parent element in trace | |
59 | "trace_id" - uuid of current element in trace | |
60 | ||
61 | With parent_id and trace_id it's quite simple to build a | |
62 | tree of trace elements, which simplifies analysis of the trace. | |
63 | ||
64 | """ | |
65 | ||
66 | info = info.copy() | |
67 | info["project"] = self.project | |
68 | info["service"] = self.service | |
69 | self.client.index(index=self.index_name, | |
70 | doc_type=self.conf.profiler.es_doc_type, body=info) | |
71 | ||
72 | def _hits(self, response): | |
73 | """Returns all hits of search query using scrolling | |
74 | ||
75 | :param response: ElasticSearch query response | |
76 | """ | |
77 | scroll_id = response["_scroll_id"] | |
78 | scroll_size = len(response["hits"]["hits"]) | |
79 | result = [] | |
80 | ||
81 | while scroll_size > 0: | |
82 | for hit in response["hits"]["hits"]: | |
83 | result.append(hit["_source"]) | |
84 | response = self.client.scroll(scroll_id=scroll_id, | |
85 | scroll=self.conf.profiler. | |
86 | es_scroll_time) | |
87 | scroll_id = response["_scroll_id"] | |
88 | scroll_size = len(response["hits"]["hits"]) | |
89 | ||
90 | return result | |
91 | ||
92 | def list_traces(self, query=None, fields=None): | |
93 | """Returns array of all base_id fields that match the given criteria | |
94 | ||
95 | :param query: dict that specifies the query criteria | |
96 | :param fields: iterable of strings that specifies the output fields | |
97 | """ | |
98 | # NOTE: avoid mutable default arguments -- the old defaults were | |
99 | # shared between calls and ``fields`` was mutated in place. | |
100 | query = query or {"match_all": {}} | |
101 | fields = list(fields) if fields else [] | |
102 | for base_field in ["base_id", "timestamp"]: | |
103 | if base_field not in fields: | |
104 | fields.append(base_field) | |
101 | ||
102 | response = self.client.search(index=self.index_name, | |
103 | doc_type=self.conf.profiler.es_doc_type, | |
104 | size=self.conf.profiler.es_scroll_size, | |
105 | scroll=self.conf.profiler.es_scroll_time, | |
106 | body={"_source": fields, "query": query, | |
107 | "sort": [{"timestamp": "asc"}]}) | |
108 | ||
109 | return self._hits(response) | |
110 | ||
111 | def get_report(self, base_id): | |
112 | """Retrieves and parses notification from Elasticsearch. | |
113 | ||
114 | :param base_id: Base id of trace elements. | |
115 | """ | |
116 | response = self.client.search(index=self.index_name, | |
117 | doc_type=self.conf.profiler.es_doc_type, | |
118 | size=self.conf.profiler.es_scroll_size, | |
119 | scroll=self.conf.profiler.es_scroll_time, | |
120 | body={"query": { | |
121 | "match": {"base_id": base_id}}}) | |
122 | ||
123 | for n in self._hits(response): | |
124 | trace_id = n["trace_id"] | |
125 | parent_id = n["parent_id"] | |
126 | name = n["name"] | |
127 | project = n["project"] | |
128 | service = n["service"] | |
129 | host = n["info"]["host"] | |
130 | timestamp = n["timestamp"] | |
131 | ||
132 | self._append_results(trace_id, parent_id, name, project, service, | |
133 | host, timestamp, n) | |
134 | ||
135 | return self._parse_results() |
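_hits drains a scrolled search by re-issuing scroll() until an empty page comes back. The sketch below captures that loop against a stubbed client; FakeClient and drain_hits are hypothetical stand-ins, while the real driver talks to the elasticsearch library:

```python
class FakeClient(object):
    """Serves pre-baked scroll pages, mimicking Elasticsearch.scroll()."""

    def __init__(self, pages):
        self._pages = list(pages)

    def scroll(self, scroll_id=None, scroll=None):
        hits = self._pages.pop(0) if self._pages else []
        return {"_scroll_id": scroll_id, "hits": {"hits": hits}}


def drain_hits(client, response, scroll_time="2m"):
    # Accumulate the _source documents page by page until the server
    # returns an empty page, which terminates the scroll.
    result = []
    while response["hits"]["hits"]:
        for hit in response["hits"]["hits"]:
            result.append(hit["_source"])
        response = client.scroll(scroll_id=response["_scroll_id"],
                                 scroll=scroll_time)
    return result


client = FakeClient([[{"_source": {"n": 2}}], []])
first_page = {"_scroll_id": "s1", "hits": {"hits": [{"_source": {"n": 1}}]}}
docs = drain_hits(client, first_page)
```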
0 | # Copyright (c) 2016 VMware, Inc. | |
1 | # All Rights Reserved. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
4 | # not use this file except in compliance with the License. You may obtain | |
5 | # a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
12 | # License for the specific language governing permissions and limitations | |
13 | # under the License. | |
14 | ||
15 | """ | |
16 | Classes to use VMware vRealize Log Insight as the trace data store. | |
17 | """ | |
18 | ||
19 | import json | |
20 | import logging as log | |
21 | ||
22 | import netaddr | |
23 | from oslo_concurrency.lockutils import synchronized | |
24 | import requests | |
25 | import six.moves.urllib.parse as urlparse | |
26 | ||
27 | from osprofiler.drivers import base | |
28 | from osprofiler import exc | |
29 | ||
30 | LOG = log.getLogger(__name__) | |
31 | ||
32 | ||
33 | class LogInsightDriver(base.Driver): | |
34 | """Driver for storing trace data in VMware vRealize Log Insight. | |
35 | ||
36 | The driver uses Log Insight ingest service to store trace data and uses | |
37 | the query service to retrieve it. The minimum required Log Insight version | |
38 | is 3.3. | |
39 | ||
40 | The connection string to initialize the driver should be of the format: | |
41 | loginsight://<username>:<password>@<loginsight-host> | |
42 | ||
43 | If the username or password contains the character ':' or '@', it must be | |
44 | escaped using URL encoding. For example, the connection string to connect | |
45 | to Log Insight server at 10.1.2.3 using username "osprofiler" and password | |
46 | "p@ssword" is: | |
47 | loginsight://osprofiler:p%40ssword@10.1.2.3 | |
48 | """ | |
49 | def __init__( | |
50 | self, connection_str, project=None, service=None, host=None, | |
51 | **kwargs): | |
52 | super(LogInsightDriver, self).__init__(connection_str, | |
53 | project=project, | |
54 | service=service, | |
55 | host=host) | |
56 | ||
57 | parsed_connection = urlparse.urlparse(connection_str) | |
58 | try: | |
59 | creds, host = parsed_connection.netloc.split("@") | |
60 | username, password = creds.split(":") | |
61 | except ValueError: | |
62 | raise ValueError("Connection string format is: loginsight://" | |
63 | "<username>:<password>@<loginsight-host>. If the " | |
64 | "username or password contains the character '@' " | |
65 | "or ':', it must be escaped using URL encoding.") | |
66 | ||
67 | username = urlparse.unquote(username) | |
68 | password = urlparse.unquote(password) | |
69 | self._client = LogInsightClient(host, username, password) | |
70 | ||
71 | self._client.login() | |
72 | ||
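The credential parsing in __init__ can be checked in isolation: URL-encoded characters in the username or password survive the netloc split and are decoded afterwards. A sketch using the Python 3 stdlib parser directly (the driver itself imports it through six.moves):

```python
from urllib.parse import unquote, urlparse


def parse_loginsight_url(connection_str):
    # loginsight://<username>:<password>@<host>; any ':' or '@' inside
    # the credentials must be URL-encoded, as the class docstring says.
    parsed = urlparse(connection_str)
    creds, host = parsed.netloc.split("@")
    username, password = creds.split(":")
    return unquote(username), unquote(password), host


user, pwd, host = parse_loginsight_url(
    "loginsight://osprofiler:p%40ssword@10.1.2.3")
```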
73 | @classmethod | |
74 | def get_name(cls): | |
75 | return "loginsight" | |
76 | ||
77 | def notify(self, info): | |
78 | """Send trace to Log Insight server.""" | |
79 | ||
80 | trace = info.copy() | |
81 | trace["project"] = self.project | |
82 | trace["service"] = self.service | |
83 | ||
84 | event = {"text": "OSProfiler trace"} | |
85 | ||
86 | def _create_field(name, content): | |
87 | return {"name": name, "content": content} | |
88 | ||
89 | event["fields"] = [_create_field("base_id", trace["base_id"]), | |
90 | _create_field("trace_id", trace["trace_id"]), | |
91 | _create_field("project", trace["project"]), | |
92 | _create_field("service", trace["service"]), | |
93 | _create_field("name", trace["name"]), | |
94 | _create_field("trace", json.dumps(trace))] | |
95 | ||
96 | self._client.send_event(event) | |
97 | ||
98 | def get_report(self, base_id): | |
99 | """Retrieves and parses trace data from Log Insight. | |
100 | ||
101 | :param base_id: Trace base ID | |
102 | """ | |
103 | response = self._client.query_events({"base_id": base_id}) | |
104 | ||
105 | if "events" in response: | |
106 | for event in response["events"]: | |
107 | if "fields" not in event: | |
108 | continue | |
109 | ||
110 | for field in event["fields"]: | |
111 | if field["name"] == "trace": | |
112 | trace = json.loads(field["content"]) | |
113 | trace_id = trace["trace_id"] | |
114 | parent_id = trace["parent_id"] | |
115 | name = trace["name"] | |
116 | project = trace["project"] | |
117 | service = trace["service"] | |
118 | host = trace["info"]["host"] | |
119 | timestamp = trace["timestamp"] | |
120 | ||
121 | self._append_results( | |
122 | trace_id, parent_id, name, project, service, host, | |
123 | timestamp, trace) | |
124 | break | |
125 | ||
126 | return self._parse_results() | |
127 | ||
128 | ||
129 | class LogInsightClient(object): | |
130 | """A minimal Log Insight client.""" | |
131 | ||
132 | LI_OSPROFILER_AGENT_ID = "F52D775B-6017-4787-8C8A-F21AE0AEC057" | |
133 | ||
134 | # API paths | |
135 | SESSIONS_PATH = "api/v1/sessions" | |
136 | CURRENT_SESSIONS_PATH = "api/v1/sessions/current" | |
137 | EVENTS_INGEST_PATH = "api/v1/events/ingest/%s" % LI_OSPROFILER_AGENT_ID | |
138 | QUERY_EVENTS_BASE_PATH = "api/v1/events" | |
139 | ||
140 | def __init__(self, host, username, password, api_port=9000, | |
141 | api_ssl_port=9543, query_timeout=60000): | |
142 | self._host = host | |
143 | self._username = username | |
144 | self._password = password | |
145 | self._api_port = api_port | |
146 | self._api_ssl_port = api_ssl_port | |
147 | self._query_timeout = query_timeout | |
148 | self._session = requests.Session() | |
149 | self._session_id = None | |
150 | ||
151 | def _build_base_url(self, scheme): | |
152 | proto_str = "%s://" % scheme | |
153 | host_str = ("[%s]" % self._host if netaddr.valid_ipv6(self._host) | |
154 | else self._host) | |
155 | port_str = ":%d" % (self._api_ssl_port if scheme == "https" | |
156 | else self._api_port) | |
157 | return proto_str + host_str + port_str | |
158 | ||
159 | def _check_response(self, resp): | |
160 | if resp.status_code == 440: | |
161 | raise exc.LogInsightLoginTimeout() | |
162 | ||
163 | if not resp.ok: | |
164 | msg = "n/a" | |
165 | if resp.text: | |
166 | try: | |
167 | body = json.loads(resp.text) | |
168 | msg = body.get("errorMessage", msg) | |
169 | except ValueError: | |
170 | pass | |
171 | else: | |
172 | msg = resp.reason | |
173 | raise exc.LogInsightAPIError(msg) | |
174 | ||
175 | def _send_request( | |
176 | self, method, scheme, path, headers=None, body=None, params=None): | |
177 | url = "%s/%s" % (self._build_base_url(scheme), path) | |
178 | ||
179 | headers = headers or {} | |
180 | headers["content-type"] = "application/json" | |
181 | body = body or {} | |
182 | params = params or {} | |
183 | ||
184 | req = requests.Request( | |
185 | method, url, headers=headers, data=json.dumps(body), params=params) | |
186 | req = req.prepare() | |
187 | resp = self._session.send(req, verify=False) | |
188 | ||
189 | self._check_response(resp) | |
190 | return resp.json() | |
191 | ||
192 | def _get_auth_header(self): | |
193 | return {"X-LI-Session-Id": self._session_id} | |
194 | ||
195 | def _trunc_session_id(self): | |
196 | if self._session_id: | |
197 | return self._session_id[-5:] | |
198 | ||
199 | def _is_current_session_active(self): | |
200 | try: | |
201 | self._send_request("get", | |
202 | "https", | |
203 | self.CURRENT_SESSIONS_PATH, | |
204 | headers=self._get_auth_header()) | |
205 | LOG.debug("Current session %s is active.", | |
206 | self._trunc_session_id()) | |
207 | return True | |
208 | except (exc.LogInsightLoginTimeout, exc.LogInsightAPIError): | |
209 | LOG.debug("Current session %s is not active.", | |
210 | self._trunc_session_id()) | |
211 | return False | |
212 | ||
213 | @synchronized("li_login_lock") | |
214 | def login(self): | |
215 | # Another thread might have created the session while the current | |
216 | # thread was waiting for the lock. | |
217 | if self._session_id and self._is_current_session_active(): | |
218 | return | |
219 | ||
220 | LOG.info("Logging into Log Insight server: %s.", self._host) | |
221 | resp = self._send_request("post", | |
222 | "https", | |
223 | self.SESSIONS_PATH, | |
224 | body={"username": self._username, | |
225 | "password": self._password}) | |
226 | ||
227 | self._session_id = resp["sessionId"] | |
228 | LOG.debug("Established session %s.", self._trunc_session_id()) | |
229 | ||
230 | def send_event(self, event): | |
231 | events = {"events": [event]} | |
232 | self._send_request("post", | |
233 | "http", | |
234 | self.EVENTS_INGEST_PATH, | |
235 | body=events) | |
236 | ||
237 | def query_events(self, params): | |
238 | # Assumes that the keys and values in the params are strings and | |
239 | # the operator is "CONTAINS". | |
240 | constraints = [] | |
241 | for field, value in params.items(): | |
242 | constraints.append("%s/CONTAINS+%s" % (field, value)) | |
243 | constraints.append("timestamp/GT+0") | |
244 | ||
245 | path = "%s/%s" % (self.QUERY_EVENTS_BASE_PATH, "/".join(constraints)) | |
246 | ||
247 | def _query_events(): | |
248 | return self._send_request("get", | |
249 | "https", | |
250 | path, | |
251 | headers=self._get_auth_header(), | |
252 | params={"limit": 20000, | |
253 | "timeout": self._query_timeout}) | |
254 | try: | |
255 | resp = _query_events() | |
256 | except exc.LogInsightLoginTimeout: | |
257 | # Login again and re-try. | |
258 | LOG.debug("Current session timed out.") | |
259 | self.login() | |
260 | resp = _query_events() | |
261 | ||
262 | return resp |
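query_events encodes each filter as a path segment of the form <field>/CONTAINS+<value>, with a trailing timestamp/GT+0 constraint. A standalone sketch of the path construction (the params are sorted here only to make the result deterministic; the method iterates the dict as-is):

```python
def build_query_path(base_path, params):
    # One CONTAINS constraint per field, plus timestamp/GT+0 so every
    # matching event is returned regardless of its timestamp.
    constraints = []
    for field, value in sorted(params.items()):
        constraints.append("%s/CONTAINS+%s" % (field, value))
    constraints.append("timestamp/GT+0")
    return "%s/%s" % (base_path, "/".join(constraints))


path = build_query_path("api/v1/events", {"base_id": "abc"})
```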
0 | # Copyright 2016 Mirantis Inc. | |
1 | # All Rights Reserved. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
4 | # not use this file except in compliance with the License. You may obtain | |
5 | # a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
12 | # License for the specific language governing permissions and limitations | |
13 | # under the License. | |
14 | ||
15 | from osprofiler.drivers import base | |
16 | ||
17 | ||
18 | class Messaging(base.Driver): | |
19 | def __init__(self, connection_str, messaging=None, context=None, | |
20 | transport=None, project=None, service=None, | |
21 | host=None, **kwargs): | |
22 | """Driver sending notifications via message queues.""" | |
23 | ||
24 | super(Messaging, self).__init__(connection_str, project=project, | |
25 | service=service, host=host) | |
26 | ||
27 | self.messaging = messaging | |
28 | self.context = context | |
29 | ||
30 | self.client = messaging.Notifier( | |
31 | transport, publisher_id=self.host, driver="messaging", | |
32 | topics=["profiler"], retry=0) | |
33 | ||
34 | @classmethod | |
35 | def get_name(cls): | |
36 | return "messaging" | |
37 | ||
38 | def notify(self, info, context=None): | |
39 | """Send notifications to backend via oslo.messaging notifier API. | |
40 | ||
41 | :param info: Contains information about trace element. | |
42 | In payload dict there are always 3 ids: | |
43 | "base_id" - uuid that is common for all notifications | |
44 | related to one trace. Used to simplify | |
45 | retrieving of all trace elements from | |
46 | Ceilometer. | |
47 | "parent_id" - uuid of parent element in trace | |
48 | "trace_id" - uuid of current element in trace | |
49 | ||
50 | With parent_id and trace_id it's quite simple to build a | |
51 | tree of trace elements, which simplifies analysis of the trace. | |
52 | ||
53 | :param context: request context that is mostly used to specify | |
54 | current active user and tenant. | |
55 | """ | |
56 | ||
57 | info["project"] = self.project | |
58 | info["service"] = self.service | |
59 | self.client.info(context or self.context, | |
60 | "profiler.%s" % info["service"], | |
61 | info) |
0 | # Copyright 2016 Mirantis Inc. | |
1 | # All Rights Reserved. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
4 | # not use this file except in compliance with the License. You may obtain | |
5 | # a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
12 | # License for the specific language governing permissions and limitations | |
13 | # under the License. | |
14 | ||
15 | from osprofiler.drivers import base | |
16 | from osprofiler import exc | |
17 | ||
18 | ||
19 | class MongoDB(base.Driver): | |
20 | def __init__(self, connection_str, db_name="osprofiler", project=None, | |
21 | service=None, host=None, **kwargs): | |
22 | """MongoDB driver for OSProfiler.""" | |
23 | ||
24 | super(MongoDB, self).__init__(connection_str, project=project, | |
25 | service=service, host=host) | |
26 | try: | |
27 | from pymongo import MongoClient | |
28 | except ImportError: | |
29 | raise exc.CommandError( | |
30 | "To use this command, you should install " | |
31 | "'pymongo' manually. Use command:\n " | |
32 | "'pip install pymongo'.") | |
33 | ||
34 | client = MongoClient(self.connection_str, connect=False) | |
35 | self.db = client[db_name] | |
36 | ||
37 | @classmethod | |
38 | def get_name(cls): | |
39 | return "mongodb" | |
40 | ||
41 | def notify(self, info): | |
42 | """Send notifications to MongoDB. | |
43 | ||
44 | :param info: Contains information about trace element. | |
45 | In payload dict there are always 3 ids: | |
46 | "base_id" - uuid that is common for all notifications | |
47 | related to one trace. Used to simplify | |
48 | retrieving of all trace elements from | |
49 | MongoDB. | |
50 | "parent_id" - uuid of parent element in trace | |
51 | "trace_id" - uuid of current element in trace | |
52 | ||
53 | With parent_id and trace_id it's quite simple to build a | |
54 | tree of trace elements, which simplifies analysis of the trace. | |
55 | ||
56 | """ | |
57 | data = info.copy() | |
58 | data["project"] = self.project | |
59 | data["service"] = self.service | |
60 | self.db.profiler.insert_one(data) | |
61 | ||
62 | def list_traces(self, query, fields=()): | |
63 | """Returns array of all base_id fields that match the given criteria | |
64 | ||
65 | :param query: dict that specifies the query criteria | |
66 | :param fields: iterable of strings that specifies the output fields | |
67 | """ | |
68 | ids = self.db.profiler.find(query).distinct("base_id") | |
69 | out_format = {"base_id": 1, "timestamp": 1, "_id": 0} | |
70 | out_format.update({i: 1 for i in fields}) | |
71 | return [self.db.profiler.find( | |
72 | {"base_id": i}, out_format).sort("timestamp")[0] for i in ids] | |
73 | ||
74 | def get_report(self, base_id): | |
75 | """Retrieves and parses notification from MongoDB. | |
76 | ||
77 | :param base_id: Base id of trace elements. | |
78 | """ | |
79 | for n in self.db.profiler.find({"base_id": base_id}, {"_id": 0}): | |
80 | trace_id = n["trace_id"] | |
81 | parent_id = n["parent_id"] | |
82 | name = n["name"] | |
83 | project = n["project"] | |
84 | service = n["service"] | |
85 | host = n["info"]["host"] | |
86 | timestamp = n["timestamp"] | |
87 | ||
88 | self._append_results(trace_id, parent_id, name, project, service, | |
89 | host, timestamp, n) | |
90 | ||
91 | return self._parse_results() |
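The projection dict built in list_traces always includes base_id and timestamp, suppresses MongoDB's internal _id, and whitelists any extra requested fields. A pure-dict sketch of that construction (the helper name is illustrative):

```python
def build_projection(fields=()):
    # In a MongoDB projection, 1 means include and 0 means exclude;
    # _id is returned by default and must be suppressed explicitly.
    out_format = {"base_id": 1, "timestamp": 1, "_id": 0}
    out_format.update({f: 1 for f in fields})
    return out_format


proj = build_projection(["service"])
```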
0 | # Copyright 2016 Mirantis Inc. | |
1 | # Copyright 2016 IBM Corporation. | |
2 | # All Rights Reserved. | |
3 | # | |
4 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
5 | # not use this file except in compliance with the License. You may obtain | |
6 | # a copy of the License at | |
7 | # | |
8 | # http://www.apache.org/licenses/LICENSE-2.0 | |
9 | # | |
10 | # Unless required by applicable law or agreed to in writing, software | |
11 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
12 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
13 | # License for the specific language governing permissions and limitations | |
14 | # under the License. | |
15 | ||
16 | from oslo_config import cfg | |
17 | from oslo_serialization import jsonutils | |
18 | import six.moves.urllib.parse as parser | |
19 | ||
20 | from osprofiler.drivers import base | |
21 | from osprofiler import exc | |
22 | ||
23 | ||
24 | class Redis(base.Driver): | |
25 | def __init__(self, connection_str, db=0, project=None, | |
26 | service=None, host=None, **kwargs): | |
27 | """Redis driver for OSProfiler.""" | |
28 | ||
29 | super(Redis, self).__init__(connection_str, project=project, | |
30 | service=service, host=host) | |
31 | try: | |
32 | from redis import StrictRedis | |
33 | except ImportError: | |
34 | raise exc.CommandError( | |
35 | "To use this command, you should install " | |
36 | "'redis' manually. Use command:\n " | |
37 | "'pip install redis'.") | |
38 | ||
39 | parsed_url = parser.urlparse(self.connection_str) | |
40 | self.db = StrictRedis(host=parsed_url.hostname, | |
41 | port=parsed_url.port, | |
42 | db=db) | |
43 | self.namespace = "osprofiler:" | |
44 | ||
45 | @classmethod | |
46 | def get_name(cls): | |
47 | return "redis" | |
48 | ||
49 | def notify(self, info): | |
50 | """Send notifications to Redis. | |
51 | ||
52 | :param info: Contains information about trace element. | |
53 | In payload dict there are always 3 ids: | |
54 | "base_id" - uuid that is common for all notifications | |
55 | related to one trace. Used to simplify | |
56 | retrieving of all trace elements from | |
57 | Redis. | |
58 | "parent_id" - uuid of parent element in trace | |
59 | "trace_id" - uuid of current element in trace | |
60 | ||
61 | With parent_id and trace_id it's quite simple to build a | |
62 | tree of trace elements, which simplifies analysis of the trace. | |
63 | ||
64 | """ | |
65 | data = info.copy() | |
66 | data["project"] = self.project | |
67 | data["service"] = self.service | |
68 | key = self.namespace + data["base_id"] + "_" + data["trace_id"] + "_" + \ | |
69 | data["timestamp"] | |
70 | self.db.set(key, jsonutils.dumps(data)) | |
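The key layout written by ``notify`` makes per-trace retrieval a simple prefix scan. A minimal sketch of that scheme (a plain dict stands in for ``StrictRedis``; the ``store`` helper and sample ids are illustrative, not part of the driver API):

```python
# Sketch of the "osprofiler:<base_id>_<trace_id>_<timestamp>" key
# scheme used above; a plain dict stands in for the Redis server.
namespace = "osprofiler:"
db = {}

def store(base_id, trace_id, timestamp, payload):
    db[namespace + base_id + "_" + trace_id + "_" + timestamp] = payload

store("b1", "t1", "2017-01-01T00:00:00", '{"name": "wsgi-start"}')
store("b1", "t2", "2017-01-01T00:00:01", '{"name": "db-start"}')
store("b2", "t9", "2017-01-01T00:00:02", '{"name": "other"}')

# get_report(base_id) relies on scan_iter(match=namespace + base_id + "*"),
# which is equivalent to this prefix filter:
keys = [k for k in db if k.startswith(namespace + "b1")]
```

All elements of one trace share the ``base_id`` prefix, so a single pattern scan recovers the whole trace.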
71 | ||
72 | def list_traces(self, query="*", fields=None): | |
73 | """Returns array of all base_id fields that match the given criteria | |
74 | ||
75 | :param query: string that specifies the query criteria | |
76 | :param fields: iterable of strings that specifies the output fields | |
77 | """ | |
78 | # NOTE: avoid a mutable default argument; copy before mutating | |
79 | fields = list(fields) if fields else [] | |
80 | for base_field in ["base_id", "timestamp"]: | |
81 | if base_field not in fields: | |
82 | fields.append(base_field) | |
81 | ids = self.db.scan_iter(match=self.namespace + query) | |
82 | traces = [jsonutils.loads(self.db.get(i)) for i in ids] | |
83 | result = [] | |
84 | for trace in traces: | |
85 | result.append({key: value for key, value in trace.items() | |
86 | if key in fields}) | |
87 | return result | |
88 | ||
89 | def get_report(self, base_id): | |
90 | """Retrieves and parses notification from Redis. | |
91 | ||
92 | :param base_id: Base id of trace elements. | |
93 | """ | |
94 | for key in self.db.scan_iter(match=self.namespace + base_id + "*"): | |
95 | data = self.db.get(key) | |
96 | n = jsonutils.loads(data) | |
97 | trace_id = n["trace_id"] | |
98 | parent_id = n["parent_id"] | |
99 | name = n["name"] | |
100 | project = n["project"] | |
101 | service = n["service"] | |
102 | host = n["info"]["host"] | |
103 | timestamp = n["timestamp"] | |
104 | ||
105 | self._append_results(trace_id, parent_id, name, project, service, | |
106 | host, timestamp, n) | |
107 | ||
108 | return self._parse_results() | |
109 | ||
110 | ||
111 | class RedisSentinel(Redis, base.Driver): | |
112 | def __init__(self, connection_str, db=0, project=None, | |
113 | service=None, host=None, conf=cfg.CONF, **kwargs): | |
114 | """Redis Sentinel driver for OSProfiler.""" | |
115 | ||
116 | super(RedisSentinel, self).__init__(connection_str, project=project, | |
117 | service=service, host=host) | |
118 | try: | |
119 | from redis.sentinel import Sentinel | |
120 | except ImportError: | |
121 | raise exc.CommandError( | |
122 | "To use this command, you should install " | |
123 | "'redis' manually. Use command:\n " | |
124 | "'pip install redis'.") | |
125 | ||
126 | self.conf = conf | |
127 | socket_timeout = self.conf.profiler.socket_timeout | |
128 | parsed_url = parser.urlparse(self.connection_str) | |
129 | sentinel = Sentinel([(parsed_url.hostname, int(parsed_url.port))], | |
130 | socket_timeout=socket_timeout) | |
131 | self.db = sentinel.master_for(self.conf.profiler.sentinel_service_name, | |
132 | socket_timeout=socket_timeout) | |
133 | ||
134 | @classmethod | |
135 | def get_name(cls): | |
136 | return "redissentinel" |
0 | # Copyright 2014 Mirantis Inc. | |
1 | # All Rights Reserved. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
4 | # not use this file except in compliance with the License. You may obtain | |
5 | # a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
12 | # License for the specific language governing permissions and limitations | |
13 | # under the License. | |
14 | ||
15 | ||
16 | class CommandError(Exception): | |
17 | """Invalid usage of CLI.""" | |
18 | ||
19 | def __init__(self, message=None): | |
20 | self.message = message | |
21 | ||
22 | def __str__(self): | |
23 | return self.message or self.__class__.__doc__ | |
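The class docstring doubles as the default error text when no message is supplied. A quick illustration (the class is restated so the snippet is self-contained):

```python
class CommandError(Exception):
    """Invalid usage of CLI."""

    def __init__(self, message=None):
        self.message = message

    def __str__(self):
        # Fall back to the class docstring when no message was given
        return self.message or self.__class__.__doc__

default_text = str(CommandError())
custom_text = str(CommandError("redis is not installed"))
```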
24 | ||
25 | ||
26 | class LogInsightAPIError(Exception): | |
27 | pass | |
28 | ||
29 | ||
30 | class LogInsightLoginTimeout(Exception): | |
31 | pass |
0 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
1 | # not use this file except in compliance with the License. You may obtain | |
2 | # a copy of the License at | |
3 | # | |
4 | # http://www.apache.org/licenses/LICENSE-2.0 | |
5 | # | |
6 | # Unless required by applicable law or agreed to in writing, software | |
7 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
8 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
9 | # License for the specific language governing permissions and limitations | |
10 | # under the License. | |
11 | ||
12 | """ | |
13 | Guidelines for writing new hacking checks | |
14 | ||
15 | - Use only for OSProfiler specific tests. OpenStack general tests | |
16 | should be submitted to the common 'hacking' module. | |
17 | - Pick numbers in the range N3xx. Find the current test with | |
18 | the highest allocated number and then pick the next value. | |
19 | - Keep the test method code in the source file ordered based | |
20 | on the N3xx value. | |
21 | - List the new rule in the top level HACKING.rst file. | |
22 | - Add test cases for each new rule to tests/unit/test_hacking.py. | |
23 | ||
24 | """ | |
25 | ||
26 | import functools | |
27 | import re | |
28 | import tokenize | |
29 | ||
30 | re_assert_true_instance = re.compile( | |
31 | r"(.)*assertTrue\(isinstance\((\w|\.|\'|\"|\[|\])+, " | |
32 | r"(\w|\.|\'|\"|\[|\])+\)\)") | |
33 | re_assert_equal_type = re.compile( | |
34 | r"(.)*assertEqual\(type\((\w|\.|\'|\"|\[|\])+\), " | |
35 | r"(\w|\.|\'|\"|\[|\])+\)") | |
36 | re_assert_equal_end_with_none = re.compile(r"assertEqual\(.*?,\s+None\)$") | |
37 | re_assert_equal_start_with_none = re.compile(r"assertEqual\(None,") | |
38 | re_assert_true_false_with_in_or_not_in = re.compile( | |
39 | r"assert(True|False)\(" | |
40 | r"(\w|[][.'\"])+( not)? in (\w|[][.'\",])+(, .*)?\)") | |
41 | re_assert_true_false_with_in_or_not_in_spaces = re.compile( | |
42 | r"assert(True|False)\((\w|[][.'\"])+( not)? in [\[|'|\"](\w|[][.'\", ])+" | |
43 | r"[\[|'|\"](, .*)?\)") | |
44 | re_assert_equal_in_end_with_true_or_false = re.compile( | |
45 | r"assertEqual\((\w|[][.'\"])+( not)? in (\w|[][.'\", ])+, (True|False)\)") | |
46 | re_assert_equal_in_start_with_true_or_false = re.compile( | |
47 | r"assertEqual\((True|False), (\w|[][.'\"])+( not)? in (\w|[][.'\", ])+\)") | |
48 | re_no_construct_dict = re.compile( | |
49 | r"\sdict\(\)") | |
50 | re_no_construct_list = re.compile( | |
51 | r"\slist\(\)") | |
52 | re_str_format = re.compile(r""" | |
53 | % # start of specifier | |
54 | \(([^)]+)\) # mapping key, in group 1 | |
55 | [#0 +\-]? # optional conversion flag | |
56 | (?:-?\d*)? # optional minimum field width | |
57 | (?:\.\d*)? # optional precision | |
58 | [hLl]? # optional length modifier | |
59 | [A-Za-z%] # conversion type | |
60 | """, re.X) | |
61 | re_raises = re.compile( | |
62 | r"\s:raise[^s] *.*$|\s:raises *:.*$|\s:raises *[^:]+$") | |
63 | ||
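These patterns can be sanity-checked directly; for example, the ``assertTrue(isinstance(...))`` matcher (restated here verbatim so the snippet is self-contained):

```python
import re

# Same pattern as re_assert_true_instance above.
pattern = re.compile(
    r"(.)*assertTrue\(isinstance\((\w|\.|\'|\"|\[|\])+, "
    r"(\w|\.|\'|\"|\[|\])+\)\)")

flagged = pattern.match("self.assertTrue(isinstance(res, dict))")
clean = pattern.match("self.assertIsInstance(res, dict)")
```

The first line matches (and would be reported as N320); the recommended ``assertIsInstance`` form does not.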
64 | ||
65 | def skip_ignored_lines(func): | |
66 | ||
67 | @functools.wraps(func) | |
68 | def wrapper(logical_line, filename): | |
69 | line = logical_line.strip() | |
70 | if not line or line.startswith("#") or line.endswith("# noqa"): | |
71 | return | |
72 | # NOTE: iterate instead of next() so a check that yields nothing | |
73 | # does not raise StopIteration inside this generator (PEP 479) | |
74 | for res in func(logical_line, filename): | |
75 | yield res | |
73 | ||
74 | return wrapper | |
75 | ||
76 | ||
77 | def _parse_assert_mock_str(line): | |
78 | point = line.find(".assert_") | |
79 | ||
80 | if point != -1: | |
81 | end_pos = line[point:].find("(") + point | |
82 | return point, line[point + 1: end_pos], line[: point] | |
83 | else: | |
84 | return None, None, None | |
85 | ||
86 | ||
87 | @skip_ignored_lines | |
88 | def check_assert_methods_from_mock(logical_line, filename): | |
89 | """Ensure that ``assert_*`` methods from ``mock`` library are used correctly | |
90 | ||
91 | N301 - base error number | |
92 | N302 - related to nonexistent "assert_called" | |
93 | N303 - related to nonexistent "assert_called_once" | |
94 | """ | |
95 | ||
96 | correct_names = ["assert_any_call", "assert_called_once_with", | |
97 | "assert_called_with", "assert_has_calls"] | |
98 | ignored_files = ["./tests/unit/test_hacking.py"] | |
99 | ||
100 | if filename.startswith("./tests") and filename not in ignored_files: | |
101 | pos, method_name, obj_name = _parse_assert_mock_str(logical_line) | |
102 | ||
103 | if pos is not None: | |
104 | if method_name not in correct_names: | |
105 | error_number = "N301" | |
106 | msg = ("%(error_number)s:'%(method)s' is not present in `mock`" | |
107 | " library. %(custom_msg)s For more details, visit " | |
108 | "http://www.voidspace.org.uk/python/mock/ .") | |
109 | ||
110 | if method_name == "assert_called": | |
111 | error_number = "N302" | |
112 | custom_msg = ("Maybe, you should try to use " | |
113 | "'assertTrue(%s.called)' instead." % | |
114 | obj_name) | |
115 | elif method_name == "assert_called_once": | |
116 | # For more details, see a bug in Rally: | |
117 | # https://bugs.launchpad.net/rally/+bug/1305991 | |
118 | error_number = "N303" | |
119 | custom_msg = ("Maybe, you should try to use " | |
120 | "'assertEqual(1, %s.call_count)' " | |
121 | "or '%s.assert_called_once_with()'" | |
122 | " instead." % (obj_name, obj_name)) | |
123 | else: | |
124 | custom_msg = ("Correct 'assert_*' methods: '%s'." | |
125 | % "', '".join(correct_names)) | |
126 | ||
127 | yield (pos, msg % { | |
128 | "error_number": error_number, | |
129 | "method": method_name, | |
130 | "custom_msg": custom_msg}) | |
131 | ||
132 | ||
133 | @skip_ignored_lines | |
134 | def assert_true_instance(logical_line, filename): | |
135 | """Check for assertTrue(isinstance(a, b)) sentences | |
136 | ||
137 | N320 | |
138 | """ | |
139 | if re_assert_true_instance.match(logical_line): | |
140 | yield (0, "N320 assertTrue(isinstance(a, b)) sentences not allowed, " | |
141 | "you should use assertIsInstance(a, b) instead.") | |
142 | ||
143 | ||
144 | @skip_ignored_lines | |
145 | def assert_equal_type(logical_line, filename): | |
146 | """Check for assertEqual(type(A), B) sentences | |
147 | ||
148 | N321 | |
149 | """ | |
150 | if re_assert_equal_type.match(logical_line): | |
151 | yield (0, "N321 assertEqual(type(A), B) sentences not allowed, " | |
152 | "you should use assertIsInstance(a, b) instead.") | |
153 | ||
154 | ||
155 | @skip_ignored_lines | |
156 | def assert_equal_none(logical_line, filename): | |
157 | """Check for assertEqual(A, None) or assertEqual(None, A) sentences | |
158 | ||
159 | N322 | |
160 | """ | |
161 | res = (re_assert_equal_start_with_none.search(logical_line) or | |
162 | re_assert_equal_end_with_none.search(logical_line)) | |
163 | if res: | |
164 | yield (0, "N322 assertEqual(A, None) or assertEqual(None, A) " | |
165 | "sentences not allowed, you should use assertIsNone(A) " | |
166 | "instead.") | |
167 | ||
168 | ||
169 | @skip_ignored_lines | |
170 | def assert_true_or_false_with_in(logical_line, filename): | |
171 | """Check assertTrue/False(A in/not in B) with collection contents | |
172 | ||
173 | Check for assertTrue/False(A in B), assertTrue/False(A not in B), | |
174 | assertTrue/False(A in B, message) or assertTrue/False(A not in B, message) | |
175 | sentences. | |
176 | ||
177 | N323 | |
178 | """ | |
179 | res = (re_assert_true_false_with_in_or_not_in.search(logical_line) or | |
180 | re_assert_true_false_with_in_or_not_in_spaces.search(logical_line)) | |
181 | if res: | |
182 | yield (0, "N323 assertTrue/assertFalse(A in/not in B) sentences not " | |
183 | "allowed, you should use assertIn(A, B) or assertNotIn(A, B)" | |
184 | " instead.") | |
185 | ||
186 | ||
187 | @skip_ignored_lines | |
188 | def assert_equal_in(logical_line, filename): | |
189 | """Check assertEqual(A in/not in B, True/False) with collection contents | |
190 | ||
191 | Check for assertEqual(A in B, True/False), assertEqual(True/False, A in B), | |
192 | assertEqual(A not in B, True/False) or assertEqual(True/False, A not in B) | |
193 | sentences. | |
194 | ||
195 | N324 | |
196 | """ | |
197 | res = (re_assert_equal_in_end_with_true_or_false.search(logical_line) or | |
198 | re_assert_equal_in_start_with_true_or_false.search(logical_line)) | |
199 | if res: | |
200 | yield (0, "N324: Use assertIn/NotIn(A, B) rather than " | |
201 | "assertEqual(A in/not in B, True/False) when checking " | |
202 | "collection contents.") | |
203 | ||
204 | ||
205 | @skip_ignored_lines | |
206 | def check_quotes(logical_line, filename): | |
207 | """Check that single quotation marks are not used | |
208 | ||
209 | N350 | |
210 | """ | |
211 | ||
212 | in_string = False | |
213 | in_multiline_string = False | |
214 | single_quotes_are_used = False | |
215 | ||
216 | check_triple = ( | |
217 | lambda line, i, char: ( | |
218 | i + 2 < len(line) and | |
219 | (char == line[i] == line[i + 1] == line[i + 2]) | |
220 | ) | |
221 | ) | |
222 | ||
223 | i = 0 | |
224 | while i < len(logical_line): | |
225 | char = logical_line[i] | |
226 | ||
227 | if in_string: | |
228 | if char == "\"": | |
229 | in_string = False | |
230 | if char == "\\": | |
231 | i += 1 # ignore next char | |
232 | ||
233 | elif in_multiline_string: | |
234 | if check_triple(logical_line, i, "\""): | |
235 | i += 2 # skip next 2 chars | |
236 | in_multiline_string = False | |
237 | ||
238 | elif char == "#": | |
239 | break | |
240 | ||
241 | elif char == "'": | |
242 | single_quotes_are_used = True | |
243 | break | |
244 | ||
245 | elif char == "\"": | |
246 | if check_triple(logical_line, i, "\""): | |
247 | in_multiline_string = True | |
248 | i += 3 | |
249 | continue | |
250 | in_string = True | |
251 | ||
252 | i += 1 | |
253 | ||
254 | if single_quotes_are_used: | |
255 | yield (i, "N350 Remove single quotes") | |
256 | ||
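The triple-quote handling in the scanner above boils down to a three-identical-characters lookahead. Restated as a plain function for illustration:

```python
def check_triple(line, i, char):
    # True when line[i:i+3] is three consecutive `char` characters
    return i + 2 < len(line) and (char == line[i] == line[i + 1] == line[i + 2])

# Index 4 is the first quote of a docstring-style triple quote:
opens_multiline = check_triple('x = """doc"""', 4, '"')
# A single double quote does not qualify:
plain_string = check_triple('x = "s"', 4, '"')
```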
257 | ||
258 | @skip_ignored_lines | |
259 | def check_no_constructor_data_struct(logical_line, filename): | |
260 | """Check that data structs (lists, dicts) are declared using literals | |
261 | ||
262 | N351 | |
263 | """ | |
264 | ||
265 | match = re_no_construct_dict.search(logical_line) | |
266 | if match: | |
267 | yield (0, "N351 Remove dict() construct and use literal {}") | |
268 | match = re_no_construct_list.search(logical_line) | |
269 | if match: | |
270 | yield (0, "N351 Remove list() construct and use literal []") | |
271 | ||
272 | ||
273 | def check_dict_formatting_in_string(logical_line, tokens): | |
274 | """Check that strings do not use dict-formatting with a single replacement | |
275 | ||
276 | N352 | |
277 | """ | |
278 | # NOTE(stpierre): Can't use @skip_ignored_lines here because it's | |
279 | # a stupid decorator that only works on functions that take | |
280 | # (logical_line, filename) as arguments. | |
281 | if (not logical_line or | |
282 | logical_line.startswith("#") or | |
283 | logical_line.endswith("# noqa")): | |
284 | return | |
285 | ||
286 | current_string = "" | |
287 | in_string = False | |
288 | for token_type, text, start, end, line in tokens: | |
289 | if token_type == tokenize.STRING: | |
290 | if not in_string: | |
291 | current_string = "" | |
292 | in_string = True | |
293 | current_string += text.strip("\"") | |
294 | elif token_type == tokenize.OP: | |
295 | if not current_string: | |
296 | continue | |
297 | # NOTE(stpierre): The string formatting operator % has | |
298 | # lower precedence than +, so we assume that the logical | |
299 | # string has concluded whenever we hit an operator of any | |
300 | # sort. (Most operators don't work for strings anyway.) | |
301 | # Some string operators do have higher precedence than %, | |
302 | # though, so you can technically trick this check by doing | |
303 | # things like: | |
304 | # | |
305 | # "%(foo)s" * 1 % {"foo": 1} | |
306 | # "%(foo)s"[:] % {"foo": 1} | |
307 | # | |
308 | # It also will produce false positives if you use explicit | |
309 | # parenthesized addition for two strings instead of | |
310 | # concatenation by juxtaposition, e.g.: | |
311 | # | |
312 | # ("%(foo)s" + "%(bar)s") % vals | |
313 | # | |
314 | # But if you do any of those things, then you deserve all | |
315 | # of the horrible things that happen to you, and probably | |
316 | # many more. | |
317 | in_string = False | |
318 | if text == "%": | |
319 | format_keys = set() | |
320 | for match in re_str_format.finditer(current_string): | |
321 | format_keys.add(match.group(1)) | |
322 | if len(format_keys) == 1: | |
323 | yield (0, | |
324 | "N352 Do not use mapping key string formatting " | |
325 | "with a single key") | |
326 | if text != ")": | |
327 | # NOTE(stpierre): You can have a parenthesized string | |
328 | # followed by %, so a closing paren doesn't obviate | |
329 | # the possibility for a substitution operator like | |
330 | # every other operator does. | |
331 | current_string = "" | |
332 | elif token_type in (tokenize.NL, tokenize.COMMENT): | |
333 | continue | |
334 | else: | |
335 | in_string = False | |
336 | if token_type == tokenize.NEWLINE: | |
337 | current_string = "" | |
338 | ||
339 | ||
340 | @skip_ignored_lines | |
341 | def check_using_unicode(logical_line, filename): | |
342 | """Check crosspython unicode usage | |
343 | ||
344 | N353 | |
345 | """ | |
346 | ||
347 | if re.search(r"\bunicode\(", logical_line): | |
348 | yield (0, "N353 'unicode' function is absent in python3. Please " | |
349 | "use 'six.text_type' instead.") | |
350 | ||
351 | ||
352 | def check_raises(physical_line, filename): | |
353 | """Check raises usage | |
354 | ||
355 | N354 | |
356 | """ | |
357 | ||
358 | ignored_files = ["./tests/unit/test_hacking.py", | |
359 | "./tests/hacking/checks.py"] | |
360 | if filename not in ignored_files: | |
361 | if re_raises.search(physical_line): | |
362 | return (0, "N354 Please use ':raises Exception: conditions' " | |
363 | "in docstrings.") | |
364 | ||
365 | ||
366 | def factory(register): | |
367 | register(check_assert_methods_from_mock) | |
368 | register(assert_true_instance) | |
369 | register(assert_equal_type) | |
370 | register(assert_equal_none) | |
371 | register(assert_true_or_false_with_in) | |
372 | register(assert_equal_in) | |
373 | register(check_quotes) | |
374 | register(check_no_constructor_data_struct) | |
375 | register(check_dict_formatting_in_string) | |
376 | register(check_using_unicode) | |
377 | register(check_raises) |
0 | # Copyright 2016 Mirantis Inc. | |
1 | # All Rights Reserved. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
4 | # not use this file except in compliance with the License. You may obtain | |
5 | # a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
12 | # License for the specific language governing permissions and limitations | |
13 | # under the License. | |
14 | ||
15 | import oslo_messaging | |
16 | ||
17 | from osprofiler import notifier | |
18 | from osprofiler import web | |
19 | ||
20 | ||
21 | def init_from_conf(conf, context, project, service, host): | |
22 | """Initialize notifier from service configuration | |
23 | ||
24 | :param conf: service configuration | |
25 | :param context: request context | |
26 | :param project: project name (keystone, cinder etc.) | |
27 | :param service: service name that will be profiled | |
28 | :param host: hostname or host IP address that the service will be | |
29 | running on. | |
30 | """ | |
31 | connection_str = conf.profiler.connection_string | |
32 | kwargs = {} | |
33 | if connection_str.startswith("messaging"): | |
34 | kwargs = {"messaging": oslo_messaging, | |
35 | "transport": oslo_messaging.get_notification_transport(conf)} | |
36 | _notifier = notifier.create( | |
37 | connection_str, | |
38 | context=context, | |
39 | project=project, | |
40 | service=service, | |
41 | host=host, | |
42 | conf=conf, | |
43 | **kwargs) | |
44 | notifier.set(_notifier) | |
45 | web.enable(conf.profiler.hmac_keys) |
12 | 12 | # License for the specific language governing permissions and limitations |
13 | 13 | # under the License. |
14 | 14 | |
15 | from osprofiler._notifiers import base | |
15 | from osprofiler.drivers import base | |
16 | 16 | |
17 | 17 | |
18 | 18 | def _noop_notifier(info, context=None): |
21 | 21 | |
22 | 22 | # NOTE(boris-42): By default we are using noop notifier. |
23 | 23 | __notifier = _noop_notifier |
24 | __driver_cache = {} | |
24 | 25 | |
25 | 26 | |
26 | 27 | def notify(info): |
47 | 48 | __notifier = notifier |
48 | 49 | |
49 | 50 | |
50 | def create(plugin_name, *args, **kwargs): | |
51 | def create(connection_string, *args, **kwargs): | |
51 | 52 | """Create notifier based on the specified connection string
52 | 53 | |
53 | :param plugin_name: Name of plugin that creates notifier | |
54 | :param *args: args that will be passed to plugin init method | |
55 | :param **kwargs: kwargs that will be passed to plugin init method | |
54 | :param connection_string: connection string which specifies the storage | |
55 | driver for notifier | |
56 | :param *args: args that will be passed to the driver's __init__ method | |
57 | :param **kwargs: kwargs that will be passed to the driver's __init__ method | |
56 | 58 | :returns: Callable notifier method |
57 | 59 | :raises TypeError: In case of invalid name of plugin raises TypeError |
58 | 60 | """ |
59 | return base.Notifier.factory(plugin_name, *args, **kwargs) | |
61 | global __driver_cache | |
62 | if connection_string not in __driver_cache: | |
63 | __driver_cache[connection_string] = base.get_driver(connection_string, | |
64 | *args, | |
65 | **kwargs).notify | |
66 | return __driver_cache[connection_string] |
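The cache keeps one notifier callable per connection string, so repeated ``create()`` calls do not rebuild the driver. A minimal sketch of that memoization (``factory`` stands in for ``base.get_driver``; names are illustrative):

```python
_driver_cache = {}

def create(connection_string, factory):
    # Build the notifier once per connection string, then reuse it.
    if connection_string not in _driver_cache:
        _driver_cache[connection_string] = factory(connection_string)
    return _driver_cache[connection_string]

built = []

def fake_factory(cs):
    built.append(cs)
    return lambda info: None

n1 = create("redis://127.0.0.1:6379", fake_factory)
n2 = create("redis://127.0.0.1:6379", fake_factory)
```

The second call returns the cached callable; the factory runs only once.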
32 | 32 | _enabled_opt = cfg.BoolOpt( |
33 | 33 | "enabled", |
34 | 34 | default=False, |
35 | deprecated_group="profiler", | |
36 | 35 | deprecated_name="profiler_enabled", |
37 | 36 | help=""" |
38 | 37 | Enables the profiling for all services on this node. Default value is False |
78 | 77 | ensures it can be used from client side to generate the trace, containing |
79 | 78 | information from all possible resources.""") |
80 | 79 | |
80 | _connection_string_opt = cfg.StrOpt( | |
81 | "connection_string", | |
82 | default="messaging://", | |
83 | help=""" | |
84 | Connection string for a notifier backend. Default value is messaging:// which | |
85 | sets the notifier to oslo_messaging. | |
86 | ||
87 | Examples of possible values: | |
88 | ||
89 | * messaging://: use oslo_messaging driver for sending notifications. | |
90 | * mongodb://127.0.0.1:27017 : use mongodb driver for sending notifications. | |
91 | * elasticsearch://127.0.0.1:9200 : use elasticsearch driver for sending | |
92 | notifications. | |
93 | """) | |
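The example values above are ordinary URLs, so drivers can split them with the standard library, mirroring how the Redis driver parses its connection string:

```python
try:
    from urllib.parse import urlparse   # Python 3
except ImportError:
    from urlparse import urlparse       # Python 2

url = urlparse("elasticsearch://127.0.0.1:9200")
scheme, host, port = url.scheme, url.hostname, url.port
```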
94 | ||
95 | _es_doc_type_opt = cfg.StrOpt( | |
96 | "es_doc_type", | |
97 | default="notification", | |
98 | help=""" | |
99 | Document type for notification indexing in elasticsearch. | |
100 | """) | |
101 | ||
102 | _es_scroll_time_opt = cfg.StrOpt( | |
103 | "es_scroll_time", | |
104 | default="2m", | |
105 | help=""" | |
106 | This parameter is a time value parameter (for example: es_scroll_time=2m), | |
107 | indicating how long the nodes that take part in the search will keep | |
108 | relevant resources available in order to continue and support it. | |
109 | """) | |
110 | ||
111 | _es_scroll_size_opt = cfg.IntOpt( | |
112 | "es_scroll_size", | |
113 | default=10000, | |
114 | help=""" | |
115 | Elasticsearch splits large requests in batches. This parameter defines the | |
116 | maximum size of each batch (for example: es_scroll_size=10000). | |
117 | """) | |
118 | ||
119 | _socket_timeout_opt = cfg.FloatOpt( | |
120 | "socket_timeout", | |
121 | default=0.1, | |
122 | help=""" | |
123 | Redis Sentinel provides a timeout option for connections. | |
124 | This parameter defines that timeout (for example: socket_timeout=0.1). | |
125 | """) | |
126 | ||
127 | _sentinel_service_name_opt = cfg.StrOpt( | |
128 | "sentinel_service_name", | |
129 | default="mymaster", | |
130 | help=""" | |
131 | Redis Sentinel uses a service name to identify the master Redis service. | |
132 | This parameter defines the name (for example: | |
133 | sentinel_service_name=mymaster). | |
134 | """) | |
135 | ||
136 | ||
81 | 137 | _PROFILER_OPTS = [ |
82 | 138 | _enabled_opt, |
83 | 139 | _trace_sqlalchemy_opt, |
84 | 140 | _hmac_keys_opt, |
141 | _connection_string_opt, | |
142 | _es_doc_type_opt, | |
143 | _es_scroll_time_opt, | |
144 | _es_scroll_size_opt, | |
145 | _socket_timeout_opt, | |
146 | _sentinel_service_name_opt | |
85 | 147 | ] |
86 | 148 | |
87 | ||
88 | def set_defaults(conf, enabled=None, trace_sqlalchemy=None, hmac_keys=None): | |
149 | cfg.CONF.register_opts(_PROFILER_OPTS, group=_profiler_opt_group) | |
150 | ||
151 | ||
152 | def set_defaults(conf, enabled=None, trace_sqlalchemy=None, hmac_keys=None, | |
153 | connection_string=None, es_doc_type=None, | |
154 | es_scroll_time=None, es_scroll_size=None, | |
155 | socket_timeout=None, sentinel_service_name=None): | |
89 | 156 | conf.register_opts(_PROFILER_OPTS, group=_profiler_opt_group) |
90 | 157 | |
91 | 158 | if enabled is not None: |
98 | 165 | conf.set_default("hmac_keys", hmac_keys, |
99 | 166 | group=_profiler_opt_group.name) |
100 | 167 | |
168 | if connection_string is not None: | |
169 | conf.set_default("connection_string", connection_string, | |
170 | group=_profiler_opt_group.name) | |
171 | ||
172 | if es_doc_type is not None: | |
173 | conf.set_default("es_doc_type", es_doc_type, | |
174 | group=_profiler_opt_group.name) | |
175 | ||
176 | if es_scroll_time is not None: | |
177 | conf.set_default("es_scroll_time", es_scroll_time, | |
178 | group=_profiler_opt_group.name) | |
179 | ||
180 | if es_scroll_size is not None: | |
181 | conf.set_default("es_scroll_size", es_scroll_size, | |
182 | group=_profiler_opt_group.name) | |
183 | ||
184 | if socket_timeout is not None: | |
185 | conf.set_default("socket_timeout", socket_timeout, | |
186 | group=_profiler_opt_group.name) | |
187 | ||
188 | if sentinel_service_name is not None: | |
189 | conf.set_default("sentinel_service_name", sentinel_service_name, | |
190 | group=_profiler_opt_group.name) | |
191 | ||
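Each stanza in ``set_defaults`` follows the same guard: override a registered default only when the caller actually passed a value. The pattern, stated generically with a plain dict standing in for the oslo.config object:

```python
def apply_overrides(defaults, **overrides):
    # Mirror of the "if value is not None: conf.set_default(...)" guards.
    for name, value in overrides.items():
        if value is not None:
            defaults[name] = value
    return defaults

conf = apply_overrides({"socket_timeout": 0.1,
                        "sentinel_service_name": "mymaster"},
                       socket_timeout=0.5,
                       sentinel_service_name=None)
```

Passing ``None`` leaves the registered default untouched, so services only override what they configure.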
101 | 192 | |
102 | 193 | def is_trace_enabled(conf=None): |
103 | 194 | if conf is None: |
0 | # Copyright 2014 Mirantis Inc. | |
1 | # All Rights Reserved. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
4 | # not use this file except in compliance with the License. You may obtain | |
5 | # a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
12 | # License for the specific language governing permissions and limitations | |
13 | # under the License. | |
14 | ||
15 | import datetime | |
16 | ||
17 | ||
18 | def _build_tree(nodes): | |
19 | """Builds the tree (forest) data structure based on the list of nodes. | |
20 | ||
21 | Works in O(n). | |
22 | ||
23 | :param nodes: list of nodes, where each node is a dictionary with fields | |
24 | "parent_id", "trace_id", "info" | |
25 | :returns: list of top level ("root") nodes in form of dictionaries, | |
26 | each containing the "info" and "children" fields, where | |
27 | "children" is the list of child nodes ("children" will be | |
28 | empty for leafs) | |
29 | """ | |
30 | ||
31 | tree = [] | |
32 | ||
33 | for trace_id in nodes: | |
34 | node = nodes[trace_id] | |
35 | node.setdefault("children", []) | |
36 | parent_id = node["parent_id"] | |
37 | if parent_id in nodes: | |
38 | nodes[parent_id].setdefault("children", []) | |
39 | nodes[parent_id]["children"].append(node) | |
40 | else: | |
41 | tree.append(node) # no parent => top-level node | |
42 | ||
43 | for node in nodes: | |
44 | nodes[node]["children"].sort(key=lambda x: x["info"]["started"]) | |
45 | ||
46 | return sorted(tree, key=lambda x: x["info"]["started"]) | |
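The builder can be exercised end-to-end; this condensed restatement (same single-pass algorithm, simplified names) shows the input/output contract described in the docstring:

```python
def build_tree(nodes):
    # nodes: {trace_id: {"parent_id": ..., "info": {"started": ...}}}
    tree = []
    for node in nodes.values():
        node.setdefault("children", [])
        parent = nodes.get(node["parent_id"])
        if parent is not None:
            parent.setdefault("children", []).append(node)
        else:
            tree.append(node)  # no parent => top-level node
    for node in nodes.values():
        node["children"].sort(key=lambda x: x["info"]["started"])
    return sorted(tree, key=lambda x: x["info"]["started"])

forest = build_tree({
    "a": {"trace_id": "a", "parent_id": None, "info": {"started": 0}},
    "b": {"trace_id": "b", "parent_id": "a", "info": {"started": 1}},
    "c": {"trace_id": "c", "parent_id": "a", "info": {"started": 2}},
})
```

Each node is visited a constant number of times, which is where the O(n) claim comes from.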
47 | ||
48 | ||
49 | def parse_notifications(notifications): | |
50 | """Parses & builds a tree structure from ceilometer notifications.""" | |
51 | ||
52 | result = {} | |
53 | started_at = 0 | |
54 | finished_at = 0 | |
55 | ||
56 | for n in notifications: | |
57 | traits = n["traits"] | |
58 | ||
59 | def find_field(f_name): | |
60 | return [t["value"] for t in traits if t["name"] == f_name][0] | |
61 | ||
62 | trace_id = find_field("trace_id") | |
63 | parent_id = find_field("parent_id") | |
64 | name = find_field("name") | |
65 | project = find_field("project") | |
66 | service = find_field("service") | |
67 | host = find_field("host") | |
68 | timestamp = find_field("timestamp") | |
69 | ||
70 | timestamp = datetime.datetime.strptime(timestamp, | |
71 | "%Y-%m-%dT%H:%M:%S.%f") | |
72 | ||
73 | if trace_id not in result: | |
74 | result[trace_id] = { | |
75 | "info": { | |
76 | "name": name.split("-")[0], | |
77 | "project": project, | |
78 | "service": service, | |
79 | "host": host, | |
80 | }, | |
81 | "trace_id": trace_id, | |
82 | "parent_id": parent_id, | |
83 | } | |
84 | ||
85 | result[trace_id]["info"]["meta.raw_payload.%s" % name] = n.get( | |
86 | "raw", {}).get("payload", {}) | |
87 | ||
88 | if name.endswith("stop"): | |
89 | result[trace_id]["info"]["finished"] = timestamp | |
90 | else: | |
91 | result[trace_id]["info"]["started"] = timestamp | |
92 | ||
93 | if not started_at or started_at > timestamp: | |
94 | started_at = timestamp | |
95 | ||
96 | if not finished_at or finished_at < timestamp: | |
97 | finished_at = timestamp | |
98 | ||
99 | def msec(dt): | |
100 | # NOTE(boris-42): Unfortunately this is the simplest way that works in | |
101 | # py26 and py27 | |
102 | microsec = (dt.microseconds + (dt.seconds + dt.days * 24 * 3600) * 1e6) | |
103 | return int(microsec / 1000.0) | |
104 | ||
105 | for r in result.values(): | |
106 | # NOTE(boris-42): We are not able to guarantee that ceilometer consumed | |
107 | # all messages => so in that case we make the duration 0ms. | |
108 | if "started" not in r["info"]: | |
109 | r["info"]["started"] = r["info"]["finished"] | |
110 | if "finished" not in r["info"]: | |
111 | r["info"]["finished"] = r["info"]["started"] | |
112 | ||
113 | r["info"]["started"] = msec(r["info"]["started"] - started_at) | |
114 | r["info"]["finished"] = msec(r["info"]["finished"] - started_at) | |
115 | ||
116 | return { | |
117 | "info": { | |
118 | "name": "total", | |
119 | "started": 0, | |
120 | "finished": msec(finished_at - started_at) if started_at else 0 | |
121 | }, | |
122 | "children": _build_tree(result) | |
123 | } | |
124 | ||
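The ``msec`` helper above converts a ``timedelta`` to whole milliseconds without ``total_seconds()`` (which is absent in py26). Restated for illustration:

```python
import datetime

def msec(dt):
    # Same arithmetic as above: flatten to microseconds, then to ms.
    microsec = dt.microseconds + (dt.seconds + dt.days * 24 * 3600) * 1e6
    return int(microsec / 1000.0)

delta = datetime.timedelta(seconds=1, microseconds=500000)
```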
125 | ||
126 | def get_notifications(ceilometer, base_id): | |
127 | """Retrieves and parses notification from ceilometer. | |
128 | ||
129 | :param ceilometer: Initialized ceilometer client. | |
130 | :param base_id: Base id of trace elements. | |
131 | """ | |
132 | ||
133 | _filter = [{"field": "base_id", "op": "eq", "value": base_id}] | |
134 | # The limit is currently hardcoded. Later it will be configurable | |
135 | # via the connection string. | |
136 | return [n.to_dict() | |
137 | for n in ceilometer.events.list(_filter, limit=100000)] |
18 | 18 | import inspect |
19 | 19 | import socket |
20 | 20 | import threading |
21 | import uuid | |
22 | 21 | |
23 | 22 | from oslo_utils import reflection |
24 | import six | |
23 | from oslo_utils import uuidutils | |
25 | 24 | |
26 | 25 | from osprofiler import notifier |
27 | 26 | |
34 | 33 | __local_ctx.profiler = None |
35 | 34 | |
36 | 35 | |
37 | def init(hmac_key, base_id=None, parent_id=None): | |
36 | def _ensure_no_multiple_traced(traceable_attrs): | |
37 | for attr_name, attr in traceable_attrs: | |
38 | traced_times = getattr(attr, "__traced__", 0) | |
39 | if traced_times: | |
40 | raise ValueError("Can not apply new trace on top of" | |
41 | " previously traced attribute '%s' since" | |
42 | " it has been traced %s times previously" | |
43 | % (attr_name, traced_times)) | |
44 | ||
45 | ||
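`_ensure_no_multiple_traced` only inspects a `__traced__` counter that the trace decorator stamps onto wrapped functions. A self-contained sketch of that guard (the `mark_traced` helper is a hypothetical stand-in for the real decorator):

```python
def ensure_no_multiple_traced(traceable_attrs):
    # Mirrors the check above: refuse to re-trace an already-stamped attribute.
    for attr_name, attr in traceable_attrs:
        traced_times = getattr(attr, "__traced__", 0)
        if traced_times:
            raise ValueError("'%s' already traced %s times"
                             % (attr_name, traced_times))

def mark_traced(f):
    # Stand-in for what the real trace decorator does to the function.
    f.__traced__ = getattr(f, "__traced__", 0) + 1
    return f

def query():
    pass

ensure_no_multiple_traced([("query", query)])  # fresh function: passes
mark_traced(query)
try:
    ensure_no_multiple_traced([("query", query)])
    rejected = False
except ValueError:
    rejected = True
```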
46 | def init(hmac_key, base_id=None, parent_id=None, connection_str=None, | |
47 | project=None, service=None): | |
38 | 48 | """Init profiler instance for current thread. |
39 | 49 | |
40 | 50 | You should call profiler.init() before using osprofiler. |
43 | 53 | :param hmac_key: secret key to sign trace information. |
44 | 54 | :param base_id: Used to bind all related traces. |
45 | 55 | :param parent_id: Used to build tree of traces. |
56 | :param connection_str: Connection string to the backend to use for | |
57 | notifications. | |
58 | :param project: Project name that is under profiling | |
59 | :param service: Service name that is under profiling | |
46 | 60 | :returns: Profiler instance |
47 | 61 | """ |
48 | 62 | __local_ctx.profiler = _Profiler(hmac_key, base_id=base_id, |
49 | parent_id=parent_id) | |
63 | parent_id=parent_id, | |
64 | connection_str=connection_str, | |
65 | project=project, service=service) | |
50 | 66 | return __local_ctx.profiler |
51 | 67 | |
52 | 68 | |
77 | 93 | profiler.stop(info=info) |
78 | 94 | |
79 | 95 | |
80 | def trace(name, info=None, hide_args=False): | |
96 | def trace(name, info=None, hide_args=False, allow_multiple_trace=True): | |
81 | 97 | """Trace decorator for functions. |
82 | 98 | |
83 | 99 | Very useful if you would like to add trace point on existing function: |
92 | 108 | :param hide_args: Don't push to trace info args and kwargs. Quite useful |
93 | 109 | if you have some info in args that you don't want to share,
94 | 110 | e.g. passwords. |
111 | :param allow_multiple_trace: If the wrapped function has already been | |
112 | traced either allow the new trace to occur | |
113 | or raise a value error denoting that multiple | |
114 | tracing is not allowed (by default allow). | |
95 | 115 | """ |
96 | 116 | if not info: |
97 | 117 | info = {} |
100 | 120 | info["function"] = {} |
101 | 121 | |
102 | 122 | def decorator(f): |
123 | trace_times = getattr(f, "__traced__", 0) | |
124 | if not allow_multiple_trace and trace_times: | |
125 | raise ValueError("Function '%s' has already" | |
126 | " been traced %s times" % (f, trace_times)) | |
127 | ||
128 | try: | |
129 | f.__traced__ = trace_times + 1 | |
130 | except AttributeError: | |
131 | # Tries to work around the following: | |
132 | # | |
133 | # AttributeError: 'instancemethod' object has no | |
134 | # attribute '__traced__' | |
135 | try: | |
136 | f.im_func.__traced__ = trace_times + 1 | |
137 | except AttributeError: # nosec | |
138 | pass | |
103 | 139 | |
104 | 140 | @functools.wraps(f) |
105 | 141 | def wrapper(*args, **kwargs): |
120 | 156 | return decorator |
121 | 157 | |
122 | 158 | |
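The decorator above stamps a `__traced__` counter onto the wrapped function so repeated tracing can be detected. A self-contained sketch of that mechanism (simplified: no start/stop notifications are emitted, the wrapper just calls through):

```python
import functools

def trace(name, allow_multiple_trace=True):
    def decorator(f):
        trace_times = getattr(f, "__traced__", 0)
        if not allow_multiple_trace and trace_times:
            raise ValueError("Function %r already traced %s times"
                             % (f, trace_times))
        f.__traced__ = trace_times + 1

        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            # The real decorator emits "<name>-start"/"<name>-stop"
            # notifications around the call; here we only call through.
            return f(*args, **kwargs)
        return wrapper
    return decorator

@trace("db")
def fetch(x):
    return x * 2
```

Note that `functools.wraps` copies `f.__dict__` onto the wrapper, so the `__traced__` stamp survives the wrapping and a second application of the decorator can see it.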
123 | def trace_cls(name, info=None, hide_args=False, trace_private=False): | |
159 | def trace_cls(name, info=None, hide_args=False, | |
160 | trace_private=False, allow_multiple_trace=True, | |
161 | trace_class_methods=False, trace_static_methods=False): | |
124 | 162 | """Trace decorator for instances of a class. | |
125 | 163 | |
126 | 164 | Very useful if you would like to add trace point on existing method: |
141 | 179 | :param hide_args: Don't push to trace info args and kwargs. Quite useful |
142 | 180 | if you have some info in args that you don't want to share,
143 | 181 | e.g. passwords. |
144 | ||
145 | 182 | :param trace_private: Trace methods that start with "_". It won't trace
146 | 183 | methods that start with "__" even if it is turned on.
147 | """ | |
184 | :param trace_static_methods: Trace staticmethods. This may be prone to | |
185 | issues so careful usage is recommended (this | |
186 | is also why this defaults to false). | |
187 | :param trace_class_methods: Trace classmethods. This may be prone to | |
188 | issues so careful usage is recommended (this | |
189 | is also why this defaults to false). | |
190 | :param allow_multiple_trace: If wrapped attributes have already been | |
191 | traced either allow the new trace to occur | |
192 | or raise a value error denoting that multiple | |
193 | tracing is not allowed (by default allow). | |
194 | """ | |
195 | ||
196 | def trace_checker(attr_name, to_be_wrapped): | |
197 | if attr_name.startswith("__"): | |
198 | # Never trace really private methods. | |
199 | return (False, None) | |
200 | if not trace_private and attr_name.startswith("_"): | |
201 | return (False, None) | |
202 | if isinstance(to_be_wrapped, staticmethod): | |
203 | if not trace_static_methods: | |
204 | return (False, None) | |
205 | return (True, staticmethod) | |
206 | if isinstance(to_be_wrapped, classmethod): | |
207 | if not trace_class_methods: | |
208 | return (False, None) | |
209 | return (True, classmethod) | |
210 | return (True, None) | |
148 | 211 | |
149 | 212 | def decorator(cls): |
150 | 213 | clss = cls if inspect.isclass(cls) else cls.__class__ |
151 | 214 | mro_dicts = [c.__dict__ for c in inspect.getmro(clss)] |
215 | traceable_attrs = [] | |
216 | traceable_wrappers = [] | |
152 | 217 | for attr_name, attr in inspect.getmembers(cls): |
153 | 218 | if not (inspect.ismethod(attr) or inspect.isfunction(attr)): |
154 | 219 | continue |
155 | if attr_name.startswith("__"): | |
156 | continue | |
157 | if not trace_private and attr_name.startswith("_"): | |
158 | continue | |
159 | ||
160 | 220 | wrapped_obj = None |
161 | 221 | for cls_dict in mro_dicts: |
162 | 222 | if attr_name in cls_dict: |
163 | 223 | wrapped_obj = cls_dict[attr_name] |
164 | 224 | break |
165 | ||
225 | should_wrap, wrapper = trace_checker(attr_name, wrapped_obj) | |
226 | if not should_wrap: | |
227 | continue | |
228 | traceable_attrs.append((attr_name, attr)) | |
229 | traceable_wrappers.append(wrapper) | |
230 | if not allow_multiple_trace: | |
231 | # Check before doing any other further work (so we don't | |
232 | # halfway trace this class). | |
233 | _ensure_no_multiple_traced(traceable_attrs) | |
234 | for i, (attr_name, attr) in enumerate(traceable_attrs): | |
166 | 235 | wrapped_method = trace(name, info=info, hide_args=hide_args)(attr) |
167 | if isinstance(wrapped_obj, staticmethod): | |
168 | # FIXME(dbelova): tracing staticmethod is prone to issues, | |
169 | # there are lots of edge cases, so let's figure that out later. | |
170 | continue | |
171 | # wrapped_method = staticmethod(wrapped_method) | |
172 | elif isinstance(wrapped_obj, classmethod): | |
173 | wrapped_method = classmethod(wrapped_method) | |
236 | wrapper = traceable_wrappers[i] | |
237 | if wrapper is not None: | |
238 | wrapped_method = wrapper(wrapped_method) | |
174 | 239 | setattr(cls, attr_name, wrapped_method) |
175 | 240 | return cls |
176 | 241 | |
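The `trace_checker` helper decides both whether an attribute gets traced and which wrapper (`staticmethod`/`classmethod`) must be re-applied afterwards, since tracing unwraps those descriptors. A standalone sketch of the same dispatch, with the flags pulled in as plain parameters for illustration:

```python
def trace_checker(attr_name, to_be_wrapped, trace_private=False,
                  trace_static_methods=False, trace_class_methods=False):
    if attr_name.startswith("__"):
        return (False, None)  # never trace dunder methods
    if not trace_private and attr_name.startswith("_"):
        return (False, None)
    if isinstance(to_be_wrapped, staticmethod):
        if not trace_static_methods:
            return (False, None)
        return (True, staticmethod)  # re-wrap as staticmethod after tracing
    if isinstance(to_be_wrapped, classmethod):
        if not trace_class_methods:
            return (False, None)
        return (True, classmethod)
    return (True, None)

class Example:
    def public(self):
        pass
    def _private(self):
        pass

checks = {
    "public": trace_checker("public", Example.__dict__["public"]),
    "_private": trace_checker("_private", Example.__dict__["_private"]),
    "__init__": trace_checker("__init__", None),
}
```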
205 | 270 | |
206 | 271 | trace_args = dict(getattr(cls, "__trace_args__", {})) |
207 | 272 | trace_private = trace_args.pop("trace_private", False) |
273 | allow_multiple_trace = trace_args.pop("allow_multiple_trace", True) | |
208 | 274 | if "name" not in trace_args: |
209 | 275 | raise TypeError("Please specify __trace_args__ class level " |
210 | 276 | "dictionary attribute with mandatory 'name' key - " |
211 | 277 | "e.g. __trace_args__ = {'name': 'rpc'}") |
212 | 278 | |
213 | for attr_name, attr_value in six.iteritems(attrs): | |
279 | traceable_attrs = [] | |
280 | for attr_name, attr_value in attrs.items(): | |
214 | 281 | if not (inspect.ismethod(attr_value) or |
215 | 282 | inspect.isfunction(attr_value)): |
216 | 283 | continue |
218 | 285 | continue |
219 | 286 | if not trace_private and attr_name.startswith("_"): |
220 | 287 | continue |
221 | ||
288 | traceable_attrs.append((attr_name, attr_value)) | |
289 | if not allow_multiple_trace: | |
290 | # Check before doing any other further work (so we don't | |
291 | # halfway trace this class). | |
292 | _ensure_no_multiple_traced(traceable_attrs) | |
293 | for attr_name, attr_value in traceable_attrs: | |
222 | 294 | setattr(cls, attr_name, trace(**trace_args)(getattr(cls, |
223 | 295 | attr_name))) |
224 | 296 | |
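The metaclass path above requires a `__trace_args__` class attribute with a mandatory `name` key before wrapping any methods. A simplified sketch of just that validation (Python 3 `metaclass=` syntax; the real version goes on to wrap every traceable attribute):

```python
class TracedMeta(type):
    # Simplified: only validates __trace_args__; the real metaclass then
    # applies the trace decorator to every public method.
    def __init__(cls, cls_name, bases, attrs):
        super(TracedMeta, cls).__init__(cls_name, bases, attrs)
        trace_args = dict(getattr(cls, "__trace_args__", {}))
        trace_args.pop("trace_private", False)
        trace_args.pop("allow_multiple_trace", True)
        if "name" not in trace_args:
            raise TypeError("Please specify __trace_args__ class level "
                            "dictionary attribute with mandatory 'name' key")

class RpcService(metaclass=TracedMeta):
    __trace_args__ = {"name": "rpc"}

try:
    class Broken(metaclass=TracedMeta):
        __trace_args__ = {}
    type_error = False
except TypeError:
    type_error = True
```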
247 | 319 | start(self._name, info=self._info) |
248 | 320 | |
249 | 321 | def __exit__(self, etype, value, traceback): |
250 | stop() | |
322 | if etype: | |
323 | info = {"etype": reflection.get_class_name(etype)} | |
324 | stop(info=info) | |
325 | else: | |
326 | stop() | |
251 | 327 | |
252 | 328 | |
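The new `__exit__` records the exception class name in the stop info instead of discarding it. A self-contained sketch of that context-manager behavior (using `etype.__name__` as a stand-in for `reflection.get_class_name`; exceptions still propagate):

```python
class Trace:
    def __init__(self, name):
        self._name = name
        self.stop_info = None

    def __enter__(self):
        # The real class calls start(self._name, info=self._info) here.
        return self

    def __exit__(self, etype, value, traceback):
        if etype:
            self.stop_info = {"etype": etype.__name__}
        else:
            self.stop_info = {}
        return False  # do not swallow the exception

t = Trace("db")
try:
    with t:
        raise RuntimeError("boom")
except RuntimeError:
    pass
```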
253 | 329 | class _Profiler(object): |
254 | 330 | |
255 | def __init__(self, hmac_key, base_id=None, parent_id=None): | |
331 | def __init__(self, hmac_key, base_id=None, parent_id=None, | |
332 | connection_str=None, project=None, service=None): | |
256 | 333 | self.hmac_key = hmac_key |
257 | 334 | if not base_id: |
258 | base_id = str(uuid.uuid4()) | |
335 | base_id = str(uuidutils.generate_uuid()) | |
259 | 336 | self._trace_stack = collections.deque([base_id, parent_id or base_id]) |
260 | 337 | self._name = collections.deque() |
261 | 338 | self._host = socket.gethostname() |
339 | self._connection_str = connection_str | |
340 | self._project = project | |
341 | self._service = service | |
262 | 342 | |
263 | 343 | def get_base_id(self): |
264 | 344 | """Return base id of a trace. |
285 | 365 | parent_id - to build tree of events (not just a list) |
286 | 366 | trace_id - current event id. |
287 | 367 | |
288 | As we are writing this code special for OpenStack, and there will be | |
289 | only one implementation of notifier based on ceilometer notifier api. | |
290 | That already contains timestamps, so we don't measure time by hand. | |
291 | ||
292 | 368 | :param name: name of trace element (db, wsgi, rpc, etc..) |
293 | 369 | :param info: Dictionary with any useful information related to this |
294 | 370 | trace element. (sql request, rpc message or url...) |
296 | 372 | |
297 | 373 | info = info or {} |
298 | 374 | info["host"] = self._host |
375 | info["project"] = self._project | |
376 | info["service"] = self._service | |
299 | 377 | self._name.append(name) |
300 | self._trace_stack.append(str(uuid.uuid4())) | |
378 | self._trace_stack.append(str(uuidutils.generate_uuid())) | |
301 | 379 | self._notify("%s-start" % name, info) |
302 | 380 | |
303 | 381 | def stop(self, info=None): |
304 | """Finish latests event. | |
382 | """Finish latest event. | |
305 | 383 | |
306 | 384 | Same as a start, but instead of pushing trace_id to stack it pops it. |
307 | 385 | |
309 | 387 | """ |
310 | 388 | info = info or {} |
311 | 389 | info["host"] = self._host |
390 | info["project"] = self._project | |
391 | info["service"] = self._service | |
312 | 392 | self._notify("%s-stop" % self._name.pop(), info) |
313 | 393 | self._trace_stack.pop() |
314 | 394 |
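The `_trace_stack` bookkeeping above can be illustrated end to end: the deque starts as `[base_id, parent_id or base_id]`, `start()` pushes a fresh trace id, and `stop()` pops it, so the next-to-top entry is always the current parent id. A minimal sketch (`MiniTraceStack` is a hypothetical stand-in; notifications are replaced by an event list):

```python
import collections
import uuid

class MiniTraceStack:
    def __init__(self, base_id=None, parent_id=None):
        base_id = base_id or str(uuid.uuid4())
        self._trace_stack = collections.deque([base_id, parent_id or base_id])
        self._name = collections.deque()
        self.events = []

    def start(self, name):
        # Push a new trace id; stack[-2] becomes this event's parent_id.
        self._name.append(name)
        self._trace_stack.append(str(uuid.uuid4()))
        self.events.append("%s-start" % name)

    def stop(self):
        # Pop back to the enclosing trace element.
        self.events.append("%s-stop" % self._name.pop())
        self._trace_stack.pop()

p = MiniTraceStack()
p.start("wsgi")
p.start("db")
p.stop()
p.stop()
```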
11 | 11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
12 | 12 | # License for the specific language governing permissions and limitations |
13 | 13 | # under the License. |
14 | ||
15 | import contextlib | |
14 | 16 | |
15 | 17 | from osprofiler import profiler |
16 | 18 | |
41 | 43 | _after_cursor_execute()) |
42 | 44 | |
43 | 45 | |
46 | @contextlib.contextmanager | |
47 | def wrap_session(sqlalchemy, sess): | |
48 | with sess as s: | |
49 | if not getattr(s.bind, "traced", False): | |
50 | add_tracing(sqlalchemy, s.bind, "db") | |
51 | s.bind.traced = True | |
52 | yield s | |
53 | ||
54 | ||
44 | 55 | def _before_cursor_execute(name): |
45 | 56 | """Add listener that will send trace info before query is executed.""" |
46 | 57 |
0 | # Copyright 2014 Mirantis Inc. | |
1 | # All Rights Reserved. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
4 | # not use this file except in compliance with the License. You may obtain | |
5 | # a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
12 | # License for the specific language governing permissions and limitations | |
13 | # under the License. | |
14 | ||
15 | import json | |
16 | import os | |
17 | import sys | |
18 | ||
19 | import mock | |
20 | import six | |
21 | ||
22 | from osprofiler.cmd import exc | |
23 | from osprofiler.cmd import shell | |
24 | from osprofiler.tests import test | |
25 | ||
26 | ||
27 | class ShellTestCase(test.TestCase): | |
28 | def setUp(self): | |
29 | super(ShellTestCase, self).setUp() | |
30 | self.old_environment = os.environ.copy() | |
31 | os.environ = { | |
32 | "OS_USERNAME": "username", | |
33 | "OS_USER_ID": "user_id", | |
34 | "OS_PASSWORD": "password", | |
35 | "OS_USER_DOMAIN_ID": "user_domain_id", | |
36 | "OS_USER_DOMAIN_NAME": "user_domain_name", | |
37 | "OS_PROJECT_DOMAIN_ID": "project_domain_id", | |
38 | "OS_PROJECT_DOMAIN_NAME": "project_domain_name", | |
39 | "OS_PROJECT_ID": "project_id", | |
40 | "OS_PROJECT_NAME": "project_name", | |
41 | "OS_TENANT_ID": "tenant_id", | |
42 | "OS_TENANT_NAME": "tenant_name", | |
43 | "OS_AUTH_URL": "http://127.0.0.1:5000/v3/", | |
44 | "OS_AUTH_TOKEN": "pass", | |
45 | "OS_CACERT": "/path/to/cacert", | |
46 | "OS_SERVICE_TYPE": "service_type", | |
47 | "OS_ENDPOINT_TYPE": "public", | |
48 | "OS_REGION_NAME": "test" | |
49 | } | |
50 | ||
51 | self.ceiloclient = mock.MagicMock() | |
52 | sys.modules["ceilometerclient"] = self.ceiloclient | |
53 | self.addCleanup(sys.modules.pop, "ceilometerclient", None) | |
54 | ceilo_modules = ["client", "exc", "shell"] | |
55 | for module in ceilo_modules: | |
56 | sys.modules["ceilometerclient.%s" % module] = getattr( | |
57 | self.ceiloclient, module) | |
58 | self.addCleanup( | |
59 | sys.modules.pop, "ceilometerclient.%s" % module, None) | |
60 | ||
61 | def tearDown(self): | |
62 | super(ShellTestCase, self).tearDown() | |
63 | os.environ = self.old_environment | |
64 | ||
65 | @mock.patch("sys.stdout", six.StringIO()) | |
66 | @mock.patch("osprofiler.cmd.shell.OSProfilerShell") | |
67 | def test_shell_main(self, mock_shell): | |
68 | mock_shell.side_effect = exc.CommandError("some_message") | |
69 | shell.main() | |
70 | self.assertEqual("some_message\n", sys.stdout.getvalue()) | |
71 | ||
72 | def run_command(self, cmd): | |
73 | shell.OSProfilerShell(cmd.split()) | |
74 | ||
75 | def _test_with_command_error(self, cmd, expected_message): | |
76 | try: | |
77 | self.run_command(cmd) | |
78 | except exc.CommandError as actual_error: | |
79 | self.assertEqual(str(actual_error), expected_message) | |
80 | else: | |
81 | raise ValueError( | |
82 | "Expected: `osprofiler.cmd.exc.CommandError` is raised with " | |
83 | "message: '%s'." % expected_message) | |
84 | ||
85 | def test_username_is_not_presented(self): | |
86 | os.environ.pop("OS_USERNAME") | |
87 | msg = ("You must provide a username via either --os-username or " | |
88 | "via env[OS_USERNAME]") | |
89 | self._test_with_command_error("trace show fake-uuid", msg) | |
90 | ||
91 | def test_password_is_not_presented(self): | |
92 | os.environ.pop("OS_PASSWORD") | |
93 | msg = ("You must provide a password via either --os-password or " | |
94 | "via env[OS_PASSWORD]") | |
95 | self._test_with_command_error("trace show fake-uuid", msg) | |
96 | ||
97 | def test_auth_url(self): | |
98 | os.environ.pop("OS_AUTH_URL") | |
99 | msg = ("You must provide an auth url via either --os-auth-url or " | |
100 | "via env[OS_AUTH_URL]") | |
101 | self._test_with_command_error("trace show fake-uuid", msg) | |
102 | ||
103 | def test_no_project_and_domain_set(self): | |
104 | os.environ.pop("OS_PROJECT_ID") | |
105 | os.environ.pop("OS_PROJECT_NAME") | |
106 | os.environ.pop("OS_TENANT_ID") | |
107 | os.environ.pop("OS_TENANT_NAME") | |
108 | os.environ.pop("OS_USER_DOMAIN_ID") | |
109 | os.environ.pop("OS_USER_DOMAIN_NAME") | |
110 | ||
111 | msg = ("You must provide a project_id via either --os-project-id or " | |
112 | "via env[OS_PROJECT_ID] and a domain_name via either " | |
113 | "--os-user-domain-name or via env[OS_USER_DOMAIN_NAME] or a " | |
114 | "domain_id via either --os-user-domain-id or via " | |
115 | "env[OS_USER_DOMAIN_ID]") | |
116 | self._test_with_command_error("trace show fake-uuid", msg) | |
117 | ||
118 | def test_trace_show_ceilometerclient_is_missed(self): | |
119 | sys.modules["ceilometerclient"] = None | |
120 | sys.modules["ceilometerclient.client"] = None | |
121 | sys.modules["ceilometerclient.exc"] = None | |
122 | sys.modules["ceilometerclient.shell"] = None | |
123 | ||
124 | self.assertRaises(ImportError, shell.main, | |
125 | "trace show fake_uuid".split()) | |
126 | ||
127 | def test_trace_show_unauthorized(self): | |
128 | class FakeHTTPUnauthorized(Exception): | |
129 | http_status = 401 | |
130 | ||
131 | self.ceiloclient.client.get_client.side_effect = FakeHTTPUnauthorized | |
132 | ||
133 | msg = "Invalid OpenStack Identity credentials." | |
134 | self._test_with_command_error("trace show fake_id", msg) | |
135 | ||
136 | def test_trace_show_unknown_error(self): | |
137 | class FakeException(Exception): | |
138 | pass | |
139 | ||
140 | self.ceiloclient.client.get_client.side_effect = FakeException | |
141 | msg = "Something has gone wrong. See logs for more details" | |
142 | self._test_with_command_error("trace show fake_id", msg) | |
143 | ||
144 | @mock.patch("osprofiler.parsers.ceilometer.get_notifications") | |
145 | @mock.patch("osprofiler.parsers.ceilometer.parse_notifications") | |
146 | def test_trace_show_no_selected_format(self, mock_notifications, mock_get): | |
147 | mock_get.return_value = "some_notifications" | |
148 | msg = ("You should choose one of the following output-formats: " | |
149 | "--json or --html.") | |
150 | self._test_with_command_error("trace show fake_id", msg) | |
151 | ||
152 | @mock.patch("osprofiler.parsers.ceilometer.get_notifications") | |
153 | def test_trace_show_trace_id_not_found(self, mock_get): | |
154 | mock_get.return_value = None | |
155 | ||
156 | fake_trace_id = "fake_id" | |
157 | msg = ("Trace with UUID %s not found. There are 3 possible reasons: \n" | |
158 | " 1) You are using not admin credentials\n" | |
159 | " 2) You specified wrong trace id\n" | |
160 | " 3) You specified wrong HMAC Key in original calling" | |
161 | % fake_trace_id) | |
162 | ||
163 | self._test_with_command_error("trace show %s" % fake_trace_id, msg) | |
164 | ||
165 | @mock.patch("sys.stdout", six.StringIO()) | |
166 | @mock.patch("osprofiler.parsers.ceilometer.get_notifications") | |
167 | @mock.patch("osprofiler.parsers.ceilometer.parse_notifications") | |
168 | def test_trace_show_in_json(self, mock_notifications, mock_get): | |
169 | mock_get.return_value = "some notification" | |
170 | notifications = { | |
171 | "info": { | |
172 | "started": 0, "finished": 0, "name": "total"}, "children": []} | |
173 | mock_notifications.return_value = notifications | |
174 | ||
175 | self.run_command("trace show fake_id --json") | |
176 | self.assertEqual("%s\n" % json.dumps(notifications), | |
177 | sys.stdout.getvalue()) | |
178 | ||
179 | @mock.patch("sys.stdout", six.StringIO()) | |
180 | @mock.patch("osprofiler.parsers.ceilometer.get_notifications") | |
181 | @mock.patch("osprofiler.parsers.ceilometer.parse_notifications") | |
182 | def test_trace_show_in_html(self, mock_notifications, mock_get): | |
183 | mock_get.return_value = "some notification" | |
184 | ||
185 | notifications = { | |
186 | "info": { | |
187 | "started": 0, "finished": 0, "name": "total"}, "children": []} | |
188 | mock_notifications.return_value = notifications | |
189 | ||
190 | # NOTE(akurilin): to simplify assert statement, html-template should be | |
191 | # replaced. | |
192 | html_template = ( | |
193 | "A long time ago in a galaxy far, far away..." | |
194 | " some_data = $DATA" | |
195 | "It is a period of civil war. Rebel" | |
196 | "spaceships, striking from a hidden" | |
197 | "base, have won their first victory" | |
198 | "against the evil Galactic Empire.") | |
199 | ||
200 | with mock.patch("osprofiler.cmd.commands.open", | |
201 | mock.mock_open(read_data=html_template), create=True): | |
202 | self.run_command("trace show fake_id --html") | |
203 | self.assertEqual("A long time ago in a galaxy far, far away..." | |
204 | " some_data = %s" | |
205 | "It is a period of civil war. Rebel" | |
206 | "spaceships, striking from a hidden" | |
207 | "base, have won their first victory" | |
208 | "against the evil Galactic Empire." | |
209 | "\n" % json.dumps(notifications, indent=2), | |
210 | sys.stdout.getvalue()) | |
211 | ||
212 | @mock.patch("sys.stdout", six.StringIO()) | |
213 | @mock.patch("osprofiler.parsers.ceilometer.get_notifications") | |
214 | @mock.patch("osprofiler.parsers.ceilometer.parse_notifications") | |
215 | def test_trace_show_write_to_file(self, mock_notifications, mock_get): | |
216 | mock_get.return_value = "some notification" | |
217 | notifications = { | |
218 | "info": { | |
219 | "started": 0, "finished": 0, "name": "total"}, "children": []} | |
220 | mock_notifications.return_value = notifications | |
221 | ||
222 | with mock.patch("osprofiler.cmd.commands.open", | |
223 | mock.mock_open(), create=True) as mock_open: | |
224 | self.run_command("trace show fake_id --json --out='/file'") | |
225 | ||
226 | output = mock_open.return_value.__enter__.return_value | |
227 | output.write.assert_called_once_with(json.dumps(notifications)) |
0 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
1 | # not use this file except in compliance with the License. You may obtain | |
2 | # a copy of the License at | |
3 | # | |
4 | # http://www.apache.org/licenses/LICENSE-2.0 | |
5 | # | |
6 | # Unless required by applicable law or agreed to in writing, software | |
7 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
8 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
9 | # License for the specific language governing permissions and limitations | |
10 | # under the License. | |
11 | ||
12 | import glob | |
13 | import os | |
14 | import re | |
15 | ||
16 | import docutils.core | |
17 | ||
18 | from osprofiler.tests import test | |
19 | ||
20 | ||
21 | class TitlesTestCase(test.TestCase): | |
22 | ||
23 | specs_path = os.path.join( | |
24 | os.path.dirname(__file__), | |
25 | os.pardir, os.pardir, os.pardir, | |
26 | "doc", "specs") | |
27 | ||
28 | def _get_title(self, section_tree): | |
29 | section = {"subtitles": []} | |
30 | for node in section_tree: | |
31 | if node.tagname == "title": | |
32 | section["name"] = node.rawsource | |
33 | elif node.tagname == "section": | |
34 | subsection = self._get_title(node) | |
35 | section["subtitles"].append(subsection["name"]) | |
36 | return section | |
37 | ||
38 | def _get_titles(self, spec): | |
39 | titles = {} | |
40 | for node in spec: | |
41 | if node.tagname == "section": | |
42 | # Note subsection subtitles are thrown away | |
43 | section = self._get_title(node) | |
44 | titles[section["name"]] = section["subtitles"] | |
45 | return titles | |
46 | ||
47 | def _check_titles(self, filename, expect, actual): | |
48 | missing_sections = [x for x in expect.keys() if x not in actual.keys()] | |
49 | extra_sections = [x for x in actual.keys() if x not in expect.keys()] | |
50 | ||
51 | msgs = [] | |
52 | if len(missing_sections) > 0: | |
53 | msgs.append("Missing sections: %s" % missing_sections) | |
54 | if len(extra_sections) > 0: | |
55 | msgs.append("Extra sections: %s" % extra_sections) | |
56 | ||
57 | for section in expect.keys(): | |
58 | missing_subsections = [x for x in expect[section] | |
59 | if x not in actual.get(section, {})] | |
60 | # extra subsections are allowed | |
61 | if len(missing_subsections) > 0: | |
62 | msgs.append("Section '%s' is missing subsections: %s" | |
63 | % (section, missing_subsections)) | |
64 | ||
65 | if len(msgs) > 0: | |
66 | self.fail("While checking '%s':\n %s" | |
67 | % (filename, "\n ".join(msgs))) | |
68 | ||
69 | def _check_lines_wrapping(self, tpl, raw): | |
70 | for i, line in enumerate(raw.split("\n")): | |
71 | if "http://" in line or "https://" in line: | |
72 | continue | |
73 | self.assertTrue( | |
74 | len(line) < 80, | |
75 | msg="%s:%d: Line limited to a maximum of 79 characters." % | |
76 | (tpl, i+1)) | |
77 | ||
78 | def _check_no_cr(self, tpl, raw): | |
79 | matches = re.findall("\r", raw) | |
80 | self.assertEqual( | |
81 | len(matches), 0, | |
82 | "Found %s literal carriage returns in file %s" % | |
83 | (len(matches), tpl)) | |
84 | ||
85 | def _check_trailing_spaces(self, tpl, raw): | |
86 | for i, line in enumerate(raw.split("\n")): | |
87 | trailing_spaces = re.findall(" +$", line) | |
88 | self.assertEqual( | |
89 | len(trailing_spaces), 0, | |
90 | "Found trailing spaces on line %s of %s" % (i+1, tpl)) | |
91 | ||
92 | def test_template(self): | |
93 | with open(os.path.join(self.specs_path, "template.rst")) as f: | |
94 | template = f.read() | |
95 | ||
96 | spec = docutils.core.publish_doctree(template) | |
97 | template_titles = self._get_titles(spec) | |
98 | ||
99 | for d in ["implemented", "in-progress"]: | |
100 | spec_dir = "%s/%s" % (self.specs_path, d) | |
101 | ||
102 | self.assertTrue(os.path.isdir(spec_dir), | |
103 | "%s is not a directory" % spec_dir) | |
104 | for filename in glob.glob(spec_dir + "/*"): | |
105 | if filename.endswith("README.rst"): | |
106 | continue | |
107 | ||
108 | self.assertTrue( | |
109 | filename.endswith(".rst"), | |
110 | "spec's file must have .rst ext. Found: %s" % filename) | |
111 | with open(filename) as f: | |
112 | data = f.read() | |
113 | ||
114 | titles = self._get_titles(docutils.core.publish_doctree(data)) | |
115 | self._check_titles(filename, template_titles, titles) | |
116 | self._check_lines_wrapping(filename, data) | |
117 | self._check_no_cr(filename, data) | |
118 | self._check_trailing_spaces(filename, data) |
0 | # Copyright (c) 2016 VMware, Inc. | |
1 | # All Rights Reserved. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
4 | # not use this file except in compliance with the License. You may obtain | |
5 | # a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
12 | # License for the specific language governing permissions and limitations | |
13 | # under the License. | |
14 | ||
15 | import os | |
16 | ||
17 | from oslo_config import cfg | |
18 | ||
19 | from osprofiler.drivers import base | |
20 | from osprofiler import initializer | |
21 | from osprofiler import opts | |
22 | from osprofiler import profiler | |
23 | from osprofiler.tests import test | |
24 | ||
25 | ||
26 | CONF = cfg.CONF | |
27 | ||
28 | ||
29 | class DriverTestCase(test.TestCase): | |
30 | ||
31 | SERVICE = "service" | |
32 | PROJECT = "project" | |
33 | ||
34 | def setUp(self): | |
35 | super(DriverTestCase, self).setUp() | |
36 | CONF(["--config-file", os.path.dirname(__file__) + "/config.cfg"]) | |
37 | opts.set_defaults(CONF, | |
38 | enabled=True, | |
39 | trace_sqlalchemy=False, | |
40 | hmac_keys="SECRET_KEY") | |
41 | ||
42 | @profiler.trace_cls("rpc", hide_args=True) | |
43 | class Foo(object): | |
44 | ||
45 | def bar(self, x): | |
46 | return self.baz(x, x) | |
47 | ||
48 | def baz(self, x, y): | |
49 | return x * y | |
50 | ||
51 | def _assert_dict(self, info, **kwargs): | |
52 | for key in kwargs: | |
53 | self.assertEqual(kwargs[key], info[key]) | |
54 | ||
55 | def _assert_child_dict(self, child, base_id, parent_id, name, fn_name): | |
56 | self.assertEqual(parent_id, child["parent_id"]) | |
57 | ||
58 | exp_info = {"name": "rpc", | |
59 | "service": self.SERVICE, | |
60 | "project": self.PROJECT} | |
61 | self._assert_dict(child["info"], **exp_info) | |
62 | ||
63 | exp_raw_info = {"project": self.PROJECT, | |
64 | "service": self.SERVICE} | |
65 | raw_start = child["info"]["meta.raw_payload.%s-start" % name] | |
66 | self._assert_dict(raw_start["info"], **exp_raw_info) | |
67 | self.assertEqual(fn_name, raw_start["info"]["function"]["name"]) | |
68 | exp_raw = {"name": "%s-start" % name, | |
69 | "service": self.SERVICE, | |
70 | "trace_id": child["trace_id"], | |
71 | "project": self.PROJECT, | |
72 | "base_id": base_id} | |
73 | self._assert_dict(raw_start, **exp_raw) | |
74 | ||
75 | raw_stop = child["info"]["meta.raw_payload.%s-stop" % name] | |
76 | self._assert_dict(raw_stop["info"], **exp_raw_info) | |
77 | exp_raw["name"] = "%s-stop" % name | |
78 | self._assert_dict(raw_stop, **exp_raw) | |
79 | ||
80 | def test_get_report(self): | |
81 | initializer.init_from_conf( | |
82 | CONF, None, self.PROJECT, self.SERVICE, "host") | |
83 | profiler.init("SECRET_KEY", project=self.PROJECT, service=self.SERVICE) | |
84 | ||
85 | foo = DriverTestCase.Foo() | |
86 | foo.bar(1) | |
87 | ||
88 | engine = base.get_driver(CONF.profiler.connection_string, | |
89 | project=self.PROJECT, | |
90 | service=self.SERVICE, | |
91 | host="host", | |
92 | conf=CONF) | |
93 | base_id = profiler.get().get_base_id() | |
94 | res = engine.get_report(base_id) | |
95 | ||
96 | self.assertEqual("total", res["info"]["name"]) | |
97 | self.assertEqual(2, res["stats"]["rpc"]["count"]) | |
98 | self.assertEqual(1, len(res["children"])) | |
99 | ||
100 | cbar = res["children"][0] | |
101 | self._assert_child_dict( | |
102 | cbar, base_id, base_id, "rpc", | |
103 | "osprofiler.tests.functional.test_driver.Foo.bar") | |
104 | ||
105 | self.assertEqual(1, len(cbar["children"])) | |
106 | cbaz = cbar["children"][0] | |
107 | self._assert_child_dict( | |
108 | cbaz, base_id, cbar["trace_id"], "rpc", | |
109 | "osprofiler.tests.functional.test_driver.Foo.baz") |
0 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
1 | # not use this file except in compliance with the License. You may obtain | |
2 | # a copy of the License at | |
3 | # | |
4 | # http://www.apache.org/licenses/LICENSE-2.0 | |
5 | # | |
6 | # Unless required by applicable law or agreed to in writing, software | |
7 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
8 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
9 | # License for the specific language governing permissions and limitations | |
10 | # under the License. | |
11 | ||
12 | """ | |
13 | Guidelines for writing new hacking checks | |
14 | ||
15 | - Use only for OSProfiler-specific tests. OpenStack general tests | |
16 | should be submitted to the common 'hacking' module. | |
17 | - Pick numbers in the range N3xx. Find the current test with | |
18 | the highest allocated number and then pick the next value. | |
19 | - Keep the test method code in the source file ordered based | |
20 | on the N3xx value. | |
21 | - List the new rule in the top-level HACKING.rst file. | |
22 | - Add test cases for each new rule to tests/unit/test_hacking.py | |
23 | ||
24 | """ | |
25 | ||
26 | import functools | |
27 | import re | |
28 | import tokenize | |
29 | ||
30 | re_assert_true_instance = re.compile( | |
31 | r"(.)*assertTrue\(isinstance\((\w|\.|\'|\"|\[|\])+, " | |
32 | r"(\w|\.|\'|\"|\[|\])+\)\)") | |
33 | re_assert_equal_type = re.compile( | |
34 | r"(.)*assertEqual\(type\((\w|\.|\'|\"|\[|\])+\), " | |
35 | r"(\w|\.|\'|\"|\[|\])+\)") | |
36 | re_assert_equal_end_with_none = re.compile(r"assertEqual\(.*?,\s+None\)$") | |
37 | re_assert_equal_start_with_none = re.compile(r"assertEqual\(None,") | |
38 | re_assert_true_false_with_in_or_not_in = re.compile( | |
39 | r"assert(True|False)\(" | |
40 | r"(\w|[][.'\"])+( not)? in (\w|[][.'\",])+(, .*)?\)") | |
41 | re_assert_true_false_with_in_or_not_in_spaces = re.compile( | |
42 | r"assert(True|False)\((\w|[][.'\"])+( not)? in [\[|'|\"](\w|[][.'\", ])+" | |
43 | r"[\[|'|\"](, .*)?\)") | |
44 | re_assert_equal_in_end_with_true_or_false = re.compile( | |
45 | r"assertEqual\((\w|[][.'\"])+( not)? in (\w|[][.'\", ])+, (True|False)\)") | |
46 | re_assert_equal_in_start_with_true_or_false = re.compile( | |
47 | r"assertEqual\((True|False), (\w|[][.'\"])+( not)? in (\w|[][.'\", ])+\)") | |
48 | re_no_construct_dict = re.compile( | |
49 | r"\sdict\(\)") | |
50 | re_no_construct_list = re.compile( | |
51 | r"\slist\(\)") | |
52 | re_str_format = re.compile(r""" | |
53 | % # start of specifier | |
54 | \(([^)]+)\) # mapping key, in group 1 | |
55 | [#0 +\-]? # optional conversion flag | |
56 | (?:-?\d*)? # optional minimum field width | |
57 | (?:\.\d*)? # optional precision | |
58 | [hLl]? # optional length modifier | |
59 | [A-Za-z%] # conversion type | |
60 | """, re.X) | |
61 | re_raises = re.compile( | |
62 | r"\s:raise[^s] *.*$|\s:raises *:.*$|\s:raises *[^:]+$") | |
63 | ||
64 | ||
65 | def skip_ignored_lines(func): | |
66 | ||
67 | @functools.wraps(func) | |
68 | def wrapper(logical_line, filename): | |
69 | line = logical_line.strip() | |
70 | if not line or line.startswith("#") or line.endswith("# noqa"): | |
71 | return | |
72 | yield from func(logical_line, filename) | |
73 | ||
74 | return wrapper | |
75 | ||
76 | ||
77 | def _parse_assert_mock_str(line): | |
78 | point = line.find(".assert_") | |
79 | ||
80 | if point != -1: | |
81 | end_pos = line[point:].find("(") + point | |
82 | return point, line[point + 1: end_pos], line[: point] | |
83 | else: | |
84 | return None, None, None | |
85 | ||
86 | ||
87 | @skip_ignored_lines | |
88 | def check_assert_methods_from_mock(logical_line, filename): | |
89 | """Ensure that ``assert_*`` methods from ``mock`` are used correctly | |
90 | ||
91 | N301 - base error number | |
92 | N302 - related to nonexistent "assert_called" | |
93 | N303 - related to nonexistent "assert_called_once" | |
94 | """ | |
95 | ||
96 | correct_names = ["assert_any_call", "assert_called_once_with", | |
97 | "assert_called_with", "assert_has_calls"] | |
98 | ignored_files = ["./tests/unit/test_hacking.py"] | |
99 | ||
100 | if filename.startswith("./tests") and filename not in ignored_files: | |
101 | pos, method_name, obj_name = _parse_assert_mock_str(logical_line) | |
102 | ||
103 | if pos is not None: | |
104 | if method_name not in correct_names: | |
105 | error_number = "N301" | |
106 | msg = ("%(error_number)s:'%(method)s' is not present in `mock`" | |
107 | " library. %(custom_msg)s For more details, visit " | |
108 | "http://www.voidspace.org.uk/python/mock/ .") | |
109 | ||
110 | if method_name == "assert_called": | |
111 | error_number = "N302" | |
112 | custom_msg = ("Maybe you should try to use " | |
113 | "'assertTrue(%s.called)' instead." % | |
114 | obj_name) | |
115 | elif method_name == "assert_called_once": | |
116 | # For more details, see a bug in Rally: | |
117 | # https://bugs.launchpad.net/rally/+bug/1305991 | |
118 | error_number = "N303" | |
119 | custom_msg = ("Maybe you should try to use " | |
120 | "'assertEqual(1, %s.call_count)' " | |
121 | "or '%s.assert_called_once_with()'" | |
122 | " instead." % (obj_name, obj_name)) | |
123 | else: | |
124 | custom_msg = ("Correct 'assert_*' methods: '%s'." | |
125 | % "', '".join(correct_names)) | |
126 | ||
127 | yield (pos, msg % { | |
128 | "error_number": error_number, | |
129 | "method": method_name, | |
130 | "custom_msg": custom_msg}) | |
131 | ||
132 | ||
133 | @skip_ignored_lines | |
134 | def assert_true_instance(logical_line, filename): | |
135 | """Check for assertTrue(isinstance(a, b)) sentences | |
136 | ||
137 | N320 | |
138 | """ | |
139 | if re_assert_true_instance.match(logical_line): | |
140 | yield (0, "N320 assertTrue(isinstance(a, b)) sentences not allowed, " | |
141 | "you should use assertIsInstance(a, b) instead.") | |
142 | ||
143 | ||
144 | @skip_ignored_lines | |
145 | def assert_equal_type(logical_line, filename): | |
146 | """Check for assertEqual(type(A), B) sentences | |
147 | ||
148 | N321 | |
149 | """ | |
150 | if re_assert_equal_type.match(logical_line): | |
151 | yield (0, "N321 assertEqual(type(A), B) sentences not allowed, " | |
152 | "you should use assertIsInstance(a, b) instead.") | |
153 | ||
154 | ||
155 | @skip_ignored_lines | |
156 | def assert_equal_none(logical_line, filename): | |
157 | """Check for assertEqual(A, None) or assertEqual(None, A) sentences | |
158 | ||
159 | N322 | |
160 | """ | |
161 | res = (re_assert_equal_start_with_none.search(logical_line) or | |
162 | re_assert_equal_end_with_none.search(logical_line)) | |
163 | if res: | |
164 | yield (0, "N322 assertEqual(A, None) or assertEqual(None, A) " | |
165 | "sentences not allowed, you should use assertIsNone(A) " | |
166 | "instead.") | |
167 | ||
168 | ||
169 | @skip_ignored_lines | |
170 | def assert_true_or_false_with_in(logical_line, filename): | |
171 | """Check assertTrue/False(A in/not in B) with collection contents | |
172 | ||
173 | Check for assertTrue/False(A in B), assertTrue/False(A not in B), | |
174 | assertTrue/False(A in B, message) or assertTrue/False(A not in B, message) | |
175 | sentences. | |
176 | ||
177 | N323 | |
178 | """ | |
179 | res = (re_assert_true_false_with_in_or_not_in.search(logical_line) or | |
180 | re_assert_true_false_with_in_or_not_in_spaces.search(logical_line)) | |
181 | if res: | |
182 | yield (0, "N323 assertTrue/assertFalse(A in/not in B) sentences not " | |
183 | "allowed, you should use assertIn(A, B) or assertNotIn(A, B)" | |
184 | " instead.") | |
185 | ||
186 | ||
187 | @skip_ignored_lines | |
188 | def assert_equal_in(logical_line, filename): | |
189 | """Check assertEqual(A in/not in B, True/False) with collection contents | |
190 | ||
191 | Check for assertEqual(A in B, True/False), assertEqual(True/False, A in B), | |
192 | assertEqual(A not in B, True/False) or assertEqual(True/False, A not in B) | |
193 | sentences. | |
194 | ||
195 | N324 | |
196 | """ | |
197 | res = (re_assert_equal_in_end_with_true_or_false.search(logical_line) or | |
198 | re_assert_equal_in_start_with_true_or_false.search(logical_line)) | |
199 | if res: | |
200 | yield (0, "N324: Use assertIn/NotIn(A, B) rather than " | |
201 | "assertEqual(A in/not in B, True/False) when checking " | |
202 | "collection contents.") | |
203 | ||
204 | ||
205 | @skip_ignored_lines | |
206 | def check_quotes(logical_line, filename): | |
207 | """Check that single quotation marks are not used | |
208 | ||
209 | N350 | |
210 | """ | |
211 | ||
212 | in_string = False | |
213 | in_multiline_string = False | |
214 | single_quotes_are_used = False | |
215 | ||
216 | check_triple = ( | |
217 | lambda line, i, char: ( | |
218 | i + 2 < len(line) and | |
219 | (char == line[i] == line[i + 1] == line[i + 2]) | |
220 | ) | |
221 | ) | |
222 | ||
223 | i = 0 | |
224 | while i < len(logical_line): | |
225 | char = logical_line[i] | |
226 | ||
227 | if in_string: | |
228 | if char == "\"": | |
229 | in_string = False | |
230 | if char == "\\": | |
231 | i += 1 # ignore next char | |
232 | ||
233 | elif in_multiline_string: | |
234 | if check_triple(logical_line, i, "\""): | |
235 | i += 2 # skip next 2 chars | |
236 | in_multiline_string = False | |
237 | ||
238 | elif char == "#": | |
239 | break | |
240 | ||
241 | elif char == "'": | |
242 | single_quotes_are_used = True | |
243 | break | |
244 | ||
245 | elif char == "\"": | |
246 | if check_triple(logical_line, i, "\""): | |
247 | in_multiline_string = True | |
248 | i += 3 | |
249 | continue | |
250 | in_string = True | |
251 | ||
252 | i += 1 | |
253 | ||
254 | if single_quotes_are_used: | |
255 | yield (i, "N350 Remove single quotes") | |
256 | ||
257 | ||
258 | @skip_ignored_lines | |
259 | def check_no_constructor_data_struct(logical_line, filename): | |
260 | """Check that data structs (lists, dicts) are declared using literals | |
261 | ||
262 | N351 | |
263 | """ | |
264 | ||
265 | match = re_no_construct_dict.search(logical_line) | |
266 | if match: | |
267 | yield (0, "N351 Remove dict() construct and use literal {}") | |
268 | match = re_no_construct_list.search(logical_line) | |
269 | if match: | |
270 | yield (0, "N351 Remove list() construct and use literal []") | |
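One subtlety worth noting: both patterns require a whitespace character before the call, so a `dict()` or `list()` at the very start of a logical line slips through the check. A standalone demonstration (pattern copied from above):

```python
import re

# Copied verbatim from the checks module above.
re_no_construct_dict = re.compile(r"\sdict\(\)")

# A preceding space makes the pattern fire on an assignment...
assert re_no_construct_dict.search("x = dict()")
# ...but a construct at the very start of the logical line is not caught,
# because \s demands a whitespace character before "dict()".
assert re_no_construct_dict.search("dict()") is None
```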
271 | ||
272 | ||
273 | def check_dict_formatting_in_string(logical_line, tokens): | |
274 | """Check that strings do not use dict-formatting with a single replacement | |
275 | ||
276 | N352 | |
277 | """ | |
278 | # NOTE(stpierre): Can't use @skip_ignored_lines here because it's | |
279 | # a stupid decorator that only works on functions that take | |
280 | # (logical_line, filename) as arguments. | |
281 | if (not logical_line or | |
282 | logical_line.startswith("#") or | |
283 | logical_line.endswith("# noqa")): | |
284 | return | |
285 | ||
286 | current_string = "" | |
287 | in_string = False | |
288 | for token_type, text, start, end, line in tokens: | |
289 | if token_type == tokenize.STRING: | |
290 | if not in_string: | |
291 | current_string = "" | |
292 | in_string = True | |
293 | current_string += text.strip("\"") | |
294 | elif token_type == tokenize.OP: | |
295 | if not current_string: | |
296 | continue | |
297 | # NOTE(stpierre): The string formatting operator % has | |
298 | # lower precedence than +, so we assume that the logical | |
299 | # string has concluded whenever we hit an operator of any | |
300 | # sort. (Most operators don't work for strings anyway.) | |
301 | # Some string operators do have higher precedence than %, | |
302 | # though, so you can technically trick this check by doing | |
303 | # things like: | |
304 | # | |
305 | # "%(foo)s" * 1 % {"foo": 1} | |
306 | # "%(foo)s"[:] % {"foo": 1} | |
307 | # | |
308 | # It also will produce false positives if you use explicit | |
309 | # parenthesized addition for two strings instead of | |
310 | # concatenation by juxtaposition, e.g.: | |
311 | # | |
312 | # ("%(foo)s" + "%(bar)s") % vals | |
313 | # | |
314 | # But if you do any of those things, then you deserve all | |
315 | # of the horrible things that happen to you, and probably | |
316 | # many more. | |
317 | in_string = False | |
318 | if text == "%": | |
319 | format_keys = set() | |
320 | for match in re_str_format.finditer(current_string): | |
321 | format_keys.add(match.group(1)) | |
322 | if len(format_keys) == 1: | |
323 | yield (0, | |
324 | "N352 Do not use mapping key string formatting " | |
325 | "with a single key") | |
326 | if text != ")": | |
327 | # NOTE(stpierre): You can have a parenthesized string | |
328 | # followed by %, so a closing paren doesn't obviate | |
329 | # the possibility for a substitution operator like | |
330 | # every other operator does. | |
331 | current_string = "" | |
332 | elif token_type in (tokenize.NL, tokenize.COMMENT): | |
333 | continue | |
334 | else: | |
335 | in_string = False | |
336 | if token_type == tokenize.NEWLINE: | |
337 | current_string = "" | |
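The token stream this check walks can be inspected directly with the standard library. For the single-key mapping formatting it flags, the tell-tale shape is a STRING token whose next operator is `%` (a minimal illustration of the tokenization, not part of the check itself):

```python
import io
import tokenize

# Tokenize one line of the pattern the check is looking for.
src = '"%(foo)s" % {"foo": 1}\n'
tokens = list(tokenize.generate_tokens(io.StringIO(src).readline))

# The first token is the format string, and the first operator is "%",
# which is exactly the STRING-then-OP sequence the check keys on.
assert tokens[0].type == tokenize.STRING
ops = [t.string for t in tokens if t.type == tokenize.OP]
assert ops[0] == "%"
```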
338 | ||
339 | ||
340 | @skip_ignored_lines | |
341 | def check_using_unicode(logical_line, filename): | |
342 | """Check crosspython unicode usage | |
343 | ||
344 | N353 | |
345 | """ | |
346 | ||
347 | if re.search(r"\bunicode\(", logical_line): | |
348 | yield (0, "N353 The 'unicode' builtin is not available in " | |
349 | "Python 3. Please use 'six.text_type' instead.") | |
350 | ||
351 | ||
352 | def check_raises(physical_line, filename): | |
353 | """Check raises usage | |
354 | ||
355 | N354 | |
356 | """ | |
357 | ||
358 | ignored_files = ["./tests/unit/test_hacking.py", | |
359 | "./tests/hacking/checks.py"] | |
360 | if filename not in ignored_files: | |
361 | if re_raises.search(physical_line): | |
362 | return (0, "N354 Please use ':raises Exception: conditions' " | |
363 | "in docstrings.") | |
364 | ||
365 | ||
366 | def factory(register): | |
367 | register(check_assert_methods_from_mock) | |
368 | register(assert_true_instance) | |
369 | register(assert_equal_type) | |
370 | register(assert_equal_none) | |
371 | register(assert_true_or_false_with_in) | |
372 | register(assert_equal_in) | |
373 | register(check_quotes) | |
374 | register(check_no_constructor_data_struct) | |
375 | register(check_dict_formatting_in_string) | |
376 | register(check_using_unicode) | |
377 | register(check_raises) |
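As a quick sanity check of the N322 patterns (copied verbatim from the module above), both argument orders are caught while the recommended `assertIsNone` form is not:

```python
import re

# Copied verbatim from the checks module above.
re_assert_equal_end_with_none = re.compile(r"assertEqual\(.*?,\s+None\)$")
re_assert_equal_start_with_none = re.compile(r"assertEqual\(None,")

# Both orderings of the None comparison trip the check...
assert re_assert_equal_start_with_none.search("self.assertEqual(None, result)")
assert re_assert_equal_end_with_none.search("self.assertEqual(result, None)")
# ...while the recommended replacement does not.
assert not re_assert_equal_end_with_none.search("self.assertIsNone(result)")
assert not re_assert_equal_start_with_none.search("self.assertIsNone(result)")
```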
0 | # Copyright 2014 Mirantis Inc. | |
1 | # All Rights Reserved. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
4 | # not use this file except in compliance with the License. You may obtain | |
5 | # a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
12 | # License for the specific language governing permissions and limitations | |
13 | # under the License. | |
14 | ||
15 | import mock | |
16 | ||
17 | from osprofiler._notifiers import base | |
18 | from osprofiler.tests import test | |
19 | ||
20 | ||
21 | class NotifierBaseTestCase(test.TestCase): | |
22 | ||
23 | def test_factory(self): | |
24 | ||
25 | class A(base.Notifier): | |
26 | ||
27 | def notify(self, a): | |
28 | return a | |
29 | ||
30 | self.assertEqual(base.Notifier.factory("A")(10), 10) | |
31 | ||
32 | def test_factory_with_args(self): | |
33 | ||
34 | class B(base.Notifier): | |
35 | ||
36 | def __init__(self, a, b=10): | |
37 | self.a = a | |
38 | self.b = b | |
39 | ||
40 | def notify(self, c): | |
41 | return self.a + self.b + c | |
42 | ||
43 | self.assertEqual(base.Notifier.factory("B", 5, b=7)(10), 22) | |
44 | ||
45 | def test_factory_not_found(self): | |
46 | self.assertRaises(TypeError, base.Notifier.factory, "non existing") | |
47 | ||
48 | def test_notify(self): | |
49 | base.Notifier().notify("") | |
50 | ||
51 | def test_plugins_are_imported(self): | |
52 | base.Notifier.factory("Messaging", mock.MagicMock(), "context", | |
53 | "transport", "project", "service", "host") |
0 | # Copyright 2014 Mirantis Inc. | |
1 | # All Rights Reserved. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
4 | # not use this file except in compliance with the License. You may obtain | |
5 | # a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
12 | # License for the specific language governing permissions and limitations | |
13 | # under the License. | |
14 | ||
15 | import mock | |
16 | ||
17 | from osprofiler._notifiers import base | |
18 | from osprofiler.tests import test | |
19 | ||
20 | ||
21 | class MessagingTestCase(test.TestCase): | |
22 | ||
23 | def test_init_and_notify(self): | |
24 | ||
25 | messaging = mock.MagicMock() | |
26 | context = "context" | |
27 | transport = "transport" | |
28 | project = "project" | |
29 | service = "service" | |
30 | host = "host" | |
31 | ||
32 | notify_func = base.Notifier.factory("Messaging", messaging, context, | |
33 | transport, project, service, host) | |
34 | ||
35 | messaging.Notifier.assert_called_once_with( | |
36 | transport, publisher_id=host, driver="messaging", | |
37 | topic="profiler", retry=0) | |
38 | ||
39 | info = { | |
40 | "a": 10 | |
41 | } | |
42 | notify_func(info) | |
43 | ||
44 | expected_data = {"project": project, "service": service} | |
45 | expected_data.update(info) | |
46 | messaging.Notifier().info.assert_called_once_with( | |
47 | context, "profiler.%s" % service, expected_data) | |
48 | ||
49 | messaging.reset_mock() | |
50 | notify_func(info, context="my_context") | |
51 | messaging.Notifier().info.assert_called_once_with( | |
52 | "my_context", "profiler.%s" % service, expected_data) |
0 | # Copyright 2014 Mirantis Inc. | |
1 | # All Rights Reserved. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
4 | # not use this file except in compliance with the License. You may obtain | |
5 | # a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
12 | # License for the specific language governing permissions and limitations | |
13 | # under the License. | |
14 | ||
15 | import mock | |
16 | ||
17 | from osprofiler.parsers import ceilometer | |
18 | from osprofiler.tests import test | |
19 | ||
20 | ||
21 | class CeilometerParserTestCase(test.TestCase): | |
22 | def test_build_empty_tree(self): | |
23 | self.assertEqual(ceilometer._build_tree({}), []) | |
24 | ||
25 | def test_build_complex_tree(self): | |
26 | test_input = { | |
27 | "2": {"parent_id": "0", "trace_id": "2", "info": {"started": 1}}, | |
28 | "1": {"parent_id": "0", "trace_id": "1", "info": {"started": 0}}, | |
29 | "21": {"parent_id": "2", "trace_id": "21", "info": {"started": 6}}, | |
30 | "22": {"parent_id": "2", "trace_id": "22", "info": {"started": 7}}, | |
31 | "11": {"parent_id": "1", "trace_id": "11", "info": {"started": 1}}, | |
32 | "113": {"parent_id": "11", "trace_id": "113", | |
33 | "info": {"started": 3}}, | |
34 | "112": {"parent_id": "11", "trace_id": "112", | |
35 | "info": {"started": 2}}, | |
36 | "114": {"parent_id": "11", "trace_id": "114", | |
37 | "info": {"started": 5}} | |
38 | } | |
39 | ||
40 | expected_output = [ | |
41 | { | |
42 | "parent_id": "0", | |
43 | "trace_id": "1", | |
44 | "info": {"started": 0}, | |
45 | "children": [ | |
46 | { | |
47 | "parent_id": "1", | |
48 | "trace_id": "11", | |
49 | "info": {"started": 1}, | |
50 | "children": [ | |
51 | {"parent_id": "11", "trace_id": "112", | |
52 | "info": {"started": 2}, "children": []}, | |
53 | {"parent_id": "11", "trace_id": "113", | |
54 | "info": {"started": 3}, "children": []}, | |
55 | {"parent_id": "11", "trace_id": "114", | |
56 | "info": {"started": 5}, "children": []} | |
57 | ] | |
58 | } | |
59 | ] | |
60 | }, | |
61 | { | |
62 | "parent_id": "0", | |
63 | "trace_id": "2", | |
64 | "info": {"started": 1}, | |
65 | "children": [ | |
66 | {"parent_id": "2", "trace_id": "21", | |
67 | "info": {"started": 6}, "children": []}, | |
68 | {"parent_id": "2", "trace_id": "22", | |
69 | "info": {"started": 7}, "children": []} | |
70 | ] | |
71 | } | |
72 | ] | |
73 | ||
74 | self.assertEqual(ceilometer._build_tree(test_input), expected_output) | |
75 | ||
76 | def test_parse_notifications_empty(self): | |
77 | expected = { | |
78 | "info": { | |
79 | "name": "total", | |
80 | "started": 0, | |
81 | "finished": 0 | |
82 | }, | |
83 | "children": [] | |
84 | } | |
85 | self.assertEqual(ceilometer.parse_notifications([]), expected) | |
86 | ||
87 | def test_parse_notifications(self): | |
88 | events = [ | |
89 | { | |
90 | "traits": [ | |
91 | { | |
92 | "type": "string", | |
93 | "name": "base_id", | |
94 | "value": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4" | |
95 | }, | |
96 | { | |
97 | "type": "string", | |
98 | "name": "host", | |
99 | "value": "ubuntu" | |
100 | }, | |
101 | { | |
102 | "type": "string", | |
103 | "name": "method", | |
104 | "value": "POST" | |
105 | }, | |
106 | { | |
107 | "type": "string", | |
108 | "name": "name", | |
109 | "value": "wsgi-start" | |
110 | }, | |
111 | { | |
112 | "type": "string", | |
113 | "name": "parent_id", | |
114 | "value": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4" | |
115 | }, | |
116 | { | |
117 | "type": "string", | |
118 | "name": "project", | |
119 | "value": "keystone" | |
120 | }, | |
121 | { | |
122 | "type": "string", | |
123 | "name": "service", | |
124 | "value": "main" | |
125 | }, | |
126 | { | |
127 | "type": "string", | |
128 | "name": "timestamp", | |
129 | "value": "2015-12-23T14:02:22.338776" | |
130 | }, | |
131 | { | |
132 | "type": "string", | |
133 | "name": "trace_id", | |
134 | "value": "06320327-2c2c-45ae-923a-515de890276a" | |
135 | } | |
136 | ], | |
137 | "raw": {}, | |
138 | "generated": "2015-12-23T10:41:38.415793", | |
139 | "event_type": "profiler.main", | |
140 | "message_id": "65fc1553-3082-4a6f-9d1e-0e3183f57a47"}, | |
141 | { | |
142 | "traits": | |
143 | [ | |
144 | { | |
145 | "type": "string", | |
146 | "name": "base_id", | |
147 | "value": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4" | |
148 | }, | |
149 | { | |
150 | "type": "string", | |
151 | "name": "host", | |
152 | "value": "ubuntu" | |
153 | }, | |
154 | { | |
155 | "type": "string", | |
156 | "name": "name", | |
157 | "value": "wsgi-stop" | |
158 | }, | |
159 | { | |
160 | "type": "string", | |
161 | "name": "parent_id", | |
162 | "value": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4" | |
163 | }, | |
164 | { | |
165 | "type": "string", | |
166 | "name": "project", | |
167 | "value": "keystone" | |
168 | }, | |
169 | { | |
170 | "type": "string", | |
171 | "name": "service", | |
172 | "value": "main" | |
173 | }, | |
174 | { | |
175 | "type": "string", | |
176 | "name": "timestamp", | |
177 | "value": "2015-12-23T14:02:22.380405" | |
178 | }, | |
179 | { | |
180 | "type": "string", | |
181 | "name": "trace_id", | |
182 | "value": "016c97fd-87f3-40b2-9b55-e431156b694b" | |
183 | } | |
184 | ], | |
185 | "raw": {}, | |
186 | "generated": "2015-12-23T10:41:38.406052", | |
187 | "event_type": "profiler.main", | |
188 | "message_id": "3256d9f1-48ba-4ac5-a50b-64fa42c6e264"}, | |
189 | { | |
190 | "traits": | |
191 | [ | |
192 | { | |
193 | "type": "string", | |
194 | "name": "base_id", | |
195 | "value": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4" | |
196 | }, | |
197 | { | |
198 | "type": "string", | |
199 | "name": "db.params", | |
200 | "value": "[]" | |
201 | }, | |
202 | { | |
203 | "type": "string", | |
204 | "name": "db.statement", | |
205 | "value": "SELECT 1" | |
206 | }, | |
207 | { | |
208 | "type": "string", | |
209 | "name": "host", | |
210 | "value": "ubuntu" | |
211 | }, | |
212 | { | |
213 | "type": "string", | |
214 | "name": "name", | |
215 | "value": "db-start" | |
216 | }, | |
217 | { | |
218 | "type": "string", | |
219 | "name": "parent_id", | |
220 | "value": "06320327-2c2c-45ae-923a-515de890276a" | |
221 | }, | |
222 | { | |
223 | "type": "string", | |
224 | "name": "project", | |
225 | "value": "keystone" | |
226 | }, | |
227 | { | |
228 | "type": "string", | |
229 | "name": "service", | |
230 | "value": "main" | |
231 | }, | |
232 | { | |
233 | "type": "string", | |
234 | "name": "timestamp", | |
235 | "value": "2015-12-23T14:02:22.395365" | |
236 | }, | |
237 | { | |
238 | "type": "string", | |
239 | "name": "trace_id", | |
240 | "value": "1baf1d24-9ca9-4f4c-bd3f-01b7e0c0735a" | |
241 | } | |
242 | ], | |
243 | "raw": {}, | |
244 | "generated": "2015-12-23T10:41:38.984161", | |
245 | "event_type": "profiler.main", | |
246 | "message_id": "60368aa4-16f0-4f37-a8fb-89e92fdf36ff" | |
247 | }, | |
248 | { | |
249 | "traits": | |
250 | [ | |
251 | { | |
252 | "type": "string", | |
253 | "name": "base_id", | |
254 | "value": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4" | |
255 | }, | |
256 | { | |
257 | "type": "string", | |
258 | "name": "host", | |
259 | "value": "ubuntu" | |
260 | }, | |
261 | { | |
262 | "type": "string", | |
263 | "name": "name", | |
264 | "value": "db-stop" | |
265 | }, | |
266 | { | |
267 | "type": "string", | |
268 | "name": "parent_id", | |
269 | "value": "06320327-2c2c-45ae-923a-515de890276a" | |
270 | }, | |
271 | { | |
272 | "type": "string", | |
273 | "name": "project", | |
274 | "value": "keystone" | |
275 | }, | |
276 | { | |
277 | "type": "string", | |
278 | "name": "service", | |
279 | "value": "main" | |
280 | }, | |
281 | { | |
282 | "type": "string", | |
283 | "name": "timestamp", | |
284 | "value": "2015-12-23T14:02:22.415486" | |
285 | }, | |
286 | { | |
287 | "type": "string", | |
288 | "name": "trace_id", | |
289 | "value": "1baf1d24-9ca9-4f4c-bd3f-01b7e0c0735a" | |
290 | } | |
291 | ], | |
292 | "raw": {}, | |
293 | "generated": "2015-12-23T10:41:39.019378", | |
294 | "event_type": "profiler.main", | |
295 | "message_id": "3fbeb339-55c5-4f28-88e4-15bee251dd3d" | |
296 | }, | |
297 | { | |
298 | "traits": | |
299 | [ | |
300 | { | |
301 | "type": "string", | |
302 | "name": "base_id", | |
303 | "value": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4" | |
304 | }, | |
305 | { | |
306 | "type": "string", | |
307 | "name": "host", | |
308 | "value": "ubuntu" | |
309 | }, | |
310 | { | |
311 | "type": "string", | |
312 | "name": "method", | |
313 | "value": "GET" | |
314 | }, | |
315 | { | |
316 | "type": "string", | |
317 | "name": "name", | |
318 | "value": "wsgi-start" | |
319 | }, | |
320 | { | |
321 | "type": "string", | |
322 | "name": "parent_id", | |
323 | "value": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4" | |
324 | }, | |
325 | { | |
326 | "type": "string", | |
327 | "name": "project", | |
328 | "value": "keystone" | |
329 | }, | |
330 | { | |
331 | "type": "string", | |
332 | "name": "service", | |
333 | "value": "main" | |
334 | }, | |
335 | { | |
336 | "type": "string", | |
337 | "name": "timestamp", | |
338 | "value": "2015-12-23T14:02:22.427444" | |
339 | }, | |
340 | { | |
341 | "type": "string", | |
342 | "name": "trace_id", | |
343 | "value": "016c97fd-87f3-40b2-9b55-e431156b694b" | |
344 | } | |
345 | ], | |
346 | "raw": {}, | |
347 | "generated": "2015-12-23T10:41:38.360409", | |
348 | "event_type": "profiler.main", | |
349 | "message_id": "57b971a9-572f-4f29-9838-3ed2564c6b5b" | |
350 | } | |
351 | ] | |
352 | ||
353 | expected = {"children": [ | |
354 | {"children": [{"children": [], | |
355 | "info": {"finished": 76, | |
356 | "host": "ubuntu", | |
357 | "meta.raw_payload.db-start": {}, | |
358 | "meta.raw_payload.db-stop": {}, | |
359 | "name": "db", | |
360 | "project": "keystone", | |
361 | "service": "main", | |
362 | "started": 56}, | |
363 | "parent_id": "06320327-2c2c-45ae-923a-515de890276a", | |
364 | "trace_id": "1baf1d24-9ca9-4f4c-bd3f-01b7e0c0735a"} | |
365 | ], | |
366 | "info": {"finished": 0, | |
367 | "host": "ubuntu", | |
368 | "meta.raw_payload.wsgi-start": {}, | |
369 | "name": "wsgi", | |
370 | "project": "keystone", | |
371 | "service": "main", | |
372 | "started": 0}, | |
373 | "parent_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4", | |
374 | "trace_id": "06320327-2c2c-45ae-923a-515de890276a"}, | |
375 | {"children": [], | |
376 | "info": {"finished": 41, | |
377 | "host": "ubuntu", | |
378 | "meta.raw_payload.wsgi-start": {}, | |
379 | "meta.raw_payload.wsgi-stop": {}, | |
380 | "name": "wsgi", | |
381 | "project": "keystone", | |
382 | "service": "main", | |
383 | "started": 88}, | |
384 | "parent_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4", | |
385 | "trace_id": "016c97fd-87f3-40b2-9b55-e431156b694b"}], | |
386 | "info": {"finished": 88, "name": "total", "started": 0}} | |
387 | ||
388 | self.assertEqual(expected, ceilometer.parse_notifications(events)) | |
389 | ||
390 | def test_get_notifications(self): | |
391 | mock_ceil_client = mock.MagicMock() | |
392 | results = [mock.MagicMock(), mock.MagicMock()] | |
393 | mock_ceil_client.events.list.return_value = results | |
394 | base_id = "10" | |
395 | ||
396 | result = ceilometer.get_notifications(mock_ceil_client, base_id) | |
397 | ||
398 | expected_filter = [{"field": "base_id", "op": "eq", "value": base_id}] | |
399 | mock_ceil_client.events.list.assert_called_once_with(expected_filter, | |
400 | limit=100000) | |
401 | self.assertEqual(result, [results[0].to_dict(), results[1].to_dict()]) |
0 | # Copyright 2014 Mirantis Inc. | |
1 | # All Rights Reserved. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
4 | # not use this file except in compliance with the License. You may obtain | |
5 | # a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
12 | # License for the specific language governing permissions and limitations | |
13 | # under the License. | |
14 | ||
15 | import mock | |
16 | ||
17 | from osprofiler import notifier | |
18 | from osprofiler.tests import test | |
19 | ||
20 | ||
21 | class NotifierTestCase(test.TestCase): | |
22 | ||
23 | def tearDown(self): | |
24 | notifier.set(notifier._noop_notifier) | |
25 | super(NotifierTestCase, self).tearDown() | |
26 | ||
27 | def test_set(self): | |
28 | ||
29 | def test(info): | |
30 | pass | |
31 | ||
32 | notifier.set(test) | |
33 | self.assertEqual(notifier.get(), test) | |
34 | ||
35 | def test_get_default_notifier(self): | |
36 | self.assertEqual(notifier.get(), notifier._noop_notifier) | |
37 | ||
38 | def test_notify(self): | |
39 | m = mock.MagicMock() | |
40 | notifier.set(m) | |
41 | notifier.notify(10) | |
42 | ||
43 | m.assert_called_once_with(10) | |
44 | ||
45 | @mock.patch("osprofiler.notifier.base.Notifier.factory") | |
46 | def test_create(self, mock_factory): | |
47 | ||
48 | result = notifier.create("test", 10, b=20) | |
49 | mock_factory.assert_called_once_with("test", 10, b=20) | |
50 | self.assertEqual(mock_factory.return_value, result) |
0 | # Copyright 2016 Mirantis Inc. | |
1 | # All Rights Reserved. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
4 | # not use this file except in compliance with the License. You may obtain | |
5 | # a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
12 | # License for the specific language governing permissions and limitations | |
13 | # under the License. | |
14 | ||
15 | import mock | |
16 | from oslo_config import fixture | |
17 | from osprofiler import opts | |
18 | from osprofiler.tests import test | |
19 | ||
20 | ||
21 | class ConfigTestCase(test.TestCase): | |
22 | def setUp(self): | |
23 | super(ConfigTestCase, self).setUp() | |
24 | self.conf_fixture = self.useFixture(fixture.Config()) | |
25 | ||
26 | def test_options_defaults(self): | |
27 | opts.set_defaults(self.conf_fixture.conf) | |
28 | self.assertFalse(self.conf_fixture.conf.profiler.enabled) | |
29 | self.assertFalse(self.conf_fixture.conf.profiler.trace_sqlalchemy) | |
30 | self.assertEqual("SECRET_KEY", | |
31 | self.conf_fixture.conf.profiler.hmac_keys) | |
32 | self.assertFalse(opts.is_trace_enabled(self.conf_fixture.conf)) | |
33 | self.assertFalse(opts.is_db_trace_enabled(self.conf_fixture.conf)) | |
34 | ||
35 | def test_options_defaults_override(self): | |
36 | opts.set_defaults(self.conf_fixture.conf, enabled=True, | |
37 | trace_sqlalchemy=True, | |
38 | hmac_keys="MY_KEY") | |
39 | self.assertTrue(self.conf_fixture.conf.profiler.enabled) | |
40 | self.assertTrue(self.conf_fixture.conf.profiler.trace_sqlalchemy) | |
41 | self.assertEqual("MY_KEY", | |
42 | self.conf_fixture.conf.profiler.hmac_keys) | |
43 | self.assertTrue(opts.is_trace_enabled(self.conf_fixture.conf)) | |
44 | self.assertTrue(opts.is_db_trace_enabled(self.conf_fixture.conf)) | |
45 | ||
46 | @mock.patch("osprofiler.web.enable") | |
47 | @mock.patch("osprofiler.web.disable") | |
48 | def test_web_trace_disabled(self, mock_disable, mock_enable): | |
49 | opts.set_defaults(self.conf_fixture.conf, hmac_keys="MY_KEY") | |
50 | opts.enable_web_trace(self.conf_fixture.conf) | |
51 | opts.disable_web_trace(self.conf_fixture.conf) | |
52 | self.assertEqual(0, mock_enable.call_count) | |
53 | self.assertEqual(0, mock_disable.call_count) | |
54 | ||
55 | @mock.patch("osprofiler.web.enable") | |
56 | @mock.patch("osprofiler.web.disable") | |
57 | def test_web_trace_enabled(self, mock_disable, mock_enable): | |
58 | opts.set_defaults(self.conf_fixture.conf, enabled=True, | |
59 | hmac_keys="MY_KEY") | |
60 | opts.enable_web_trace(self.conf_fixture.conf) | |
61 | opts.disable_web_trace(self.conf_fixture.conf) | |
62 | mock_enable.assert_called_once_with("MY_KEY") | |
63 | mock_disable.assert_called_once_with() |
0 | # Copyright 2014 Mirantis Inc. | |
1 | # All Rights Reserved. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
4 | # not use this file except in compliance with the License. You may obtain | |
5 | # a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
12 | # License for the specific language governing permissions and limitations | |
13 | # under the License. | |
14 | ||
15 | import collections | |
16 | import copy | |
17 | import datetime | |
18 | import mock | |
19 | import re | |
20 | ||
21 | import six | |
22 | ||
23 | from osprofiler import profiler | |
24 | from osprofiler.tests import test | |
25 | ||
26 | ||
27 | class ProfilerGlobMethodsTestCase(test.TestCase): | |
28 | ||
29 | def test_get_profiler_not_inited(self): | |
30 | profiler._clean() | |
31 | self.assertIsNone(profiler.get()) | |
32 | ||
33 | def test_get_profiler_and_init(self): | |
34 | p = profiler.init("secret", base_id="1", parent_id="2") | |
35 | self.assertEqual(profiler.get(), p) | |
36 | ||
37 | self.assertEqual(p.get_base_id(), "1") | |
38 | # NOTE(boris-42): until the first start() call, get_id() returns the parent_id | |
39 | self.assertEqual(p.get_id(), "2") | |
40 | ||
41 | def test_start_not_inited(self): | |
42 | profiler._clean() | |
43 | profiler.start("name") | |
44 | ||
45 | def test_start(self): | |
46 | p = profiler.init("secret", base_id="1", parent_id="2") | |
47 | p.start = mock.MagicMock() | |
48 | profiler.start("name", info="info") | |
49 | p.start.assert_called_once_with("name", info="info") | |
50 | ||
51 | def test_stop_not_inited(self): | |
52 | profiler._clean() | |
53 | profiler.stop() | |
54 | ||
55 | def test_stop(self): | |
56 | p = profiler.init("secret", base_id="1", parent_id="2") | |
57 | p.stop = mock.MagicMock() | |
58 | profiler.stop(info="info") | |
59 | p.stop.assert_called_once_with(info="info") | |
60 | ||
61 | ||
62 | class ProfilerTestCase(test.TestCase): | |
63 | ||
64 | def test_profiler_get_base_id(self): | |
65 | prof = profiler._Profiler("secret", base_id="1", parent_id="2") | |
66 | self.assertEqual(prof.get_base_id(), "1") | |
67 | ||
68 | @mock.patch("osprofiler.profiler.uuid.uuid4") | |
69 | def test_profiler_get_parent_id(self, mock_uuid4): | |
70 | mock_uuid4.return_value = "42" | |
71 | prof = profiler._Profiler("secret", base_id="1", parent_id="2") | |
72 | prof.start("test") | |
73 | self.assertEqual(prof.get_parent_id(), "2") | |
74 | ||
75 | @mock.patch("osprofiler.profiler.uuid.uuid4") | |
76 | def test_profiler_get_base_id_unset_case(self, mock_uuid4): | |
77 | mock_uuid4.return_value = "42" | |
78 | prof = profiler._Profiler("secret") | |
79 | self.assertEqual(prof.get_base_id(), "42") | |
80 | self.assertEqual(prof.get_parent_id(), "42") | |
81 | ||
82 | @mock.patch("osprofiler.profiler.uuid.uuid4") | |
83 | def test_profiler_get_id(self, mock_uuid4): | |
84 | mock_uuid4.return_value = "43" | |
85 | prof = profiler._Profiler("secret") | |
86 | prof.start("test") | |
87 | self.assertEqual(prof.get_id(), "43") | |
88 | ||
89 | @mock.patch("osprofiler.profiler.datetime") | |
90 | @mock.patch("osprofiler.profiler.uuid.uuid4") | |
91 | @mock.patch("osprofiler.profiler.notifier.notify") | |
92 | def test_profiler_start(self, mock_notify, mock_uuid4, mock_datetime): | |
93 | mock_uuid4.return_value = "44" | |
94 | now = datetime.datetime.utcnow() | |
95 | mock_datetime.datetime.utcnow.return_value = now | |
96 | ||
97 | info = {"some": "info"} | |
98 | payload = { | |
99 | "name": "test-start", | |
100 | "base_id": "1", | |
101 | "parent_id": "2", | |
102 | "trace_id": "44", | |
103 | "info": info, | |
104 | "timestamp": now.strftime("%Y-%m-%dT%H:%M:%S.%f"), | |
105 | } | |
106 | ||
107 | prof = profiler._Profiler("secret", base_id="1", parent_id="2") | |
108 | prof.start("test", info=info) | |
109 | ||
110 | mock_notify.assert_called_once_with(payload) | |
111 | ||
112 | @mock.patch("osprofiler.profiler.datetime") | |
113 | @mock.patch("osprofiler.profiler.notifier.notify") | |
114 | def test_profiler_stop(self, mock_notify, mock_datetime): | |
115 | now = datetime.datetime.utcnow() | |
116 | mock_datetime.datetime.utcnow.return_value = now | |
117 | prof = profiler._Profiler("secret", base_id="1", parent_id="2") | |
118 | prof._trace_stack.append("44") | |
119 | prof._name.append("abc") | |
120 | ||
121 | info = {"some": "info"} | |
122 | prof.stop(info=info) | |
123 | ||
124 | payload = { | |
125 | "name": "abc-stop", | |
126 | "base_id": "1", | |
127 | "parent_id": "2", | |
128 | "trace_id": "44", | |
129 | "info": info, | |
130 | "timestamp": now.strftime("%Y-%m-%dT%H:%M:%S.%f"), | |
131 | } | |
132 | ||
133 | mock_notify.assert_called_once_with(payload) | |
134 | self.assertEqual(len(prof._name), 0) | |
135 | self.assertEqual(prof._trace_stack, collections.deque(["1", "2"])) | |
136 | ||
137 | def test_profiler_hmac(self): | |
138 | hmac = "secret" | |
139 | prof = profiler._Profiler(hmac, base_id="1", parent_id="2") | |
140 | self.assertEqual(hmac, prof.hmac_key) | |
141 | ||
142 | ||
143 | class WithTraceTestCase(test.TestCase): | |
144 | ||
145 | @mock.patch("osprofiler.profiler.stop") | |
146 | @mock.patch("osprofiler.profiler.start") | |
147 | def test_with_trace(self, mock_start, mock_stop): | |
148 | ||
149 | with profiler.Trace("a", info="a1"): | |
150 | mock_start.assert_called_once_with("a", info="a1") | |
151 | mock_start.reset_mock() | |
152 | with profiler.Trace("b", info="b1"): | |
153 | mock_start.assert_called_once_with("b", info="b1") | |
154 | mock_stop.assert_called_once_with() | |
155 | mock_stop.reset_mock() | |
156 | mock_stop.assert_called_once_with() | |
157 | ||
158 | ||
159 | @profiler.trace("function", info={"info": "some_info"}) | |
160 | def tracede_func(i): | |
161 | return i | |
162 | ||
163 | ||
164 | @profiler.trace("hide_args", hide_args=True) | |
165 | def trace_hide_args_func(a, i=10): | |
166 | return (a, i) | |
167 | ||
168 | ||
169 | class TraceDecoratorTestCase(test.TestCase): | |
170 | ||
171 | @mock.patch("osprofiler.profiler.stop") | |
172 | @mock.patch("osprofiler.profiler.start") | |
173 | def test_with_args(self, mock_start, mock_stop): | |
174 | self.assertEqual(1, tracede_func(1)) | |
175 | expected_info = { | |
176 | "info": "some_info", | |
177 | "function": { | |
178 | "name": "osprofiler.tests.test_profiler.tracede_func", | |
179 | "args": str((1,)), | |
180 | "kwargs": str({}) | |
181 | } | |
182 | } | |
183 | mock_start.assert_called_once_with("function", info=expected_info) | |
184 | mock_stop.assert_called_once_with() | |
185 | ||
186 | @mock.patch("osprofiler.profiler.stop") | |
187 | @mock.patch("osprofiler.profiler.start") | |
188 | def test_without_args(self, mock_start, mock_stop): | |
189 | self.assertEqual((1, 2), trace_hide_args_func(1, i=2)) | |
190 | expected_info = { | |
191 | "function": { | |
192 | "name": "osprofiler.tests.test_profiler.trace_hide_args_func" | |
193 | } | |
194 | } | |
195 | mock_start.assert_called_once_with("hide_args", info=expected_info) | |
196 | mock_stop.assert_called_once_with() | |
197 | ||
198 | ||
199 | class FakeTracedCls(object): | |
200 | ||
201 | def method1(self, a, b, c=10): | |
202 | return a + b + c | |
203 | ||
204 | def method2(self, d, e): | |
205 | return d - e | |
206 | ||
207 | def method3(self, g=10, h=20): | |
208 | return g * h | |
209 | ||
210 | def _method(self, i): | |
211 | return i | |
212 | ||
213 | ||
214 | @profiler.trace_cls("rpc", info={"a": 10}) | |
215 | class FakeTraceClassWithInfo(FakeTracedCls): | |
216 | pass | |
217 | ||
218 | ||
219 | @profiler.trace_cls("a", info={"b": 20}, hide_args=True) | |
220 | class FakeTraceClassHideArgs(FakeTracedCls): | |
221 | pass | |
222 | ||
223 | ||
224 | @profiler.trace_cls("rpc", trace_private=True) | |
225 | class FakeTracePrivate(FakeTracedCls): | |
226 | pass | |
227 | ||
228 | ||
229 | @profiler.trace_cls("rpc") | |
230 | class FakeTraceStatic(FakeTracedCls): | |
231 | @staticmethod | |
232 | def method4(arg): | |
233 | return arg | |
234 | ||
235 | ||
236 | def py3_info(info): | |
237 | # NOTE(boris-42): Python 3 reports the defining class (FakeTracedCls) in the name | |
238 | info_py3 = copy.deepcopy(info) | |
239 | new_name = re.sub("FakeTrace[^.]*", "FakeTracedCls", | |
240 | info_py3["function"]["name"]) | |
241 | info_py3["function"]["name"] = new_name | |
242 | return info_py3 | |
243 | ||
244 | ||
245 | def possible_mock_calls(name, info): | |
246 | # NOTE(boris-42): accept either the Python 2 or Python 3 spelling of the name | |
247 | return [mock.call(name, info=info), mock.call(name, info=py3_info(info))] | |
248 | ||
249 | ||
250 | class TraceClsDecoratorTestCase(test.TestCase): | |
251 | ||
252 | @mock.patch("osprofiler.profiler.stop") | |
253 | @mock.patch("osprofiler.profiler.start") | |
254 | def test_args(self, mock_start, mock_stop): | |
255 | fake_cls = FakeTraceClassWithInfo() | |
256 | self.assertEqual(30, fake_cls.method1(5, 15)) | |
257 | expected_info = { | |
258 | "a": 10, | |
259 | "function": { | |
260 | "name": ("osprofiler.tests.test_profiler" | |
261 | ".FakeTraceClassWithInfo.method1"), | |
262 | "args": str((fake_cls, 5, 15)), | |
263 | "kwargs": str({}) | |
264 | } | |
265 | } | |
266 | self.assertEqual(1, len(mock_start.call_args_list)) | |
267 | self.assertIn(mock_start.call_args_list[0], | |
268 | possible_mock_calls("rpc", expected_info)) | |
269 | mock_stop.assert_called_once_with() | |
270 | ||
271 | @mock.patch("osprofiler.profiler.stop") | |
272 | @mock.patch("osprofiler.profiler.start") | |
273 | def test_kwargs(self, mock_start, mock_stop): | |
274 | fake_cls = FakeTraceClassWithInfo() | |
275 | self.assertEqual(50, fake_cls.method3(g=5, h=10)) | |
276 | expected_info = { | |
277 | "a": 10, | |
278 | "function": { | |
279 | "name": ("osprofiler.tests.test_profiler" | |
280 | ".FakeTraceClassWithInfo.method3"), | |
281 | "args": str((fake_cls,)), | |
282 | "kwargs": str({"g": 5, "h": 10}) | |
283 | } | |
284 | } | |
285 | self.assertEqual(1, len(mock_start.call_args_list)) | |
286 | self.assertIn(mock_start.call_args_list[0], | |
287 | possible_mock_calls("rpc", expected_info)) | |
288 | mock_stop.assert_called_once_with() | |
289 | ||
290 | @mock.patch("osprofiler.profiler.stop") | |
291 | @mock.patch("osprofiler.profiler.start") | |
292 | def test_without_private(self, mock_start, mock_stop): | |
293 | fake_cls = FakeTraceClassHideArgs() | |
294 | self.assertEqual(10, fake_cls._method(10)) | |
295 | self.assertFalse(mock_start.called) | |
296 | self.assertFalse(mock_stop.called) | |
297 | ||
298 | @mock.patch("osprofiler.profiler.stop") | |
299 | @mock.patch("osprofiler.profiler.start") | |
300 | def test_without_args(self, mock_start, mock_stop): | |
301 | fake_cls = FakeTraceClassHideArgs() | |
302 | self.assertEqual(40, fake_cls.method1(5, 15, c=20)) | |
303 | expected_info = { | |
304 | "b": 20, | |
305 | "function": { | |
306 | "name": ("osprofiler.tests.test_profiler" | |
307 | ".FakeTraceClassHideArgs.method1"), | |
308 | } | |
309 | } | |
310 | ||
311 | self.assertEqual(1, len(mock_start.call_args_list)) | |
312 | self.assertIn(mock_start.call_args_list[0], | |
313 | possible_mock_calls("a", expected_info)) | |
314 | mock_stop.assert_called_once_with() | |
315 | ||
316 | @mock.patch("osprofiler.profiler.stop") | |
317 | @mock.patch("osprofiler.profiler.start") | |
318 | def test_private_methods(self, mock_start, mock_stop): | |
319 | fake_cls = FakeTracePrivate() | |
320 | self.assertEqual(5, fake_cls._method(5)) | |
321 | ||
322 | expected_info = { | |
323 | "function": { | |
324 | "name": ("osprofiler.tests.test_profiler" | |
325 | ".FakeTracePrivate._method"), | |
326 | "args": str((fake_cls, 5)), | |
327 | "kwargs": str({}) | |
328 | } | |
329 | } | |
330 | ||
331 | self.assertEqual(1, len(mock_start.call_args_list)) | |
332 | self.assertIn(mock_start.call_args_list[0], | |
333 | possible_mock_calls("rpc", expected_info)) | |
334 | mock_stop.assert_called_once_with() | |
335 | ||
336 | @mock.patch("osprofiler.profiler.stop") | |
337 | @mock.patch("osprofiler.profiler.start") | |
338 | @test.testcase.skip( | |
339 | "Static method tracing was disabled due to a bug. This test should be " | |
340 | "skipped until we find the way to address it.") | |
341 | def test_static(self, mock_start, mock_stop): | |
342 | fake_cls = FakeTraceStatic() | |
343 | ||
344 | self.assertEqual(25, fake_cls.method4(25)) | |
345 | ||
346 | expected_info = { | |
347 | "function": { | |
348 | # fixme(boris-42): Static methods are treated differently in | |
349 | # Python 2.x and Python 3.x. So in PY2 we | |
350 | # expect to see method4 because method is | |
351 | # static and doesn't have reference to class | |
352 | # - and FakeTraceStatic.method4 in PY3 | |
353 | "name": | |
354 | "osprofiler.tests.test_profiler.method4" if six.PY2 else | |
355 | "osprofiler.tests.test_profiler.FakeTraceStatic.method4", | |
356 | "args": str((25,)), | |
357 | "kwargs": str({}) | |
358 | } | |
359 | } | |
360 | ||
361 | self.assertEqual(1, len(mock_start.call_args_list)) | |
362 | self.assertIn(mock_start.call_args_list[0], | |
363 | possible_mock_calls("rpc", expected_info)) | |
364 | mock_stop.assert_called_once_with() | |
365 | ||
366 | ||
367 | @six.add_metaclass(profiler.TracedMeta) | |
368 | class FakeTraceWithMetaclassBase(object): | |
369 | __trace_args__ = {"name": "rpc", | |
370 | "info": {"a": 10}} | |
371 | ||
372 | def method1(self, a, b, c=10): | |
373 | return a + b + c | |
374 | ||
375 | def method2(self, d, e): | |
376 | return d - e | |
377 | ||
378 | def method3(self, g=10, h=20): | |
379 | return g * h | |
380 | ||
381 | def _method(self, i): | |
382 | return i | |
383 | ||
384 | ||
385 | class FakeTraceDummy(FakeTraceWithMetaclassBase): | |
386 | def method4(self, j): | |
387 | return j | |
388 | ||
389 | ||
390 | class FakeTraceWithMetaclassHideArgs(FakeTraceWithMetaclassBase): | |
391 | __trace_args__ = {"name": "a", | |
392 | "info": {"b": 20}, | |
393 | "hide_args": True} | |
394 | ||
395 | def method5(self, k, m): | |
396 | return k + m | |
397 | ||
398 | ||
399 | class FakeTraceWithMetaclassPrivate(FakeTraceWithMetaclassBase): | |
400 | __trace_args__ = {"name": "rpc", | |
401 | "trace_private": True} | |
402 | ||
403 | def _new_private_method(self, m): | |
404 | return 2 * m | |
405 | ||
406 | ||
407 | class TraceWithMetaclassTestCase(test.TestCase): | |
408 | ||
409 | def test_no_name_exception(self): | |
410 | def define_class_with_no_name(): | |
411 | @six.add_metaclass(profiler.TracedMeta) | |
412 | class FakeTraceWithMetaclassNoName(FakeTracedCls): | |
413 | pass | |
414 | self.assertRaises(TypeError, define_class_with_no_name) | |
415 | ||
416 | @mock.patch("osprofiler.profiler.stop") | |
417 | @mock.patch("osprofiler.profiler.start") | |
418 | def test_args(self, mock_start, mock_stop): | |
419 | fake_cls = FakeTraceWithMetaclassBase() | |
420 | self.assertEqual(30, fake_cls.method1(5, 15)) | |
421 | expected_info = { | |
422 | "a": 10, | |
423 | "function": { | |
424 | "name": ("osprofiler.tests.test_profiler" | |
425 | ".FakeTraceWithMetaclassBase.method1"), | |
426 | "args": str((fake_cls, 5, 15)), | |
427 | "kwargs": str({}) | |
428 | } | |
429 | } | |
430 | self.assertEqual(1, len(mock_start.call_args_list)) | |
431 | self.assertIn(mock_start.call_args_list[0], | |
432 | possible_mock_calls("rpc", expected_info)) | |
433 | mock_stop.assert_called_once_with() | |
434 | ||
435 | @mock.patch("osprofiler.profiler.stop") | |
436 | @mock.patch("osprofiler.profiler.start") | |
437 | def test_kwargs(self, mock_start, mock_stop): | |
438 | fake_cls = FakeTraceWithMetaclassBase() | |
439 | self.assertEqual(50, fake_cls.method3(g=5, h=10)) | |
440 | expected_info = { | |
441 | "a": 10, | |
442 | "function": { | |
443 | "name": ("osprofiler.tests.test_profiler" | |
444 | ".FakeTraceWithMetaclassBase.method3"), | |
445 | "args": str((fake_cls,)), | |
446 | "kwargs": str({"g": 5, "h": 10}) | |
447 | } | |
448 | } | |
449 | self.assertEqual(1, len(mock_start.call_args_list)) | |
450 | self.assertIn(mock_start.call_args_list[0], | |
451 | possible_mock_calls("rpc", expected_info)) | |
452 | mock_stop.assert_called_once_with() | |
453 | ||
454 | @mock.patch("osprofiler.profiler.stop") | |
455 | @mock.patch("osprofiler.profiler.start") | |
456 | def test_without_private(self, mock_start, mock_stop): | |
457 | fake_cls = FakeTraceWithMetaclassHideArgs() | |
458 | self.assertEqual(10, fake_cls._method(10)) | |
459 | self.assertFalse(mock_start.called) | |
460 | self.assertFalse(mock_stop.called) | |
461 | ||
462 | @mock.patch("osprofiler.profiler.stop") | |
463 | @mock.patch("osprofiler.profiler.start") | |
464 | def test_without_args(self, mock_start, mock_stop): | |
465 | fake_cls = FakeTraceWithMetaclassHideArgs() | |
466 | self.assertEqual(20, fake_cls.method5(5, 15)) | |
467 | expected_info = { | |
468 | "b": 20, | |
469 | "function": { | |
470 | "name": ("osprofiler.tests.test_profiler" | |
471 | ".FakeTraceWithMetaclassHideArgs.method5") | |
472 | } | |
473 | } | |
474 | ||
475 | self.assertEqual(1, len(mock_start.call_args_list)) | |
476 | self.assertIn(mock_start.call_args_list[0], | |
477 | possible_mock_calls("a", expected_info)) | |
478 | mock_stop.assert_called_once_with() | |
479 | ||
480 | @mock.patch("osprofiler.profiler.stop") | |
481 | @mock.patch("osprofiler.profiler.start") | |
482 | def test_private_methods(self, mock_start, mock_stop): | |
483 | fake_cls = FakeTraceWithMetaclassPrivate() | |
484 | self.assertEqual(10, fake_cls._new_private_method(5)) | |
485 | ||
486 | expected_info = { | |
487 | "function": { | |
488 | "name": ("osprofiler.tests.test_profiler" | |
489 | ".FakeTraceWithMetaclassPrivate._new_private_method"), | |
490 | "args": str((fake_cls, 5)), | |
491 | "kwargs": str({}) | |
492 | } | |
493 | } | |
494 | ||
495 | self.assertEqual(1, len(mock_start.call_args_list)) | |
496 | self.assertIn(mock_start.call_args_list[0], | |
497 | possible_mock_calls("rpc", expected_info)) | |
498 | mock_stop.assert_called_once_with() |
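The metaclass tests above rely on `profiler.TracedMeta` wrapping methods according to `__trace_args__`. This is a deliberately simplified sketch of that pattern (not osprofiler's actual implementation; it omits inheritance handling and private-method options):

```python
# A toy TracedMeta: wrap every public callable defined on the class with
# start/stop bookkeeping, driven by a required __trace_args__["name"].
import functools

EVENTS = []  # records ("start"/"stop", trace name) pairs for the demo

def _wrap(name, func):
    @functools.wraps(func)
    def inner(*args, **kwargs):
        EVENTS.append(("start", name))
        try:
            return func(*args, **kwargs)
        finally:
            EVENTS.append(("stop", name))
    return inner

class TracedMeta(type):
    def __init__(cls, cls_name, bases, attrs):
        super(TracedMeta, cls).__init__(cls_name, bases, attrs)
        trace_args = attrs.get("__trace_args__", {})
        if "name" not in trace_args:
            # Mirrors test_no_name_exception: a missing name is a TypeError.
            raise TypeError("__trace_args__ must define 'name'")
        for attr, value in attrs.items():
            if callable(value) and not attr.startswith("_"):
                setattr(cls, attr, _wrap(trace_args["name"], value))

class Fake(metaclass=TracedMeta):
    __trace_args__ = {"name": "rpc"}

    def method1(self, a, b):
        return a + b

assert Fake().method1(2, 3) == 5
assert EVENTS == [("start", "rpc"), ("stop", "rpc")]
```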
0 | # Copyright 2014 Mirantis Inc. | |
1 | # All Rights Reserved. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
4 | # not use this file except in compliance with the License. You may obtain | |
5 | # a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
12 | # License for the specific language governing permissions and limitations | |
13 | # under the License. | |
14 | ||
15 | import mock | |
16 | ||
17 | from osprofiler import sqlalchemy | |
18 | from osprofiler.tests import test | |
19 | ||
20 | ||
21 | class SqlalchemyTracingTestCase(test.TestCase): | |
22 | ||
23 | @mock.patch("osprofiler.sqlalchemy.profiler") | |
24 | def test_before_execute(self, mock_profiler): | |
25 | handler = sqlalchemy._before_cursor_execute("sql") | |
26 | ||
27 | handler(mock.MagicMock(), 1, 2, 3, 4, 5) | |
28 | expected_info = {"db": {"statement": 2, "params": 3}} | |
29 | mock_profiler.start.assert_called_once_with("sql", info=expected_info) | |
30 | ||
31 | @mock.patch("osprofiler.sqlalchemy.profiler") | |
32 | def test_after_execute(self, mock_profiler): | |
33 | handler = sqlalchemy._after_cursor_execute() | |
34 | handler(mock.MagicMock(), 1, 2, 3, 4, 5) | |
35 | mock_profiler.stop.assert_called_once_with() | |
36 | ||
37 | @mock.patch("osprofiler.sqlalchemy._before_cursor_execute") | |
38 | @mock.patch("osprofiler.sqlalchemy._after_cursor_execute") | |
39 | def test_add_tracing(self, mock_after_exc, mock_before_exc): | |
40 | sa = mock.MagicMock() | |
41 | engine = mock.MagicMock() | |
42 | ||
43 | mock_before_exc.return_value = "before" | |
44 | mock_after_exc.return_value = "after" | |
45 | ||
46 | sqlalchemy.add_tracing(sa, engine, "sql") | |
47 | ||
48 | mock_before_exc.assert_called_once_with("sql") | |
49 | mock_after_exc.assert_called_once_with() | |
50 | expected_calls = [ | |
51 | mock.call(engine, "before_cursor_execute", "before"), | |
52 | mock.call(engine, "after_cursor_execute", "after") | |
53 | ] | |
54 | self.assertEqual(sa.event.listen.call_args_list, expected_calls) | |
55 | ||
56 | @mock.patch("osprofiler.sqlalchemy._before_cursor_execute") | |
57 | @mock.patch("osprofiler.sqlalchemy._after_cursor_execute") | |
58 | def test_disable_and_enable(self, mock_after_exc, mock_before_exc): | |
59 | sqlalchemy.disable() | |
60 | ||
61 | sa = mock.MagicMock() | |
62 | engine = mock.MagicMock() | |
63 | sqlalchemy.add_tracing(sa, engine, "sql") | |
64 | self.assertFalse(mock_after_exc.called) | |
65 | self.assertFalse(mock_before_exc.called) | |
66 | ||
67 | sqlalchemy.enable() | |
68 | sqlalchemy.add_tracing(sa, engine, "sql") | |
69 | self.assertTrue(mock_after_exc.called) | |
70 | self.assertTrue(mock_before_exc.called) |
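The handlers exercised above come from closure factories: `_before_cursor_execute("sql")` returns a function whose positional signature matches SQLAlchemy's cursor-execute events. A hypothetical sketch of that pattern (names and the injected `start` callable are illustrative, not osprofiler's API):

```python
# The factory captures the trace name; the returned handler receives the
# six positional arguments SQLAlchemy passes to before_cursor_execute.
def make_before_handler(name, start):
    def handler(conn, cursor, statement, params, context, executemany):
        # Record only the statement and params, mirroring the
        # expected_info dict asserted in test_before_execute().
        start(name, info={"db": {"statement": statement, "params": params}})
    return handler

calls = []
handler = make_before_handler(
    "sql", lambda name, info: calls.append((name, info)))
handler(None, None, "SELECT 1", (), None, False)
assert calls == [("sql", {"db": {"statement": "SELECT 1", "params": ()}})]
```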
0 | # Copyright 2014 Mirantis Inc. | |
1 | # All Rights Reserved. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
4 | # not use this file except in compliance with the License. You may obtain | |
5 | # a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
12 | # License for the specific language governing permissions and limitations | |
13 | # under the License. | |
14 | ||
15 | import base64 | |
16 | import hashlib | |
17 | import hmac | |
18 | ||
19 | import mock | |
20 | ||
21 | from osprofiler import _utils as utils | |
22 | from osprofiler.tests import test | |
23 | ||
24 | ||
25 | class UtilsTestCase(test.TestCase): | |
26 | ||
27 | def test_split(self): | |
28 | self.assertEqual([1, 2], utils.split([1, 2])) | |
29 | self.assertEqual(["A", "B"], utils.split("A, B")) | |
30 | self.assertEqual(["A", " B"], utils.split("A, B", strip=False)) | |
31 | ||
32 | def test_split_wrong_type(self): | |
33 | self.assertRaises(TypeError, utils.split, 1) | |
34 | ||
35 | def test_binary_encode_and_decode(self): | |
36 | self.assertEqual("text", | |
37 | utils.binary_decode(utils.binary_encode("text"))) | |
38 | ||
39 | def test_binary_encode_invalid_type(self): | |
40 | self.assertRaises(TypeError, utils.binary_encode, 1234) | |
41 | ||
42 | def test_binary_encode_binary_type(self): | |
43 | binary = utils.binary_encode("text") | |
44 | self.assertEqual(binary, utils.binary_encode(binary)) | |
45 | ||
46 | def test_binary_decode_invalid_type(self): | |
47 | self.assertRaises(TypeError, utils.binary_decode, 1234) | |
48 | ||
49 | def test_binary_decode_text_type(self): | |
50 | self.assertEqual("text", utils.binary_decode("text")) | |
51 | ||
52 | def test_generate_hmac(self): | |
53 | hmac_key = "secret" | |
54 | data = "my data" | |
55 | ||
56 | h = hmac.new(utils.binary_encode(hmac_key), digestmod=hashlib.sha1) | |
57 | h.update(utils.binary_encode(data)) | |
58 | ||
59 | self.assertEqual(h.hexdigest(), utils.generate_hmac(data, hmac_key)) | |
60 | ||
61 | def test_signed_pack_unpack(self): | |
62 | hmac = "secret" | |
63 | data = {"some": "data"} | |
64 | ||
65 | packed_data, hmac_data = utils.signed_pack(data, hmac) | |
66 | ||
67 | process_data = utils.signed_unpack(packed_data, hmac_data, [hmac]) | |
68 | self.assertIn("hmac_key", process_data) | |
69 | process_data.pop("hmac_key") | |
70 | self.assertEqual(data, process_data) | |
71 | ||
72 | def test_signed_pack_unpack_many_keys(self): | |
73 | keys = ["secret", "secret2", "secret3"] | |
74 | data = {"some": "data"} | |
75 | packed_data, hmac_data = utils.signed_pack(data, keys[-1]) | |
76 | ||
77 | process_data = utils.signed_unpack(packed_data, hmac_data, keys) | |
78 | self.assertEqual(keys[-1], process_data["hmac_key"]) | |
79 | ||
80 | def test_signed_pack_unpack_many_wrong_keys(self): | |
81 | keys = ["secret", "secret2", "secret3"] | |
82 | data = {"some": "data"} | |
83 | packed_data, hmac_data = utils.signed_pack(data, "password") | |
84 | ||
85 | process_data = utils.signed_unpack(packed_data, hmac_data, keys) | |
86 | self.assertIsNone(process_data) | |
87 | ||
88 | def test_signed_unpack_wrong_key(self): | |
89 | data = {"some": "data"} | |
90 | packed_data, hmac_data = utils.signed_pack(data, "secret") | |
91 | ||
92 | self.assertIsNone(utils.signed_unpack(packed_data, hmac_data, "wrong")) | |
93 | ||
94 | def test_signed_unpack_no_key_or_hmac_data(self): | |
95 | data = {"some": "data"} | |
96 | packed_data, hmac_data = utils.signed_pack(data, "secret") | |
97 | self.assertIsNone(utils.signed_unpack(packed_data, hmac_data, None)) | |
98 | self.assertIsNone(utils.signed_unpack(packed_data, None, "secret")) | |
99 | self.assertIsNone(utils.signed_unpack(packed_data, " ", "secret")) | |
100 | ||
101 | @mock.patch("osprofiler._utils.generate_hmac") | |
102 | def test_signed_unpack_generate_hmac_failed(self, mock_generate_hmac): | |
103 | mock_generate_hmac.side_effect = Exception | |
104 | self.assertIsNone(utils.signed_unpack("data", "hmac_data", "hmac_key")) | |
105 | ||
106 | def test_signed_unpack_invalid_json(self): | |
107 | hmac = "secret" | |
108 | data = base64.urlsafe_b64encode(utils.binary_encode("not_a_json")) | |
109 | hmac_data = utils.generate_hmac(data, hmac) | |
110 | ||
111 | self.assertIsNone(utils.signed_unpack(data, hmac_data, hmac)) | |
112 | ||
113 | def test_itersubclasses(self): | |
114 | ||
115 | class A(object): | |
116 | pass | |
117 | ||
118 | class B(A): | |
119 | pass | |
120 | ||
121 | class C(A): | |
122 | pass | |
123 | ||
124 | class D(C): | |
125 | pass | |
126 | ||
127 | self.assertEqual([B, C, D], list(utils.itersubclasses(A))) | |
128 | ||
129 | class E(type): | |
130 | pass | |
131 | ||
132 | self.assertEqual([], list(utils.itersubclasses(E))) |
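The `signed_pack`/`signed_unpack` tests above exercise a sign-then-verify envelope. This is a hedged, stdlib-only sketch of that pattern, not osprofiler's exact implementation (it omits the `hmac_key` echo field the tests check for):

```python
# Pack: JSON-encode, urlsafe-base64, and HMAC-sign with a shared key.
# Unpack: try each candidate key and only decode if a digest matches,
# returning None otherwise (as test_signed_unpack_wrong_key expects).
import base64
import hashlib
import hmac
import json

def signed_pack(data, key):
    raw = base64.urlsafe_b64encode(json.dumps(data).encode())
    return raw, hmac.new(key.encode(), raw, hashlib.sha1).hexdigest()

def signed_unpack(raw, digest, keys):
    for key in keys:
        expected = hmac.new(key.encode(), raw, hashlib.sha1).hexdigest()
        if hmac.compare_digest(expected, digest):
            return json.loads(base64.urlsafe_b64decode(raw))
    return None  # no key produced a matching digest: reject the payload

packed, sig = signed_pack({"some": "data"}, "secret")
assert signed_unpack(packed, sig, ["wrong", "secret"]) == {"some": "data"}
assert signed_unpack(packed, sig, ["wrong"]) is None
```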
0 | # Copyright 2014 Mirantis Inc. | |
1 | # All Rights Reserved. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
4 | # not use this file except in compliance with the License. You may obtain | |
5 | # a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
12 | # License for the specific language governing permissions and limitations | |
13 | # under the License. | |
14 | ||
15 | import mock | |
16 | from webob import response as webob_response | |
17 | ||
18 | from osprofiler import _utils as utils | |
19 | from osprofiler import profiler | |
20 | from osprofiler import web | |
21 | ||
22 | from osprofiler.tests import test | |
23 | ||
24 | ||
25 | def dummy_app(environ, response): | |
26 | res = webob_response.Response() | |
27 | return res(environ, response) | |
28 | ||
29 | ||
30 | class WebTestCase(test.TestCase): | |
31 | ||
32 | def setUp(self): | |
33 | super(WebTestCase, self).setUp() | |
34 | profiler._clean() | |
35 | self.addCleanup(profiler._clean) | |
36 | ||
37 | def test_get_trace_id_headers_no_hmac(self): | |
38 | profiler.init(None, base_id="y", parent_id="z") | |
39 | headers = web.get_trace_id_headers() | |
40 | self.assertEqual(headers, {}) | |
41 | ||
42 | def test_get_trace_id_headers(self): | |
43 | profiler.init("key", base_id="y", parent_id="z") | |
44 | headers = web.get_trace_id_headers() | |
45 | self.assertEqual(sorted(headers.keys()), | |
46 | sorted(["X-Trace-Info", "X-Trace-HMAC"])) | |
47 | ||
48 | trace_info = utils.signed_unpack(headers["X-Trace-Info"], | |
49 | headers["X-Trace-HMAC"], ["key"]) | |
50 | self.assertIn("hmac_key", trace_info) | |
51 | self.assertEqual("key", trace_info.pop("hmac_key")) | |
52 | self.assertEqual({"parent_id": "z", "base_id": "y"}, trace_info) | |
53 | ||
54 | @mock.patch("osprofiler.profiler.get") | |
55 | def test_get_trace_id_headers_no_profiler(self, mock_get_profiler): | |
56 | mock_get_profiler.return_value = False | |
57 | headers = web.get_trace_id_headers() | |
58 | self.assertEqual(headers, {}) | |
59 | ||
60 | ||
61 | class WebMiddlewareTestCase(test.TestCase): | |
62 | def setUp(self): | |
63 | super(WebMiddlewareTestCase, self).setUp() | |
64 | profiler._clean() | |
65 | # None is the default state of web._ENABLED, so reset it here | |
66 | web._ENABLED = None | |
67 | self.addCleanup(profiler._clean) | |
68 | ||
69 | def tearDown(self): | |
70 | web.enable() | |
71 | super(WebMiddlewareTestCase, self).tearDown() | |
72 | ||
73 | def test_factory(self): | |
74 | mock_app = mock.MagicMock() | |
75 | local_conf = {"enabled": True, "hmac_keys": "123"} | |
76 | ||
77 | factory = web.WsgiMiddleware.factory(None, **local_conf) | |
78 | wsgi = factory(mock_app) | |
79 | ||
80 | self.assertEqual(wsgi.application, mock_app) | |
81 | self.assertEqual(wsgi.name, "wsgi") | |
82 | self.assertTrue(wsgi.enabled) | |
83 | self.assertEqual(wsgi.hmac_keys, [local_conf["hmac_keys"]]) | |
84 | ||
85 | def _test_wsgi_middleware_with_invalid_trace(self, headers, hmac_key, | |
86 | mock_profiler_init, | |
87 | enabled=True): | |
88 | request = mock.MagicMock() | |
89 | request.get_response.return_value = "yeah!" | |
90 | request.headers = headers | |
91 | ||
92 | middleware = web.WsgiMiddleware("app", hmac_key, enabled=enabled) | |
93 | self.assertEqual("yeah!", middleware(request)) | |
94 | request.get_response.assert_called_once_with("app") | |
95 | self.assertEqual(0, mock_profiler_init.call_count) | |
96 | ||
97 | @mock.patch("osprofiler.web.profiler.init") | |
98 | def test_wsgi_middleware_disabled(self, mock_profiler_init): | |
99 | hmac_key = "secret" | |
100 | pack = utils.signed_pack({"base_id": "1", "parent_id": "2"}, hmac_key) | |
101 | headers = { | |
102 | "a": "1", | |
103 | "b": "2", | |
104 | "X-Trace-Info": pack[0], | |
105 | "X-Trace-HMAC": pack[1] | |
106 | } | |
107 | ||
108 | self._test_wsgi_middleware_with_invalid_trace(headers, hmac_key, | |
109 | mock_profiler_init, | |
110 | enabled=False) | |
111 | ||
112 | @mock.patch("osprofiler.web.profiler.init") | |
113 | def test_wsgi_middleware_no_trace(self, mock_profiler_init): | |
114 | headers = { | |
115 | "a": "1", | |
116 | "b": "2" | |
117 | } | |
118 | self._test_wsgi_middleware_with_invalid_trace(headers, "secret", | |
119 | mock_profiler_init) | |
120 | ||
121 | @mock.patch("osprofiler.web.profiler.init") | |
122 | def test_wsgi_middleware_invalid_trace_headers(self, mock_profiler_init): | |
123 | headers = { | |
124 | "a": "1", | |
125 | "b": "2", | |
126 | "X-Trace-Info": "abbababababa", | |
127 | "X-Trace-HMAC": "abbababababa" | |
128 | } | |
129 | self._test_wsgi_middleware_with_invalid_trace(headers, "secret", | |
130 | mock_profiler_init) | |
131 | ||
132 | @mock.patch("osprofiler.web.profiler.init") | |
133 | def test_wsgi_middleware_no_trace_hmac(self, mock_profiler_init): | |
134 | hmac_key = "secret" | |
135 | pack = utils.signed_pack({"base_id": "1", "parent_id": "2"}, hmac_key) | |
136 | headers = { | |
137 | "a": "1", | |
138 | "b": "2", | |
139 | "X-Trace-Info": pack[0] | |
140 | } | |
141 | self._test_wsgi_middleware_with_invalid_trace(headers, hmac_key, | |
142 | mock_profiler_init) | |
143 | ||
144 | @mock.patch("osprofiler.web.profiler.init") | |
145 | def test_wsgi_middleware_invalid_hmac(self, mock_profiler_init): | |
146 | hmac_key = "secret" | |
147 | pack = utils.signed_pack({"base_id": "1", "parent_id": "2"}, hmac_key) | |
148 | headers = { | |
149 | "a": "1", | |
150 | "b": "2", | |
151 | "X-Trace-Info": pack[0], | |
152 | "X-Trace-HMAC": "not valid hmac" | |
153 | } | |
154 | self._test_wsgi_middleware_with_invalid_trace(headers, hmac_key, | |
155 | mock_profiler_init) | |
156 | ||
157 | @mock.patch("osprofiler.web.profiler.init") | |
158 | def test_wsgi_middleware_invalid_trace_info(self, mock_profiler_init): | |
159 | hmac_key = "secret" | |
160 | pack = utils.signed_pack([{"base_id": "1"}, {"parent_id": "2"}], | |
161 | hmac_key) | |
162 | headers = { | |
163 | "a": "1", | |
164 | "b": "2", | |
165 | "X-Trace-Info": pack[0], | |
166 | "X-Trace-HMAC": pack[1] | |
167 | } | |
168 | self._test_wsgi_middleware_with_invalid_trace(headers, hmac_key, | |
169 | mock_profiler_init) | |
170 | ||
171 | @mock.patch("osprofiler.web.profiler.init") | |
172 | def test_wsgi_middleware_key_passthrough(self, mock_profiler_init): | |
173 | hmac_key = "secret2" | |
174 | request = mock.MagicMock() | |
175 | request.get_response.return_value = "yeah!" | |
176 | request.url = "someurl" | |
177 | request.host_url = "someurl" | |
178 | request.path = "path" | |
179 | request.query_string = "query" | |
180 | request.method = "method" | |
181 | request.scheme = "scheme" | |
182 | ||
183 | pack = utils.signed_pack({"base_id": "1", "parent_id": "2"}, hmac_key) | |
184 | ||
185 | request.headers = { | |
186 | "a": "1", | |
187 | "b": "2", | |
188 | "X-Trace-Info": pack[0], | |
189 | "X-Trace-HMAC": pack[1] | |
190 | } | |
191 | ||
192 | middleware = web.WsgiMiddleware("app", "secret1,%s" % hmac_key, | |
193 | enabled=True) | |
194 | self.assertEqual("yeah!", middleware(request)) | |
195 | mock_profiler_init.assert_called_once_with(hmac_key=hmac_key, | |
196 | base_id="1", | |
197 | parent_id="2") | |
198 | ||
199 | @mock.patch("osprofiler.web.profiler.init") | |
200 | def test_wsgi_middleware_key_passthrough2(self, mock_profiler_init): | |
201 | hmac_key = "secret1" | |
202 | request = mock.MagicMock() | |
203 | request.get_response.return_value = "yeah!" | |
204 | request.url = "someurl" | |
205 | request.host_url = "someurl" | |
206 | request.path = "path" | |
207 | request.query_string = "query" | |
208 | request.method = "method" | |
209 | request.scheme = "scheme" | |
210 | ||
211 | pack = utils.signed_pack({"base_id": "1", "parent_id": "2"}, hmac_key) | |
212 | ||
213 | request.headers = { | |
214 | "a": "1", | |
215 | "b": "2", | |
216 | "X-Trace-Info": pack[0], | |
217 | "X-Trace-HMAC": pack[1] | |
218 | } | |
219 | ||
220 | middleware = web.WsgiMiddleware("app", "%s,secret2" % hmac_key, | |
221 | enabled=True) | |
222 | self.assertEqual("yeah!", middleware(request)) | |
223 | mock_profiler_init.assert_called_once_with(hmac_key=hmac_key, | |
224 | base_id="1", | |
225 | parent_id="2") | |
226 | ||
227 | @mock.patch("osprofiler.web.profiler.Trace") | |
228 | @mock.patch("osprofiler.web.profiler.init") | |
229 | def test_wsgi_middleware(self, mock_profiler_init, mock_profiler_trace): | |
230 | hmac_key = "secret" | |
231 | request = mock.MagicMock() | |
232 | request.get_response.return_value = "yeah!" | |
233 | request.url = "someurl" | |
234 | request.host_url = "someurl" | |
235 | request.path = "path" | |
236 | request.query_string = "query" | |
237 | request.method = "method" | |
238 | request.scheme = "scheme" | |
239 | ||
240 | pack = utils.signed_pack({"base_id": "1", "parent_id": "2"}, hmac_key) | |
241 | ||
242 | request.headers = { | |
243 | "a": "1", | |
244 | "b": "2", | |
245 | "X-Trace-Info": pack[0], | |
246 | "X-Trace-HMAC": pack[1] | |
247 | } | |
248 | ||
249 | middleware = web.WsgiMiddleware("app", hmac_key, enabled=True) | |
250 | self.assertEqual("yeah!", middleware(request)) | |
251 | mock_profiler_init.assert_called_once_with(hmac_key=hmac_key, | |
252 | base_id="1", | |
253 | parent_id="2") | |
254 | expected_info = { | |
255 | "request": { | |
256 | "path": request.path, | |
257 | "query": request.query_string, | |
258 | "method": request.method, | |
259 | "scheme": request.scheme | |
260 | } | |
261 | } | |
262 | mock_profiler_trace.assert_called_once_with("wsgi", info=expected_info) | |
263 | ||
264 | @mock.patch("osprofiler.web.profiler.init") | |
265 | def test_wsgi_middleware_disable_via_python(self, mock_profiler_init): | |
266 | request = mock.MagicMock() | |
267 | request.get_response.return_value = "yeah!" | |
268 | web.disable() | |
269 | middleware = web.WsgiMiddleware("app", "hmac_key", enabled=True) | |
270 | self.assertEqual("yeah!", middleware(request)) | |
271 | self.assertEqual(mock_profiler_init.call_count, 0) | |
272 | ||
273 | @mock.patch("osprofiler.web.profiler.init") | |
274 | def test_wsgi_middleware_enable_via_python(self, mock_profiler_init): | |
275 | request = mock.MagicMock() | |
276 | request.get_response.return_value = "yeah!" | |
277 | request.url = "someurl" | |
278 | request.host_url = "someurl" | |
279 | request.path = "path" | |
280 | request.query_string = "query" | |
281 | request.method = "method" | |
282 | request.scheme = "scheme" | |
283 | hmac_key = "super_secret_key2" | |
284 | ||
285 | pack = utils.signed_pack({"base_id": "1", "parent_id": "2"}, hmac_key) | |
286 | request.headers = { | |
287 | "a": "1", | |
288 | "b": "2", | |
289 | "X-Trace-Info": pack[0], | |
290 | "X-Trace-HMAC": pack[1] | |
291 | } | |
292 | ||
293 | web.enable("super_secret_key1,super_secret_key2") | |
294 | middleware = web.WsgiMiddleware("app", enabled=True) | |
295 | self.assertEqual("yeah!", middleware(request)) | |
296 | mock_profiler_init.assert_called_once_with(hmac_key=hmac_key, | |
297 | base_id="1", | |
298 | parent_id="2") | |
299 | ||
300 | def test_disable(self): | |
301 | web.disable() | |
302 | self.assertFalse(web._ENABLED) | |
303 | ||
304 | def test_enabled(self): | |
305 | web.disable() | |
306 | web.enable() | |
307 | self.assertTrue(web._ENABLED) |
0 | # Copyright 2014 Mirantis Inc. | |
1 | # All Rights Reserved. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
4 | # not use this file except in compliance with the License. You may obtain | |
5 | # a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
12 | # License for the specific language governing permissions and limitations | |
13 | # under the License. | |
14 | ||
15 | import json | |
16 | import os | |
17 | import sys | |
18 | ||
19 | import ddt | |
20 | import mock | |
21 | import six | |
22 | ||
23 | from osprofiler.cmd import shell | |
24 | from osprofiler import exc | |
25 | from osprofiler.tests import test | |
26 | ||
27 | ||
28 | @ddt.ddt | |
29 | class ShellTestCase(test.TestCase): | |
30 | ||
31 | TRACE_ID = "c598094d-bbee-40b6-b317-d76003b679d3" | |
32 | ||
33 | def setUp(self): | |
34 | super(ShellTestCase, self).setUp() | |
35 | self.old_environment = os.environ.copy() | |
36 | os.environ = { | |
37 | "OS_USERNAME": "username", | |
38 | "OS_USER_ID": "user_id", | |
39 | "OS_PASSWORD": "password", | |
40 | "OS_USER_DOMAIN_ID": "user_domain_id", | |
41 | "OS_USER_DOMAIN_NAME": "user_domain_name", | |
42 | "OS_PROJECT_DOMAIN_ID": "project_domain_id", | |
43 | "OS_PROJECT_DOMAIN_NAME": "project_domain_name", | |
44 | "OS_PROJECT_ID": "project_id", | |
45 | "OS_PROJECT_NAME": "project_name", | |
46 | "OS_TENANT_ID": "tenant_id", | |
47 | "OS_TENANT_NAME": "tenant_name", | |
48 | "OS_AUTH_URL": "http://127.0.0.1:5000/v3/", | |
49 | "OS_AUTH_TOKEN": "pass", | |
50 | "OS_CACERT": "/path/to/cacert", | |
51 | "OS_SERVICE_TYPE": "service_type", | |
52 | "OS_ENDPOINT_TYPE": "public", | |
53 | "OS_REGION_NAME": "test" | |
54 | } | |
55 | ||
56 | self.ceiloclient = mock.MagicMock() | |
57 | sys.modules["ceilometerclient"] = self.ceiloclient | |
58 | self.addCleanup(sys.modules.pop, "ceilometerclient", None) | |
59 | ceilo_modules = ["client", "shell"] | |
60 | for module in ceilo_modules: | |
61 | sys.modules["ceilometerclient.%s" % module] = getattr( | |
62 | self.ceiloclient, module) | |
63 | self.addCleanup( | |
64 | sys.modules.pop, "ceilometerclient.%s" % module, None) | |
65 | ||
66 | def tearDown(self): | |
67 | super(ShellTestCase, self).tearDown() | |
68 | os.environ = self.old_environment | |
69 | ||
70 | def _trace_show_cmd(self, format_=None): | |
71 | cmd = "trace show %s" % self.TRACE_ID | |
72 | return cmd if format_ is None else "%s --%s" % (cmd, format_) | |
73 | ||
74 | @mock.patch("sys.stdout", six.StringIO()) | |
75 | @mock.patch("osprofiler.cmd.shell.OSProfilerShell") | |
76 | def test_shell_main(self, mock_shell): | |
77 | mock_shell.side_effect = exc.CommandError("some_message") | |
78 | shell.main() | |
79 | self.assertEqual("some_message\n", sys.stdout.getvalue()) | |
80 | ||
81 | def run_command(self, cmd): | |
82 | shell.OSProfilerShell(cmd.split()) | |
83 | ||
84 | def _test_with_command_error(self, cmd, expected_message): | |
85 | try: | |
86 | self.run_command(cmd) | |
87 | except exc.CommandError as actual_error: | |
88 | self.assertEqual(str(actual_error), expected_message) | |
89 | else: | |
90 | self.fail( | |
91 | "Expected `osprofiler.exc.CommandError` to be raised " | |
92 | "with message: '%s'." % expected_message) | |
93 | ||
94 | def test_username_is_not_present(self): | |
95 | os.environ.pop("OS_USERNAME") | |
96 | msg = ("You must provide a username via either --os-username or " | |
97 | "via env[OS_USERNAME]") | |
98 | self._test_with_command_error(self._trace_show_cmd(), msg) | |
99 | ||
100 | def test_password_is_not_present(self): | |
101 | os.environ.pop("OS_PASSWORD") | |
102 | msg = ("You must provide a password via either --os-password or " | |
103 | "via env[OS_PASSWORD]") | |
104 | self._test_with_command_error(self._trace_show_cmd(), msg) | |
105 | ||
106 | def test_auth_url(self): | |
107 | os.environ.pop("OS_AUTH_URL") | |
108 | msg = ("You must provide an auth url via either --os-auth-url or " | |
109 | "via env[OS_AUTH_URL]") | |
110 | self._test_with_command_error(self._trace_show_cmd(), msg) | |
111 | ||
112 | def test_no_project_and_domain_set(self): | |
113 | os.environ.pop("OS_PROJECT_ID") | |
114 | os.environ.pop("OS_PROJECT_NAME") | |
115 | os.environ.pop("OS_TENANT_ID") | |
116 | os.environ.pop("OS_TENANT_NAME") | |
117 | os.environ.pop("OS_USER_DOMAIN_ID") | |
118 | os.environ.pop("OS_USER_DOMAIN_NAME") | |
119 | ||
120 | msg = ("You must provide a project_id via either --os-project-id or " | |
121 | "via env[OS_PROJECT_ID] and a domain_name via either " | |
122 | "--os-user-domain-name or via env[OS_USER_DOMAIN_NAME] or a " | |
123 | "domain_id via either --os-user-domain-id or via " | |
124 | "env[OS_USER_DOMAIN_ID]") | |
125 | self._test_with_command_error(self._trace_show_cmd(), msg) | |
126 | ||
127 | def test_trace_show_ceilometerclient_is_missing(self): | |
128 | sys.modules["ceilometerclient"] = None | |
129 | sys.modules["ceilometerclient.client"] = None | |
130 | sys.modules["ceilometerclient.shell"] = None | |
131 | ||
132 | msg = ("To use this command, you should install " | |
133 | "'ceilometerclient' manually. Use command:\n " | |
134 | "'pip install python-ceilometerclient'.") | |
135 | self._test_with_command_error(self._trace_show_cmd(), msg) | |
136 | ||
137 | def test_trace_show_unauthorized(self): | |
138 | class FakeHTTPUnauthorized(Exception): | |
139 | http_status = 401 | |
140 | ||
141 | self.ceiloclient.client.get_client.side_effect = FakeHTTPUnauthorized | |
142 | ||
143 | msg = "Invalid OpenStack Identity credentials." | |
144 | self._test_with_command_error(self._trace_show_cmd(), msg) | |
145 | ||
146 | def test_trace_show_unknown_error(self): | |
147 | self.ceiloclient.client.get_client.side_effect = Exception("test") | |
148 | msg = "Error occurred while connecting to Ceilometer: test." | |
149 | self._test_with_command_error(self._trace_show_cmd(), msg) | |
150 | ||
151 | @mock.patch("osprofiler.drivers.ceilometer.Ceilometer.get_report") | |
152 | def test_trace_show_no_selected_format(self, mock_get): | |
153 | mock_get.return_value = self._create_mock_notifications() | |
154 | msg = ("You should choose one of the following output formats: " | |
155 | "json, html or dot.") | |
156 | self._test_with_command_error(self._trace_show_cmd(), msg) | |
157 | ||
158 | @mock.patch("osprofiler.drivers.ceilometer.Ceilometer.get_report") | |
159 | @ddt.data(None, {"info": {"started": 0, "finished": 1, "name": "total"}, | |
160 | "children": []}) | |
161 | def test_trace_show_trace_id_not_found(self, notifications, mock_get): | |
162 | mock_get.return_value = notifications | |
163 | ||
164 | msg = ("Trace with UUID %s not found. Please check the HMAC key " | |
165 | "used in the command." % self.TRACE_ID) | |
166 | ||
167 | self._test_with_command_error(self._trace_show_cmd(), msg) | |
168 | ||
169 | def _create_mock_notifications(self): | |
170 | notifications = { | |
171 | "info": { | |
172 | "started": 0, | |
173 | "finished": 1, | |
174 | "name": "total" | |
175 | }, | |
176 | "children": [{ | |
177 | "info": { | |
178 | "started": 0, | |
179 | "finished": 1, | |
180 | "name": "total" | |
181 | }, | |
182 | "children": [] | |
183 | }] | |
184 | } | |
185 | return notifications | |
186 | ||
187 | @mock.patch("sys.stdout", six.StringIO()) | |
188 | @mock.patch("osprofiler.drivers.ceilometer.Ceilometer.get_report") | |
189 | def test_trace_show_in_json(self, mock_get): | |
190 | notifications = self._create_mock_notifications() | |
191 | mock_get.return_value = notifications | |
192 | ||
193 | self.run_command(self._trace_show_cmd(format_="json")) | |
194 | self.assertEqual("%s\n" % json.dumps(notifications, indent=2, | |
195 | separators=(",", ": "),), | |
196 | sys.stdout.getvalue()) | |
197 | ||
198 | @mock.patch("sys.stdout", six.StringIO()) | |
199 | @mock.patch("osprofiler.drivers.ceilometer.Ceilometer.get_report") | |
200 | def test_trace_show_in_html(self, mock_get): | |
201 | notifications = self._create_mock_notifications() | |
202 | mock_get.return_value = notifications | |
203 | ||
204 | # NOTE(akurilin): to simplify the assert statement, the HTML | |
205 | # template is replaced with a dummy string. | |
206 | html_template = ( | |
207 | "A long time ago in a galaxy far, far away..." | |
208 | " some_data = $DATA" | |
209 | "It is a period of civil war. Rebel" | |
210 | "spaceships, striking from a hidden" | |
211 | "base, have won their first victory" | |
212 | "against the evil Galactic Empire.") | |
213 | ||
214 | with mock.patch("osprofiler.cmd.commands.open", | |
215 | mock.mock_open(read_data=html_template), create=True): | |
216 | self.run_command(self._trace_show_cmd(format_="html")) | |
217 | self.assertEqual("A long time ago in a galaxy far, far away..." | |
218 | " some_data = %s" | |
219 | "It is a period of civil war. Rebel" | |
220 | "spaceships, striking from a hidden" | |
221 | "base, have won their first victory" | |
222 | "against the evil Galactic Empire." | |
223 | "\n" % json.dumps(notifications, indent=4, | |
224 | separators=(",", ": ")), | |
225 | sys.stdout.getvalue()) | |
226 | ||
227 | @mock.patch("sys.stdout", six.StringIO()) | |
228 | @mock.patch("osprofiler.drivers.ceilometer.Ceilometer.get_report") | |
229 | def test_trace_show_write_to_file(self, mock_get): | |
230 | notifications = self._create_mock_notifications() | |
231 | mock_get.return_value = notifications | |
232 | ||
233 | with mock.patch("osprofiler.cmd.commands.open", | |
234 | mock.mock_open(), create=True) as mock_open: | |
235 | self.run_command("%s --out='/file'" % | |
236 | self._trace_show_cmd(format_="json")) | |
237 | ||
238 | output = mock_open.return_value.__enter__.return_value | |
239 | output.write.assert_called_once_with( | |
240 | json.dumps(notifications, indent=2, separators=(",", ": "))) |
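The `--out` handling that `test_trace_show_write_to_file` mocks can be sketched as follows (hypothetical `dump_report` helper, not the actual `osprofiler.cmd.commands` code): the report is serialized once with the same `indent`/`separators` arguments the assertions use, then written either to the named file or to stdout.

```python
import json


def dump_report(report, out=None):
    # Serialize exactly as the tests expect: indent=2 and explicit
    # separators so the output is byte-stable across Python versions.
    text = json.dumps(report, indent=2, separators=(",", ": "))
    if out:
        # Matches the mock_open assertion above: a single write() call.
        with open(out, "w") as f:
            f.write(text)
    else:
        print(text)
    return text
```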
0 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
1 | # not use this file except in compliance with the License. You may obtain | |
2 | # a copy of the License at | |
3 | # | |
4 | # http://www.apache.org/licenses/LICENSE-2.0 | |
5 | # | |
6 | # Unless required by applicable law or agreed to in writing, software | |
7 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
8 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
9 | # License for the specific language governing permissions and limitations | |
10 | # under the License. | |
11 | ||
12 | import glob | |
13 | import os | |
14 | import re | |
15 | ||
16 | import docutils.core | |
17 | ||
18 | from osprofiler.tests import test | |
19 | ||
20 | ||
21 | class TitlesTestCase(test.TestCase): | |
22 | ||
23 | specs_path = os.path.join( | |
24 | os.path.dirname(__file__), | |
25 | os.pardir, os.pardir, os.pardir, os.pardir, | |
26 | "doc", "specs") | |
27 | ||
28 | def _get_title(self, section_tree): | |
29 | section = {"subtitles": []} | |
30 | for node in section_tree: | |
31 | if node.tagname == "title": | |
32 | section["name"] = node.rawsource | |
33 | elif node.tagname == "section": | |
34 | subsection = self._get_title(node) | |
35 | section["subtitles"].append(subsection["name"]) | |
36 | return section | |
37 | ||
38 | def _get_titles(self, spec): | |
39 | titles = {} | |
40 | for node in spec: | |
41 | if node.tagname == "section": | |
42 | # Note subsection subtitles are thrown away | |
43 | section = self._get_title(node) | |
44 | titles[section["name"]] = section["subtitles"] | |
45 | return titles | |
46 | ||
47 | def _check_titles(self, filename, expect, actual): | |
48 | missing_sections = [x for x in expect.keys() if x not in actual.keys()] | |
49 | extra_sections = [x for x in actual.keys() if x not in expect.keys()] | |
50 | ||
51 | msgs = [] | |
52 | if len(missing_sections) > 0: | |
53 | msgs.append("Missing sections: %s" % missing_sections) | |
54 | if len(extra_sections) > 0: | |
55 | msgs.append("Extra sections: %s" % extra_sections) | |
56 | ||
57 | for section in expect.keys(): | |
58 | missing_subsections = [x for x in expect[section] | |
59 | if x not in actual.get(section, {})] | |
60 | # extra subsections are allowed | |
61 | if len(missing_subsections) > 0: | |
62 | msgs.append("Section '%s' is missing subsections: %s" | |
63 | % (section, missing_subsections)) | |
64 | ||
65 | if len(msgs) > 0: | |
66 | self.fail("While checking '%s':\n %s" | |
67 | % (filename, "\n ".join(msgs))) | |
68 | ||
69 | def _check_lines_wrapping(self, tpl, raw): | |
70 | for i, line in enumerate(raw.split("\n")): | |
71 | if "http://" in line or "https://" in line: | |
72 | continue | |
73 | self.assertTrue( | |
74 | len(line) < 80, | |
75 | msg="%s:%d: Line exceeds the maximum of 79 characters." % | |
76 | (tpl, i+1)) | |
77 | ||
78 | def _check_no_cr(self, tpl, raw): | |
79 | matches = re.findall("\r", raw) | |
80 | self.assertEqual( | |
81 | len(matches), 0, | |
82 | "Found %s literal carriage returns in file %s" % | |
83 | (len(matches), tpl)) | |
84 | ||
85 | def _check_trailing_spaces(self, tpl, raw): | |
86 | for i, line in enumerate(raw.split("\n")): | |
87 | trailing_spaces = re.findall(" +$", line) | |
88 | self.assertEqual( | |
89 | len(trailing_spaces), 0, | |
90 | "Found trailing spaces on line %s of %s" % (i+1, tpl)) | |
91 | ||
92 | def test_template(self): | |
93 | with open(os.path.join(self.specs_path, "template.rst")) as f: | |
94 | template = f.read() | |
95 | ||
96 | spec = docutils.core.publish_doctree(template) | |
97 | template_titles = self._get_titles(spec) | |
98 | ||
99 | for d in ["implemented", "in-progress"]: | |
100 | spec_dir = "%s/%s" % (self.specs_path, d) | |
101 | ||
102 | self.assertTrue(os.path.isdir(spec_dir), | |
103 | "%s is not a directory" % spec_dir) | |
104 | for filename in glob.glob(spec_dir + "/*"): | |
105 | if filename.endswith("README.rst"): | |
106 | continue | |
107 | ||
108 | self.assertTrue( | |
109 | filename.endswith(".rst"), | |
110 | "spec's file must have .rst ext. Found: %s" % filename) | |
111 | with open(filename) as f: | |
112 | data = f.read() | |
113 | ||
114 | titles = self._get_titles(docutils.core.publish_doctree(data)) | |
115 | self._check_titles(filename, template_titles, titles) | |
116 | self._check_lines_wrapping(filename, data) | |
117 | self._check_no_cr(filename, data) | |
118 | self._check_trailing_spaces(filename, data) |
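The `_check_lines_wrapping` policy above is reusable outside a test case. A minimal standalone sketch (hypothetical name `check_lines_wrapping`): lines containing URLs are exempt, everything else is capped at 79 characters, and violations are collected rather than failing fast so one run reports every overlong line.

```python
def check_lines_wrapping(name, raw, limit=79):
    # Collect all violations instead of stopping at the first one.
    problems = []
    for i, line in enumerate(raw.split("\n"), 1):
        if "http://" in line or "https://" in line:
            continue  # URLs may legitimately exceed the limit
        if len(line) > limit:
            problems.append("%s:%d: line longer than %d characters"
                            % (name, i, limit))
    return problems
```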
0 | # Copyright 2016 Mirantis Inc. | |
1 | # All Rights Reserved. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
4 | # not use this file except in compliance with the License. You may obtain | |
5 | # a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
12 | # License for the specific language governing permissions and limitations | |
13 | # under the License. | |
14 | ||
15 | import mock | |
16 | ||
17 | from osprofiler.drivers import base | |
18 | from osprofiler.tests import test | |
19 | ||
20 | ||
21 | class NotifierBaseTestCase(test.TestCase): | |
22 | ||
23 | def test_factory(self): | |
24 | ||
25 | class A(base.Driver): | |
26 | @classmethod | |
27 | def get_name(cls): | |
28 | return "a" | |
29 | ||
30 | def notify(self, a): | |
31 | return a | |
32 | ||
33 | self.assertEqual(10, base.get_driver("a://").notify(10)) | |
34 | ||
35 | def test_factory_with_args(self): | |
36 | ||
37 | class B(base.Driver): | |
38 | ||
39 | def __init__(self, c_str, a, b=10): | |
40 | self.a = a | |
41 | self.b = b | |
42 | ||
43 | @classmethod | |
44 | def get_name(cls): | |
45 | return "b" | |
46 | ||
47 | def notify(self, c): | |
48 | return self.a + self.b + c | |
49 | ||
50 | self.assertEqual(22, base.get_driver("b://", 5, b=7).notify(10)) | |
51 | ||
52 | def test_driver_not_found(self): | |
53 | # An unregistered connection-string scheme must raise ValueError. | |
54 | self.assertRaises(ValueError, base.get_driver, | |
55 | "nonexisting://") | |
56 | ||
57 | def test_plugins_are_imported(self): | |
58 | base.get_driver("messaging://", mock.MagicMock(), "context", | |
59 | "transport", "host") | |
60 | ||
61 | def test_build_empty_tree(self): | |
62 | class C(base.Driver): | |
63 | @classmethod | |
64 | def get_name(cls): | |
65 | return "c" | |
66 | ||
67 | self.assertEqual([], base.get_driver("c://")._build_tree({})) | |
68 | ||
69 | def test_build_complex_tree(self): | |
70 | class D(base.Driver): | |
71 | @classmethod | |
72 | def get_name(cls): | |
73 | return "d" | |
74 | ||
75 | test_input = { | |
76 | "2": {"parent_id": "0", "trace_id": "2", "info": {"started": 1}}, | |
77 | "1": {"parent_id": "0", "trace_id": "1", "info": {"started": 0}}, | |
78 | "21": {"parent_id": "2", "trace_id": "21", "info": {"started": 6}}, | |
79 | "22": {"parent_id": "2", "trace_id": "22", "info": {"started": 7}}, | |
80 | "11": {"parent_id": "1", "trace_id": "11", "info": {"started": 1}}, | |
81 | "113": {"parent_id": "11", "trace_id": "113", | |
82 | "info": {"started": 3}}, | |
83 | "112": {"parent_id": "11", "trace_id": "112", | |
84 | "info": {"started": 2}}, | |
85 | "114": {"parent_id": "11", "trace_id": "114", | |
86 | "info": {"started": 5}} | |
87 | } | |
88 | ||
89 | expected_output = [ | |
90 | { | |
91 | "parent_id": "0", | |
92 | "trace_id": "1", | |
93 | "info": {"started": 0}, | |
94 | "children": [ | |
95 | { | |
96 | "parent_id": "1", | |
97 | "trace_id": "11", | |
98 | "info": {"started": 1}, | |
99 | "children": [ | |
100 | {"parent_id": "11", "trace_id": "112", | |
101 | "info": {"started": 2}, "children": []}, | |
102 | {"parent_id": "11", "trace_id": "113", | |
103 | "info": {"started": 3}, "children": []}, | |
104 | {"parent_id": "11", "trace_id": "114", | |
105 | "info": {"started": 5}, "children": []} | |
106 | ] | |
107 | } | |
108 | ] | |
109 | }, | |
110 | { | |
111 | "parent_id": "0", | |
112 | "trace_id": "2", | |
113 | "info": {"started": 1}, | |
114 | "children": [ | |
115 | {"parent_id": "2", "trace_id": "21", | |
116 | "info": {"started": 6}, "children": []}, | |
117 | {"parent_id": "2", "trace_id": "22", | |
118 | "info": {"started": 7}, "children": []} | |
119 | ] | |
120 | } | |
121 | ] | |
122 | ||
123 | self.assertEqual( | |
124 | expected_output, base.get_driver("d://")._build_tree(test_input)) |
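Both `_build_tree` tests (here and in the Ceilometer parser tests below) pin down the same behaviour: nodes are grouped as children of their parent, siblings are ordered by their `started` timestamp, and nodes whose `parent_id` is absent from the input become roots. A sketch under those assumptions — a hypothetical `build_tree` function, not the actual driver code:

```python
def build_tree(traces):
    """Build a forest from a flat {trace_id: trace} mapping."""
    for trace in traces.values():
        trace.setdefault("children", [])
    roots = []
    # Visiting nodes in "started" order means each child is appended to
    # its parent's list already sorted, and roots come out sorted too.
    for trace_id in sorted(traces,
                           key=lambda i: traces[i]["info"]["started"]):
        trace = traces[trace_id]
        parent_id = trace["parent_id"]
        if parent_id in traces:
            traces[parent_id]["children"].append(trace)
        else:
            roots.append(trace)
    return roots
```

Note this mutates the input dicts in place (adding a `children` key), which is fine for the one-shot report-building use these tests cover.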
0 | # Copyright 2016 Mirantis Inc. | |
1 | # All Rights Reserved. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
4 | # not use this file except in compliance with the License. You may obtain | |
5 | # a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
12 | # License for the specific language governing permissions and limitations | |
13 | # under the License. | |
14 | ||
15 | import mock | |
16 | ||
17 | from osprofiler.drivers.ceilometer import Ceilometer | |
18 | from osprofiler.tests import test | |
19 | ||
20 | ||
21 | class CeilometerParserTestCase(test.TestCase): | |
22 | def setUp(self): | |
23 | super(CeilometerParserTestCase, self).setUp() | |
24 | self.ceilometer = Ceilometer("ceilometer://", | |
25 | ceilometer_api_version="2") | |
26 | ||
27 | def test_build_empty_tree(self): | |
28 | self.assertEqual([], self.ceilometer._build_tree({})) | |
29 | ||
30 | def test_build_complex_tree(self): | |
31 | test_input = { | |
32 | "2": {"parent_id": "0", "trace_id": "2", "info": {"started": 1}}, | |
33 | "1": {"parent_id": "0", "trace_id": "1", "info": {"started": 0}}, | |
34 | "21": {"parent_id": "2", "trace_id": "21", "info": {"started": 6}}, | |
35 | "22": {"parent_id": "2", "trace_id": "22", "info": {"started": 7}}, | |
36 | "11": {"parent_id": "1", "trace_id": "11", "info": {"started": 1}}, | |
37 | "113": {"parent_id": "11", "trace_id": "113", | |
38 | "info": {"started": 3}}, | |
39 | "112": {"parent_id": "11", "trace_id": "112", | |
40 | "info": {"started": 2}}, | |
41 | "114": {"parent_id": "11", "trace_id": "114", | |
42 | "info": {"started": 5}} | |
43 | } | |
44 | ||
45 | expected_output = [ | |
46 | { | |
47 | "parent_id": "0", | |
48 | "trace_id": "1", | |
49 | "info": {"started": 0}, | |
50 | "children": [ | |
51 | { | |
52 | "parent_id": "1", | |
53 | "trace_id": "11", | |
54 | "info": {"started": 1}, | |
55 | "children": [ | |
56 | {"parent_id": "11", "trace_id": "112", | |
57 | "info": {"started": 2}, "children": []}, | |
58 | {"parent_id": "11", "trace_id": "113", | |
59 | "info": {"started": 3}, "children": []}, | |
60 | {"parent_id": "11", "trace_id": "114", | |
61 | "info": {"started": 5}, "children": []} | |
62 | ] | |
63 | } | |
64 | ] | |
65 | }, | |
66 | { | |
67 | "parent_id": "0", | |
68 | "trace_id": "2", | |
69 | "info": {"started": 1}, | |
70 | "children": [ | |
71 | {"parent_id": "2", "trace_id": "21", | |
72 | "info": {"started": 6}, "children": []}, | |
73 | {"parent_id": "2", "trace_id": "22", | |
74 | "info": {"started": 7}, "children": []} | |
75 | ] | |
76 | } | |
77 | ] | |
78 | ||
79 | result = self.ceilometer._build_tree(test_input) | |
80 | self.assertEqual(expected_output, result) | |
81 | ||
82 | def test_get_report_empty(self): | |
83 | self.ceilometer.client = mock.MagicMock() | |
84 | self.ceilometer.client.events.list.return_value = [] | |
85 | ||
86 | expected = { | |
87 | "info": { | |
88 | "name": "total", | |
89 | "started": 0, | |
90 | "finished": None, | |
91 | "last_trace_started": None | |
92 | }, | |
93 | "children": [], | |
94 | "stats": {}, | |
95 | } | |
96 | ||
97 | base_id = "10" | |
98 | self.assertEqual(expected, self.ceilometer.get_report(base_id)) | |
99 | ||
100 | def test_get_report(self): | |
101 | self.ceilometer.client = mock.MagicMock() | |
102 | results = [mock.MagicMock(), mock.MagicMock(), mock.MagicMock(), | |
103 | mock.MagicMock(), mock.MagicMock()] | |
104 | ||
105 | self.ceilometer.client.events.list.return_value = results | |
106 | results[0].to_dict.return_value = { | |
107 | "traits": [ | |
108 | { | |
109 | "type": "string", | |
110 | "name": "base_id", | |
111 | "value": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4" | |
112 | }, | |
113 | { | |
114 | "type": "string", | |
115 | "name": "host", | |
116 | "value": "ubuntu" | |
117 | }, | |
118 | { | |
119 | "type": "string", | |
120 | "name": "method", | |
121 | "value": "POST" | |
122 | }, | |
123 | { | |
124 | "type": "string", | |
125 | "name": "name", | |
126 | "value": "wsgi-start" | |
127 | }, | |
128 | { | |
129 | "type": "string", | |
130 | "name": "parent_id", | |
131 | "value": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4" | |
132 | }, | |
133 | { | |
134 | "type": "string", | |
135 | "name": "project", | |
136 | "value": "keystone" | |
137 | }, | |
138 | { | |
139 | "type": "string", | |
140 | "name": "service", | |
141 | "value": "main" | |
142 | }, | |
143 | { | |
144 | "type": "string", | |
145 | "name": "timestamp", | |
146 | "value": "2015-12-23T14:02:22.338776" | |
147 | }, | |
148 | { | |
149 | "type": "string", | |
150 | "name": "trace_id", | |
151 | "value": "06320327-2c2c-45ae-923a-515de890276a" | |
152 | } | |
153 | ], | |
154 | "raw": {}, | |
155 | "generated": "2015-12-23T10:41:38.415793", | |
156 | "event_type": "profiler.main", | |
157 | "message_id": "65fc1553-3082-4a6f-9d1e-0e3183f57a47"} | |
158 | ||
159 | results[1].to_dict.return_value = { | |
160 | "traits": | |
161 | [ | |
162 | { | |
163 | "type": "string", | |
164 | "name": "base_id", | |
165 | "value": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4" | |
166 | }, | |
167 | { | |
168 | "type": "string", | |
169 | "name": "host", | |
170 | "value": "ubuntu" | |
171 | }, | |
172 | { | |
173 | "type": "string", | |
174 | "name": "name", | |
175 | "value": "wsgi-stop" | |
176 | }, | |
177 | { | |
178 | "type": "string", | |
179 | "name": "parent_id", | |
180 | "value": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4" | |
181 | }, | |
182 | { | |
183 | "type": "string", | |
184 | "name": "project", | |
185 | "value": "keystone" | |
186 | }, | |
187 | { | |
188 | "type": "string", | |
189 | "name": "service", | |
190 | "value": "main" | |
191 | }, | |
192 | { | |
193 | "type": "string", | |
194 | "name": "timestamp", | |
195 | "value": "2015-12-23T14:02:22.380405" | |
196 | }, | |
197 | { | |
198 | "type": "string", | |
199 | "name": "trace_id", | |
200 | "value": "016c97fd-87f3-40b2-9b55-e431156b694b" | |
201 | } | |
202 | ], | |
203 | "raw": {}, | |
204 | "generated": "2015-12-23T10:41:38.406052", | |
205 | "event_type": "profiler.main", | |
206 | "message_id": "3256d9f1-48ba-4ac5-a50b-64fa42c6e264"} | |
207 | ||
208 | results[2].to_dict.return_value = { | |
209 | "traits": | |
210 | [ | |
211 | { | |
212 | "type": "string", | |
213 | "name": "base_id", | |
214 | "value": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4" | |
215 | }, | |
216 | { | |
217 | "type": "string", | |
218 | "name": "db.params", | |
219 | "value": "[]" | |
220 | }, | |
221 | { | |
222 | "type": "string", | |
223 | "name": "db.statement", | |
224 | "value": "SELECT 1" | |
225 | }, | |
226 | { | |
227 | "type": "string", | |
228 | "name": "host", | |
229 | "value": "ubuntu" | |
230 | }, | |
231 | { | |
232 | "type": "string", | |
233 | "name": "name", | |
234 | "value": "db-start" | |
235 | }, | |
236 | { | |
237 | "type": "string", | |
238 | "name": "parent_id", | |
239 | "value": "06320327-2c2c-45ae-923a-515de890276a" | |
240 | }, | |
241 | { | |
242 | "type": "string", | |
243 | "name": "project", | |
244 | "value": "keystone" | |
245 | }, | |
246 | { | |
247 | "type": "string", | |
248 | "name": "service", | |
249 | "value": "main" | |
250 | }, | |
251 | { | |
252 | "type": "string", | |
253 | "name": "timestamp", | |
254 | "value": "2015-12-23T14:02:22.395365" | |
255 | }, | |
256 | { | |
257 | "type": "string", | |
258 | "name": "trace_id", | |
259 | "value": "1baf1d24-9ca9-4f4c-bd3f-01b7e0c0735a" | |
260 | } | |
261 | ], | |
262 | "raw": {}, | |
263 | "generated": "2015-12-23T10:41:38.984161", | |
264 | "event_type": "profiler.main", | |
265 | "message_id": "60368aa4-16f0-4f37-a8fb-89e92fdf36ff"} | |
266 | ||
267 | results[3].to_dict.return_value = { | |
268 | "traits": | |
269 | [ | |
270 | { | |
271 | "type": "string", | |
272 | "name": "base_id", | |
273 | "value": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4" | |
274 | }, | |
275 | { | |
276 | "type": "string", | |
277 | "name": "host", | |
278 | "value": "ubuntu" | |
279 | }, | |
280 | { | |
281 | "type": "string", | |
282 | "name": "name", | |
283 | "value": "db-stop" | |
284 | }, | |
285 | { | |
286 | "type": "string", | |
287 | "name": "parent_id", | |
288 | "value": "06320327-2c2c-45ae-923a-515de890276a" | |
289 | }, | |
290 | { | |
291 | "type": "string", | |
292 | "name": "project", | |
293 | "value": "keystone" | |
294 | }, | |
295 | { | |
296 | "type": "string", | |
297 | "name": "service", | |
298 | "value": "main" | |
299 | }, | |
300 | { | |
301 | "type": "string", | |
302 | "name": "timestamp", | |
303 | "value": "2015-12-23T14:02:22.415486" | |
304 | }, | |
305 | { | |
306 | "type": "string", | |
307 | "name": "trace_id", | |
308 | "value": "1baf1d24-9ca9-4f4c-bd3f-01b7e0c0735a" | |
309 | } | |
310 | ], | |
311 | "raw": {}, | |
312 | "generated": "2015-12-23T10:41:39.019378", | |
313 | "event_type": "profiler.main", | |
314 | "message_id": "3fbeb339-55c5-4f28-88e4-15bee251dd3d"} | |
315 | ||
316 | results[4].to_dict.return_value = { | |
317 | "traits": | |
318 | [ | |
319 | { | |
320 | "type": "string", | |
321 | "name": "base_id", | |
322 | "value": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4" | |
323 | }, | |
324 | { | |
325 | "type": "string", | |
326 | "name": "host", | |
327 | "value": "ubuntu" | |
328 | }, | |
329 | { | |
330 | "type": "string", | |
331 | "name": "method", | |
332 | "value": "GET" | |
333 | }, | |
334 | { | |
335 | "type": "string", | |
336 | "name": "name", | |
337 | "value": "wsgi-start" | |
338 | }, | |
339 | { | |
340 | "type": "string", | |
341 | "name": "parent_id", | |
342 | "value": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4" | |
343 | }, | |
344 | { | |
345 | "type": "string", | |
346 | "name": "project", | |
347 | "value": "keystone" | |
348 | }, | |
349 | { | |
350 | "type": "string", | |
351 | "name": "service", | |
352 | "value": "main" | |
353 | }, | |
354 | { | |
355 | "type": "string", | |
356 | "name": "timestamp", | |
357 | "value": "2015-12-23T14:02:22.427444" | |
358 | }, | |
359 | { | |
360 | "type": "string", | |
361 | "name": "trace_id", | |
362 | "value": "016c97fd-87f3-40b2-9b55-e431156b694b" | |
363 | } | |
364 | ], | |
365 | "raw": {}, | |
366 | "generated": "2015-12-23T10:41:38.360409", | |
367 | "event_type": "profiler.main", | |
368 | "message_id": "57b971a9-572f-4f29-9838-3ed2564c6b5b"} | |
369 | ||
370 | expected = {"children": [ | |
371 | {"children": [{"children": [], | |
372 | "info": {"finished": 76, | |
373 | "host": "ubuntu", | |
374 | "meta.raw_payload.db-start": {}, | |
375 | "meta.raw_payload.db-stop": {}, | |
376 | "name": "db", | |
377 | "project": "keystone", | |
378 | "service": "main", | |
379 | "started": 56, | |
380 | "exception": "None"}, | |
381 | "parent_id": "06320327-2c2c-45ae-923a-515de890276a", | |
382 | "trace_id": "1baf1d24-9ca9-4f4c-bd3f-01b7e0c0735a"} | |
383 | ], | |
384 | "info": {"finished": 0, | |
385 | "host": "ubuntu", | |
386 | "meta.raw_payload.wsgi-start": {}, | |
387 | "name": "wsgi", | |
388 | "project": "keystone", | |
389 | "service": "main", | |
390 | "started": 0}, | |
391 | "parent_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4", | |
392 | "trace_id": "06320327-2c2c-45ae-923a-515de890276a"}, | |
393 | {"children": [], | |
394 | "info": {"finished": 41, | |
395 | "host": "ubuntu", | |
396 | "meta.raw_payload.wsgi-start": {}, | |
397 | "meta.raw_payload.wsgi-stop": {}, | |
398 | "name": "wsgi", | |
399 | "project": "keystone", | |
400 | "service": "main", | |
401 | "started": 88, | |
402 | "exception": "None"}, | |
403 | "parent_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4", | |
404 | "trace_id": "016c97fd-87f3-40b2-9b55-e431156b694b"}], | |
405 | "info": { | |
406 | "finished": 88, | |
407 | "name": "total", | |
408 | "started": 0, | |
409 | "last_trace_started": 88 | |
410 | }, | |
411 | "stats": {"db": {"count": 1, "duration": 20}, | |
412 | "wsgi": {"count": 2, "duration": -47}}, | |
413 | } | |
414 | ||
415 | base_id = "10" | |
416 | ||
417 | result = self.ceilometer.get_report(base_id) | |
418 | ||
419 | expected_filter = [{"field": "base_id", "op": "eq", "value": base_id}] | |
420 | self.ceilometer.client.events.list.assert_called_once_with( | |
421 | expected_filter, limit=100000) | |
422 | self.assertEqual(expected, result) |
0 | # Copyright 2016 Mirantis Inc. | |
1 | # All Rights Reserved. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
4 | # not use this file except in compliance with the License. You may obtain | |
5 | # a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
12 | # License for the specific language governing permissions and limitations | |
13 | # under the License. | |
14 | ||
15 | import mock | |
16 | ||
17 | from osprofiler.drivers.elasticsearch_driver import ElasticsearchDriver | |
18 | from osprofiler.tests import test | |
19 | ||
20 | ||
21 | class ElasticsearchTestCase(test.TestCase): | |
22 | ||
23 | def setUp(self): | |
24 | super(ElasticsearchTestCase, self).setUp() | |
25 | self.elasticsearch = ElasticsearchDriver("elasticsearch://localhost") | |
26 | self.elasticsearch.project = "project" | |
27 | self.elasticsearch.service = "service" | |
28 | ||
29 | def test_init_and_notify(self): | |
30 | self.elasticsearch.client = mock.MagicMock() | |
31 | self.elasticsearch.client.reset_mock() | |
32 | project = "project" | |
33 | service = "service" | |
34 | host = "host" | |
35 | ||
36 | info = { | |
37 | "a": 10, | |
38 | "project": project, | |
39 | "service": service, | |
40 | "host": host | |
41 | } | |
42 | self.elasticsearch.notify(info) | |
43 | ||
44 | self.elasticsearch.client\ | |
45 | .index.assert_called_once_with(index="osprofiler-notifications", | |
46 | doc_type="notification", | |
47 | body=info) | |
48 | ||
49 | def test_get_empty_report(self): | |
50 | self.elasticsearch.client = mock.MagicMock() | |
51 | self.elasticsearch.client.search = mock\ | |
52 | .MagicMock(return_value={"_scroll_id": "1", "hits": {"hits": []}}) | |
53 | self.elasticsearch.client.reset_mock() | |
54 | ||
55 | get_report = self.elasticsearch.get_report | |
56 | base_id = "abacaba" | |
57 | ||
58 | get_report(base_id) | |
59 | ||
60 | self.elasticsearch.client\ | |
61 | .search.assert_called_once_with(index="osprofiler-notifications", | |
62 | doc_type="notification", | |
63 | size=10000, | |
64 | scroll="2m", | |
65 | body={"query": { | |
66 | "match": {"base_id": base_id}} | |
67 | }) | |
68 | ||
69 | def test_get_non_empty_report(self): | |
70 | base_id = "1" | |
71 | elasticsearch_first_response = { | |
72 | "_scroll_id": "1", | |
73 | "hits": { | |
74 | "hits": [ | |
75 | { | |
76 | "_source": { | |
77 | "timestamp": "2016-08-10T16:58:03.064438", | |
78 | "base_id": base_id, | |
79 | "project": "project", | |
80 | "service": "service", | |
81 | "parent_id": "0", | |
82 | "name": "test", | |
83 | "info": { | |
84 | "host": "host" | |
85 | }, | |
86 | "trace_id": "1" | |
87 | } | |
88 | } | |
89 | ]}} | |
90 | elasticsearch_second_response = { | |
91 | "_scroll_id": base_id, | |
92 | "hits": {"hits": []}} | |
93 | self.elasticsearch.client = mock.MagicMock() | |
94 | self.elasticsearch.client.search = \ | |
95 | mock.MagicMock(return_value=elasticsearch_first_response) | |
96 | self.elasticsearch.client.scroll = \ | |
97 | mock.MagicMock(return_value=elasticsearch_second_response) | |
98 | ||
99 | self.elasticsearch.client.reset_mock() | |
100 | ||
101 | self.elasticsearch.get_report(base_id) | |
102 | ||
103 | self.elasticsearch.client\ | |
104 | .search.assert_called_once_with(index="osprofiler-notifications", | |
105 | doc_type="notification", | |
106 | size=10000, | |
107 | scroll="2m", | |
108 | body={"query": { | |
109 | "match": {"base_id": base_id}} | |
110 | }) | |
111 | ||
112 | self.elasticsearch.client\ | |
113 | .scroll.assert_called_once_with(scroll_id=base_id, scroll="2m") |
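# The search-then-scroll sequence asserted above is the standard
# Elasticsearch pagination pattern. A minimal illustrative sketch (not the
# driver's actual implementation; it assumes a client exposing the
# ``search``/``scroll`` methods mocked in these tests):
def fetch_all_notifications(client, base_id):
    """Collect every notification for a trace via scroll pagination."""
    resp = client.search(index="osprofiler-notifications",
                         doc_type="notification",
                         size=10000, scroll="2m",
                         body={"query": {"match": {"base_id": base_id}}})
    hits = []
    while resp["hits"]["hits"]:
        hits.extend(hit["_source"] for hit in resp["hits"]["hits"])
        # Each scroll call returns the next page until "hits" comes back empty.
        resp = client.scroll(scroll_id=resp["_scroll_id"], scroll="2m")
    return hits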
0 | # Copyright (c) 2016 VMware, Inc. | |
1 | # All Rights Reserved. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
4 | # not use this file except in compliance with the License. You may obtain | |
5 | # a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
12 | # License for the specific language governing permissions and limitations | |
13 | # under the License. | |
14 | ||
15 | import json | |
16 | ||
17 | import ddt | |
18 | import mock | |
19 | ||
20 | from osprofiler.drivers import loginsight | |
21 | from osprofiler import exc | |
22 | from osprofiler.tests import test | |
23 | ||
24 | ||
25 | @ddt.ddt | |
26 | class LogInsightDriverTestCase(test.TestCase): | |
27 | ||
28 | BASE_ID = "8d28af1e-acc0-498c-9890-6908e33eff5f" | |
29 | ||
30 | def setUp(self): | |
31 | super(LogInsightDriverTestCase, self).setUp() | |
32 | self._client = mock.Mock(spec=loginsight.LogInsightClient) | |
33 | self._project = "cinder" | |
34 | self._service = "osapi_volume" | |
35 | self._host = "ubuntu" | |
36 | with mock.patch.object(loginsight, "LogInsightClient", | |
37 | return_value=self._client): | |
38 | self._driver = loginsight.LogInsightDriver( | |
39 | "loginsight://username:password@host", | |
40 | project=self._project, | |
41 | service=self._service, | |
42 | host=self._host) | |
43 | ||
44 | @mock.patch.object(loginsight, "LogInsightClient") | |
45 | def test_init(self, client_class): | |
46 | client = mock.Mock() | |
47 | client_class.return_value = client | |
48 | ||
49 | loginsight.LogInsightDriver("loginsight://username:password@host") | |
50 | client_class.assert_called_once_with("host", "username", "password") | |
51 | client.login.assert_called_once_with() | |
52 | ||
53 | @ddt.data("loginsight://username@host", | |
54 | "loginsight://username:p@ssword@host", | |
55 | "loginsight://us:rname:password@host") | |
56 | def test_init_with_invalid_connection_string(self, conn_str): | |
57 | self.assertRaises(ValueError, loginsight.LogInsightDriver, conn_str) | |
58 | ||
59 | @mock.patch.object(loginsight, "LogInsightClient") | |
60 | def test_init_with_special_chars_in_conn_str(self, client_class): | |
61 | client = mock.Mock() | |
62 | client_class.return_value = client | |
63 | ||
64 | loginsight.LogInsightDriver("loginsight://username:p%40ssword@host") | |
65 | client_class.assert_called_once_with("host", "username", "p@ssword") | |
66 | client.login.assert_called_once_with() | |
67 | ||
68 | def test_get_name(self): | |
69 | self.assertEqual("loginsight", self._driver.get_name()) | |
70 | ||
71 | def _create_trace(self, | |
72 | name, | |
73 | timestamp, | |
74 | parent_id="8d28af1e-acc0-498c-9890-6908e33eff5f", | |
75 | base_id=BASE_ID, | |
76 | trace_id="e465db5c-9672-45a1-b90b-da918f30aef6"): | |
77 | return {"parent_id": parent_id, | |
78 | "name": name, | |
79 | "base_id": base_id, | |
80 | "trace_id": trace_id, | |
81 | "timestamp": timestamp, | |
82 | "info": {"host": self._host}} | |
83 | ||
84 | def _create_start_trace(self): | |
85 | return self._create_trace("wsgi-start", "2016-10-04T11:50:21.902303") | |
86 | ||
87 | def _create_stop_trace(self): | |
88 | return self._create_trace("wsgi-stop", "2016-10-04T11:50:30.123456") | |
89 | ||
90 | @mock.patch("json.dumps") | |
91 | def test_notify(self, dumps): | |
92 | json_str = mock.sentinel.json_str | |
93 | dumps.return_value = json_str | |
94 | ||
95 | trace = self._create_stop_trace() | |
96 | self._driver.notify(trace) | |
97 | ||
98 | trace["project"] = self._project | |
99 | trace["service"] = self._service | |
100 | exp_event = {"text": "OSProfiler trace", | |
101 | "fields": [{"name": "base_id", | |
102 | "content": trace["base_id"]}, | |
103 | {"name": "trace_id", | |
104 | "content": trace["trace_id"]}, | |
105 | {"name": "project", | |
106 | "content": trace["project"]}, | |
107 | {"name": "service", | |
108 | "content": trace["service"]}, | |
109 | {"name": "name", | |
110 | "content": trace["name"]}, | |
111 | {"name": "trace", | |
112 | "content": json_str}] | |
113 | } | |
114 | self._client.send_event.assert_called_once_with(exp_event) | |
115 | ||
116 | @mock.patch.object(loginsight.LogInsightDriver, "_append_results") | |
117 | @mock.patch.object(loginsight.LogInsightDriver, "_parse_results") | |
118 | def test_get_report(self, parse_results, append_results): | |
119 | start_trace = self._create_start_trace() | |
120 | start_trace["project"] = self._project | |
121 | start_trace["service"] = self._service | |
122 | ||
123 | stop_trace = self._create_stop_trace() | |
124 | stop_trace["project"] = self._project | |
125 | stop_trace["service"] = self._service | |
126 | ||
127 | resp = {"events": [{"text": "OSProfiler trace", | |
128 | "fields": [{"name": "trace", | |
129 | "content": json.dumps(start_trace) | |
130 | } | |
131 | ] | |
132 | }, | |
133 | {"text": "OSProfiler trace", | |
134 | "fields": [{"name": "trace", | |
135 | "content": json.dumps(stop_trace) | |
136 | } | |
137 | ] | |
138 | } | |
139 | ] | |
140 | } | |
141 | self._client.query_events = mock.Mock(return_value=resp) | |
142 | ||
143 | self._driver.get_report(self.BASE_ID) | |
144 | self._client.query_events.assert_called_once_with({"base_id": | |
145 | self.BASE_ID}) | |
146 | append_results.assert_has_calls( | |
147 | [mock.call(start_trace["trace_id"], start_trace["parent_id"], | |
148 | start_trace["name"], start_trace["project"], | |
149 | start_trace["service"], start_trace["info"]["host"], | |
150 | start_trace["timestamp"], start_trace), | |
151 | mock.call(stop_trace["trace_id"], stop_trace["parent_id"], | |
152 | stop_trace["name"], stop_trace["project"], | |
153 | stop_trace["service"], stop_trace["info"]["host"], | |
154 | stop_trace["timestamp"], stop_trace) | |
155 | ]) | |
156 | parse_results.assert_called_once_with() | |
157 | ||
158 | ||
159 | class LogInsightClientTestCase(test.TestCase): | |
160 | ||
161 | def setUp(self): | |
162 | super(LogInsightClientTestCase, self).setUp() | |
163 | self._host = "localhost" | |
164 | self._username = "username" | |
165 | self._password = "password" | |
166 | self._client = loginsight.LogInsightClient( | |
167 | self._host, self._username, self._password) | |
168 | self._client._session_id = "4ff800d1-3175-4b49-9209-39714ea56416" | |
169 | ||
170 | def test_check_response_login_timeout(self): | |
171 | resp = mock.Mock(status_code=440) | |
172 | self.assertRaises( | |
173 | exc.LogInsightLoginTimeout, self._client._check_response, resp) | |
174 | ||
175 | def test_check_response_api_error(self): | |
176 | resp = mock.Mock(status_code=401, ok=False) | |
177 | resp.text = json.dumps( | |
178 | {"errorMessage": "Invalid username or password.", | |
179 | "errorCode": "FIELD_ERROR"}) | |
180 | e = self.assertRaises( | |
181 | exc.LogInsightAPIError, self._client._check_response, resp) | |
182 | self.assertEqual("Invalid username or password.", str(e)) | |
183 | ||
184 | @mock.patch("requests.Request") | |
185 | @mock.patch("json.dumps") | |
186 | @mock.patch.object(loginsight.LogInsightClient, "_check_response") | |
187 | def test_send_request(self, check_resp, json_dumps, request_class): | |
188 | req = mock.Mock() | |
189 | request_class.return_value = req | |
190 | prep_req = mock.sentinel.prep_req | |
191 | req.prepare = mock.Mock(return_value=prep_req) | |
192 | ||
193 | data = mock.sentinel.data | |
194 | json_dumps.return_value = data | |
195 | ||
196 | self._client._session = mock.Mock() | |
197 | resp = mock.Mock() | |
198 | self._client._session.send = mock.Mock(return_value=resp) | |
199 | resp_json = mock.sentinel.resp_json | |
200 | resp.json = mock.Mock(return_value=resp_json) | |
201 | ||
202 | header = {"X-LI-Session-Id": "foo"} | |
203 | body = mock.sentinel.body | |
204 | params = mock.sentinel.params | |
205 | ret = self._client._send_request( | |
206 | "get", "https", "api/v1/events", header, body, params) | |
207 | ||
208 | self.assertEqual(resp_json, ret) | |
209 | exp_headers = {"X-LI-Session-Id": "foo", | |
210 | "content-type": "application/json"} | |
211 | request_class.assert_called_once_with( | |
212 | "get", "https://localhost:9543/api/v1/events", headers=exp_headers, | |
213 | data=data, params=mock.sentinel.params) | |
214 | self._client._session.send.assert_called_once_with(prep_req, | |
215 | verify=False) | |
216 | check_resp.assert_called_once_with(resp) | |
217 | ||
218 | @mock.patch.object(loginsight.LogInsightClient, "_send_request") | |
219 | def test_is_current_session_active_with_active_session(self, send_request): | |
220 | self.assertTrue(self._client._is_current_session_active()) | |
221 | exp_header = {"X-LI-Session-Id": self._client._session_id} | |
222 | send_request.assert_called_once_with( | |
223 | "get", "https", "api/v1/sessions/current", headers=exp_header) | |
224 | ||
225 | @mock.patch.object(loginsight.LogInsightClient, "_send_request") | |
226 | def test_is_current_session_active_with_expired_session(self, | |
227 | send_request): | |
228 | send_request.side_effect = exc.LogInsightLoginTimeout | |
229 | ||
230 | self.assertFalse(self._client._is_current_session_active()) | |
231 | send_request.assert_called_once_with( | |
232 | "get", "https", "api/v1/sessions/current", | |
233 | headers={"X-LI-Session-Id": self._client._session_id}) | |
234 | ||
235 | @mock.patch.object(loginsight.LogInsightClient, | |
236 | "_is_current_session_active", return_value=True) | |
237 | @mock.patch.object(loginsight.LogInsightClient, "_send_request") | |
238 | def test_login_with_current_session_active(self, send_request, | |
239 | is_current_session_active): | |
240 | self._client.login() | |
241 | is_current_session_active.assert_called_once_with() | |
242 | send_request.assert_not_called() | |
243 | ||
244 | @mock.patch.object(loginsight.LogInsightClient, | |
245 | "_is_current_session_active", return_value=False) | |
246 | @mock.patch.object(loginsight.LogInsightClient, "_send_request") | |
247 | def test_login(self, send_request, is_current_session_active): | |
248 | new_session_id = "569a80aa-be5c-49e5-82c1-bb62392d2667" | |
249 | resp = {"sessionId": new_session_id} | |
250 | send_request.return_value = resp | |
251 | ||
252 | self._client.login() | |
253 | is_current_session_active.assert_called_once_with() | |
254 | exp_body = {"username": self._username, "password": self._password} | |
255 | send_request.assert_called_once_with( | |
256 | "post", "https", "api/v1/sessions", body=exp_body) | |
257 | self.assertEqual(new_session_id, self._client._session_id) | |
258 | ||
259 | @mock.patch.object(loginsight.LogInsightClient, "_send_request") | |
260 | def test_send_event(self, send_request): | |
261 | event = mock.sentinel.event | |
262 | self._client.send_event(event) | |
263 | ||
264 | exp_body = {"events": [event]} | |
265 | exp_path = ("api/v1/events/ingest/%s" % | |
266 | self._client.LI_OSPROFILER_AGENT_ID) | |
267 | send_request.assert_called_once_with( | |
268 | "post", "http", exp_path, body=exp_body) | |
269 | ||
270 | @mock.patch.object(loginsight.LogInsightClient, "_send_request") | |
271 | def test_query_events(self, send_request): | |
272 | resp = mock.sentinel.response | |
273 | send_request.return_value = resp | |
274 | ||
275 | self.assertEqual(resp, self._client.query_events({"foo": "bar"})) | |
276 | exp_header = {"X-LI-Session-Id": self._client._session_id} | |
277 | exp_params = {"limit": 20000, "timeout": self._client._query_timeout} | |
278 | send_request.assert_called_once_with( | |
279 | "get", "https", "api/v1/events/foo/CONTAINS+bar/timestamp/GT+0", | |
280 | headers=exp_header, params=exp_params) | |
281 | ||
282 | @mock.patch.object(loginsight.LogInsightClient, "_send_request") | |
283 | @mock.patch.object(loginsight.LogInsightClient, "login") | |
284 | def test_query_events_with_session_expiry(self, login, send_request): | |
285 | resp = mock.sentinel.response | |
286 | send_request.side_effect = [exc.LogInsightLoginTimeout, resp] | |
287 | ||
288 | self.assertEqual(resp, self._client.query_events({"foo": "bar"})) | |
289 | login.assert_called_once_with() | |
290 | exp_header = {"X-LI-Session-Id": self._client._session_id} | |
291 | exp_params = {"limit": 20000, "timeout": self._client._query_timeout} | |
292 | exp_send_request_call = mock.call( | |
293 | "get", "https", "api/v1/events/foo/CONTAINS+bar/timestamp/GT+0", | |
294 | headers=exp_header, params=exp_params) | |
295 | send_request.assert_has_calls([exp_send_request_call]*2) |
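# The expiry handling verified by test_query_events_with_session_expiry can
# be sketched as a retry-once-after-relogin pattern (illustrative only; the
# exception name and call shapes mirror the tests, not the real driver):
class SessionTimeout(Exception):
    pass

def query_with_relogin(login, send_request, query_path):
    """Issue a query, re-authenticating once if the session has expired."""
    try:
        return send_request("get", "https", query_path)
    except SessionTimeout:
        login()  # refresh the expired session id, then retry the same call
        return send_request("get", "https", query_path)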
0 | # Copyright 2016 Mirantis Inc. | |
1 | # All Rights Reserved. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
4 | # not use this file except in compliance with the License. You may obtain | |
5 | # a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
12 | # License for the specific language governing permissions and limitations | |
13 | # under the License. | |
14 | ||
15 | import mock | |
16 | ||
17 | from osprofiler.drivers import base | |
18 | from osprofiler.tests import test | |
19 | ||
20 | ||
21 | class MessagingTestCase(test.TestCase): | |
22 | ||
23 | def test_init_and_notify(self): | |
24 | ||
25 | messaging = mock.MagicMock() | |
26 | context = "context" | |
27 | transport = "transport" | |
28 | project = "project" | |
29 | service = "service" | |
30 | host = "host" | |
31 | ||
32 | notify_func = base.get_driver( | |
33 | "messaging://", messaging, context, transport, | |
34 | project, service, host).notify | |
35 | ||
36 | messaging.Notifier.assert_called_once_with( | |
37 | transport, publisher_id=host, driver="messaging", | |
38 | topics=["profiler"], retry=0) | |
39 | ||
40 | info = { | |
41 | "a": 10, | |
42 | "project": project, | |
43 | "service": service, | |
44 | "host": host | |
45 | } | |
46 | notify_func(info) | |
47 | ||
48 | messaging.Notifier().info.assert_called_once_with( | |
49 | context, "profiler.service", info) | |
50 | ||
51 | messaging.reset_mock() | |
52 | notify_func(info, context="my_context") | |
53 | messaging.Notifier().info.assert_called_once_with( | |
54 | "my_context", "profiler.service", info) |
0 | # Copyright 2016 Mirantis Inc. | |
1 | # All Rights Reserved. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
4 | # not use this file except in compliance with the License. You may obtain | |
5 | # a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
12 | # License for the specific language governing permissions and limitations | |
13 | # under the License. | |
14 | ||
15 | import mock | |
16 | ||
17 | from osprofiler.drivers.mongodb import MongoDB | |
18 | from osprofiler.tests import test | |
19 | ||
20 | ||
21 | class MongoDBParserTestCase(test.TestCase): | |
22 | def setUp(self): | |
23 | super(MongoDBParserTestCase, self).setUp() | |
24 | self.mongodb = MongoDB("mongodb://localhost") | |
25 | ||
26 | def test_build_empty_tree(self): | |
27 | self.assertEqual([], self.mongodb._build_tree({})) | |
28 | ||
29 | def test_build_complex_tree(self): | |
30 | test_input = { | |
31 | "2": {"parent_id": "0", "trace_id": "2", "info": {"started": 1}}, | |
32 | "1": {"parent_id": "0", "trace_id": "1", "info": {"started": 0}}, | |
33 | "21": {"parent_id": "2", "trace_id": "21", "info": {"started": 6}}, | |
34 | "22": {"parent_id": "2", "trace_id": "22", "info": {"started": 7}}, | |
35 | "11": {"parent_id": "1", "trace_id": "11", "info": {"started": 1}}, | |
36 | "113": {"parent_id": "11", "trace_id": "113", | |
37 | "info": {"started": 3}}, | |
38 | "112": {"parent_id": "11", "trace_id": "112", | |
39 | "info": {"started": 2}}, | |
40 | "114": {"parent_id": "11", "trace_id": "114", | |
41 | "info": {"started": 5}} | |
42 | } | |
43 | ||
44 | expected_output = [ | |
45 | { | |
46 | "parent_id": "0", | |
47 | "trace_id": "1", | |
48 | "info": {"started": 0}, | |
49 | "children": [ | |
50 | { | |
51 | "parent_id": "1", | |
52 | "trace_id": "11", | |
53 | "info": {"started": 1}, | |
54 | "children": [ | |
55 | {"parent_id": "11", "trace_id": "112", | |
56 | "info": {"started": 2}, "children": []}, | |
57 | {"parent_id": "11", "trace_id": "113", | |
58 | "info": {"started": 3}, "children": []}, | |
59 | {"parent_id": "11", "trace_id": "114", | |
60 | "info": {"started": 5}, "children": []} | |
61 | ] | |
62 | } | |
63 | ] | |
64 | }, | |
65 | { | |
66 | "parent_id": "0", | |
67 | "trace_id": "2", | |
68 | "info": {"started": 1}, | |
69 | "children": [ | |
70 | {"parent_id": "2", "trace_id": "21", | |
71 | "info": {"started": 6}, "children": []}, | |
72 | {"parent_id": "2", "trace_id": "22", | |
73 | "info": {"started": 7}, "children": []} | |
74 | ] | |
75 | } | |
76 | ] | |
77 | ||
78 | result = self.mongodb._build_tree(test_input) | |
79 | self.assertEqual(expected_output, result) | |
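# The tree construction exercised above can be sketched as follows. This is
# an illustrative re-implementation, not the driver's actual code: it links
# each trace to its parent via "parent_id", orders siblings by their
# "started" value, and treats traces whose parent is absent as roots.
def build_tree(traces):
    """Group a flat {trace_id: trace} mapping into parent/child trees."""
    for trace in traces.values():
        trace.setdefault("children", [])
    roots = []
    # Visiting traces in "started" order makes each child list chronological.
    for trace_id in sorted(traces,
                           key=lambda t: traces[t]["info"]["started"]):
        trace = traces[trace_id]
        parent_id = trace["parent_id"]
        if parent_id in traces:
            traces[parent_id]["children"].append(trace)
        else:
            roots.append(trace)
    return sorted(roots, key=lambda t: t["info"]["started"])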
80 | ||
81 | def test_get_report_empty(self): | |
82 | self.mongodb.db = mock.MagicMock() | |
83 | self.mongodb.db.profiler.find.return_value = [] | |
84 | ||
85 | expected = { | |
86 | "info": { | |
87 | "name": "total", | |
88 | "started": 0, | |
89 | "finished": None, | |
90 | "last_trace_started": None | |
91 | }, | |
92 | "children": [], | |
93 | "stats": {}, | |
94 | } | |
95 | ||
96 | base_id = "10" | |
97 | self.assertEqual(expected, self.mongodb.get_report(base_id)) | |
98 | ||
99 | def test_get_report(self): | |
100 | self.mongodb.db = mock.MagicMock() | |
101 | results = [ | |
102 | { | |
103 | "info": { | |
104 | "project": None, | |
105 | "host": "ubuntu", | |
106 | "request": { | |
107 | "path": "/v2/a322b5049d224a90bf8786c644409400/volumes", | |
108 | "scheme": "http", | |
109 | "method": "POST", | |
110 | "query": "" | |
111 | }, | |
112 | "service": None | |
113 | }, | |
114 | "name": "wsgi-start", | |
115 | "service": "main", | |
116 | "timestamp": "2015-12-23T14:02:22.338776", | |
117 | "trace_id": "06320327-2c2c-45ae-923a-515de890276a", | |
118 | "project": "keystone", | |
119 | "parent_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4", | |
120 | "base_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4" | |
121 | }, | |
122 | ||
123 | { | |
124 | "info": { | |
125 | "project": None, | |
126 | "host": "ubuntu", | |
127 | "service": None | |
128 | }, | |
129 | "name": "wsgi-stop", | |
130 | "service": "main", | |
131 | "timestamp": "2015-12-23T14:02:22.380405", | |
132 | "trace_id": "839ca3f1-afcb-45be-a4a1-679124c552bf", | |
133 | "project": "keystone", | |
134 | "parent_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4", | |
135 | "base_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4" | |
136 | }, | |
137 | ||
138 | { | |
139 | "info": { | |
140 | "project": None, | |
141 | "host": "ubuntu", | |
142 | "db": { | |
143 | "params": { | |
144 | ||
145 | }, | |
146 | "statement": "SELECT 1" | |
147 | }, | |
148 | "service": None | |
149 | }, | |
150 | "name": "db-start", | |
151 | "service": "main", | |
152 | "timestamp": "2015-12-23T14:02:22.395365", | |
153 | "trace_id": "1baf1d24-9ca9-4f4c-bd3f-01b7e0c0735a", | |
154 | "project": "keystone", | |
155 | "parent_id": "06320327-2c2c-45ae-923a-515de890276a", | |
156 | "base_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4" | |
157 | }, | |
158 | ||
159 | { | |
160 | "info": { | |
161 | "project": None, | |
162 | "host": "ubuntu", | |
163 | "service": None | |
164 | }, | |
165 | "name": "db-stop", | |
166 | "service": "main", | |
167 | "timestamp": "2015-12-23T14:02:22.415486", | |
168 | "trace_id": "1baf1d24-9ca9-4f4c-bd3f-01b7e0c0735a", | |
169 | "project": "keystone", | |
170 | "parent_id": "06320327-2c2c-45ae-923a-515de890276a", | |
171 | "base_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4" | |
172 | }, | |
173 | ||
174 | { | |
175 | "info": { | |
176 | "project": None, | |
177 | "host": "ubuntu", | |
178 | "request": { | |
179 | "path": "/v2/a322b5049d224a90bf8786c644409400/volumes", | |
180 | "scheme": "http", | |
181 | "method": "GET", | |
182 | "query": "" | |
183 | }, | |
184 | "service": None | |
185 | }, | |
186 | "name": "wsgi-start", | |
187 | "service": "main", | |
188 | "timestamp": "2015-12-23T14:02:22.427444", | |
189 | "trace_id": "016c97fd-87f3-40b2-9b55-e431156b694b", | |
190 | "project": "keystone", | |
191 | "parent_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4", | |
192 | "base_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4" | |
193 | }] | |
194 | ||
195 | expected = {"children": [{"children": [{ | |
196 | "children": [], | |
197 | "info": {"finished": 76, | |
198 | "host": "ubuntu", | |
199 | "meta.raw_payload.db-start": { | |
200 | "base_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4", | |
201 | "info": {"db": {"params": {}, | |
202 | "statement": "SELECT 1"}, | |
203 | "host": "ubuntu", | |
204 | "project": None, | |
205 | "service": None}, | |
206 | "name": "db-start", | |
207 | "parent_id": "06320327-2c2c-45ae-923a-515de890276a", | |
208 | "project": "keystone", | |
209 | "service": "main", | |
210 | "timestamp": "2015-12-23T14:02:22.395365", | |
211 | "trace_id": "1baf1d24-9ca9-4f4c-bd3f-01b7e0c0735a"}, | |
212 | "meta.raw_payload.db-stop": { | |
213 | "base_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4", | |
214 | "info": {"host": "ubuntu", | |
215 | "project": None, | |
216 | "service": None}, | |
217 | "name": "db-stop", | |
218 | "parent_id": "06320327-2c2c-45ae-923a-515de890276a", | |
219 | "project": "keystone", | |
220 | "service": "main", | |
221 | "timestamp": "2015-12-23T14:02:22.415486", | |
222 | "trace_id": "1baf1d24-9ca9-4f4c-bd3f-01b7e0c0735a"}, | |
223 | "name": "db", | |
224 | "project": "keystone", | |
225 | "service": "main", | |
226 | "started": 56, | |
227 | "exception": "None"}, | |
228 | "parent_id": "06320327-2c2c-45ae-923a-515de890276a", | |
229 | "trace_id": "1baf1d24-9ca9-4f4c-bd3f-01b7e0c0735a"}], | |
230 | ||
231 | "info": {"finished": 0, | |
232 | "host": "ubuntu", | |
233 | "meta.raw_payload.wsgi-start": { | |
234 | "base_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4", | |
235 | "info": {"host": "ubuntu", | |
236 | "project": None, | |
237 | "request": {"method": "POST", | |
238 | "path": "/v2/a322b5049d224a90bf8" | |
239 | "786c644409400/volumes", | |
240 | "query": "", | |
241 | "scheme": "http"}, | |
242 | "service": None}, | |
243 | "name": "wsgi-start", | |
244 | "parent_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4", | |
245 | "project": "keystone", | |
246 | "service": "main", | |
247 | "timestamp": "2015-12-23T14:02:22.338776", | |
248 | "trace_id": "06320327-2c2c-45ae-923a-515de890276a"}, | |
249 | "name": "wsgi", | |
250 | "project": "keystone", | |
251 | "service": "main", | |
252 | "started": 0}, | |
253 | "parent_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4", | |
254 | "trace_id": "06320327-2c2c-45ae-923a-515de890276a"}, | |
255 | ||
256 | {"children": [], | |
257 | "info": {"finished": 41, | |
258 | "host": "ubuntu", | |
259 | "meta.raw_payload.wsgi-stop": { | |
260 | "base_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4", | |
261 | "info": {"host": "ubuntu", | |
262 | "project": None, | |
263 | "service": None}, | |
264 | "name": "wsgi-stop", | |
265 | "parent_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4", | |
266 | "project": "keystone", | |
267 | "service": "main", | |
268 | "timestamp": "2015-12-23T14:02:22.380405", | |
269 | "trace_id": "839ca3f1-afcb-45be-a4a1-679124c552bf"}, | |
270 | "name": "wsgi", | |
271 | "project": "keystone", | |
272 | "service": "main", | |
273 | "started": 41, | |
274 | "exception": "None"}, | |
275 | "parent_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4", | |
276 | "trace_id": "839ca3f1-afcb-45be-a4a1-679124c552bf"}, | |
277 | ||
278 | {"children": [], | |
279 | "info": {"finished": 88, | |
280 | "host": "ubuntu", | |
281 | "meta.raw_payload.wsgi-start": { | |
282 | "base_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4", | |
283 | "info": {"host": "ubuntu", | |
284 | "project": None, | |
285 | "request": {"method": "GET", | |
286 | "path": "/v2/a322b5049d224a90bf" | |
287 | "8786c644409400/volumes", | |
288 | "query": "", | |
289 | "scheme": "http"}, | |
290 | "service": None}, | |
291 | "name": "wsgi-start", | |
292 | "parent_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4", | |
293 | "project": "keystone", | |
294 | "service": "main", | |
295 | "timestamp": "2015-12-23T14:02:22.427444", | |
296 | "trace_id": "016c97fd-87f3-40b2-9b55-e431156b694b"}, | |
297 | "name": "wsgi", | |
298 | "project": "keystone", | |
299 | "service": "main", | |
300 | "started": 88}, | |
301 | "parent_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4", | |
302 | "trace_id": "016c97fd-87f3-40b2-9b55-e431156b694b"}], | |
303 | "info": { | |
304 | "finished": 88, | |
305 | "name": "total", | |
306 | "started": 0, | |
307 | "last_trace_started": 88 | |
308 | }, | |
309 | "stats": {"db": {"count": 1, "duration": 20}, | |
310 | "wsgi": {"count": 3, "duration": 0}}} | |
311 | ||
312 | self.mongodb.db.profiler.find.return_value = results | |
313 | ||
314 | base_id = "10" | |
315 | ||
316 | result = self.mongodb.get_report(base_id) | |
317 | ||
318 | expected_filter = [{"base_id": base_id}, {"_id": 0}] | |
319 | self.mongodb.db.profiler.find.assert_called_once_with( | |
320 | *expected_filter) | |
321 | self.assertEqual(expected, result) |
0 | # Copyright 2016 Mirantis Inc. | |
1 | # Copyright 2016 IBM Corporation. | |
2 | # All Rights Reserved. | |
3 | # | |
4 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
5 | # not use this file except in compliance with the License. You may obtain | |
6 | # a copy of the License at | |
7 | # | |
8 | # http://www.apache.org/licenses/LICENSE-2.0 | |
9 | # | |
10 | # Unless required by applicable law or agreed to in writing, software | |
11 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
12 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
13 | # License for the specific language governing permissions and limitations | |
14 | # under the License. | |
15 | ||
16 | import mock | |
17 | from oslo_serialization import jsonutils | |
18 | ||
19 | from osprofiler.drivers.redis_driver import Redis | |
20 | from osprofiler.tests import test | |
21 | ||
22 | ||
23 | class RedisParserTestCase(test.TestCase): | |
24 | def setUp(self): | |
25 | super(RedisParserTestCase, self).setUp() | |
26 | self.redisdb = Redis("redis://localhost:6379") | |
27 | ||
28 | def test_build_empty_tree(self): | |
29 | self.assertEqual([], self.redisdb._build_tree({})) | |
30 | ||
31 | def test_build_complex_tree(self): | |
32 | test_input = { | |
33 | "2": {"parent_id": "0", "trace_id": "2", "info": {"started": 1}}, | |
34 | "1": {"parent_id": "0", "trace_id": "1", "info": {"started": 0}}, | |
35 | "21": {"parent_id": "2", "trace_id": "21", "info": {"started": 6}}, | |
36 | "22": {"parent_id": "2", "trace_id": "22", "info": {"started": 7}}, | |
37 | "11": {"parent_id": "1", "trace_id": "11", "info": {"started": 1}}, | |
38 | "113": {"parent_id": "11", "trace_id": "113", | |
39 | "info": {"started": 3}}, | |
40 | "112": {"parent_id": "11", "trace_id": "112", | |
41 | "info": {"started": 2}}, | |
42 | "114": {"parent_id": "11", "trace_id": "114", | |
43 | "info": {"started": 5}} | |
44 | } | |
45 | ||
46 | expected_output = [ | |
47 | { | |
48 | "parent_id": "0", | |
49 | "trace_id": "1", | |
50 | "info": {"started": 0}, | |
51 | "children": [ | |
52 | { | |
53 | "parent_id": "1", | |
54 | "trace_id": "11", | |
55 | "info": {"started": 1}, | |
56 | "children": [ | |
57 | {"parent_id": "11", "trace_id": "112", | |
58 | "info": {"started": 2}, "children": []}, | |
59 | {"parent_id": "11", "trace_id": "113", | |
60 | "info": {"started": 3}, "children": []}, | |
61 | {"parent_id": "11", "trace_id": "114", | |
62 | "info": {"started": 5}, "children": []} | |
63 | ] | |
64 | } | |
65 | ] | |
66 | }, | |
67 | { | |
68 | "parent_id": "0", | |
69 | "trace_id": "2", | |
70 | "info": {"started": 1}, | |
71 | "children": [ | |
72 | {"parent_id": "2", "trace_id": "21", | |
73 | "info": {"started": 6}, "children": []}, | |
74 | {"parent_id": "2", "trace_id": "22", | |
75 | "info": {"started": 7}, "children": []} | |
76 | ] | |
77 | } | |
78 | ] | |
79 | ||
80 | result = self.redisdb._build_tree(test_input) | |
81 | self.assertEqual(expected_output, result) | |
82 | ||
83 | def test_get_report_empty(self): | |
84 | self.redisdb.db = mock.MagicMock() | |
85 | self.redisdb.db.scan_iter.return_value = [] | |
86 | ||
87 | expected = { | |
88 | "info": { | |
89 | "name": "total", | |
90 | "started": 0, | |
91 | "finished": None, | |
92 | "last_trace_started": None | |
93 | }, | |
94 | "children": [], | |
95 | "stats": {}, | |
96 | } | |
97 | ||
98 | base_id = "10" | |
99 | self.assertEqual(expected, self.redisdb.get_report(base_id)) | |
100 | ||
101 | def test_get_report(self): | |
102 | self.redisdb.db = mock.MagicMock() | |
103 | result_elements = [ | |
104 | { | |
105 | "info": { | |
106 | "project": None, | |
107 | "host": "ubuntu", | |
108 | "request": { | |
109 | "path": "/v2/a322b5049d224a90bf8786c644409400/volumes", | |
110 | "scheme": "http", | |
111 | "method": "POST", | |
112 | "query": "" | |
113 | }, | |
114 | "service": None | |
115 | }, | |
116 | "name": "wsgi-start", | |
117 | "service": "main", | |
118 | "timestamp": "2015-12-23T14:02:22.338776", | |
119 | "trace_id": "06320327-2c2c-45ae-923a-515de890276a", | |
120 | "project": "keystone", | |
121 | "parent_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4", | |
122 | "base_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4" | |
123 | }, | |
124 | ||
125 | { | |
126 | "info": { | |
127 | "project": None, | |
128 | "host": "ubuntu", | |
129 | "service": None | |
130 | }, | |
131 | "name": "wsgi-stop", | |
132 | "service": "main", | |
133 | "timestamp": "2015-12-23T14:02:22.380405", | |
134 | "trace_id": "839ca3f1-afcb-45be-a4a1-679124c552bf", | |
135 | "project": "keystone", | |
136 | "parent_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4", | |
137 | "base_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4" | |
138 | }, | |
139 | ||
140 | { | |
141 | "info": { | |
142 | "project": None, | |
143 | "host": "ubuntu", | |
144 | "db": { | |
145 | "params": { | |
146 | ||
147 | }, | |
148 | "statement": "SELECT 1" | |
149 | }, | |
150 | "service": None | |
151 | }, | |
152 | "name": "db-start", | |
153 | "service": "main", | |
154 | "timestamp": "2015-12-23T14:02:22.395365", | |
155 | "trace_id": "1baf1d24-9ca9-4f4c-bd3f-01b7e0c0735a", | |
156 | "project": "keystone", | |
157 | "parent_id": "06320327-2c2c-45ae-923a-515de890276a", | |
158 | "base_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4" | |
159 | }, | |
160 | ||
161 | { | |
162 | "info": { | |
163 | "project": None, | |
164 | "host": "ubuntu", | |
165 | "service": None | |
166 | }, | |
167 | "name": "db-stop", | |
168 | "service": "main", | |
169 | "timestamp": "2015-12-23T14:02:22.415486", | |
170 | "trace_id": "1baf1d24-9ca9-4f4c-bd3f-01b7e0c0735a", | |
171 | "project": "keystone", | |
172 | "parent_id": "06320327-2c2c-45ae-923a-515de890276a", | |
173 | "base_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4" | |
174 | }, | |
175 | ||
176 | { | |
177 | "info": { | |
178 | "project": None, | |
179 | "host": "ubuntu", | |
180 | "request": { | |
181 | "path": "/v2/a322b5049d224a90bf8786c644409400/volumes", | |
182 | "scheme": "http", | |
183 | "method": "GET", | |
184 | "query": "" | |
185 | }, | |
186 | "service": None | |
187 | }, | |
188 | "name": "wsgi-start", | |
189 | "service": "main", | |
190 | "timestamp": "2015-12-23T14:02:22.427444", | |
191 | "trace_id": "016c97fd-87f3-40b2-9b55-e431156b694b", | |
192 | "project": "keystone", | |
193 | "parent_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4", | |
194 | "base_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4" | |
195 | }] | |
196 | results = {result["base_id"] + "_" + result["trace_id"] + | |
197 | "_" + result["timestamp"]: result | |
198 | for result in result_elements} | |
199 | ||
200 | expected = {"children": [{"children": [{ | |
201 | "children": [], | |
202 | "info": {"finished": 76, | |
203 | "host": "ubuntu", | |
204 | "meta.raw_payload.db-start": { | |
205 | "base_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4", | |
206 | "info": {"db": {"params": {}, | |
207 | "statement": "SELECT 1"}, | |
208 | "host": "ubuntu", | |
209 | "project": None, | |
210 | "service": None}, | |
211 | "name": "db-start", | |
212 | "parent_id": "06320327-2c2c-45ae-923a-515de890276a", | |
213 | "project": "keystone", | |
214 | "service": "main", | |
215 | "timestamp": "2015-12-23T14:02:22.395365", | |
216 | "trace_id": "1baf1d24-9ca9-4f4c-bd3f-01b7e0c0735a"}, | |
217 | "meta.raw_payload.db-stop": { | |
218 | "base_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4", | |
219 | "info": {"host": "ubuntu", | |
220 | "project": None, | |
221 | "service": None}, | |
222 | "name": "db-stop", | |
223 | "parent_id": "06320327-2c2c-45ae-923a-515de890276a", | |
224 | "project": "keystone", | |
225 | "service": "main", | |
226 | "timestamp": "2015-12-23T14:02:22.415486", | |
227 | "trace_id": "1baf1d24-9ca9-4f4c-bd3f-01b7e0c0735a"}, | |
228 | "name": "db", | |
229 | "project": "keystone", | |
230 | "service": "main", | |
231 | "started": 56, | |
232 | "exception": "None"}, | |
233 | "parent_id": "06320327-2c2c-45ae-923a-515de890276a", | |
234 | "trace_id": "1baf1d24-9ca9-4f4c-bd3f-01b7e0c0735a"}], | |
235 | ||
236 | "info": {"finished": 0, | |
237 | "host": "ubuntu", | |
238 | "meta.raw_payload.wsgi-start": { | |
239 | "base_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4", | |
240 | "info": {"host": "ubuntu", | |
241 | "project": None, | |
242 | "request": {"method": "POST", | |
243 | "path": "/v2/a322b5049d224a90bf8" | |
244 | "786c644409400/volumes", | |
245 | "query": "", | |
246 | "scheme": "http"}, | |
247 | "service": None}, | |
248 | "name": "wsgi-start", | |
249 | "parent_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4", | |
250 | "project": "keystone", | |
251 | "service": "main", | |
252 | "timestamp": "2015-12-23T14:02:22.338776", | |
253 | "trace_id": "06320327-2c2c-45ae-923a-515de890276a"}, | |
254 | "name": "wsgi", | |
255 | "project": "keystone", | |
256 | "service": "main", | |
257 | "started": 0}, | |
258 | "parent_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4", | |
259 | "trace_id": "06320327-2c2c-45ae-923a-515de890276a"}, | |
260 | ||
261 | {"children": [], | |
262 | "info": {"finished": 41, | |
263 | "host": "ubuntu", | |
264 | "meta.raw_payload.wsgi-stop": { | |
265 | "base_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4", | |
266 | "info": {"host": "ubuntu", | |
267 | "project": None, | |
268 | "service": None}, | |
269 | "name": "wsgi-stop", | |
270 | "parent_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4", | |
271 | "project": "keystone", | |
272 | "service": "main", | |
273 | "timestamp": "2015-12-23T14:02:22.380405", | |
274 | "trace_id": "839ca3f1-afcb-45be-a4a1-679124c552bf"}, | |
275 | "name": "wsgi", | |
276 | "project": "keystone", | |
277 | "service": "main", | |
278 | "started": 41, | |
279 | "exception": "None"}, | |
280 | "parent_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4", | |
281 | "trace_id": "839ca3f1-afcb-45be-a4a1-679124c552bf"}, | |
282 | ||
283 | {"children": [], | |
284 | "info": {"finished": 88, | |
285 | "host": "ubuntu", | |
286 | "meta.raw_payload.wsgi-start": { | |
287 | "base_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4", | |
288 | "info": {"host": "ubuntu", | |
289 | "project": None, | |
290 | "request": {"method": "GET", | |
291 | "path": "/v2/a322b5049d224a90bf" | |
292 | "8786c644409400/volumes", | |
293 | "query": "", | |
294 | "scheme": "http"}, | |
295 | "service": None}, | |
296 | "name": "wsgi-start", | |
297 | "parent_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4", | |
298 | "project": "keystone", | |
299 | "service": "main", | |
300 | "timestamp": "2015-12-23T14:02:22.427444", | |
301 | "trace_id": "016c97fd-87f3-40b2-9b55-e431156b694b"}, | |
302 | "name": "wsgi", | |
303 | "project": "keystone", | |
304 | "service": "main", | |
305 | "started": 88}, | |
306 | "parent_id": "7253ca8c-33b3-4f84-b4f1-f5a4311ddfa4", | |
307 | "trace_id": "016c97fd-87f3-40b2-9b55-e431156b694b"}], | |
308 | "info": { | |
309 | "finished": 88, | |
310 | "name": "total", | |
311 | "started": 0, | |
312 | "last_trace_started": 88 | |
313 | }, | |
314 | "stats": {"db": {"count": 1, "duration": 20}, | |
315 | "wsgi": {"count": 3, "duration": 0}}} | |
316 | ||
317 | self.redisdb.db.scan_iter.return_value = list(results.keys()) | |
318 | ||
319 | def side_effect(*args, **kwargs): | |
320 | return jsonutils.dumps(results[args[0]]) | |
321 | ||
322 | self.redisdb.db.get.side_effect = side_effect | |
323 | ||
324 | base_id = "10" | |
325 | ||
326 | result = self.redisdb.get_report(base_id) | |
327 | ||
328 | expected_filter = self.redisdb.namespace + "10*" | |
329 | self.redisdb.db.scan_iter.assert_called_once_with( | |
330 | match=expected_filter) | |
331 | self.assertEqual(expected, result) |
0 | # Copyright 2014 Mirantis Inc. | |
1 | # All Rights Reserved. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
4 | # not use this file except in compliance with the License. You may obtain | |
5 | # a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
12 | # License for the specific language governing permissions and limitations | |
13 | # under the License. | |
14 | ||
15 | import mock | |
16 | ||
17 | from osprofiler import notifier | |
18 | from osprofiler.tests import test | |
19 | ||
20 | ||
21 | class NotifierTestCase(test.TestCase): | |
22 | ||
23 | def tearDown(self): | |
24 | notifier.set(notifier._noop_notifier) | |
25 | super(NotifierTestCase, self).tearDown() | |
26 | ||
27 | def test_set(self): | |
28 | ||
29 | def test(info): | |
30 | pass | |
31 | ||
32 | notifier.set(test) | |
33 | self.assertEqual(notifier.get(), test) | |
34 | ||
35 | def test_get_default_notifier(self): | |
36 | self.assertEqual(notifier.get(), notifier._noop_notifier) | |
37 | ||
38 | def test_notify(self): | |
39 | m = mock.MagicMock() | |
40 | notifier.set(m) | |
41 | notifier.notify(10) | |
42 | ||
43 | m.assert_called_once_with(10) | |
44 | ||
45 | @mock.patch("osprofiler.notifier.base.get_driver") | |
46 | def test_create(self, mock_factory): | |
47 | ||
48 | result = notifier.create("test", 10, b=20) | |
49 | mock_factory.assert_called_once_with("test", 10, b=20) | |
50 | self.assertEqual(mock_factory.return_value.notify, result) |
0 | # Copyright 2016 Mirantis Inc. | |
1 | # All Rights Reserved. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
4 | # not use this file except in compliance with the License. You may obtain | |
5 | # a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
12 | # License for the specific language governing permissions and limitations | |
13 | # under the License. | |
14 | ||
15 | import mock | |
16 | from oslo_config import fixture | |
17 | ||
18 | from osprofiler import opts | |
19 | from osprofiler.tests import test | |
20 | ||
21 | ||
22 | class ConfigTestCase(test.TestCase): | |
23 | def setUp(self): | |
24 | super(ConfigTestCase, self).setUp() | |
25 | self.conf_fixture = self.useFixture(fixture.Config()) | |
26 | ||
27 | def test_options_defaults(self): | |
28 | opts.set_defaults(self.conf_fixture.conf) | |
29 | self.assertFalse(self.conf_fixture.conf.profiler.enabled) | |
30 | self.assertFalse(self.conf_fixture.conf.profiler.trace_sqlalchemy) | |
31 | self.assertEqual("SECRET_KEY", | |
32 | self.conf_fixture.conf.profiler.hmac_keys) | |
33 | self.assertFalse(opts.is_trace_enabled(self.conf_fixture.conf)) | |
34 | self.assertFalse(opts.is_db_trace_enabled(self.conf_fixture.conf)) | |
35 | ||
36 | def test_options_defaults_override(self): | |
37 | opts.set_defaults(self.conf_fixture.conf, enabled=True, | |
38 | trace_sqlalchemy=True, | |
39 | hmac_keys="MY_KEY") | |
40 | self.assertTrue(self.conf_fixture.conf.profiler.enabled) | |
41 | self.assertTrue(self.conf_fixture.conf.profiler.trace_sqlalchemy) | |
42 | self.assertEqual("MY_KEY", | |
43 | self.conf_fixture.conf.profiler.hmac_keys) | |
44 | self.assertTrue(opts.is_trace_enabled(self.conf_fixture.conf)) | |
45 | self.assertTrue(opts.is_db_trace_enabled(self.conf_fixture.conf)) | |
46 | ||
47 | @mock.patch("osprofiler.web.enable") | |
48 | @mock.patch("osprofiler.web.disable") | |
49 | def test_web_trace_disabled(self, mock_disable, mock_enable): | |
50 | opts.set_defaults(self.conf_fixture.conf, hmac_keys="MY_KEY") | |
51 | opts.enable_web_trace(self.conf_fixture.conf) | |
52 | opts.disable_web_trace(self.conf_fixture.conf) | |
53 | self.assertEqual(0, mock_enable.call_count) | |
54 | self.assertEqual(0, mock_disable.call_count) | |
55 | ||
56 | @mock.patch("osprofiler.web.enable") | |
57 | @mock.patch("osprofiler.web.disable") | |
58 | def test_web_trace_enabled(self, mock_disable, mock_enable): | |
59 | opts.set_defaults(self.conf_fixture.conf, enabled=True, | |
60 | hmac_keys="MY_KEY") | |
61 | opts.enable_web_trace(self.conf_fixture.conf) | |
62 | opts.disable_web_trace(self.conf_fixture.conf) | |
63 | mock_enable.assert_called_once_with("MY_KEY") | |
64 | mock_disable.assert_called_once_with() |
0 | # Copyright 2014 Mirantis Inc. | |
1 | # All Rights Reserved. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
4 | # not use this file except in compliance with the License. You may obtain | |
5 | # a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
12 | # License for the specific language governing permissions and limitations | |
13 | # under the License. | |
14 | ||
15 | import collections | |
16 | import copy | |
17 | import datetime | |
18 | import re | |
19 | ||
20 | import mock | |
21 | import six | |
22 | ||
23 | from osprofiler import profiler | |
24 | from osprofiler.tests import test | |
25 | ||
26 | ||
27 | class ProfilerGlobMethodsTestCase(test.TestCase): | |
28 | ||
29 | def test_get_profiler_not_inited(self): | |
30 | profiler._clean() | |
31 | self.assertIsNone(profiler.get()) | |
32 | ||
33 | def test_get_profiler_and_init(self): | |
34 | p = profiler.init("secret", base_id="1", parent_id="2") | |
35 | self.assertEqual(profiler.get(), p) | |
36 | ||
37 | self.assertEqual(p.get_base_id(), "1") | |
38 | # NOTE(boris-42): until the first start() call there is no | |
39 | # trace_id, so get_id() falls back to the parent_id. | |
39 | self.assertEqual(p.get_id(), "2") | |
40 | ||
41 | def test_start_not_inited(self): | |
42 | profiler._clean() | |
43 | profiler.start("name") | |
44 | ||
45 | def test_start(self): | |
46 | p = profiler.init("secret", base_id="1", parent_id="2") | |
47 | p.start = mock.MagicMock() | |
48 | profiler.start("name", info="info") | |
49 | p.start.assert_called_once_with("name", info="info") | |
50 | ||
51 | def test_stop_not_inited(self): | |
52 | profiler._clean() | |
53 | profiler.stop() | |
54 | ||
55 | def test_stop(self): | |
56 | p = profiler.init("secret", base_id="1", parent_id="2") | |
57 | p.stop = mock.MagicMock() | |
58 | profiler.stop(info="info") | |
59 | p.stop.assert_called_once_with(info="info") | |
60 | ||
61 | ||
62 | class ProfilerTestCase(test.TestCase): | |
63 | ||
64 | def test_profiler_get_base_id(self): | |
65 | prof = profiler._Profiler("secret", base_id="1", parent_id="2") | |
66 | self.assertEqual(prof.get_base_id(), "1") | |
67 | ||
68 | @mock.patch("osprofiler.profiler.uuidutils.generate_uuid") | |
69 | def test_profiler_get_parent_id(self, mock_generate_uuid): | |
70 | mock_generate_uuid.return_value = "42" | |
71 | prof = profiler._Profiler("secret", base_id="1", parent_id="2") | |
72 | prof.start("test") | |
73 | self.assertEqual(prof.get_parent_id(), "2") | |
74 | ||
75 | @mock.patch("osprofiler.profiler.uuidutils.generate_uuid") | |
76 | def test_profiler_get_base_id_unset_case(self, mock_generate_uuid): | |
77 | mock_generate_uuid.return_value = "42" | |
78 | prof = profiler._Profiler("secret") | |
79 | self.assertEqual(prof.get_base_id(), "42") | |
80 | self.assertEqual(prof.get_parent_id(), "42") | |
81 | ||
82 | @mock.patch("osprofiler.profiler.uuidutils.generate_uuid") | |
83 | def test_profiler_get_id(self, mock_generate_uuid): | |
84 | mock_generate_uuid.return_value = "43" | |
85 | prof = profiler._Profiler("secret") | |
86 | prof.start("test") | |
87 | self.assertEqual(prof.get_id(), "43") | |
88 | ||
89 | @mock.patch("osprofiler.profiler.datetime") | |
90 | @mock.patch("osprofiler.profiler.uuidutils.generate_uuid") | |
91 | @mock.patch("osprofiler.profiler.notifier.notify") | |
92 | def test_profiler_start(self, mock_notify, mock_generate_uuid, | |
93 | mock_datetime): | |
94 | mock_generate_uuid.return_value = "44" | |
95 | now = datetime.datetime.utcnow() | |
96 | mock_datetime.datetime.utcnow.return_value = now | |
97 | ||
98 | info = {"some": "info"} | |
99 | payload = { | |
100 | "name": "test-start", | |
101 | "base_id": "1", | |
102 | "parent_id": "2", | |
103 | "trace_id": "44", | |
104 | "info": info, | |
105 | "timestamp": now.strftime("%Y-%m-%dT%H:%M:%S.%f"), | |
106 | } | |
107 | ||
108 | prof = profiler._Profiler("secret", base_id="1", parent_id="2") | |
109 | prof.start("test", info=info) | |
110 | ||
111 | mock_notify.assert_called_once_with(payload) | |
112 | ||
113 | @mock.patch("osprofiler.profiler.datetime") | |
114 | @mock.patch("osprofiler.profiler.notifier.notify") | |
115 | def test_profiler_stop(self, mock_notify, mock_datetime): | |
116 | now = datetime.datetime.utcnow() | |
117 | mock_datetime.datetime.utcnow.return_value = now | |
118 | prof = profiler._Profiler("secret", base_id="1", parent_id="2") | |
119 | prof._trace_stack.append("44") | |
120 | prof._name.append("abc") | |
121 | ||
122 | info = {"some": "info"} | |
123 | prof.stop(info=info) | |
124 | ||
125 | payload = { | |
126 | "name": "abc-stop", | |
127 | "base_id": "1", | |
128 | "parent_id": "2", | |
129 | "trace_id": "44", | |
130 | "info": info, | |
131 | "timestamp": now.strftime("%Y-%m-%dT%H:%M:%S.%f"), | |
132 | } | |
133 | ||
134 | mock_notify.assert_called_once_with(payload) | |
135 | self.assertEqual(len(prof._name), 0) | |
136 | self.assertEqual(prof._trace_stack, collections.deque(["1", "2"])) | |
137 | ||
138 | def test_profiler_hmac(self): | |
139 | hmac = "secret" | |
140 | prof = profiler._Profiler(hmac, base_id="1", parent_id="2") | |
141 | self.assertEqual(hmac, prof.hmac_key) | |
142 | ||
143 | ||
144 | class WithTraceTestCase(test.TestCase): | |
145 | ||
146 | @mock.patch("osprofiler.profiler.stop") | |
147 | @mock.patch("osprofiler.profiler.start") | |
148 | def test_with_trace(self, mock_start, mock_stop): | |
149 | ||
150 | with profiler.Trace("a", info="a1"): | |
151 | mock_start.assert_called_once_with("a", info="a1") | |
152 | mock_start.reset_mock() | |
153 | with profiler.Trace("b", info="b1"): | |
154 | mock_start.assert_called_once_with("b", info="b1") | |
155 | mock_stop.assert_called_once_with() | |
156 | mock_stop.reset_mock() | |
157 | mock_stop.assert_called_once_with() | |
158 | ||
159 | @mock.patch("osprofiler.profiler.stop") | |
160 | @mock.patch("osprofiler.profiler.start") | |
161 | def test_with_trace_etype(self, mock_start, mock_stop): | |
162 | ||
163 | def foo(): | |
164 | with profiler.Trace("foo"): | |
165 | raise ValueError("bar") | |
166 | ||
167 | self.assertRaises(ValueError, foo) | |
168 | mock_start.assert_called_once_with("foo", info=None) | |
169 | mock_stop.assert_called_once_with(info={"etype": "ValueError"}) | |
170 | ||
171 | ||
172 | @profiler.trace("function", info={"info": "some_info"}) | |
173 | def tracede_func(i): | |
174 | return i | |
175 | ||
176 | ||
177 | @profiler.trace("hide_args", hide_args=True) | |
178 | def trace_hide_args_func(a, i=10): | |
179 | return (a, i) | |
180 | ||
181 | ||
182 | class TraceDecoratorTestCase(test.TestCase): | |
183 | ||
184 | @mock.patch("osprofiler.profiler.stop") | |
185 | @mock.patch("osprofiler.profiler.start") | |
186 | def test_duplicate_trace_disallow(self, mock_start, mock_stop): | |
187 | ||
188 | @profiler.trace("test") | |
189 | def trace_me(): | |
190 | pass | |
191 | ||
192 | self.assertRaises( | |
193 | ValueError, | |
194 | profiler.trace("test-again", allow_multiple_trace=False), | |
195 | trace_me) | |
196 | ||
197 | @mock.patch("osprofiler.profiler.stop") | |
198 | @mock.patch("osprofiler.profiler.start") | |
199 | def test_with_args(self, mock_start, mock_stop): | |
200 | self.assertEqual(1, tracede_func(1)) | |
201 | expected_info = { | |
202 | "info": "some_info", | |
203 | "function": { | |
204 | "name": "osprofiler.tests.unit.test_profiler.tracede_func", | |
205 | "args": str((1,)), | |
206 | "kwargs": str({}) | |
207 | } | |
208 | } | |
209 | mock_start.assert_called_once_with("function", info=expected_info) | |
210 | mock_stop.assert_called_once_with() | |
211 | ||
212 | @mock.patch("osprofiler.profiler.stop") | |
213 | @mock.patch("osprofiler.profiler.start") | |
214 | def test_without_args(self, mock_start, mock_stop): | |
215 | self.assertEqual((1, 2), trace_hide_args_func(1, i=2)) | |
216 | expected_info = { | |
217 | "function": { | |
218 | "name": "osprofiler.tests.unit.test_profiler" | |
219 | ".trace_hide_args_func" | |
220 | } | |
221 | } | |
222 | mock_start.assert_called_once_with("hide_args", info=expected_info) | |
223 | mock_stop.assert_called_once_with() | |
224 | ||
225 | ||
226 | class FakeTracedCls(object): | |
227 | ||
228 | def method1(self, a, b, c=10): | |
229 | return a + b + c | |
230 | ||
231 | def method2(self, d, e): | |
232 | return d - e | |
233 | ||
234 | def method3(self, g=10, h=20): | |
235 | return g * h | |
236 | ||
237 | def _method(self, i): | |
238 | return i | |
239 | ||
240 | ||
241 | @profiler.trace_cls("rpc", info={"a": 10}) | |
242 | class FakeTraceClassWithInfo(FakeTracedCls): | |
243 | pass | |
244 | ||
245 | ||
246 | @profiler.trace_cls("a", info={"b": 20}, hide_args=True) | |
247 | class FakeTraceClassHideArgs(FakeTracedCls): | |
248 | pass | |
249 | ||
250 | ||
251 | @profiler.trace_cls("rpc", trace_private=True) | |
252 | class FakeTracePrivate(FakeTracedCls): | |
253 | pass | |
254 | ||
255 | ||
256 | class FakeTraceStaticMethodBase(FakeTracedCls): | |
257 | @staticmethod | |
258 | def static_method(arg): | |
259 | return arg | |
260 | ||
261 | ||
262 | @profiler.trace_cls("rpc", trace_static_methods=True) | |
263 | class FakeTraceStaticMethod(FakeTraceStaticMethodBase): | |
264 | pass | |
265 | ||
266 | ||
267 | @profiler.trace_cls("rpc") | |
268 | class FakeTraceStaticMethodSkip(FakeTraceStaticMethodBase): | |
269 | pass | |
270 | ||
271 | ||
272 | class FakeTraceClassMethodBase(FakeTracedCls): | |
273 | @classmethod | |
274 | def class_method(cls, arg): | |
275 | return arg | |
276 | ||
277 | ||
278 | @profiler.trace_cls("rpc") | |
279 | class FakeTraceClassMethodSkip(FakeTraceClassMethodBase): | |
280 | pass | |
281 | ||
282 | ||
283 | def py3_info(info): | |
284 | # NOTE(boris-42): on py3 the reported function name uses the | |
285 | # defining class (FakeTracedCls), not the decorated subclass. | |
285 | info_py3 = copy.deepcopy(info) | |
286 | new_name = re.sub("FakeTrace[^.]*", "FakeTracedCls", | |
287 | info_py3["function"]["name"]) | |
288 | info_py3["function"]["name"] = new_name | |
289 | return info_py3 | |
290 | ||
291 | ||
292 | def possible_mock_calls(name, info): | |
293 | # NOTE(boris-42): accept both the py2 and py3 spellings of the | |
294 | # traced function name. | |
294 | return [mock.call(name, info=info), mock.call(name, info=py3_info(info))] | |
295 | ||
296 | ||
297 | class TraceClsDecoratorTestCase(test.TestCase): | |
298 | ||
299 | @mock.patch("osprofiler.profiler.stop") | |
300 | @mock.patch("osprofiler.profiler.start") | |
301 | def test_args(self, mock_start, mock_stop): | |
302 | fake_cls = FakeTraceClassWithInfo() | |
303 | self.assertEqual(30, fake_cls.method1(5, 15)) | |
304 | expected_info = { | |
305 | "a": 10, | |
306 | "function": { | |
307 | "name": ("osprofiler.tests.unit.test_profiler" | |
308 | ".FakeTraceClassWithInfo.method1"), | |
309 | "args": str((fake_cls, 5, 15)), | |
310 | "kwargs": str({}) | |
311 | } | |
312 | } | |
313 | self.assertEqual(1, len(mock_start.call_args_list)) | |
314 | self.assertIn(mock_start.call_args_list[0], | |
315 | possible_mock_calls("rpc", expected_info)) | |
316 | mock_stop.assert_called_once_with() | |
317 | ||
318 | @mock.patch("osprofiler.profiler.stop") | |
319 | @mock.patch("osprofiler.profiler.start") | |
320 | def test_kwargs(self, mock_start, mock_stop): | |
321 | fake_cls = FakeTraceClassWithInfo() | |
322 | self.assertEqual(50, fake_cls.method3(g=5, h=10)) | |
323 | expected_info = { | |
324 | "a": 10, | |
325 | "function": { | |
326 | "name": ("osprofiler.tests.unit.test_profiler" | |
327 | ".FakeTraceClassWithInfo.method3"), | |
328 | "args": str((fake_cls,)), | |
329 | "kwargs": str({"g": 5, "h": 10}) | |
330 | } | |
331 | } | |
332 | self.assertEqual(1, len(mock_start.call_args_list)) | |
333 | self.assertIn(mock_start.call_args_list[0], | |
334 | possible_mock_calls("rpc", expected_info)) | |
335 | mock_stop.assert_called_once_with() | |
336 | ||
337 | @mock.patch("osprofiler.profiler.stop") | |
338 | @mock.patch("osprofiler.profiler.start") | |
339 | def test_without_private(self, mock_start, mock_stop): | |
340 | fake_cls = FakeTraceClassHideArgs() | |
341 | self.assertEqual(10, fake_cls._method(10)) | |
342 | self.assertFalse(mock_start.called) | |
343 | self.assertFalse(mock_stop.called) | |
344 | ||
345 | @mock.patch("osprofiler.profiler.stop") | |
346 | @mock.patch("osprofiler.profiler.start") | |
347 | def test_without_args(self, mock_start, mock_stop): | |
348 | fake_cls = FakeTraceClassHideArgs() | |
349 | self.assertEqual(40, fake_cls.method1(5, 15, c=20)) | |
350 | expected_info = { | |
351 | "b": 20, | |
352 | "function": { | |
353 | "name": ("osprofiler.tests.unit.test_profiler" | |
354 | ".FakeTraceClassHideArgs.method1"), | |
355 | } | |
356 | } | |
357 | ||
358 | self.assertEqual(1, len(mock_start.call_args_list)) | |
359 | self.assertIn(mock_start.call_args_list[0], | |
360 | possible_mock_calls("a", expected_info)) | |
361 | mock_stop.assert_called_once_with() | |
362 | ||
363 | @mock.patch("osprofiler.profiler.stop") | |
364 | @mock.patch("osprofiler.profiler.start") | |
365 | def test_private_methods(self, mock_start, mock_stop): | |
366 | fake_cls = FakeTracePrivate() | |
367 | self.assertEqual(5, fake_cls._method(5)) | |
368 | ||
369 | expected_info = { | |
370 | "function": { | |
371 | "name": ("osprofiler.tests.unit.test_profiler" | |
372 | ".FakeTracePrivate._method"), | |
373 | "args": str((fake_cls, 5)), | |
374 | "kwargs": str({}) | |
375 | } | |
376 | } | |
377 | ||
378 | self.assertEqual(1, len(mock_start.call_args_list)) | |
379 | self.assertIn(mock_start.call_args_list[0], | |
380 | possible_mock_calls("rpc", expected_info)) | |
381 | mock_stop.assert_called_once_with() | |
382 | ||
383 | @mock.patch("osprofiler.profiler.stop") | |
384 | @mock.patch("osprofiler.profiler.start") | |
385 | @test.testcase.skip( | |
386 | "Static method tracing was disabled due to a bug. This test should be " | |
387 | "skipped until we find a way to address it.") | |
388 | def test_static(self, mock_start, mock_stop): | |
389 | fake_cls = FakeTraceStaticMethod() | |
390 | ||
391 | self.assertEqual(25, fake_cls.static_method(25)) | |
392 | ||
393 | expected_info = { | |
394 | "function": { | |
395 | # fixme(boris-42): Static methods are treated differently in | |
396 | # Python 2.x and Python 3.x. So in PY2 we | |
397 | # expect to see method4 because the method is | |
398 | # static and has no reference to the class, | |
399 | # and FakeTraceStatic.method4 in PY3 | |
400 | "name": | |
401 | "osprofiler.tests.unit.test_profiler" | |
402 | ".method4" if six.PY2 else | |
403 | "osprofiler.tests.unit.test_profiler.FakeTraceStatic" | |
404 | ".method4", | |
405 | "args": str((25,)), | |
406 | "kwargs": str({}) | |
407 | } | |
408 | } | |
409 | ||
410 | self.assertEqual(1, len(mock_start.call_args_list)) | |
411 | self.assertIn(mock_start.call_args_list[0], | |
412 | possible_mock_calls("rpc", expected_info)) | |
413 | mock_stop.assert_called_once_with() | |
414 | ||
415 | @mock.patch("osprofiler.profiler.stop") | |
416 | @mock.patch("osprofiler.profiler.start") | |
417 | def test_static_method_skip(self, mock_start, mock_stop): | |
418 | self.assertEqual(25, FakeTraceStaticMethodSkip.static_method(25)) | |
419 | self.assertFalse(mock_start.called) | |
420 | self.assertFalse(mock_stop.called) | |
421 | ||
422 | @mock.patch("osprofiler.profiler.stop") | |
423 | @mock.patch("osprofiler.profiler.start") | |
424 | def test_class_method_skip(self, mock_start, mock_stop): | |
425 | self.assertEqual("foo", FakeTraceClassMethodSkip.class_method("foo")) | |
426 | self.assertFalse(mock_start.called) | |
427 | self.assertFalse(mock_stop.called) | |
428 | ||
429 | ||
430 | @six.add_metaclass(profiler.TracedMeta) | |
431 | class FakeTraceWithMetaclassBase(object): | |
432 | __trace_args__ = {"name": "rpc", | |
433 | "info": {"a": 10}} | |
434 | ||
435 | def method1(self, a, b, c=10): | |
436 | return a + b + c | |
437 | ||
438 | def method2(self, d, e): | |
439 | return d - e | |
440 | ||
441 | def method3(self, g=10, h=20): | |
442 | return g * h | |
443 | ||
444 | def _method(self, i): | |
445 | return i | |
446 | ||
447 | ||
448 | class FakeTraceDummy(FakeTraceWithMetaclassBase): | |
449 | def method4(self, j): | |
450 | return j | |
451 | ||
452 | ||
453 | class FakeTraceWithMetaclassHideArgs(FakeTraceWithMetaclassBase): | |
454 | __trace_args__ = {"name": "a", | |
455 | "info": {"b": 20}, | |
456 | "hide_args": True} | |
457 | ||
458 | def method5(self, k, l): | |
459 | return k + l | |
460 | ||
461 | ||
462 | class FakeTraceWithMetaclassPrivate(FakeTraceWithMetaclassBase): | |
463 | __trace_args__ = {"name": "rpc", | |
464 | "trace_private": True} | |
465 | ||
466 | def _new_private_method(self, m): | |
467 | return 2 * m | |
468 | ||
469 | ||
470 | class TraceWithMetaclassTestCase(test.TestCase): | |
471 | ||
472 | def test_no_name_exception(self): | |
473 | def define_class_with_no_name(): | |
474 | @six.add_metaclass(profiler.TracedMeta) | |
475 | class FakeTraceWithMetaclassNoName(FakeTracedCls): | |
476 | pass | |
477 | self.assertRaises(TypeError, define_class_with_no_name, 1) | |
478 | ||
479 | @mock.patch("osprofiler.profiler.stop") | |
480 | @mock.patch("osprofiler.profiler.start") | |
481 | def test_args(self, mock_start, mock_stop): | |
482 | fake_cls = FakeTraceWithMetaclassBase() | |
483 | self.assertEqual(30, fake_cls.method1(5, 15)) | |
484 | expected_info = { | |
485 | "a": 10, | |
486 | "function": { | |
487 | "name": ("osprofiler.tests.unit.test_profiler" | |
488 | ".FakeTraceWithMetaclassBase.method1"), | |
489 | "args": str((fake_cls, 5, 15)), | |
490 | "kwargs": str({}) | |
491 | } | |
492 | } | |
493 | self.assertEqual(1, len(mock_start.call_args_list)) | |
494 | self.assertIn(mock_start.call_args_list[0], | |
495 | possible_mock_calls("rpc", expected_info)) | |
496 | mock_stop.assert_called_once_with() | |
497 | ||
498 | @mock.patch("osprofiler.profiler.stop") | |
499 | @mock.patch("osprofiler.profiler.start") | |
500 | def test_kwargs(self, mock_start, mock_stop): | |
501 | fake_cls = FakeTraceWithMetaclassBase() | |
502 | self.assertEqual(50, fake_cls.method3(g=5, h=10)) | |
503 | expected_info = { | |
504 | "a": 10, | |
505 | "function": { | |
506 | "name": ("osprofiler.tests.unit.test_profiler" | |
507 | ".FakeTraceWithMetaclassBase.method3"), | |
508 | "args": str((fake_cls,)), | |
509 | "kwargs": str({"g": 5, "h": 10}) | |
510 | } | |
511 | } | |
512 | self.assertEqual(1, len(mock_start.call_args_list)) | |
513 | self.assertIn(mock_start.call_args_list[0], | |
514 | possible_mock_calls("rpc", expected_info)) | |
515 | mock_stop.assert_called_once_with() | |
516 | ||
517 | @mock.patch("osprofiler.profiler.stop") | |
518 | @mock.patch("osprofiler.profiler.start") | |
519 | def test_without_private(self, mock_start, mock_stop): | |
520 | fake_cls = FakeTraceWithMetaclassHideArgs() | |
521 | self.assertEqual(10, fake_cls._method(10)) | |
522 | self.assertFalse(mock_start.called) | |
523 | self.assertFalse(mock_stop.called) | |
524 | ||
525 | @mock.patch("osprofiler.profiler.stop") | |
526 | @mock.patch("osprofiler.profiler.start") | |
527 | def test_without_args(self, mock_start, mock_stop): | |
528 | fake_cls = FakeTraceWithMetaclassHideArgs() | |
529 | self.assertEqual(20, fake_cls.method5(5, 15)) | |
530 | expected_info = { | |
531 | "b": 20, | |
532 | "function": { | |
533 | "name": ("osprofiler.tests.unit.test_profiler" | |
534 | ".FakeTraceWithMetaclassHideArgs.method5") | |
535 | } | |
536 | } | |
537 | ||
538 | self.assertEqual(1, len(mock_start.call_args_list)) | |
539 | self.assertIn(mock_start.call_args_list[0], | |
540 | possible_mock_calls("a", expected_info)) | |
541 | mock_stop.assert_called_once_with() | |
542 | ||
543 | @mock.patch("osprofiler.profiler.stop") | |
544 | @mock.patch("osprofiler.profiler.start") | |
545 | def test_private_methods(self, mock_start, mock_stop): | |
546 | fake_cls = FakeTraceWithMetaclassPrivate() | |
547 | self.assertEqual(10, fake_cls._new_private_method(5)) | |
548 | ||
549 | expected_info = { | |
550 | "function": { | |
551 | "name": ("osprofiler.tests.unit.test_profiler" | |
552 | ".FakeTraceWithMetaclassPrivate._new_private_method"), | |
553 | "args": str((fake_cls, 5)), | |
554 | "kwargs": str({}) | |
555 | } | |
556 | } | |
557 | ||
558 | self.assertEqual(1, len(mock_start.call_args_list)) | |
559 | self.assertIn(mock_start.call_args_list[0], | |
560 | possible_mock_calls("rpc", expected_info)) | |
561 | mock_stop.assert_called_once_with() |
0 | # Copyright 2014 Mirantis Inc. | |
1 | # All Rights Reserved. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
4 | # not use this file except in compliance with the License. You may obtain | |
5 | # a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
12 | # License for the specific language governing permissions and limitations | |
13 | # under the License. | |
14 | ||
15 | import contextlib | |
16 | ||
17 | import mock | |
18 | ||
19 | from osprofiler import sqlalchemy | |
20 | from osprofiler.tests import test | |
21 | ||
22 | ||
23 | class SqlalchemyTracingTestCase(test.TestCase): | |
24 | ||
25 | @mock.patch("osprofiler.sqlalchemy.profiler") | |
26 | def test_before_execute(self, mock_profiler): | |
27 | handler = sqlalchemy._before_cursor_execute("sql") | |
28 | ||
29 | handler(mock.MagicMock(), 1, 2, 3, 4, 5) | |
30 | expected_info = {"db": {"statement": 2, "params": 3}} | |
31 | mock_profiler.start.assert_called_once_with("sql", info=expected_info) | |
32 | ||
33 | @mock.patch("osprofiler.sqlalchemy.profiler") | |
34 | def test_after_execute(self, mock_profiler): | |
35 | handler = sqlalchemy._after_cursor_execute() | |
36 | handler(mock.MagicMock(), 1, 2, 3, 4, 5) | |
37 | mock_profiler.stop.assert_called_once_with() | |
38 | ||
39 | @mock.patch("osprofiler.sqlalchemy._before_cursor_execute") | |
40 | @mock.patch("osprofiler.sqlalchemy._after_cursor_execute") | |
41 | def test_add_tracing(self, mock_after_exc, mock_before_exc): | |
42 | sa = mock.MagicMock() | |
43 | engine = mock.MagicMock() | |
44 | ||
45 | mock_before_exc.return_value = "before" | |
46 | mock_after_exc.return_value = "after" | |
47 | ||
48 | sqlalchemy.add_tracing(sa, engine, "sql") | |
49 | ||
50 | mock_before_exc.assert_called_once_with("sql") | |
51 | mock_after_exc.assert_called_once_with() | |
52 | expected_calls = [ | |
53 | mock.call(engine, "before_cursor_execute", "before"), | |
54 | mock.call(engine, "after_cursor_execute", "after") | |
55 | ] | |
56 | self.assertEqual(sa.event.listen.call_args_list, expected_calls) | |
57 | ||
58 | @mock.patch("osprofiler.sqlalchemy._before_cursor_execute") | |
59 | @mock.patch("osprofiler.sqlalchemy._after_cursor_execute") | |
60 | def test_wrap_session(self, mock_after_exc, mock_before_exc): | |
61 | sa = mock.MagicMock() | |
62 | ||
63 | @contextlib.contextmanager | |
64 | def _session(): | |
65 | session = mock.MagicMock() | |
66 | # current engine object stored within the session | |
67 | session.bind = mock.MagicMock() | |
68 | session.bind.traced = None | |
69 | yield session | |
70 | ||
71 | mock_before_exc.return_value = "before" | |
72 | mock_after_exc.return_value = "after" | |
73 | ||
74 | session = sqlalchemy.wrap_session(sa, _session()) | |
75 | ||
76 | with session as sess: | |
77 | pass | |
78 | ||
79 | mock_before_exc.assert_called_once_with("db") | |
80 | mock_after_exc.assert_called_once_with() | |
81 | expected_calls = [ | |
82 | mock.call(sess.bind, "before_cursor_execute", "before"), | |
83 | mock.call(sess.bind, "after_cursor_execute", "after") | |
84 | ] | |
85 | ||
86 | self.assertEqual(sa.event.listen.call_args_list, expected_calls) | |
87 | ||
88 | @mock.patch("osprofiler.sqlalchemy._before_cursor_execute") | |
89 | @mock.patch("osprofiler.sqlalchemy._after_cursor_execute") | |
90 | def test_disable_and_enable(self, mock_after_exc, mock_before_exc): | |
91 | sqlalchemy.disable() | |
92 | ||
93 | sa = mock.MagicMock() | |
94 | engine = mock.MagicMock() | |
95 | sqlalchemy.add_tracing(sa, engine, "sql") | |
96 | self.assertFalse(mock_after_exc.called) | |
97 | self.assertFalse(mock_before_exc.called) | |
98 | ||
99 | sqlalchemy.enable() | |
100 | sqlalchemy.add_tracing(sa, engine, "sql") | |
101 | self.assertTrue(mock_after_exc.called) | |
102 | self.assertTrue(mock_before_exc.called) |
0 | # Copyright 2014 Mirantis Inc. | |
1 | # All Rights Reserved. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
4 | # not use this file except in compliance with the License. You may obtain | |
5 | # a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
12 | # License for the specific language governing permissions and limitations | |
13 | # under the License. | |
14 | ||
15 | import base64 | |
16 | import hashlib | |
17 | import hmac | |
18 | ||
19 | import mock | |
20 | ||
21 | from osprofiler import _utils as utils | |
22 | from osprofiler.tests import test | |
23 | ||
24 | ||
25 | class UtilsTestCase(test.TestCase): | |
26 | ||
27 | def test_split(self): | |
28 | self.assertEqual([1, 2], utils.split([1, 2])) | |
29 | self.assertEqual(["A", "B"], utils.split("A, B")) | |
30 | self.assertEqual(["A", " B"], utils.split("A, B", strip=False)) | |
31 | ||
32 | def test_split_wrong_type(self): | |
33 | self.assertRaises(TypeError, utils.split, 1) | |
34 | ||
35 | def test_binary_encode_and_decode(self): | |
36 | self.assertEqual("text", | |
37 | utils.binary_decode(utils.binary_encode("text"))) | |
38 | ||
39 | def test_binary_encode_invalid_type(self): | |
40 | self.assertRaises(TypeError, utils.binary_encode, 1234) | |
41 | ||
42 | def test_binary_encode_binary_type(self): | |
43 | binary = utils.binary_encode("text") | |
44 | self.assertEqual(binary, utils.binary_encode(binary)) | |
45 | ||
46 | def test_binary_decode_invalid_type(self): | |
47 | self.assertRaises(TypeError, utils.binary_decode, 1234) | |
48 | ||
49 | def test_binary_decode_text_type(self): | |
50 | self.assertEqual("text", utils.binary_decode("text")) | |
51 | ||
52 | def test_generate_hmac(self): | |
53 | hmac_key = "secrete" | |
54 | data = "my data" | |
55 | ||
56 | h = hmac.new(utils.binary_encode(hmac_key), digestmod=hashlib.sha1) | |
57 | h.update(utils.binary_encode(data)) | |
58 | ||
59 | self.assertEqual(h.hexdigest(), utils.generate_hmac(data, hmac_key)) | |
60 | ||
61 | def test_signed_pack_unpack(self): | |
62 | hmac = "secret" | |
63 | data = {"some": "data"} | |
64 | ||
65 | packed_data, hmac_data = utils.signed_pack(data, hmac) | |
66 | ||
67 | process_data = utils.signed_unpack(packed_data, hmac_data, [hmac]) | |
68 | self.assertIn("hmac_key", process_data) | |
69 | process_data.pop("hmac_key") | |
70 | self.assertEqual(data, process_data) | |
71 | ||
72 | def test_signed_pack_unpack_many_keys(self): | |
73 | keys = ["secret", "secret2", "secret3"] | |
74 | data = {"some": "data"} | |
75 | packed_data, hmac_data = utils.signed_pack(data, keys[-1]) | |
76 | ||
77 | process_data = utils.signed_unpack(packed_data, hmac_data, keys) | |
78 | self.assertEqual(keys[-1], process_data["hmac_key"]) | |
79 | ||
80 | def test_signed_pack_unpack_many_wrong_keys(self): | |
81 | keys = ["secret", "secret2", "secret3"] | |
82 | data = {"some": "data"} | |
83 | packed_data, hmac_data = utils.signed_pack(data, "password") | |
84 | ||
85 | process_data = utils.signed_unpack(packed_data, hmac_data, keys) | |
86 | self.assertIsNone(process_data) | |
87 | ||
88 | def test_signed_unpack_wrong_key(self): | |
89 | data = {"some": "data"} | |
90 | packed_data, hmac_data = utils.signed_pack(data, "secret") | |
91 | ||
92 | self.assertIsNone(utils.signed_unpack(packed_data, hmac_data, "wrong")) | |
93 | ||
94 | def test_signed_unpack_no_key_or_hmac_data(self): | |
95 | data = {"some": "data"} | |
96 | packed_data, hmac_data = utils.signed_pack(data, "secret") | |
97 | self.assertIsNone(utils.signed_unpack(packed_data, hmac_data, None)) | |
98 | self.assertIsNone(utils.signed_unpack(packed_data, None, "secret")) | |
99 | self.assertIsNone(utils.signed_unpack(packed_data, " ", "secret")) | |
100 | ||
101 | @mock.patch("osprofiler._utils.generate_hmac") | |
102 | def test_signed_unpack_generate_hmac_failed(self, mock_generate_hmac): | |
103 | mock_generate_hmac.side_effect = Exception | |
104 | self.assertIsNone(utils.signed_unpack("data", "hmac_data", "hmac_key")) | |
105 | ||
106 | def test_signed_unpack_invalid_json(self): | |
107 | hmac = "secret" | |
108 | data = base64.urlsafe_b64encode(utils.binary_encode("not_a_json")) | |
109 | hmac_data = utils.generate_hmac(data, hmac) | |
110 | ||
111 | self.assertIsNone(utils.signed_unpack(data, hmac_data, hmac)) | |
112 | ||
113 | def test_itersubclasses(self): | |
114 | ||
115 | class A(object): | |
116 | pass | |
117 | ||
118 | class B(A): | |
119 | pass | |
120 | ||
121 | class C(A): | |
122 | pass | |
123 | ||
124 | class D(C): | |
125 | pass | |
126 | ||
127 | self.assertEqual([B, C, D], list(utils.itersubclasses(A))) | |
128 | ||
129 | class E(type): | |
130 | pass | |
131 | ||
132 | self.assertEqual([], list(utils.itersubclasses(E))) |
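`test_generate_hmac` above spells out the expected digest computation: an HMAC-SHA1 hex digest over the UTF-8 encoded payload. A self-contained sketch of that computation using only the stdlib (the `generate_hmac` name matches the utility under test, but this body is a reconstruction from the test, not osprofiler's source):

```python
import hashlib
import hmac


def generate_hmac(data, hmac_key):
    # HMAC-SHA1 hex digest over UTF-8 encoded data, as the test computes it.
    h = hmac.new(hmac_key.encode("utf-8"), digestmod=hashlib.sha1)
    h.update(data.encode("utf-8"))
    return h.hexdigest()


digest = generate_hmac("my data", "secret")
```

The signed_pack/signed_unpack tests build on this: the packed payload is base64-encoded JSON, and unpacking succeeds only when one of the candidate keys reproduces the transmitted digest.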
0 | # Copyright 2014 Mirantis Inc. | |
1 | # All Rights Reserved. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | |
4 | # not use this file except in compliance with the License. You may obtain | |
5 | # a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | |
11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | |
12 | # License for the specific language governing permissions and limitations | |
13 | # under the License. | |
14 | ||
15 | import mock | |
16 | from webob import response as webob_response | |
17 | ||
18 | from osprofiler import _utils as utils | |
19 | from osprofiler import profiler | |
20 | from osprofiler.tests import test | |
21 | from osprofiler import web | |
22 | ||
23 | ||
24 | def dummy_app(environ, response): | |
25 | res = webob_response.Response() | |
26 | return res(environ, response) | |
27 | ||
28 | ||
29 | class WebTestCase(test.TestCase): | |
30 | ||
31 | def setUp(self): | |
32 | super(WebTestCase, self).setUp() | |
33 | profiler._clean() | |
34 | self.addCleanup(profiler._clean) | |
35 | ||
36 | def test_get_trace_id_headers_no_hmac(self): | |
37 | profiler.init(None, base_id="y", parent_id="z") | |
38 | headers = web.get_trace_id_headers() | |
39 | self.assertEqual(headers, {}) | |
40 | ||
41 | def test_get_trace_id_headers(self): | |
42 | profiler.init("key", base_id="y", parent_id="z") | |
43 | headers = web.get_trace_id_headers() | |
44 | self.assertEqual(sorted(headers.keys()), | |
45 | sorted(["X-Trace-Info", "X-Trace-HMAC"])) | |
46 | ||
47 | trace_info = utils.signed_unpack(headers["X-Trace-Info"], | |
48 | headers["X-Trace-HMAC"], ["key"]) | |
49 | self.assertIn("hmac_key", trace_info) | |
50 | self.assertEqual("key", trace_info.pop("hmac_key")) | |
51 | self.assertEqual({"parent_id": "z", "base_id": "y"}, trace_info) | |
52 | ||
53 | @mock.patch("osprofiler.profiler.get") | |
54 | def test_get_trace_id_headers_no_profiler(self, mock_get_profiler): | |
55 | mock_get_profiler.return_value = False | |
56 | headers = web.get_trace_id_headers() | |
57 | self.assertEqual(headers, {}) | |
58 | ||
59 | ||
60 | class WebMiddlewareTestCase(test.TestCase): | |
61 | def setUp(self): | |
62 | super(WebMiddlewareTestCase, self).setUp() | |
63 | profiler._clean() | |
64 | # None is the default state of the _ENABLED param, so reset it here | |
65 | web._ENABLED = None | |
66 | self.addCleanup(profiler._clean) | |
67 | ||
68 | def tearDown(self): | |
69 | web.enable() | |
70 | super(WebMiddlewareTestCase, self).tearDown() | |
71 | ||
72 | def test_factory(self): | |
73 | mock_app = mock.MagicMock() | |
74 | local_conf = {"enabled": True, "hmac_keys": "123"} | |
75 | ||
76 | factory = web.WsgiMiddleware.factory(None, **local_conf) | |
77 | wsgi = factory(mock_app) | |
78 | ||
79 | self.assertEqual(wsgi.application, mock_app) | |
80 | self.assertEqual(wsgi.name, "wsgi") | |
81 | self.assertTrue(wsgi.enabled) | |
82 | self.assertEqual(wsgi.hmac_keys, [local_conf["hmac_keys"]]) | |
83 | ||
84 | def _test_wsgi_middleware_with_invalid_trace(self, headers, hmac_key, | |
85 | mock_profiler_init, | |
86 | enabled=True): | |
87 | request = mock.MagicMock() | |
88 | request.get_response.return_value = "yeah!" | |
89 | request.headers = headers | |
90 | ||
91 | middleware = web.WsgiMiddleware("app", hmac_key, enabled=enabled) | |
92 | self.assertEqual("yeah!", middleware(request)) | |
93 | request.get_response.assert_called_once_with("app") | |
94 | self.assertEqual(0, mock_profiler_init.call_count) | |
95 | ||
96 | @mock.patch("osprofiler.web.profiler.init") | |
97 | def test_wsgi_middleware_disabled(self, mock_profiler_init): | |
98 | hmac_key = "secret" | |
99 | pack = utils.signed_pack({"base_id": "1", "parent_id": "2"}, hmac_key) | |
100 | headers = { | |
101 | "a": "1", | |
102 | "b": "2", | |
103 | "X-Trace-Info": pack[0], | |
104 | "X-Trace-HMAC": pack[1] | |
105 | } | |
106 | ||
107 | self._test_wsgi_middleware_with_invalid_trace(headers, hmac_key, | |
108 | mock_profiler_init, | |
109 | enabled=False) | |
110 | ||
111 | @mock.patch("osprofiler.web.profiler.init") | |
112 | def test_wsgi_middleware_no_trace(self, mock_profiler_init): | |
113 | headers = { | |
114 | "a": "1", | |
115 | "b": "2" | |
116 | } | |
117 | self._test_wsgi_middleware_with_invalid_trace(headers, "secret", | |
118 | mock_profiler_init) | |
119 | ||
120 | @mock.patch("osprofiler.web.profiler.init") | |
121 | def test_wsgi_middleware_invalid_trace_headers(self, mock_profiler_init): | |
122 | headers = { | |
123 | "a": "1", | |
124 | "b": "2", | |
125 | "X-Trace-Info": "abbababababa", | |
126 | "X-Trace-HMAC": "abbababababa" | |
127 | } | |
128 | self._test_wsgi_middleware_with_invalid_trace(headers, "secret", | |
129 | mock_profiler_init) | |
130 | ||
131 | @mock.patch("osprofiler.web.profiler.init") | |
132 | def test_wsgi_middleware_no_trace_hmac(self, mock_profiler_init): | |
133 | hmac_key = "secret" | |
134 | pack = utils.signed_pack({"base_id": "1", "parent_id": "2"}, hmac_key) | |
135 | headers = { | |
136 | "a": "1", | |
137 | "b": "2", | |
138 | "X-Trace-Info": pack[0] | |
139 | } | |
140 | self._test_wsgi_middleware_with_invalid_trace(headers, hmac_key, | |
141 | mock_profiler_init) | |
142 | ||
143 | @mock.patch("osprofiler.web.profiler.init") | |
144 | def test_wsgi_middleware_invalid_hmac(self, mock_profiler_init): | |
145 | hmac_key = "secret" | |
146 | pack = utils.signed_pack({"base_id": "1", "parent_id": "2"}, hmac_key) | |
147 | headers = { | |
148 | "a": "1", | |
149 | "b": "2", | |
150 | "X-Trace-Info": pack[0], | |
151 | "X-Trace-HMAC": "not valid hmac" | |
152 | } | |
153 | self._test_wsgi_middleware_with_invalid_trace(headers, hmac_key, | |
154 | mock_profiler_init) | |
155 | ||
156 | @mock.patch("osprofiler.web.profiler.init") | |
157 | def test_wsgi_middleware_invalid_trace_info(self, mock_profiler_init): | |
158 | hmac_key = "secret" | |
159 | pack = utils.signed_pack([{"base_id": "1"}, {"parent_id": "2"}], | |
160 | hmac_key) | |
161 | headers = { | |
162 | "a": "1", | |
163 | "b": "2", | |
164 | "X-Trace-Info": pack[0], | |
165 | "X-Trace-HMAC": pack[1] | |
166 | } | |
167 | self._test_wsgi_middleware_with_invalid_trace(headers, hmac_key, | |
168 | mock_profiler_init) | |
169 | ||
170 | @mock.patch("osprofiler.web.profiler.init") | |
171 | def test_wsgi_middleware_key_passthrough(self, mock_profiler_init): | |
172 | hmac_key = "secret2" | |
173 | request = mock.MagicMock() | |
174 | request.get_response.return_value = "yeah!" | |
175 | request.url = "someurl" | |
176 | request.host_url = "someurl" | |
177 | request.path = "path" | |
178 | request.query_string = "query" | |
179 | request.method = "method" | |
180 | request.scheme = "scheme" | |
181 | ||
182 | pack = utils.signed_pack({"base_id": "1", "parent_id": "2"}, hmac_key) | |
183 | ||
184 | request.headers = { | |
185 | "a": "1", | |
186 | "b": "2", | |
187 | "X-Trace-Info": pack[0], | |
188 | "X-Trace-HMAC": pack[1] | |
189 | } | |
190 | ||
191 | middleware = web.WsgiMiddleware("app", "secret1,%s" % hmac_key, | |
192 | enabled=True) | |
193 | self.assertEqual("yeah!", middleware(request)) | |
194 | mock_profiler_init.assert_called_once_with(hmac_key=hmac_key, | |
195 | base_id="1", | |
196 | parent_id="2") | |
197 | ||
198 | @mock.patch("osprofiler.web.profiler.init") | |
199 | def test_wsgi_middleware_key_passthrough2(self, mock_profiler_init): | |
200 | hmac_key = "secret1" | |
201 | request = mock.MagicMock() | |
202 | request.get_response.return_value = "yeah!" | |
203 | request.url = "someurl" | |
204 | request.host_url = "someurl" | |
205 | request.path = "path" | |
206 | request.query_string = "query" | |
207 | request.method = "method" | |
208 | request.scheme = "scheme" | |
209 | ||
210 | pack = utils.signed_pack({"base_id": "1", "parent_id": "2"}, hmac_key) | |
211 | ||
212 | request.headers = { | |
213 | "a": "1", | |
214 | "b": "2", | |
215 | "X-Trace-Info": pack[0], | |
216 | "X-Trace-HMAC": pack[1] | |
217 | } | |
218 | ||
219 | middleware = web.WsgiMiddleware("app", "%s,secret2" % hmac_key, | |
220 | enabled=True) | |
221 | self.assertEqual("yeah!", middleware(request)) | |
222 | mock_profiler_init.assert_called_once_with(hmac_key=hmac_key, | |
223 | base_id="1", | |
224 | parent_id="2") | |
225 | ||
226 | @mock.patch("osprofiler.web.profiler.Trace") | |
227 | @mock.patch("osprofiler.web.profiler.init") | |
228 | def test_wsgi_middleware(self, mock_profiler_init, mock_profiler_trace): | |
229 | hmac_key = "secret" | |
230 | request = mock.MagicMock() | |
231 | request.get_response.return_value = "yeah!" | |
232 | request.url = "someurl" | |
233 | request.host_url = "someurl" | |
234 | request.path = "path" | |
235 | request.query_string = "query" | |
236 | request.method = "method" | |
237 | request.scheme = "scheme" | |
238 | ||
239 | pack = utils.signed_pack({"base_id": "1", "parent_id": "2"}, hmac_key) | |
240 | ||
241 | request.headers = { | |
242 | "a": "1", | |
243 | "b": "2", | |
244 | "X-Trace-Info": pack[0], | |
245 | "X-Trace-HMAC": pack[1] | |
246 | } | |
247 | ||
248 | middleware = web.WsgiMiddleware("app", hmac_key, enabled=True) | |
249 | self.assertEqual("yeah!", middleware(request)) | |
250 | mock_profiler_init.assert_called_once_with(hmac_key=hmac_key, | |
251 | base_id="1", | |
252 | parent_id="2") | |
253 | expected_info = { | |
254 | "request": { | |
255 | "path": request.path, | |
256 | "query": request.query_string, | |
257 | "method": request.method, | |
258 | "scheme": request.scheme | |
259 | } | |
260 | } | |
261 | mock_profiler_trace.assert_called_once_with("wsgi", info=expected_info) | |
262 | ||
263 | @mock.patch("osprofiler.web.profiler.init") | |
264 | def test_wsgi_middleware_disable_via_python(self, mock_profiler_init): | |
265 | request = mock.MagicMock() | |
266 | request.get_response.return_value = "yeah!" | |
267 | web.disable() | |
268 | middleware = web.WsgiMiddleware("app", "hmac_key", enabled=True) | |
269 | self.assertEqual("yeah!", middleware(request)) | |
270 | self.assertEqual(mock_profiler_init.call_count, 0) | |
271 | ||
272 | @mock.patch("osprofiler.web.profiler.init") | |
273 | def test_wsgi_middleware_enable_via_python(self, mock_profiler_init): | |
274 | request = mock.MagicMock() | |
275 | request.get_response.return_value = "yeah!" | |
276 | request.url = "someurl" | |
277 | request.host_url = "someurl" | |
278 | request.path = "path" | |
279 | request.query_string = "query" | |
280 | request.method = "method" | |
281 | request.scheme = "scheme" | |
282 | hmac_key = "super_secret_key2" | |
283 | ||
284 | pack = utils.signed_pack({"base_id": "1", "parent_id": "2"}, hmac_key) | |
285 | request.headers = { | |
286 | "a": "1", | |
287 | "b": "2", | |
288 | "X-Trace-Info": pack[0], | |
289 | "X-Trace-HMAC": pack[1] | |
290 | } | |
291 | ||
292 | web.enable("super_secret_key1,super_secret_key2") | |
293 | middleware = web.WsgiMiddleware("app", enabled=True) | |
294 | self.assertEqual("yeah!", middleware(request)) | |
295 | mock_profiler_init.assert_called_once_with(hmac_key=hmac_key, | |
296 | base_id="1", | |
297 | parent_id="2") | |
298 | ||
299 | def test_disable(self): | |
300 | web.disable() | |
301 | self.assertFalse(web._ENABLED) | |
302 | ||
303 | def test_enabled(self): | |
304 | web.disable() | |
305 | web.enable() | |
306 | self.assertTrue(web._ENABLED) |
122 | 122 | "scheme": request.scheme |
123 | 123 | } |
124 | 124 | } |
125 | with profiler.Trace(self.name, info=info): | |
126 | return request.get_response(self.application) | |
125 | try: | |
126 | with profiler.Trace(self.name, info=info): | |
127 | return request.get_response(self.application) | |
128 | finally: | |
129 | profiler._clean() |
0 | # -*- coding: utf-8 -*- | |
1 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
2 | # you may not use this file except in compliance with the License. | |
3 | # You may obtain a copy of the License at | |
4 | # | |
5 | # http://www.apache.org/licenses/LICENSE-2.0 | |
6 | # | |
7 | # Unless required by applicable law or agreed to in writing, software | |
8 | # distributed under the License is distributed on an "AS IS" BASIS, | |
9 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or | |
10 | # implied. | |
11 | # See the License for the specific language governing permissions and | |
12 | # limitations under the License. | |
13 | ||
14 | # This file is execfile()d with the current directory set to its | |
15 | # containing dir. | |
16 | # | |
17 | # Note that not all possible configuration values are present in this | |
18 | # autogenerated file. | |
19 | # | |
20 | # All configuration values have a default; values that are commented out | |
21 | # serve to show the default. | |
22 | ||
23 | # If extensions (or modules to document with autodoc) are in another directory, | |
24 | # add these directories to sys.path here. If the directory is relative to the | |
25 | # documentation root, use os.path.abspath to make it absolute, like shown here. | |
26 | # sys.path.insert(0, os.path.abspath('.')) | |
27 | ||
28 | # -- General configuration ------------------------------------------------ | |
29 | ||
30 | # If your documentation needs a minimal Sphinx version, state it here. | |
31 | # needs_sphinx = '1.0' | |
32 | ||
33 | # Add any Sphinx extension module names here, as strings. They can be | |
34 | # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom | |
35 | # ones. | |
36 | extensions = [ | |
37 | 'openstackdocstheme', | |
38 | 'reno.sphinxext', | |
39 | ] | |
40 | ||
41 | # openstackdocstheme options | |
42 | repository_name = 'openstack/osprofiler' | |
43 | bug_project = 'osprofiler' | |
44 | bug_tag = '' | |
45 | ||
46 | # Add any paths that contain templates here, relative to this directory. | |
47 | templates_path = ['_templates'] | |
48 | ||
49 | # The suffix of source filenames. | |
50 | source_suffix = '.rst' | |
51 | ||
52 | # The encoding of source files. | |
53 | # source_encoding = 'utf-8-sig' | |
54 | ||
55 | # The master toctree document. | |
56 | master_doc = 'index' | |
57 | ||
58 | # General information about the project. | |
59 | project = u'osprofiler Release Notes' | |
60 | copyright = u'2016, osprofiler Developers' | |
61 | ||
62 | # The version info for the project you're documenting, acts as replacement for | |
63 | # |version| and |release|, also used in various other places throughout the | |
64 | # built documents. | |
65 | # | |
66 | # The short X.Y version. | |
67 | # The full version, including alpha/beta/rc tags. | |
68 | import pkg_resources | |
69 | release = pkg_resources.get_distribution('osprofiler').version | |
70 | # The short X.Y version. | |
71 | version = release | |
72 | ||
73 | # The language for content autogenerated by Sphinx. Refer to documentation | |
74 | # for a list of supported languages. | |
75 | # language = None | |
76 | ||
77 | # There are two options for replacing |today|: either, you set today to some | |
78 | # non-false value, then it is used: | |
79 | # today = '' | |
80 | # Else, today_fmt is used as the format for a strftime call. | |
81 | # today_fmt = '%B %d, %Y' | |
82 | ||
83 | # List of patterns, relative to source directory, that match files and | |
84 | # directories to ignore when looking for source files. | |
85 | exclude_patterns = [] | |
86 | ||
87 | # The reST default role (used for this markup: `text`) to use for all | |
88 | # documents. | |
89 | # default_role = None | |
90 | ||
91 | # If true, '()' will be appended to :func: etc. cross-reference text. | |
92 | # add_function_parentheses = True | |
93 | ||
94 | # If true, the current module name will be prepended to all description | |
95 | # unit titles (such as .. function::). | |
96 | # add_module_names = True | |
97 | ||
98 | # If true, sectionauthor and moduleauthor directives will be shown in the | |
99 | # output. They are ignored by default. | |
100 | # show_authors = False | |
101 | ||
102 | # The name of the Pygments (syntax highlighting) style to use. | |
103 | pygments_style = 'sphinx' | |
104 | ||
105 | # A list of ignored prefixes for module index sorting. | |
106 | # modindex_common_prefix = [] | |
107 | ||
108 | # If true, keep warnings as "system message" paragraphs in the built documents. | |
109 | # keep_warnings = False | |
110 | ||
111 | ||
112 | # -- Options for HTML output ---------------------------------------------- | |
113 | ||
114 | # The theme to use for HTML and HTML Help pages. See the documentation for | |
115 | # a list of builtin themes. | |
116 | html_theme = 'openstackdocs' | |
117 | ||
118 | # Theme options are theme-specific and customize the look and feel of a theme | |
119 | # further. For a list of options available for each theme, see the | |
120 | # documentation. | |
121 | # html_theme_options = {} | |
122 | ||
123 | # Add any paths that contain custom themes here, relative to this directory. | |
124 | # html_theme_path = [] | |
125 | ||
126 | # The name for this set of Sphinx documents. If None, it defaults to | |
127 | # "<project> v<release> documentation". | |
128 | # html_title = None | |
129 | ||
130 | # A shorter title for the navigation bar. Default is the same as html_title. | |
131 | # html_short_title = None | |
132 | ||
133 | # The name of an image file (relative to this directory) to place at the top | |
134 | # of the sidebar. | |
135 | # html_logo = None | |
136 | ||
137 | # The name of an image file (within the static path) to use as favicon of the | |
138 | # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 | |
139 | # pixels large. | |
140 | # html_favicon = None | |
141 | ||
142 | # Add any paths that contain custom static files (such as style sheets) here, | |
143 | # relative to this directory. They are copied after the builtin static files, | |
144 | # so a file named "default.css" will overwrite the builtin "default.css". | |
145 | html_static_path = ['_static'] | |
146 | ||
147 | # Add any extra paths that contain custom files (such as robots.txt or | |
148 | # .htaccess) here, relative to this directory. These files are copied | |
149 | # directly to the root of the documentation. | |
150 | # html_extra_path = [] | |
151 | ||
152 | # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, | |
153 | # using the given strftime format. | |
154 | html_last_updated_fmt = '%Y-%m-%d %H:%M' | |
155 | ||
156 | # If true, SmartyPants will be used to convert quotes and dashes to | |
157 | # typographically correct entities. | |
158 | # html_use_smartypants = True | |
159 | ||
160 | # Custom sidebar templates, maps document names to template names. | |
161 | # html_sidebars = {} | |
162 | ||
163 | # Additional templates that should be rendered to pages, maps page names to | |
164 | # template names. | |
165 | # html_additional_pages = {} | |
166 | ||
167 | # If false, no module index is generated. | |
168 | # html_domain_indices = True | |
169 | ||
170 | # If false, no index is generated. | |
171 | # html_use_index = True | |
172 | ||
173 | # If true, the index is split into individual pages for each letter. | |
174 | # html_split_index = False | |
175 | ||
176 | # If true, links to the reST sources are added to the pages. | |
177 | # html_show_sourcelink = True | |
178 | ||
179 | # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. | |
180 | # html_show_sphinx = True | |
181 | ||
182 | # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. | |
183 | # html_show_copyright = True | |
184 | ||
185 | # If true, an OpenSearch description file will be output, and all pages will | |
186 | # contain a <link> tag referring to it. The value of this option must be the | |
187 | # base URL from which the finished HTML is served. | |
188 | # html_use_opensearch = '' | |
189 | ||
190 | # This is the file name suffix for HTML files (e.g. ".xhtml"). | |
191 | # html_file_suffix = None | |
192 | ||
193 | # Output file base name for HTML help builder. | |
194 | htmlhelp_basename = 'osprofilerReleaseNotesDoc' | |
195 | ||
196 | ||
197 | # -- Options for LaTeX output --------------------------------------------- | |
198 | ||
199 | latex_elements = { | |
200 | # The paper size ('letterpaper' or 'a4paper'). | |
201 | # 'papersize': 'letterpaper', | |
202 | ||
203 | # The font size ('10pt', '11pt' or '12pt'). | |
204 | # 'pointsize': '10pt', | |
205 | ||
206 | # Additional stuff for the LaTeX preamble. | |
207 | # 'preamble': '', | |
208 | } | |
209 | ||
210 | # Grouping the document tree into LaTeX files. List of tuples | |
211 | # (source start file, target name, title, | |
212 | # author, documentclass [howto, manual, or own class]). | |
213 | latex_documents = [ | |
214 | ('index', 'osprofilerReleaseNotes.tex', | |
215 | u'osprofiler Release Notes Documentation', | |
216 | u'osprofiler Developers', 'manual'), | |
217 | ] | |
218 | ||
219 | # The name of an image file (relative to this directory) to place at the top of | |
220 | # the title page. | |
221 | # latex_logo = None | |
222 | ||
223 | # For "manual" documents, if this is true, then toplevel headings are parts, | |
224 | # not chapters. | |
225 | # latex_use_parts = False | |
226 | ||
227 | # If true, show page references after internal links. | |
228 | # latex_show_pagerefs = False | |
229 | ||
230 | # If true, show URL addresses after external links. | |
231 | # latex_show_urls = False | |
232 | ||
233 | # Documents to append as an appendix to all manuals. | |
234 | # latex_appendices = [] | |
235 | ||
236 | # If false, no module index is generated. | |
237 | # latex_domain_indices = True | |
238 | ||
239 | ||
240 | # -- Options for manual page output --------------------------------------- | |
241 | ||
242 | # One entry per manual page. List of tuples | |
243 | # (source start file, name, description, authors, manual section). | |
244 | man_pages = [ | |
245 | ('index', 'osprofilerReleaseNotes', | |
246 | u'osprofiler Release Notes Documentation', | |
247 | [u'osprofiler Developers'], 1) | |
248 | ] | |
249 | ||
250 | # If true, show URL addresses after external links. | |
251 | # man_show_urls = False | |
252 | ||
253 | ||
254 | # -- Options for Texinfo output ------------------------------------------- | |
255 | ||
256 | # Grouping the document tree into Texinfo files. List of tuples | |
257 | # (source start file, target name, title, author, | |
258 | # dir menu entry, description, category) | |
259 | texinfo_documents = [ | |
260 | ('index', 'osprofilerReleaseNotes', | |
261 | u'osprofiler Release Notes Documentation', | |
262 | u'osprofiler Developers', 'osprofilerReleaseNotes', | |
263 | 'One line description of project.', | |
264 | 'Miscellaneous'), | |
265 | ] | |
266 | ||
267 | # Documents to append as an appendix to all manuals. | |
268 | # texinfo_appendices = [] | |
269 | ||
270 | # If false, no module index is generated. | |
271 | # texinfo_domain_indices = True | |
272 | ||
273 | # How to display URL addresses: 'footnote', 'no', or 'inline'. | |
274 | # texinfo_show_urls = 'footnote' | |
275 | ||
276 | # If true, do not generate a @detailmenu in the "Top" node's menu. | |
277 | # texinfo_no_detailmenu = False | |
278 | ||
279 | # -- Options for Internationalization output ------------------------------ | |
280 | locale_dirs = ['locale/'] |
0 | ========================== | |
1 | osprofiler Release Notes | |
2 | ========================== | |
3 | ||
4 | .. toctree:: | |
5 | :maxdepth: 1 | |
6 | ||
7 | unreleased | |
8 | ocata |
0 | =================================== | |
1 | Ocata Series Release Notes | |
2 | =================================== | |
3 | ||
4 | .. release-notes:: | |
5 | :branch: origin/stable/ocata |
0 | ============================== | |
1 | Current Series Release Notes | |
2 | ============================== | |
3 | ||
4 | .. release-notes:: |
0 | 0 | six>=1.9.0 # MIT |
1 | oslo.utils>=3.4.0 # Apache-2.0 | |
2 | WebOb>=1.2.3 # MIT | |
1 | oslo.messaging>=5.2.0 # Apache-2.0 | |
2 | oslo.log>=3.11.0 # Apache-2.0 | |
3 | oslo.utils>=3.16.0 # Apache-2.0 | |
4 | WebOb>=1.6.0 # MIT | |
5 | requests>=2.10.0 # Apache-2.0 | |
6 | netaddr>=0.7.13,!=0.7.16 # BSD | |
7 | oslo.concurrency>=3.8.0 # Apache-2.0 |
4 | 4 | README.rst |
5 | 5 | author = OpenStack |
6 | 6 | author-email = openstack-dev@lists.openstack.org |
7 | home-page = http://www.openstack.org/ | |
7 | home-page = https://docs.openstack.org/osprofiler/latest/ | |
8 | 8 | classifier = |
9 | 9 | Environment :: OpenStack |
10 | 10 | Intended Audience :: Developers |
14 | 14 | Programming Language :: Python |
15 | 15 | Programming Language :: Python :: 2 |
16 | 16 | Programming Language :: Python :: 2.7 |
17 | Programming Language :: Python :: 3.4 | |
17 | Programming Language :: Python :: 3.5 | |
18 | 18 | |
19 | 19 | [files] |
20 | 20 | packages = |
32 | 32 | all_files = 1 |
33 | 33 | build-dir = doc/build |
34 | 34 | source-dir = doc/source |
35 | warning-is-error = 1 | |
35 | 36 | |
36 | 37 | [entry_points] |
37 | 38 | oslo.config.opts = |
38 | 39 | osprofiler = osprofiler.opts:list_opts |
39 | 40 | console_scripts = |
40 | 41 | osprofiler = osprofiler.cmd.shell:main |
42 | paste.filter_factory = | |
43 | osprofiler = osprofiler.web:WsgiMiddleware.factory |
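
The new `paste.filter_factory` entry point above lets deployers enable the profiler middleware directly from a Paste Deploy pipeline. A minimal sketch of such a config follows; the `hmac_keys` and `enabled` option names are assumptions mirroring `WsgiMiddleware`'s keyword arguments, and the app/key names are hypothetical:

```ini
# Hypothetical Paste Deploy fragment (not part of this commit).
# "egg:osprofiler#osprofiler" resolves via the entry point added above.
[pipeline:main]
pipeline = osprofiler myapp

[filter:osprofiler]
use = egg:osprofiler#osprofiler
# Assumed option names, passed through to WsgiMiddleware.factory:
hmac_keys = SECRET_KEY
enabled = yes

[app:myapp]
...
```

With a fragment like this, requests carrying a trace header signed with a matching HMAC key would be profiled as they pass through the WSGI stack.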
0 | #!/usr/bin/env python | |
0 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
1 | # you may not use this file except in compliance with the License. | |
2 | # You may obtain a copy of the License at | |
3 | # | |
4 | # http://www.apache.org/licenses/LICENSE-2.0 | |
5 | # | |
6 | # Unless required by applicable law or agreed to in writing, software | |
7 | # distributed under the License is distributed on an "AS IS" BASIS, | |
8 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or | |
9 | # implied. | |
10 | # See the License for the specific language governing permissions and | |
11 | # limitations under the License. | |
1 | 12 | |
13 | # THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT | |
2 | 14 | import setuptools |
3 | 15 | |
16 | # In python < 2.7.4, a lazy loading of package `pbr` will break | |
17 | # setuptools if some other modules registered functions in `atexit`. | |
18 | # solution from: http://bugs.python.org/issue15881#msg170215 | |
19 | try: | |
20 | import multiprocessing # noqa | |
21 | except ImportError: | |
22 | pass | |
23 | ||
4 | 24 | setuptools.setup( |
5 | setup_requires=['pbr'], | |
25 | setup_requires=['pbr>=1.8'], | |
6 | 26 | pbr=True) |
0 | hacking>=0.10.2,<0.11 | |
0 | hacking>=0.12.0,!=0.13.0,<0.14 # Apache-2.0 | |
1 | 1 | |
2 | coverage>=3.6 | |
3 | discover | |
4 | mock>=1.2 | |
5 | python-subunit>=0.0.18 | |
6 | testrepository>=0.0.18 | |
7 | testtools>=1.4.0 | |
2 | coverage>=3.6 # Apache-2.0 | |
3 | ddt>=1.0.1 # MIT | |
4 | mock>=2.0 # BSD | |
5 | python-subunit>=0.0.18 # Apache-2.0/BSD | |
6 | testrepository>=0.0.18 # Apache-2.0/BSD | |
7 | testtools>=1.4.0 # MIT | |
8 | 8 | |
9 | oslosphinx>=2.5.0,!=3.4.0 # Apache-2.0 | |
10 | sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3 | |
9 | openstackdocstheme>=1.11.0 # Apache-2.0 | |
10 | sphinx>=1.6.2 # BSD | |
11 | ||
12 | # Bandit security code scanner | |
13 | bandit>=1.1.0 # Apache-2.0 | |
14 | ||
15 | python-ceilometerclient>=2.5.0 # Apache-2.0 | |
16 | pymongo>=3.0.2,!=3.1 # Apache-2.0 | |
17 | ||
18 | # Elasticsearch python client | |
19 | elasticsearch>=2.0.0,<=3.0.0 # Apache-2.0 | |
20 | ||
21 | # Redis python client | |
22 | redis>=2.10.0 # MIT | |
23 | ||
24 | # Build release notes | |
25 | reno>=1.8.0 # Apache-2.0 |
0 | # vim: tabstop=4 shiftwidth=4 softtabstop=4 | |
1 | ||
2 | 0 | # Copyright (c) 2013 Intel Corporation. |
3 | 1 | # All Rights Reserved. |
4 | 2 | # |
0 | # vim: tabstop=4 shiftwidth=4 softtabstop=4 | |
1 | ||
2 | 0 | # Copyright 2013 Red Hat, Inc. |
3 | 1 | # |
4 | 2 | # Licensed under the Apache License, Version 2.0 (the "License"); you may |
0 | 0 | [tox] |
1 | 1 | minversion = 1.6 |
2 | 2 | skipsdist = True |
3 | envlist = py34,py27,pep8 | |
3 | envlist = py35,py27,pep8 | |
4 | 4 | |
5 | 5 | [testenv] |
6 | 6 | setenv = VIRTUAL_ENV={envdir} |
15 | 15 | commands = python setup.py testr --slowest --testr-args='{posargs}' |
16 | 16 | distribute = false |
17 | 17 | |
18 | [testenv:functional] | |
19 | basepython = python2.7 | |
20 | setenv = {[testenv]setenv} | |
21 | OS_TEST_PATH=./osprofiler/tests/functional | |
22 | deps = {[testenv]deps} | |
23 | ||
24 | [testenv:functional-py35] | |
25 | basepython = python3.5 | |
26 | setenv = {[testenv:functional]setenv} | |
27 | deps = {[testenv:functional]deps} | |
28 | ||
18 | 29 | [testenv:pep8] |
19 | commands = flake8 | |
30 | commands = | |
31 | flake8 | |
32 | # Run security linter | |
33 | bandit -r osprofiler -n5 | |
20 | 34 | distribute = false |
21 | 35 | |
22 | 36 | [testenv:venv] |
26 | 40 | commands = python setup.py testr --coverage --testr-args='{posargs}' |
27 | 41 | |
28 | 42 | [testenv:docs] |
29 | changedir = doc/source | |
30 | commands = make html | |
43 | commands = python setup.py build_sphinx | |
44 | ||
45 | [testenv:bandit] | |
46 | commands = bandit -r osprofiler -n5 | |
31 | 47 | |
32 | 48 | [flake8] |
33 | 49 | show-source = true |
34 | 50 | builtins = _ |
35 | exclude=.venv,.git,.tox,dist,doc,*lib/python*,*egg,tools,setup.py | |
51 | exclude=.venv,.git,.tox,dist,doc,*lib/python*,*egg,tools,setup.py,build,releasenotes | |
36 | 52 | |
37 | 53 | [hacking] |
38 | local-check-factory = osprofiler.tests.hacking.checks.factory | |
54 | local-check-factory = osprofiler.hacking.checks.factory | |
55 | ||
56 | [testenv:releasenotes] | |
57 | commands = sphinx-build -a -E -W -d releasenotes/build/doctrees -b html releasenotes/source releasenotes/build/html |