logbook / 9af17e1
Imported Upstream version 0.6.0 (Agustin Henze)
87 changed file(s) with 10558 addition(s) and 0 deletion(s).
0 .ropeproject
1 .tox
2 docs/_build
3 logbook/_speedups.c
4 logbook/_speedups.so
5 Logbook.egg-info
6 dist
7 *.pyc
8 env
9 env*
10 .coverage
11 cover
12 build
13 .vagrant
14 flycheck-*
0 \.pyc$
1 \.egg-info$
2 docs/_build
3 \.ropeproject
0 language: python
1 python:
2 - "2.6"
3 - "2.7"
4 - "3.3"
5 - "pypy"
6
7 install:
8 # this fixes SemLock issues on travis
9 - "sudo rm -rf /dev/shm && sudo ln -s /run/shm /dev/shm"
10 - "sudo apt-get install libzmq3-dev redis-server"
11 - "python scripts/pypi_mirror_setup.py http://a.pypi.python.org/simple"
12 - "pip install cython redis"
13 - "make test_setup"
14 - "python setup.py develop"
15
16 env:
17 - COMMAND="make test"
18 - COMMAND="make cybuild test"
19
20 script: "$COMMAND"
21
22 matrix:
23 exclude:
24 - python: "pypy"
25 env: COMMAND="make cybuild test"
26
27 notifications:
28 email:
29 recipients:
30 - vmalloc@gmail.com
31 irc:
32 channels:
33 - "chat.freenode.net#pocoo"
34 on_success: change
35 on_failure: always
36 use_notice: true
37 skip_join: true
0 Logbook is written and maintained by the Logbook Team and various
1 contributors:
2
3 Lead Developers:
4
5 - Armin Ronacher <armin.ronacher@active-4.com>
6 - Georg Brandl
7
8 Contributors:
9
10 - Ronny Pfannschmidt
11 - Daniel Neuhäuser
12 - Kenneth Reitz
13 - Valentine Svensson
14 - Roman Valls Guimera
15 - Guillermo Carrasco Hernández
16 - Raphaël Vinot
0 Logbook Changelog
1 =================
2
3 Here you can see the full list of changes between each Logbook release.
4
5 Version 0.6.0
6 -------------
7
8 Released on October 3rd 2013. Codename "why_not_production_ready"
9
10 - Added Redis handler (Thanks a lot @guillermo-carrasco for this PR)
11 - Fixed email encoding bug (Thanks Raphaël Vinot)
12
13 Version 0.5.0
14 -------------
15
16 Released on August 10th 2013.
17
18 - Drop 2.5, 3.2 support, code cleanup
19 - The exc_info argument now accepts `True`, like in the standard logging module
20
21 Version 0.4.2
22 -------------
23
24 Released on June 2nd 2013.
25
26 - Fixed Python 3.x compatibility, including speedups
27 - Dropped Python 2.4 support. Supporting 2.4 required a lot of hacks in the code and introduced duplication in the test code. In addition, it is impossible to cover 2.4-3.x with a single tox installation, which risked unwitting code breakage. Travis also does not support Python 2.4, so the chances of accidentally breaking this support were very high.
28
29
30 Version 0.4.1
31 -------------
32
33 Released on December 12th. Codename "121212"
34
35 - Fixed several outstanding encoding problems, thanks to @dvarazzo.
36 - Merged in minor pull requests (see https://github.com/mitsuhiko/logbook/pulls?&state=closed)
37
38 Version 0.4
39 -----------
40
41 Released on October 24th. Codename "Phoenix"
42
43 - Added preliminary RabbitMQ and CouchDB support.
44 - Added :class:`logbook.notifiers.NotifoHandler`
45 - `channel` is now documented to be used for filtering purposes if
46 wanted. Previously this was an opaque string that was not intended
47 for filtering of any kind.
48
49 Version 0.3
50 -----------
51
52 Released on October 23rd. Codename "Informant"
53
54 - Added :class:`logbook.more.ColorizingStreamHandlerMixin` and
55 :class:`logbook.more.ColorizedStderrHandler`
56 - Deprecated :class:`logbook.RotatingFileHandlerBase` because the
57 interface was not flexible enough.
58 - Provided basic Python 3 compatibility. This did cause a few smaller
59 API changes that caused minimal changes on Python 2 as well. The
60 deprecation of the :class:`logbook.RotatingFileHandlerBase` was a
61 result of this.
62 - Added support for Python 2.4
63 - Added batch emitting support for handlers which now makes it possible
64 to use the :class:`logbook.more.FingersCrossedHandler` with the
65 :class:`logbook.MailHandler`.
66 - Moved the :class:`~logbook.FingersCrossedHandler` handler into the
67 base package. The old location stays importable for a few releases.
68 - Added :class:`logbook.GroupHandler` that buffers records until the
69 handler is popped.
70 - Added :class:`logbook.more.ExternalApplicationHandler` that executes
71 an external application for each log record emitted.
72
73 Version 0.2.1
74 -------------
75
76 Bugfix release, released on September 22nd.
77
78 - Fixes Python 2.5 compatibility.
79
80 Version 0.2
81 -----------
82
83 Released on September 21st. Codename "Walls of Text"
84
85 - Implemented default with statement for handlers which is an
86 alias for `threadbound`.
87 - `applicationbound` and `threadbound` return the handler now.
88 - Implemented channel recording on the log records.
89 - The :class:`logbook.more.FingersCrossedHandler` now is set to
90 `ERROR` by default and has the ability to create new loggers
91 from a factory function.
92 - Implemented maximum buffer size for the
93 :class:`logbook.more.FingersCrossedHandler` as well as a lock
94 for thread safety.
95 - Added ability to filter for context.
96 - Moved bubbling flags and filters to the handler object.
97 - Moved context processors on their own stack.
98 - Removed the `iter_context_handlers` function.
99 - Renamed `NestedHandlerSetup` to :class:`~logbook.NestedSetup`
100 because it can now also configure processors.
101 - Added the :class:`logbook.Processor` class.
102 - There is no difference between logger attached handlers and
103 context specific handlers any more.
104 - Added a function to redirect warnings to logbook
105 (:func:`logbook.compat.redirected_warnings`).
106 - Fixed and improved :class:`logbook.LoggerGroup`.
107 - The :class:`logbook.TestHandler` now keeps the record open
108 for further inspection.
109 - The traceback is now removed from a log record when the record
110 is closed. The formatted traceback is a cached property
111 instead of a function.
112 - Added ticketing handlers that send logs directly into a database.
113 - Added MongoDB backend for ticketing handlers
114 - Added a :func:`logbook.base.dispatch_record` function to dispatch
115 records to handlers independently of a logger (uses the default
116 record dispatching logic).
117 - Renamed `logger_name` to `channel`.
118 - Added a multi processing log handler
119 (:class:`logbook.more.MultiProcessingHandler`).
120 - Added a twitter handler.
121 - Added a ZeroMQ handler.
122 - Added a Growl handler.
123 - Added a Libnotify handler.
124 - Added a monitoring file handler.
125 - Added a handler wrapper that moves the actual handling into a
126 background thread.
127 - The mail handler can now be configured to deliver each log record
128 not more than n times in m seconds.
129 - Added support for Python 2.5
130 - Added a :class:`logbook.queues.SubscriberGroup` to deal with multiple
131 subscribers.
132 - Added a :class:`logbook.compat.LoggingHandler` for redirecting logbook
133 log calls to the standard library's :mod:`logging` module.
134
135 Version 0.1
136 -----------
137
138 First public release.
0 Copyright (c) 2010 by the Logbook Team, see AUTHORS for more details.
1
2 Some rights reserved.
3
4 Redistribution and use in source and binary forms, with or without
5 modification, are permitted provided that the following conditions are
6 met:
7
8 * Redistributions of source code must retain the above copyright
9 notice, this list of conditions and the following disclaimer.
10
11 * Redistributions in binary form must reproduce the above
12 copyright notice, this list of conditions and the following
13 disclaimer in the documentation and/or other materials provided
14 with the distribution.
15
16 * The names of the contributors may not be used to endorse or
17 promote products derived from this software without specific
18 prior written permission.
19
20 THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
21 "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
22 LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
23 A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
24 OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
25 SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
26 LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
27 DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
28 THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
29 (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
30 OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
0 include MANIFEST.in Makefile CHANGES logbook/_speedups.c logbook/_speedups.pyx tox.ini
1 include scripts/test_setup.py
2 recursive-include tests *
3
0 all: clean-pyc test
1
2 clean-pyc:
3 find . -name '*.pyc' -exec rm -f {} +
4 find . -name '*.pyo' -exec rm -f {} +
5 find . -name '*~' -exec rm -f {} +
6
7 test_setup:
8 @python scripts/test_setup.py
9
10 test:
11 @nosetests -w tests
12
13 toxtest:
14 @tox
15
16 vagrant_toxtest:
17 @vagrant up
18 @vagrant ssh --command "rsync -avP --delete --exclude=_build --exclude=.tox /vagrant/ ~/src/ && cd ~/src/ && tox"
19
20 bench:
21 @python benchmark/run.py
22
23 upload-docs: docs
24 python setup.py upload_docs
25
26 docs:
27 make -C docs html SPHINXOPTS=-Aonline=1
28
29 release: upload-docs
30 python scripts/make-release.py
31
32 logbook/_speedups.so: logbook/_speedups.pyx
33 cython logbook/_speedups.pyx
34 python setup.py build
35 cp build/*/logbook/_speedups*.so logbook
36
37 cybuild: logbook/_speedups.so
38
39 .PHONY: test upload-docs clean-pyc cybuild bench all docs
0 Welcome to Logbook
1 ==================
2
3 .. image:: https://secure.travis-ci.org/mitsuhiko/logbook.png
4 :target: https://travis-ci.org/mitsuhiko/logbook
5
6 .. image:: https://pypip.in/d/Logbook/badge.png
7 :target: https://crate.io/packages/Logbook
8
9 .. image:: https://pypip.in/v/Logbook/badge.png
10 :target: https://crate.io/packages/Logbook
11
12 Logbook is a nice logging replacement.
13
14 It should be easy to set up, use, and configure, and it supports web applications :)
15
16 For more information look at http://logbook.pocoo.org/
0 # -*- mode: ruby -*-
1 # vi: set ft=ruby :
2 PYTHON_VERSIONS = ["python2.6", "python2.7", "python3.3"]
3
4 Vagrant::Config.run do |config|
5 config.vm.define :box do |config|
6 config.vm.box = "precise64"
7 config.vm.box_url = "http://files.vagrantup.com/precise64.box"
8 config.vm.host_name = "box"
9 config.vm.provision :shell, :inline => "sudo apt-get -y update"
10 config.vm.provision :shell, :inline => "sudo apt-get install -y python-software-properties"
11 config.vm.provision :shell, :inline => "sudo add-apt-repository -y ppa:fkrull/deadsnakes"
12 config.vm.provision :shell, :inline => "sudo apt-get update"
13 PYTHON_VERSIONS.each { |python_version|
14 config.vm.provision :shell, :inline => "sudo apt-get install -y " + python_version + " " + python_version + "-dev"
15 }
16 config.vm.provision :shell, :inline => "sudo apt-get install -y libzmq-dev wget libbluetooth-dev libsqlite3-dev"
17 config.vm.provision :shell, :inline => "wget http://python-distribute.org/distribute_setup.py -O /tmp/distribute_setup.py"
18 PYTHON_VERSIONS.each { |python_executable|
19 config.vm.provision :shell, :inline => python_executable + " /tmp/distribute_setup.py"
20 }
21 config.vm.provision :shell, :inline => "sudo easy_install tox==1.2"
22 config.vm.provision :shell, :inline => "sudo easy_install virtualenv==1.6.4"
23 end
24 end
0 """Tests with frame introspection disabled"""
1 from logbook import Logger, NullHandler, Flags
2
3
4 log = Logger('Test logger')
5
6
7 class DummyHandler(NullHandler):
8 blackhole = False
9
10
11 def run():
12 with Flags(introspection=False):
13 with DummyHandler() as handler:
14 for x in xrange(500):
15 log.warning('this is not handled')
0 """Tests with the whole logger disabled"""
1 from logbook import Logger
2
3
4 log = Logger('Test logger')
5 log.disabled = True
6
7
8 def run():
9 for x in xrange(500):
10 log.warning('this is not handled')
0 """Tests with stack frame introspection enabled"""
1 from logbook import Logger, NullHandler, Flags
2
3
4 log = Logger('Test logger')
5
6
7 class DummyHandler(NullHandler):
8 blackhole = False
9
10
11 def run():
12 with Flags(introspection=True):
13 with DummyHandler() as handler:
14 for x in xrange(500):
15 log.warning('this is not handled')
0 """Benchmarks the file handler"""
1 from logbook import Logger, FileHandler
2 from tempfile import NamedTemporaryFile
3
4
5 log = Logger('Test logger')
6
7
8 def run():
9 f = NamedTemporaryFile()
10 with FileHandler(f.name) as handler:
11 for x in xrange(500):
12 log.warning('this is handled')
0 """Benchmarks the file handler with unicode"""
1 from logbook import Logger, FileHandler
2 from tempfile import NamedTemporaryFile
3
4
5 log = Logger('Test logger')
6
7
8 def run():
9 f = NamedTemporaryFile()
10 with FileHandler(f.name) as handler:
11 for x in xrange(500):
12 log.warning(u'this is handled \x6f')
0 """Test with no handler active"""
1 from logbook import Logger
2
3
4 def run():
5 for x in xrange(500):
6 Logger('Test')
0 """Benchmarks too low logger levels"""
1 from logbook import Logger, StreamHandler, ERROR
2 from cStringIO import StringIO
3
4
5 log = Logger('Test logger')
6 log.level = ERROR
7
8
9 def run():
10 out = StringIO()
11 with StreamHandler(out):
12 for x in xrange(500):
13 log.warning('this is not handled')
0 """Tests logging file handler in comparison"""
1 from logging import getLogger, FileHandler
2 from tempfile import NamedTemporaryFile
3
4
5 log = getLogger('Testlogger')
6
7
8 def run():
9 f = NamedTemporaryFile()
10 handler = FileHandler(f.name)
11 log.addHandler(handler)
12 for x in xrange(500):
13 log.warning('this is handled')
0 """Tests logging file handler in comparison"""
1 from logging import getLogger, FileHandler
2 from tempfile import NamedTemporaryFile
3
4
5 log = getLogger('Testlogger')
6
7
8 def run():
9 f = NamedTemporaryFile()
10 handler = FileHandler(f.name)
11 log.addHandler(handler)
12 for x in xrange(500):
13 log.warning(u'this is handled \x6f')
0 """Test with no handler active"""
1 from logging import getLogger
2
3
4 root_logger = getLogger()
5
6
7 def run():
8 for x in xrange(500):
9 getLogger('Test')
10 del root_logger.manager.loggerDict['Test']
0 """Tests with a logging handler becoming a noop for comparison"""
1 from logging import getLogger, StreamHandler, ERROR
2 from cStringIO import StringIO
3
4
5 log = getLogger('Testlogger')
6 log.setLevel(ERROR)
7
8
9 def run():
10 out = StringIO()
11 handler = StreamHandler(out)
12 log.addHandler(handler)
13 for x in xrange(500):
14 log.warning('this is not handled')
0 """Tests with a logging handler becoming a noop for comparison"""
1 from logging import getLogger, StreamHandler, ERROR
2 from cStringIO import StringIO
3
4
5 log = getLogger('Testlogger')
6
7
8 def run():
9 out = StringIO()
10 handler = StreamHandler(out)
11 handler.setLevel(ERROR)
12 log.addHandler(handler)
13 for x in xrange(500):
14 log.warning('this is not handled')
0 """Tests with a filter disabling a handler for comparsion in logging"""
1 from logging import getLogger, StreamHandler, Filter
2 from cStringIO import StringIO
3
4
5 log = getLogger('Testlogger')
6
7
8 class DisableFilter(Filter):
9 def filter(self, record):
10 return False
11
12
13 def run():
14 out = StringIO()
15 handler = StreamHandler(out)
16 handler.addFilter(DisableFilter())
17 log.addHandler(handler)
18 for x in xrange(500):
19 log.warning('this is not handled')
0 """Tests the stream handler in logging"""
1 from logging import Logger, StreamHandler
2 from cStringIO import StringIO
3
4
5 log = Logger('Test logger')
6
7
8 def run():
9 out = StringIO()
10 log.addHandler(StreamHandler(out))
11 for x in xrange(500):
12 log.warning('this is not handled')
13 assert out.getvalue().count('\n') == 500
0 """Test with no handler active"""
1 from logbook import Logger, StreamHandler, NullHandler, ERROR
2 from cStringIO import StringIO
3
4
5 log = Logger('Test logger')
6
7
8 def run():
9 out = StringIO()
10 with NullHandler():
11 with StreamHandler(out, level=ERROR) as handler:
12 for x in xrange(500):
13 log.warning('this is not handled')
14 assert not out.getvalue()
0 from logbook import Logger, StreamHandler, NullHandler
1 from cStringIO import StringIO
2
3
4 log = Logger('Test logger')
5
6
7 def run():
8 out = StringIO()
9 with NullHandler():
10 with StreamHandler(out, filter=lambda r, h: False) as handler:
11 for x in xrange(500):
12 log.warning('this is not handled')
13 assert not out.getvalue()
0 """Like the filter test, but with the should_handle implemented"""
1 from logbook import Logger, StreamHandler, NullHandler
2 from cStringIO import StringIO
3
4
5 log = Logger('Test logger')
6
7
8 class CustomStreamHandler(StreamHandler):
9 def should_handle(self, record):
10 return False
11
12
13 def run():
14 out = StringIO()
15 with NullHandler():
16 with CustomStreamHandler(out) as handler:
17 for x in xrange(500):
18 log.warning('this is not handled')
19 assert not out.getvalue()
0 """Tests redirects from logging to logbook"""
1 from logging import getLogger
2 from logbook import StreamHandler
3 from logbook.compat import redirect_logging
4 from cStringIO import StringIO
5
6
7 redirect_logging()
8 log = getLogger('Test logger')
9
10
11 def run():
12 out = StringIO()
13 with StreamHandler(out):
14 for x in xrange(500):
15 log.warning('this is not handled')
16 assert out.getvalue().count('\n') == 500
0 """Tests redirects from logging to logbook"""
1 from logging import getLogger, StreamHandler
2 from logbook.compat import LoggingHandler
3 from cStringIO import StringIO
4
5
6 log = getLogger('Test logger')
7
8
9 def run():
10 out = StringIO()
11 log.addHandler(StreamHandler(out))
12 with LoggingHandler():
13 for x in xrange(500):
14 log.warning('this is not handled')
15 assert out.getvalue().count('\n') == 500
0 """Tests basic stack manipulation performance"""
1 from logbook import Handler, NullHandler, StreamHandler, FileHandler, \
2 ERROR, WARNING
3 from tempfile import NamedTemporaryFile
4 from cStringIO import StringIO
5
6
7 def run():
8 f = NamedTemporaryFile()
9 out = StringIO()
10 with NullHandler():
11 with StreamHandler(out, level=WARNING):
12 with FileHandler(f.name, level=ERROR):
13 for x in xrange(100):
14 list(Handler.stack_manager.iter_context_objects())
0 """Tests the stream handler"""
1 from logbook import Logger, StreamHandler
2 from cStringIO import StringIO
3
4
5 log = Logger('Test logger')
6
7
8 def run():
9 out = StringIO()
10 with StreamHandler(out) as handler:
11 for x in xrange(500):
12 log.warning('this is not handled')
13 assert out.getvalue().count('\n') == 500
0 """Tests the test handler"""
1 from logbook import Logger, TestHandler
2
3
4 log = Logger('Test logger')
5
6
7 def run():
8 with TestHandler() as handler:
9 for x in xrange(500):
10 log.warning('this is not handled')
0 #!/usr/bin/env python
1 """
2 Runs the benchmarks
3 """
4 import sys
5 import os
6 import re
7 from subprocess import Popen
8
9 try:
10 from pkg_resources import get_distribution
11 version = get_distribution('Logbook').version
12 except Exception:
13 version = 'unknown version'
14
15
16 _filename_re = re.compile(r'^bench_(.*?)\.py$')
17 bench_directory = os.path.abspath(os.path.dirname(__file__))
18
19
20 def list_benchmarks():
21 result = []
22 for name in os.listdir(bench_directory):
23 match = _filename_re.match(name)
24 if match is not None:
25 result.append(match.group(1))
26 result.sort(key=lambda x: (x.startswith('logging_'), x.lower()))
27 return result
28
29
30 def run_bench(name):
31 sys.stdout.write('%-32s' % name)
32 sys.stdout.flush()
33 Popen([sys.executable, '-mtimeit', '-s',
34 'from bench_%s import run' % name,
35 'run()']).wait()
36
37
38 def main():
39 print '=' * 80
40 print 'Running benchmark with Logbook %s' % version
41 print '-' * 80
42 os.chdir(bench_directory)
43 for bench in list_benchmarks():
44 run_bench(bench)
45 print '-' * 80
46
47
48 if __name__ == '__main__':
49 main()
0 # Makefile for Sphinx documentation
1 #
2
3 # You can set these variables from the command line.
4 SPHINXOPTS =
5 SPHINXBUILD = sphinx-build
6 PAPER =
7 BUILDDIR = _build
8
9 # Internal variables.
10 PAPEROPT_a4 = -D latex_paper_size=a4
11 PAPEROPT_letter = -D latex_paper_size=letter
12 ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
13
14 .PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest
15
16 help:
17 @echo "Please use \`make <target>' where <target> is one of"
18 @echo " html to make standalone HTML files"
19 @echo " dirhtml to make HTML files named index.html in directories"
20 @echo " singlehtml to make a single large HTML file"
21 @echo " pickle to make pickle files"
22 @echo " json to make JSON files"
23 @echo " htmlhelp to make HTML files and a HTML help project"
24 @echo " qthelp to make HTML files and a qthelp project"
25 @echo " devhelp to make HTML files and a Devhelp project"
26 @echo " epub to make an epub"
27 @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
28 @echo " latexpdf to make LaTeX files and run them through pdflatex"
29 @echo " text to make text files"
30 @echo " man to make manual pages"
31 @echo " changes to make an overview of all changed/added/deprecated items"
32 @echo " linkcheck to check all external links for integrity"
33 @echo " doctest to run all doctests embedded in the documentation (if enabled)"
34
35 clean:
36 -rm -rf $(BUILDDIR)/*
37
38 html:
39 $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
40 @echo
41 @echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
42
43 dirhtml:
44 $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
45 @echo
46 @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
47
48 singlehtml:
49 $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
50 @echo
51 @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
52
53 pickle:
54 $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
55 @echo
56 @echo "Build finished; now you can process the pickle files."
57
58 json:
59 $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
60 @echo
61 @echo "Build finished; now you can process the JSON files."
62
63 htmlhelp:
64 $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
65 @echo
66 @echo "Build finished; now you can run HTML Help Workshop with the" \
67 ".hhp project file in $(BUILDDIR)/htmlhelp."
68
69 qthelp:
70 $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
71 @echo
72 @echo "Build finished; now you can run "qcollectiongenerator" with the" \
73 ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
74 @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/Logbook.qhcp"
75 @echo "To view the help file:"
76 @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/Logbook.qhc"
77
78 devhelp:
79 $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
80 @echo
81 @echo "Build finished."
82 @echo "To view the help file:"
83 @echo "# mkdir -p $$HOME/.local/share/devhelp/Logbook"
84 @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/Logbook"
85 @echo "# devhelp"
86
87 epub:
88 $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
89 @echo
90 @echo "Build finished. The epub file is in $(BUILDDIR)/epub."
91
92 latex:
93 $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
94 @echo
95 @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
96 @echo "Run \`make' in that directory to run these through (pdf)latex" \
97 "(use \`make latexpdf' here to do that automatically)."
98
99 latexpdf:
100 $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
101 @echo "Running LaTeX files through pdflatex..."
102 make -C $(BUILDDIR)/latex all-pdf
103 @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
104
105 text:
106 $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
107 @echo
108 @echo "Build finished. The text files are in $(BUILDDIR)/text."
109
110 man:
111 $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
112 @echo
113 @echo "Build finished. The manual pages are in $(BUILDDIR)/man."
114
115 changes:
116 $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
117 @echo
118 @echo "The overview file is in $(BUILDDIR)/changes."
119
120 linkcheck:
121 $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
122 @echo
123 @echo "Link check complete; look for any errors in the above output " \
124 "or in $(BUILDDIR)/linkcheck/output.txt."
125
126 doctest:
127 $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
128 @echo "Testing of doctests in the sources finished, look at the " \
129 "results in $(BUILDDIR)/doctest/output.txt."
0 Core Interface
1 ==============
2
3 This implements the core interface.
4
5 .. module:: logbook
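
A minimal usage sketch (the handler choice and channel name here are
illustrative, not prescriptive)::

    from logbook import Logger, StderrHandler

    log = Logger('MyApp')  # loggers are plain objects, no global registry
    with StderrHandler(level='WARNING').applicationbound():
        log.warn('something happened')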
6
7 .. autoclass:: Logger
8 :members:
9 :inherited-members:
10
11 .. autoclass:: LoggerGroup
12 :members:
13
14 .. autoclass:: LogRecord
15 :members:
16
17 .. autoclass:: Flags
18 :members:
19 :inherited-members:
20
21 .. autoclass:: Processor
22 :members:
23 :inherited-members:
24
25 .. autofunction:: get_level_name
26
27 .. autofunction:: lookup_level
28
29 .. data:: CRITICAL
30 ERROR
31 WARNING
32 INFO
33 DEBUG
34 NOTSET
35
36 The log level constants
0 Compatibility
1 =============
2
3 This documents compatibility support with existing systems such as
4 :mod:`logging` and :mod:`warnings`.
5
6 .. module:: logbook.compat
7
8 Logging Compatibility
9 ---------------------
10
11 .. autofunction:: redirect_logging
12
13 .. autofunction:: redirected_logging
14
15 .. autoclass:: RedirectLoggingHandler
16 :members:
17
18 .. autoclass:: LoggingHandler
19 :members:
20
21
22 Warnings Compatibility
23 ----------------------
24
25 .. autofunction:: redirect_warnings
26
27 .. autofunction:: redirected_warnings
0 Handlers
1 ========
2
3 This documents the base handler interface as well as the provided core
4 handlers. There are additional handlers for special purposes in the
5 :mod:`logbook.more`, :mod:`logbook.ticketing` and :mod:`logbook.queues`
6 modules.
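
As a rough sketch of the common usage pattern, a handler is activated by
binding it to a context rather than by registering it globally (the file
name and channel name below are illustrative)::

    from logbook import FileHandler, Logger

    log = Logger('Worker')
    # the handler is bound for the current thread only
    with FileHandler('app.log', level='INFO').threadbound():
        log.info('handled by the file handler in this thread only')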
7
8 .. module:: logbook
9
10 Base Interface
11 --------------
12
13 .. autoclass:: Handler
14 :members:
15 :inherited-members:
16
17 .. autoclass:: NestedSetup
18 :members:
19
20 .. autoclass:: StringFormatter
21 :members:
22
23 Core Handlers
24 -------------
25
26 .. autoclass:: StreamHandler
27 :members:
28
29 .. autoclass:: FileHandler
30 :members:
31
32 .. autoclass:: MonitoringFileHandler
33 :members:
34
35 .. autoclass:: StderrHandler
36 :members:
37
38 .. autoclass:: RotatingFileHandler
39 :members:
40
41 .. autoclass:: TimedRotatingFileHandler
42 :members:
43
44 .. autoclass:: TestHandler
45 :members:
46
47 .. autoclass:: MailHandler
48 :members:
49
50 .. autoclass:: GMailHandler
51 :members:
52
53 .. autoclass:: SyslogHandler
54 :members:
55
56 .. autoclass:: NTEventLogHandler
57 :members:
58
59 .. autoclass:: NullHandler
60 :members:
61
62 .. autoclass:: WrapperHandler
63 :members:
64
65 .. autofunction:: create_syshandler
66
67 Special Handlers
68 ----------------
69
70 .. autoclass:: FingersCrossedHandler
71 :members:
72
73 .. autoclass:: GroupHandler
74 :members:
75
76 Mixin Classes
77 -------------
78
79 .. autoclass:: StringFormatterHandlerMixin
80 :members:
81
82 .. autoclass:: HashingHandlerMixin
83 :members:
84
85 .. autoclass:: LimitingHandlerMixin
86 :members:
0 API Documentation
1 =================
2
3 This part of the documentation documents all the classes and functions
4 provided by Logbook.
5
6 .. toctree::
7
8 base
9 handlers
10 utilities
11 queues
12 ticketing
13 more
14 notifiers
15 compat
16 internal
0 Internal API
1 ============
2
3 This documents the internal API that might be useful for more advanced
4 setups or custom handlers.
5
6 .. module:: logbook.base
7
8 .. autofunction:: dispatch_record
9
10 .. autoclass:: StackedObject
11 :members:
12
13 .. autoclass:: RecordDispatcher
14 :members:
15
16 .. autoclass:: LoggerMixin
17 :members:
18 :inherited-members:
19
20 .. module:: logbook.handlers
21
22 .. autoclass:: RotatingFileHandlerBase
23 :members:
24
25 .. autoclass:: StringFormatterHandlerMixin
26 :members:
0 The More Module
1 ===============
2
3 The more module implements special handlers and other things that are
4 beyond the scope of Logbook itself or depend on external libraries.
5 Additionally there are some handlers in :mod:`logbook.ticketing`,
6 :mod:`logbook.queues` and :mod:`logbook.notifiers`.
7
8 .. module:: logbook.more
9
10 Tagged Logging
11 --------------
12
13 .. autoclass:: TaggingLogger
14 :members:
15 :inherited-members:
16
17 .. autoclass:: TaggingHandler
18 :members:
19
20 Special Handlers
21 ----------------
22
23 .. autoclass:: TwitterHandler
24 :members:
25
26 .. autoclass:: ExternalApplicationHandler
27 :members:
28
29 .. autoclass:: ExceptionHandler
30 :members:
31
32 Colorized Handlers
33 ------------------
34
35 .. versionadded:: 0.3
36
37 .. autoclass:: ColorizedStderrHandler
38
39 .. autoclass:: ColorizingStreamHandlerMixin
40 :members:
41
42 Other
43 -----
44
45 .. autoclass:: JinjaFormatter
46 :members:
0 .. _notifiers:
1
2 The Notifiers Module
3 ====================
4
5 The notifiers module implements special handlers for various platforms
6 that depend on external libraries.
9
10 .. module:: logbook.notifiers
11
12 .. autofunction:: create_notification_handler
13
14 OSX Specific Handlers
15 ---------------------
16
17 .. autoclass:: GrowlHandler
18 :members:
19
20 Linux Specific Handlers
21 -----------------------
22
23 .. autoclass:: LibNotifyHandler
24 :members:
25
26 Other Services
27 --------------
28
29 .. autoclass:: BoxcarHandler
30 :members:
31
32 .. autoclass:: NotifoHandler
33 :members:
34
35 Base Interface
36 --------------
37
38 .. autoclass:: NotificationBaseHandler
39 :members:
0 Queue Support
1 =============
2
3 The queue support module makes it possible to add log records to a queue
4 system. This is useful for distributed setups where you want multiple
5 processes to log to the same backend. Currently supported are ZeroMQ as
6 well as the :mod:`multiprocessing` :class:`~multiprocessing.Queue` class.
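
A minimal sketch of the ZeroMQ pipeline (requires pyzmq; the address is an
arbitrary example)::

    from logbook import Logger
    from logbook.queues import ZeroMQHandler, ZeroMQSubscriber

    # in the emitting process
    log = Logger('App')
    with ZeroMQHandler('tcp://127.0.0.1:5000'):
        log.warn('hello from afar')

    # in the consuming process: dispatch incoming records to the
    # local handler stack from a background thread
    subscriber = ZeroMQSubscriber('tcp://127.0.0.1:5000')
    controller = subscriber.dispatch_in_background()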
7
8 .. module:: logbook.queues
9
10 ZeroMQ
11 ------
12
13 .. autoclass:: ZeroMQHandler
14 :members:
15
16 .. autoclass:: ZeroMQSubscriber
17 :members:
18 :inherited-members:
19
20 Redis
21 -----
22
23 .. autoclass:: RedisHandler
24 :members:
25
26 MultiProcessing
27 ---------------
28
29 .. autoclass:: MultiProcessingHandler
30 :members:
31
32 .. autoclass:: MultiProcessingSubscriber
33 :members:
34 :inherited-members:
35
36 Other
37 -----
38
39 .. autoclass:: ThreadedWrapperHandler
40 :members:
41
42 .. autoclass:: SubscriberGroup
43 :members:
44
45 Base Interface
46 --------------
47
48 .. autoclass:: SubscriberBase
49 :members:
50
51 .. autoclass:: ThreadController
52 :members:
53
54 .. autoclass:: TWHThreadController
55 :members:
0 Ticketing Support
1 =================
2
3 This documents the support classes for ticketing. With ticketing handlers,
4 log records are categorized by location, and for every emitted log record a
5 count is added. That way you know how often certain messages are
6 triggered, at what times, and when the last occurrence was.
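
A minimal sketch, assuming the default SQLAlchemy backend and an
illustrative SQLite URI::

    from logbook import Logger
    from logbook.ticketing import TicketingHandler

    log = Logger('App')
    with TicketingHandler('sqlite:///tickets.db').applicationbound():
        # repeated occurrences update the count on the existing ticket
        log.error('this record becomes a ticket')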
7
8 .. module:: logbook.ticketing
9
10 .. autoclass:: TicketingBaseHandler
11 :members:
12
13 .. autoclass:: TicketingHandler
14 :members:
15
16 .. autoclass:: BackendBase
17 :members:
18
19 .. autoclass:: SQLAlchemyBackend
20
21 .. autoclass:: MongoDBBackend
0 Utilities
1 =========
2
3 This documents general purpose utility functions available in Logbook.
4
5 .. module:: logbook
6
7 .. autofunction:: debug
8
9 .. autofunction:: info
10
11 .. autofunction:: warn
12
13 .. autofunction:: warning
14
15 .. autofunction:: notice
16
17 .. autofunction:: error
18
19 .. autofunction:: exception
20
21 .. autofunction:: catch_exceptions
22
23 .. autofunction:: critical
24
25 .. autofunction:: log
26
27 .. autofunction:: set_datetime_format
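
A minimal sketch of the module-level helpers (these log through whatever
handlers are currently bound, falling back to the default stderr handler)::

    import logbook

    logbook.set_datetime_format('local')  # record timestamps in local time
    logbook.warn('module-level logging without creating a Logger')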
0 .. include:: ../CHANGES
0 .. _logging-compat:
1
2 Logging Compatibility
3 =====================
4
5 Logbook provides backwards compatibility with the logging library. When
6 activated, the logging library will transparently redirect all the logging calls
7 to your Logbook logging setup.
8
9 Basic Setup
10 -----------
11
12 If you import the compat system and call the
13 :func:`~logbook.compat.redirect_logging` function, all logging calls that happen
14 after this call will transparently be redirected to Logbook::
15
16 from logbook.compat import redirect_logging
17 redirect_logging()
18
19 This also means you don't have to call :func:`logging.basicConfig`:
20
21 >>> from logbook.compat import redirect_logging
22 >>> redirect_logging()
23 >>> from logging import getLogger
24 >>> log = getLogger('My Logger')
25 >>> log.warn('This is a warning')
26 [2010-07-25 00:24] WARNING: My Logger: This is a warning
27
28 Advanced Setup
29 --------------
30
31 The way this is implemented is with a
32 :class:`~logbook.compat.RedirectLoggingHandler`. This class is a handler for
33 the old logging system that sends records via an internal logbook logger to the
34 active logbook handlers. This handler can then be added to specific logging
35 loggers if you want:
36
37 >>> from logging import getLogger
38 >>> mylog = getLogger('My Log')
39 >>> from logbook.compat import RedirectLoggingHandler
40 >>> mylog.addHandler(RedirectLoggingHandler())
41 >>> otherlog = getLogger('Other Log')
42 >>> otherlog.warn('logging is deprecated')
43 No handlers could be found for logger "Other Log"
44 >>> mylog.warn('but logbook is awesome')
45 [2010-07-25 00:29] WARNING: My Log: but logbook is awesome
46
47 Reverse Redirects
48 -----------------
49
50 You can also redirect logbook records to logging, so the other way round.
51 For this you just have to activate the
52 :class:`~logbook.compat.LoggingHandler` for the thread or application::
53
54 from logbook import Logger
55 from logbook.compat import LoggingHandler
56
57 log = Logger('My app')
58 with LoggingHandler():
59 log.warn('Going to logging')
0 # -*- coding: utf-8 -*-
1 #
2 # Logbook documentation build configuration file, created by
3 # sphinx-quickstart on Fri Jul 23 16:54:49 2010.
4 #
5 # This file is execfile()d with the current directory set to its containing dir.
6 #
7 # Note that not all possible configuration values are present in this
8 # autogenerated file.
9 #
10 # All configuration values have a default; values that are commented out
11 # serve to show the default.
12
13 import sys, os
14
15 # If extensions (or modules to document with autodoc) are in another directory,
16 # add these directories to sys.path here. If the directory is relative to the
17 # documentation root, use os.path.abspath to make it absolute, like shown here.
18 sys.path.extend((os.path.abspath('.'), os.path.abspath('..')))
19
20 # -- General configuration -----------------------------------------------------
21
22 # If your documentation needs a minimal Sphinx version, state it here.
23 #needs_sphinx = '1.0'
24
25 # Add any Sphinx extension module names here, as strings. They can be extensions
26 # coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
27 extensions = ['sphinx.ext.autodoc', 'sphinx.ext.intersphinx']
28
29 # Add any paths that contain templates here, relative to this directory.
30 templates_path = ['_templates']
31
32 # The suffix of source filenames.
33 source_suffix = '.rst'
34
35 # The encoding of source files.
36 #source_encoding = 'utf-8-sig'
37
38 # The master toctree document.
39 master_doc = 'index'
40
41 # General information about the project.
42 project = u'Logbook'
43 copyright = u'2010, Armin Ronacher, Georg Brandl'
44
45 # The version info for the project you're documenting, acts as replacement for
46 # |version| and |release|, also used in various other places throughout the
47 # built documents.
48 #
49 # The short X.Y version.
50 version = '0.6.1-dev'
51 # The full version, including alpha/beta/rc tags.
52 release = '0.6.1-dev'
53
54 # The language for content autogenerated by Sphinx. Refer to documentation
55 # for a list of supported languages.
56 #language = None
57
58 # There are two options for replacing |today|: either, you set today to some
59 # non-false value, then it is used:
60 #today = ''
61 # Else, today_fmt is used as the format for a strftime call.
62 #today_fmt = '%B %d, %Y'
63
64 # List of patterns, relative to source directory, that match files and
65 # directories to ignore when looking for source files.
66 exclude_patterns = ['_build']
67
68 # The reST default role (used for this markup: `text`) to use for all documents.
69 #default_role = None
70
71 # If true, '()' will be appended to :func: etc. cross-reference text.
72 #add_function_parentheses = True
73
74 # If true, the current module name will be prepended to all description
75 # unit titles (such as .. function::).
76 #add_module_names = True
77
78 # If true, sectionauthor and moduleauthor directives will be shown in the
79 # output. They are ignored by default.
80 #show_authors = False
81
82 # The name of the Pygments (syntax highlighting) style to use.
83 pygments_style = 'sphinx'
84
85 # A list of ignored prefixes for module index sorting.
86 #modindex_common_prefix = []
87
88
89 # -- Options for HTML output ---------------------------------------------------
90
91 # The theme to use for HTML and HTML Help pages. See the documentation for
92 # a list of builtin themes.
93 html_theme = 'sheet'
94
95 # Theme options are theme-specific and customize the look and feel of a theme
96 # further. For a list of options available for each theme, see the
97 # documentation.
98 html_theme_options = {
99 'nosidebar': True,
100 }
101
102 # Add any paths that contain custom themes here, relative to this directory.
103 html_theme_path = ['.']
104
105 # The name for this set of Sphinx documents. If None, it defaults to
106 # "<project> v<release> documentation".
107 html_title = "Logbook"
108
109 # A shorter title for the navigation bar. Default is the same as html_title.
110 html_short_title = "Logbook " + release
111
112 # The name of an image file (relative to this directory) to place at the top
113 # of the sidebar.
114 #html_logo = None
115
116 # The name of an image file (within the static path) to use as favicon of the
117 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
118 # pixels large.
119 #html_favicon = None
120
121 # Add any paths that contain custom static files (such as style sheets) here,
122 # relative to this directory. They are copied after the builtin static files,
123 # so a file named "default.css" will overwrite the builtin "default.css".
124 #html_static_path = ['_static']
125
126 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
127 # using the given strftime format.
128 #html_last_updated_fmt = '%b %d, %Y'
129
130 # If true, SmartyPants will be used to convert quotes and dashes to
131 # typographically correct entities.
132 #html_use_smartypants = True
133
134 # Custom sidebar templates, maps document names to template names.
135 #html_sidebars = {}
136
137 # Additional templates that should be rendered to pages, maps page names to
138 # template names.
139 #html_additional_pages = {}
140
141 # If false, no module index is generated.
142 #html_domain_indices = True
143
144 # If false, no index is generated.
145 #html_use_index = True
146
147 # If true, the index is split into individual pages for each letter.
148 #html_split_index = False
149
150 # If true, links to the reST sources are added to the pages.
151 #html_show_sourcelink = True
152
153 # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
154 #html_show_sphinx = True
155
156 # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
157 #html_show_copyright = True
158
159 html_add_permalinks = False
160
161 # If true, an OpenSearch description file will be output, and all pages will
162 # contain a <link> tag referring to it. The value of this option must be the
163 # base URL from which the finished HTML is served.
164 #html_use_opensearch = ''
165
166 # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
167 #html_file_suffix = ''
168
169 # Output file base name for HTML help builder.
170 htmlhelp_basename = 'Logbookdoc'
171
172
173 # -- Options for LaTeX output --------------------------------------------------
174
175 # The paper size ('letter' or 'a4').
176 #latex_paper_size = 'letter'
177
178 # The font size ('10pt', '11pt' or '12pt').
179 #latex_font_size = '10pt'
180
181 # Grouping the document tree into LaTeX files. List of tuples
182 # (source start file, target name, title, author, documentclass [howto/manual]).
183 latex_documents = [
184 ('index', 'Logbook.tex', u'Logbook Documentation',
185 u'Armin Ronacher, Georg Brandl', 'manual'),
186 ]
187
188 # The name of an image file (relative to this directory) to place at the top of
189 # the title page.
190 #latex_logo = None
191
192 # For "manual" documents, if this is true, then toplevel headings are parts,
193 # not chapters.
194 #latex_use_parts = False
195
196 # If true, show page references after internal links.
197 #latex_show_pagerefs = False
198
199 # If true, show URL addresses after external links.
200 #latex_show_urls = False
201
202 # Additional stuff for the LaTeX preamble.
203 #latex_preamble = ''
204
205 # Documents to append as an appendix to all manuals.
206 #latex_appendices = []
207
208 # If false, no module index is generated.
209 #latex_domain_indices = True
210
211
212 # -- Options for manual page output --------------------------------------------
213
214 # One entry per manual page. List of tuples
215 # (source start file, name, description, authors, manual section).
216 man_pages = [
217 ('index', 'logbook', u'Logbook Documentation',
218 [u'Armin Ronacher, Georg Brandl'], 1)
219 ]
220
221 intersphinx_mapping = {
222 'http://docs.python.org': None
223 }
0 Design Principles
1 =================
2
3 .. currentmodule:: logbook
4
5 Logbook is a logging library that breaks many expectations people have of
6 logging libraries in order to support paradigms we think are more suitable for
7 modern applications than the traditional Java-inspired logging system that
8 can also be found in the Python standard library and many other programming
9 languages.
10
11 This section of the documentation should help you understand the design of
12 Logbook and why it was implemented like this.
13
14 No Logger Registry
15 ------------------
16
17 Logbook is unique in that it has the concept of logging channels but that
18 it does not keep a global registry of them. In the standard library's
19 logging module a logger is attached to a tree of loggers that are stored
20 in the logging module itself as global state.
21
22 In logbook a logger is just an opaque object that might or might not have
23 a name and attached information such as log level or customizations, but
24 the lifetime and availability of that object is controlled by the person
25 creating that logger.
26
27 The registry is necessary for the logging library to give the user the
28 ability to configure these loggers.
29
30 Logbook has a completely different concept of dispatching from loggers to
31 the actual handlers which removes the requirement and usefulness of such a
32 registry. The advantage of the logbook system is that it's a cheap
33 operation to create a logger and that a logger can easily be garbage
34 collected to remove all traces of it.
35
36 Instead Logbook moves the burden of delivering a log record from the log
37 channel's attached log to an independent entity that looks at the context
38 of the execution to figure out where to deliver it.
39
40 Context Sensitive Handler Stack
41 -------------------------------
42
43 Python has two builtin ways to express implicit context: processes and
44 threads. What this means is that if you have a function that is passed no
45 arguments at all, you can figure out what thread called the function and
46 what process you are sitting in. Logbook supports this context
47 information and lets you bind a handler (or more!) for such a context.
48
49 This is how this works: there are two stacks available at all times in
50 Logbook. The first stack is the process wide stack. It is manipulated
51 with :class:`Handler.push_application` and
52 :class:`Handler.pop_application` (and of course the context manager
53 :class:`Handler.applicationbound`). Then there is a second stack which is
54 per thread. The manipulation of that stack happens with
55 :class:`Handler.push_thread`, :class:`Handler.pop_thread` and the
56 :class:`Handler.threadbound` contextmanager.
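
In code, the explicit calls and the context managers are interchangeable;
a minimal sketch::

    from logbook import StderrHandler

    handler = StderrHandler()
    handler.push_application()   # active process-wide from here on
    # ... log things ...
    handler.pop_application()

    # the equivalent, scoped form:
    with StderrHandler().applicationbound():
        pass  # ... log things ...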
57
58 Let's take a WSGI web application as first example. When a request comes
59 in your WSGI server will most likely do one of the following two things:
60 either spawn a new Python process (or reuse a process in a pool), or
61 create a thread (or again, reuse something that already exists). Either
62 way, we can now say that the context of process id and thread id is our
63 playground. For this context we can define a log handler that is active
64 in this context only for a certain time. In pseudocode this would look
65 like this::
66
67 def my_application(environ, start_response):
68 my_handler = FileHandler(...)
69 my_handler.push_thread()
70 try:
71 # whatever happens here in terms of logging is handled
72 # by the `my_handler` handler.
73 ...
74 finally:
75 my_handler.pop_thread()
76
77 Because this is a lot to type, you can also use the `with` statement to do
78 the very same::
79
80 def my_application(environ, start_response):
81 with FileHandler(...).threadbound() as my_handler:
82 # whatever happens here in terms of logging is handled
83 # by the `my_handler` handler.
84 ...
85
86 Additionally there is another place where you can put handlers: directly
87 onto a logging channel (for example on a :class:`Logger`).
88
89 This stack system might seem like overkill for a traditional system, but
90 it allows complete decoupling from the log handling system and other
91 systems that might log messages.
92
93 Let's take a GUI application rather than a web application. You have an
94 application that starts up, shuts down and at any point in between might
95 fail or log messages. The typical default behaviour here would be to log
96 into a logfile. Fair enough, that's how these applications work.
97
98 But what's the point in logging if not even a single warning happened?
99 The traditional solution with the logging library from Python is to set
100 the level high (like `ERROR` or `WARNING`) and log into a file. When
101 things break, you have a look at the file and hope it contains enough
102 information.
103
104 When you are in full control of the context of execution with a stack based
105 system like Logbook has, there is a lot more you can do.
106
107 For example, immediately after your application boots up you could
108 instantiate a :class:`~logbook.FingersCrossedHandler`. This handler
109 buffers *all* log records in memory and does not emit them at all. What's
110 the point? That handler activates when a certain threshold is reached.
111 For example, when the first warning occurs you can write the buffered
112 messages as well as the warning that just happened into a logfile and
113 continue logging from that point. After all, there is no point in logging
114 when you will never look at that file anyway.
115
116 But that alone is not the killer feature of a stack. In a GUI application
117 there is the point where we are still initializing the windowing system.
118 So a file is the best place to log messages. But once we have the GUI
119 initialized, it would be very helpful to show error messages to a user in
120 a console window or a dialog. So at that point we can initialize a new
121 handler that logs into a dialog.
122
123 When a long-running task then starts in the GUI, we can move it into a
124 separate thread and intercept all the log calls for that thread into a
125 separate window until the task has succeeded.
126
127 Here is such a setup in pseudocode::
128
129 from logbook import FileHandler, WARNING
130 from logbook import FingersCrossedHandler
131
132 def main():
133 # first we set up a handler that logs everything (including debug
134 # messages) but only starts writing once a warning happens
135 default_handler = FingersCrossedHandler(FileHandler(filename,
136 delay=True),
137 WARNING)
138 # this handler is now activated as the default handler for the
139 # whole process. We do not bubble up to the default handler
140 # that logs to stderr.
141 with default_handler.applicationbound(bubble=False):
142 # now we initialize the GUI of the application
143 initialize_gui()
144 # at that point we can hook our own logger in that intercepts
145 # errors and displays them in a log window
146 with gui.log_handler.applicationbound():
147 # run the gui mainloop
148 gui.mainloop()
149
150 This stack can also be used to inject additional information automatically
151 into log records. This is also used to replace the need for custom log
152 levels.
153
154 No Custom Log Levels
155 --------------------
156
157 This change over logging was controversial, even among the two original
158 core developers. There clearly are use cases for custom log levels, but
159 there is an inherent problem with them: they require a registry. If you
160 want custom log levels, you will have to register them somewhere or parts
161 of the system will not know about them. Now we just spent a lot of time
162 ripping out the registry in favour of a stack-based approach to solve delivery
163 problems; why introduce global state again just for log levels?
164
165 Instead we looked at the cases where custom log levels are useful and
166 figured that in most situations custom log levels are used to put
167 additional information into a log entry. For example it's not uncommon to
168 have separate log levels to filter user input out of a logfile.
169
170 We instead provide powerful tools to inject arbitrary additional data into
171 log records with the concept of log processors.
172
173 So for example if you want to log user input and tag it appropriately you
174 can override the :meth:`Logger.process_record` method::
175
176 class InputLogger(Logger):
177 def process_record(self, record):
178 record.extra['kind'] = 'input'
179
180 A handler can then use this information to filter out input::
181
182 def no_input(record, handler):
183 return record.extra.get('kind') != 'input'
184
185 with MyHandler().threadbound(filter=no_input):
186 ...
187
188 Injecting Context-Sensitive Information
189 ---------------------------------------
190
191 For many situations it's not only necessary to inject information on a
192 per-channel basis but also for all logging calls from a given context.
193 This is best explained for web applications again. If you have some
194 libraries doing logging in code that is triggered from a request you might
195 want to record the URL of that request for each log record so that you get
196 an idea where a specific error happened.
197
198 This can easily be accomplished by registering a custom processor when
199 binding a handler to a thread::
200
201 def my_application(environ, start_response):
202 def inject_request_info(record, handler):
203 record.extra['path'] = environ['PATH_INFO']
204 with Processor(inject_request_info).threadbound():
205 with my_handler.threadbound():
206 # rest of the request code here
207 ...
208
209 Logging Compatibility
210 ---------------------
211
212 The last pillar of logbook's design is its compatibility with the standard
213 library's logging system. Many existing libraries currently
214 log information with the standard library's logging module. Having
215 two separate logging systems in the same process is counterproductive and
216 will cause separate logfiles to appear in the best case, or complete chaos
217 in the worst.
218
219 Because of that, logbook provides ways to transparently redirect all
220 logging records into the logbook stack-based record delivery system. That
221 way you can even continue to use the standard library's logging system to
222 emit log messages and take full advantage of logbook's powerful
223 stack system.
224
225 If you are curious, have a look at :ref:`logging-compat`.
0 The Design Explained
1 ====================
2
3 This part of the documentation explains the design of Logbook in detail.
4 This is not strictly necessary to make use of Logbook but might be helpful
5 when writing custom handlers for Logbook or when using it in a more
6 complex environment.
7
8 Dispatchers and Channels
9 ------------------------
10
11 Logbook does not use traditional loggers, instead a logger is internally
12 named as :class:`~logbook.base.RecordDispatcher`. While a logger also has
13 methods to create new log records, the base class for all record
14 dispatchers itself only has ways to dispatch :class:`~logbook.LogRecord`\s
15 to the handlers. A log record itself might have an attribute that points
16 to the dispatcher that was responsible for dispatching, but it does not
17 have to.
18
19 If a log record was created from the builtin :class:`~logbook.Logger` it
20 will have the channel set to the name of the logger. But that itself is
21 no requirement. The only requirement for the channel is that it's a
22 string with some human readable origin information. It could be
23 ``'Database'`` if the database issued the log record, it could be
24 ``'Process-4223'`` if the process with the pid 4223 issued it etc.
25
26 For example if you are logging from the :func:`logbook.log` function they
28 will have a channel set, but no dispatcher:
28
29 >>> from logbook import TestHandler, warn
30 >>> handler = TestHandler()
31 >>> handler.push_application()
32 >>> warn('This is a warning')
33 >>> handler.records[0].channel
34 'Generic'
35 >>> handler.records[0].dispatcher is None
36 True
37
38 If you are logging from a custom logger, the dispatcher attribute points to
39 the logger for as long as the logger is not garbage collected:
40
41 >>> from logbook import Logger, TestHandler
42 >>> logger = Logger('Console')
43 >>> handler = TestHandler()
44 >>> handler.push_application()
45 >>> logger.warn('A warning')
46 >>> handler.records[0].dispatcher is logger
47 True
48
49 You don't need a record dispatcher to dispatch a log record though. The
50 default dispatching can be triggered from a function
51 :func:`~logbook.base.dispatch_record`:
52
53 >>> from logbook import dispatch_record, LogRecord, INFO
54 >>> record = LogRecord('My channel', INFO, 'Hello World!')
55 >>> dispatch_record(record)
56 [2010-09-04 15:56] INFO: My channel: Hello World!
57
58 It is pretty common for log records to be created without a dispatcher.
59 Here are some common use cases for log records without a dispatcher:
60
61 - log records that were redirected from a different logging system
62 such as the standard library's :mod:`logging` module or the
63 :mod:`warnings` module.
64 - log records that came from different processes and do not have a
65 dispatcher equivalent in the current process.
66 - log records that came from over the network.
67
68 The Log Record Container
69 ------------------------
70
71 The :class:`~logbook.LogRecord` class is a simple container that
72 holds all the information necessary for a log record. Usually they are
73 created from a :class:`~logbook.Logger` or one of the default log
74 functions (:func:`logbook.warn` etc.) and immediately dispatched to the
75 handlers. The logger will apply some additional knowledge to figure out
76 where the record was created from and whether traceback information should be
77 attached.
78
79 Normally if log records are dispatched they will be closed immediately
80 after all handlers have had their chance to write them down. On closing, the
81 interpreter frame and traceback object will be removed from the log record
82 to break up circular dependencies.
83
84 Sometimes however it might be necessary to keep log records around for a
85 longer time. Logbook provides three different ways to accomplish that:
86
87 1. Handlers can set the :attr:`~logbook.LogRecord.keep_open` attribute of
88 a log record to `True` so that the record dispatcher will not close
89 the object. This is for example used by the
90 :class:`~logbook.TestHandler` so that unittests can still access
91 interpreter frames and traceback objects if necessary.
92 2. Because some information on the log records depends on the interpreter
93 frame (such as the location of the log call) it is possible to pull
94 that related information directly into the log record so that it can
95 safely be closed without losing that information (see
96 :meth:`~logbook.LogRecord.pull_information`).
97 3. Last but not least, log records can be converted to dictionaries and
98 recreated from these. It is also possible to make these dictionaries
99 safe for JSON export which is used by the
100 :class:`~logbook.ticketing.TicketingHandler` to store information in a
101 database or the :class:`~logbook.more.MultiProcessingHandler` to send
102 information between processes.
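
For illustration, here is a minimal sketch of the second and third
approach; it assumes the record has not been closed yet when the
information is pulled::

    from logbook import LogRecord, INFO

    record = LogRecord('My channel', INFO, 'Hello World!')
    record.pull_information()   # copy frame-dependent data onto the record
    record.close()              # now safe; the pulled information survives

    # round-trip through a JSON-safe dictionary, e.g. for storage or IPC
    exported = record.to_dict(json_safe=True)
    restored = LogRecord.from_dict(exported)
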
0 What does it do?
1 ================
2
3 Although the Python standard library provides a logging system, you should
4 consider having a look at Logbook for your applications.
5
6 We think it will work out for you and be fun to use :)
7
8 Logbook leverages some features of Python that are not available in older Python releases.
9 Logbook currently requires Python 2.7 or higher, including Python 3 (3.1 or
10 higher; 3.0 is not supported).
11
12 Core Features
13 -------------
14
15 - Logbook is based on the concept of loggers that are extensible by the
16 application.
17 - Each logger and handler, as well as other parts of the system, may inject
18 additional information into the logging record that improves the usefulness
19 of log entries.
20 - Handlers can be set on an application-wide stack as well as a thread-wide
21 stack. Setting a handler does not replace existing handlers, but gives it
22 higher priority. Each handler has the ability to prevent records from
23 propagating to lower-priority handlers.
24 - Logbook comes with a sensible default configuration that spits all the
25 information to stderr in a useful manner.
26 - All of the built-in handlers have a useful default configuration applied with
27 formatters that provide all the available information in a format that
28 makes the most sense for the given handler. For example, a default stream
29 handler will try to put all the required information into one line, whereas
30 an email handler will split it up into nicely formatted ASCII tables that
31 span multiple lines.
32 - Logbook has built-in handlers for streams, arbitrary files, files with time-
33 and size-based rotation, a handler that delivers mails, a handler for the
34 syslog daemon as well as one for the NT event log.
35 - There is also a special "fingers crossed" handler that, in combination with
36 the handler stack, has the ability to accumulate all logging messages and
37 will deliver those in case a severity level was exceeded. For example, it
38 can withhold all logging messages for a specific request to a web
39 application until an error record appears, in which case it will also send
40 all withheld records to the handler it wraps. This way, you can always log
41 lots of debugging records, but only get to see them when they can actually
42 tell you something of interest.
43 - It is possible to inject a handler for testing that records messages for
44 assertions.
45 - Logbook was designed to be fast and with modern Python features in mind.
46 For example, it uses context managers to handle the stack of handlers as
47 well as new-style string formatting for all of the core log calls.
48 - Builtin support for ZeroMQ, RabbitMQ, Redis and other means to distribute
49 log messages between heavily distributed systems and multiple processes.
50 - The Logbook system does not depend on log levels. In fact, custom log
51 levels are not supported; instead, we strongly recommend using logger
52 subclasses or log processors that inject tagged information into the log
53 record for this purpose.
54 - :pep:`8` naming and code style.
55
56 Advantages over Logging
57 -----------------------
58
59 If properly configured, Logbook's logging calls will be very cheap and
60 provide a great performance improvement over an equivalent configuration
61 of the standard library's logging module. While some parts are not yet at
62 the performance we desire, there will be further performance
63 improvements in upcoming versions.
64
65 It also supports the ability to inject additional information for all
66 logging calls happening in a specific thread or for the whole application.
67 For example, this makes it possible for a web application to add
68 request-specific information to each log record such as remote address,
69 request URL, HTTP method and more.
70
71 The logging system is (besides the stack) stateless and makes unit testing
72 it very simple. If context managers are used, it is impossible to corrupt
73 the stack, so each test can easily hook in custom log handlers.
74
75 Cooperation
76 -----------
77
78 Logbook is an add-on library for Python working in an area where there
79 are already a couple of contenders. First of all there is the standard
80 library's :mod:`logging` module, and secondly there is also the
81 :mod:`warnings` module which is used internally in Python to warn about
82 invalid uses of APIs and more. We know that there are many situations
83 where you want to use either of them, be it because they are integrated into
84 a legacy system, part of a library outside of your control, or just because
85 they are a better choice.
86
87 Because of that, Logbook is two-way compatible with :mod:`logging` and
88 one-way compatible with :mod:`warnings`. If you want, you can redirect all
89 logging calls to the logbook handlers or the other way round,
90 depending on what your desired setup looks like. That way you can enjoy
91 the best of both worlds.
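
As a small sketch, redirecting the standard library's :mod:`logging`
calls into Logbook can look like this, using the helpers from
:mod:`logbook.compat`::

    import logging
    from logbook.compat import redirect_logging

    redirect_logging()   # from here on, logging.* calls end up in Logbook
    logging.warning('This now goes through the Logbook handlers')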
92
93 It should be Fun
94 ----------------
95
96 Logging should be fun. A good log setup makes debugging easier when
97 things get rough. For good results you really have to start using logging
98 before things actually break. Logbook comes with a couple of unusual log
99 handlers to bring the fun back to logging. You can log to your personal
100 twitter feed, you can log to mobile devices, your desktop notification
101 system and more.
102
103 Logbook in a Nutshell
104 ---------------------
105
106 This is how easy it is to get started with Logbook::
107
108 from logbook import warn
109 warn('This is a warning')
110
111 That will use the default logging channel. But you can create as many as
112 you like::
113
114 from logbook import Logger
115 log = Logger('My Logger')
116 log.warn('This is a warning')
117
118 Roadmap
119 -------
120
121 Here is a list of things you can expect in upcoming versions:
122
123 - C implementation of the internal stack management and record
124 dispatching for higher performance.
125 - a ticketing log handler that creates tickets in Trac and Redmine.
126 - a web frontend for the ticketing database handler.
0 Welcome to Logbook
1 ==================
2
3 Logbook is a logging system for Python that replaces the standard library's
4 logging module. It was designed with both complex and simple applications
5 in mind and the idea to make logging fun:
6
7 >>> from logbook import Logger
8 >>> log = Logger('Logbook')
9 >>> log.info('Hello, World!')
10 [2010-07-23 16:34] INFO: Logbook: Hello, World!
11
12 What makes it fun? What about getting log messages on your phone or
13 desktop notification system? :ref:`Logbook can do that <notifiers>`.
14
15 Feedback is appreciated. The docs here only show a tiny,
16 tiny feature set and can be incomplete. We will have better docs
17 soon, but until then we hope this gives a sneak peek at how cool
18 Logbook is. If you want more, have a look at the comprehensive suite of tests.
19
20 Documentation
21 -------------
22
23 .. toctree::
24 :maxdepth: 2
25
26 features
27 quickstart
28 setups
29 stacks
30 performance
31 libraries
32 unittesting
33 ticketing
34 compat
35 api/index
36 designexplained
37 designdefense
38 changelog
39
40 Project Information
41 -------------------
42
43 .. cssclass:: toctree-l1
44
45 * `Download from PyPI`_
46 * `Master repository on GitHub`_
47 * `Mailing list`_
48 * IRC: ``#pocoo`` on freenode
49
50 .. _Download from PyPI: http://pypi.python.org/pypi/Logbook
51 .. _Master repository on GitHub: https://github.com/mitsuhiko/logbook
52 .. _Mailing list: http://groups.google.com/group/pocoo-libs
0 Logbook in Libraries
1 ====================
2
3 Logging becomes more useful the higher the number of components in a
4 system that are using it. Logbook itself is not a widely supported
5 library so far, but a handful of libraries are already using :mod:`logging`,
6 which can be redirected to Logbook if necessary.
7
8 Logbook itself is easier for libraries to support than logging because it
9 does away with the central logger registry and can easily be mocked in
10 case it is not installed.
11
12 Mocking Logbook
13 ---------------
14
15 If you want to support Logbook in your library but not depend on it you
16 can copy/paste the following piece of code. It will attempt to import
17 logbook and create a :class:`~logbook.Logger` and if it fails provide a
18 class that just swallows all calls::
19
20 try:
21 from logbook import Logger
22 except ImportError:
23 class Logger(object):
24 def __init__(self, name, level=0):
25 self.name = name
26 self.level = level
27 debug = info = warn = warning = notice = error = exception = \
28 critical = log = lambda *a, **kw: None
29
30 log = Logger('My library')
31
32 Best Practices
33 --------------
34
35 - A library that wants to log to the Logbook system should generally be
36 designed to provide an interface to the record dispatchers it is
37 using. That does not have to be a reference to the record dispatcher
38 itself, it is perfectly fine if there is a toggle to switch it on or
39 off.
40
41 - The channel name should be readable and descriptive.
42
43 - For example, if you are a database library that wants to use the
44 logging system to log all SQL statements issued in debug mode, you can
45 enable and disable your record dispatcher based on that debug flag.
46
47 - Libraries should never set up log setups except temporarily on a
48 per-thread basis, and should never change the stack for longer than the
49 duration of a function call. For example, hooking in a null handler for
50 a call to a noisy function is fine (see the sketch after this list);
51 changing the global stack in a function and not reverting it at the end
52 of the function is bad.
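
Here is a minimal sketch of that null-handler pattern (the noisy
function is hypothetical)::

    from logbook import NullHandler

    def quiet_call():
        # swallow everything logged in this thread for the duration
        # of the call, then restore the previous stack state
        with NullHandler().threadbound():
            return noisy_library_function()   # hypothetical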
53
54 Debug Loggers
55 -------------
56
57 Sometimes you want to have loggers in place that are only really useful for
58 debugging. For example, you might have a library that does a lot of
59 server/client communication and for debugging purposes it would be nice if
60 you could enable/disable that log output as necessary.
61
62 In that case it makes sense to create a logger that is disabled by default
63 and give people a way to get hold of the logger to flip the flag.
64 Additionally you can override the :attr:`~logbook.Logger.disabled` flag to
65 automatically set it based on another value::
66
67 class MyLogger(Logger):
68 @property
69 def disabled(self):
70 return not database_connection.debug
71 database_connection.logger = MyLogger('mylibrary.dbconnection')
0 @ECHO OFF
1
2 REM Command file for Sphinx documentation
3
4 if "%SPHINXBUILD%" == "" (
5 set SPHINXBUILD=sphinx-build
6 )
7 set BUILDDIR=_build
8 set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% .
9 if NOT "%PAPER%" == "" (
10 set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
11 )
12
13 if "%1" == "" goto help
14
15 if "%1" == "help" (
16 :help
17 echo.Please use `make ^<target^>` where ^<target^> is one of
18 echo. html to make standalone HTML files
19 echo. dirhtml to make HTML files named index.html in directories
20 echo. singlehtml to make a single large HTML file
21 echo. pickle to make pickle files
22 echo. json to make JSON files
23 echo. htmlhelp to make HTML files and a HTML help project
24 echo. qthelp to make HTML files and a qthelp project
25 echo. devhelp to make HTML files and a Devhelp project
26 echo. epub to make an epub
27 echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter
28 echo. text to make text files
29 echo. man to make manual pages
30 echo. changes to make an overview over all changed/added/deprecated items
31 echo. linkcheck to check all external links for integrity
32 echo. doctest to run all doctests embedded in the documentation if enabled
33 goto end
34 )
35
36 if "%1" == "clean" (
37 for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
38 del /q /s %BUILDDIR%\*
39 goto end
40 )
41
42 if "%1" == "html" (
43 %SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html
44 echo.
45 echo.Build finished. The HTML pages are in %BUILDDIR%/html.
46 goto end
47 )
48
49 if "%1" == "dirhtml" (
50 %SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml
51 echo.
52 echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.
53 goto end
54 )
55
56 if "%1" == "singlehtml" (
57 %SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml
58 echo.
59 echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml.
60 goto end
61 )
62
63 if "%1" == "pickle" (
64 %SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle
65 echo.
66 echo.Build finished; now you can process the pickle files.
67 goto end
68 )
69
70 if "%1" == "json" (
71 %SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json
72 echo.
73 echo.Build finished; now you can process the JSON files.
74 goto end
75 )
76
77 if "%1" == "htmlhelp" (
78 %SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp
79 echo.
80 echo.Build finished; now you can run HTML Help Workshop with the ^
81 .hhp project file in %BUILDDIR%/htmlhelp.
82 goto end
83 )
84
85 if "%1" == "qthelp" (
86 %SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp
87 echo.
88 echo.Build finished; now you can run "qcollectiongenerator" with the ^
89 .qhcp project file in %BUILDDIR%/qthelp, like this:
90 echo.^> qcollectiongenerator %BUILDDIR%\qthelp\Logbook.qhcp
91 echo.To view the help file:
92 echo.^> assistant -collectionFile %BUILDDIR%\qthelp\Logbook.qhc
93 goto end
94 )
95
96 if "%1" == "devhelp" (
97 %SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp
98 echo.
99 echo.Build finished.
100 goto end
101 )
102
103 if "%1" == "epub" (
104 %SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub
105 echo.
106 echo.Build finished. The epub file is in %BUILDDIR%/epub.
107 goto end
108 )
109
110 if "%1" == "latex" (
111 %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
112 echo.
113 echo.Build finished; the LaTeX files are in %BUILDDIR%/latex.
114 goto end
115 )
116
117 if "%1" == "text" (
118 %SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text
119 echo.
120 echo.Build finished. The text files are in %BUILDDIR%/text.
121 goto end
122 )
123
124 if "%1" == "man" (
125 %SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man
126 echo.
127 echo.Build finished. The manual pages are in %BUILDDIR%/man.
128 goto end
129 )
130
131 if "%1" == "changes" (
132 %SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes
133 echo.
134 echo.The overview file is in %BUILDDIR%/changes.
135 goto end
136 )
137
138 if "%1" == "linkcheck" (
139 %SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck
140 echo.
141 echo.Link check complete; look for any errors in the above output ^
142 or in %BUILDDIR%/linkcheck/output.txt.
143 goto end
144 )
145
146 if "%1" == "doctest" (
147 %SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest
148 echo.
149 echo.Testing of doctests in the sources finished, look at the ^
150 results in %BUILDDIR%/doctest/output.txt.
151 goto end
152 )
153
154 :end
0 Performance Tuning
1 ==================
2
3 The more logging calls you add to your application and libraries, the more
4 overhead you will introduce. There are a couple of things you can do to
5 remedy this behavior.
6
7 Debug-Only Logging
8 ------------------
9
10 There are debug log calls, and there are debug log calls. Some debug log
11 calls would sometimes be interesting in a production environment, others
12 really only if you are on your local machine fiddling around with the
13 code. Logbook internally makes sure to process as little of your logging
14 call as necessary, but it will still have to walk the current stack to
15 figure out if there are any active handlers or not. Depending on the
16 number of handlers on the stack, the kind of handlers, etc., more or
17 less will be processed.
18
19 Generally speaking, an unhandled logging call is cheap enough that you
20 don't have to care about it. However, there is not only your logging call;
21 there might also be some data you have to process for the record. This
22 will always be processed, even if the log record ends up being discarded.
23
24 This is where the Python ``__debug__`` feature comes in handy. This
25 variable is a special flag that is evaluated at the time Python
26 processes your script. It can eliminate code completely from your script
27 so that it does not even exist in the compiled bytecode (this requires
28 Python to be run with the ``-O`` switch)::
29
30 if __debug__:
31     info = calculate_debug_info()
32 logger.debug("Call to response() failed. Reason: {0}", info)
33
34 Keep the Fingers Crossed
35 ------------------------
36
37 Do you really need the debug info? If you find yourself only looking
38 at the logfiles when errors occur, it would be an option to put in the
39 :class:`~logbook.FingersCrossedHandler`. Logging into memory is always
40 cheaper than logging to a filesystem.
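
A minimal sketch of such a setup: records are buffered in memory and only
written to the wrapped file handler once a record at ``ERROR`` level or
above shows up (``main`` is a hypothetical entry point)::

    from logbook import ERROR, FileHandler, FingersCrossedHandler

    handler = FingersCrossedHandler(FileHandler('application.log'),
                                    action_level=ERROR)
    with handler.applicationbound():
        main()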
41
42 Keep the Stack Static
43 ---------------------
44
45 Whenever you push to or pop from one of the stacks you will invalidate
46 an internal cache that is used by logbook. This is an implementation
47 detail, but this is how it works for the moment. This means that the
48 first logging call after a push or pop will have a higher impact on
49 performance than the following calls, so you should not
50 push or pop from a stack for each logging call. Make sure to do the
51 pushing and popping only as needed (at the start/end of the application or request).
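
A sketch of that pattern for a long-running application (``main`` is
hypothetical)::

    from logbook import FileHandler

    handler = FileHandler('application.log')
    handler.push_application()   # push once at startup
    try:
        main()
    finally:
        handler.pop_application()   # pop once at shutdown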
52
53 Disable Introspection
54 ---------------------
55
56 By default Logbook will try to pull in the interpreter frame of the caller
57 that invoked a logging function. While this is a fast operation that
58 usually does not slow down the execution of your script, it also means that
59 for certain Python implementations it invalidates assumptions a JIT
60 compiler might have made about the function body. Currently this is, for
61 example, the case for applications running on pypy. If you were using a
62 stock logbook setup on pypy, the JIT wouldn't be able to work properly.
63
64 In case you don't need the frame based information (name of module,
65 calling function, filename, line number) you can disable the introspection
66 feature::
67
68 from logbook import Flags
69
70 with Flags(introspection=False):
71 # all logging calls here will not use introspection
72 ...
0 Quickstart
1 ==========
2
3 .. currentmodule:: logbook
4
5 Logbook makes it very easy to get started with logging. Just import the logger
6 class, create yourself a logger and you are set:
7
8 >>> from logbook import Logger
9 >>> log = Logger('My Awesome Logger')
10 >>> log.warn('This is too cool for stdlib')
11 [2010-07-23 16:34] WARNING: My Awesome Logger: This is too cool for stdlib
12
13 A logger is a so-called :class:`~logbook.base.RecordDispatcher`, which is
14 commonly referred to as a "logging channel". The name you give such a channel
15 is up to you and need not be unique, although it's a good idea to keep it
16 unique so that you can filter by it if you want.
17
18 The basic interface is similar to what you may already know from the standard
19 library's :mod:`logging` module.
20
21 There are several logging levels, available as methods on the logger. The
22 levels -- and their suggested meaning -- are:
23
24 * ``critical`` -- for errors that lead to termination
25 * ``error`` -- for errors that occur, but are handled
26 * ``warning`` -- for exceptional circumstances that might not be errors
27 * ``notice`` -- for non-error messages you usually want to see
28 * ``info`` -- for messages you usually don't want to see
29 * ``debug`` -- for debug messages
30
31 Each of these levels is available as method on the :class:`Logger`.
32 Additionally the ``warning`` level is aliased as :meth:`~Logger.warn`.
33
34 Alternatively, there is the :meth:`~Logger.log` method that takes the logging
35 level (string or integer) as an argument.
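
For example, reusing the logger from above::

    from logbook import INFO

    log.log(INFO, 'Level given as an integer constant')
    log.log('NOTICE', 'Level given by name')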
36
37 Handlers
38 --------
39
40 Each call to a logging method creates a log *record* which is then passed to
41 *handlers*, which decide how to store or present the logging info. There are a
42 multitude of available handlers, and of course you can also create your own:
43
44 * :class:`StreamHandler` for logging to arbitrary streams
45 * :class:`StderrHandler` for logging to stderr
46 * :class:`FileHandler`, :class:`MonitoringFileHandler`,
47 :class:`RotatingFileHandler` and :class:`TimedRotatingFileHandler` for
48 logging to files
49 * :class:`MailHandler` and :class:`GMailHandler` for logging via e-mail
50 * :class:`SyslogHandler` for logging to the syslog daemon
51 * :class:`NTEventLogHandler` for logging to the Windows NT event log
52
53 On top of those there are a couple of handlers for special use cases:
54
55 * :class:`logbook.FingersCrossedHandler` for logging into memory and
56 delegating information to another handler when a certain level was
57 exceeded, otherwise discarding all buffered records.
58 * :class:`logbook.more.TaggingHandler` for dispatching log records that
59 are tagged (used in combination with a
60 :class:`logbook.more.TaggingLogger`)
61 * :class:`logbook.queues.ZeroMQHandler` for logging to ZeroMQ
62 * :class:`logbook.queues.RedisHandler` for logging to Redis
63 * :class:`logbook.queues.MultiProcessingHandler` for logging from a child
64 process to a handler from the outer process.
65 * :class:`logbook.queues.ThreadedWrapperHandler` for moving the actual
66 handling of a handler into a background thread and using a queue to
67 deliver records to that thread.
68 * :class:`logbook.notifiers.GrowlHandler` and
69 :class:`logbook.notifiers.LibNotifyHandler` for logging to the OS X Growl
70 or the linux notification daemon.
71 * :class:`logbook.notifiers.BoxcarHandler` for logging to `boxcar`_.
72 * :class:`logbook.more.TwitterHandler` for logging to twitter.
73 * :class:`logbook.more.ExternalApplicationHandler` for logging to an
74 external application such as the OS X ``say`` command.
75 * :class:`logbook.ticketing.TicketingHandler` for creating tickets from
76 log records in a database or other data store.
77
78 .. _boxcar: http://boxcar.io/
79
80 Registering Handlers
81 --------------------
82
83 So how are handlers registered? If you are used to the standard Python logging
84 system, it works a little bit differently here. Handlers can be registered for
85 a thread or for a whole process or individually for a logger. However, it is
86 strongly recommended not to add handlers to loggers unless there is a very good
87 use case for that.
88
89 If you want errors to go to syslog, you can set up logging like this::
90
91 from logbook import SyslogHandler
92
93 error_handler = SyslogHandler('logbook example', level='ERROR')
94 with error_handler.applicationbound():
95 # errors logged while this block executes go to
96 # the error handler
97 ...
98
99 This will send all errors to the syslog but warnings and lower record
100 levels still go to stderr. This is because the handler is not bubbling by
101 default, which means that if a record is handled by the handler, it will
102 not bubble up to a higher handler. If you want to display all records on
103 stderr, even if they went to the syslog you can enable bubbling by setting
104 *bubble* to ``True``::
105
106 from logbook import SyslogHandler
107
108 error_handler = SyslogHandler('logbook example', level='ERROR', bubble=True)
109 with error_handler.applicationbound():
110 # errors logged while this block executes go to the
111 # error handler, but they will also bubble up to the
112 # default stderr handler.
113 ...
114
115 So what if you want to only log errors to the syslog and nothing to
116 stderr? Then you can combine this with a :class:`NullHandler`::
117
118 from logbook import SyslogHandler, NullHandler
119
120 error_handler = SyslogHandler('logbook example', level='ERROR')
121 null_handler = NullHandler()
122
123 with null_handler.applicationbound():
124 with error_handler.applicationbound():
125 # errors now go to the error_handler and everything else
126 # is swallowed by the null handler so nothing ends up
127 # on the default stderr handler
128 ...
129
130 Record Processors
131 -----------------
132
133 What makes logbook interesting is the ability to automatically process log
134 records. This is handy if you want additional information to be logged for
135 everything you do. A good example use case is recording the IP of the current
136 request in a web application. Or, in a daemon process you might want to log
137 the user and working directory of the process.
138
139 A context processor can be injected at two places: you can either bind a
140 processor to a stack like you do with handlers, or you can
141 override the :meth:`.RecordDispatcher.process_record` method.
142
143 Here is an example that injects the current working directory into the
144 `extra` dictionary of a log record::
145
146 import os
147 from logbook import Processor
148
149 def inject_cwd(record):
150 record.extra['cwd'] = os.getcwd()
151
152 with my_handler.applicationbound():
153 with Processor(inject_cwd).applicationbound():
154 # everything logged here will have the current working
155 # directory in the log record.
156 ...
157
158 The alternative is to inject information just for one logger in which case
159 you might want to subclass it::
160
161 import os
162
163 class MyLogger(logbook.Logger):
164
165 def process_record(self, record):
166 logbook.Logger.process_record(self, record)
167 record.extra['cwd'] = os.getcwd()
168
169
170 Configuring the Logging Format
171 ------------------------------
172
173 All handlers have a useful default log format you don't have to change to use
174 logbook. However if you start injecting custom information into log records,
175 it makes sense to configure the log formatting so that you can see that
176 information.
177
178 There are two ways to configure formatting: you can either just change the
179 format string or hook in a custom format function.
180
181 All the handlers that come with logbook and that log into a string use the
182 :class:`~logbook.StringFormatter` by default. Their constructors accept a
183 format string which sets the :attr:`logbook.Handler.format_string` attribute.
184 You can override this attribute in which case a new string formatter is set:
185
186 >>> from logbook import StderrHandler
187 >>> handler = StderrHandler()
188 >>> handler.format_string = '{record.channel}: {record.message}'
189 >>> handler.formatter
190 <logbook.handlers.StringFormatter object at 0x100641b90>
191
192 Alternatively you can also set a custom format function which is invoked
193 with the record and handler as arguments:
194
195 >>> def my_formatter(record, handler):
196 ... return record.message
197 ...
198 >>> handler.formatter = my_formatter
199
200 The format string used for the default string formatter has one variable called
201 `record` available which is the log record itself. All attributes can be
202 looked up using the dotted syntax, and items in the `extra` dict looked up
203 using brackets. Note that if you are accessing an item in the extra dict that
204 does not exist, an empty string is returned.
205
206 Here is an example configuration that shows the current working directory from
207 the example in the previous section::
208
209 handler = StderrHandler(format_string=
210 '{record.channel}: {record.message} [{record.extra[cwd]}]')
211
212 In the :mod:`~logbook.more` module there is a formatter that uses the Jinja2
213 template engine to format log records, especially useful for multi-line log
214 formatting such as mails (:class:`~logbook.more.JinjaFormatter`).
0 Common Logbook Setups
1 =====================
2
3 This part of the documentation shows how you can configure Logbook for
4 different kinds of setups.
5
6
7 Desktop Application Setup
8 -------------------------
9
10 If you develop a desktop application (command line or GUI), you probably have a line
11 like this in your code::
12
13 if __name__ == '__main__':
14 main()
15
16 This is what you should wrap with a ``with`` statement that sets up your log
17 handler::
18
19 from logbook import FileHandler
20 log_handler = FileHandler('application.log')
21
22 if __name__ == '__main__':
23 with log_handler.applicationbound():
24 main()
25
26 Alternatively you can also just push a handler in there::
27
28 from logbook import FileHandler
29 log_handler = FileHandler('application.log')
30 log_handler.push_application()
31
32 if __name__ == '__main__':
33 main()
34
35 Please keep in mind that you will have to pop the handlers in reverse order if
36 you want to remove them from the stack, so it is recommended to use the context
37 manager API if you plan on reverting the handlers.
38
39 Web Application Setup
40 ---------------------
41
42 Typical modern web applications written in Python have two separate contexts
43 where code might be executed: when the code is imported, as well as when a
44 request is handled. The first case is easy to handle: just push a global file
45 handler that writes everything into a file.
46
47 But Logbook also gives you the ability to improve upon the logging. For
48 example, you can easily create yourself a log handler that is used for
49 request-bound logging that also injects additional information.
50
51 For this you can either subclass the logger or you can bind to the handler with
52 a function that is invoked before logging. The latter has the advantage that it
53 will also be triggered for other logger instances which might be used by a
54 different library.
55
56 Here is a simple WSGI example application that showcases sending error mails
57 for errors that happen during a WSGI application::
58
59 from logbook import MailHandler
60
61 mail_handler = MailHandler('errors@example.com',
62 ['admin@example.com'],
63 format_string=u'''\
64 Subject: Application Error at {record.extra[url]}
65
66 Message type: {record.level_name}
67 Location: {record.filename}:{record.lineno}
68 Module: {record.module}
69 Function: {record.func_name}
70 Time: {record.time:%Y-%m-%d %H:%M:%S}
71 Remote IP: {record.extra[ip]}
72 Request: {record.extra[url]} [{record.extra[method]}]
73
74 Message:
75
76 {record.message}
77 ''', bubble=True)
78
79 def application(environ, start_response):
80 request = Request(environ)
81
82 def inject_info(record, handler):
83 record.extra.update(
84 ip=request.remote_addr,
85 method=request.method,
86 url=request.url
87 )
88
89 with mail_handler.threadbound(processor=inject_info):
90 # standard WSGI processing happens here. If an error
91 # is logged, a mail will be sent to the admin on
92 # example.com
93 ...
94
95 Deeply Nested Setups
96 --------------------
97
98 If you want deeply nested logger setups, you can use the
99 :class:`~logbook.NestedSetup` class which simplifies that. This is best
100 explained using an example::
101
102 import os
103 from logbook import NestedSetup, NullHandler, FileHandler, \
104 MailHandler, Processor
105
106 def inject_information(record):
107 record.extra['cwd'] = os.getcwd()
108
109 # a nested handler setup can be used to configure more complex setups
110 setup = NestedSetup([
111 # make sure we never bubble up to the stderr handler
112 # if we run out of setup handling
113 NullHandler(),
114 # then write messages that are at least warnings to a logfile
115 FileHandler('application.log', level='WARNING'),
116 # errors should then be delivered by mail and also be kept
117 # in the application log, so we let them bubble up.
118 MailHandler('servererrors@example.com',
119 ['admin@example.com'],
120 level='ERROR', bubble=True),
121 # while we're at it we can push a processor on its own stack to
122 # record additional information. Because processors and handlers
123 # go to different stacks it does not matter if the processor is
124 # added here at the bottom or at the very beginning. Same would
125 # be true for flags.
126 Processor(inject_information)
127 ])
128
129 Once such a complex setup is defined, the nested handler setup can be used as if
130 it was a single handler::
131
132 with setup.threadbound():
133 # everything here is handled as specified by the rules above.
134 ...
135
136
137 Distributed Logging
138 -------------------
139
140 For applications that are spread over multiple processes or even machines,
141 logging into a central system can be a pain. Logbook supports ZeroMQ to
142 deal with that. You can set up a :class:`~logbook.queues.ZeroMQHandler`
143 that acts as ZeroMQ publisher and will send log records encoded as JSON
144 over the wire::
145
146 from logbook.queues import ZeroMQHandler
147 handler = ZeroMQHandler('tcp://127.0.0.1:5000')
148
149 Then you just need a separate process that can receive the log records and
150 hand them over to another log handler using the
151 :class:`~logbook.queues.ZeroMQSubscriber`. The usual setup is this::
152
153 from logbook.queues import ZeroMQSubscriber
154 subscriber = ZeroMQSubscriber('tcp://127.0.0.1:5000')
155 with my_handler:
156 subscriber.dispatch_forever()
157
158 You can also run that loop in a background thread with
159 :meth:`~logbook.queues.ZeroMQSubscriber.dispatch_in_background`::
160
161 from logbook.queues import ZeroMQSubscriber
162 subscriber = ZeroMQSubscriber('tcp://127.0.0.1:5000')
163 subscriber.dispatch_in_background(my_handler)
164
165 If you just want to use this in a :mod:`multiprocessing` environment you
166 can use the :class:`~logbook.queues.MultiProcessingHandler` and
167 :class:`~logbook.queues.MultiProcessingSubscriber` instead. They work the
168 same way as the ZeroMQ equivalents but are connected through a
169 :class:`multiprocessing.Queue`::
170
171 from multiprocessing import Queue
172 from logbook.queues import MultiProcessingHandler, \
173 MultiProcessingSubscriber
174 queue = Queue(-1)
175 handler = MultiProcessingHandler(queue)
176 subscriber = MultiProcessingSubscriber(queue)
177
178 There is also the possibility to log into a Redis instance using the
179 :class:`~logbook.queues.RedisHandler`. To do so, you just need to create an
180 instance of this handler as follows::
181
182 import logbook
183 from logbook.queues import RedisHandler
184
185 handler = RedisHandler()
186 l = logbook.Logger()
187 with handler:
188 l.info('Your log message')
189
190 With the default parameters, this will send the message to Redis under the key ``'redis'``.
191
192
193 Redirecting Single Loggers
194 --------------------------
195
196 If you want to have a single logger go to another logfile you have two
197 options. First of all you can attach a handler to a specific record
198 dispatcher. So just import the logger and attach something::
199
200 from yourapplication.yourmodule import logger
201 logger.handlers.append(MyHandler(...))
202
203 Handlers attached directly to a record dispatcher will always take
204 precedence over the stack-based handlers. The bubble flag works as
205 expected, so if you have a non-bubbling handler on your logger that
206 always handles, the record will never be passed to other handlers.
207
208 Secondly you can write a handler that looks at the logging channel and
209 only accepts loggers of a specific kind. You can also do that with a
210 filter function::
211
212 handler = MyHandler(filter=lambda r: r.channel == 'app.database')
213
214 Keep in mind that the channel is intended to be a human-readable string
215 and is not necessarily unique. If you really need to keep loggers apart
216 on a central point you might want to introduce some more meta information
217 into the extra dictionary.
218
219 You can also compare the dispatcher on the log record::
220
221 from yourapplication.yourmodule import logger
222 handler = MyHandler(filter=lambda r: r.dispatcher is logger)
223
224 This however has the disadvantage that the dispatcher entry on the log
225 record is a weak reference and might go away unexpectedly and will not be
226 there if log records are sent to a different process.
227
228 Last but not least you can check if you can modify the stack around the
229 execution of the code that triggers that logger. For instance, if the
230 logger you are interested in is used by a specific subsystem, you can
231 modify the stacks before calling into the system.
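
A small sketch of that approach (the ``subsystem`` module and its entry
point are hypothetical)::

    from logbook import FileHandler

    handler = FileHandler('subsystem.log')
    with handler.threadbound():
        # everything the subsystem logs in this thread is now
        # handled by this handler first
        subsystem.run()
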
0 {% extends "basic/layout.html" %}
1
2 {% block extrahead %}
3 {% if online %}
4 <link href='http://fonts.googleapis.com/css?family=OFL+Sorts+Mill+Goudy+TT|Inconsolata'
5 rel='stylesheet' type='text/css'>
6 {% endif %}
7 {% endblock %}
8
9 {% block header %}
10 <div class="book">
11 <div class="banner">
12 <a href="{{ pathto(master_doc) }}">
13 <!-- <img src="{{ pathto('_static/' + logo, 1) }}" alt="{{ project }} logo"></img> -->
14 <h1>{{ project }} </h1>
15 </a>
16 </div>
17 {% endblock %}
18
19 {% block footer %}
20 {% if online %}
21 <a href="http://github.com/mitsuhiko/logbook">
22 <img style="position: fixed; top: 0; right: 0; border: 0;"
23 src="http://s3.amazonaws.com/github/ribbons/forkme_right_gray_6d6d6d.png"
24 alt="Fork me on GitHub">
25 </a>
26 {% endif %}
27 {{ super() }}
28 </div>
29 {% endblock %}
0 /*
1 * sheet.css
2 * ~~~~~~~~~
3 *
4 * Sphinx stylesheet for the sheet theme.
5 *
6 * :copyright: Copyright 2010 by Armin Ronacher, Georg Brandl.
7 * :license: BSD, see LICENSE for details.
8 *
9 */
10
11 @import url("basic.css");
12
13 body {
14 text-align: center;
15 font-family: {{ theme_bodyfont }};
16 margin: 0;
17 padding: 0;
18 background: #d5dde2;
19 }
20
21 .book {
22 padding: 15px 25px 25px 85px;
23 margin: 0 auto;
24 width: 695px;
25 text-align: left;
26 background: white url("background.png") repeat-y;
27 }
28
29 a {
30 font-weight: bold;
31 color: #003366;
32 }
33
34 h1, h2, h3, h4, h5, h6 {
35 font-family: {{ theme_seriffont }};
36 font-weight: normal;
37 }
38
39 h1 { font-size: 2.8em; }
40 h2 { font-size: 2.2em; }
41
42 .document {
43 border-bottom: 1px solid #837D7C;
44 border-top: 1px solid #837D7C;
45 margin: 12px 0px;
46 line-height: 1.5em;
47 }
48
49 .related a {
50 margin: 0;
51 font-size: 1.1em;
52 font-weight: normal;
53 letter-spacing: 3px;
54 text-transform: uppercase;
55 text-decoration: none;
56 border-bottom: 1px dashed #ddd;
57 color: #837D7C;
58 }
59
60 div.related ul {
61 margin-right: -10px;
62 padding: 0px;
63 }
64
65 .banner h1 {
66 font-size: 4.5em;
67 font-weight: normal;
68 line-height: 1;
69 letter-spacing: -3px;
70 margin-top: 12px;
71 margin-bottom: 12px;
72 }
73
74 .banner {
75 color: #000000;
76 letter-spacing: -1px;
77 font-family: {{ theme_seriffont }};
78 margin-bottom: 24px;
79 }
80
81 .banner a {
82 color: #000000;
83 text-decoration: none;
84 }
85
86 .footer {
87 color: #000000;
88 font-size: 0.7em;
89 text-align: center;
90 line-height: 1;
91 margin-top: 20px;
92 font-family: {{ theme_monofont }};
93 font-weight: normal;
94 letter-spacing: 2px;
95 text-transform: uppercase;
96 text-decoration: none;
97 color: #837D7C;
98 }
99
100 .highlight pre {
101 background-color: #f8f8f8;
102 border-top: 1px solid #c8c8c8;
103 border-bottom: 1px solid #c8c8c8;
104 line-height: 120%;
105 padding: 10px 6px;
106 }
107
108 .highlighttable .highlight pre {
109 margin: 0px;
110 }
111
112 div.sphinxsidebar {
113 margin-left: 0px;
114 float: none;
115 width: 100%;
116 font-size: 0.8em;
117 }
118
119 .toctree-l1 a {
120 text-decoration: none;
121 }
122
123 img.align-right {
124 margin-left: 24px;
125 }
126
127 pre, tt {
128 font-family: {{ theme_monofont }};
129 font-size: 15px!important;
130 }
131
132 dl.class dt {
133 padding-left: 60px;
134 text-indent: -60px;
135 }
136
137 tt.descname {
138 font-size: 1em;
139 }
140
141 p.output-caption {
142 font-size: small;
143 margin: 0px;
144 }
0 [theme]
1 inherit = basic
2 stylesheet = sheet.css
3 pygments_style = tango
4
5 [options]
6 bodyfont = 'Cantarell', 'Lucida Grande', sans-serif
7 seriffont = 'OFL Sorts Mill Goudy TT', 'Georgia', 'Bitstream Vera Serif', serif
8 monofont = 'Consolas', 'Inconsolata', 'Bitstream Vera Sans Mono', monospace
0 Stacks in Logbook
1 =================
2
3 Logbook currently keeps three stacks internally:
4
5 - one for the :class:`~logbook.Handler`\s: handlers are invoked from
6 stack top to bottom. When a record is handled, the
7 :attr:`~logbook.Handler.bubble` flag of the handler decides whether it
8 should still be processed by the next handler on the stack.
9 - one for the :class:`~logbook.Processor`\s: each processor in the stack
10 is applied on a record before the log record is handled by the
11 handler.
12 - one for the :class:`~logbook.Flags`: this stack manages simple flags
13 such as how errors during logging should be handled or whether stack-frame
14 introspection should be used, etc.
15
16 General Stack Management
17 ------------------------
18
19 Generally all objects that are managed by stacks have a common
20 interface (:class:`~logbook.base.StackedObject`) and can be used in
21 combination with the :class:`~logbook.NestedSetup` class.
22
23 Commonly stacked objects are used with a context manager (`with`
24 statement)::
25
26 with context_object.threadbound():
27 # this is managed for this thread only
28 ...
29
30 with context_object.applicationbound():
31 # this is managed for the whole application
32 ...
33
34 Alternatively you can also use `try`/`finally`::
35
36 context_object.push_thread()
37 try:
38 # this is managed for this thread only
39 ...
40 finally:
41 context_object.pop_thread()
42
43 context_object.push_application()
44 try:
45 # this is managed for the whole application
46 ...
47 finally:
48 context_object.pop_application()
49
50 It's very important that you always pop from the stack again, unless
51 you really want the change to last until the application shuts down,
52 which is probably not the case.
53
54 If you want to push and pop multiple stacked objects at the same time, you
55 can use the :class:`~logbook.NestedSetup`::
56
57 setup = NestedSetup([stacked_object1, stacked_object2])
58 with setup.threadbound():
59 # both objects are now bound to the thread's stack
60 ...
61
62 Sometimes a stacked object can be passed to one of the functions or
63 methods in Logbook. If any stacked object can be passed, this is usually
64 called the `setup`. This is for example the case when you specify a
65 handler or processor for things like the
66 :class:`~logbook.queues.ZeroMQSubscriber`.
67
68 Handlers
69 --------
70
71 Handlers use the features of the stack the most because not only do they
72 stack, but they also specify how stack handling is supposed to work. Each
73 handler can decide if it wants to process the record, and then it has a
74 flag (the :attr:`~logbook.Handler.bubble` flag) which specifies whether the
75 next handler in the chain should receive the record as well.
76
77 If a handler is bubbling it will give the record to the next handler,
78 even if it was properly handled. If it's not, it will stop the record from
79 reaching the handlers further down the chain. Additionally there are so-called
80 "blackhole" handlers (:class:`~logbook.NullHandler`) which stop processing
81 in any case when they are reached. If you push a blackhole handler on top
82 of an existing infrastructure you can build up a separate one without
83 performance impact.
84
85 Processor
86 ---------
87
88 A processor can inject additional information into a log record when the
89 record is handled. Processors are only called once at least one log handler
90 is interested in handling the record. Before that happens, no processing
91 takes place.
92
93 Here is an example processor that injects the current working directory into
94 the extra attribute of the record::
95
96 import os
97
98 def inject_cwd(record):
99 record.extra['cwd'] = os.getcwd()
100
101 with Processor(inject_cwd):
102 # all logging calls inside this block in this thread will now
103 # have the current working directory information attached.
104 ...
105
106 Flags
107 -----
108
109 The last pillar of logbook is the flags stack. This stack can be used to
110 override settings of the logging system. Currently this can be used to
111 change the behavior of logbook in case an exception happens during log
112 handling (for instance if a log record is supposed to be delivered to the
113 filesystem but it ran out of available space). Additionally there is a
114 flag that disables frame introspection, which can result in a speedup on
115 JIT-compiled Python interpreters.
116
117 Here is an example of silenced error reporting::
118
119 with Flags(errors='silent'):
120 # errors are now silent for this block
121 ...
0 Logging to Tickets
1 ==================
2
3 Logbook supports the concept of creating unique tickets for log records
4 and keeping track of the number of times these log records were created.
5 The default implementation logs into a relational database, but there is a
6 baseclass that can be subclassed to log into existing ticketing systems
7 such as Trac, or into other data stores.
8
9 The ticketing handlers and store backends are all implemented in the
10 module :mod:`logbook.ticketing`.
11
12 How does it work?
13 -----------------
14
15 When a ticketing handler is used, each call to a logbook logger is assigned
16 a unique hash that is based on the name of the logger, the location of the
17 call as well as the level of the message. The message itself is not taken
18 into account as it might change depending on the arguments passed to
19 it.
20
21 Once that unique hash is created, the database is checked for an existing
22 ticket with that hash. If there is one, a new occurrence is logged
23 with all details available. Otherwise a new ticket is created.
24
25 This makes it possible to analyze how often certain log messages are
26 triggered and over what period of time.
27
28 Why should I use it?
29 --------------------
30
31 The ticketing handlers have the big advantage over a regular log handler
32 that they will capture the full data of the log record in a
33 machine-processable format. Whatever information was attached to the log
34 record will be sent straight to the data store as JSON.
35
36 This makes it easier to track down issues that might happen in production
37 systems. Due to the higher overhead of ticketing logging compared to a
38 standard logfile or something comparable, it should only be used for higher log
39 levels (:data:`~logbook.WARNING` or higher).
40
41 Common Setups
42 -------------
43
44 The builtin ticketing handler is called
45 :class:`~logbook.ticketing.TicketingHandler`. In the default configuration
46 it will connect to a relational database with the help of `SQLAlchemy`_
47 and log into two tables there: tickets go into ``${prefix}tickets`` and
48 occurrences go into ``${prefix}occurrences``. The default table prefix is
49 ``'logbook_'`` but can be overridden. If the tables do not exist already,
50 the handler will create them.
51
52 Here is an example setup that logs into a Postgres database::
53
54 from logbook import ERROR
55 from logbook.ticketing import TicketingHandler
56 handler = TicketingHandler('postgres://localhost/database',
57 level=ERROR)
58 with handler:
59 # everything in this block and thread will be handled by
60 # the ticketing database handler
61 ...
62
63 Alternative backends can be swapped in by providing the `backend`
64 parameter. There is a second implementation of a backend that is using
65 MongoDB: :class:`~logbook.ticketing.MongoDBBackend`.
66
67 .. _SQLAlchemy: http://sqlalchemy.org/
0 Unittesting Support
1 ===================
2
3 .. currentmodule:: logbook
4
5 Logbook has builtin support for testing logging calls. There is a handler
6 that can be hooked in and will catch all log records for inspection. Not
7 only that, it also provides methods to test if certain things were logged.
8
9 Basic Setup
10 -----------
11
12 The interface of interest is :class:`logbook.TestHandler`. Create it,
13 bind it, and you're done. If you are using classic :mod:`unittest`
14 test cases, you might want to set it up in the setup and teardown callback
15 methods::
16
17 import logbook
18 import unittest
19
20 class LoggingTestCase(unittest.TestCase):
21
22 def setUp(self):
23 self.log_handler = logbook.TestHandler()
24 self.log_handler.push_thread()
25
26 def tearDown(self):
27 self.log_handler.pop_thread()
28
29 Alternatively you can also use it in a ``with`` statement in an individual
30 test. This is also how it can work in nose and other testing systems::
31
32 def my_test():
33 with logbook.TestHandler() as log_handler:
34 ...
35
36
37 Test Handler Interface
38 ----------------------
39
40 The test handler has a few attributes and methods to gain access to the
41 logged messages. The most important ones are :attr:`~TestHandler.records`
42 and :attr:`~TestHandler.formatted_records`. The first is a list of the
43 captured :class:`~LogRecord`\s, the second a list of the formatted records
44 as unicode strings:
45
46 >>> from logbook import TestHandler, Logger
47 >>> logger = Logger('Testing')
48 >>> handler = TestHandler()
49 >>> handler.push_thread()
50 >>> logger.warn('Hello World')
51 >>> handler.records
52 [<logbook.base.LogRecord object at 0x100640cd0>]
53 >>> handler.formatted_records
54 [u'[WARNING] Testing: Hello World']
55
56
57 .. _probe-log-records:
58
59 Probe Log Records
60 -----------------
61
62 The handler also provides some convenience methods to do assertions:
63
64 >>> handler.has_warnings
65 True
66 >>> handler.has_errors
67 False
68 >>> handler.has_warning('Hello World')
69 True
70
71 Methods like :meth:`~logbook.TestHandler.has_warning` accept two
72 arguments:
73
74 `message`
75 If provided and not `None` it will check if there is at least one log
76 record where the message matches. This can also be a compiled regular
77 expression.
78
79 `channel`
80 If provided and not `None` it will check if there is at least one log
81 record where the logger name of the record matches.
82
83 Example usage (with the :mod:`re` module imported):
84
85 >>> handler.has_warning('A different message')
86 False
87 >>> handler.has_warning(re.compile('^Hello'))
88 True
89 >>> handler.has_warning('Hello World', channel='Testing')
90 True
91 >>> handler.has_warning(channel='Testing')
92 True
0 # -*- coding: utf-8 -*-
1 """
2 logbook
3 ~~~~~~~
4
5 Simple logging library that aims to support desktop, command line
6 and web applications alike.
7
8 :copyright: (c) 2010 by Armin Ronacher, Georg Brandl.
9 :license: BSD, see LICENSE for more details.
10 """
11
12
13 from logbook.base import LogRecord, Logger, LoggerGroup, NestedSetup, \
14 Processor, Flags, get_level_name, lookup_level, dispatch_record, \
15 CRITICAL, ERROR, WARNING, NOTICE, INFO, DEBUG, NOTSET, \
16 set_datetime_format
17 from logbook.handlers import Handler, StreamHandler, FileHandler, \
18 MonitoringFileHandler, StderrHandler, RotatingFileHandler, \
19 TimedRotatingFileHandler, TestHandler, MailHandler, GMailHandler, SyslogHandler, \
20 NullHandler, NTEventLogHandler, create_syshandler, StringFormatter, \
21 StringFormatterHandlerMixin, HashingHandlerMixin, \
22 LimitingHandlerMixin, WrapperHandler, FingersCrossedHandler, \
23 GroupHandler
24
25 __version__ = '0.6.1-dev'
26
27 # create an anonymous default logger and provide all important
28 # methods of that logger as global functions
29 _default_logger = Logger('Generic')
30 _default_logger.suppress_dispatcher = True
31 debug = _default_logger.debug
32 info = _default_logger.info
33 warn = _default_logger.warn
34 warning = _default_logger.warning
35 notice = _default_logger.notice
36 error = _default_logger.error
37 exception = _default_logger.exception
38 catch_exceptions = _default_logger.catch_exceptions
39 critical = _default_logger.critical
42 log = _default_logger.log
43 del _default_logger
44
45
46 # install a default global handler
47 default_handler = StderrHandler()
48 default_handler.push_application()
0 # -*- coding: utf-8 -*-
1 """
2 logbook._fallback
3 ~~~~~~~~~~~~~~~~~
4
5 Fallback implementations in case speedups is not around.
6
7 :copyright: (c) 2010 by Armin Ronacher, Georg Brandl.
8 :license: BSD, see LICENSE for more details.
9 """
10 import threading
11 from itertools import count
12 try:
13 from thread import get_ident as current_thread
14 except ImportError:
15 from _thread import get_ident as current_thread
16 from logbook.helpers import get_iterator_next_method
17
18 _missing = object()
19 _MAX_CONTEXT_OBJECT_CACHE = 256
20
21 def group_reflected_property(name, default, fallback=_missing):
22 """Returns a property for a given name that falls back to the
23 value of the group if set. If there is no such group, the
24 provided default is used.
25 """
26 def _get(self):
27 rv = getattr(self, '_' + name, _missing)
28 if rv is not _missing and rv != fallback:
29 return rv
30 if self.group is None:
31 return default
32 return getattr(self.group, name)
33 def _set(self, value):
34 setattr(self, '_' + name, value)
35 def _del(self):
36 delattr(self, '_' + name)
37 return property(_get, _set, _del)
38
39
40 class _StackBound(object):
41
42 def __init__(self, obj, push, pop):
43 self.__obj = obj
44 self.__push = push
45 self.__pop = pop
46
47 def __enter__(self):
48 self.__push()
49 return self.__obj
50
51 def __exit__(self, exc_type, exc_value, tb):
52 self.__pop()
53
54
55 class StackedObject(object):
56 """Baseclass for all objects that provide stack manipulation
57 operations.
58 """
59
60 def push_thread(self):
61 """Pushes the stacked object to the thread stack."""
62 raise NotImplementedError()
63
64 def pop_thread(self):
65 """Pops the stacked object from the thread stack."""
66 raise NotImplementedError()
67
68 def push_application(self):
69 """Pushes the stacked object to the application stack."""
70 raise NotImplementedError()
71
72 def pop_application(self):
73 """Pops the stacked object from the application stack."""
74 raise NotImplementedError()
75
76 def __enter__(self):
77 self.push_thread()
78 return self
79
80 def __exit__(self, exc_type, exc_value, tb):
81 self.pop_thread()
82
83 def threadbound(self, _cls=_StackBound):
84 """Can be used in combination with the `with` statement to
85 execute code while the object is bound to the thread.
86 """
87 return _cls(self, self.push_thread, self.pop_thread)
88
89 def applicationbound(self, _cls=_StackBound):
90 """Can be used in combination with the `with` statement to
91 execute code while the object is bound to the application.
92 """
93 return _cls(self, self.push_application, self.pop_application)
94
95
96 class ContextStackManager(object):
97 """Helper class for context objects that manages a stack of
98 objects.
99 """
100
101 def __init__(self):
102 self._global = []
103 self._context_lock = threading.Lock()
104 self._context = threading.local()
105 self._cache = {}
106 self._stackop = get_iterator_next_method(count())
107
108 def iter_context_objects(self):
109 """Returns an iterator over all objects for the combined
110 application and context cache.
111 """
112 tid = current_thread()
113 objects = self._cache.get(tid)
114 if objects is None:
115 if len(self._cache) > _MAX_CONTEXT_OBJECT_CACHE:
116 self._cache.clear()
117 objects = self._global[:]
118 objects.extend(getattr(self._context, 'stack', ()))
119 objects.sort(reverse=True)
120 objects = [x[1] for x in objects]
121 self._cache[tid] = objects
122 return iter(objects)
123
124 def push_thread(self, obj):
125 self._context_lock.acquire()
126 try:
127 self._cache.pop(current_thread(), None)
128 item = (self._stackop(), obj)
129 stack = getattr(self._context, 'stack', None)
130 if stack is None:
131 self._context.stack = [item]
132 else:
133 stack.append(item)
134 finally:
135 self._context_lock.release()
136
137 def pop_thread(self):
138 self._context_lock.acquire()
139 try:
140 self._cache.pop(current_thread(), None)
141 stack = getattr(self._context, 'stack', None)
142 assert stack, 'no objects on stack'
143 return stack.pop()[1]
144 finally:
145 self._context_lock.release()
146
147 def push_application(self, obj):
148 self._global.append((self._stackop(), obj))
149 self._cache.clear()
150
151 def pop_application(self):
152 assert self._global, 'no objects on application stack'
153 popped = self._global.pop()[1]
154 self._cache.clear()
155 return popped
0 # -*- coding: utf-8 -*-
1 """
2 logbook._speedups
3 ~~~~~~~~~~~~~~~~~
4
5 Cython implementation of some core objects.
6
7 :copyright: (c) 2010 by Armin Ronacher, Georg Brandl.
8 :license: BSD, see LICENSE for more details.
9 """
10
11 import thread
12 import threading
13 import platform
14
15 from cpython.dict cimport PyDict_Clear, PyDict_SetItem
16 from cpython.list cimport PyList_New, PyList_Append, PyList_Sort, \
17 PyList_SET_ITEM, PyList_GET_SIZE
18 from cpython.pythread cimport PyThread_type_lock, PyThread_allocate_lock, \
19 PyThread_release_lock, PyThread_acquire_lock, WAIT_LOCK
20
21 cdef object _missing = object()
22
23 cdef enum:
24 _MAX_CONTEXT_OBJECT_CACHE = 256
25
26 cdef current_thread = thread.get_ident
27
28 cdef class group_reflected_property:
29 cdef char* name
30 cdef char* _name
31 cdef object default
32 cdef object fallback
33
34 def __init__(self, char* name, object default, object fallback=_missing):
35 self.name = name
36 _name = '_' + name
37 self._name = _name
38 self.default = default
39 self.fallback = fallback
40
41 def __get__(self, obj, type):
42 if obj is None:
43 return self
44 rv = getattr3(obj, self._name, _missing)
45 if rv is not _missing and rv != self.fallback:
46 return rv
47 if obj.group is None:
48 return self.default
49 return getattr(obj.group, self.name)
50
51 def __set__(self, obj, value):
52 setattr(obj, self._name, value)
53
54 def __del__(self, obj):
55 delattr(obj, self._name)
56
57
58 cdef class _StackItem:
59 cdef int id
60 cdef readonly object val
61
62 def __init__(self, int id, object val):
63 self.id = id
64 self.val = val
65 def __richcmp__(_StackItem self, _StackItem other, int op):
66 cdef int diff = other.id - self.id  # reversed comparison: an ascending sort then puts the newest (highest id) item first
67 if op == 0: # <
68 return diff < 0
69 if op == 1: # <=
70 return diff <= 0
71 if op == 2: # ==
72 return diff == 0
73 if op == 3: # !=
74 return diff != 0
75 if op == 4: # >
76 return diff > 0
77 if op == 5: # >=
78 return diff >= 0
79 assert False, "should never get here"
80
81 cdef class _StackBound:
82 cdef object obj
83 cdef object push_func
84 cdef object pop_func
85
86 def __init__(self, obj, push, pop):
87 self.obj = obj
88 self.push_func = push
89 self.pop_func = pop
90
91 def __enter__(self):
92 self.push_func()
93 return self.obj
94
95 def __exit__(self, exc_type, exc_value, tb):
96 self.pop_func()
97
98
99 cdef class StackedObject:
100 """Baseclass for all objects that provide stack manipulation
101 operations.
102 """
103
104 cpdef push_thread(self):
105 """Pushes the stacked object to the thread stack."""
106 raise NotImplementedError()
107
108 cpdef pop_thread(self):
109 """Pops the stacked object from the thread stack."""
110 raise NotImplementedError()
111
112 cpdef push_application(self):
113 """Pushes the stacked object to the application stack."""
114 raise NotImplementedError()
115
116 cpdef pop_application(self):
117 """Pops the stacked object from the application stack."""
118 raise NotImplementedError()
119
120 def __enter__(self):
121 self.push_thread()
122 return self
123
124 def __exit__(self, exc_type, exc_value, tb):
125 self.pop_thread()
126
127 cpdef threadbound(self):
128 """Can be used in combination with the `with` statement to
129 execute code while the object is bound to the thread.
130 """
131 return _StackBound(self, self.push_thread, self.pop_thread)
132
133 cpdef applicationbound(self):
134 """Can be used in combination with the `with` statement to
135 execute code while the object is bound to the application.
136 """
137 return _StackBound(self, self.push_application, self.pop_application)
138
139
140 cdef class ContextStackManager:
141 cdef list _global
142 cdef PyThread_type_lock _context_lock
143 cdef object _context
144 cdef dict _cache
145 cdef int _stackcnt
146
147 def __init__(self):
148 self._global = []
149 self._context_lock = PyThread_allocate_lock()
150 self._context = threading.local()
151 self._cache = {}
152 self._stackcnt = 0
153
154 cdef _stackop(self):
155 self._stackcnt += 1
156 return self._stackcnt
157
158 cpdef iter_context_objects(self):
159 tid = current_thread()
160 objects = self._cache.get(tid)
161 if objects is None:
162 if len(self._cache) > _MAX_CONTEXT_OBJECT_CACHE:  # _cache is a dict, so avoid the PyList_GET_SIZE macro
163 PyDict_Clear(self._cache)
164 objects = self._global[:]
165 objects.extend(getattr3(self._context, 'stack', ()))
166 PyList_Sort(objects)
167 objects = [(<_StackItem>x).val for x in objects]
168 PyDict_SetItem(self._cache, tid, objects)
169 return iter(objects)
170
171 cpdef push_thread(self, obj):
172 PyThread_acquire_lock(self._context_lock, WAIT_LOCK)
173 try:
174 self._cache.pop(current_thread(), None)
175 item = _StackItem(self._stackop(), obj)
176 stack = getattr3(self._context, 'stack', None)
177 if stack is None:
178 self._context.stack = [item]
179 else:
180 PyList_Append(stack, item)
181 finally:
182 PyThread_release_lock(self._context_lock)
183
184 cpdef pop_thread(self):
185 PyThread_acquire_lock(self._context_lock, WAIT_LOCK)
186 try:
187 self._cache.pop(current_thread(), None)
188 stack = getattr3(self._context, 'stack', None)
189 assert stack, 'no objects on stack'
190 return (<_StackItem>stack.pop()).val
191 finally:
192 PyThread_release_lock(self._context_lock)
193
194 cpdef push_application(self, obj):
195 self._global.append(_StackItem(self._stackop(), obj))
196 PyDict_Clear(self._cache)
197
198 cpdef pop_application(self):
199 assert self._global, 'no objects on application stack'
200 popped = (<_StackItem>self._global.pop()).val
201 PyDict_Clear(self._cache)
202 return popped
0 # -*- coding: utf-8 -*-
1 """
2 logbook._termcolors
3 ~~~~~~~~~~~~~~~~~~~
4
5 Provides terminal color mappings.
6
7 :copyright: (c) 2010 by Armin Ronacher, Georg Brandl.
8 :license: BSD, see LICENSE for more details.
9 """
10
11 esc = "\x1b["
12
13 codes = {}
14 codes[""] = ""
15 codes["reset"] = esc + "39;49;00m"
16
17 dark_colors = ["black", "darkred", "darkgreen", "brown", "darkblue",
18 "purple", "teal", "lightgray"]
19 light_colors = ["darkgray", "red", "green", "yellow", "blue",
20 "fuchsia", "turquoise", "white"]
21
22 x = 30
23 for d, l in zip(dark_colors, light_colors):
24 codes[d] = esc + "%im" % x
25 codes[l] = esc + "%i;01m" % x
26 x += 1
27
28 del d, l, x
29
30 codes["darkteal"] = codes["turquoise"]
31 codes["darkyellow"] = codes["brown"]
32 codes["fuscia"] = codes["fuchsia"]
33
34
35 def _str_to_type(obj, strtype):
36 """Helper for ansiformat and colorize"""
37 if isinstance(obj, type(strtype)):
38 return obj
39 return obj.encode('ascii')
40
41
42 def colorize(color_key, text):
43 """Returns an ANSI formatted text with the given color."""
44 return _str_to_type(codes[color_key], text) + text + \
45 _str_to_type(codes["reset"], text)
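# Illustrative sketch (not part of the upstream source): colorize()
# simply brackets the text between the escape code for the given color
# key and the reset sequence; _str_to_type keeps the escape codes in
# the same string type as the text on Python 2:
#
#     colorize('red', 'error')
#     # -> '\x1b[31;01merror\x1b[39;49;00m'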
0 # -*- coding: utf-8 -*-
1 """
2 logbook.base
3 ~~~~~~~~~~~~
4
5 Base implementation for logbook.
6
7 :copyright: (c) 2010 by Armin Ronacher, Georg Brandl.
8 :license: BSD, see LICENSE for more details.
9 """
10 import os
11 import sys
12 try:
13 import thread
14 except ImportError:
15 # for python 3.1,3.2
16 import _thread as thread
17 import threading
18 import traceback
19 from itertools import chain
20 from weakref import ref as weakref
21 from datetime import datetime
22
23 from logbook.helpers import to_safe_json, parse_iso8601, cached_property, \
24 PY2, u, string_types, iteritems, integer_types
25 try:
26 from logbook._speedups import group_reflected_property, \
27 ContextStackManager, StackedObject
28 except ImportError:
29 from logbook._fallback import group_reflected_property, \
30 ContextStackManager, StackedObject
31
32 _datetime_factory = datetime.utcnow
33 def set_datetime_format(datetime_format):
34 """
35 Set the format for the datetime objects created, which are then
36 made available as the :py:attr:`LogRecord.time` attribute of
37 :py:class:`LogRecord` instances.
38
39 :param datetime_format: Indicates how to generate datetime objects. Possible values are:
40
41 "utc"
42 :py:attr:`LogRecord.time` will be a datetime in UTC time zone (but not time zone aware)
43 "local"
44 :py:attr:`LogRecord.time` will be a datetime in local time zone (but not time zone aware)
45
46 This function defaults to creating datetime objects in UTC time,
47 using `datetime.utcnow()
48 <http://docs.python.org/3/library/datetime.html#datetime.datetime.utcnow>`_,
49 so that logbook logs all times in UTC by default. This is
50 recommended when you have multiple software modules or
51 instances running on different servers in different time zones,
52 as it makes correlating logs across those servers simpler and
53 less error-prone.
54
55 On the other hand, if all your software modules run in the same
56 time zone and you have to correlate logs with third-party modules
57 that already log in local time, it can be more convenient to have
58 logbook log in local time instead of UTC. Local time logging can
59 be enabled like this::
60
61 import logbook
62 from datetime import datetime
63 logbook.set_datetime_format("local")
64
65 """
66 global _datetime_factory
67 if datetime_format == "utc":
68 _datetime_factory = datetime.utcnow
69 elif datetime_format == "local":
70 _datetime_factory = datetime.now
71 else:
72 raise ValueError("Invalid value %r. Valid values are 'utc' and 'local'." % (datetime_format,))
73
74 # make sure to sync these up with _speedups.pyx
75 CRITICAL = 6
76 ERROR = 5
77 WARNING = 4
78 NOTICE = 3
79 INFO = 2
80 DEBUG = 1
81 NOTSET = 0
82
83 _level_names = {
84 CRITICAL: 'CRITICAL',
85 ERROR: 'ERROR',
86 WARNING: 'WARNING',
87 NOTICE: 'NOTICE',
88 INFO: 'INFO',
89 DEBUG: 'DEBUG',
90 NOTSET: 'NOTSET'
91 }
92 _reverse_level_names = dict((v, k) for (k, v) in iteritems(_level_names))
93 _missing = object()
94
95
96 # on Python 3 we can safely assume that frame filenames will be in
97 # unicode; on Python 2 we have to apply a trick.
98 if PY2:
99 def _convert_frame_filename(fn):
100 if isinstance(fn, unicode):
101 fn = fn.decode(sys.getfilesystemencoding() or 'utf-8',
102 'replace')
103 return fn
104 else:
105 def _convert_frame_filename(fn):
106 return fn
107
108
109 def level_name_property():
110 """Returns a property that reflects the level as name from
111 the internal level attribute.
112 """
113
114 def _get_level_name(self):
115 return get_level_name(self.level)
116
117 def _set_level_name(self, level):
118 self.level = lookup_level(level)
119 return property(_get_level_name, _set_level_name,
120 doc='The level as unicode string')
121
122
123 def lookup_level(level):
124 """Return the integer representation of a logging level."""
125 if isinstance(level, integer_types):
126 return level
127 try:
128 return _reverse_level_names[level]
129 except KeyError:
130 raise LookupError('unknown level name %s' % level)
131
132
133 def get_level_name(level):
134 """Return the textual representation of logging level 'level'."""
135 try:
136 return _level_names[level]
137 except KeyError:
138 raise LookupError('unknown level')
139
140
141 class ExtraDict(dict):
142 """A dictionary which returns ``u''`` on missing keys."""
143
144 if sys.version_info[:2] < (2, 5):
145 def __getitem__(self, key):
146 try:
147 return dict.__getitem__(self, key)
148 except KeyError:
149 return u''
150 else:
151 def __missing__(self, key):
152 return u''
153
154 def copy(self):
155 return self.__class__(self)
156
157 def __repr__(self):
158 return '%s(%s)' % (
159 self.__class__.__name__,
160 dict.__repr__(self)
161 )
162
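# Illustrative sketch (not part of the upstream source): because
# ExtraDict never raises KeyError, format strings may reference extra
# fields that a processor might not have injected:
#
#     extra = ExtraDict(ip='127.0.0.1')
#     extra['ip']       # -> '127.0.0.1'
#     extra['missing']  # -> u'' rather than a KeyError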
163
164 class _ExceptionCatcher(object):
165 """Helper for exception caught blocks."""
166
167 def __init__(self, logger, args, kwargs):
168 self.logger = logger
169 self.args = args
170 self.kwargs = kwargs
171
172 def __enter__(self):
173 return self
174
175 def __exit__(self, exc_type, exc_value, tb):
176 if exc_type is not None:
177 kwargs = self.kwargs.copy()
178 kwargs['exc_info'] = (exc_type, exc_value, tb)
179 self.logger.exception(*self.args, **kwargs)
180 return True
181
182
183 class ContextObject(StackedObject):
184 """An object that can be bound to a context. It is managed by the
185 :class:`ContextStackManager`"""
186
187 #: subclasses have to instantiate a :class:`ContextStackManager`
188 #: object on this attribute which is then shared for all the
189 #: subclasses of it.
190 stack_manager = None
191
192 def push_thread(self):
193 """Pushes the context object to the thread stack."""
194 self.stack_manager.push_thread(self)
195
196 def pop_thread(self):
197 """Pops the context object from the stack."""
198 popped = self.stack_manager.pop_thread()
199 assert popped is self, 'popped unexpected object'
200
201 def push_application(self):
202 """Pushes the context object to the application stack."""
203 self.stack_manager.push_application(self)
204
205 def pop_application(self):
206 """Pops the context object from the stack."""
207 popped = self.stack_manager.pop_application()
208 assert popped is self, 'popped unexpected object'
209
210
211 class NestedSetup(StackedObject):
212 """A nested setup can be used to configure multiple handlers
213 and processors at once.
214 """
215
216 def __init__(self, objects=None):
217 self.objects = list(objects or ())
218
219 def push_application(self):
220 for obj in self.objects:
221 obj.push_application()
222
223 def pop_application(self):
224 for obj in reversed(self.objects):
225 obj.pop_application()
226
227 def push_thread(self):
228 for obj in self.objects:
229 obj.push_thread()
230
231 def pop_thread(self):
232 for obj in reversed(self.objects):
233 obj.pop_thread()
234
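# Illustrative sketch (not part of the upstream source): bundling a
# handler chain and a processor into one stacked unit. NullHandler and
# FileHandler live in logbook.handlers; the file name is made up:
#
#     setup = NestedSetup([
#         NullHandler(),
#         FileHandler('app.log', bubble=True),
#         Processor(lambda record: record.extra.update(env='prod')),
#     ])
#     with setup.threadbound():
#         ...  # records here pass through the whole setup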
235
236 class Processor(ContextObject):
237 """Can be pushed to a stack to inject additional information into
238 a log record as necessary::
239
240 def inject_ip(record):
241 record.extra['ip'] = '127.0.0.1'
242
243 with Processor(inject_ip):
244 ...
245 """
246
247 stack_manager = ContextStackManager()
248
249 def __init__(self, callback=None):
250 #: the callback that was passed to the constructor
251 self.callback = callback
252
253 def process(self, record):
254 """Called with the log record that should be overridden. The default
255 implementation calls :attr:`callback` if it is not `None`.
256 """
257 if self.callback is not None:
258 self.callback(record)
259
260
261 class _InheritedType(object):
262 __slots__ = ()
263
264 def __repr__(self):
265 return 'Inherit'
266
267 def __reduce__(self):
268 return 'Inherit'
269 Inherit = _InheritedType()
270
271
272 class Flags(ContextObject):
273 """Allows flags to be pushed on a flag stack. Currently two flags
274 are available:
275
276 `errors`
277 Can be set to override the current error behaviour. This value is
278 used when logging calls fail. The default behaviour is to print
279 the stacktrace to stderr, but this can be overridden:
280
281 =================== ==========================================
282 ``'silent'`` fail silently
283 ``'raise'`` raise a catchable exception
284 ``'print'`` print the stacktrace to stderr (default)
285 =================== ==========================================
286
287 `introspection`
288 Can be used to disable frame introspection. This can give a
289 speedup on production systems if you are using a JIT compiled
290 Python interpreter such as pypy. The default is `True`.
291
292 Note that the default setup of some of the handlers (mail for
293 instance) includes frame-dependent information which will
294 not be available when introspection is disabled.
295
296 Example usage::
297
298 with Flags(errors='silent'):
299 ...
300 """
301 stack_manager = ContextStackManager()
302
303 def __init__(self, **flags):
304 self.__dict__.update(flags)
305
306 @staticmethod
307 def get_flag(flag, default=None):
308 """Looks up the current value of a specific flag."""
309 for flags in Flags.stack_manager.iter_context_objects():
310 val = getattr(flags, flag, Inherit)
311 if val is not Inherit:
312 return val
313 return default
314
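# Illustrative sketch (not part of the upstream source): get_flag()
# walks the stack from the innermost push outwards, so nested blocks
# override outer ones and unset flags fall back to the default:
#
#     with Flags(errors='silent'):
#         with Flags(errors='raise'):
#             Flags.get_flag('errors')          # -> 'raise'
#         Flags.get_flag('errors')              # -> 'silent'
#     Flags.get_flag('errors', default='print') # -> 'print'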
315
316 def _create_log_record(cls, dict):
317 """Extra function for reduce because on Python 3 unbound methods
318 can no longer be pickled.
319 """
320 return cls.from_dict(dict)
321
322
323 class LogRecord(object):
324 """A LogRecord instance represents an event being logged.
325
326 LogRecord instances are created every time something is logged. They
327 contain all the information pertinent to the event being logged. The
328 main information passed in is in msg and args.
329 """
330 _pullable_information = frozenset((
331 'func_name', 'module', 'filename', 'lineno', 'process_name', 'thread',
332 'thread_name', 'formatted_exception', 'message', 'exception_name',
333 'exception_message'
334 ))
335 _noned_on_close = frozenset(('exc_info', 'frame', 'calling_frame'))
336
337 #: can be overridden by a handler to not close the record. This could
338 #: lead to memory leaks so it should be used carefully.
339 keep_open = False
340
341 #: the time of the log record creation as :class:`datetime.datetime`
342 #: object. This information is unavailable until the record has
343 #: been heavy-initialized.
344 time = None
345
346 #: a flag that is `True` once the log record has been heavy-initialized,
347 #: which is not the case by default.
348 heavy_initialized = False
349
350 #: a flag that is `True` when heavy initialization is no longer possible
351 late = False
352
353 #: a flag that is `True` once all the information that becomes
354 #: unavailable on close has been pulled into the record.
355 information_pulled = False
356
357 def __init__(self, channel, level, msg, args=None, kwargs=None,
358 exc_info=None, extra=None, frame=None, dispatcher=None):
359 #: the name of the logger that created it or any other textual
360 #: channel description. This is a descriptive name and can be
361 #: used for filtering.
362 self.channel = channel
363 #: The message of the log record as new-style format string.
364 self.msg = msg
365 #: the positional arguments for the format string.
366 self.args = args or ()
367 #: the keyword arguments for the format string.
368 self.kwargs = kwargs or {}
369 #: the level of the log record as integer.
370 self.level = level
371 #: optional exception information. If set, this is a tuple in the
372 #: form ``(exc_type, exc_value, tb)`` as returned by
373 #: :func:`sys.exc_info`.
374 #: This parameter can also be ``True``, which would cause the exception info tuple
375 #: to be fetched for you.
376 self.exc_info = exc_info
377 #: optional extra information as dictionary. This is the place
378 #: where custom log processors can attach custom context sensitive
379 #: data.
380 self.extra = ExtraDict(extra or ())
381 #: If available, optionally the interpreter frame that pulled the
382 #: heavy init. This usually points to somewhere in the dispatcher.
383 #: Might not be available for all calls and is removed when the log
384 #: record is closed.
385 self.frame = frame
386 #: the PID of the current process
387 self.process = None
388 if dispatcher is not None:
389 dispatcher = weakref(dispatcher)
390 self._dispatcher = dispatcher
391
392 def heavy_init(self):
393 """Does the heavy initialization that could be expensive. This must
394 not be called from a higher stack level than where the log record
395 was created; also, the later the initialization happens, the less
396 accurate information such as the record time will be.
397
398 This is internally used by the record dispatching system and usually
399 something not to worry about.
400 """
401 if self.heavy_initialized:
402 return
403 assert not self.late, 'heavy init is no longer possible'
404 self.heavy_initialized = True
405 self.process = os.getpid()
406 self.time = _datetime_factory()
407 if self.frame is None and Flags.get_flag('introspection', True):
408 self.frame = sys._getframe(1)
409 if self.exc_info is True:
410 self.exc_info = sys.exc_info()
411
412 def pull_information(self):
413 """A helper function that pulls all frame-related information into
414 the object so that this information is available after the log
415 record was closed.
416 """
417 if self.information_pulled:
418 return
419 # due to how cached_property is implemented, the attribute access
420 # has the side effect of caching the attribute on the instance of
421 # the class.
422 for key in self._pullable_information:
423 getattr(self, key)
424 self.information_pulled = True
425
426 def close(self):
427 """Closes the log record. This will set the frame and calling
428 frame to `None` and frame-related information will no longer be
429 available unless it was pulled in first (:meth:`pull_information`).
430 This makes a log record safe for pickling and will clean up
431 memory that might be still referenced by the frames.
432 """
433 for key in self._noned_on_close:
434 setattr(self, key, None)
435 self.late = True
436
437 def __reduce_ex__(self, protocol):
438 return _create_log_record, (type(self), self.to_dict())
439
440 def to_dict(self, json_safe=False):
441 """Exports the log record into a dictionary without the information
442 that cannot be safely serialized like interpreter frames and
443 tracebacks.
444 """
445 self.pull_information()
446 rv = {}
447 for key, value in iteritems(self.__dict__):
448 if key[:1] != '_' and key not in self._noned_on_close:
449 rv[key] = value
450 # the extra dict is exported as regular dict
451 rv['extra'] = dict(rv['extra'])
452 if json_safe:
453 return to_safe_json(rv)
454 return rv
455
456 @classmethod
457 def from_dict(cls, d):
458 """Creates a log record from an exported dictionary. This also
459 supports JSON exported dictionaries.
460 """
461 rv = object.__new__(cls)
462 rv.update_from_dict(d)
463 return rv
464
465 def update_from_dict(self, d):
466 """Like the :meth:`from_dict` classmethod, but will update the
467 instance in place. Helpful for constructors.
468 """
469 self.__dict__.update(d)
470 for key in self._noned_on_close:
471 setattr(self, key, None)
472 self._information_pulled = True
473 self._channel = None
474 if isinstance(self.time, string_types):
475 self.time = parse_iso8601(self.time)
476 return self
477
478 @cached_property
479 def message(self):
480 """The formatted message."""
481 if not (self.args or self.kwargs):
482 return self.msg
483 try:
484 try:
485 return self.msg.format(*self.args, **self.kwargs)
486 except UnicodeDecodeError:
487 # Assume a unicode message but mixed-up args
488 msg = self.msg.encode('utf-8', 'replace')
489 return msg.format(*self.args, **self.kwargs)
490 except (UnicodeEncodeError, AttributeError):
491 # we catch AttributeError since if msg is bytes, it won't have the 'format' method
492 if sys.exc_info()[0] is AttributeError and (PY2 or not isinstance(self.msg, bytes)):
493 # this is not the case we thought it is...
494 raise
495 # Assume encoded message with unicode args.
496 # The assumption of utf8 as input encoding is just a guess,
497 # but this codepath is unlikely (if the message is a constant
498 # string in the caller's source file)
499 msg = self.msg.decode('utf-8', 'replace')
500 return msg.format(*self.args, **self.kwargs)
501
502 except Exception:
503 # this obviously will not give a proper error message if the
504 # information was not pulled and the log record no longer has
505 # access to the frame. But there is not much we can do about
506 # that.
507 e = sys.exc_info()[1]
508 errormsg = ('Could not format message with provided '
509 'arguments: {err}\n msg={msg!r}\n '
510 'args={args!r} \n kwargs={kwargs!r}.\n'
511 'Happened in file {file}, line {lineno}').format(
512 err=e, msg=self.msg, args=self.args,
513 kwargs=self.kwargs, file=self.filename,
514 lineno=self.lineno
515 )
516 if PY2:
517 errormsg = errormsg.encode('utf-8')
518 raise TypeError(errormsg)
519
520 level_name = level_name_property()
521
522 @cached_property
523 def calling_frame(self):
524 """The frame in which the record has been created. This only
525 exists for as long the log record is not closed.
526 """
527 frm = self.frame
528 globs = globals()
529 while frm is not None and frm.f_globals is globs:
530 frm = frm.f_back
531 return frm
532
533 @cached_property
534 def func_name(self):
535 """The name of the function that triggered the log call if
536 available. Requires a frame or that :meth:`pull_information`
537 was called before.
538 """
539 cf = self.calling_frame
540 if cf is not None:
541 return cf.f_code.co_name
542
543 @cached_property
544 def module(self):
545 """The name of the module that triggered the log call if
546 available. Requires a frame or that :meth:`pull_information`
547 was called before.
548 """
549 cf = self.calling_frame
550 if cf is not None:
551 return cf.f_globals.get('__name__')
552
553 @cached_property
554 def filename(self):
555 """The filename of the module in which the record has been created.
556 Requires a frame or that :meth:`pull_information` was called before.
557 """
558 cf = self.calling_frame
559 if cf is not None:
560 fn = cf.f_code.co_filename
561 if fn[:1] == '<' and fn[-1:] == '>':
562 return fn
563 return _convert_frame_filename(os.path.abspath(fn))
564
565 @cached_property
566 def lineno(self):
567 """The line number of the file in which the record has been created.
568 Requires a frame or that :meth:`pull_information` was called before.
569 """
570 cf = self.calling_frame
571 if cf is not None:
572 return cf.f_lineno
573
574 @cached_property
575 def thread(self):
576 """The ident of the thread. This is evaluated late and means that
577 if the log record is passed to another thread, :meth:`pull_information`
578 was called in the old thread.
579 """
580 return thread.get_ident()
581
582 @cached_property
583 def thread_name(self):
584 """The name of the thread. This is evaluated late and means that
585 if the log record is passed to another thread, :meth:`pull_information`
586 was called in the old thread.
587 """
588 return threading.currentThread().getName()
589
590 @cached_property
591 def process_name(self):
592 """The name of the process in which the record has been created."""
593 # Errors may occur if multiprocessing has not finished loading
594 # yet - e.g. if a custom import hook causes third-party code
595 # to run when multiprocessing calls import. See issue 8200
596 # for an example
597 mp = sys.modules.get('multiprocessing')
598 if mp is not None: # pragma: no cover
599 try:
600 return mp.current_process().name
601 except Exception:
602 pass
603
604 @cached_property
605 def formatted_exception(self):
606 """The formatted exception which caused this record to be created
607 in case there was any.
608 """
609 if self.exc_info is not None:
610 rv = ''.join(traceback.format_exception(*self.exc_info))
611 if PY2:
612 rv = rv.decode('utf-8', 'replace')
613 return rv.rstrip()
614
615 @cached_property
616 def exception_name(self):
617 """The name of the exception."""
618 if self.exc_info is not None:
619 cls = self.exc_info[0]
620 return u(cls.__module__ + '.' + cls.__name__)
621
622 @property
623 def exception_shortname(self):
624 """An abbreviated exception name (no import path)"""
625 return self.exception_name.rsplit('.')[-1]
626
627 @cached_property
628 def exception_message(self):
629 """The message of the exception."""
630 if self.exc_info is not None:
631 val = self.exc_info[1]
632 try:
633 return u(str(val))
634 except UnicodeError:
635 return str(val).decode('utf-8', 'replace')
636
637 @property
638 def dispatcher(self):
639 """The dispatcher that created the log record. Might not exist because
640 a log record does not have to be created from a logger or other
641 dispatcher to be handled by logbook. If this is set, it will point to
642 an object that implements the :class:`~logbook.base.RecordDispatcher`
643 interface.
644 """
645 if self._dispatcher is not None:
646 return self._dispatcher()
647
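# Illustrative sketch (not part of the upstream source): messages are
# formatted lazily with str.format() semantics, and to_dict() /
# from_dict() round-trip everything that survives close():
#
#     rec = LogRecord('app', INFO, 'hello {0} from {place}',
#                     args=('world',), kwargs={'place': 'logbook'})
#     rec.message                          # -> 'hello world from logbook'
#     clone = LogRecord.from_dict(rec.to_dict())
#     clone.message                        # -> identical formatted message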
648
649 class LoggerMixin(object):
650 """This mixin class defines and implements the "usual" logger
651 interface (i.e. the descriptive logging functions).
652
653 Classes using this mixin have to implement a :meth:`!handle` method which
654 takes a :class:`~logbook.LogRecord` and passes it along.
655 """
656
657 #: The name of the minimum logging level required for records to be
658 #: created.
659 level_name = level_name_property()
660
661 def debug(self, *args, **kwargs):
662 """Logs a :class:`~logbook.LogRecord` with the level set
663 to :data:`~logbook.DEBUG`.
664 """
665 if not self.disabled and DEBUG >= self.level:
666 self._log(DEBUG, args, kwargs)
667
668 def info(self, *args, **kwargs):
669 """Logs a :class:`~logbook.LogRecord` with the level set
670 to :data:`~logbook.INFO`.
671 """
672 if not self.disabled and INFO >= self.level:
673 self._log(INFO, args, kwargs)
674
675 def warn(self, *args, **kwargs):
676 """Logs a :class:`~logbook.LogRecord` with the level set
677 to :data:`~logbook.WARNING`. This function has an alias
678 named :meth:`warning`.
679 """
680 if not self.disabled and WARNING >= self.level:
681 self._log(WARNING, args, kwargs)
682
683 def warning(self, *args, **kwargs):
684 """Alias for :meth:`warn`."""
685 return self.warn(*args, **kwargs)
686
687 def notice(self, *args, **kwargs):
688 """Logs a :class:`~logbook.LogRecord` with the level set
689 to :data:`~logbook.NOTICE`.
690 """
691 if not self.disabled and NOTICE >= self.level:
692 self._log(NOTICE, args, kwargs)
693
694 def error(self, *args, **kwargs):
695 """Logs a :class:`~logbook.LogRecord` with the level set
696 to :data:`~logbook.ERROR`.
697 """
698 if not self.disabled and ERROR >= self.level:
699 self._log(ERROR, args, kwargs)
700
701 def exception(self, *args, **kwargs):
702 """Works exactly like :meth:`error` just that the message
703 is optional and exception information is recorded.
704 """
705 if self.disabled or ERROR < self.level:
706 return
707 if not args:
708 args = ('Uncaught exception occurred',)
709 if 'exc_info' not in kwargs:
710 exc_info = sys.exc_info()
711 assert exc_info[0] is not None, 'no exception occurred'
712 kwargs.setdefault('exc_info', exc_info)
713 return self.error(*args, **kwargs)
714
715 def critical(self, *args, **kwargs):
716 """Logs a :class:`~logbook.LogRecord` with the level set
717 to :data:`~logbook.CRITICAL`.
718 """
719 if not self.disabled and CRITICAL >= self.level:
720 self._log(CRITICAL, args, kwargs)
721
722 def log(self, level, *args, **kwargs):
723 """Logs a :class:`~logbook.LogRecord` with the level set
724 to the `level` parameter. Because custom levels are not
725 supported by logbook, this method is mainly used to avoid
726 the use of reflection (e.g.: :func:`getattr`) for programmatic
727 logging.
728 """
729 level = lookup_level(level)
730 if level >= self.level:
731 self._log(level, args, kwargs)
732
733 def catch_exceptions(self, *args, **kwargs):
734 """A context manager that catches exceptions and calls
735 :meth:`exception` for exceptions caught that way. Example:
736
737 .. code-block:: python
738
739 with logger.catch_exceptions():
740 execute_code_that_might_fail()
741 """
742 if not args:
743 args = ('Uncaught exception occurred',)
744 return _ExceptionCatcher(self, args, kwargs)
745
746 def _log(self, level, args, kwargs):
747 exc_info = kwargs.pop('exc_info', None)
748 extra = kwargs.pop('extra', None)
749 self.make_record_and_handle(level, args[0], args[1:], kwargs,
750 exc_info, extra)
751
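# Illustrative sketch (not part of the upstream source): all the mixin
# methods funnel into _log(), so a Logger (defined below) accepts a
# new-style format string plus optional extra/exc_info keywords:
#
#     log = Logger('MyApp')
#     log.info('hello {}', 'world')
#     log.warn('disk at {pct:.0%}', pct=0.93, extra={'disk': 'sda1'})
#     try:
#         1 / 0
#     except ZeroDivisionError:
#         log.exception('math went wrong')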
752
753 class RecordDispatcher(object):
754 """A record dispatcher is the internal base class that implements
755 the logic used by the :class:`~logbook.Logger`.
756 """
757
758 #: If this is set to `True` the dispatcher information will be suppressed
759 #: for log records emitted from this logger.
760 suppress_dispatcher = False
761
762 def __init__(self, name=None, level=NOTSET):
763 #: the name of the record dispatcher
764 self.name = name
765 #: list of handlers specific for this record dispatcher
766 self.handlers = []
767 #: optionally the name of the group this logger belongs to
768 self.group = None
769 #: the level of the record dispatcher as integer
770 self.level = level
771
772 disabled = group_reflected_property('disabled', False)
773 level = group_reflected_property('level', NOTSET, fallback=NOTSET)
774
775 def handle(self, record):
776 """Call the handlers for the specified record. This is
777 invoked automatically when a record should be handled.
778 The default implementation checks if the dispatcher is disabled
779 and if the record level is at least the level of the record
780 dispatcher. Only then will it call the handlers
781 (:meth:`call_handlers`).
782 """
783 if not self.disabled and record.level >= self.level:
784 self.call_handlers(record)
785
786 def make_record_and_handle(self, level, msg, args, kwargs, exc_info,
787 extra):
788 """Creates a record from some given arguments and heads it
789 over to the handling system.
790 """
791 # The channel information can be useful for some use cases which is
792 # why we keep it on there. The log record however internally will
793 # only store a weak reference to the channel, so it might disappear
794 # from one instruction to the other. It will also disappear when
795 # a log record is transmitted to another process etc.
796 channel = None
797 if not self.suppress_dispatcher:
798 channel = self
799
800 record = LogRecord(self.name, level, msg, args, kwargs, exc_info,
801 extra, None, channel)
802
803 # after handling, the log record is closed, which removes some
804 # references that would otherwise require a GC run on cpython,
805 # such as the current stack frame and exception information.
806 # However there are some use cases for keeping records open a
807 # little longer. For example the test handler keeps log records
808 # open until it is closed itself, to allow assertions based on
809 # stack frames and exception information.
810 try:
811 self.handle(record)
812 finally:
813 record.late = True
814 if not record.keep_open:
815 record.close()
816
817 def call_handlers(self, record):
818 """Pass a record to all relevant handlers in the following
819 order:
820
821 - per-dispatcher handlers are handled first
822 - afterwards all the current context handlers in the
823 order they were pushed
824
825 Before the first handler is invoked, the record is processed
826 (:meth:`process_record`).
827 """
828 # for performance reasons records are only heavy-initialized
829 # and processed if at least one of the handlers actually wants
830 # to handle them and that handler is not a black hole.
831 record_initialized = False
832
833 # Both logger attached handlers as well as context specific
834 # handlers are handled one after another. The latter also
835 # include global handlers.
836 for handler in chain(self.handlers,
837 Handler.stack_manager.iter_context_objects()):
838 # skip records that this handler is not interested in, based
839 # on the record and handler level or, in case should_handle
840 # was overridden, on some custom logic.
841 if not handler.should_handle(record):
842 continue
843
844 # if this is a blackhole handler, don't even try to
845 # do further processing, stop right away. Technically
846 # speaking this is not 100% correct because if the handler
847 # is bubbling we shouldn't apply this logic, but then we
848 # won't enter this branch anyways. The result is that a
849 # bubbling blackhole handler will never have this shortcut
850 # applied and do the heavy init at one point. This is fine
851 # however because a bubbling blackhole handler is not very
852 # useful in general.
853 if handler.blackhole:
854 break
855
856 # we are about to handle the record. If it was not yet
857 # processed by context-specific record processors we
858 # have to do that now and remember that we processed
859 # the record already.
860 if not record_initialized:
861 record.heavy_init()
862 self.process_record(record)
863 record_initialized = True
864
865 # a filter can still veto the handling of the record. This
866 # however is already operating on an initialized and processed
867 # record. The impact is that filters are slower than the
868 # handler's should_handle function in case there is no default
869 # handler that would handle the record (delayed init).
870 if handler.filter is not None \
871 and not handler.filter(record, handler):
872 continue
873
874 # handle the record. If the record was handled and
875 # the record is not bubbling we can abort now.
876 if handler.handle(record) and not handler.bubble:
877 break
878
879 def process_record(self, record):
880 """Processes the record with all context specific processors. This
881 can be overridden to also inject additional information as necessary
882 that can be provided by this record dispatcher.
883 """
884 if self.group is not None:
885 self.group.process_record(record)
886 for processor in Processor.stack_manager.iter_context_objects():
887 processor.process(record)
888
889
890 class Logger(RecordDispatcher, LoggerMixin):
891 """Instances of the Logger class represent a single logging channel.
892 A "logging channel" indicates an area of an application. Exactly
893 how an "area" is defined is up to the application developer.
894
895 Names used by logbook should be descriptive and are intended for user
896 display, not for filtering. Filtering should happen based on the
897 context information instead.
898
899 A logger internally is a subclass of a
900 :class:`~logbook.base.RecordDispatcher` that implements the actual
901 logic. If you want to implement a custom logger class, have a look
902 at the interface of that class as well.
903 """
904
905
906 class LoggerGroup(object):
907 """A LoggerGroup represents a group of loggers. It cannot emit log
908 messages on its own but it can be used to set the disabled flag and
909 log level of all loggers in the group.
910
911 Furthermore the :meth:`process_record` method of the group is called
912 by any logger in the group which by default calls into the
913 :attr:`processor` callback function.
914 """
915
916 def __init__(self, loggers=None, level=NOTSET, processor=None):
917 #: a list of all loggers on the logger group. Use the
918 #: :meth:`add_logger` and :meth:`remove_logger` methods to add
919 #: or remove loggers from this list.
920 self.loggers = []
921 if loggers is not None:
922 for logger in loggers:
923 self.add_logger(logger)
924
925 #: the level of the group. This is reflected to the loggers
926 #: in the group unless they overrode the setting.
927 self.level = lookup_level(level)
928 #: the disabled flag for all loggers in the group, unless
929 #: the loggers overrode the setting.
930 self.disabled = False
931 #: an optional callback function that is executed to process
932 #: the log records of all loggers in the group.
933 self.processor = processor
934
935 def add_logger(self, logger):
936 """Adds a logger to this group."""
937 assert logger.group is None, 'Logger already belongs to a group'
938 logger.group = self
939 self.loggers.append(logger)
940
941 def remove_logger(self, logger):
942 """Removes a logger from the group."""
943 self.loggers.remove(logger)
944 logger.group = None
945
946 def process_record(self, record):
947 """Like :meth:`Logger.process_record` but for all loggers in
948 the group. By default this calls into the :attr:`processor`
949 function if it's not `None`.
950 """
951 if self.processor is not None:
952 self.processor(record)
953
954
955 _default_dispatcher = RecordDispatcher()
956
957
958 def dispatch_record(record):
959 """Passes a record on to the handlers on the stack. This is useful when
960 log records are created programmatically and already have all the
961 information attached and should be dispatched independent of a logger.
962 """
963 _default_dispatcher.call_handlers(record)
964
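# Illustrative sketch (not part of the upstream source): dispatching a
# hand-built record without a logger; the record is heavy-initialized
# during dispatch by call_handlers. StderrHandler is from
# logbook.handlers:
#
#     record = LogRecord('sys.events', WARNING, 'queue depth {0}', (42,))
#     with StderrHandler():
#         dispatch_record(record)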
965
966 # at this point it is safe to import Handler
967 from logbook.handlers import Handler
0 # -*- coding: utf-8 -*-
1 """
2 logbook.compat
3 ~~~~~~~~~~~~~~
4
5 Backwards compatibility with stdlib's logging package and the
6 warnings module.
7
8 :copyright: (c) 2010 by Armin Ronacher, Georg Brandl.
9 :license: BSD, see LICENSE for more details.
10 """
11 import sys
12 import logging
13 import warnings
14 import logbook
15 from datetime import date, datetime
16
17 from logbook.helpers import u, string_types, iteritems
18
19 _epoch_ord = date(1970, 1, 1).toordinal()
20
21
22 def redirect_logging():
23 """Permanently redirects logging to the stdlib. This also
24 removes all otherwise registered handlers on root logger of
25 the logging system but leaves the other loggers untouched.
26 """
27 del logging.root.handlers[:]
28 logging.root.addHandler(RedirectLoggingHandler())
29
30
31 class redirected_logging(object):
32 """Temporarily redirects logging for all threads and reverts
33 it later to the old handlers. Mainly used by the internal
34 unittests::
35
36 from logbook.compat import redirected_logging
37 with redirected_logging():
38 ...
39 """
40 def __init__(self):
41 self.old_handlers = logging.root.handlers[:]
42
43 def start(self):
44 redirect_logging()
45
46 def end(self, etype=None, evalue=None, tb=None):
47 logging.root.handlers[:] = self.old_handlers
48
49 __enter__ = start
50 __exit__ = end
51
52
53 class RedirectLoggingHandler(logging.Handler):
54 """A handler for the stdlib's logging system that redirects
55 transparently to logbook. This is used by the
56 :func:`redirect_logging` and :func:`redirected_logging`
57 functions.
58
59 If you want to customize the redirecting you can subclass it.
60 """
61
62 def __init__(self):
63 logging.Handler.__init__(self)
64
65 def convert_level(self, level):
66 """Converts a logging level into a logbook level."""
67 if level >= logging.CRITICAL:
68 return logbook.CRITICAL
69 if level >= logging.ERROR:
70 return logbook.ERROR
71 if level >= logging.WARNING:
72 return logbook.WARNING
73 if level >= logging.INFO:
74 return logbook.INFO
75 return logbook.DEBUG
76
77 def find_extra(self, old_record):
78 """Tries to find custom data from the old logging record. The
79 return value is a dictionary that is merged with the log record
80 extra dictionaries.
81 """
82 rv = vars(old_record).copy()
83 for key in ('name', 'msg', 'args', 'levelname', 'levelno',
84 'pathname', 'filename', 'module', 'exc_info',
85 'exc_text', 'lineno', 'funcName', 'created',
86 'msecs', 'relativeCreated', 'thread', 'threadName',
87 'processName', 'process'):
88 rv.pop(key, None)
89 return rv
90
91 def find_caller(self, old_record):
92 """Tries to find the caller that issued the call."""
93 frm = sys._getframe(2)
94 while frm is not None:
95 if frm.f_globals is globals() or \
96 frm.f_globals is logbook.base.__dict__ or \
97 frm.f_globals is logging.__dict__:
98 frm = frm.f_back
99 else:
100 return frm
101
102 def convert_time(self, timestamp):
103 """Converts the UNIX timestamp of the old record into a
104 datetime object as used by logbook.
105 """
106 return datetime.utcfromtimestamp(timestamp)
107
108 def convert_record(self, old_record):
109 """Converts an old logging record into a logbook log record."""
110 record = logbook.LogRecord(old_record.name,
111 self.convert_level(old_record.levelno),
112 old_record.getMessage(),
113 None, None, old_record.exc_info,
114 self.find_extra(old_record),
115 self.find_caller(old_record))
116 record.time = self.convert_time(old_record.created)
117 return record
118
119 def emit(self, record):
120 logbook.dispatch_record(self.convert_record(record))
121
122
123 class LoggingHandler(logbook.Handler):
124 """Does the opposite of the :class:`RedirectLoggingHandler`, it sends
125 messages from logbook to logging. Because of that, it's a very bad
126 idea to configure both.
127
128 This handler is for logbook and will pass stuff over to a logger
129 from the standard library.
130
131 Example usage::
132
133 from logbook.compat import LoggingHandler, warn
134 with LoggingHandler():
135 warn('This goes to logging')
136 """
137
138 def __init__(self, logger=None, level=logbook.NOTSET, filter=None,
139 bubble=False):
140 logbook.Handler.__init__(self, level, filter, bubble)
141 if logger is None:
142 logger = logging.getLogger()
143 elif isinstance(logger, string_types):
144 logger = logging.getLogger(logger)
145 self.logger = logger
146
147 def get_logger(self, record):
148 """Returns the logger to use for this record. This implementation
149 always return :attr:`logger`.
150 """
151 return self.logger
152
153 def convert_level(self, level):
154 """Converts a logbook level into a logging level."""
155 if level >= logbook.CRITICAL:
156 return logging.CRITICAL
157 if level >= logbook.ERROR:
158 return logging.ERROR
159 if level >= logbook.WARNING:
160 return logging.WARNING
161 if level >= logbook.INFO:
162 return logging.INFO
163 return logging.DEBUG
164
165 def convert_time(self, dt):
166 """Converts a datetime object into a timestamp."""
167 year, month, day, hour, minute, second = dt.utctimetuple()[:6]
168 days = date(year, month, 1).toordinal() - _epoch_ord + day - 1
169 hours = days * 24 + hour
170 minutes = hours * 60 + minute
171 seconds = minutes * 60 + second
172 return seconds
173
174 def convert_record(self, old_record):
175 """Converts a record from logbook to logging."""
176 if sys.version_info >= (2, 5):
177 # make sure 2to3 does not screw this up
178 optional_kwargs = {'func': getattr(old_record, 'func_name')}
179 else:
180 optional_kwargs = {}
181 record = logging.LogRecord(old_record.channel,
182 self.convert_level(old_record.level),
183 old_record.filename,
184 old_record.lineno,
185 old_record.message,
186 (), old_record.exc_info,
187 **optional_kwargs)
188 for key, value in iteritems(old_record.extra):
189 record.__dict__.setdefault(key, value)
190 record.created = self.convert_time(old_record.time)
191 return record
192
193 def emit(self, record):
194 self.get_logger(record).handle(self.convert_record(record))
195
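# Illustrative sketch (not part of the upstream source): the epoch
# arithmetic in convert_time above, worked for 1970-01-02 03:04:05 UTC:
#
#     days    = toordinal(1970, 1, 1) - _epoch_ord + 2 - 1  # = 1
#     hours   = 1 * 24 + 3                                  # = 27
#     minutes = 27 * 60 + 4                                 # = 1624
#     seconds = 1624 * 60 + 5                               # = 97445
#
# which agrees with calendar.timegm((1970, 1, 2, 3, 4, 5, 0, 0, 0)).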
196
197 def redirect_warnings():
198 """Like :func:`redirected_warnings` but will redirect all warnings
199 until the shutdown of the interpreter:
200
201 .. code-block:: python
202
203 from logbook.compat import redirect_warnings
204 redirect_warnings()
205 """
206 redirected_warnings().__enter__()
207
208
209 class redirected_warnings(object):
210 """A context manager that copies and restores the warnings filter upon
211 exiting the context, and logs warnings using the logbook system.
212
213 The :attr:`~logbook.LogRecord.channel` attribute of the log record will be
214 the import name of the warning.
215
216 Example usage:
217
218 .. code-block:: python
219
220 from logbook.compat import redirected_warnings
221 from warnings import warn
222
223 with redirected_warnings():
224 warn(DeprecationWarning('logging should be deprecated'))
225 """
226
227 def __init__(self):
228 self._entered = False
229
230 def message_to_unicode(self, message):
231 try:
232 return u(str(message))
233 except UnicodeError:
234 return str(message).decode('utf-8', 'replace')
235
236 def make_record(self, message, exception, filename, lineno):
237 category = exception.__name__
238 if exception.__module__ not in ('exceptions', 'builtins'):
239 category = exception.__module__ + '.' + category
240 rv = logbook.LogRecord(category, logbook.WARNING, message)
241 # we don't know the caller, but we get that information from the
242 # warning system. Just attach them.
243 rv.filename = filename
244 rv.lineno = lineno
245 return rv
246
247 def start(self):
248 if self._entered: # pragma: no cover
249 raise RuntimeError("Cannot enter %r twice" % self)
250 self._entered = True
251 self._filters = warnings.filters
252 warnings.filters = self._filters[:]
253 self._showwarning = warnings.showwarning
254
255 def showwarning(message, category, filename, lineno,
256 file=None, line=None):
257 message = self.message_to_unicode(message)
258 record = self.make_record(message, category, filename, lineno)
259 logbook.dispatch_record(record)
260 warnings.showwarning = showwarning
261
262 def end(self, etype=None, evalue=None, tb=None):
263 if not self._entered: # pragma: no cover
264 raise RuntimeError("Cannot exit %r without entering first" % self)
265 warnings.filters = self._filters
266 warnings.showwarning = self._showwarning
267
268 __enter__ = start
269 __exit__ = end
0 # -*- coding: utf-8 -*-
1 """
2 logbook.handlers
3 ~~~~~~~~~~~~~~~~
4
5 The handler interface and builtin handlers.
6
7 :copyright: (c) 2010 by Armin Ronacher, Georg Brandl.
8 :license: BSD, see LICENSE for more details.
9 """
10 import os
11 import re
12 import sys
13 import stat
14 import errno
15 import socket
16 try:
17 from hashlib import sha1
18 except ImportError:
19 from sha import new as sha1
20 import threading
21 import traceback
22 from datetime import datetime, timedelta
23 from threading import Lock
24 from collections import deque
25
26 from logbook.base import CRITICAL, ERROR, WARNING, NOTICE, INFO, DEBUG, \
27 NOTSET, level_name_property, _missing, lookup_level, \
28 Flags, ContextObject, ContextStackManager
29 from logbook.helpers import rename, b, _is_text_stream, is_unicode, PY2, \
30 zip, xrange, string_types, integer_types, iteritems, reraise
31
32
33 DEFAULT_FORMAT_STRING = (
34 u'[{record.time:%Y-%m-%d %H:%M}] '
35 u'{record.level_name}: {record.channel}: {record.message}'
36 )
37 SYSLOG_FORMAT_STRING = u'{record.channel}: {record.message}'
38 NTLOG_FORMAT_STRING = u'''\
39 Message Level: {record.level_name}
40 Location: {record.filename}:{record.lineno}
41 Module: {record.module}
42 Function: {record.func_name}
43 Exact Time: {record.time:%Y-%m-%d %H:%M:%S}
44
45 Event provided Message:
46
47 {record.message}
48 '''
49 TEST_FORMAT_STRING = \
50 u'[{record.level_name}] {record.channel}: {record.message}'
51 MAIL_FORMAT_STRING = u'''\
52 Subject: {handler.subject}
53
54 Message type: {record.level_name}
55 Location: {record.filename}:{record.lineno}
56 Module: {record.module}
57 Function: {record.func_name}
58 Time: {record.time:%Y-%m-%d %H:%M:%S}
59
60 Message:
61
62 {record.message}
63 '''
64 MAIL_RELATED_FORMAT_STRING = u'''\
65 Message type: {record.level_name}
66 Location: {record.filename}:{record.lineno}
67 Module: {record.module}
68 Function: {record.func_name}
69 {record.message}
70 '''
71
72 SYSLOG_PORT = 514
73
74 REGTYPE = type(re.compile("I'm a regular expression!"))
75
76 def create_syshandler(application_name, level=NOTSET):
77 """Creates the handler the operating system provides. On Unix systems
78 this creates a :class:`SyslogHandler`, on Windows systems it will
79 create a :class:`NTEventLogHandler`.
80 """
81 if os.name == 'nt':
82 return NTEventLogHandler(application_name, level=level)
83 return SyslogHandler(application_name, level=level)
84
85
86 class _HandlerType(type):
87 """The metaclass of handlers injects a destructor if the class has an
88 overridden close method. This makes it possible for the default
89 handler class, as well as all subclasses that don't need cleanup, to
90 be collected with less overhead.
91 """
92
93 def __new__(cls, name, bases, d):
94 # aha, that thing has a custom close method. We will need a magic
95 # __del__ for it to be called on cleanup.
96 if bases != (ContextObject,) and 'close' in d and '__del__' not in d \
97 and not any(hasattr(x, '__del__') for x in bases):
98 def _magic_del(self):
99 try:
100 self.close()
101 except Exception:
102 # del is also invoked when init fails, so we better just
103 # ignore any exception that might be raised here
104 pass
105 d['__del__'] = _magic_del
106 return type.__new__(cls, name, bases, d)
107
108
109 class Handler(ContextObject):
110 """Handler instances dispatch logging events to specific destinations.
111
112 The base handler class. Acts as a placeholder which defines the Handler
113 interface. Handlers can optionally use Formatter instances to format
114 records as desired. By default, no formatter is specified; in this case,
115 the 'raw' message as determined by record.message is logged.
116
117 To bind a handler you can use the :meth:`push_application` and
118 :meth:`push_thread` methods. This will push the handler on a stack of
119 handlers. To undo this, use the :meth:`pop_application` and
120 :meth:`pop_thread` methods::
121
122 handler = MyHandler()
123 handler.push_application()
124 # all here goes to that handler
125 handler.pop_application()
126
127 By default messages sent to that handler will not go to a handler on
128 an outer level of the stack, if handled. This can be changed by
129 setting bubbling to `True`. This setup for example would not have
130 any effect::
131
132 handler = NullHandler(bubble=False)
133 handler.push_application()
134
135 Whereas this setup disables all logging for the application::
136
137 handler = NullHandler()
138 handler.push_application()
139
140 There are also context managers to set up the handler for the duration
141 of a `with`-block::
142
143 with handler.applicationbound():
144 ...
145
146 with handler.threadbound():
147 ...
148
149 Because `threadbound` is a common operation, it is aliased to a
150 `with` on the handler itself::
151
152 with handler:
153 ...
154 """
155 __metaclass__ = _HandlerType
156
157 stack_manager = ContextStackManager()
158
159 #: a flag for this handler that can be set to `True` for handlers that
160 #: are consuming log records but are not actually displaying them. This
161 #: flag is set for the :class:`NullHandler` for instance.
162 blackhole = False
163
164 def __init__(self, level=NOTSET, filter=None, bubble=False):
165 #: the level for the handler. Defaults to `NOTSET` which
166 #: consumes all entries.
167 self.level = lookup_level(level)
168 #: the formatter to be used on records. This is a function
169 #: that is passed a log record as first argument and the
170 #: handler as second and returns something formatted
171 #: (usually a unicode string)
172 self.formatter = None
173 #: the filter to be used with this handler
174 self.filter = filter
175 #: the bubble flag of this handler
176 self.bubble = bubble
177
178 level_name = level_name_property()
179
180 def format(self, record):
181 """Formats a record with the given formatter. If no formatter
182 is set, the record message is returned. Generally speaking the
183 return value is most likely a unicode string, but nothing in
184 the handler interface requires a formatter to return a unicode
185 string.
186
187 The combination of a handler and formatter might have the
188 formatter return an XML element tree for example.
189 """
190 if self.formatter is None:
191 return record.message
192 return self.formatter(record, self)
193
194 def should_handle(self, record):
195 """Returns `True` if this handler wants to handle the record. The
196 default implementation checks the level.
197 """
198 return record.level >= self.level
199
200 def handle(self, record):
201 """Emits the record and falls back. It tries to :meth:`emit` the
202 record and if that fails, it will call into :meth:`handle_error` with
203 the record and traceback. This function itself will always emit
204 when called, even if the logger level is higher than the record's
205 level.
206
207 If this method returns `False` it signals to the calling function that
208 no recording took place in which case it will automatically bubble.
209 This should not be used to signal error situations. The default
210 implementation always returns `True`.
211 """
212 try:
213 self.emit(record)
214 except Exception:
215 self.handle_error(record, sys.exc_info())
216 return True
217
218 def emit(self, record):
219 """Emit the specified logging record. This should take the
220 record and deliver it to wherever the handler sends formatted
221 log records.
222 """
223
224 def emit_batch(self, records, reason):
225 """Some handlers may internally queue up records and want to forward
226 them at once to another handler. For example the
227 :class:`~logbook.FingersCrossedHandler` internally buffers
228 records until a level threshold is reached in which case the buffer
229 is sent to this method and not :meth:`emit` for each record.
230
231 The default behaviour is to call :meth:`emit` for each record in
232 the buffer, but handlers can use this to optimize log handling. For
233 instance the mail handler will try to batch up items into one mail
234 rather than emitting a mail for each record in the buffer.
235
236 Note that unlike :meth:`emit` there is no wrapper method like
237 :meth:`handle` that does error handling. The reason is that this
238 is intended to be used by other handlers which are already protected
239 against internal breakage.
240
241 `reason` is a string that specifies the reason why :meth:`emit_batch`
242 was called, and not :meth:`emit`. The following are valid values:
243
244 ``'buffer'``
245 Records were buffered for performance reasons or because the
246 records were sent to another process and buffering was the only
247 possible way. For most handlers this should be equivalent to
248 calling :meth:`emit` for each record.
249
250 ``'escalation'``
251 Escalation means that records were buffered until the threshold
252 was exceeded. In this case, the last record in the iterable is the
253 record that triggered the call.
254
255 ``'group'``
256 All the records in the iterable belong to the same logical
            component and happened in the same process. For example there
            might have been a long-running computation, and the handler is
            invoked with the batch of records produced there. This is similar
            to the escalation reason, except that the first record is the
            significant one, not the last.
262
263 If a subclass overrides this and does not want to handle a specific
264 reason it must call into the superclass because more reasons might
265 appear in future releases.
266
267 Example implementation::
268
269 def emit_batch(self, records, reason):
270 if reason not in ('escalation', 'group'):
271 Handler.emit_batch(self, records, reason)
272 ...
273 """
274 for record in records:
275 self.emit(record)
276
277 def close(self):
278 """Tidy up any resources used by the handler. This is automatically
279 called by the destructor of the class as well, but explicit calls are
280 encouraged. Make sure that multiple calls to close are possible.
281 """
282
283 def handle_error(self, record, exc_info):
284 """Handle errors which occur during an emit() call. The behaviour of
285 this function depends on the current `errors` setting.
286
287 Check :class:`Flags` for more information.
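
        For example, to make handler errors raise instead of being printed
        (handy in test suites)::

            from logbook import Flags
            with Flags(errors='raise'):
                ...  # errors during emit() now propagate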
288 """
289 try:
290 behaviour = Flags.get_flag('errors', 'print')
291 if behaviour == 'raise':
292 reraise(exc_info[0], exc_info[1], exc_info[2])
293 elif behaviour == 'print':
294 traceback.print_exception(*(exc_info + (None, sys.stderr)))
295 sys.stderr.write('Logged from file %s, line %s\n' % (
296 record.filename, record.lineno))
297 except IOError:
298 pass
299
300
301 class NullHandler(Handler):
302 """A handler that does nothing, meant to be inserted in a handler chain
303 with ``bubble=False`` to stop further processing.
304 """
305 blackhole = True
306
307
308 class WrapperHandler(Handler):
309 """A class that can wrap another handler and redirect all calls to the
310 wrapped handler::
311
312 handler = WrapperHandler(other_handler)
313
314 Subclasses should override the :attr:`_direct_attrs` attribute as
315 necessary.
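
    A minimal subclass sketch (``prefix`` is an illustrative extra
    attribute)::

        class PrefixedHandler(WrapperHandler):
            _direct_attrs = WrapperHandler._direct_attrs | \
                frozenset(['prefix'])

            def __init__(self, handler, prefix=u''):
                WrapperHandler.__init__(self, handler)
                self.prefix = prefix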
316 """
317
318 #: a set of direct attributes that are not forwarded to the inner
319 #: handler. This has to be extended as necessary.
320 _direct_attrs = frozenset(['handler'])
321
322 def __init__(self, handler):
323 self.handler = handler
324
325 def __getattr__(self, name):
326 return getattr(self.handler, name)
327
328 def __setattr__(self, name, value):
329 if name in self._direct_attrs:
330 return Handler.__setattr__(self, name, value)
331 setattr(self.handler, name, value)
332
333
334 class StringFormatter(object):
335 """Many handlers format the log entries to text format. This is done
    by a callable that is passed a log record and returns a unicode
337 string. The default formatter for this is implemented as a class so
338 that it becomes possible to hook into every aspect of the formatting
339 process.
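
    A minimal usage sketch (the format string is illustrative)::

        formatter = StringFormatter(u'{record.level_name}: {record.message}')
        handler.formatter = formatter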
340 """
341
342 def __init__(self, format_string):
343 self.format_string = format_string
344
345 def _get_format_string(self):
346 return self._format_string
347
348 def _set_format_string(self, value):
349 self._format_string = value
350 self._formatter = value
351
352 format_string = property(_get_format_string, _set_format_string)
353 del _get_format_string, _set_format_string
354
355 def format_record(self, record, handler):
356 try:
357 return self._formatter.format(record=record, handler=handler)
358 except UnicodeEncodeError:
359 # self._formatter is a str, but some of the record items
360 # are unicode
361 fmt = self._formatter.decode('ascii', 'replace')
362 return fmt.format(record=record, handler=handler)
363 except UnicodeDecodeError:
364 # self._formatter is unicode, but some of the record items
365 # are non-ascii str
366 fmt = self._formatter.encode('ascii', 'replace')
367 return fmt.format(record=record, handler=handler)
368
369 def format_exception(self, record):
370 return record.formatted_exception
371
372 def __call__(self, record, handler):
373 line = self.format_record(record, handler)
374 exc = self.format_exception(record)
375 if exc:
376 line += u'\n' + exc
377 return line
378
379
380 class StringFormatterHandlerMixin(object):
381 """A mixin for handlers that provides a default integration for the
    :class:`~logbook.StringFormatter` class. This is used by default for
    all handlers that log text to a destination.
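
    With this mixin, assigning to ``format_string`` transparently creates
    a new :attr:`formatter_class` instance behind the scenes::

        handler.format_string = u'{record.time}: {record.message}'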
384 """
385
386 #: a class attribute for the default format string to use if the
387 #: constructor was invoked with `None`.
388 default_format_string = DEFAULT_FORMAT_STRING
389
390 #: the class to be used for string formatting
391 formatter_class = StringFormatter
392
393 def __init__(self, format_string):
394 if format_string is None:
395 format_string = self.default_format_string
396
397 #: the currently attached format string as new-style format
398 #: string.
399 self.format_string = format_string
400
401 def _get_format_string(self):
402 if isinstance(self.formatter, StringFormatter):
403 return self.formatter.format_string
404
405 def _set_format_string(self, value):
406 if value is None:
407 self.formatter = None
408 else:
409 self.formatter = self.formatter_class(value)
410
411 format_string = property(_get_format_string, _set_format_string)
412 del _get_format_string, _set_format_string
413
414
415 class HashingHandlerMixin(object):
416 """Mixin class for handlers that are hashing records."""
417
418 def hash_record_raw(self, record):
419 """Returns a hashlib object with the hash of the record."""
420 hash = sha1()
421 hash.update(('%d\x00' % record.level).encode('ascii'))
422 hash.update((record.channel or u'').encode('utf-8') + b('\x00'))
423 hash.update(record.filename.encode('utf-8') + b('\x00'))
424 hash.update(b(str(record.lineno)))
425 return hash
426
427 def hash_record(self, record):
428 """Returns a hash for a record to keep it apart from other records.
429 This is used for the `record_limit` feature. By default
        the level, channel, filename and location are hashed.
431
432 Calls into :meth:`hash_record_raw`.
433 """
434 return self.hash_record_raw(record).hexdigest()
435
436 _NUMBER_TYPES = integer_types + (float,)
437
438 class LimitingHandlerMixin(HashingHandlerMixin):
439 """Mixin class for handlers that want to limit emitting records.
440
    In the default setting it delivers all log records, but it can be set
    up to send no more than n mails for the same record within a given
    time span, so that a message triggered several times a minute does not
    overload an inbox and the network. The following example limits it to
    60 mails an hour::
445
446 from datetime import timedelta
447 handler = MailHandler(record_limit=1,
448 record_delta=timedelta(minutes=1))
449 """
450
451 def __init__(self, record_limit, record_delta):
452 self.record_limit = record_limit
453 self._limit_lock = Lock()
454 self._record_limits = {}
455 if record_delta is None:
456 record_delta = timedelta(seconds=60)
457 elif isinstance(record_delta, _NUMBER_TYPES):
458 record_delta = timedelta(seconds=record_delta)
459 self.record_delta = record_delta
460
461 def check_delivery(self, record):
462 """Helper function to check if data should be delivered by this
463 handler. It returns a tuple in the form ``(suppression_count,
464 allow)``. The first one is the number of items that were not delivered
465 so far, the second is a boolean flag if a delivery should happen now.
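
        A sketch of how an :meth:`emit` implementation might use it
        (``deliver`` is an illustrative method)::

            def emit(self, record):
                suppressed, allow = self.check_delivery(record)
                if allow:
                    self.deliver(record, suppressed)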
466 """
467 if self.record_limit is None:
468 return 0, True
469 hash = self.hash_record(record)
470 self._limit_lock.acquire()
471 try:
472 allow_delivery = None
473 suppression_count = old_count = 0
474 first_count = now = datetime.utcnow()
475
476 if hash in self._record_limits:
477 last_count, suppression_count = self._record_limits[hash]
478 if last_count + self.record_delta < now:
479 allow_delivery = True
480 else:
481 first_count = last_count
482 old_count = suppression_count
483
484 if not suppression_count and \
485 len(self._record_limits) >= self.max_record_cache:
                cache_items = sorted(self._record_limits.items())
                # prune the oldest record_cache_prune fraction of the cache
                del cache_items[:int(len(cache_items) *
                                     self.record_cache_prune)]
                self._record_limits = dict(cache_items)
491
492 self._record_limits[hash] = (first_count, old_count + 1)
493
494 if allow_delivery is None:
495 allow_delivery = old_count < self.record_limit
496 return suppression_count, allow_delivery
497 finally:
498 self._limit_lock.release()
499
500
501 class StreamHandler(Handler, StringFormatterHandlerMixin):
502 """a handler class which writes logging records, appropriately formatted,
503 to a stream. note that this class does not close the stream, as sys.stdout
504 or sys.stderr may be used.
505
506 If a stream handler is used in a `with` statement directly it will
507 :meth:`close` on exit to support this pattern::
508
509 with StreamHandler(my_stream):
510 pass
511
512 .. admonition:: Notes on the encoding
513
514 On Python 3, the encoding parameter is only used if a stream was
515 passed that was opened in binary mode.
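
    A typical construction (stream and format string are illustrative)::

        import sys
        handler = StreamHandler(sys.stdout,
                                format_string=u'{record.message}')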
516 """
517
518 def __init__(self, stream, level=NOTSET, format_string=None,
519 encoding=None, filter=None, bubble=False):
520 Handler.__init__(self, level, filter, bubble)
521 StringFormatterHandlerMixin.__init__(self, format_string)
522 self.encoding = encoding
523 self.lock = threading.Lock()
524 if stream is not _missing:
525 self.stream = stream
526
527 def __enter__(self):
528 return Handler.__enter__(self)
529
530 def __exit__(self, exc_type, exc_value, tb):
531 self.close()
532 return Handler.__exit__(self, exc_type, exc_value, tb)
533
534 def close(self):
535 """The default stream handler implementation is not to close
536 the wrapped stream but to flush it.
537 """
538 self.flush()
539
540 def flush(self):
541 """Flushes the inner stream."""
542 if self.stream is not None and hasattr(self.stream, 'flush'):
543 self.stream.flush()
544
545 def format_and_encode(self, record):
546 """Formats the record and encodes it to the stream encoding."""
547 stream = self.stream
548 rv = self.format(record) + '\n'
549 if (PY2 and is_unicode(rv)) or \
550 not (PY2 or is_unicode(rv) or _is_text_stream(stream)):
551 enc = self.encoding
552 if enc is None:
553 enc = getattr(stream, 'encoding', None) or 'utf-8'
554 rv = rv.encode(enc, 'replace')
555 return rv
556
557 def write(self, item):
558 """Writes a bytestring to the stream."""
559 self.stream.write(item)
560
561 def emit(self, record):
562 self.lock.acquire()
563 try:
564 self.write(self.format_and_encode(record))
565 self.flush()
566 finally:
567 self.lock.release()
568
569
570 class FileHandler(StreamHandler):
571 """A handler that does the task of opening and closing files for you.
572 By default the file is opened right away, but you can also `delay`
573 the open to the point where the first message is written.
574
575 This is useful when the handler is used with a
576 :class:`~logbook.FingersCrossedHandler` or something similar.
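
    Example with a delayed open (path and logger are illustrative)::

        handler = FileHandler('/var/log/app.log', delay=True)
        with handler:
            logger.warn('this first record opens the file')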
577 """
578
579 def __init__(self, filename, mode='a', encoding=None, level=NOTSET,
580 format_string=None, delay=False, filter=None, bubble=False):
581 if encoding is None:
582 encoding = 'utf-8'
583 StreamHandler.__init__(self, None, level, format_string,
584 encoding, filter, bubble)
585 self._filename = filename
586 self._mode = mode
587 if delay:
588 self.stream = None
589 else:
590 self._open()
591
592 def _open(self, mode=None):
593 if mode is None:
594 mode = self._mode
595 self.stream = open(self._filename, mode)
596
597 def write(self, item):
598 if self.stream is None:
599 self._open()
600 if not PY2 and isinstance(item, bytes):
601 self.stream.buffer.write(item)
602 else:
603 self.stream.write(item)
604
605 def close(self):
606 if self.stream is not None:
607 self.flush()
608 self.stream.close()
609 self.stream = None
610
611 def format_and_encode(self, record):
612 # encodes based on the stream settings, so the stream has to be
613 # open at the time this function is called.
614 if self.stream is None:
615 self._open()
616 return StreamHandler.format_and_encode(self, record)
617
618 def emit(self, record):
619 if self.stream is None:
620 self._open()
621 StreamHandler.emit(self, record)
622
623
624 class MonitoringFileHandler(FileHandler):
625 """A file handler that will check if the file was moved while it was
626 open. This might happen on POSIX systems if an application like
627 logrotate moves the logfile over.
628
629 Because of different IO concepts on Windows, this handler will not
    work on a Windows system.
631 """
632
633 def __init__(self, filename, mode='a', encoding='utf-8', level=NOTSET,
634 format_string=None, delay=False, filter=None, bubble=False):
635 FileHandler.__init__(self, filename, mode, encoding, level,
636 format_string, delay, filter, bubble)
637 if os.name == 'nt':
638 raise RuntimeError('MonitoringFileHandler '
639 'does not support Windows')
640 self._query_fd()
641
642 def _query_fd(self):
643 if self.stream is None:
644 self._last_stat = None, None
645 else:
646 try:
647 st = os.stat(self._filename)
648 except OSError:
649 e = sys.exc_info()[1]
                if e.errno != errno.ENOENT:
651 raise
652 self._last_stat = None, None
653 else:
654 self._last_stat = st[stat.ST_DEV], st[stat.ST_INO]
655
656 def emit(self, record):
657 last_stat = self._last_stat
658 self._query_fd()
659 if last_stat != self._last_stat:
660 self.close()
661 FileHandler.emit(self, record)
662 self._query_fd()
663
664
665 class StderrHandler(StreamHandler):
666 """A handler that writes to what is currently at stderr. At the first
667 glace this appears to just be a :class:`StreamHandler` with the stream
668 set to :data:`sys.stderr` but there is a difference: if the handler is
669 created globally and :data:`sys.stderr` changes later, this handler will
670 point to the current `stderr`, whereas a stream handler would still
671 point to the old one.
672 """
673
674 def __init__(self, level=NOTSET, format_string=None, filter=None,
675 bubble=False):
676 StreamHandler.__init__(self, _missing, level, format_string,
677 None, filter, bubble)
678
679 @property
680 def stream(self):
681 return sys.stderr
682
683
684 class RotatingFileHandlerBase(FileHandler):
685 """Baseclass for rotating file handlers.
686
687 .. versionchanged:: 0.3
688 This class was deprecated because the interface is not flexible
689 enough to implement proper file rotations. The former builtin
690 subclasses no longer use this baseclass.
691 """
692
693 def __init__(self, *args, **kwargs):
694 from warnings import warn
695 warn(DeprecationWarning('This class is deprecated'))
696 FileHandler.__init__(self, *args, **kwargs)
697
698 def emit(self, record):
699 self.lock.acquire()
700 try:
701 msg = self.format_and_encode(record)
702 if self.should_rollover(record, msg):
703 self.perform_rollover()
704 self.write(msg)
705 self.flush()
706 finally:
707 self.lock.release()
708
709 def should_rollover(self, record, formatted_record):
710 """Called with the log record and the return value of the
        :meth:`format_and_encode` method. The method then has to
712 return `True` if a rollover should happen or `False`
713 otherwise.
714
715 .. versionchanged:: 0.3
716 Previously this method was called with the number of bytes
717 returned by :meth:`format_and_encode`
718 """
719 return False
720
721 def perform_rollover(self):
722 """Called if :meth:`should_rollover` returns `True` and has
723 to perform the actual rollover.
724 """
725
726
727 class RotatingFileHandler(FileHandler):
728 """This handler rotates based on file size. Once the maximum size
729 is reached it will reopen the file and start with an empty file
    again. The old file is moved into a backup copy (named like the
    file, but with a ``.backupnumber`` appended; so if you are logging
    to ``mail``, the first backup copy is called ``mail.1``).
734
735 The default number of backups is 5. Unlike a similar logger from
736 the logging package, the backup count is mandatory because just
737 reopening the file is dangerous as it deletes the log without
738 asking on rollover.
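
    Example (sizes illustrative)::

        handler = RotatingFileHandler('/var/log/app.log',
                                      max_size=5 * 1024 * 1024,
                                      backup_count=10)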
739 """
740
741 def __init__(self, filename, mode='a', encoding='utf-8', level=NOTSET,
742 format_string=None, delay=False, max_size=1024 * 1024,
743 backup_count=5, filter=None, bubble=False):
744 FileHandler.__init__(self, filename, mode, encoding, level,
745 format_string, delay, filter, bubble)
746 self.max_size = max_size
747 self.backup_count = backup_count
748 assert backup_count > 0, 'at least one backup file has to be ' \
749 'specified'
750
    def should_rollover(self, record, byte_count):
        # seek to the end of the stream to determine the current file size
        self.stream.seek(0, 2)
        return self.stream.tell() + byte_count >= self.max_size
754
755 def perform_rollover(self):
756 self.stream.close()
757 for x in xrange(self.backup_count - 1, 0, -1):
758 src = '%s.%d' % (self._filename, x)
759 dst = '%s.%d' % (self._filename, x + 1)
760 try:
761 rename(src, dst)
762 except OSError:
763 e = sys.exc_info()[1]
764 if e.errno != errno.ENOENT:
765 raise
766 rename(self._filename, self._filename + '.1')
767 self._open('w')
768
769 def emit(self, record):
770 self.lock.acquire()
771 try:
772 msg = self.format_and_encode(record)
773 if self.should_rollover(record, len(msg)):
774 self.perform_rollover()
775 self.write(msg)
776 self.flush()
777 finally:
778 self.lock.release()
779
780
781 class TimedRotatingFileHandler(FileHandler):
782 """This handler rotates based on dates. It will name the file
783 after the filename you specify and the `date_format` pattern.
784
785 So for example if you configure your handler like this::
786
787 handler = TimedRotatingFileHandler('/var/log/foo.log',
788 date_format='%Y-%m-%d')
789
790 The filenames for the logfiles will look like this::
791
792 /var/log/foo-2010-01-10.log
793 /var/log/foo-2010-01-11.log
794 ...
795
796 By default it will keep all these files around, if you want to limit
797 them, you can specify a `backup_count`.
798 """
799
800 def __init__(self, filename, mode='a', encoding='utf-8', level=NOTSET,
801 format_string=None, date_format='%Y-%m-%d',
802 backup_count=0, filter=None, bubble=False):
803 FileHandler.__init__(self, filename, mode, encoding, level,
804 format_string, True, filter, bubble)
805 self.date_format = date_format
806 self.backup_count = backup_count
807 self._fn_parts = os.path.splitext(os.path.abspath(filename))
808 self._filename = None
809
810 def _get_timed_filename(self, datetime):
811 return datetime.strftime('-' + self.date_format) \
812 .join(self._fn_parts)
813
814 def should_rollover(self, record):
815 fn = self._get_timed_filename(record.time)
816 rv = self._filename is not None and self._filename != fn
817 # remember the current filename. In case rv is True, the rollover
818 # performing function will already have the new filename
819 self._filename = fn
820 return rv
821
822 def files_to_delete(self):
823 """Returns a list with the files that have to be deleted when
        a rollover occurs.
825 """
826 directory = os.path.dirname(self._filename)
827 files = []
828 for filename in os.listdir(directory):
829 filename = os.path.join(directory, filename)
830 if filename.startswith(self._fn_parts[0] + '-') and \
831 filename.endswith(self._fn_parts[1]):
832 files.append((os.path.getmtime(filename), filename))
833 files.sort()
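        # everything but the newest ``backup_count - 1`` backups is
        # returned for deletion; together with the file reopened after
        # the rollover this leaves ``backup_count`` files in place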
834 return files[:-self.backup_count + 1]
835
836 def perform_rollover(self):
837 self.stream.close()
838 if self.backup_count > 0:
839 for time, filename in self.files_to_delete():
840 os.remove(filename)
841 self._open('w')
842
843 def emit(self, record):
844 self.lock.acquire()
845 try:
846 if self.should_rollover(record):
847 self.perform_rollover()
848 self.write(self.format_and_encode(record))
849 self.flush()
850 finally:
851 self.lock.release()
852
853
854 class TestHandler(Handler, StringFormatterHandlerMixin):
855 """Like a stream handler but keeps the values in memory. This
856 logger provides some ways to test for the records in memory.
857
858 Example usage::
859
860 def my_test():
861 with logbook.TestHandler() as handler:
862 logger.warn('A warning')
                assert handler.has_warning('A warning')
864 ...
865 """
866 default_format_string = TEST_FORMAT_STRING
867
868 def __init__(self, level=NOTSET, format_string=None, filter=None,
869 bubble=False):
870 Handler.__init__(self, level, filter, bubble)
871 StringFormatterHandlerMixin.__init__(self, format_string)
872 #: captures the :class:`LogRecord`\s as instances
873 self.records = []
874 self._formatted_records = []
875 self._formatted_record_cache = []
876
877 def close(self):
878 """Close all records down when the handler is closed."""
879 for record in self.records:
880 record.close()
881
882 def emit(self, record):
883 # keep records open because we will want to examine them after the
884 # call to the emit function. If we don't do that, the traceback
885 # attribute and other things will already be removed.
886 record.keep_open = True
887 self.records.append(record)
888
889 @property
890 def formatted_records(self):
891 """Captures the formatted log records as unicode strings."""
892 if len(self._formatted_record_cache) != len(self.records) or \
893 any(r1 != r2 for r1, r2 in
894 zip(self.records, self._formatted_record_cache)):
895 self._formatted_records = [self.format(r) for r in self.records]
896 self._formatted_record_cache = list(self.records)
897 return self._formatted_records
898
899 @property
900 def has_criticals(self):
901 """`True` if any :data:`CRITICAL` records were found."""
902 return any(r.level == CRITICAL for r in self.records)
903
904 @property
905 def has_errors(self):
906 """`True` if any :data:`ERROR` records were found."""
907 return any(r.level == ERROR for r in self.records)
908
909 @property
910 def has_warnings(self):
911 """`True` if any :data:`WARNING` records were found."""
912 return any(r.level == WARNING for r in self.records)
913
914 @property
915 def has_notices(self):
916 """`True` if any :data:`NOTICE` records were found."""
917 return any(r.level == NOTICE for r in self.records)
918
919 @property
920 def has_infos(self):
921 """`True` if any :data:`INFO` records were found."""
922 return any(r.level == INFO for r in self.records)
923
924 @property
925 def has_debugs(self):
926 """`True` if any :data:`DEBUG` records were found."""
927 return any(r.level == DEBUG for r in self.records)
928
929 def has_critical(self, *args, **kwargs):
930 """`True` if a specific :data:`CRITICAL` log record exists.
931
932 See :ref:`probe-log-records` for more information.
933 """
934 kwargs['level'] = CRITICAL
935 return self._test_for(*args, **kwargs)
936
937 def has_error(self, *args, **kwargs):
938 """`True` if a specific :data:`ERROR` log record exists.
939
940 See :ref:`probe-log-records` for more information.
941 """
942 kwargs['level'] = ERROR
943 return self._test_for(*args, **kwargs)
944
945 def has_warning(self, *args, **kwargs):
946 """`True` if a specific :data:`WARNING` log record exists.
947
948 See :ref:`probe-log-records` for more information.
949 """
950 kwargs['level'] = WARNING
951 return self._test_for(*args, **kwargs)
952
953 def has_notice(self, *args, **kwargs):
954 """`True` if a specific :data:`NOTICE` log record exists.
955
956 See :ref:`probe-log-records` for more information.
957 """
958 kwargs['level'] = NOTICE
959 return self._test_for(*args, **kwargs)
960
961 def has_info(self, *args, **kwargs):
962 """`True` if a specific :data:`INFO` log record exists.
963
964 See :ref:`probe-log-records` for more information.
965 """
966 kwargs['level'] = INFO
967 return self._test_for(*args, **kwargs)
968
969 def has_debug(self, *args, **kwargs):
970 """`True` if a specific :data:`DEBUG` log record exists.
971
972 See :ref:`probe-log-records` for more information.
973 """
974 kwargs['level'] = DEBUG
975 return self._test_for(*args, **kwargs)
976
977 def _test_for(self, message=None, channel=None, level=None):
978 def _match(needle, haystack):
979 "Matches both compiled regular expressions and strings"
980 if isinstance(needle, REGTYPE) and needle.search(haystack):
981 return True
982 if needle == haystack:
983 return True
984 return False
985 for record in self.records:
986 if level is not None and record.level != level:
987 continue
988 if channel is not None and record.channel != channel:
989 continue
990 if message is not None and not _match(message, record.message):
991 continue
992 return True
993 return False
994
995
996 class MailHandler(Handler, StringFormatterHandlerMixin,
997 LimitingHandlerMixin):
998 """A handler that sends error mails. The format string used by this
    handler is the contents of the mail plus the headers. This is handy
1000 if you want to use a custom subject or ``X-`` header::
1001
1002 handler = MailHandler(format_string='''\
1003 Subject: {record.level_name} on My Application
1004
1005 {record.message}
1006 {record.extra[a_custom_injected_record]}
1007 ''')
1008
1009 This handler will always emit text-only mails for maximum portability and
1010 best performance.
1011
    In the default setting it delivers all log records, but it can be set
    up to send no more than n mails for the same record within a given
    time span, so that a message triggered several times a minute does not
    overload an inbox and the network. The following example limits it to
    60 mails an hour::
1016
1017 from datetime import timedelta
1018 handler = MailHandler(record_limit=1,
1019 record_delta=timedelta(minutes=1))
1020
1021 The default timedelta is 60 seconds (one minute).
1022
    The mail handler sends mails in a blocking manner. If you are not
    using a centralized system for logging these messages (for example
    with the help of ZeroMQ) and the logging slows you down, you can
    wrap the handler in a :class:`logbook.queues.ThreadedWrapperHandler`
    that will send the mails in a background thread.
1028
1029 .. versionchanged:: 0.3
1030 The handler supports the batching system now.
1031 """
1032 default_format_string = MAIL_FORMAT_STRING
1033 default_related_format_string = MAIL_RELATED_FORMAT_STRING
1034 default_subject = u'Server Error in Application'
1035
1036 #: the maximum number of record hashes in the cache for the limiting
1037 #: feature. Afterwards, record_cache_prune percent of the oldest
1038 #: entries are removed
1039 max_record_cache = 512
1040
1041 #: the number of items to prune on a cache overflow in percent.
1042 record_cache_prune = 0.333
1043
1044 def __init__(self, from_addr, recipients, subject=None,
1045 server_addr=None, credentials=None, secure=None,
1046 record_limit=None, record_delta=None, level=NOTSET,
1047 format_string=None, related_format_string=None,
1048 filter=None, bubble=False):
1049 Handler.__init__(self, level, filter, bubble)
1050 StringFormatterHandlerMixin.__init__(self, format_string)
1051 LimitingHandlerMixin.__init__(self, record_limit, record_delta)
1052 self.from_addr = from_addr
1053 self.recipients = recipients
1054 if subject is None:
1055 subject = self.default_subject
1056 self.subject = subject
1057 self.server_addr = server_addr
1058 self.credentials = credentials
1059 self.secure = secure
1060 if related_format_string is None:
1061 related_format_string = self.default_related_format_string
1062 self.related_format_string = related_format_string
1063
1064 def _get_related_format_string(self):
1065 if isinstance(self.related_formatter, StringFormatter):
1066 return self.related_formatter.format_string
1067 def _set_related_format_string(self, value):
1068 if value is None:
1069 self.related_formatter = None
1070 else:
1071 self.related_formatter = self.formatter_class(value)
1072 related_format_string = property(_get_related_format_string,
1073 _set_related_format_string)
1074 del _get_related_format_string, _set_related_format_string
1075
1076 def get_recipients(self, record):
1077 """Returns the recipients for a record. By default the
1078 :attr:`recipients` attribute is returned for all records.
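
        A sketch of a per-level override (the extra address is
        illustrative)::

            def get_recipients(self, record):
                if record.level >= CRITICAL:
                    return self.recipients + ['oncall@example.com']
                return self.recipients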
1079 """
1080 return self.recipients
1081
1082 def message_from_record(self, record, suppressed):
1083 """Creates a new message for a record as email message object
1084 (:class:`email.message.Message`). `suppressed` is the number
1085 of mails not sent if the `record_limit` feature is active.
1086 """
1087 from email.message import Message
1088 from email.header import Header
1089 msg = Message()
1090 msg.set_charset('utf-8')
1091 lineiter = iter(self.format(record).splitlines())
1092 for line in lineiter:
1093 if not line:
1094 break
1095 h, v = line.split(':', 1)
1096 # We could probably just encode everything. For the moment encode
        # only what is really needed to avoid breaking a couple of tests.
1098 try:
1099 v.encode('ascii')
1100 except UnicodeEncodeError:
1101 msg[h.strip()] = Header(v.strip(), 'utf-8')
1102 else:
1103 msg[h.strip()] = v.strip()
1104
1105 msg.replace_header('Content-Transfer-Encoding', '8bit')
1106
1107 body = '\r\n'.join(lineiter)
1108 if suppressed:
1109 body += '\r\n\r\nThis message occurred additional %d ' \
1110 'time(s) and was suppressed' % suppressed
1111
1112 # inconsistency in Python 2.5
1113 # other versions correctly return msg.get_payload() as str
1114 if sys.version_info < (2, 6) and isinstance(body, unicode):
1115 body = body.encode('utf-8')
1116
1117 msg.set_payload(body, 'UTF-8')
1118 return msg
1119
1120 def format_related_record(self, record):
1121 """Used for format the records that led up to another record or
1122 records that are related into strings. Used by the batch formatter.
1123 """
1124 return self.related_formatter(record, self)
1125
1126 def generate_mail(self, record, suppressed=0):
1127 """Generates the final email (:class:`email.message.Message`)
1128 with headers and date. `suppressed` is the number of mails
        that were not sent if the `record_limit` feature is active.
1130 """
1131 from email.utils import formatdate
1132 msg = self.message_from_record(record, suppressed)
1133 msg['From'] = self.from_addr
1134 msg['Date'] = formatdate()
1135 return msg
1136
1137 def collapse_mails(self, mail, related, reason):
1138 """When escaling or grouped mails are """
1139 if not related:
1140 return mail
1141 if reason == 'group':
1142 title = 'Other log records in the same group'
1143 else:
1144 title = 'Log records that led up to this one'
1145 mail.set_payload('%s\r\n\r\n\r\n%s:\r\n\r\n%s' % (
1146 mail.get_payload(),
1147 title,
1148 '\r\n\r\n'.join(body.rstrip() for body in related)
1149 ))
1150 return mail
1151
1152 def get_connection(self):
1153 """Returns an SMTP connection. By default it reconnects for
1154 each sent mail.
1155 """
1156 from smtplib import SMTP, SMTP_PORT, SMTP_SSL_PORT
1157 if self.server_addr is None:
1158 host = 'localhost'
1159 port = self.secure and SMTP_SSL_PORT or SMTP_PORT
1160 else:
1161 host, port = self.server_addr
1162 con = SMTP()
1163 con.connect(host, port)
1164 if self.credentials is not None:
1165 if self.secure is not None:
1166 con.ehlo()
1167 con.starttls(*self.secure)
1168 con.ehlo()
1169 con.login(*self.credentials)
1170 return con
1171
1172 def close_connection(self, con):
1173 """Closes the connection that was returned by
1174 :meth:`get_connection`.
1175 """
1176 try:
1177 if con is not None:
1178 con.quit()
1179 except Exception:
1180 pass
1181
1182 def deliver(self, msg, recipients):
1183 """Delivers the given message to a list of recpients."""
1184 con = self.get_connection()
1185 try:
1186 con.sendmail(self.from_addr, recipients, msg.as_string())
1187 finally:
1188 self.close_connection(con)
1189
1190 def emit(self, record):
1191 suppressed = 0
1192 if self.record_limit is not None:
1193 suppressed, allow_delivery = self.check_delivery(record)
1194 if not allow_delivery:
1195 return
1196 self.deliver(self.generate_mail(record, suppressed),
1197 self.get_recipients(record))
1198
1199 def emit_batch(self, records, reason):
        if reason not in ('escalation', 'group'):
            # fall back to the generic implementation; calling
            # MailHandler.emit_batch here would recurse forever
            return Handler.emit_batch(self, records, reason)
1202 records = list(records)
1203 if not records:
1204 return
1205
        trigger = records.pop(-1 if reason == 'escalation' else 0)
1207 suppressed = 0
1208 if self.record_limit is not None:
1209 suppressed, allow_delivery = self.check_delivery(trigger)
1210 if not allow_delivery:
1211 return
1212
1213 trigger_mail = self.generate_mail(trigger, suppressed)
1214 related = [self.format_related_record(record)
1215 for record in records]
1216
1217 self.deliver(self.collapse_mails(trigger_mail, related, reason),
1218 self.get_recipients(trigger))
1219
1220
1221 class GMailHandler(MailHandler):
1222 """
1223 A customized mail handler class for sending emails via GMail (or Google Apps mail)::
1224
1225 handler = GMailHandler("my_user@gmail.com", "mypassword", ["to_user@some_mail.com"], ...) # other arguments same as MailHandler
1226
1227 .. versionadded:: 0.6.0
1228 """
1229
1230 def __init__(self, account_id, password, recipients, **kw):
1231 super(GMailHandler, self).__init__(
1232 account_id, recipients, secure=(), server_addr=("smtp.gmail.com", 587),
1233 credentials=(account_id, password), **kw)
1234
1235
1236 class SyslogHandler(Handler, StringFormatterHandlerMixin):
1237 """A handler class which sends formatted logging records to a
1238 syslog server. By default it will send to it via unix socket.
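
    Example for logging to a remote syslog daemon over UDP (the address
    is illustrative)::

        handler = SyslogHandler('myapp', address=('127.0.0.1', 514))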
1239 """
1240 default_format_string = SYSLOG_FORMAT_STRING
1241
1242 # priorities
1243 LOG_EMERG = 0 # system is unusable
1244 LOG_ALERT = 1 # action must be taken immediately
1245 LOG_CRIT = 2 # critical conditions
1246 LOG_ERR = 3 # error conditions
1247 LOG_WARNING = 4 # warning conditions
1248 LOG_NOTICE = 5 # normal but significant condition
1249 LOG_INFO = 6 # informational
1250 LOG_DEBUG = 7 # debug-level messages
1251
1252 # facility codes
1253 LOG_KERN = 0 # kernel messages
1254 LOG_USER = 1 # random user-level messages
1255 LOG_MAIL = 2 # mail system
1256 LOG_DAEMON = 3 # system daemons
1257 LOG_AUTH = 4 # security/authorization messages
1258 LOG_SYSLOG = 5 # messages generated internally by syslogd
1259 LOG_LPR = 6 # line printer subsystem
1260 LOG_NEWS = 7 # network news subsystem
1261 LOG_UUCP = 8 # UUCP subsystem
1262 LOG_CRON = 9 # clock daemon
1263 LOG_AUTHPRIV = 10 # security/authorization messages (private)
1264 LOG_FTP = 11 # FTP daemon
1265
1266 # other codes through 15 reserved for system use
1267 LOG_LOCAL0 = 16 # reserved for local use
1268 LOG_LOCAL1 = 17 # reserved for local use
1269 LOG_LOCAL2 = 18 # reserved for local use
1270 LOG_LOCAL3 = 19 # reserved for local use
1271 LOG_LOCAL4 = 20 # reserved for local use
1272 LOG_LOCAL5 = 21 # reserved for local use
1273 LOG_LOCAL6 = 22 # reserved for local use
1274 LOG_LOCAL7 = 23 # reserved for local use
1275
1276 facility_names = {
1277 'auth': LOG_AUTH,
1278 'authpriv': LOG_AUTHPRIV,
1279 'cron': LOG_CRON,
1280 'daemon': LOG_DAEMON,
1281 'ftp': LOG_FTP,
1282 'kern': LOG_KERN,
1283 'lpr': LOG_LPR,
1284 'mail': LOG_MAIL,
1285 'news': LOG_NEWS,
1286 'syslog': LOG_SYSLOG,
1287 'user': LOG_USER,
1288 'uucp': LOG_UUCP,
1289 'local0': LOG_LOCAL0,
1290 'local1': LOG_LOCAL1,
1291 'local2': LOG_LOCAL2,
1292 'local3': LOG_LOCAL3,
1293 'local4': LOG_LOCAL4,
1294 'local5': LOG_LOCAL5,
1295 'local6': LOG_LOCAL6,
1296 'local7': LOG_LOCAL7,
1297 }
1298
1299 level_priority_map = {
1300 DEBUG: LOG_DEBUG,
1301 INFO: LOG_INFO,
1302 NOTICE: LOG_NOTICE,
1303 WARNING: LOG_WARNING,
1304 ERROR: LOG_ERR,
1305 CRITICAL: LOG_CRIT
1306 }
1307
1308 def __init__(self, application_name=None, address=None,
1309 facility='user', socktype=socket.SOCK_DGRAM,
1310 level=NOTSET, format_string=None, filter=None,
1311 bubble=False):
1312 Handler.__init__(self, level, filter, bubble)
1313 StringFormatterHandlerMixin.__init__(self, format_string)
1314 self.application_name = application_name
1315
1316 if address is None:
1317 if sys.platform == 'darwin':
1318 address = '/var/run/syslog'
1319 else:
1320 address = '/dev/log'
1321
1322 self.address = address
1323 self.facility = facility
1324 self.socktype = socktype
1325
1326 if isinstance(address, string_types):
1327 self._connect_unixsocket()
1328 else:
1329 self._connect_netsocket()
1330
1331 def _connect_unixsocket(self):
1332 self.unixsocket = True
1333 self.socket = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
1334 try:
1335 self.socket.connect(self.address)
1336 except socket.error:
1337 self.socket.close()
1338 self.socket = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
1339 self.socket.connect(self.address)
1340
1341 def _connect_netsocket(self):
1342 self.unixsocket = False
1343 self.socket = socket.socket(socket.AF_INET, self.socktype)
1344 if self.socktype == socket.SOCK_STREAM:
1345 self.socket.connect(self.address)
1346 self.address = self.socket.getsockname()
1347
1348 def encode_priority(self, record):
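        # the syslog PRI value combines facility and severity as
        # ``facility * 8 + severity``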
1349 facility = self.facility_names[self.facility]
1350 priority = self.level_priority_map.get(record.level,
1351 self.LOG_WARNING)
1352 return (facility << 3) | priority
1353
1354 def emit(self, record):
1355 prefix = u''
1356 if self.application_name is not None:
1357 prefix = self.application_name + u':'
1358 self.send_to_socket((u'<%d>%s%s\x00' % (
1359 self.encode_priority(record),
1360 prefix,
1361 self.format(record)
1362 )).encode('utf-8'))
1363
1364 def send_to_socket(self, data):
1365 if self.unixsocket:
1366 try:
1367 self.socket.send(data)
1368 except socket.error:
1369 self._connect_unixsocket()
1370 self.socket.send(data)
1371 elif self.socktype == socket.SOCK_DGRAM:
1372 # the flags are no longer optional on Python 3
1373 self.socket.sendto(data, 0, self.address)
1374 else:
1375 self.socket.sendall(data)
1376
1377 def close(self):
1378 self.socket.close()
1379
1380
1381 class NTEventLogHandler(Handler, StringFormatterHandlerMixin):
1382 """A handler that sends to the NT event log system."""
1383 dllname = None
1384 default_format_string = NTLOG_FORMAT_STRING
1385
1386 def __init__(self, application_name, log_type='Application',
1387 level=NOTSET, format_string=None, filter=None,
1388 bubble=False):
1389 Handler.__init__(self, level, filter, bubble)
1390 StringFormatterHandlerMixin.__init__(self, format_string)
1391
1392 if os.name != 'nt':
            raise RuntimeError('The NTEventLogHandler requires a Windows '
1394 'operating system.')
1395
1396 try:
1397 import win32evtlogutil
1398 import win32evtlog
1399 except ImportError:
1400 raise RuntimeError('The pywin32 library is required '
1401 'for the NTEventLogHandler.')
1402
1403 self.application_name = application_name
1404 self._welu = win32evtlogutil
1405 dllname = self.dllname
1406 if not dllname:
1407 dllname = os.path.join(os.path.dirname(self._welu.__file__),
1408 '../win32service.pyd')
1409 self.log_type = log_type
1410 self._welu.AddSourceToRegistry(self.application_name, dllname,
1411 log_type)
1412
1413 self._default_type = win32evtlog.EVENTLOG_INFORMATION_TYPE
1414 self._type_map = {
1415 DEBUG: win32evtlog.EVENTLOG_INFORMATION_TYPE,
1416 INFO: win32evtlog.EVENTLOG_INFORMATION_TYPE,
1417 NOTICE: win32evtlog.EVENTLOG_INFORMATION_TYPE,
1418 WARNING: win32evtlog.EVENTLOG_WARNING_TYPE,
1419 ERROR: win32evtlog.EVENTLOG_ERROR_TYPE,
1420 CRITICAL: win32evtlog.EVENTLOG_ERROR_TYPE
1421 }
1422
1423 def unregister_logger(self):
1424 """Removes the application binding from the registry. If you call
1425 this, the log viewer will no longer be able to provide any
1426 information about the message.
1427 """
1428 self._welu.RemoveSourceFromRegistry(self.application_name,
1429 self.log_type)
1430
1431 def get_event_type(self, record):
1432 return self._type_map.get(record.level, self._default_type)
1433
1434 def get_event_category(self, record):
1435 return 0
1436
1437 def get_message_id(self, record):
1438 return 1
1439
1440 def emit(self, record):
1441 id = self.get_message_id(record)
1442 cat = self.get_event_category(record)
1443 type = self.get_event_type(record)
1444 self._welu.ReportEvent(self.application_name, id, cat, type,
1445 [self.format(record)])
1446
1447
1448 class FingersCrossedHandler(Handler):
1449 """This handler wraps another handler and will log everything in
1450 memory until a certain level (`action_level`, defaults to `ERROR`)
1451 is exceeded. When that happens the fingers crossed handler will
1452 activate forever and log all buffered records as well as records
    yet to come into another handler which was passed to the constructor.
1454
1455 Alternatively it's also possible to pass a factory function to the
1456 constructor instead of a handler. That factory is then called with
1457 the triggering log entry and the finger crossed handler to create
1458 a handler which is then cached.
1459
1460 The idea of this handler is to enable debugging of live systems. For
1461 example it might happen that code works perfectly fine 99% of the time,
1462 but then some exception happens. But the error that caused the
    exception alone might not be the interesting bit; the interesting
    information is in the warnings that led to the error.
1465
1466 Here a setup that enables this for a web application::
1467
1468 from logbook import FileHandler
1469 from logbook import FingersCrossedHandler
1470
1471 def issue_logging():
1472 def factory(record, handler):
1473 return FileHandler('/var/log/app/issue-%s.log' % record.time)
1474 return FingersCrossedHandler(factory)
1475
1476 def application(environ, start_response):
1477 with issue_logging():
1478 return the_actual_wsgi_application(environ, start_response)
1479
    Whenever an error occurs, a new file in ``/var/log/app`` is created
    with all the logging calls that led up to the error, up to the point
1482 where the `with` block is exited.
1483
1484 Please keep in mind that the :class:`~logbook.FingersCrossedHandler`
1485 handler is a one-time handler. Once triggered, it will not reset. Because
1486 of that you will have to re-create it whenever you bind it. In this case
1487 the handler is created when it's bound to the thread.
1488
1489 Due to how the handler is implemented, the filter, bubble and level
1490 flags of the wrapped handler are ignored.
1491
1492 .. versionchanged:: 0.3
1493
        The default behaviour is to buffer up records and then invoke another
        handler when a severity threshold was reached, with the buffer
        emitting. This now enables this logger to be properly used with the
        :class:`~logbook.MailHandler`. You will now only get one mail for
        the whole buffer of records. However, once the threshold was reached
        you would still get a mail for each record, which is why the `reset`
        flag was added.
1500
1501 When set to `True`, the handler will instantly reset to the untriggered
1502 state and start buffering again::
1503
1504 handler = FingersCrossedHandler(MailHandler(...),
1505 buffer_size=10,
1506 reset=True)
1507
1508 .. versionadded:: 0.3
1509 The `reset` flag was added.
1510 """
1511
1512 #: the reason to be used for the batch emit. The default is
1513 #: ``'escalation'``.
1514 #:
1515 #: .. versionadded:: 0.3
1516 batch_emit_reason = 'escalation'
1517
1518 def __init__(self, handler, action_level=ERROR, buffer_size=0,
1519 pull_information=True, reset=False, filter=None,
1520 bubble=False):
1521 Handler.__init__(self, NOTSET, filter, bubble)
1522 self.lock = Lock()
1523 self._level = action_level
1524 if isinstance(handler, Handler):
1525 self._handler = handler
1526 self._handler_factory = None
1527 else:
1528 self._handler = None
1529 self._handler_factory = handler
1530 #: the buffered records of the handler. Once the action is triggered
        #: (:attr:`triggered`) this buffer is emptied. This attribute can
1532 #: be helpful for the handler factory function to select a proper
1533 #: filename (for example time of first log record)
1534 self.buffered_records = deque()
1535 #: the maximum number of entries in the buffer. If this is exhausted
1536 #: the oldest entries will be discarded to make place for new ones
1537 self.buffer_size = buffer_size
1538 self._buffer_full = False
1539 self._pull_information = pull_information
1540 self._action_triggered = False
1541 self._reset = reset
1542
1543 def close(self):
1544 if self._handler is not None:
1545 self._handler.close()
1546
1547 def enqueue(self, record):
1548 if self._pull_information:
1549 record.pull_information()
1550 if self._action_triggered:
1551 self._handler.emit(record)
1552 else:
1553 self.buffered_records.append(record)
1554 if self._buffer_full:
1555 self.buffered_records.popleft()
1556 elif self.buffer_size and \
1557 len(self.buffered_records) >= self.buffer_size:
1558 self._buffer_full = True
1559 return record.level >= self._level
1560 return False
1561
1562 def rollover(self, record):
1563 if self._handler is None:
1564 self._handler = self._handler_factory(record, self)
1565 self._handler.emit_batch(iter(self.buffered_records), 'escalation')
1566 self.buffered_records.clear()
1567 self._action_triggered = not self._reset
1568
1569 @property
1570 def triggered(self):
1571 """This attribute is `True` when the action was triggered. From
1572 this point onwards the finger crossed handler transparently
1573 forwards all log records to the inner handler. If the handler resets
1574 itself this will always be `False`.
1575 """
1576 return self._action_triggered
1577
1578 def emit(self, record):
1579 self.lock.acquire()
1580 try:
1581 if self.enqueue(record):
1582 self.rollover(record)
1583 finally:
1584 self.lock.release()
1585
1586
1587 class GroupHandler(WrapperHandler):
1588 """A handler that buffers all messages until it is popped again and then
    forwards all messages to another handler. This is useful if you, for
    example, have an application that does computations and only a result
    mail is required. A group handler makes sure that only one mail is sent
    and not multiple. Some other handlers might support this as well, though
1593 currently none of the builtins do.
1594
1595 Example::
1596
1597 with GroupHandler(MailHandler(...)):
1598 # everything here ends up in the mail
1599
1600 The :class:`GroupHandler` is implemented as a :class:`WrapperHandler`
1601 thus forwarding all attributes of the wrapper handler.
1602
    Notice that this handler really only emits the records when the handler
1604 is popped from the stack.
1605
1606 .. versionadded:: 0.3
1607 """
1608 _direct_attrs = frozenset(['handler', 'pull_information',
1609 'buffered_records'])
1610
1611 def __init__(self, handler, pull_information=True):
1612 WrapperHandler.__init__(self, handler)
1613 self.pull_information = pull_information
1614 self.buffered_records = []
1615
1616 def rollover(self):
1617 self.handler.emit_batch(self.buffered_records, 'group')
1618 self.buffered_records = []
1619
1620 def pop_application(self):
1621 Handler.pop_application(self)
1622 self.rollover()
1623
1624 def pop_thread(self):
1625 Handler.pop_thread(self)
1626 self.rollover()
1627
1628 def emit(self, record):
1629 if self.pull_information:
1630 record.pull_information()
1631 self.buffered_records.append(record)
0 # -*- coding: utf-8 -*-
1 """
2 logbook.helpers
3 ~~~~~~~~~~~~~~~
4
5 Various helper functions
6
7 :copyright: (c) 2010 by Armin Ronacher, Georg Brandl.
8 :license: BSD, see LICENSE for more details.
9 """
10 import os
11 import re
12 import sys
13 import errno
14 import time
15 import random
16 from datetime import datetime, timedelta
17
18 PY2 = sys.version_info[0] == 2
19
20 if PY2:
21 import __builtin__ as _builtins
22 else:
23 import builtins as _builtins
24
25 try:
26 import json
27 except ImportError:
28 import simplejson as json
29
30 if PY2:
31 from cStringIO import StringIO
32 iteritems = dict.iteritems
33 from itertools import izip as zip
34 xrange = _builtins.xrange
35 else:
36 from io import StringIO
37 zip = _builtins.zip
38 xrange = range
39 iteritems = dict.items
40
41 _IDENTITY = lambda obj: obj
42
43 if PY2:
44 def u(s):
45 return unicode(s, "unicode_escape")
46 else:
47 u = _IDENTITY
48
49 if PY2:
50 integer_types = (int, long)
51 string_types = (basestring,)
52 else:
53 integer_types = (int,)
54 string_types = (str,)
55
56 if PY2:
57 import httplib as http_client
58 else:
59 from http import client as http_client
60
61 if PY2:
    # Yucky, but apparently that's the only way to do this
63 exec("""
64 def reraise(tp, value, tb=None):
65 raise tp, value, tb
66 """, locals(), globals())
67 else:
68 def reraise(tp, value, tb=None):
69 if value.__traceback__ is not tb:
70 raise value.with_traceback(tb)
71 raise value
72
73
74 # this regexp also matches incompatible dates like 20070101 because
75 # some libraries (like the python xmlrpclib modules) use this
76 _iso8601_re = re.compile(
77 # date
78 r'(\d{4})(?:-?(\d{2})(?:-?(\d{2}))?)?'
79 # time
80 r'(?:T(\d{2}):(\d{2})(?::(\d{2}(?:\.\d+)?))?(Z|[+-]\d{2}:\d{2})?)?$'
81 )
82 _missing = object()
83 if PY2:
84 def b(x): return x
85 def _is_text_stream(x): return True
86 else:
87 import io
88 def b(x): return x.encode('ascii')
89 def _is_text_stream(stream): return isinstance(stream, io.TextIOBase)
90
91
92 can_rename_open_file = False
93 if os.name == 'nt': # pragma: no cover
94 _rename = lambda src, dst: False
95 _rename_atomic = lambda src, dst: False
96
97 try:
98 import ctypes
99
100 _MOVEFILE_REPLACE_EXISTING = 0x1
101 _MOVEFILE_WRITE_THROUGH = 0x8
102 _MoveFileEx = ctypes.windll.kernel32.MoveFileExW
103
104 def _rename(src, dst):
105 if PY2:
106 if not isinstance(src, unicode):
107 src = unicode(src, sys.getfilesystemencoding())
108 if not isinstance(dst, unicode):
109 dst = unicode(dst, sys.getfilesystemencoding())
110 if _rename_atomic(src, dst):
111 return True
112 retry = 0
113 rv = False
114 while not rv and retry < 100:
115 rv = _MoveFileEx(src, dst, _MOVEFILE_REPLACE_EXISTING |
116 _MOVEFILE_WRITE_THROUGH)
117 if not rv:
118 time.sleep(0.001)
119 retry += 1
120 return rv
121
122 # new in Vista and Windows Server 2008
123 _CreateTransaction = ctypes.windll.ktmw32.CreateTransaction
124 _CommitTransaction = ctypes.windll.ktmw32.CommitTransaction
125 _MoveFileTransacted = ctypes.windll.kernel32.MoveFileTransactedW
126 _CloseHandle = ctypes.windll.kernel32.CloseHandle
127 can_rename_open_file = True
128
129 def _rename_atomic(src, dst):
130 ta = _CreateTransaction(None, 0, 0, 0, 0, 1000, 'Logbook rename')
131 if ta == -1:
132 return False
133 try:
134 retry = 0
135 rv = False
136 while not rv and retry < 100:
137 rv = _MoveFileTransacted(src, dst, None, None,
138 _MOVEFILE_REPLACE_EXISTING |
139 _MOVEFILE_WRITE_THROUGH, ta)
140 if rv:
141 rv = _CommitTransaction(ta)
142 break
143 else:
144 time.sleep(0.001)
145 retry += 1
146 return rv
147 finally:
148 _CloseHandle(ta)
149 except Exception:
150 pass
151
152 def rename(src, dst):
153 # Try atomic or pseudo-atomic rename
154 if _rename(src, dst):
155 return
156 # Fall back to "move away and replace"
157 try:
158 os.rename(src, dst)
159 except OSError:
160 e = sys.exc_info()[1]
161 if e.errno != errno.EEXIST:
162 raise
163 old = "%s-%08x" % (dst, random.randint(0, sys.maxint))
164 os.rename(dst, old)
165 os.rename(src, dst)
166 try:
167 os.unlink(old)
168 except Exception:
169 pass
170 else:
171 rename = os.rename
172 can_rename_open_file = True
173
174 _JSON_SIMPLE_TYPES = (bool, float) + integer_types + string_types
175
176 def to_safe_json(data):
177 """Makes a data structure safe for JSON silently discarding invalid
178 objects from nested structures. This also converts dates.
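
    A sketch of the behaviour (values illustrative)::

        to_safe_json({'when': datetime(2013, 10, 3), 'func': len})
        # -> {'when': '2013-10-03T00:00:00Z', 'func': None}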
179 """
180 def _convert(obj):
181 if obj is None:
182 return None
183 elif PY2 and isinstance(obj, str):
184 return obj.decode('utf-8', 'replace')
185 elif isinstance(obj, _JSON_SIMPLE_TYPES):
186 return obj
187 elif isinstance(obj, datetime):
188 return format_iso8601(obj)
189 elif isinstance(obj, list):
190 return [_convert(x) for x in obj]
191 elif isinstance(obj, tuple):
192 return tuple(_convert(x) for x in obj)
193 elif isinstance(obj, dict):
194 rv = {}
195 for key, value in iteritems(obj):
196 if not isinstance(key, string_types):
197 key = str(key)
198 if not is_unicode(key):
199 key = u(key)
200 rv[key] = _convert(value)
201 return rv
202 return _convert(data)
203
204
205 def format_iso8601(d=None):
206 """Returns a date in iso8601 format."""
207 if d is None:
208 d = datetime.utcnow()
209 rv = d.strftime('%Y-%m-%dT%H:%M:%S')
210 if d.microsecond:
211 rv += '.' + str(d.microsecond)
212 return rv + 'Z'
213
214
215 def parse_iso8601(value):
216 """Parse an iso8601 date into a datetime object. The timezone is
217 normalized to UTC.
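
    Example::

        parse_iso8601('2013-10-03T12:30:00+02:00')
        # -> datetime(2013, 10, 3, 10, 30)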
218 """
219 m = _iso8601_re.match(value)
220 if m is None:
221 raise ValueError('not a valid iso8601 date value')
222
223 groups = m.groups()
224 args = []
225 for group in groups[:-2]:
226 if group is not None:
227 group = int(group)
228 args.append(group)
229 seconds = groups[-2]
230 if seconds is not None:
231 if '.' in seconds:
232 sec, usec = seconds.split('.')
233 args.append(int(sec))
234 args.append(int(usec.ljust(6, '0')))
235 else:
236 args.append(int(seconds))
237
238 rv = datetime(*args)
239 tz = groups[-1]
240 if tz and tz != 'Z':
241 args = [int(x) for x in tz[1:].split(':')]
242 delta = timedelta(hours=args[0], minutes=args[1])
243 if tz[0] == '+':
244 rv -= delta
245 else:
246 rv += delta
247
248 return rv
249
250
251 def get_application_name():
252 if not sys.argv or not sys.argv[0]:
253 return 'Python'
254 return os.path.basename(sys.argv[0]).title()
255
256
257 class cached_property(object):
258 """A property that is lazily calculated and then cached."""
259
260 def __init__(self, func, name=None, doc=None):
261 self.__name__ = name or func.__name__
262 self.__module__ = func.__module__
263 self.__doc__ = doc or func.__doc__
264 self.func = func
265
266 def __get__(self, obj, type=None):
267 if obj is None:
268 return self
269 value = obj.__dict__.get(self.__name__, _missing)
270 if value is _missing:
271 value = self.func(obj)
272 obj.__dict__[self.__name__] = value
273 return value
274
275 def get_iterator_next_method(it):
276 return lambda: next(it)
277
278 # python 2 support functions and aliases
279 def is_unicode(x):
280 if PY2:
281 return isinstance(x, unicode)
282 return isinstance(x, str)
0 # -*- coding: utf-8 -*-
1 """
2 logbook.more
3 ~~~~~~~~~~~~
4
5 Fancy stuff for logbook.
6
7 :copyright: (c) 2010 by Armin Ronacher, Georg Brandl.
8 :license: BSD, see LICENSE for more details.
9 """
10 import re
11 import os
from functools import partial
13
14 from logbook.base import RecordDispatcher, NOTSET, ERROR, NOTICE
15 from logbook.handlers import Handler, StringFormatter, \
16 StringFormatterHandlerMixin, StderrHandler
17 from logbook._termcolors import colorize
18 from logbook.helpers import PY2, string_types, iteritems
19
20 from logbook.ticketing import TicketingHandler as DatabaseHandler
21 from logbook.ticketing import BackendBase
22
if PY2:
    from urllib import urlencode
    from urlparse import parse_qsl
else:
    from urllib.parse import urlencode, parse_qsl
27
28 _ws_re = re.compile(r'(\s+)(?u)')
29 TWITTER_FORMAT_STRING = \
30 u'[{record.channel}] {record.level_name}: {record.message}'
31 TWITTER_ACCESS_TOKEN_URL = 'https://twitter.com/oauth/access_token'
32 NEW_TWEET_URL = 'https://api.twitter.com/1/statuses/update.json'
33
34
35 class CouchDBBackend(BackendBase):
36 """Implements a backend that writes into a CouchDB database.
37 """
38 def setup_backend(self):
39 from couchdb import Server
40
41 uri = self.options.pop('uri', u'')
42 couch = Server(uri)
43 db_name = self.options.pop('db')
44 self.database = couch[db_name]
45
46 def record_ticket(self, record, data, hash, app_id):
47 """Records a log record as ticket.
48 """
49 db = self.database
50
51 ticket = record.to_dict()
52 ticket["time"] = ticket["time"].isoformat() + "Z"
        db.save(ticket)
56
57
58 class TwitterFormatter(StringFormatter):
59 """Works like the standard string formatter and is used by the
60 :class:`TwitterHandler` unless changed.
61 """
62 max_length = 140
63
64 def format_exception(self, record):
65 return u'%s: %s' % (record.exception_shortname,
66 record.exception_message)
67
68 def __call__(self, record, handler):
69 formatted = StringFormatter.__call__(self, record, handler)
70 rv = []
71 length = 0
72 for piece in _ws_re.split(formatted):
73 length += len(piece)
74 if length > self.max_length:
75 if length - len(piece) < self.max_length:
76 rv.append(u'…')
77 break
78 rv.append(piece)
79 return u''.join(rv)
80
81
82 class TaggingLogger(RecordDispatcher):
83 """A logger that attaches a tag to each record. This is an alternative
84 record dispatcher that does not use levels but tags to keep log
85 records apart. It is constructed with a descriptive name and at least
    one tag. The tags are up to you to define::
87
88 logger = TaggingLogger('My Logger', ['info', 'warning'])
89
90 For each tag defined that way, a method appears on the logger with
91 that name::
92
        logger.info('This is an info message')
94
95 To dispatch to different handlers based on tags you can use the
96 :class:`TaggingHandler`.
97
    The tags themselves are stored as a list named ``'tags'`` in the
99 :attr:`~logbook.LogRecord.extra` dictionary.
100 """
101
102 def __init__(self, name=None, tags=None):
103 RecordDispatcher.__init__(self, name)
        # create a method for each tag, binding the tag eagerly; a plain
        # lambda would close over the loop variable and every method
        # would end up dispatching the last tag
        for tag in (tags or ()):
            setattr(self, tag, partial(self.log, tag))
107
108 def log(self, tags, msg, *args, **kwargs):
109 if isinstance(tags, string_types):
110 tags = [tags]
111 exc_info = kwargs.pop('exc_info', None)
112 extra = kwargs.pop('extra', {})
113 extra['tags'] = list(tags)
114 return self.make_record_and_handle(NOTSET, msg, args, kwargs,
115 exc_info, extra)
116
117
118 class TaggingHandler(Handler):
119 """A handler that logs for tags and dispatches based on those.
120
121 Example::
122
123 import logbook
124 from logbook.more import TaggingHandler
125
126 handler = TaggingHandler(dict(
127 info=OneHandler(),
128 warning=AnotherHandler()
129 ))
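
    ``OneHandler`` and ``AnotherHandler`` above are placeholders for
    real handlers. A fuller illustrative sketch wiring the handler to a
    :class:`TaggingLogger` (``StderrHandler`` is just a stand-in)::

        from logbook import StderrHandler
        from logbook.more import TaggingLogger, TaggingHandler

        logger = TaggingLogger('events', ['info', 'warning'])
        handler = TaggingHandler({
            'info': StderrHandler(),
            'warning': StderrHandler(),
        })
        with handler.applicationbound():
            logger.info('dispatched to the "info" handler')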
130 """
131
132 def __init__(self, handlers, filter=None, bubble=False):
133 Handler.__init__(self, NOTSET, filter, bubble)
134 assert isinstance(handlers, dict)
135 self._handlers = dict(
136 (tag, isinstance(handler, Handler) and [handler] or handler)
137 for (tag, handler) in iteritems(handlers))
138
139 def emit(self, record):
140 for tag in record.extra.get('tags', ()):
141 for handler in self._handlers.get(tag, ()):
142 handler.handle(record)
143
144
145 class TwitterHandler(Handler, StringFormatterHandlerMixin):
146 """A handler that logs to twitter. Requires that you sign up an
147 application on twitter and request xauth support. Furthermore the
148 oauth2 library has to be installed.
149
150 If you don't want to register your own application and request xauth
151 credentials, there are a couple of leaked consumer key and secret
152 pairs from applications explicitly whitelisted at Twitter
153 (`leaked secrets <http://bit.ly/leaked-secrets>`_).
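
    Usage sketch (all four credentials are placeholders)::

        from logbook.more import TwitterHandler

        handler = TwitterHandler('consumer_key', 'consumer_secret',
                                 'username', 'password', level='ERROR')
        with handler.applicationbound():
            pass  # records of level ERROR and up are tweeted from here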
154 """
155 default_format_string = TWITTER_FORMAT_STRING
156 formatter_class = TwitterFormatter
157
158 def __init__(self, consumer_key, consumer_secret, username,
159 password, level=NOTSET, format_string=None, filter=None,
160 bubble=False):
161 Handler.__init__(self, level, filter, bubble)
162 StringFormatterHandlerMixin.__init__(self, format_string)
163 self.consumer_key = consumer_key
164 self.consumer_secret = consumer_secret
165 self.username = username
166 self.password = password
167
168 try:
169 import oauth2
170 except ImportError:
171 raise RuntimeError('The python-oauth2 library is required for '
172 'the TwitterHandler.')
173
174 self._oauth = oauth2
175 self._oauth_token = None
176 self._oauth_token_secret = None
177 self._consumer = oauth2.Consumer(consumer_key,
178 consumer_secret)
179 self._client = oauth2.Client(self._consumer)
180
181 def get_oauth_token(self):
182 """Returns the oauth access token."""
183 if self._oauth_token is None:
184 resp, content = self._client.request(
185 TWITTER_ACCESS_TOKEN_URL + '?', 'POST',
186 body=urlencode({
187 'x_auth_username': self.username.encode('utf-8'),
188 'x_auth_password': self.password.encode('utf-8'),
189 'x_auth_mode': 'client_auth'
190 }),
191 headers={'Content-Type': 'application/x-www-form-urlencoded'}
192 )
193 if resp['status'] != '200':
194 raise RuntimeError('unable to login to Twitter')
195 data = dict(parse_qsl(content))
196 self._oauth_token = data['oauth_token']
197 self._oauth_token_secret = data['oauth_token_secret']
198 return self._oauth.Token(self._oauth_token,
199 self._oauth_token_secret)
200
201 def make_client(self):
202 """Creates a new oauth client auth a new access token."""
203 return self._oauth.Client(self._consumer, self.get_oauth_token())
204
205 def tweet(self, status):
206 """Tweets a given status. Status must not exceed 140 chars."""
207 client = self.make_client()
208 resp, content = client.request(NEW_TWEET_URL, 'POST',
209 body=urlencode({'status': status.encode('utf-8')}),
210 headers={'Content-Type': 'application/x-www-form-urlencoded'})
211 return resp['status'] == '200'
212
213 def emit(self, record):
214 self.tweet(self.format(record))
215
216
217 class JinjaFormatter(object):
218 """A formatter object that makes it easy to format using a Jinja 2
219 template instead of a format string.
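
    Usage sketch (the handler and template are illustrative; any
    handler's ``formatter`` attribute may be replaced this way)::

        from logbook import StderrHandler
        from logbook.more import JinjaFormatter

        handler = StderrHandler()
        handler.formatter = JinjaFormatter(
            '{{ record.level_name }}: {{ record.message }}')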
220 """
221
222 def __init__(self, template):
223 try:
224 from jinja2 import Template
225 except ImportError:
226 raise RuntimeError('The jinja2 library is required for '
227 'the JinjaFormatter.')
228 self.template = Template(template)
229
230 def __call__(self, record, handler):
231 return self.template.render(record=record, handler=handler)
232
233
234 class ExternalApplicationHandler(Handler):
235 """This handler invokes an external application to send parts of
236 the log record to. The constructor takes a list of arguments that
237 are passed to another application where each of the arguments is a
238 format string, and optionally a format string for data that is
239 passed to stdin.
240
241 For example it can be used to invoke the ``say`` command on OS X::
242
243 from logbook.more import ExternalApplicationHandler
244 say_handler = ExternalApplicationHandler(['say', '{record.message}'])
245
246 Note that the above example blocks until ``say`` finishes, so it's
247 recommended to combine this handler with the
248 :class:`logbook.ThreadedWrapperHandler` to move the execution into
249 a background thread.
250
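    A sketch that also feeds the record to the command's standard input
    via `stdin_format` (``cat`` is only a stand-in command)::

        from logbook.more import ExternalApplicationHandler
        handler = ExternalApplicationHandler(
            ['cat'], stdin_format='{record.level_name}: {record.message}')
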
251 .. versionadded:: 0.3
252 """
253
254 def __init__(self, arguments, stdin_format=None,
255 encoding='utf-8', level=NOTSET, filter=None,
256 bubble=False):
257 Handler.__init__(self, level, filter, bubble)
258 self.encoding = encoding
259 self._arguments = list(arguments)
260 self._stdin_format = stdin_format
263 import subprocess
264 self._subprocess = subprocess
265
266 def emit(self, record):
267 args = [arg.format(record=record).encode(self.encoding)
268 for arg in self._arguments]
269 if self._stdin_format is not None:
270 stdin_data = self._stdin_format.format(record=record) \
271 .encode(self.encoding)
272 stdin = self._subprocess.PIPE
273 else:
274 stdin = None
275 c = self._subprocess.Popen(args, stdin=stdin)
276 if stdin is not None:
277 c.communicate(stdin_data)
278 c.wait()
279
280
281 class ColorizingStreamHandlerMixin(object):
282 """A mixin class that does colorizing.
283
284 .. versionadded:: 0.3
285 """
286
287 def should_colorize(self, record):
288 """Returns `True` if colorizing should be applied to this
289 record. The default implementation returns `True` if the
290 stream is a tty and we are not executing on windows.
291 """
292 if os.name == 'nt':
293 return False
294 isatty = getattr(self.stream, 'isatty', None)
295 return isatty and isatty()
296
297 def get_color(self, record):
298 """Returns the color for this record."""
299 if record.level >= ERROR:
300 return 'red'
301 elif record.level >= NOTICE:
302 return 'yellow'
303 return 'lightgray'
304
305 def format_and_encode(self, record):
306 rv = super(ColorizingStreamHandlerMixin, self) \
307 .format_and_encode(record)
308 if self.should_colorize(record):
309 color = self.get_color(record)
310 if color:
311 rv = colorize(color, rv)
312 return rv
313
314
315 class ColorizedStderrHandler(ColorizingStreamHandlerMixin, StderrHandler):
316 """A colorizing stream handler that writes to stderr. It will only
317 colorize if a terminal was detected. Note that this handler does
318 not colorize on Windows systems.
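
    To change the palette, override
    :meth:`ColorizingStreamHandlerMixin.get_color` in a subclass
    (illustrative sketch; the ``security`` channel is a placeholder)::

        from logbook.more import ColorizedStderrHandler

        class RedAlertHandler(ColorizedStderrHandler):
            def get_color(self, record):
                if record.channel == 'security':
                    return 'red'
                return ColorizedStderrHandler.get_color(self, record)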
319
320 .. versionadded:: 0.3
321 """
322
323
324 # backwards compatibility. Should go away in a future release.
325 from logbook.handlers import FingersCrossedHandler as \
326 FingersCrossedHandlerBase
327 class FingersCrossedHandler(FingersCrossedHandlerBase):
328 def __init__(self, *args, **kwargs):
329 FingersCrossedHandlerBase.__init__(self, *args, **kwargs)
330 from warnings import warn
331 warn(PendingDeprecationWarning('fingers crossed handler changed '
332 'location. It\'s now a core component of Logbook.'))
333
334
335 class ExceptionHandler(Handler, StringFormatterHandlerMixin):
336 """An exception handler which raises exceptions of the given `exc_type`.
337 This is especially useful if you set a specific error `level` e.g. to treat
338 warnings as exceptions::
339
340 from logbook.more import ExceptionHandler
341
342 class ApplicationWarning(Exception):
343 pass
344
345 exc_handler = ExceptionHandler(ApplicationWarning, level='WARNING')
346
347 .. versionadded:: 0.3
348 """
349 def __init__(self, exc_type, level=NOTSET, format_string=None,
350 filter=None, bubble=False):
351 Handler.__init__(self, level, filter, bubble)
352 StringFormatterHandlerMixin.__init__(self, format_string)
353 self.exc_type = exc_type
354
355 def handle(self, record):
356 if self.should_handle(record):
357 raise self.exc_type(self.format(record))
358 return False
0 # -*- coding: utf-8 -*-
1 """
2 logbook.notifiers
3 ~~~~~~~~~~~~~~~~~
4
5 System notify handlers for OSX and Linux.
6
7 :copyright: (c) 2010 by Armin Ronacher, Christopher Grebs.
8 :license: BSD, see LICENSE for more details.
9 """
10 import os
11 import sys
12 import base64
13 from time import time
14
15 from logbook.base import NOTSET, ERROR, WARNING
16 from logbook.handlers import Handler, LimitingHandlerMixin
17 from logbook.helpers import get_application_name, PY2, http_client
18
19 if PY2:
20 from urllib import urlencode
21 else:
22 from urllib.parse import urlencode
23
24 def create_notification_handler(application_name=None, level=NOTSET, icon=None):
25 """Creates a handler perfectly fit the current platform. On Linux
26 systems this creates a :class:`LibNotifyHandler`, on OS X systems it
27 will create a :class:`GrowlHandler`.
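
    Usage sketch (the application name and level are illustrative)::

        from logbook.notifiers import create_notification_handler
        handler = create_notification_handler('My App', level='ERROR')
        with handler.applicationbound():
            pass  # errors now show up as desktop notifications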
28 """
29 if sys.platform == 'darwin':
30 return GrowlHandler(application_name, level=level, icon=icon)
31 return LibNotifyHandler(application_name, level=level, icon=icon)
32
33
34 class NotificationBaseHandler(Handler, LimitingHandlerMixin):
35 """Baseclass for notification handlers."""
36
37 def __init__(self, application_name=None, record_limit=None,
38 record_delta=None, level=NOTSET, filter=None, bubble=False):
39 Handler.__init__(self, level, filter, bubble)
40 LimitingHandlerMixin.__init__(self, record_limit, record_delta)
41 if application_name is None:
42 application_name = get_application_name()
43 self.application_name = application_name
44
45 def make_title(self, record):
46 """Called to get the title from the record."""
47 return u'%s: %s' % (record.channel, record.level_name.title())
48
49 def make_text(self, record):
50 """Called to get the text of the record."""
51 return record.message
52
53
54 class GrowlHandler(NotificationBaseHandler):
55 """A handler that dispatches to Growl. Requires that either growl-py or
56 py-Growl is installed.
57 """
58
59 def __init__(self, application_name=None, icon=None, host=None,
60 password=None, record_limit=None, record_delta=None,
61 level=NOTSET, filter=None, bubble=False):
62 NotificationBaseHandler.__init__(self, application_name, record_limit,
63 record_delta, level, filter, bubble)
64
65 # growl is using the deprecated md5 module, but we really don't need
66 # to see that deprecation warning
67 from warnings import filterwarnings
68 filterwarnings(module='Growl', category=DeprecationWarning,
69 action='ignore')
70
71 try:
72 import Growl
73 self._growl = Growl
74 except ImportError:
75 raise RuntimeError('The growl module is not available. You have '
76 'to install either growl-py or py-Growl to '
77 'use the GrowlHandler.')
78
79 if icon is not None:
80 if not os.path.isfile(icon):
81 raise IOError('Expected a filename pointing to an icon.')
82 icon = self._growl.Image.imageFromPath(icon)
83 else:
84 try:
85 icon = self._growl.Image.imageWithIconForCurrentApplication()
86 except TypeError:
87 icon = None
88
89 self._notifier = self._growl.GrowlNotifier(
90 applicationName=self.application_name,
91 applicationIcon=icon,
92 notifications=['Notset', 'Debug', 'Info', 'Notice', 'Warning',
93 'Error', 'Critical'],
94 hostname=host,
95 password=password
96 )
97 self._notifier.register()
98
99 def is_sticky(self, record):
100 """Returns `True` if the sticky flag should be set for this record.
101 The default implementation marks errors and criticals sticky.
102 """
103 return record.level >= ERROR
104
105 def get_priority(self, record):
106 """Returns the priority flag for Growl. Errors and criticals are
107 get highest priority (2), warnings get higher priority (1) and the
108 rest gets 0. Growl allows values between -2 and 2.
109 """
110 if record.level >= ERROR:
111 return 2
112 elif record.level == WARNING:
113 return 1
114 return 0
115
116 def emit(self, record):
117 if not self.check_delivery(record)[1]:
118 return
119 self._notifier.notify(record.level_name.title(),
120 self.make_title(record),
121 self.make_text(record),
122 sticky=self.is_sticky(record),
123 priority=self.get_priority(record))
124
125
126 class LibNotifyHandler(NotificationBaseHandler):
127 """A handler that dispatches to libnotify. Requires pynotify installed.
128 If `no_init` is set to `True` the initialization of libnotify is skipped.
129 """
130
131 def __init__(self, application_name=None, icon=None, no_init=False,
132 record_limit=None, record_delta=None, level=NOTSET,
133 filter=None, bubble=False):
134 NotificationBaseHandler.__init__(self, application_name, record_limit,
135 record_delta, level, filter, bubble)
136
137 try:
138 import pynotify
139 self._pynotify = pynotify
140 except ImportError:
141 raise RuntimeError('The pynotify library is required for '
142 'the LibNotifyHandler.')
143
144 self.icon = icon
145 if not no_init:
146 pynotify.init(self.application_name)
147
148 def set_notifier_icon(self, notifier, icon):
149 """Used to attach an icon on a notifier object."""
150 try:
151 from gtk import gdk
152 except ImportError:
153 #TODO: raise a warning?
154 raise RuntimeError('The gtk.gdk module is required to set an icon.')
155
156 if icon is not None:
157 if not isinstance(icon, gdk.Pixbuf):
158 icon = gdk.pixbuf_new_from_file(icon)
159 notifier.set_icon_from_pixbuf(icon)
160
161 def get_expires(self, record):
162 """Returns either EXPIRES_DEFAULT or EXPIRES_NEVER for this record.
163 The default implementation marks errors and criticals as EXPIRES_NEVER.
164 """
165 pn = self._pynotify
166 return pn.EXPIRES_NEVER if record.level >= ERROR else pn.EXPIRES_DEFAULT
167
168 def get_urgency(self, record):
169 """Returns the urgency flag for pynotify. Errors and criticals are
170 get highest urgency (CRITICAL), warnings get higher priority (NORMAL)
171 and the rest gets LOW.
172 """
173 pn = self._pynotify
174 if record.level >= ERROR:
175 return pn.URGENCY_CRITICAL
176 elif record.level == WARNING:
177 return pn.URGENCY_NORMAL
178 return pn.URGENCY_LOW
179
180 def emit(self, record):
181 if not self.check_delivery(record)[1]:
182 return
183 notifier = self._pynotify.Notification(self.make_title(record),
184 self.make_text(record))
185 notifier.set_urgency(self.get_urgency(record))
186 notifier.set_timeout(self.get_expires(record))
187 self.set_notifier_icon(notifier, self.icon)
188 notifier.show()
189
190
191 class BoxcarHandler(NotificationBaseHandler):
192 """Sends notifications to boxcar.io. Can be forwarded to your iPhone or
193 other compatible device.
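
    Usage sketch (the credentials are placeholders)::

        from logbook.notifiers import BoxcarHandler
        handler = BoxcarHandler('user@example.com', 'password',
                                level='ERROR')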
194 """
195 api_url = 'https://boxcar.io/notifications/'
196
197 def __init__(self, email, password, record_limit=None, record_delta=None,
198 level=NOTSET, filter=None, bubble=False):
199 NotificationBaseHandler.__init__(self, None, record_limit, record_delta,
200 level, filter, bubble)
201 self.email = email
202 self.password = password
203
204 def get_screen_name(self, record):
205 """Returns the value of the screen name field."""
206 return record.level_name.title()
207
208 def emit(self, record):
209 if not self.check_delivery(record)[1]:
210 return
211 body = urlencode({
212 'notification[from_screen_name]':
213 self.get_screen_name(record).encode('utf-8'),
214 'notification[message]':
215 self.make_text(record).encode('utf-8'),
216 'notification[from_remote_service_id]': str(int(time() * 100))
217 })
218 con = http_client.HTTPSConnection('boxcar.io')
219 con.request('POST', '/notifications/', headers={
220 'Authorization': 'Basic ' +
221 base64.b64encode((u'%s:%s' %
222 (self.email, self.password)).encode('utf-8')).strip(),
223 }, body=body)
224 con.close()
225
226
227 class NotifoHandler(NotificationBaseHandler):
228 """Sends notifications to notifo.com. Can be forwarded to your Desktop,
229 iPhone, or other compatible device.
230 """
231
232 def __init__(self, application_name=None, username=None, secret=None,
233 record_limit=None, record_delta=None, level=NOTSET, filter=None,
234 bubble=False, hide_level=False):
235 try:
236 import notifo
237 except ImportError:
238 raise RuntimeError(
239 'The notifo module is not available. You have '
240 'to install notifo to use the NotifoHandler.'
241 )
242 NotificationBaseHandler.__init__(self, None, record_limit, record_delta,
243 level, filter, bubble)
244 self._notifo = notifo
245 self.application_name = application_name
246 self.username = username
247 self.secret = secret
248 self.hide_level = hide_level
249
250
251 def emit(self, record):
252 if self.hide_level:
253 _level_name = None
254 else:
255 # use the record's level name, not the handler's threshold
256 _level_name = record.level_name
257
258 self._notifo.send_notification(self.username, self.secret, None,
259 record.message, self.application_name,
260 _level_name, None)
0 # -*- coding: utf-8 -*-
1 """
2 logbook.queues
3 ~~~~~~~~~~~~~~
4
5 This module implements queue backends.
6
7 :copyright: (c) 2010 by Armin Ronacher, Georg Brandl.
8 :license: BSD, see LICENSE for more details.
9 """
10 import json
11 import threading
12 from threading import Thread, Lock
13 import platform
14 from logbook.base import NOTSET, LogRecord, dispatch_record
15 from logbook.handlers import Handler, WrapperHandler
16 from logbook.helpers import PY2
17
18 if PY2:
19 from Queue import Empty, Full, Queue as ThreadQueue
20 else:
21 from queue import Empty, Full, Queue as ThreadQueue
22
23
24 class RedisHandler(Handler):
25 """A handler that sends log messages to a Redis instance.
26
27 It publishes each record as a JSON dump. Requires the redis module.
28
29 To receive such records you need a running Redis instance.
30
31 Example setup::
32
33     handler = RedisHandler('localhost', port=6379, key='redis')
34
35 If your Redis instance is password protected, you can connect by passing
36 your password when creating a RedisHandler object.
37
38 Example::
39
40 handler = RedisHandler(password='your_redis_password')
41
42 More info about the default buffer size: wp.me/p3tYJu-3b
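
    A consumer sketch using the plain redis client (the key matches the
    handler's ``key`` argument, here the default ``'redis'``)::

        import json
        import redis

        r = redis.Redis(decode_responses=True)
        key, raw = r.blpop('redis')  # blocks until a record is pushed
        print(json.loads(raw)['message'])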
43 """
44 def __init__(self, host='localhost', port=6379, key='redis',
45 extra_fields=None, flush_threshold=128, flush_time=1,
46 level=NOTSET, filter=None, password=None, bubble=True, context=None):
47 Handler.__init__(self, level, filter, bubble)
48 try:
49 import redis
50 from redis import ResponseError
51 except ImportError:
52 raise RuntimeError('The redis library is required for '
53 'the RedisHandler')
54
55 self.redis = redis.Redis(host=host, port=port, password=password, decode_responses=True)
56 try:
57 self.redis.ping()
58 except ResponseError:
59 raise ResponseError('The password provided is apparently incorrect')
60 self.key = key
61 self.extra_fields = extra_fields or {}
62 self.flush_threshold = flush_threshold
63 self.queue = []
64 self.lock = Lock()
65
66 # set up a thread that flushes the queue every `flush_time` seconds
67 self._stop_event = threading.Event()
68 self._flushing_t = threading.Thread(target=self._flush_task,
69 args=(flush_time, self._stop_event))
70 self._flushing_t.daemon = True
71 self._flushing_t.start()
72
73
74 def _flush_task(self, interval, stop_event):
75 """Flushes the buffer every `interval` seconds until
76 `stop_event` is set."""
77 while not stop_event.isSet():
78 with self.lock:
79 self._flush_buffer()
80 stop_event.wait(interval)
81
82
83 def _flush_buffer(self):
84 """Flushes the messaging queue into Redis.
85
86 All values are pushed at once for the same key.
87 """
88 if self.queue:
89 self.redis.rpush(self.key, *self.queue)
90 self.queue = []
91
92
93 def disable_buffering(self):
94 """Disables buffering.
95
96 If called, every single message will be directly pushed to Redis.
97 """
98 self._stop_event.set()
99 self.flush_threshold = 1
100
101
102 def emit(self, record):
103 """Emits a pair (key, value) to redis.
104
105 The key is the one provided when creating the handler, or redis if none was
106 provided. The value contains both the message and the hostname. Extra values
107 are also appended to the message.
108 """
109 with self.lock:
110 r = {"message": record.msg, "host": platform.node(), "level": record.level_name}
111 r.update(self.extra_fields)
112 r.update(record.kwargs)
113 self.queue.append(json.dumps(r))
114 if len(self.queue) >= self.flush_threshold:
115 self._flush_buffer()
116
117
118 def close(self):
119 self._flush_buffer()
120
121
122 class RabbitMQHandler(Handler):
123 """A handler that acts as a RabbitMQ publisher, which publishes each record
124 as json dump. Requires the kombu module.
125
126 The queue will be filled with JSON exported log records. To receive such
127 log records from a queue you can use the :class:`RabbitMQSubscriber`.
128
129
130 Example setup::
131
132 handler = RabbitMQHandler('amqp://guest:guest@localhost//', queue='my_log')
133 """
134 def __init__(self, uri=None, queue='logging', level=NOTSET,
135 filter=None, bubble=False, context=None):
136 Handler.__init__(self, level, filter, bubble)
137 try:
138 import kombu
139 except ImportError:
140 raise RuntimeError('The kombu library is required for '
141 'the RabbitMQHandler.')
142 # fall back to kombu's default (localhost) when no URI is given
143 connection = kombu.Connection(uri) if uri else kombu.Connection()
144
145 self.queue = connection.SimpleQueue(queue)
146
147 def export_record(self, record):
148 """Exports the record into a dictionary ready for JSON dumping.
149 """
150 return record.to_dict(json_safe=True)
151
152 def emit(self, record):
153 self.queue.put(self.export_record(record))
154
155 def close(self):
156 self.queue.close()
157
158
159 class ZeroMQHandler(Handler):
160 """A handler that acts as a ZeroMQ publisher, which publishes each record
161 as json dump. Requires the pyzmq library.
162
163 The queue will be filled with JSON exported log records. To receive such
164 log records from a queue you can use the :class:`ZeroMQSubscriber`.
165
166
167 Example setup::
168
169 handler = ZeroMQHandler('tcp://127.0.0.1:5000')
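
    A matching receiver sketch (run in another process; the handler
    binds the endpoint, the subscriber connects to it)::

        from logbook import StderrHandler
        from logbook.queues import ZeroMQSubscriber

        subscriber = ZeroMQSubscriber('tcp://127.0.0.1:5000')
        with StderrHandler():
            subscriber.dispatch_forever()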
170 """
171
172 def __init__(self, uri=None, level=NOTSET, filter=None, bubble=False,
173 context=None):
174 Handler.__init__(self, level, filter, bubble)
175 try:
176 import zmq
177 except ImportError:
178 raise RuntimeError('The pyzmq library is required for '
179 'the ZeroMQHandler.')
180 #: the zero mq context
181 self.context = context or zmq.Context()
182 #: the zero mq socket.
183 self.socket = self.context.socket(zmq.PUB)
184 if uri is not None:
185 self.socket.bind(uri)
186
187 def export_record(self, record):
188 """Exports the record into a dictionary ready for JSON dumping."""
189 return record.to_dict(json_safe=True)
190
191 def emit(self, record):
192 self.socket.send(json.dumps(self.export_record(record)).encode("utf-8"))
193
194 def close(self):
195 self.socket.close()
196
197
198 class ThreadController(object):
199 """A helper class used by queue subscribers to control the background
200 thread. This is usually created and started in one go by
201 :meth:`~logbook.queues.ZeroMQSubscriber.dispatch_in_background` or
202 a comparable function.
203 """
204
205 def __init__(self, subscriber, setup=None):
206 self.setup = setup
207 self.subscriber = subscriber
208 self.running = False
209 self._thread = None
210
211 def start(self):
212 """Starts the task thread."""
213 self.running = True
214 self._thread = Thread(target=self._target)
215 self._thread.setDaemon(True)
216 self._thread.start()
217
218 def stop(self):
219 """Stops the task thread."""
220 if self.running:
221 self.running = False
222 self._thread.join()
223 self._thread = None
224
225 def _target(self):
226 if self.setup is not None:
227 self.setup.push_thread()
228 try:
229 while self.running:
230 self.subscriber.dispatch_once(timeout=0.05)
231 finally:
232 if self.setup is not None:
233 self.setup.pop_thread()
234
235
236 class SubscriberBase(object):
237 """Baseclass for all subscribers."""
238
239 def recv(self, timeout=None):
240 """Receives a single record from the socket. Timeout of 0 means nonblocking,
241 `None` means blocking and otherwise it's a timeout in seconds after which
242 the function just returns with `None`.
243
244 Subclasses have to override this.
245 """
246 raise NotImplementedError()
247
248 def dispatch_once(self, timeout=None):
249 """Receives one record from the socket, loads it and dispatches it. Returns
250 `True` if something was dispatched or `False` if it timed out.
251 """
252 rv = self.recv(timeout)
253 if rv is not None:
254 dispatch_record(rv)
255 return True
256 return False
257
258 def dispatch_forever(self):
259 """Starts a loop that dispatches log records forever."""
260 while 1:
261 self.dispatch_once()
262
263 def dispatch_in_background(self, setup=None):
264 """Starts a new daemonized thread that dispatches in the background.
265 An optional handler setup can be provided that pushed to the new
266 thread (can be any :class:`logbook.base.StackedObject`).
267
268 Returns a :class:`ThreadController` object for shutting down
269 the background thread. The background thread will already be
270 running when this function returns.
271 """
272 controller = ThreadController(self, setup)
273 controller.start()
274 return controller
275
276
277 class RabbitMQSubscriber(SubscriberBase):
278 """A helper that acts as RabbitMQ subscriber and will dispatch received
279 log records to the active handler setup. There are multiple ways to
280 use this class.
281
282 It can be used to receive log records from a queue::
283
284 subscriber = RabbitMQSubscriber('amqp://guest:guest@localhost//')
285 record = subscriber.recv()
286
287 But it can also be used to receive and dispatch these in one go::
288
289 with target_handler:
290 subscriber = RabbitMQSubscriber('amqp://guest:guest@localhost//')
291 subscriber.dispatch_forever()
292
293 This will take all the log records from that queue and dispatch them
294 over to `target_handler`. If you want you can also do that in the
295 background::
296
297 subscriber = RabbitMQSubscriber('amqp://guest:guest@localhost//')
298 controller = subscriber.dispatch_in_background(target_handler)
299
300 The controller returned can be used to shut down the background
301 thread::
302
303 controller.stop()
304 """
305
306 def __init__(self, uri=None, queue='logging'):
307 try:
308 import kombu
309 except ImportError:
310 raise RuntimeError('The kombu library is required for '
311 'the RabbitMQSubscriber.')
312 # fall back to kombu's default (localhost) when no URI is given
313 connection = kombu.Connection(uri) if uri else kombu.Connection()
314
315 self.queue = connection.SimpleQueue(queue)
316
317 def __del__(self):
318 try:
319 self.close()
320 except AttributeError:
321 # subscriber partially created
322 pass
323
324 def close(self):
325 self.queue.close()
326
327 def recv(self, timeout=None):
328 """Receives a single record from the socket. Timeout of 0 means nonblocking,
329 `None` means blocking and otherwise it's a timeout in seconds after which
330 the function just returns with `None`.
331 """
332 if timeout == 0:
333 try:
334 rv = self.queue.get(block=False)
335 except Exception:
336 return
337 else:
338 rv = self.queue.get(timeout=timeout)
339
340 log_record = rv.payload
341 rv.ack()
342
343 return LogRecord.from_dict(log_record)
344
345
346 class ZeroMQSubscriber(SubscriberBase):
347 """A helper that acts as ZeroMQ subscriber and will dispatch received
348 log records to the active handler setup. There are multiple ways to
349 use this class.
350
351 It can be used to receive log records from a queue::
352
353 subscriber = ZeroMQSubscriber('tcp://127.0.0.1:5000')
354 record = subscriber.recv()
355
356 But it can also be used to receive and dispatch these in one go::
357
358 with target_handler:
359 subscriber = ZeroMQSubscriber('tcp://127.0.0.1:5000')
360 subscriber.dispatch_forever()
361
362 This will take all the log records from that queue and dispatch them
363 over to `target_handler`. If you want you can also do that in the
364 background::
365
366 subscriber = ZeroMQSubscriber('tcp://127.0.0.1:5000')
367 controller = subscriber.dispatch_in_background(target_handler)
368
369 The controller returned can be used to shut down the background
370 thread::
371
372 controller.stop()
373 """
374
375 def __init__(self, uri=None, context=None):
376 try:
377 import zmq
378 except ImportError:
379 raise RuntimeError('The pyzmq library is required for '
380 'the ZeroMQSubscriber.')
381 self._zmq = zmq
382
383 #: the zero mq context
384 self.context = context or zmq.Context()
385 #: the zero mq socket.
386 self.socket = self.context.socket(zmq.SUB)
387 if uri is not None:
388 self.socket.connect(uri)
389 self.socket.setsockopt_unicode(zmq.SUBSCRIBE, u'')
390
391 def __del__(self):
392 try:
393 self.close()
394 except AttributeError:
395 # subscriber partially created
396 pass
397
398 def close(self):
399 """Closes the zero mq socket."""
400 self.socket.close()
401
402 def recv(self, timeout=None):
403 """Receives a single record from the socket. Timeout of 0 means nonblocking,
404 `None` means blocking and otherwise it's a timeout in seconds after which
405 the function just returns with `None`.
406 """
407 if timeout is None:
408 rv = self.socket.recv()
409 elif not timeout:
410 try:
411 rv = self.socket.recv(self._zmq.NOBLOCK)
412 except self._zmq.ZMQError:  # EAGAIN: no message was pending
413 return
414 else:
415 if not self._zmq.select([self.socket], [], [], timeout)[0]: return
416 rv = self.socket.recv(self._zmq.NOBLOCK)
417 if not PY2:
418 rv = rv.decode("utf-8")
419 return LogRecord.from_dict(json.loads(rv))
420
421
422 def _fix_261_mplog():
423 """necessary for older python's to disable a broken monkeypatch
424 in the logging module. See multiprocessing/util.py for the
425 hasattr() check. At least in Python 2.6.1 the multiprocessing
426 module is not imported by logging and as such the test in
427 the util fails.
428 """
429 import logging
430 import multiprocessing
431 logging.multiprocessing = multiprocessing
432
433
434 class MultiProcessingHandler(Handler):
435 """Implements a handler that dispatches over a queue to a different
436 process. It is connected to a subscriber with a
437 :class:`multiprocessing.Queue`::
438
439 from multiprocessing import Queue
440 from logbook.queues import MultiProcessingHandler
441 queue = Queue(-1)
442 handler = MultiProcessingHandler(queue)
443
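    A sketch of the child-process side; the parent would run a
    :class:`MultiProcessingSubscriber` on the same ``queue``::

        from multiprocessing import Process
        from logbook import Logger

        def worker(q):
            with MultiProcessingHandler(q).applicationbound():
                Logger('worker').warn('hello from the child')

        Process(target=worker, args=(queue,)).start()
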
444 """
445
446 def __init__(self, queue, level=NOTSET, filter=None, bubble=False):
447 Handler.__init__(self, level, filter, bubble)
448 self.queue = queue
449 _fix_261_mplog()
450
451 def emit(self, record):
452 self.queue.put_nowait(record.to_dict(json_safe=True))
453
454
455 class MultiProcessingSubscriber(SubscriberBase):
456 """Receives log records from the given multiprocessing queue and
457 dispatches them to the active handler setup. Make sure to use the same
458 queue for both handler and subscriber. Ideally the queue is set
459 up with maximum size (``-1``)::
460
461 from multiprocessing import Queue
462 queue = Queue(-1)
463
464 It can be used to receive log records from a queue::
465
466 subscriber = MultiProcessingSubscriber(queue)
467 record = subscriber.recv()
468
469 But it can also be used to receive and dispatch these in one go::
470
471 with target_handler:
472 subscriber = MultiProcessingSubscriber(queue)
473 subscriber.dispatch_forever()
474
475 This will take all the log records from that queue and dispatch them
476 over to `target_handler`. If you want you can also do that in the
477 background::
478
479 subscriber = MultiProcessingSubscriber(queue)
480 controller = subscriber.dispatch_in_background(target_handler)
481
482 The controller returned can be used to shut down the background
483 thread::
484
485 controller.stop()
486
487 If no queue is provided the subscriber will create one. This one can
488 then be used by handlers::
489
490 subscriber = MultiProcessingSubscriber()
491 handler = MultiProcessingHandler(subscriber.queue)
492 """
493
494 def __init__(self, queue=None):
495 if queue is None:
496 from multiprocessing import Queue
497 queue = Queue(-1)
498 self.queue = queue
499 _fix_261_mplog()
500
501 def recv(self, timeout=None):
502 if timeout is None:
503 rv = self.queue.get()
504 else:
505 try:
506 rv = self.queue.get(block=True, timeout=timeout)
507 except Empty:
508 return None
509 return LogRecord.from_dict(rv)
510
511
512 class ExecnetChannelHandler(Handler):
513 """Implements a handler that dispatches over a execnet channel
514 to a different process.
515 """
516
517 def __init__(self, channel, level=NOTSET, filter=None, bubble=False):
518 Handler.__init__(self, level, filter, bubble)
519 self.channel = channel
520
521 def emit(self, record):
522 self.channel.send(record.to_dict(json_safe=True))
523
524
525 class ExecnetChannelSubscriber(SubscriberBase):
526 """subscribes to a execnet channel"""
527
528 def __init__(self, channel):
529 self.channel = channel
530
531 def recv(self, timeout=-1):
532 try:
533 rv = self.channel.receive(timeout=timeout)
534 except self.channel.RemoteError:
535 #XXX: handle
536 return None
537 except (self.channel.TimeoutError, EOFError):
538 return None
539 else:
540 return LogRecord.from_dict(rv)
541
542
543 class TWHThreadController(object):
544 """A very basic thread controller that pulls things in from a
545 queue and sends it to a handler. Both queue and handler are
546 taken from the passed :class:`ThreadedWrapperHandler`.
547 """
548 _sentinel = object()
549
550 def __init__(self, wrapper_handler):
551 self.wrapper_handler = wrapper_handler
552 self.running = False
553 self._thread = None
554
555 def start(self):
556 """Starts the task thread."""
557 self.running = True
558 self._thread = Thread(target=self._target)
559 self._thread.setDaemon(True)
560 self._thread.start()
561
562 def stop(self):
563 """Stops the task thread."""
564 if self.running:
565 self.wrapper_handler.queue.put_nowait(self._sentinel)
566 self._thread.join()
567 self._thread = None
568
569 def _target(self):
570 while 1:
571 record = self.wrapper_handler.queue.get()
572 if record is self._sentinel:
573 self.running = False
574 break
575 self.wrapper_handler.handler.handle(record)
576
577
578 class ThreadedWrapperHandler(WrapperHandler):
579 """This handled uses a single background thread to dispatch log records
580 to a specific other handler using an internal queue. The idea is that if
581 you are using a handler that requires some time to hand off the log records
582 (such as the mail handler) and would block your request, you can let
583 Logbook do that in a background thread.
584
585 The threaded wrapper handler will automatically adopt the methods and
586 properties of the wrapped handler. All the values will be reflected:
587
588 >>> twh = ThreadedWrapperHandler(TestHandler())
589 >>> from logbook import WARNING
590 >>> twh.level_name = 'WARNING'
591 >>> twh.handler.level_name
592 'WARNING'
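
    Sketch: wrapping a potentially slow handler so that logging calls
    return immediately (``StderrHandler`` is only a stand-in here)::

        from logbook import StderrHandler
        from logbook.queues import ThreadedWrapperHandler

        twh = ThreadedWrapperHandler(StderrHandler())
        with twh.applicationbound():
            pass  # records are handed to the wrapped handler in a thread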
593 """
594 _direct_attrs = frozenset(['handler', 'queue', 'controller'])
595
596 def __init__(self, handler):
597 WrapperHandler.__init__(self, handler)
598 self.queue = ThreadQueue(-1)
599 self.controller = TWHThreadController(self)
600 self.controller.start()
601
602 def close(self):
603 self.controller.stop()
604 self.handler.close()
605
606 def emit(self, record):
607 self.queue.put_nowait(record)
608
609
610 class GroupMember(ThreadController):
611 def __init__(self, subscriber, queue):
612 ThreadController.__init__(self, subscriber, None)
613 self.queue = queue
614
615 def _target(self):
616 if self.setup is not None:
617 self.setup.push_thread()
618 try:
619 while self.running:
620 record = self.subscriber.recv()
621 if record:
622 try:
623 self.queue.put(record, timeout=0.05)
624 except Full:  # group queue stayed full; drop the record
625 pass
626 finally:
627 if self.setup is not None:
628 self.setup.pop_thread()
629
630
631 class SubscriberGroup(SubscriberBase):
632 """This is a subscriber which represents a group of subscribers.
633
634 This is helpful if you are writing a server-like application which has
635 "slaves". This way a user is easily able to view every log record which
636 happened somewhere in the entire system without having to check every
637 single slave::
638
639 subscribers = SubscriberGroup([
640 MultiProcessingSubscriber(queue),
641 ZeroMQSubscriber('tcp://localhost:5000')
642 ])
643 with target_handler:
644 subscribers.dispatch_forever()
645 """
646 def __init__(self, subscribers=None, queue_limit=10):
647 self.members = []
648 self.queue = ThreadQueue(queue_limit)
649 for subscriber in subscribers or []:
650 self.add(subscriber)
651
652 def add(self, subscriber):
653 """Adds the given `subscriber` to the group."""
654 member = GroupMember(subscriber, self.queue)
655 member.start()
656 self.members.append(member)
657
658 def recv(self, timeout=None):
659 try:
660 return self.queue.get(timeout=timeout)
661 except Empty:
662 return
663
664 def stop(self):
665 """Stops the group from internally recieving any more messages, once the
666 internal queue is exhausted :meth:`recv` will always return `None`.
667 """
668 for member in self.members:
669 member.stop()
0 # -*- coding: utf-8 -*-
1 """
2 logbook.ticketing
3 ~~~~~~~~~~~~~~~~~
4
5 Implements logging handlers that write to remote data stores and assign
6 each logging message a ticket id.
7
8 :copyright: (c) 2010 by Armin Ronacher, Georg Brandl.
9 :license: BSD, see LICENSE for more details.
10 """
11 from time import time, sleep
12 import json
13 from logbook.base import NOTSET, level_name_property, LogRecord
14 from logbook.handlers import Handler, HashingHandlerMixin
15 from logbook.helpers import cached_property, b, PY2
16
17 class Ticket(object):
18 """Represents a ticket from the database."""
19
20 level_name = level_name_property()
21
22 def __init__(self, db, row):
23 self.db = db
24 self.__dict__.update(row)
25
26 @cached_property
27 def last_occurrence(self):
28 """The last occurrence."""
29 rv = self.get_occurrences(limit=1)
30 if rv:
31 return rv[0]
32
33 def get_occurrences(self, order_by='-time', limit=50, offset=0):
34 """Returns the occurrences for this ticket."""
35 return self.db.get_occurrences(self.ticket_id, order_by, limit, offset)
36
37 def solve(self):
38 """Marks this ticket as solved."""
39 self.db.solve_ticket(self.ticket_id)
40 self.solved = True
41
42 def delete(self):
43 """Deletes the ticket from the database."""
44 self.db.delete_ticket(self.ticket_id)
45
46 # Silence DeprecationWarning
47 __hash__ = None
48
49 def __eq__(self, other):
50 equal = True
51 for key in self.__dict__.keys():
52 if getattr(self, key) != getattr(other, key):
53 equal = False
54 break
55 return equal
56
57 def __ne__(self, other):
58 return not self.__eq__(other)
59
60
61 class Occurrence(LogRecord):
62 """Represents an occurrence of a ticket."""
63
64 def __init__(self, db, row):
65 self.update_from_dict(json.loads(row['data']))
66 self.db = db
67 self.time = row['time']
68 self.ticket_id = row['ticket_id']
69 self.occurrence_id = row['occurrence_id']
70
71
72 class BackendBase(object):
73 """Provides an abstract interface to various databases."""
74
75 def __init__(self, **options):
76 self.options = options
77 self.setup_backend()
78
79 def setup_backend(self):
80 """Setup the database backend."""
81 raise NotImplementedError()
82
83 def record_ticket(self, record, data, hash, app_id):
84 """Records a log record as ticket."""
85 raise NotImplementedError()
86
87 def count_tickets(self):
88 """Returns the number of tickets."""
89 raise NotImplementedError()
90
91 def get_tickets(self, order_by='-last_occurrence_time', limit=50, offset=0):
92 """Selects tickets from the database."""
93 raise NotImplementedError()
94
95 def solve_ticket(self, ticket_id):
96 """Marks a ticket as solved."""
97 raise NotImplementedError()
98
99 def delete_ticket(self, ticket_id):
100 """Deletes a ticket from the database."""
101 raise NotImplementedError()
102
103 def get_ticket(self, ticket_id):
104 """Return a single ticket with all occurrences."""
105 raise NotImplementedError()
106
107 def get_occurrences(self, ticket, order_by='-time', limit=50, offset=0):
108 """Selects occurrences from the database for a ticket."""
109 raise NotImplementedError()
110
111
112 class SQLAlchemyBackend(BackendBase):
113 """Implements a backend that is writing into a database SQLAlchemy can
114 interface.
115
116 This backend takes some additional options:
117
118 `table_prefix`
119 an optional table prefix for all tables created by
120 the logbook ticketing handler.
121
122 `metadata`
123 an optional SQLAlchemy metadata object for the table creation.
124
125 `autocreate_tables`
126 can be set to `False` to disable the automatic
127 creation of the logbook tables.
128
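    These options can be passed straight through
    :class:`TicketingHandler` (sketch; the URI and prefix are
    illustrative)::

        from logbook.ticketing import TicketingHandler
        handler = TicketingHandler('sqlite:////tmp/myapp-logs.db',
                                   table_prefix='myapp_')
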
129 """
130
131 def setup_backend(self):
132 from sqlalchemy import create_engine, MetaData
133 engine_or_uri = self.options.pop('uri', None)
134 metadata = self.options.pop('metadata', None)
135 table_prefix = self.options.pop('table_prefix', 'logbook_')
136
137 if hasattr(engine_or_uri, 'execute'):
138 self.engine = engine_or_uri
139 else:
140 self.engine = create_engine(engine_or_uri, convert_unicode=True)
141 if metadata is None:
142 metadata = MetaData()
143 self.table_prefix = table_prefix
144 self.metadata = metadata
145 self.create_tables()
146 if self.options.get('autocreate_tables', True):
147 self.metadata.create_all(bind=self.engine)
148
149 def create_tables(self):
150 """Creates the tables required for the handler on the class and
151 metadata.
152 """
153 import sqlalchemy as db
154 def table(name, *args, **kwargs):
155 return db.Table(self.table_prefix + name, self.metadata,
156 *args, **kwargs)
157 self.tickets = table('tickets',
158 db.Column('ticket_id', db.Integer, primary_key=True),
159 db.Column('record_hash', db.String(40), unique=True),
160 db.Column('level', db.Integer),
161 db.Column('channel', db.String(120)),
162 db.Column('location', db.String(512)),
163 db.Column('module', db.String(256)),
164 db.Column('last_occurrence_time', db.DateTime),
165 db.Column('occurrence_count', db.Integer),
166 db.Column('solved', db.Boolean),
167 db.Column('app_id', db.String(80))
168 )
169 self.occurrences = table('occurrences',
170 db.Column('occurrence_id', db.Integer, primary_key=True),
171 db.Column('ticket_id', db.Integer,
172 db.ForeignKey(self.table_prefix + 'tickets.ticket_id')),
173 db.Column('time', db.DateTime),
174 db.Column('data', db.Text),
175 db.Column('app_id', db.String(80))
176 )
177
178 def _order(self, q, table, order_by):
179 if order_by[0] == '-':
180 return q.order_by(table.c[order_by[1:]].desc())
181 return q.order_by(table.c[order_by])
182
183 def record_ticket(self, record, data, hash, app_id):
184 """Records a log record as ticket."""
185 cnx = self.engine.connect()
186 trans = cnx.begin()
187 try:
188 q = self.tickets.select(self.tickets.c.record_hash == hash)
189 row = cnx.execute(q).fetchone()
190 if row is None:
191 row = cnx.execute(self.tickets.insert().values(
192 record_hash=hash,
193 level=record.level,
194 channel=record.channel or u'',
195 location=u'%s:%d' % (record.filename, record.lineno),
196 module=record.module or u'<unknown>',
197 occurrence_count=0,
198 solved=False,
199 app_id=app_id
200 ))
201 ticket_id = row.inserted_primary_key[0]
202 else:
203 ticket_id = row['ticket_id']
204 cnx.execute(self.occurrences.insert()
205 .values(ticket_id=ticket_id,
206 time=record.time,
207 app_id=app_id,
208 data=json.dumps(data)))
209 cnx.execute(self.tickets.update()
210 .where(self.tickets.c.ticket_id == ticket_id)
211 .values(occurrence_count=self.tickets.c.occurrence_count + 1,
212 last_occurrence_time=record.time,
213 solved=False))
214 trans.commit()
215 except Exception:
216 trans.rollback()
217 raise
218 finally:
219 cnx.close()

220 def count_tickets(self):
221 """Returns the number of tickets."""
222 return self.engine.execute(self.tickets.count()).fetchone()[0]
223
224 def get_tickets(self, order_by='-last_occurrence_time', limit=50, offset=0):
225 """Selects tickets from the database."""
226 return [Ticket(self, row) for row in self.engine.execute(
227 self._order(self.tickets.select(), self.tickets, order_by)
228 .limit(limit).offset(offset)).fetchall()]
229
230 def solve_ticket(self, ticket_id):
231 """Marks a ticket as solved."""
232 self.engine.execute(self.tickets.update()
233 .where(self.tickets.c.ticket_id == ticket_id)
234 .values(solved=True))
235
236 def delete_ticket(self, ticket_id):
237 """Deletes a ticket from the database."""
238 self.engine.execute(self.occurrences.delete()
239 .where(self.occurrences.c.ticket_id == ticket_id))
240 self.engine.execute(self.tickets.delete()
241 .where(self.tickets.c.ticket_id == ticket_id))
242
243 def get_ticket(self, ticket_id):
244 """Return a single ticket with all occurrences."""
245 row = self.engine.execute(self.tickets.select().where(
246 self.tickets.c.ticket_id == ticket_id)).fetchone()
247 if row is not None:
248 return Ticket(self, row)
249
250 def get_occurrences(self, ticket, order_by='-time', limit=50, offset=0):
251 """Selects occurrences from the database for a ticket."""
252 return [Occurrence(self, row) for row in
253 self.engine.execute(self._order(self.occurrences.select()
254 .where(self.occurrences.c.ticket_id == ticket),
255 self.occurrences, order_by)
256 .limit(limit).offset(offset)).fetchall()]
257
258
259 class MongoDBBackend(BackendBase):
260 """Implements a backend that writes into a MongoDB database."""
261
262 class _FixedTicketClass(Ticket):
263 @property
264 def ticket_id(self):
265 return self._id
266
267 class _FixedOccurrenceClass(Occurrence):
268 def __init__(self, db, row):
269 self.update_from_dict(json.loads(row['data']))
270 self.db = db
271 self.time = row['time']
272 self.ticket_id = row['ticket_id']
273 self.occurrence_id = row['_id']
274
275 #TODO: Update connection setup once PYTHON-160 is solved.
276 def setup_backend(self):
277 import pymongo
278 from pymongo import ASCENDING, DESCENDING
279 from pymongo.connection import Connection
280
281 try:
282 from pymongo.uri_parser import parse_uri
283 except ImportError:
284 from pymongo.connection import _parse_uri as parse_uri
285
286 from pymongo.errors import AutoReconnect
287
288 _connection = None
289 uri = self.options.pop('uri', u'')
290 _connection_attempts = 0
291
292 parsed_uri = parse_uri(uri, Connection.PORT)
293
294 if type(parsed_uri) is tuple:
295 # pymongo < 2.0
296 database = parsed_uri[1]
297 else:
298 # pymongo >= 2.0
299 database = parsed_uri['database']
300
301 # Handle auto reconnect signals properly
302 while _connection_attempts < 5:
303 try:
304 if _connection is None:
305 _connection = Connection(uri)
306 database = _connection[database]
307 break
308 except AutoReconnect:
309 _connection_attempts += 1
310 sleep(0.1)
311
312 self.database = database
313
314 # setup correct indexes
315 database.tickets.ensure_index([('record_hash', ASCENDING)], unique=True)
316 database.tickets.ensure_index([('solved', ASCENDING), ('level', ASCENDING)])
317 database.occurrences.ensure_index([('time', DESCENDING)])
318
319 def _order(self, q, order_by):
320 from pymongo import ASCENDING, DESCENDING
321 col = order_by[1:] if order_by[0] == '-' else order_by
322 if order_by[0] == '-':
323 return q.sort(col, DESCENDING)
324 return q.sort(col, ASCENDING)
325
326 def _oid(self, ticket_id):
327 from pymongo.objectid import ObjectId
328 return ObjectId(ticket_id)
329
330 def record_ticket(self, record, data, hash, app_id):
331 """Records a log record as ticket."""
332 db = self.database
333 ticket = db.tickets.find_one({'record_hash': hash})
334 if not ticket:
335 doc = {
336 'record_hash': hash,
337 'level': record.level,
338 'channel': record.channel or u'',
339 'location': u'%s:%d' % (record.filename, record.lineno),
340 'module': record.module or u'<unknown>',
341 'occurrence_count': 0,
342 'solved': False,
343 'app_id': app_id,
344 }
345 ticket_id = db.tickets.insert(doc)
346 else:
347 ticket_id = ticket['_id']
348
349 db.tickets.update({'_id': ticket_id}, {
350 '$inc': {
351 'occurrence_count': 1
352 },
353 '$set': {
354 'last_occurrence_time': record.time,
355 'solved': False
356 }
357 })
358 # We store occurrences in a seperate collection so that
359 # we can make it a capped collection optionally.
360 db.occurrences.insert({
361 'ticket_id': self._oid(ticket_id),
362 'app_id': app_id,
363 'time': record.time,
364 'data': json.dumps(data),
365 })
366
367 def count_tickets(self):
368 """Returns the number of tickets."""
369 return self.database.tickets.count()
370
371 def get_tickets(self, order_by='-last_occurrence_time', limit=50, offset=0):
372 """Selects tickets from the database."""
373 query = self._order(self.database.tickets.find(), order_by) \
374 .limit(limit).skip(offset)
375 return [self._FixedTicketClass(self, obj) for obj in query]
376
377 def solve_ticket(self, ticket_id):
378 """Marks a ticket as solved."""
379 self.database.tickets.update({'_id': self._oid(ticket_id)},
380 {'solved': True})
381
382 def delete_ticket(self, ticket_id):
383 """Deletes a ticket from the database."""
384 self.database.occurrences.remove({'ticket_id': self._oid(ticket_id)})
385 self.database.tickets.remove({'_id': self._oid(ticket_id)})
386
387 def get_ticket(self, ticket_id):
388 """Return a single ticket with all occurrences."""
389 ticket = self.database.tickets.find_one({'_id': self._oid(ticket_id)})
390 if ticket:
391 return Ticket(self, ticket)
392
393 def get_occurrences(self, ticket, order_by='-time', limit=50, offset=0):
394 """Selects occurrences from the database for a ticket."""
395 collection = self.database.occurrences
396 occurrences = self._order(collection.find(
397 {'ticket_id': self._oid(ticket)}
398 ), order_by).limit(limit).skip(offset)
399 return [self._FixedOccurrenceClass(self, obj) for obj in occurrences]
400
401
402 class TicketingBaseHandler(Handler, HashingHandlerMixin):
403 """Baseclass for ticketing handlers. This can be used to interface
404 ticketing systems that do not necessarily provide an interface that
405 would be compatible with the :class:`BackendBase` interface.
406 """
407
408 def __init__(self, hash_salt, level=NOTSET, filter=None, bubble=False):
409 Handler.__init__(self, level, filter, bubble)
410 self.hash_salt = hash_salt
411
412 def hash_record_raw(self, record):
413 """Returns the unique hash of a record."""
414 hash = HashingHandlerMixin.hash_record_raw(self, record)
415 if self.hash_salt is not None:
416 hash_salt = self.hash_salt
417 if not PY2 or isinstance(hash_salt, unicode):
418 hash_salt = hash_salt.encode('utf-8')
419 hash.update(b('\x00') + hash_salt)
420 return hash
421
422
423 class TicketingHandler(TicketingBaseHandler):
424 """A handler that writes log records into a remote database. This
425 database can be connected to from different dispatchers which makes
426 this a nice setup for web applications::
427
428 from logbook.ticketing import TicketingHandler
429 handler = TicketingHandler('sqlite:////tmp/myapp-logs.db')
430
431 :param uri: a backend specific string or object to decide where to log to.
432 :param app_id: a string with an optional ID for an application. Can be
433 used to keep multiple application setups apart when logging
434 into the same database.
435 :param hash_salt: an optional salt (binary string) for the hashes.
436 :param backend: A backend class that implements the proper database handling.
437 Backends available are: :class:`SQLAlchemyBackend`,
438 :class:`MongoDBBackend`.
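
    Recorded tickets can later be inspected through the backend exposed
    as ``handler.db`` (sketch; :meth:`Ticket.solve` would mark one as
    handled)::

        for ticket in handler.db.get_tickets(limit=10):
            print(ticket.ticket_id, ticket.occurrence_count)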
439 """
440
441 #: The default backend that is being used when no backend is specified.
442 #: Unless overridden by a subclass this will be the
443 #: :class:`SQLAlchemyBackend`.
444 default_backend = SQLAlchemyBackend
445
446 def __init__(self, uri, app_id='generic', level=NOTSET,
447 filter=None, bubble=False, hash_salt=None, backend=None,
448 **db_options):
449 if hash_salt is None:
450 hash_salt = u'apphash-' + app_id
451 TicketingBaseHandler.__init__(self, hash_salt, level, filter, bubble)
452 if backend is None:
453 backend = self.default_backend
454 db_options['uri'] = uri
455 self.set_backend(backend, **db_options)
456 self.app_id = app_id
457
458 def set_backend(self, cls, **options):
459 self.db = cls(**options)
460
461 def process_record(self, record, hash):
462 """Subclasses can override this to tamper with the data dict that
463 is sent to the database as JSON.
464 """
465 return record.to_dict(json_safe=True)
466
467 def record_ticket(self, record, data, hash):
468 """Record either a new ticket or a new occurrence for a
469 ticket based on the hash.
470 """
471 self.db.record_ticket(record, data, hash, self.app_id)
472
473 def emit(self, record):
474 """Emits a single record and writes it to the database."""
475 hash = self.hash_record(record)
476 data = self.process_record(record, hash)
477 self.record_ticket(record, data, hash)
0 #!/usr/bin/env python
1 # -*- coding: utf-8 -*-
2 """
3 make-release
4 ~~~~~~~~~~~~
5
6 Helper script that performs a release. Does pretty much everything
7 automatically for us.
8
9 :copyright: (c) 2011 by Armin Ronacher.
10 :license: BSD, see LICENSE for more details.
11 """
12 import sys
13 import os
14 import re
15 import argparse
16 from datetime import datetime, date
17 from subprocess import Popen, PIPE
18
19 _date_clean_re = re.compile(r'(\d+)(st|nd|rd|th)')
20
21
22 def parse_changelog():
23 with open('CHANGES') as f:
24 lineiter = iter(f)
25 for line in lineiter:
26 match = re.search(r'^Version\s+(.*)', line.strip())
27 if match is None:
28 continue
29 version = match.group(1).strip()
30 if lineiter.next().count('-') != len(match.group(0)):
32 continue
33 while 1:
34 change_info = lineiter.next().strip()
35 if change_info:
36 break
37
38 match = re.search(r'released on (\w+\s+\d+\w+\s+\d+)'
39 r'(?:, codename (.*))?(?i)', change_info)
40 if match is None:
41 continue
42
43 datestr, codename = match.groups()
44 return version, parse_date(datestr), codename
45
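# For reference, parse_changelog() expects CHANGES entries shaped like
# the following sketch: a "Version" heading, a dashed underline of the
# same length, a blank line, then the release line (the codename part
# is optional):
#
#     Version 1.2.3
#     -------------
#
#     Released on January 1st 2038, codename example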
46
47 def bump_version(version):
48 try:
49 parts = map(int, version.split('.'))
50 except ValueError:
51 fail('Current version is not numeric')
52 parts[-1] += 1
53 return '.'.join(map(str, parts))
54
55
56 def parse_date(string):
57 string = _date_clean_re.sub(r'\1', string)
58 return datetime.strptime(string, '%B %d %Y')
59
60
61 def set_filename_version(filename, version_number, pattern):
62 changed = []
63 def inject_version(match):
64 before, old, after = match.groups()
65 changed.append(True)
66 return before + version_number + after
67 with open(filename) as f:
68 contents = re.sub(r"^(\s*%s\s*=\s*')(.+?)(')(?sm)" % pattern,
69 inject_version, f.read())
70
71 if not changed:
72 fail('Could not find %s in %s', pattern, filename)
73
74 with open(filename, 'w') as f:
75 f.write(contents)
76
77
78 def set_init_version(version):
79 info('Setting __init__.py version to %s', version)
80 set_filename_version('logbook/__init__.py', version, '__version__')
81
82
83 def set_setup_version(version):
84 info('Setting setup.py version to %s', version)
85 set_filename_version('setup.py', version, 'version')
86
87 def set_doc_version(version):
88 info('Setting docs/conf.py version to %s', version)
89 set_filename_version('docs/conf.py', version, 'version')
90 set_filename_version('docs/conf.py', version, 'release')
91
92
93 def build_and_upload():
94 Popen([sys.executable, 'setup.py', 'release', 'sdist', 'upload']).wait()
95
96
97 def fail(message, *args):
98 print >> sys.stderr, 'Error:', message % args
99 sys.exit(1)
100
101
102 def info(message, *args):
103 print >> sys.stderr, message % args
104
105
106 def get_git_tags():
107 return set(Popen(['git', 'tag'], stdout=PIPE).communicate()[0].splitlines())
108
109
110 def git_is_clean():
111 return Popen(['git', 'diff', '--quiet']).wait() == 0
112
113
114 def make_git_commit(message, *args):
115 message = message % args
116 Popen(['git', 'commit', '-am', message]).wait()
117
118
119 def make_git_tag(tag):
120 info('Tagging "%s"', tag)
121 Popen(['git', 'tag', tag]).wait()
122
123
124 parser = argparse.ArgumentParser(usage='%(prog)s [options]')
125 parser.add_argument("--no-upload", dest="upload", action="store_false", default=True)
126
127 def main():
128 args = parser.parse_args()
129
130 os.chdir(os.path.join(os.path.dirname(__file__), '..'))
131
132 rv = parse_changelog()
133 if rv is None:
134 fail('Could not parse changelog')
135
136 version, release_date, codename = rv
137 dev_version = bump_version(version) + '-dev'
138
139 info('Releasing %s (codename %s, release date %s)',
140 version, codename, release_date.strftime('%d/%m/%Y'))
141 tags = get_git_tags()
142
143 if version in tags:
144 fail('Version "%s" is already tagged', version)
145 if release_date.date() != date.today():
146 fail('Release date is not today (%s != %s)' % (release_date.date(), date.today()))
147
148 if not git_is_clean():
149 fail('You have uncommitted changes in git')
150
151 set_init_version(version)
152 set_setup_version(version)
153 set_doc_version(version)
154 make_git_commit('Bump version number to %s', version)
155 make_git_tag(version)
156 if args.upload:
157 build_and_upload()
158 set_init_version(dev_version)
159 set_setup_version(dev_version)
160 set_doc_version(dev_version)
161 make_git_commit('Bump version number to %s', dev_version)
162
163
164 if __name__ == '__main__':
165 main()
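For reference, here is a minimal sketch of the CHANGES entry shape the regexes in parse_changelog() accept: a "Version ..." heading underlined with exactly as many dashes as the heading is long, then a release line whose codename clause is optional. The version number and date below are made up:

    Version 1.2.3
    -------------

    Released on January 1st 2014, codename example

parse_date() strips the ordinal suffix ("1st" becomes "1") via _date_clean_re and parses the rest with '%B %d %Y'; after the release, bump_version('1.2.3') yields '1.2.4', which main() turns into the follow-up '1.2.4-dev' development version.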
0 #! /usr/bin/python
1 import os
2 import sys
3
4
5 if __name__ == '__main__':
6 mirror = sys.argv[1]
7 # with-blocks guarantee the config files are flushed and closed
8 with open(os.path.expanduser("~/.pydistutils.cfg"), "w") as f:
9 f.write("""
10 [easy_install]
11 index_url = %s
12 """ % mirror)
13 pip_dir = os.path.expanduser("~/.pip")
14 if not os.path.isdir(pip_dir):
15 os.makedirs(pip_dir)
16 with open(os.path.join(pip_dir, "pip.conf"), "w") as f:
17 f.write("""
18 [global]
19 index-url = %s
20
21 [install]
22 use-mirrors = true
23 """ % mirror)
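A usage sketch (the mirror URL here is illustrative, not taken from the repository; the script takes the index URL as its single argument):

    python pypi_mirror_setup.py http://mirror.example.com/simple

After it runs, easy_install picks the mirror up from ~/.pydistutils.cfg and pip from ~/.pip/pip.conf, so both installers query the given index.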
0 #! /usr/bin/python
1 import subprocess
2 import sys
3
4 def _execute(*args, **kwargs):
5 result = subprocess.call(*args, **kwargs)
6 if result != 0:
7 sys.exit(result)
8
9 if __name__ == '__main__':
10 deps = [
11 "execnet",
12 "Jinja2",
13 "nose",
14 "pyzmq",
15 "sqlalchemy",
16 ]
17
18 # compare version tuples; lexicographic comparison of version strings is fragile
19 if sys.version_info < (2, 7):
20 deps.append("unittest2")
21 print("Setting up dependencies...")
22 _execute("pip install %s" % " ".join(deps), shell=True)
0 #! /usr/bin/python
1 from __future__ import print_function
2 import ast
3 import os
4 import subprocess
5 import sys
6
7 _PYPY = hasattr(sys, "pypy_version_info")
8
9 if __name__ == '__main__':
10 use_cython = ast.literal_eval(os.environ["USE_CYTHON"])
11 if use_cython and _PYPY:
12 print("PyPy+Cython configuration skipped")
13 else:
14 sys.exit(
15 subprocess.call("make cybuild test" if use_cython else "make test", shell=True)
16 )
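Note on invocation: because the script feeds os.environ["USE_CYTHON"] through ast.literal_eval(), the variable must hold a Python literal such as True or False. A hypothetical shell session:

    USE_CYTHON=True python travis_build.py    # "make cybuild test" (skipped on PyPy)
    USE_CYTHON=False python travis_build.py   # "make test"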
0 [build_sphinx]
1 source-dir = docs/
2 build-dir = docs/_build
3 all_files = 1
4
5 [upload_docs]
6 upload-dir = docs/_build/html
7
8 [egg_info]
9 tag_date = true
10
11 [aliases]
12 release = egg_info -RDb ''
0 r"""
1 Logbook
2 -------
3
4 An awesome logging implementation that is fun to use.
5
6 Quickstart
7 ``````````
8
9 ::
10
11 from logbook import Logger
12 log = Logger('A Fancy Name')
13
14 log.warn('Logbook is too awesome for most applications')
15 log.error("Can't touch this")
16
17 Works for web apps too
18 ``````````````````````
19
20 ::
21
22 from logbook import MailHandler, Processor
23
24 mailhandler = MailHandler(from_addr='servererror@example.com',
25 recipients=['admin@example.com'],
26 level='ERROR', format_string=u'''\
27 Subject: Application Error for {record.extra[path]} [{record.extra[method]}]
28
29 Message type: {record.level_name}
30 Location: {record.filename}:{record.lineno}
31 Module: {record.module}
32 Function: {record.func_name}
33 Time: {record.time:%Y-%m-%d %H:%M:%S}
34 Remote IP: {record.extra[ip]}
35 Request: {record.extra[path]} [{record.extra[method]}]
36
37 Message:
38
39 {record.message}
40 ''')
41
42 def handle_request(request):
43 def inject_extra(record, handler):
44 record.extra['ip'] = request.remote_addr
45 record.extra['method'] = request.method
46 record.extra['path'] = request.path
47
48 with Processor(inject_extra):
49 with mailhandler:
50 # execute code that might fail in the context of the
51 # request.
52 """
53
54 import os
55 import sys
56 from setuptools import setup, Extension, Feature
57 from distutils.command.build_ext import build_ext
58 from distutils.errors import CCompilerError, DistutilsExecError, \
59 DistutilsPlatformError
60
61
62 extra = {}
63 cmdclass = {}
64
65
66 class BuildFailed(Exception):
67 pass
68
69
70 ext_errors = (CCompilerError, DistutilsExecError, DistutilsPlatformError)
71 if sys.platform == 'win32' and sys.version_info > (2, 6):
72 # 2.6's distutils.msvc9compiler can raise an IOError when failing to
73 # find the compiler
74 ext_errors += (IOError,)
75
76
77 class ve_build_ext(build_ext):
78 """This class allows C extension building to fail."""
79
80 def run(self):
81 try:
82 build_ext.run(self)
83 except DistutilsPlatformError:
84 raise BuildFailed()
85
86 def build_extension(self, ext):
87 try:
88 build_ext.build_extension(self, ext)
89 except ext_errors:
90 raise BuildFailed()
91
92 cmdclass['build_ext'] = ve_build_ext
93 # Don't try to compile the extension if we're running on PyPy
94 if os.path.isfile('logbook/_speedups.c') and not hasattr(sys, "pypy_translation_info"):
95 speedups = Feature('optional C speed-enhancement module', standard=True,
96 ext_modules=[Extension('logbook._speedups',
97 ['logbook/_speedups.c'])])
98 else:
99 speedups = None
100
101
102 def run_setup(with_binary):
103 features = {}
104 if with_binary and speedups is not None:
105 features['speedups'] = speedups
106 setup(
107 name='Logbook',
108 version='0.6.1-dev',
109 license='BSD',
110 url='http://logbook.pocoo.org/',
111 author='Armin Ronacher, Georg Brandl',
112 author_email='armin.ronacher@active-4.com',
113 description='A logging replacement for Python',
114 long_description=__doc__,
115 packages=['logbook'],
116 zip_safe=False,
117 platforms='any',
118 cmdclass=cmdclass,
119 features=features,
120 install_requires=[
121 ],
122 **extra
123 )
124
125
126 def echo(msg=''):
127 sys.stdout.write(msg + '\n')
128
129
130 try:
131 run_setup(True)
132 except BuildFailed:
133 LINE = '=' * 74
134 BUILD_EXT_WARNING = ('WARNING: The C extension could not be compiled, '
135 'speedups are not enabled.')
136
137 echo(LINE)
138 echo(BUILD_EXT_WARNING)
139 echo('Failure information, if any, is above.')
140 echo('Retrying the build without the C extension now.')
141 echo()
142
143 run_setup(False)
144
145 echo(LINE)
146 echo(BUILD_EXT_WARNING)
147 echo('Plain-Python installation succeeded.')
148 echo(LINE)
0 # -*- coding: utf-8 -*-
1 from .utils import (
2 LogbookTestCase,
3 activate_via_push_pop,
4 activate_via_with_statement,
5 capturing_stderr_context,
6 get_total_delta_seconds,
7 make_fake_mail_handler,
8 missing,
9 require_module,
10 require_py3,
11 )
12 from contextlib import closing, contextmanager
13 from datetime import datetime, timedelta
14 from random import randrange
15 import logbook
16 from logbook.helpers import StringIO, xrange, iteritems, zip, u
17 import os
18 import pickle
19 import re
20 import shutil
21 import socket
22 import sys
23 import tempfile
24 import time
25 import json
26 try:
27 from thread import get_ident
28 except ImportError:
29 from _thread import get_ident
30
31 __file_without_pyc__ = __file__
32 if __file_without_pyc__.endswith(".pyc"):
33 __file_without_pyc__ = __file_without_pyc__[:-1]
34
35 LETTERS = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
36
37 class _BasicAPITestCase(LogbookTestCase):
38 def test_basic_logging(self):
39 with self.thread_activation_strategy(logbook.TestHandler()) as handler:
40 self.log.warn('This is a warning. Nice hah?')
41
42 self.assert_(handler.has_warning('This is a warning. Nice hah?'))
43 self.assertEqual(handler.formatted_records, [
44 '[WARNING] testlogger: This is a warning. Nice hah?'
45 ])
46
47 def test_extradict(self):
48 with self.thread_activation_strategy(logbook.TestHandler()) as handler:
49 self.log.warn('Test warning')
50 record = handler.records[0]
51 record.extra['existing'] = 'foo'
52 self.assertEqual(record.extra['nonexisting'], '')
53 self.assertEqual(record.extra['existing'], 'foo')
54 self.assertEqual(repr(record.extra),
55 'ExtraDict({\'existing\': \'foo\'})')
56
57 def test_custom_logger(self):
58 client_ip = '127.0.0.1'
59
60 class CustomLogger(logbook.Logger):
61 def process_record(self, record):
62 record.extra['ip'] = client_ip
63
64 custom_log = CustomLogger('awesome logger')
65 fmt = '[{record.level_name}] {record.channel}: ' \
66 '{record.message} [{record.extra[ip]}]'
67 handler = logbook.TestHandler(format_string=fmt)
68 self.assertEqual(handler.format_string, fmt)
69
70 with self.thread_activation_strategy(handler):
71 custom_log.warn('Too many sounds')
72 self.log.warn('"Music" playing')
73
74 self.assertEqual(handler.formatted_records, [
75 '[WARNING] awesome logger: Too many sounds [127.0.0.1]',
76 '[WARNING] testlogger: "Music" playing []'
77 ])
78
79 def test_handler_exception(self):
80 class ErroringHandler(logbook.TestHandler):
81 def emit(self, record):
82 raise RuntimeError('something bad happened')
83
84 with capturing_stderr_context() as stderr:
85 with self.thread_activation_strategy(ErroringHandler()) as handler:
86 self.log.warn('I warn you.')
87 self.assert_('something bad happened' in stderr.getvalue())
88 self.assert_('I warn you' not in stderr.getvalue())
89
90 def test_formatting_exception(self):
91 def make_record():
92 return logbook.LogRecord('Test Logger', logbook.WARNING,
93 'Hello {foo:invalid}',
94 kwargs={'foo': 42},
95 frame=sys._getframe())
96 record = make_record()
97 with self.assertRaises(TypeError) as caught:
98 record.message
99
100 errormsg = str(caught.exception)
101 self.assertRegexpMatches(errormsg,
102 "Could not format message with provided arguments: Invalid (?:format specifier)|(?:conversion specification)|(?:format spec)")
103 self.assertIn("msg='Hello {foo:invalid}'", errormsg)
104 self.assertIn('args=()', errormsg)
105 self.assertIn("kwargs={'foo': 42}", errormsg)
106 self.assertRegexpMatches(
107 errormsg,
108 r'Happened in file .*%s, line \d+' % __file_without_pyc__)
109
110 def test_exception_catching(self):
111 logger = logbook.Logger('Test')
112 with self.thread_activation_strategy(logbook.TestHandler()) as handler:
113 self.assertFalse(handler.has_error())
114 try:
115 1 / 0
116 except Exception:
117 logger.exception()
118 try:
119 1 / 0
120 except Exception:
121 logger.exception('Awesome')
122 self.assert_(handler.has_error('Uncaught exception occurred'))
123 self.assert_(handler.has_error('Awesome'))
124 self.assertIsNotNone(handler.records[0].exc_info)
125 self.assertIn('1 / 0', handler.records[0].formatted_exception)
126
127 def test_exc_info_tuple(self):
128 self._test_exc_info(as_tuple=True)
129
130 def test_exc_info_true(self):
131 self._test_exc_info(as_tuple=False)
132
133 def _test_exc_info(self, as_tuple):
134 logger = logbook.Logger("Test")
135 with self.thread_activation_strategy(logbook.TestHandler()) as handler:
136 try:
137 1 / 0
138 except Exception:
139 exc_info = sys.exc_info()
140 logger.info("Exception caught", exc_info=exc_info if as_tuple else True)
141 self.assertIsNotNone(handler.records[0].exc_info)
142 self.assertEqual(handler.records[0].exc_info, exc_info)
143
144 def test_exporting(self):
145 with self.thread_activation_strategy(logbook.TestHandler()) as handler:
146 try:
147 1 / 0
148 except Exception:
149 self.log.exception()
150 record = handler.records[0]
151
152 exported = record.to_dict()
153 record.close()
154 imported = logbook.LogRecord.from_dict(exported)
155 for key, value in iteritems(record.__dict__):
156 if key[0] == '_':
157 continue
158 self.assertEqual(value, getattr(imported, key))
159
160 def test_pickle(self):
161 with self.thread_activation_strategy(logbook.TestHandler()) as handler:
162 try:
163 1 / 0
164 except Exception:
165 self.log.exception()
166 record = handler.records[0]
167 record.pull_information()
168 record.close()
169
170 for p in xrange(pickle.HIGHEST_PROTOCOL + 1):
171 exported = pickle.dumps(record, p)
172 imported = pickle.loads(exported)
173 for key, value in iteritems(record.__dict__):
174 if key[0] == '_':
175 continue
176 imported_value = getattr(imported, key)
177 if isinstance(value, ZeroDivisionError):
178 # in Python 3.2, ZeroDivisionError(x) != ZeroDivisionError(x)
179 self.assert_(type(value) is type(imported_value))
180 self.assertEqual(value.args, imported_value.args)
181 else:
182 self.assertEqual(value, imported_value)
183
184 def test_timedate_format(self):
185 """
186 tests the logbook.set_datetime_format() function
187 """
188 FORMAT_STRING = '{record.time:%H:%M:%S} {record.message}'
189 handler = logbook.TestHandler(format_string=FORMAT_STRING)
190 handler.push_thread()
191 logbook.set_datetime_format('utc')
192 try:
193 self.log.warn('This is a warning.')
194 time_utc = handler.records[0].time
195 logbook.set_datetime_format('local')
196 self.log.warn('This is a warning.')
197 time_local = handler.records[1].time
198 finally:
199 handler.pop_thread()
200 # put back the default time factory
201 logbook.set_datetime_format('utc')
202
203 # get the expected difference between local and utc time
204 t1 = datetime.now()
205 t2 = datetime.utcnow()
206
207 tz_minutes_diff = get_total_delta_seconds(t1 - t2)/60.0
208
209 if abs(tz_minutes_diff) < 1:
210 self.skipTest("Cannot test utc/localtime differences if they vary by less than one minute...")
211
212 # get the difference between LogRecord local and utc times
213 logbook_minutes_diff = get_total_delta_seconds(time_local - time_utc)/60.0
214 self.assertGreater(abs(logbook_minutes_diff), 1, "Localtime does not differ from UTC by more than 1 minute (Local: %s, UTC: %s)" % (time_local, time_utc))
215
216 ratio = logbook_minutes_diff / tz_minutes_diff
217
218 self.assertGreater(ratio, 0.99)
219 self.assertLess(ratio, 1.01)
220
221 class BasicAPITestCase_Regular(_BasicAPITestCase):
222 def setUp(self):
223 super(BasicAPITestCase_Regular, self).setUp()
224 self.thread_activation_strategy = activate_via_with_statement
225
226 class BasicAPITestCase_Contextmgr(_BasicAPITestCase):
227 def setUp(self):
228 super(BasicAPITestCase_Contextmgr, self).setUp()
229 self.thread_activation_strategy = activate_via_push_pop
230
231 class _HandlerTestCase(LogbookTestCase):
232 def setUp(self):
233 super(_HandlerTestCase, self).setUp()
234 self.dirname = tempfile.mkdtemp()
235 self.filename = os.path.join(self.dirname, 'log.tmp')
236
237 def tearDown(self):
238 shutil.rmtree(self.dirname)
239 super(_HandlerTestCase, self).tearDown()
240
241 def test_file_handler(self):
242 handler = logbook.FileHandler(self.filename,
243 format_string='{record.level_name}:{record.channel}:'
244 '{record.message}',)
245 with self.thread_activation_strategy(handler):
246 self.log.warn('warning message')
247 handler.close()
248 with open(self.filename) as f:
249 self.assertEqual(f.readline(),
250 'WARNING:testlogger:warning message\n')
251
252 def test_file_handler_unicode(self):
253 with capturing_stderr_context() as captured:
254 with self.thread_activation_strategy(logbook.FileHandler(self.filename)) as h:
255 self.log.info(u'\u0431')
256 self.assertFalse(captured.getvalue())
257
258 def test_file_handler_delay(self):
259 handler = logbook.FileHandler(self.filename,
260 format_string='{record.level_name}:{record.channel}:'
261 '{record.message}', delay=True)
262 self.assertFalse(os.path.isfile(self.filename))
263 with self.thread_activation_strategy(handler):
264 self.log.warn('warning message')
265 handler.close()
266
267 with open(self.filename) as f:
268 self.assertEqual(f.readline(),
269 'WARNING:testlogger:warning message\n')
270
271 def test_monitoring_file_handler(self):
272 if os.name == "nt":
273 self.skipTest("unsupported on windows due to different IO (also unneeded)")
274 handler = logbook.MonitoringFileHandler(self.filename,
275 format_string='{record.level_name}:{record.channel}:'
276 '{record.message}', delay=True)
277 with self.thread_activation_strategy(handler):
278 self.log.warn('warning message')
279 os.rename(self.filename, self.filename + '.old')
280 self.log.warn('another warning message')
281 handler.close()
282 with open(self.filename) as f:
283 self.assertEqual(f.read().strip(),
284 'WARNING:testlogger:another warning message')
285
286 def test_custom_formatter(self):
287 def custom_format(record, handler):
288 return record.level_name + ':' + record.message
289 handler = logbook.FileHandler(self.filename)
290 with self.thread_activation_strategy(handler):
291 handler.formatter = custom_format
292 self.log.warn('Custom formatters are awesome')
293
294 with open(self.filename) as f:
295 self.assertEqual(f.readline(),
296 'WARNING:Custom formatters are awesome\n')
297
298 def test_rotating_file_handler(self):
299 basename = os.path.join(self.dirname, 'rot.log')
300 handler = logbook.RotatingFileHandler(basename, max_size=2048,
301 backup_count=3,
302 )
303 handler.format_string = '{record.message}'
304 with self.thread_activation_strategy(handler):
305 for c, x in zip(LETTERS, xrange(32)):
306 self.log.warn(c * 256)
307 files = [x for x in os.listdir(self.dirname)
308 if x.startswith('rot.log')]
309 files.sort()
310
311 self.assertEqual(files, ['rot.log', 'rot.log.1', 'rot.log.2',
312 'rot.log.3'])
313 with open(basename) as f:
314 self.assertEqual(f.readline().rstrip(), 'C' * 256)
315 self.assertEqual(f.readline().rstrip(), 'D' * 256)
316 self.assertEqual(f.readline().rstrip(), 'E' * 256)
317 self.assertEqual(f.readline().rstrip(), 'F' * 256)
318
319 def test_timed_rotating_file_handler(self):
320 basename = os.path.join(self.dirname, 'trot.log')
321 handler = logbook.TimedRotatingFileHandler(basename, backup_count=3)
322 handler.format_string = '[{record.time:%H:%M}] {record.message}'
323
324 def fake_record(message, year, month, day, hour=0,
325 minute=0, second=0):
326 lr = logbook.LogRecord('Test Logger', logbook.WARNING,
327 message)
328 lr.time = datetime(year, month, day, hour, minute, second)
329 return lr
330
331 with self.thread_activation_strategy(handler):
332 for x in xrange(10):
333 handler.handle(fake_record('First One', 2010, 1, 5, x + 1))
334 for x in xrange(20):
335 handler.handle(fake_record('Second One', 2010, 1, 6, x + 1))
336 for x in xrange(10):
337 handler.handle(fake_record('Third One', 2010, 1, 7, x + 1))
338 for x in xrange(20):
339 handler.handle(fake_record('Last One', 2010, 1, 8, x + 1))
340
341 files = sorted(
342 x for x in os.listdir(self.dirname) if x.startswith('trot')
343 )
344 self.assertEqual(files, ['trot-2010-01-06.log', 'trot-2010-01-07.log',
345 'trot-2010-01-08.log'])
346 with open(os.path.join(self.dirname, 'trot-2010-01-08.log')) as f:
347 self.assertEqual(f.readline().rstrip(), '[01:00] Last One')
348 self.assertEqual(f.readline().rstrip(), '[02:00] Last One')
349 with open(os.path.join(self.dirname, 'trot-2010-01-07.log')) as f:
350 self.assertEqual(f.readline().rstrip(), '[01:00] Third One')
351 self.assertEqual(f.readline().rstrip(), '[02:00] Third One')
352
353 def test_mail_handler(self):
354 subject = u'\xf8nicode'
355 handler = make_fake_mail_handler(subject=subject)
356 with capturing_stderr_context() as fallback:
357 with self.thread_activation_strategy(handler):
358 self.log.warn('This is not mailed')
359 try:
360 1 / 0
361 except Exception:
362 self.log.exception(u'Viva la Espa\xf1a')
363
364 if not handler.mails:
365 # if sending the mail failed, the reason should be on stderr
366 self.fail(fallback.getvalue())
367
368 self.assertEqual(len(handler.mails), 1)
369 sender, receivers, mail = handler.mails[0]
370 mail = mail.replace("\r", "")
371 self.assertEqual(sender, handler.from_addr)
372 self.assert_('=?utf-8?q?=C3=B8nicode?=' in mail)
373 self.assertRegexpMatches(mail, r'Message type:\s+ERROR')
374 self.assertRegexpMatches(mail, r'Location:.*%s' % __file_without_pyc__)
375 self.assertRegexpMatches(mail, r'Module:\s+%s' % __name__)
376 self.assertRegexpMatches(mail, r'Function:\s+test_mail_handler')
377 body = u'Message:\n\nViva la Espa\xf1a'
378 if sys.version_info < (3, 0):
379 body = body.encode('utf-8')
380 self.assertIn(body, mail)
381 self.assertIn('\n\nTraceback (most', mail)
382 self.assertIn('1 / 0', mail)
383 self.assertIn('This is not mailed', fallback.getvalue())
384
385 def test_mail_handler_record_limits(self):
386 suppression_test = re.compile(r'This message occurred additional \d+ '
387 r'time\(s\) and was suppressed').search
388 handler = make_fake_mail_handler(record_limit=1,
389 record_delta=timedelta(seconds=0.5))
390 with self.thread_activation_strategy(handler):
391 later = datetime.utcnow() + timedelta(seconds=1.1)
392 while datetime.utcnow() < later:
393 self.log.error('Over and over...')
394
395 # first mail that is always delivered + 0.5 seconds * 2
396 # and 0.1 seconds of room for rounding errors makes 3 mails
397 self.assertEqual(len(handler.mails), 3)
398
399 # first mail is always delivered
400 self.assert_(not suppression_test(handler.mails[0][2]))
401
402 # the next two have a supression count
403 self.assert_(suppression_test(handler.mails[1][2]))
404 self.assert_(suppression_test(handler.mails[2][2]))
405
406 def test_mail_handler_batching(self):
407 mail_handler = make_fake_mail_handler()
408 handler = logbook.FingersCrossedHandler(mail_handler, reset=True)
409 with self.thread_activation_strategy(handler):
410 self.log.warn('Testing')
411 self.log.debug('Even more')
412 self.log.error('And this triggers it')
413 self.log.info('Aha')
414 self.log.error('And this triggers it again!')
415
416 self.assertEqual(len(mail_handler.mails), 2)
417 mail = mail_handler.mails[0][2]
418
419 pieces = mail.split('Log records that led up to this one:')
420 self.assertEqual(len(pieces), 2)
421 body, rest = pieces
422 rest = rest.replace("\r", "")
423
424 self.assertRegexpMatches(body, r'Message type:\s+ERROR')
425 self.assertRegexpMatches(body, r'Module:\s+%s' % __name__)
426 self.assertRegexpMatches(body, r'Function:\s+test_mail_handler_batching')
427
428 related = rest.strip().split('\n\n')
429 self.assertEqual(len(related), 2)
430 self.assertRegexpMatches(related[0], r'Message type:\s+WARNING')
431 self.assertRegexpMatches(related[1], r'Message type:\s+DEBUG')
432
433 self.assertIn('And this triggers it again', mail_handler.mails[1][2])
434
435 def test_group_handler_mail_combo(self):
436 mail_handler = make_fake_mail_handler(level=logbook.DEBUG)
437 handler = logbook.GroupHandler(mail_handler)
438 with self.thread_activation_strategy(handler):
439 self.log.error('The other way round')
440 self.log.warn('Testing')
441 self.log.debug('Even more')
442 self.assertEqual(mail_handler.mails, [])
443
444 self.assertEqual(len(mail_handler.mails), 1)
445 mail = mail_handler.mails[0][2]
446
447 pieces = mail.split('Other log records in the same group:')
448 self.assertEqual(len(pieces), 2)
449 body, rest = pieces
450 rest = rest.replace("\r", "")
451
452 self.assertRegexpMatches(body, r'Message type:\s+ERROR')
453 self.assertRegexpMatches(body, r'Module:\s+' + __name__)
454 self.assertRegexpMatches(body, r'Function:\s+test_group_handler_mail_combo')
455
456 related = rest.strip().split('\n\n')
457 self.assertEqual(len(related), 2)
458 self.assertRegexpMatches(related[0], r'Message type:\s+WARNING')
459 self.assertRegexpMatches(related[1], r'Message type:\s+DEBUG')
460
461 def test_syslog_handler(self):
462 to_test = [
463 (socket.AF_INET, ('127.0.0.1', 0)),
464 ]
465 if hasattr(socket, 'AF_UNIX'):
466 to_test.append((socket.AF_UNIX, self.filename))
467 for sock_family, address in to_test:
468 with closing(socket.socket(sock_family, socket.SOCK_DGRAM)) as inc:
469 inc.bind(address)
470 inc.settimeout(1)
471 for app_name in [None, 'Testing']:
472 handler = logbook.SyslogHandler(app_name, inc.getsockname())
473 with self.thread_activation_strategy(handler):
474 self.log.warn('Syslog is weird')
475 try:
476 rv = inc.recvfrom(1024)[0]
477 except socket.error:
478 self.fail('got timeout on socket')
479 self.assertEqual(rv, (
480 u'<12>%stestlogger: Syslog is weird\x00' %
481 (app_name and app_name + u':' or u'')).encode('utf-8'))
482
483 def test_handler_processors(self):
484 handler = make_fake_mail_handler(format_string='''\
485 Subject: Application Error for {record.extra[path]} [{record.extra[method]}]
486
487 Message type: {record.level_name}
488 Location: {record.filename}:{record.lineno}
489 Module: {record.module}
490 Function: {record.func_name}
491 Time: {record.time:%Y-%m-%d %H:%M:%S}
492 Remote IP: {record.extra[ip]}
493 Request: {record.extra[path]} [{record.extra[method]}]
494
495 Message:
496
497 {record.message}
498 ''')
499
500 class Request(object):
501 remote_addr = '127.0.0.1'
502 method = 'GET'
503 path = '/index.html'
504
505 def handle_request(request):
506 def inject_extra(record):
507 record.extra['ip'] = request.remote_addr
508 record.extra['method'] = request.method
509 record.extra['path'] = request.path
510
511 processor = logbook.Processor(inject_extra)
512 with self.thread_activation_strategy(processor):
513 handler.push_thread()
514 try:
515 try:
516 1 / 0
517 except Exception:
518 self.log.exception('Exception happened during request')
519 finally:
520 handler.pop_thread()
521
522 handle_request(Request())
523 self.assertEqual(len(handler.mails), 1)
524 mail = handler.mails[0][2]
525 self.assertIn('Subject: Application Error '
526 'for /index.html [GET]', mail)
527 self.assertIn('1 / 0', mail)
528
529 def test_regex_matching(self):
530 test_handler = logbook.TestHandler()
531 with self.thread_activation_strategy(test_handler):
532 self.log.warn('Hello World!')
533 self.assert_(test_handler.has_warning(re.compile('^Hello')))
534 self.assert_(not test_handler.has_warning(re.compile('world$')))
535 self.assert_(not test_handler.has_warning('^Hello World'))
536
537 def test_custom_handling_test(self):
538 class MyTestHandler(logbook.TestHandler):
539 def handle(self, record):
540 if record.extra.get('flag') != 'testing':
541 return False
542 return logbook.TestHandler.handle(self, record)
543
544 class MyLogger(logbook.Logger):
545 def process_record(self, record):
546 logbook.Logger.process_record(self, record)
547 record.extra['flag'] = 'testing'
548 log = MyLogger()
549 handler = MyTestHandler()
550 with capturing_stderr_context() as captured:
551 with self.thread_activation_strategy(handler):
552 log.warn('From my logger')
553 self.log.warn('From another logger')
554 self.assert_(handler.has_warning('From my logger'))
555 self.assertIn('From another logger', captured.getvalue())
556
557 def test_custom_handling_tester(self):
558 flag = True
559
560 class MyTestHandler(logbook.TestHandler):
561 def should_handle(self, record):
562 return flag
563 null_handler = logbook.NullHandler()
564 with self.thread_activation_strategy(null_handler):
565 test_handler = MyTestHandler()
566 with self.thread_activation_strategy(test_handler):
567 self.log.warn('1')
568 flag = False
569 self.log.warn('2')
570 self.assert_(test_handler.has_warning('1'))
571 self.assert_(not test_handler.has_warning('2'))
572
573 def test_null_handler(self):
574 with capturing_stderr_context() as captured:
575 with self.thread_activation_strategy(logbook.NullHandler()) as null_handler:
576 with self.thread_activation_strategy(logbook.TestHandler(level='ERROR')) as handler:
577 self.log.error('An error')
578 self.log.warn('A warning')
579 self.assertEqual(captured.getvalue(), '')
580 self.assertFalse(handler.has_warning('A warning'))
581 self.assert_(handler.has_error('An error'))
582
583 def test_test_handler_cache(self):
584 with self.thread_activation_strategy(logbook.TestHandler()) as handler:
585 self.log.warn('First line')
586 self.assertEqual(len(handler.formatted_records), 1)
587 cache = handler.formatted_records # store the cache to check its identity later
588 self.assertEqual(len(handler.formatted_records), 1)
589 self.assert_(cache is handler.formatted_records) # the cache must not be invalidated without new records
590 self.log.warn('Second line invalidates cache')
591 self.assertEqual(len(handler.formatted_records), 2)
592 self.assertFalse(cache is handler.formatted_records) # a new record must invalidate the cache
593
594 def test_blackhole_setting(self):
595 null_handler = logbook.NullHandler()
596 heavy_init = logbook.LogRecord.heavy_init
597 with self.thread_activation_strategy(null_handler):
598 def new_heavy_init(self):
599 raise RuntimeError('should not be triggered')
600 logbook.LogRecord.heavy_init = new_heavy_init
601 try:
602 with self.thread_activation_strategy(null_handler):
603 logbook.warn('Awesome')
604 finally:
605 logbook.LogRecord.heavy_init = heavy_init
606
607 null_handler.bubble = True
608 with capturing_stderr_context() as captured:
609 logbook.warning('Not a blackhole')
610 self.assertNotEqual(captured.getvalue(), '')
611
612 def test_calling_frame(self):
613 handler = logbook.TestHandler()
614 with self.thread_activation_strategy(handler):
615 logbook.warn('test')
616 self.assertEqual(handler.records[0].calling_frame, sys._getframe())
617
618 def test_nested_setups(self):
619 with capturing_stderr_context() as captured:
620 logger = logbook.Logger('App')
621 test_handler = logbook.TestHandler(level='WARNING')
622 mail_handler = make_fake_mail_handler(bubble=True)
623
624 handlers = logbook.NestedSetup([
625 logbook.NullHandler(),
626 test_handler,
627 mail_handler
628 ])
629
630 with self.thread_activation_strategy(handlers):
631 logger.warn('This is a warning')
632 logger.error('This is also a mail')
633 try:
634 1 / 0
635 except Exception:
636 logger.exception()
637 logger.warn('And here we go straight back to stderr')
638
639 self.assert_(test_handler.has_warning('This is a warning'))
640 self.assert_(test_handler.has_error('This is also a mail'))
641 self.assertEqual(len(mail_handler.mails), 2)
642 self.assertIn('This is also a mail', mail_handler.mails[0][2])
643 self.assertIn('1 / 0', mail_handler.mails[1][2])
644 self.assertIn('And here we go straight back to stderr',
645 captured.getvalue())
646
647 with self.thread_activation_strategy(handlers):
648 logger.warn('threadbound warning')
649
650 handlers.push_application()
651 try:
652 logger.warn('applicationbound warning')
653 finally:
654 handlers.pop_application()
655
656 def test_dispatcher(self):
657 logger = logbook.Logger('App')
658 with self.thread_activation_strategy(logbook.TestHandler()) as test_handler:
659 logger.warn('Logbook is too awesome for stdlib')
660 self.assertEqual(test_handler.records[0].dispatcher, logger)
661
662 def test_filtering(self):
663 logger1 = logbook.Logger('Logger1')
664 logger2 = logbook.Logger('Logger2')
665 handler = logbook.TestHandler()
666 outer_handler = logbook.TestHandler()
667
668 def only_1(record, handler):
669 return record.dispatcher is logger1
670 handler.filter = only_1
671
672 with self.thread_activation_strategy(outer_handler):
673 with self.thread_activation_strategy(handler):
674 logger1.warn('foo')
675 logger2.warn('bar')
676
677 self.assert_(handler.has_warning('foo', channel='Logger1'))
678 self.assertFalse(handler.has_warning('bar', channel='Logger2'))
679 self.assertFalse(outer_handler.has_warning('foo', channel='Logger1'))
680 self.assert_(outer_handler.has_warning('bar', channel='Logger2'))
681
682 def test_different_context_pushing(self):
683 h1 = logbook.TestHandler(level=logbook.DEBUG)
684 h2 = logbook.TestHandler(level=logbook.INFO)
685 h3 = logbook.TestHandler(level=logbook.WARNING)
686 logger = logbook.Logger('Testing')
687
688 with self.thread_activation_strategy(h1):
689 with self.thread_activation_strategy(h2):
690 with self.thread_activation_strategy(h3):
691 logger.warn('Wuuu')
692 logger.info('still awesome')
693 logger.debug('puzzled')
694
695 self.assert_(h1.has_debug('puzzled'))
696 self.assert_(h2.has_info('still awesome'))
697 self.assert_(h3.has_warning('Wuuu'))
698 for handler in h1, h2, h3:
699 self.assertEqual(len(handler.records), 1)
700
701 def test_global_functions(self):
702 with self.thread_activation_strategy(logbook.TestHandler()) as handler:
703 logbook.debug('a debug message')
704 logbook.info('an info message')
705 logbook.warn('warning part 1')
706 logbook.warning('warning part 2')
707 logbook.notice('notice')
708 logbook.error('an error')
709 logbook.critical('pretty critical')
710 logbook.log(logbook.CRITICAL, 'critical too')
711
712 self.assert_(handler.has_debug('a debug message'))
713 self.assert_(handler.has_info('an info message'))
714 self.assert_(handler.has_warning('warning part 1'))
715 self.assert_(handler.has_warning('warning part 2'))
716 self.assert_(handler.has_notice('notice'))
717 self.assert_(handler.has_error('an error'))
718 self.assert_(handler.has_critical('pretty critical'))
719 self.assert_(handler.has_critical('critical too'))
720 self.assertEqual(handler.records[0].channel, 'Generic')
721 self.assertIsNone(handler.records[0].dispatcher)
722
723 def test_fingerscrossed(self):
724 handler = logbook.FingersCrossedHandler(logbook.default_handler,
725 logbook.WARNING)
726
727 # if no warning occurs, the infos are not logged
728 with self.thread_activation_strategy(handler):
729 with capturing_stderr_context() as captured:
730 self.log.info('some info')
731 self.assertEqual(captured.getvalue(), '')
732 self.assert_(not handler.triggered)
733
734 # but if it does, all log messages are output
735 with self.thread_activation_strategy(handler):
736 with capturing_stderr_context() as captured:
737 self.log.info('some info')
738 self.log.warning('something happened')
739 self.log.info('something else happened')
740 logs = captured.getvalue()
741 self.assert_('some info' in logs)
742 self.assert_('something happened' in logs)
743 self.assert_('something else happened' in logs)
744 self.assert_(handler.triggered)
745
746 def test_fingerscrossed_factory(self):
747 handlers = []
748
749 def handler_factory(record, fch):
750 handler = logbook.TestHandler()
751 handlers.append(handler)
752 return handler
753
754 def make_fch():
755 return logbook.FingersCrossedHandler(handler_factory,
756 logbook.WARNING)
757
758 fch = make_fch()
759 with self.thread_activation_strategy(fch):
760 self.log.info('some info')
761 self.assertEqual(len(handlers), 0)
762 self.log.warning('a warning')
763 self.assertEqual(len(handlers), 1)
764 self.log.error('an error')
765 self.assertEqual(len(handlers), 1)
766 self.assert_(handlers[0].has_infos)
767 self.assert_(handlers[0].has_warnings)
768 self.assert_(handlers[0].has_errors)
769 self.assert_(not handlers[0].has_notices)
770 self.assert_(not handlers[0].has_criticals)
771 self.assert_(not handlers[0].has_debugs)
772
773 fch = make_fch()
774 with self.thread_activation_strategy(fch):
775 self.log.info('some info')
776 self.log.warning('a warning')
777 self.assertEqual(len(handlers), 2)
778
779 def test_fingerscrossed_buffer_size(self):
780 logger = logbook.Logger('Test')
781 test_handler = logbook.TestHandler()
782 handler = logbook.FingersCrossedHandler(test_handler, buffer_size=3)
783
784 with self.thread_activation_strategy(handler):
785 logger.info('Never gonna give you up')
786 logger.warn('Aha!')
787 logger.warn('Moar!')
788 logger.error('Pure hate!')
789
790 self.assertEqual(test_handler.formatted_records, [
791 '[WARNING] Test: Aha!',
792 '[WARNING] Test: Moar!',
793 '[ERROR] Test: Pure hate!'
794 ])
795
796
797 class HandlerTestCase_Regular(_HandlerTestCase):
798 def setUp(self):
799 super(HandlerTestCase_Regular, self).setUp()
800 self.thread_activation_strategy = activate_via_push_pop
801
802 class HandlerTestCase_Contextmgr(_HandlerTestCase):
803 def setUp(self):
804 super(HandlerTestCase_Contextmgr, self).setUp()
805 self.thread_activation_strategy = activate_via_with_statement
806
807 class AttributeTestCase(LogbookTestCase):
808
809 def test_level_properties(self):
810 self.assertEqual(self.log.level, logbook.NOTSET)
811 self.assertEqual(self.log.level_name, 'NOTSET')
812 self.log.level_name = 'WARNING'
813 self.assertEqual(self.log.level, logbook.WARNING)
814 self.log.level = logbook.ERROR
815 self.assertEqual(self.log.level_name, 'ERROR')
816
817 def test_reflected_properties(self):
818 group = logbook.LoggerGroup()
819 group.add_logger(self.log)
820 self.assertEqual(self.log.group, group)
821 group.level = logbook.ERROR
822 self.assertEqual(self.log.level, logbook.ERROR)
823 self.assertEqual(self.log.level_name, 'ERROR')
824 group.level = logbook.WARNING
825 self.assertEqual(self.log.level, logbook.WARNING)
826 self.assertEqual(self.log.level_name, 'WARNING')
827 self.log.level = logbook.CRITICAL
828 group.level = logbook.DEBUG
829 self.assertEqual(self.log.level, logbook.CRITICAL)
830 self.assertEqual(self.log.level_name, 'CRITICAL')
831 group.remove_logger(self.log)
832 self.assertEqual(self.log.group, None)
833
834 class LevelLookupTest(LogbookTestCase):
835 def test_level_lookup_failures(self):
836 with self.assertRaises(LookupError):
837 logbook.get_level_name(37)
838 with self.assertRaises(LookupError):
839 logbook.lookup_level('FOO')
840
841 class FlagsTestCase(LogbookTestCase):
842 def test_error_flag(self):
843 with capturing_stderr_context() as captured:
844 with logbook.Flags(errors='print'):
845 with logbook.Flags(errors='silent'):
846 self.log.warn('Foo {42}', 'aha')
847 self.assertEqual(captured.getvalue(), '')
848
849 with logbook.Flags(errors='silent'):
850 with logbook.Flags(errors='print'):
851 self.log.warn('Foo {42}', 'aha')
852 self.assertNotEqual(captured.getvalue(), '')
853
854 with self.assertRaises(Exception) as caught:
855 with logbook.Flags(errors='raise'):
856 self.log.warn('Foo {42}', 'aha')
857 self.assertIn('Could not format message with provided '
858 'arguments', str(caught.exception))
859
860 def test_disable_introspection(self):
861 with logbook.Flags(introspection=False):
862 with logbook.TestHandler() as h:
863 self.log.warn('Testing')
864 self.assertIsNone(h.records[0].frame)
865 self.assertIsNone(h.records[0].calling_frame)
866 self.assertIsNone(h.records[0].module)
867
868 class LoggerGroupTestCase(LogbookTestCase):
869 def test_groups(self):
870 def inject_extra(record):
871 record.extra['foo'] = 'bar'
872 group = logbook.LoggerGroup(processor=inject_extra)
873 group.level = logbook.ERROR
874 group.add_logger(self.log)
875 with logbook.TestHandler() as handler:
876 self.log.warn('A warning')
877 self.log.error('An error')
878 self.assertFalse(handler.has_warning('A warning'))
879 self.assertTrue(handler.has_error('An error'))
880 self.assertEqual(handler.records[0].extra['foo'], 'bar')
881
882 class DefaultConfigurationTestCase(LogbookTestCase):
883
884 def test_default_handlers(self):
885 with capturing_stderr_context() as stream:
886 self.log.warn('Aha!')
887 captured = stream.getvalue()
888 self.assertIn('WARNING: testlogger: Aha!', captured)
889
890 class LoggingCompatTestCase(LogbookTestCase):
891
892 def test_basic_compat(self):
893 from logging import getLogger
894 from logbook.compat import redirected_logging
895
896 name = 'test_logbook-%d' % randrange(1 << 32)
897 logger = getLogger(name)
898 with capturing_stderr_context() as captured:
899 redirector = redirected_logging()
900 redirector.start()
901 try:
902 logger.debug('This is from the old system')
903 logger.info('This is from the old system')
904 logger.warn('This is from the old system')
905 logger.error('This is from the old system')
906 logger.critical('This is from the old system')
907 finally:
908 redirector.end()
909 self.assertIn(('WARNING: %s: This is from the old system' % name),
910 captured.getvalue())
911
912 def test_redirect_logbook(self):
913 import logging
914 from logbook.compat import LoggingHandler
915 out = StringIO()
916 logger = logging.getLogger()
917 old_handlers = logger.handlers[:]
918 handler = logging.StreamHandler(out)
919 handler.setFormatter(logging.Formatter(
920 '%(name)s:%(levelname)s:%(message)s'))
921 logger.handlers[:] = [handler]
922 try:
923 with logbook.compat.LoggingHandler() as logging_handler:
924 self.log.warn("This goes to logging")
925 pieces = out.getvalue().strip().split(':')
926 self.assertEqual(pieces, [
927 'testlogger',
928 'WARNING',
929 'This goes to logging'
930 ])
931 finally:
932 logger.handlers[:] = old_handlers
933
934 class WarningsCompatTestCase(LogbookTestCase):
935
936 def test_warning_redirections(self):
937 from logbook.compat import redirected_warnings
938 with logbook.TestHandler() as handler:
939 redirector = redirected_warnings()
940 redirector.start()
941 try:
942 from warnings import warn
943 warn(RuntimeWarning('Testing'))
944 finally:
945 redirector.end()
946
947 self.assertEqual(len(handler.records), 1)
948 self.assertEqual('[WARNING] RuntimeWarning: Testing',
949 handler.formatted_records[0])
950 self.assertIn(__file_without_pyc__, handler.records[0].filename)
951
952 class MoreTestCase(LogbookTestCase):
953
954 @contextmanager
955 def _get_temporary_file_context(self):
956 fn = tempfile.mktemp()
957 try:
958 yield fn
959 finally:
960 try:
961 os.remove(fn)
962 except OSError:
963 pass
964
965 @require_module('jinja2')
966 def test_jinja_formatter(self):
967 from logbook.more import JinjaFormatter
968 fmter = JinjaFormatter('{{ record.channel }}/{{ record.level_name }}')
969 handler = logbook.TestHandler()
970 handler.formatter = fmter
971 with handler:
972 self.log.info('info')
973 self.assertIn('testlogger/INFO', handler.formatted_records)
974
975 @missing('jinja2')
976 def test_missing_jinja2(self):
977 from logbook.more import JinjaFormatter
978 # check the RuntimeError is raised
979 with self.assertRaises(RuntimeError):
980 JinjaFormatter('dummy')
981
982 def test_colorizing_support(self):
983 from logbook.more import ColorizedStderrHandler
984
985 class TestColorizingHandler(ColorizedStderrHandler):
986 def should_colorize(self, record):
987 return True
988 stream = StringIO()
989 with TestColorizingHandler(format_string='{record.message}') as handler:
990 self.log.error('An error')
991 self.log.warn('A warning')
992 self.log.debug('A debug message')
993 lines = handler.stream.getvalue().rstrip('\n').splitlines()
994 self.assertEqual(lines, [
995 '\x1b[31;01mAn error',
996 '\x1b[39;49;00m\x1b[33;01mA warning',
997 '\x1b[39;49;00m\x1b[37mA debug message',
998 '\x1b[39;49;00m'
999 ])
1000
1001 def test_tagged(self):
1002 from logbook.more import TaggingLogger, TaggingHandler
1003 stream = StringIO()
1004 second_handler = logbook.StreamHandler(stream)
1005
1006 logger = TaggingLogger('name', ['cmd'])
1007 handler = TaggingHandler(dict(
1008 info=logbook.default_handler,
1009 cmd=second_handler,
1010 both=[logbook.default_handler, second_handler],
1011 ))
1012 handler.bubble = False
1013
1014 with handler:
1015 with capturing_stderr_context() as captured:
1016 logger.log('info', 'info message')
1017 logger.log('both', 'all message')
1018 logger.cmd('cmd message')
1019
1020 stderr = captured.getvalue()
1021
1022 self.assertIn('info message', stderr)
1023 self.assertIn('all message', stderr)
1024 self.assertNotIn('cmd message', stderr)
1025
1026 stringio = stream.getvalue()
1027
1028 self.assertNotIn('info message', stringio)
1029 self.assertIn('all message', stringio)
1030 self.assertIn('cmd message', stringio)
1031
1032 def test_external_application_handler(self):
1033 from logbook.more import ExternalApplicationHandler as Handler
1034 with self._get_temporary_file_context() as fn:
1035 handler = Handler([sys.executable, '-c', r'''if 1:
1036 f = open(%(tempfile)s, 'w')
1037 try:
1038 f.write('{record.message}\n')
1039 finally:
1040 f.close()
1041 ''' % {'tempfile': repr(fn)}])
1042 with handler:
1043 self.log.error('this is a really bad idea')
1044 with open(fn, 'r') as rf:
1045 contents = rf.read().strip()
1046 self.assertEqual(contents, 'this is a really bad idea')
1047
1048 def test_external_application_handler_stdin(self):
1049 from logbook.more import ExternalApplicationHandler as Handler
1050 with self._get_temporary_file_context() as fn:
1051 handler = Handler([sys.executable, '-c', r'''if 1:
1052 import sys
1053 f = open(%(tempfile)s, 'w')
1054 try:
1055 f.write(sys.stdin.read())
1056 finally:
1057 f.close()
1058 ''' % {'tempfile': repr(fn)}], '{record.message}\n')
1059 with handler:
1060 self.log.error('this is a really bad idea')
1061 with open(fn, 'r') as rf:
1062 contents = rf.read().strip()
1063 self.assertEqual(contents, 'this is a really bad idea')
1064
1065 def test_exception_handler(self):
1066 from logbook.more import ExceptionHandler
1067
1068 with ExceptionHandler(ValueError) as exception_handler:
1069 with self.assertRaises(ValueError) as caught:
1070 self.log.info('here i am')
1071 self.assertIn('INFO: testlogger: here i am', caught.exception.args[0])
1072
1073 def test_exception_handler_specific_level(self):
1074 from logbook.more import ExceptionHandler
1075 with logbook.TestHandler() as test_handler:
1076 with self.assertRaises(ValueError) as caught:
1077 with ExceptionHandler(ValueError, level='WARNING') as exception_handler:
1078 self.log.info('this is irrelevant')
1079 self.log.warn('here i am')
1080 self.assertIn('WARNING: testlogger: here i am', caught.exception.args[0])
1081 self.assertIn('this is irrelevant', test_handler.records[0].message)
1082
1083 class QueuesTestCase(LogbookTestCase):
1084 def _get_zeromq(self):
1085 from logbook.queues import ZeroMQHandler, ZeroMQSubscriber
1086
1087 # Get an unused port
1088 tempsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
1089 tempsock.bind(('localhost', 0))
1090 host, unused_port = tempsock.getsockname()
1091 tempsock.close()
1092
1093 # Retrieve the ZeroMQ handler and subscriber
1094 uri = 'tcp://%s:%d' % (host, unused_port)
1095 handler = ZeroMQHandler(uri)
1096 subscriber = ZeroMQSubscriber(uri)
1097 # Enough time to start
1098 time.sleep(0.1)
1099 return handler, subscriber
1100
1101 @require_module('zmq')
1102 def test_zeromq_handler(self):
1103 tests = [
1104 u'Logging something',
1105 u'Something with umlauts äöü',
1106 u'Something else for good measure',
1107 ]
1108 handler, subscriber = self._get_zeromq()
1109 for test in tests:
1110 with handler:
1111 self.log.warn(test)
1112 record = subscriber.recv()
1113 self.assertEqual(record.message, test)
1114 self.assertEqual(record.channel, self.log.name)
1115
1116 @require_module('zmq')
1117 def test_zeromq_background_thread(self):
1118 handler, subscriber = self._get_zeromq()
1119 test_handler = logbook.TestHandler()
1120 controller = subscriber.dispatch_in_background(test_handler)
1121
1122 with handler:
1123 self.log.warn('This is a warning')
1124 self.log.error('This is an error')
1125
1126 # stop the controller. This will also stop the loop and join the
1127 # background process. Before that we give it a fraction of a second
1128 # to get all results
1129 time.sleep(0.1)
1130 controller.stop()
1131
1132 self.assertTrue(test_handler.has_warning('This is a warning'))
1133 self.assertTrue(test_handler.has_error('This is an error'))
1134
1135 @missing('zmq')
1136 def test_missing_zeromq(self):
1137 from logbook.queues import ZeroMQHandler, ZeroMQSubscriber
1138 with self.assertRaises(RuntimeError):
1139 ZeroMQHandler('tcp://127.0.0.1:42000')
1140 with self.assertRaises(RuntimeError):
1141 ZeroMQSubscriber('tcp://127.0.0.1:42000')
1142
1143 @require_module('multiprocessing')
1144 def test_multi_processing_handler(self):
1145 from multiprocessing import Process, Queue
1146 from logbook.queues import MultiProcessingHandler, \
1147 MultiProcessingSubscriber
1148 queue = Queue(-1)
1149 test_handler = logbook.TestHandler()
1150 subscriber = MultiProcessingSubscriber(queue)
1151
1152 def send_back():
1153 handler = MultiProcessingHandler(queue)
1154 handler.push_thread()
1155 try:
1156 logbook.warn('Hello World')
1157 finally:
1158 handler.pop_thread()
1159
1160 p = Process(target=send_back)
1161 p.start()
1162 p.join()
1163
1164 with test_handler:
1165 subscriber.dispatch_once()
1166 self.assert_(test_handler.has_warning('Hello World'))
1167
1168 def test_threaded_wrapper_handler(self):
1169 from logbook.queues import ThreadedWrapperHandler
1170 test_handler = logbook.TestHandler()
1171 with ThreadedWrapperHandler(test_handler) as handler:
1172 self.log.warn('Just testing')
1173 self.log.error('More testing')
1174
1175 # give it some time to sync up
1176 handler.close()
1177
1178 self.assertTrue(not handler.controller.running)
1179 self.assertTrue(test_handler.has_warning('Just testing'))
1180 self.assertTrue(test_handler.has_error('More testing'))
1181
1182 @require_module('execnet')
1183 def test_execnet_handler(self):
1184 def run_on_remote(channel):
1185 import logbook
1186 from logbook.queues import ExecnetChannelHandler
1187 handler = ExecnetChannelHandler(channel)
1188 log = logbook.Logger("Execnet")
1189 handler.push_application()
1190 log.info('Execnet works')
1191
1192 import execnet
1193 gw = execnet.makegateway()
1194 channel = gw.remote_exec(run_on_remote)
1195 from logbook.queues import ExecnetChannelSubscriber
1196 subscriber = ExecnetChannelSubscriber(channel)
1197 record = subscriber.recv()
1198 self.assertEqual(record.msg, 'Execnet works')
1199 gw.exit()
1200
1201 @require_module('multiprocessing')
1202 def test_subscriber_group(self):
1203 from multiprocessing import Process, Queue
1204 from logbook.queues import MultiProcessingHandler, \
1205 MultiProcessingSubscriber, SubscriberGroup
1206 a_queue = Queue(-1)
1207 b_queue = Queue(-1)
1208 test_handler = logbook.TestHandler()
1209 subscriber = SubscriberGroup([
1210 MultiProcessingSubscriber(a_queue),
1211 MultiProcessingSubscriber(b_queue)
1212 ])
1213
1214 def make_send_back(message, queue):
1215 def send_back():
1216 with MultiProcessingHandler(queue):
1217 logbook.warn(message)
1218 return send_back
1219
1220 for _ in range(10):
1221 p1 = Process(target=make_send_back('foo', a_queue))
1222 p2 = Process(target=make_send_back('bar', b_queue))
1223 p1.start()
1224 p2.start()
1225 p1.join()
1226 p2.join()
1227 messages = [subscriber.recv().message for _ in (1, 2)]
1228 self.assertEqual(sorted(messages), ['bar', 'foo'])
1229
1230 @require_module('redis')
1231 def test_redis_handler(self):
1232 import redis
1233 from logbook.queues import RedisHandler
1234
1235 KEY = 'redis'
1236 FIELDS = ['message', 'host']
1237 r = redis.Redis(decode_responses=True)
1238 redis_handler = RedisHandler(level=logbook.INFO, bubble=True)
1239 # we don't want output for the tests, so we can wrap everything in a NullHandler
1240 null_handler = logbook.NullHandler()
1241
1242 # check default values
1243 with null_handler.applicationbound():
1244 with redis_handler:
1245 logbook.info(LETTERS)
1246
1247 key, message = r.blpop(KEY)
1248 # str.find() returns -1 (which is truthy) on a miss, so use assertIn instead
1249 for field in FIELDS: self.assertIn(field, message)
1250 self.assertEqual(key, KEY)
1251 self.assertIn(LETTERS, message)
1252
1253 # change the key of the handler and check on redis
1254 KEY = 'test_another_key'
1255 redis_handler.key = KEY
1256
1257 with null_handler.applicationbound():
1258 with redis_handler:
1259 logbook.info(LETTERS)
1260
1261 key, message = r.blpop(KEY)
1262 self.assertEqual(key, KEY)
1263
1264 # check that extra fields are added if specified when creating the handler
1265 FIELDS.append('type')
1266 extra_fields = {'type': 'test'}
1267 del redis_handler
1268 redis_handler = RedisHandler(key=KEY, level=logbook.INFO,
1269 extra_fields=extra_fields, bubble=True)
1270
1271 with null_handler.applicationbound():
1272 with redis_handler:
1273 logbook.info(LETTERS)
1274
1275 key, message = r.blpop(KEY)
1276 for field in FIELDS: self.assertIn(field, message)
1277 self.assertIn('test', message)
1278
1279 # and finally, check that fields are correctly added if appended to the
1280 # log message
1281 FIELDS.append('more_info')
1282 with null_handler.applicationbound():
1283 with redis_handler:
1284 logbook.info(LETTERS, more_info='This works')
1285
1286 key, message = r.blpop(KEY)
1287 for field in FIELDS: self.assertIn(field, message)
1288 self.assertIn('This works', message)
1289
1290
1291 class TicketingTestCase(LogbookTestCase):
1292
1293 @require_module('sqlalchemy')
1294 def test_basic_ticketing(self):
1295 from logbook.ticketing import TicketingHandler
1296 with TicketingHandler('sqlite:///') as handler:
1297 for x in xrange(5):
1298 self.log.warn('A warning')
1299 self.log.info('An error')
1300 if x < 2:
1301 try:
1302 1 / 0
1303 except Exception:
1304 self.log.exception()
1305
1306 self.assertEqual(handler.db.count_tickets(), 3)
1307 tickets = handler.db.get_tickets()
1308 self.assertEqual(len(tickets), 3)
1309 self.assertEqual(tickets[0].level, logbook.INFO)
1310 self.assertEqual(tickets[1].level, logbook.WARNING)
1311 self.assertEqual(tickets[2].level, logbook.ERROR)
1312 self.assertEqual(tickets[0].occurrence_count, 5)
1313 self.assertEqual(tickets[1].occurrence_count, 5)
1314 self.assertEqual(tickets[2].occurrence_count, 2)
1315 self.assertEqual(tickets[0].last_occurrence.level, logbook.INFO)
1316
1317 tickets[0].solve()
1318 self.assert_(tickets[0].solved)
1319 tickets[0].delete()
1320
1321 ticket = handler.db.get_ticket(tickets[1].ticket_id)
1322 self.assertEqual(ticket, tickets[1])
1323
1324 occurrences = handler.db.get_occurrences(tickets[2].ticket_id,
1325 order_by='time')
1326 self.assertEqual(len(occurrences), 2)
1327 record = occurrences[0]
1328 self.assertIn(__file_without_pyc__, record.filename)
1329 # avoid 2to3 destroying our assertion
1330 self.assertEqual(getattr(record, 'func_name'), 'test_basic_ticketing')
1331 self.assertEqual(record.level, logbook.ERROR)
1332 self.assertEqual(record.thread, get_ident())
1333 self.assertEqual(record.process, os.getpid())
1334 self.assertEqual(record.channel, 'testlogger')
1335 self.assertIn('1 / 0', record.formatted_exception)
1336
1337 class HelperTestCase(LogbookTestCase):
1338
1339 def test_jsonhelper(self):
1340 from logbook.helpers import to_safe_json
1341
1342 class Bogus(object):
1343 def __str__(self):
1344 return 'bogus'
1345
1346 rv = to_safe_json([
1347 None,
1348 'foo',
1349 u'jäger',
1350 1,
1351 datetime(2000, 1, 1),
1352 {'jäger1': 1, u'jäger2': 2, Bogus(): 3, 'invalid': object()},
1353 object() # invalid
1354 ])
1355 self.assertEqual(
1356 rv, [None, u'foo', u'jäger', 1, '2000-01-01T00:00:00Z',
1357 {u('jäger1'): 1, u'jäger2': 2, u'bogus': 3,
1358 u'invalid': None}, None])
1359
1360 def test_datehelpers(self):
1361 from logbook.helpers import format_iso8601, parse_iso8601
1362 now = datetime.now()
1363 rv = format_iso8601()
1364 self.assertEqual(rv[:4], str(now.year))
1365
1366 self.assertRaises(ValueError, parse_iso8601, 'foo')
1367 v = parse_iso8601('2000-01-01T00:00:00.12Z')
1368 self.assertEqual(v.microsecond, 120000)
1369 v = parse_iso8601('2000-01-01T12:00:00+01:00')
1370 self.assertEqual(v.hour, 11)
1371 v = parse_iso8601('2000-01-01T12:00:00-01:00')
1372 self.assertEqual(v.hour, 13)
1373
1374 class UnicodeTestCase(LogbookTestCase):
1375 # in Py3 we can just assume a more uniform unicode environment
1376 @require_py3
1377 def test_default_format_unicode(self):
1378 with capturing_stderr_context() as stream:
1379 self.log.warn('\u2603')
1380 self.assertIn('WARNING: testlogger: \u2603', stream.getvalue())
1381
1382 @require_py3
1383 def test_default_format_encoded(self):
1384 with capturing_stderr_context() as stream:
1385 # it's a string but it's in the right encoding so don't barf
1386 self.log.warn('\u2603')
1387 self.assertIn('WARNING: testlogger: \u2603', stream.getvalue())
1388
1389 @require_py3
1390 def test_default_format_bad_encoding(self):
1391 with capturing_stderr_context() as stream:
1392 # it's a string, is wrong, but just dump it in the logger,
1393 # don't try to decode/encode it
1394 self.log.warn('Русский'.encode('koi8-r'))
1395 self.assertIn("WARNING: testlogger: b'\\xf2\\xd5\\xd3\\xd3\\xcb\\xc9\\xca'", stream.getvalue())
1396
1397 @require_py3
1398 def test_custom_unicode_format_unicode(self):
1399 format_string = ('[{record.level_name}] '
1400 '{record.channel}: {record.message}')
1401 with capturing_stderr_context() as stream:
1402 with logbook.StderrHandler(format_string=format_string):
1403 self.log.warn("\u2603")
1404 self.assertIn('[WARNING] testlogger: \u2603', stream.getvalue())
1405
1406 @require_py3
1407 def test_custom_string_format_unicode(self):
1408 format_string = ('[{record.level_name}] '
1409 '{record.channel}: {record.message}')
1410 with capturing_stderr_context() as stream:
1411 with logbook.StderrHandler(format_string=format_string):
1412 self.log.warn('\u2603')
1413 self.assertIn('[WARNING] testlogger: \u2603', stream.getvalue())
1414
1415 @require_py3
1416 def test_unicode_message_encoded_params(self):
1417 with capturing_stderr_context() as stream:
1418 self.log.warn("\u2603 {0}", "\u2603".encode('utf8'))
1419 self.assertIn("WARNING: testlogger: \u2603 b'\\xe2\\x98\\x83'", stream.getvalue())
1420
1421 @require_py3
1422 def test_encoded_message_unicode_params(self):
1423 with capturing_stderr_context() as stream:
1424 self.log.warn('\u2603 {0}'.encode('utf8'), '\u2603')
1425 self.assertIn('WARNING: testlogger: \u2603 \u2603', stream.getvalue())
0 # -*- coding: utf-8 -*-
1 """
2 test utils for logbook
3 ~~~~~~~~~~~~~~~~~~~~~~
4
5 :copyright: (c) 2010 by Armin Ronacher, Georg Brandl.
6 :license: BSD, see LICENSE for more details.
7 """
8 from contextlib import contextmanager
9 import sys
10
11 # compare version tuples; lexicographic comparison of version strings is fragile
12 if sys.version_info < (2, 7):
13 import unittest2 as unittest
14 else:
15 import unittest
16 import logbook
17 from logbook.helpers import StringIO
18
19 _missing = object()
20
21
22 def get_total_delta_seconds(delta):
23 """
24 Replacement for datetime.timedelta.total_seconds() for Python 2.5, 2.6 and 3.1
25 """
26 return (delta.microseconds + (delta.seconds + delta.days * 24 * 3600) * 10**6) / float(10**6)
27
28
29 require_py3 = unittest.skipUnless(sys.version_info[0] == 3, "Requires Python 3")
30 def require_module(module_name):
31 try:
32 __import__(module_name)
33 except ImportError:
34 return unittest.skip("Requires the %r module" % (module_name,))
35 return lambda func: func
36
37 class LogbookTestSuite(unittest.TestSuite):
38 pass
39
40 class LogbookTestCase(unittest.TestCase):
41 def setUp(self):
42 self.log = logbook.Logger('testlogger')
43
44 # silence deprecation warning displayed on Py 3.2
45 LogbookTestCase.assert_ = LogbookTestCase.assertTrue
46
47 def make_fake_mail_handler(**kwargs):
48 class FakeMailHandler(logbook.MailHandler):
49 mails = []
50
51 def get_connection(self):
52 return self
53
54 def close_connection(self, con):
55 pass
56
57 def sendmail(self, fromaddr, recipients, mail):
58 self.mails.append((fromaddr, recipients, mail))
59
60 kwargs.setdefault('level', logbook.ERROR)
61 return FakeMailHandler('foo@example.com', ['bar@example.com'], **kwargs)
62
63
64 def missing(name):
65 def decorate(f):
66 def wrapper(*args, **kwargs):
67 old = sys.modules.get(name, _missing)
68 sys.modules[name] = None
69 try:
70 f(*args, **kwargs)
71 finally:
72 if old is _missing:
73 del sys.modules[name]
74 else:
75 sys.modules[name] = old
76 return wrapper
77 return decorate
78
79 def activate_via_with_statement(handler):
80 return handler
81
82 @contextmanager
83 def activate_via_push_pop(handler):
84 handler.push_thread()
85 try:
86 yield handler
87 finally:
88 handler.pop_thread()
89
90 @contextmanager
91 def capturing_stderr_context():
92 original = sys.stderr
93 sys.stderr = StringIO()
94 try:
95 yield sys.stderr
96 finally:
97 sys.stderr = original
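For context, a minimal sketch of what the two activation strategies above expand to in the test cases (handler is any logbook handler; both forms bind it to the calling thread):

    handler = logbook.TestHandler()

    # activate_via_with_statement: the handler doubles as a context manager
    with handler:
        logbook.warn('caught by handler')

    # activate_via_push_pop: explicit thread binding, released in the finally
    handler.push_thread()
    try:
        logbook.warn('caught by handler')
    finally:
        handler.pop_thread()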
0 from logbook import NTEventLogHandler, Logger
1
2 logger = Logger('MyLogger')
3 handler = NTEventLogHandler('My Application')
4
5 with handler.applicationbound():
6 logger.error('Testing')
0 [tox]
1 envlist=py26,py27,py33,pypy,docs
2
3 [testenv]
4 commands=
5 python {toxinidir}/scripts/test_setup.py
6 nosetests -w tests
7 deps=
8 nose
9 changedir={toxinidir}
10
11 [testenv:25]
12 deps=
13 ssl
14 nose
15
16 [testenv:docs]
17 deps=
18 Sphinx==1.1.3
19 changedir=docs
20 commands=
21 sphinx-build -W -b html . _build/html
22 sphinx-build -W -b linkcheck . _build/linkcheck
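As a usage note (assuming tox itself is installed): running tox with no arguments executes every environment in envlist, while a single environment can be selected explicitly:

    tox            # py26, py27, py33, pypy and docs
    tox -e docs    # only build and link-check the documentation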