Update upstream source from tag 'upstream/0.6.3'
Update to upstream version '0.6.3'
with Debian dir f242a771372a44bac05774b2b7305eba2824f029
Ole Streicher
1 year, 6 months ago
-# readthedocs.yml
-# Read the Docs configuration file
-# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
-
 version: 2
+build:
+  os: ubuntu-20.04
+  tools:
+    python: "3.9"
+  apt_packages:
+    - graphviz
 
 sphinx:
   builder: html
   configuration: docs/conf.py
-  fail_on_warning: true
+  fail_on_warning: false
 
 python:
   install:
-    - method: pip
-      extra_requirements:
-        - dev
-      path: .
+    - method: pip
+      extra_requirements:
+        - all
+        - docs
+      path: .
+0.6.3 (2022-10-13)
+==================
+
+Bug Fixes
+---------
+
+- Updated indexing in a function to prevent FutureWarnings from pandas. (`#73 <https://github.com/sunpy/drms/pull/73>`__)
+
+
+Trivial/Internal Changes
+------------------------
+
+- Updated the init of `drms.json.HttpJsonRequest` to raise a nicer message if the URL fails to open. (`#76 <https://github.com/sunpy/drms/pull/76>`__)
+
+
 0.6.2 (2021-05-15)
 ==================
 
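The second changelog entry describes re-raising a failed URL open with a friendlier message. The actual `drms.json.HttpJsonRequest` implementation is not part of this diff, so the following is only a hypothetical sketch of that pattern; the class body and message wording are invented for illustration:

```python
from urllib.error import URLError
from urllib.request import urlopen


class HttpJsonRequest:
    """Sketch of the pattern from the changelog entry: wrap a failed
    URL open in an error that names the offending URL (body invented,
    not the real drms implementation)."""

    def __init__(self, url):
        self._url = url
        try:
            self._http = urlopen(url)
        except URLError as exc:
            # Include the URL in the message so the user sees what failed.
            raise URLError(f"Unable to open URL {url}: {exc.reason}") from exc
```

A `file://` URL that does not exist triggers the same `URLError` path without any network access, which makes the behaviour easy to check locally.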
-Copyright (c) 2013-2021 The SunPy developers
+Copyright (c) 2013-2022 The SunPy developers
 All rights reserved.
 
 Redistribution and use in source and binary forms, with or without
 Metadata-Version: 2.1
 Name: drms
-Version: 0.6.2
-Summary: "Access HMI, AIA and MDI data with Python from the public JSOC DRMS server"
+Version: 0.6.3
+Summary: Access HMI, AIA and MDI data with Python from the Stanford JSOC DRMS
 Home-page: https://sunpy.org
 Author: The SunPy Community
 Author-email: sunpy@googlegroups.com
 License: BSD 2-Clause
-Description: ====
-        drms
-        ====
-
-        `Docs <https://docs.sunpy.org/projects/drms/>`__ |
-        `Tutorial <https://docs.sunpy.org/projects/drms/en/latest/tutorial.html>`__ |
-        `Github <https://github.com/sunpy/drms>`__ |
-        `PyPI <https://pypi.python.org/pypi/drms>`__
-
-        |JOSS| |Zenodo|
-
-        .. |JOSS| image:: https://joss.theoj.org/papers/10.21105/joss.01614/status.svg
-           :target: https://doi.org/10.21105/joss.01614
-        .. |Zenodo| image:: https://zenodo.org/badge/58651845.svg
-           :target: https://zenodo.org/badge/latestdoi/58651845
-
-        The ``drms`` module provides an easy-to-use interface for accessing HMI, AIA and MDI data with Python.
-        It uses the publicly accessible `JSOC <http://jsoc.stanford.edu/>`__ DRMS server by default, but can also be used with local `NetDRMS <http://jsoc.stanford.edu/netdrms/>`__ sites.
-        More information, including a detailed tutorial, is available in the `Documentation <https://docs.sunpy.org/projects/drms/>`__.
-
-        Getting Help
-        ------------
-
-        This is a SunPy-affiliated package. For more information or to ask questions
-        about drms or SunPy, check out:
-
-        - `drms Documentation`_
-        - `SunPy Matrix Channel`_
-        - `SunPy mailing list`_
-
-        .. _drms Documentation: https://docs.sunpy.org/projects/drms/en/latest/
-        .. _SunPy Matrix Channel: https://riot.im/app/#/room/#sunpy:matrix.org
-
-        Contributing
-        ------------
-
-        If you would like to get involved, start by joining the `SunPy mailing list`_ and check out the `Developers Guide`_ section of the SunPy docs.
-        Stop by our chat room `#sunpy:matrix.org`_ if you have any questions.
-        Help is always welcome so let us know what you like to work on, or check out the `issues page`_ for the list of known outstanding items.
-
-        For more information on contributing to SunPy and to DRMS, please read our `Newcomers guide`_.
-
-        .. _SunPy mailing list: https://groups.google.com/forum/#!forum/sunpy
-        .. _Developers Guide: https://docs.sunpy.org/en/latest/dev_guide/index.html
-        .. _`#sunpy:matrix.org`: https://app.element.io/#/room/#sunpy:openastronomy.org
-        .. _issues page: https://github.com/sunpy/drms/issues
-        .. _Newcomers guide: https://docs.sunpy.org/en/latest/dev_guide/newcomers.html
-
-        Code of Conduct (CoC)
-        ---------------------
-
-        When you are interacting with the SunPy community you are asked to follow our `code of conduct`_.
-
-        .. _code of conduct: https://docs.sunpy.org/en/latest/code_of_conduct.html
-
-        Citation
-        --------
-
-        If you use ``drms`` in your work, please consider citing our `paper`_.
-
-        .. code :: bibtex
-
-            @article{Glogowski2019,
-              doi = {10.21105/joss.01614},
-              url = {https://doi.org/10.21105/joss.01614},
-              year = {2019},
-              publisher = {The Open Journal},
-              volume = {4},
-              number = {40},
-              pages = {1614},
-              author = {Kolja Glogowski and Monica G. Bobra and Nitin Choudhary and Arthur B. Amezcua and Stuart J. Mumford},
-              title = {drms: A Python package for accessing HMI and AIA data},
-              journal = {Journal of Open Source Software}
-            }
-
-        .. _paper: https://doi.org/10.21105/joss.01614
-
-        Acknowledgements
-        ----------------
-
-        Kolja Glogowski has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement no. 307117.
-
 Keywords: solar physics,solar,science,data
 Platform: any
 Classifier: Development Status :: 5 - Production/Stable
 Classifier: Operating System :: OS Independent
 Classifier: Programming Language :: Python
 Classifier: Programming Language :: Python :: 3
-Classifier: Programming Language :: Python :: 3.7
 Classifier: Programming Language :: Python :: 3.8
 Classifier: Programming Language :: Python :: 3.9
+Classifier: Programming Language :: Python :: 3.10
 Classifier: Topic :: Scientific/Engineering :: Astronomy
 Classifier: Topic :: Scientific/Engineering :: Physics
 Provides: drms
-Requires-Python: >=3.7
+Requires-Python: >=3.8
 Description-Content-Type: text/x-rst
 Provides-Extra: tests
 Provides-Extra: docs
 Provides-Extra: dev
 Provides-Extra: all
+License-File: LICENSE.rst
+
+====
+drms
+====
+
+`Docs <https://docs.sunpy.org/projects/drms/>`__ |
+`Tutorial <https://docs.sunpy.org/projects/drms/en/latest/tutorial.html>`__ |
+`Github <https://github.com/sunpy/drms>`__ |
+`PyPI <https://pypi.org/project/drms/>`__
+
+|JOSS| |Zenodo|
+
+.. |JOSS| image:: https://joss.theoj.org/papers/10.21105/joss.01614/status.svg
+   :target: https://doi.org/10.21105/joss.01614
+.. |Zenodo| image:: https://zenodo.org/badge/58651845.svg
+   :target: https://zenodo.org/badge/latestdoi/58651845
+
+The ``drms`` module provides an easy-to-use interface for accessing HMI, AIA and MDI data with Python.
+It uses the publicly accessible `JSOC <http://jsoc.stanford.edu/>`__ DRMS server by default, but can also be used with local `NetDRMS <http://jsoc.stanford.edu/netdrms/>`__ sites.
+More information, including a detailed tutorial, is available in the `Documentation <https://docs.sunpy.org/projects/drms/>`__.
+
+Getting Help
+------------
+For more information or to ask questions about ``drms``, check out:
+
+- `drms Documentation <https://docs.sunpy.org/projects/drms/en/latest/>`__
+
+.. _drms Documentation: https://docs.sunpy.org/projects/drms/en/latest/
+
+Contributing
+------------
+If you would like to get involved, start by joining the `SunPy Chat`_ and read our `Newcomers' guide <https://docs.sunpy.org/en/latest/dev_guide/newcomers.html>`__.
+Help is always welcome so let us know what you like to work on, or check out the `issues page <https://github.com/sunpy/drms/issues>`__ for a list of known outstanding items.
+
+Citation
+--------
+If you use ``drms`` in your work, please cite our `paper <https://doi.org/10.21105/joss.01614>`__.
+
+.. code :: bibtex
+
+    @article{Glogowski2019,
+      doi = {10.21105/joss.01614},
+      url = {https://doi.org/10.21105/joss.01614},
+      year = {2019},
+      publisher = {The Open Journal},
+      volume = {4},
+      number = {40},
+      pages = {1614},
+      author = {Kolja Glogowski and Monica G. Bobra and Nitin Choudhary and Arthur B. Amezcua and Stuart J. Mumford},
+      title = {drms: A Python package for accessing HMI and AIA data},
+      journal = {Journal of Open Source Software}
+    }
+
+Code of Conduct (CoC)
+---------------------
+
+When you are interacting with the SunPy community you are asked to follow our `code of conduct <https://docs.sunpy.org/en/latest/code_of_conduct.html>`__.
+
+Acknowledgements
+----------------
+Kolja Glogowski has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement no. 307117.
+
+.. _SunPy Chat: https://openastronomy.element.io/#/room/#sunpy:openastronomy.org
 `Docs <https://docs.sunpy.org/projects/drms/>`__ |
 `Tutorial <https://docs.sunpy.org/projects/drms/en/latest/tutorial.html>`__ |
 `Github <https://github.com/sunpy/drms>`__ |
-`PyPI <https://pypi.python.org/pypi/drms>`__
+`PyPI <https://pypi.org/project/drms/>`__
 
 |JOSS| |Zenodo|
 
 
 Getting Help
 ------------
+For more information or to ask questions about ``drms``, check out:
 
-This is a SunPy-affiliated package. For more information or to ask questions
-about drms or SunPy, check out:
-
-- `drms Documentation`_
-- `SunPy Matrix Channel`_
-- `SunPy mailing list`_
+- `drms Documentation <https://docs.sunpy.org/projects/drms/en/latest/>`__
 
 .. _drms Documentation: https://docs.sunpy.org/projects/drms/en/latest/
-.. _SunPy Matrix Channel: https://riot.im/app/#/room/#sunpy:matrix.org
 
 Contributing
 ------------
-
-If you would like to get involved, start by joining the `SunPy mailing list`_ and check out the `Developers Guide`_ section of the SunPy docs.
-Stop by our chat room `#sunpy:matrix.org`_ if you have any questions.
-Help is always welcome so let us know what you like to work on, or check out the `issues page`_ for the list of known outstanding items.
-
-For more information on contributing to SunPy and to DRMS, please read our `Newcomers guide`_.
-
-.. _SunPy mailing list: https://groups.google.com/forum/#!forum/sunpy
-.. _Developers Guide: https://docs.sunpy.org/en/latest/dev_guide/index.html
-.. _`#sunpy:matrix.org`: https://app.element.io/#/room/#sunpy:openastronomy.org
-.. _issues page: https://github.com/sunpy/drms/issues
-.. _Newcomers guide: https://docs.sunpy.org/en/latest/dev_guide/newcomers.html
-
-Code of Conduct (CoC)
----------------------
-
-When you are interacting with the SunPy community you are asked to follow our `code of conduct`_.
-
-.. _code of conduct: https://docs.sunpy.org/en/latest/code_of_conduct.html
+If you would like to get involved, start by joining the `SunPy Chat`_ and read our `Newcomers' guide <https://docs.sunpy.org/en/latest/dev_guide/newcomers.html>`__.
+Help is always welcome so let us know what you like to work on, or check out the `issues page <https://github.com/sunpy/drms/issues>`__ for a list of known outstanding items.
 
 Citation
 --------
-
-If you use ``drms`` in your work, please consider citing our `paper`_.
+If you use ``drms`` in your work, please cite our `paper <https://doi.org/10.21105/joss.01614>`__.
 
 .. code :: bibtex
 
       journal = {Journal of Open Source Software}
     }
 
-.. _paper: https://doi.org/10.21105/joss.01614
+Code of Conduct (CoC)
+---------------------
+
+When you are interacting with the SunPy community you are asked to follow our `code of conduct <https://docs.sunpy.org/en/latest/code_of_conduct.html>`__.
 
 Acknowledgements
 ----------------
+Kolja Glogowski has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement no. 307117.
 
-Kolja Glogowski has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement no. 307117.
+.. _SunPy Chat: https://openastronomy.element.io/#/room/#sunpy:openastronomy.org
 import drms
 
 # Test URLs, used to check if a online site is reachable
-jsoc_testurl = 'http://jsoc.stanford.edu/'
-kis_testurl = 'http://drms.leibniz-kis.de/'
+jsoc_testurl = "http://jsoc.stanford.edu/"
+kis_testurl = "http://drms.leibniz-kis.de/"
 
 
 def pytest_addoption(parser):
-    parser.addoption('--email', help='Export email address')
+    parser.addoption("--email", help="Export email address")
 
 
 class lazily_cached:
         self.func = lambda: f(*args, **kwargs)
 
     def __call__(self):
-        if not hasattr(self, 'result'):
+        if not hasattr(self, "result"):
             self.result = self.func()
         return self.result
 
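The `lazily_cached` helper appears only partially in this hunk (the diff collapses its `__init__` signature). A self-contained sketch of the same memoization pattern, with the `__init__` definition reconstructed as an assumption from the visible `self.func = lambda: ...` line:

```python
class lazily_cached:
    """Defer a function call and cache its result.

    The __init__ signature here is an assumption; the diff only shows
    the body line that binds the function and its arguments.
    """

    def __init__(self, f, *args, **kwargs):
        # Bind the function and its arguments; nothing runs yet.
        self.func = lambda: f(*args, **kwargs)

    def __call__(self):
        # First call computes and stores the result; later calls reuse it.
        if not hasattr(self, "result"):
            self.result = self.func()
        return self.result
```

This is handy in a conftest because an expensive check (such as a reachability probe) runs at most once per test session, and only if some test actually needs it.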
 
 def pytest_runtest_setup(item):
     # Skip JSOC online site tests if the site is not reachable.
-    if item.get_closest_marker('jsoc') is not None:
+    if item.get_closest_marker("jsoc") is not None:
         if not jsoc_reachable():
-            pytest.skip('JSOC is not reachable')
+            pytest.skip("JSOC is not reachable")
 
     # Skip KIS online site tests if the site is not reachable.
-    if item.get_closest_marker('kis') is not None:
+    if item.get_closest_marker("kis") is not None:
         if not kis_reachable():
-            pytest.skip('KIS is not reachable')
+            pytest.skip("KIS is not reachable")
 
     # Skip export tests if no email address was specified.
-    if item.get_closest_marker('export') is not None:
-        email = item.config.getoption('email')
+    if item.get_closest_marker("export") is not None:
+        email = item.config.getoption("email")
         if email is None:
-            pytest.skip('No email address specified; use the --email option to enable export tests')
+            pytest.skip("No email address specified; use the --email option to enable export tests")
 
 
 @pytest.fixture
     """
     Email address from --email command line option.
     """
-    return request.config.getoption('--email')
+    return request.config.getoption("--email")
 
 
 @pytest.fixture
     """
     Client fixture for JSOC online tests, does not use email.
     """
-    return drms.Client('jsoc')
+    return drms.Client("jsoc")
 
 
 @pytest.fixture
     """
     Client fixture for JSOC online tests, uses email if specified.
     """
-    return drms.Client('jsoc', email=email)
+    return drms.Client("jsoc", email=email)
 
 
 @pytest.fixture
     """
     Client fixture for KIS online tests.
     """
-    return drms.Client('kis')
+    return drms.Client("kis")
 *************
 API Reference
 *************
+
+.. automodapi:: drms
 
 .. automodapi:: drms.client
 
-#
 # Configuration file for the Sphinx documentation builder.
-#
-# This file does only contain a selection of the most common options. For a
-# full list see the documentation:
-# http://www.sphinx-doc.org/en/master/config
-
 
 # -- Project information -----------------------------------------------------
 
 import os
 from drms import __version__
 
-project = 'drms'
-copyright = '2021, The SunPy Developers'
-author = 'The SunPy Developers'
+project = "drms"
+copyright = "2022, The SunPy Developers"
+author = "The SunPy Developers"
 
 # The full version, including alpha/beta/rc tags
 release = __version__
-is_development = '.dev' in __version__
+is_development = ".dev" in __version__
 
 # -- General configuration ---------------------------------------------------
 
 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
 # ones.
 extensions = [
-    'sphinx_automodapi.automodapi',
-    'sphinx_automodapi.smart_resolver',
-    'sphinx_gallery.gen_gallery',
-    'sphinx.ext.autodoc',
-    'sphinx.ext.coverage',
-    'sphinx.ext.doctest',
-    'sphinx.ext.inheritance_diagram',
-    'sphinx.ext.intersphinx',
-    'sphinx.ext.mathjax',
-    'sphinx.ext.napoleon',
-    'sphinx.ext.todo',
-    'sphinx.ext.viewcode',
-    'sphinx_changelog',
+    "sphinx_automodapi.automodapi",
+    "sphinx_automodapi.smart_resolver",
+    "sphinx_gallery.gen_gallery",
+    "sphinx.ext.autodoc",
+    "sphinx.ext.coverage",
+    "sphinx.ext.doctest",
+    "sphinx.ext.inheritance_diagram",
+    "sphinx.ext.intersphinx",
+    "sphinx.ext.mathjax",
+    "sphinx.ext.napoleon",
+    "sphinx.ext.todo",
+    "sphinx.ext.viewcode",
+    "sphinx_changelog",
 ]
 numpydoc_show_class_members = False
 
 # List of patterns, relative to source directory, that match files and
 # directories to ignore when looking for source files.
 # This pattern also affects html_static_path and html_extra_path.
-exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
+exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
 
 # The suffix(es) of source filenames.
 # You can specify multiple suffix as a list of string:
-source_suffix = '.rst'
+source_suffix = ".rst"
 
 # The master toctree document.
-master_doc = 'index'
+master_doc = "index"
 
 # The reST default role (used for this markup: `text`) to use for all
 # documents. Set to the "smart" one.
-default_role = 'obj'
+default_role = "obj"
 
 # Enable nitpicky mode, which forces links to be non-broken
 nitpicky = True
 nitpick_ignore = [
-    ('py:obj', 'numpy.datetime64'),
+    ("py:obj", "numpy.datetime64"),
     # See https://github.com/numpy/numpy/issues/10039
 ]
 
-# Example configuration for intersphinx: refer to the Python standard library.
+# -- Options for intersphinx extension -----------------------------------------
 intersphinx_mapping = {
-    'python': ('https://docs.python.org/3/', (None, 'http://data.astropy.org/intersphinx/python3.inv'),),
-    'pandas': ('http://pandas.pydata.org/pandas-docs/stable/', None),
-    'numpy': ('https://docs.scipy.org/doc/numpy/', (None, 'http://data.astropy.org/intersphinx/numpy.inv'),),
-    'scipy': (
-        'https://docs.scipy.org/doc/scipy/reference/',
-        (None, 'http://data.astropy.org/intersphinx/scipy.inv'),
-    ),
-    'matplotlib': ('https://matplotlib.org/', (None, 'http://data.astropy.org/intersphinx/matplotlib.inv'),),
-    'astropy': ('http://docs.astropy.org/en/stable/', None),
-    'sunpy': ('https://docs.sunpy.org/en/stable/', None),
+    "python": ("https://docs.python.org/3/", None),
+    "numpy": ("https://numpy.org/doc/stable/", None),
+    "astropy": ("https://docs.astropy.org/en/stable/", None),
+    "pandas": ("https://pandas.pydata.org/pandas-docs/stable/", None),
+    "sunpy": ("https://docs.sunpy.org/en/stable/", None),
 }
 
 # -- Options for HTML output -------------------------------------------------
 try:
     from sunpy_sphinx_theme.conf import *
 except ImportError:
-    html_theme = 'default'
+    html_theme = "default"
 
 # JSOC email os env
-os.environ["JSOC_EMAIL"] = "jsoc@cadair.com"
+os.environ["JSOC_EMAIL"] = "jsoc@sunpy.org"
 
 # Add any paths that contain custom static files (such as style sheets) here,
 # relative to this directory. They are copied after the builtin static files,
 # html_static_path = ['_static']
 
 # Render inheritance diagrams in SVG
-graphviz_output_format = 'svg'
+graphviz_output_format = "svg"
 
 graphviz_dot_args = [
-    '-Nfontsize=10',
-    '-Nfontname=Helvetica Neue, Helvetica, Arial, sans-serif',
-    '-Efontsize=10',
-    '-Efontname=Helvetica Neue, Helvetica, Arial, sans-serif',
-    '-Gfontsize=10',
-    '-Gfontname=Helvetica Neue, Helvetica, Arial, sans-serif',
+    "-Nfontsize=10",
+    "-Nfontname=Helvetica Neue, Helvetica, Arial, sans-serif",
+    "-Efontsize=10",
+    "-Efontname=Helvetica Neue, Helvetica, Arial, sans-serif",
+    "-Gfontsize=10",
+    "-Gfontname=Helvetica Neue, Helvetica, Arial, sans-serif",
 ]
 
 # -- Sphinx Gallery ------------------------------------------------------------
 from sphinx_gallery.sorting import ExampleTitleSortKey
 
 sphinx_gallery_conf = {
-    'backreferences_dir': os.path.join('generated', 'modules'),
-    'filename_pattern': '^((?!skip_).)*$',
-    'examples_dirs': os.path.join('..', 'examples'),
-    'within_subsection_order': ExampleTitleSortKey,
-    'gallery_dirs': os.path.join('generated', 'gallery'),
+    "backreferences_dir": os.path.join("generated", "modules"),
+    "filename_pattern": "^((?!skip_).)*$",
+    "examples_dirs": os.path.join("..", "examples"),
+    "within_subsection_order": ExampleTitleSortKey,
+    "gallery_dirs": os.path.join("generated", "gallery"),
     # Comes from the theme.
-    'default_thumb_file': os.path.join(html_static_path[0], 'img', 'sunpy_icon_128x128.png'),
-    'abort_on_example_error': False,
-    'only_warn_on_example_error': True,
-    'plot_gallery': True,
-    'remove_config_comments': True,
-    'doc_module': ('sunpy'),
+    "default_thumb_file": os.path.join(html_static_path[0], "img", "sunpy_icon_128x128.png"),
+    "abort_on_example_error": False,
+    "only_warn_on_example_error": True,
+    "plot_gallery": True,
+    "remove_config_comments": True,
+    "doc_module": ("sunpy"),
 }
-##################
+******************
 drms documentation
-##################
+******************
 
 :Github: https://github.com/sunpy/drms
-:PyPI: https://pypi.python.org/pypi/drms
+:PyPI: https://pypi.org/project/drms/
 
-Python library for accessing HMI, AIA and MDI data.
+Python library for accessing HMI, AIA and MDI data from the Joint Science Operations Center (JSOC) or other Data Record Management System (DRMS) servers.
 
 .. toctree::
    :maxdepth: 2
+************
 Introduction
-============
-
+************
 The ``drms`` Python package can be used to access HMI, AIA and MDI data which are stored in a DRMS database system.
 
-DRMS stands for *Data Record Management System* and is a system that was developed by the `Joint Science Operation Center <http://jsoc.stanford.edu/>`__ (JSOC), headquartered at Stanford University, to handle the data produced by the AIA and HMI instruments aboard the `Solar Dynamics Observatory <http://sdo.gsfc.nasa.gov/>`__ spacecraft.
+DRMS stands for *Data Record Management System* and is a system that was developed by the `Joint Science Operation Center <http://jsoc.stanford.edu/>`__ (JSOC), headquartered at Stanford University, to handle the data produced by the AIA and HMI instruments aboard the `Solar Dynamics Observatory <https://sdo.gsfc.nasa.gov/>`__ spacecraft.
 
 By default the ``drms`` library uses the HTTP/JSON interface provided by JSOC and has similar functionality to the `JSOC Lookdata <http://jsoc.stanford.edu/ajax/lookdata.html>`__ website.
 It can be used to query metadata, submit data export requests and download data files.
 This module also works well for local `NetDRMS <http://jsoc.stanford.edu/netdrms/>`__ sites, as long as the site runs a web server providing the needed CGI programs ``show_series`` and ``jsoc_info`` (for the data export functionality, additional CGIs, like ``jsoc_fetch``, are needed).
 
 Requirements
-------------
-
+============
 The ``drms`` module supports Python 3.7 or newer.
 It requires the following Python packages:
 
 - pandas
 
 Installation
-------------
+============
+If you are using `miniforge`_ (which is conda but using the conda-forge channel):
 
-If you are using `Anaconda`_, it is recommended to use the `conda-forge`_ package::
+.. code-block:: bash
 
-    conda config --append channels conda-forge
     conda install drms
 
-Otherwise the ``drms`` Python package can be installed from `PyPI`_ using::
+Otherwise the ``drms`` Python package can be installed from `PyPI`_ using:
+
+.. code-block:: bash
 
     pip install drms
 
 .. note::
-    If you do not use a Python distribution, like `Anaconda`_,
+    If you do not use a Python distribution, like `miniforge`_,
     and did not create an isolated Python environment using `Virtualenv`_,
-    you might need to add ``--user`` to the ``pip`` command::
+    you might need to add ``--user`` to the ``pip`` command:
 
-        pip install --user drms
+    .. code-block:: bash
 
-.. _PyPI: https://pypi.python.org/pypi/drms
+        pip install --user drms
+
+.. _PyPI: https://pypi.org/project/drms/
 .. _conda-forge: https://anaconda.org/conda-forge/drms
-.. _Anaconda: https://www.anaconda.com/distribution/
-.. _Virtualenv: https://virtualenv.pypa.io
+.. _miniforge: https://github.com/conda-forge/miniforge#miniforge3
+.. _Virtualenv: https://virtualenv.pypa.io/en/latest/
 
 Acknowledgements
-----------------
-
+================
 Kolja Glogowski has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement no. 307117.
 .. _tutorial:
 
+********
 Tutorial
-========
-
+********
 This tutorial gives an introduction on how to use the ``drms`` Python library.
 More detailed information on the different classes and functions can be found in the :ref:`API Reference <reference>`.
 
 Basic usage
-----------
-
-In this first part, we start with looking at data series that are available from `JSOC <http://jsoc.stanford.edu/>`__ and perform some basic DRMS queries to obtain keyword data (metadata) and segment file (data) locations.
+===========
+We start with looking at data series that are available from `JSOC <http://jsoc.stanford.edu/>`__ and perform some basic DRMS queries to obtain keyword data (metadata) and segment file (data) locations.
 This is essentially what you can do on the `JSOC Lookdata <http://jsoc.stanford.edu/ajax/lookdata.html>`__ website.
 
-To be able to access the JSOC DRMS from Python, we first need to import the ``drms`` module and create an instance of the `~drms.client.Client` class::
+To be able to access the JSOC DRMS from Python, we first need to import the ``drms`` module and create an instance of the `~drms.client.Client` class:
+
+.. code-block:: python
 
     >>> import drms
     >>> client = drms.Client()  # doctest: +REMOTE_DATA
 
-All available data series can be now retrieved by calling the :meth:`drms.client.Client.series` method.
+All available data series can be now retrieved by calling :meth:`drms.client.Client.series`.
 HMI series names start with ``"hmi."``, AIA series names with ``"aia."`` and the names of MDI series with ``"mdi."``.
 
 The first (optional) parameter of this method takes a regular expression that allows you to filter the result.
-If you, for example, want to obtain a list of HMI series, with a name that start with the string ``"m_"``, you can write::
+If for example, you want to obtain a list of HMI series, with a name that start with the string ``"m_"``, you can write:
+
+.. code-block:: python
 
     >>> client.series(r'hmi\.m_')  # doctest: +REMOTE_DATA
     ['hmi.M_45s', 'hmi.M_45s_dcon', 'hmi.M_720s', 'hmi.M_720s_dcon', 'hmi.M_720s_dconS', 'hmi.m_720s_mod', 'hmi.m_720s_nrt']
31 | 34 | DRMS records can be selected by creating a query string that contains a series name, followed by one or more fields, which are surrounded by square brackets. |
32 | 35 | Each of those fields corresponds to a specific primekey, that is specified in the series definition. |
33 | 36 | A complete set of primekeys represents a unique identifier for a record in that particular series. |
34 | For more detailed information on building record set queries, including additional non-primekey fields, see the `JSOC Help <http://jsoc.stanford.edu/ajax/RecordSetHelp.html>`__ page about this topiclient. | |
35 | ||
36 | With the ``drms`` module you can use the :meth:`drms.client.Client.pkeys` method to obtain a list of all primekeys of a series, e.g.:: | |
37 | For more detailed information on building record set queries, including additional non-primekey fields, see the `JSOC Help <http://jsoc.stanford.edu/ajax/RecordSetHelp.html>`__ page about this. | |
38 | ||
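The bracketed-field syntax above can be sketched with plain string formatting; the series name and time value below are only illustrative:

```python
# Build a DRMS record set query string: a series name followed by one
# bracketed primekey field (the values here are purely illustrative).
series = "hmi.v_45s"
t_rec = "2016.04.01_TAI/1d@6h"
query_string = f"{series}[{t_rec}]"
print(query_string)  # hmi.v_45s[2016.04.01_TAI/1d@6h]
```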
39 | With the ``drms`` module you can use :meth:`drms.client.Client.pkeys` to obtain a list of all primekeys of a series: | |
40 | ||
41 | .. code-block:: python | |
37 | 42 | |
38 | 43 | >>> client.pkeys('hmi.m_720s') # doctest: +REMOTE_DATA |
39 | 44 | ['T_REC', 'CAMERA'] |
40 | ||
41 | 45 | >>> client.pkeys('hmi.v_sht_modes') # doctest: +REMOTE_DATA |
42 | 46 | ['T_START', 'LMIN', 'LMAX', 'NDT'] |
43 | 47 | |
44 | 48 | A list of all (regular) keywords can be obtained using :meth:`drms.client.Client.keys`. |
45 | You can also use the method :meth:`drms.client.Client.info` to get more detailed information about a series, e.g.:: | |
46 | ||
47 | >>> si = client.info('hmi.v_avg120') # doctest: +REMOTE_DATA | |
48 | >>> si.segments # doctest: +REMOTE_DATA | |
49 | You can also use :meth:`drms.client.Client.info` to get more detailed information about a series: | |
50 | ||
51 | .. code-block:: python | |
52 | ||
53 | >>> series_info = client.info('hmi.v_avg120') # doctest: +REMOTE_DATA | |
54 | >>> series_info.segments # doctest: +REMOTE_DATA | |
49 | 55 | type units protocol dims note |
50 | 56 | name |
51 | 57 | mean short m/s fits 4096x4096 Doppler mean |
53 | 59 | valid short NA fits 4096x4096 valid pixel count |
54 | 60 | Log char NA generic run log |
55 | 61 | |
56 | All table-like structures, returned by routines in the ``drms`` module, are `Pandas DataFrames <http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html>`__. | |
57 | If you are new to `Pandas <http://pandas.pydata.org/>`__, you should have a look at the introduction to `Pandas Data Structures <http://pandas.pydata.org/pandas-docs/stable/dsintro.html>`__. | |
58 | ||
59 | Record set queries, used to obtain keyword data and get the location of data segments, can be performed using the :meth:`drms.client.Client.query` method. | |
60 | To get, for example, the record time and the mean value for some of the HMI Dopplergrams that were recorded on April 1, 2016, together with the spacecraft's radial velocity in respect to the Sun, you can write:: | |
61 | ||
62 | >>> k = client.query('hmi.v_45s[2016.04.01_TAI/1d@6h]', | |
62 | All table-like structures returned by routines in the ``drms`` module are `Pandas DataFrames <https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html>`__. | 
63 | If you are new to `Pandas <https://pandas.pydata.org/>`__, you should have a look at the introduction to `Pandas Data Structures <https://pandas.pydata.org/pandas-docs/stable/dsintro.html>`__. | |
64 | ||
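Because these results are ordinary DataFrames, all the usual pandas operations apply to them. A self-contained sketch, using a hand-made table that mimics a keyword query result (the values are invented for illustration):

```python
import pandas as pd

# A hand-made DataFrame standing in for a drms query result
# (column names mirror the tutorial's example; values are invented).
keywords = pd.DataFrame({
    "T_REC": ["2016.04.01_00:00:00_TAI", "2016.04.01_06:00:00_TAI"],
    "DATAMEAN": [3313.104980, 878.075195],
})
# Standard pandas boolean indexing works on any table drms returns.
bright = keywords[keywords.DATAMEAN > 1000.0]
print(len(bright))  # 1
```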
65 | Record set queries, used to obtain keyword data and get the location of data segments, can be performed using :meth:`drms.client.Client.query`. | |
66 | To get, for example, the record time and the mean value for some of the HMI Dopplergrams that were recorded on April 1, 2016, together with the spacecraft's radial velocity with respect to the Sun, you can write: | 
67 | ||
68 | .. code-block:: python | |
69 | ||
70 | >>> query = client.query('hmi.v_45s[2016.04.01_TAI/1d@6h]', | |
63 | 71 | ... key='T_REC, DATAMEAN, OBS_VR') # doctest: +REMOTE_DATA |
64 | >>> k # doctest: +REMOTE_DATA | |
72 | >>> query # doctest: +REMOTE_DATA | |
65 | 73 | T_REC DATAMEAN OBS_VR |
66 | 74 | 0 2016.04.01_00:00:00_TAI 3313.104980 3309.268006 |
67 | 75 | 1 2016.04.01_06:00:00_TAI 878.075195 887.864139 |
68 | 76 | 2 2016.04.01_12:00:00_TAI -2289.062500 -2284.690263 |
69 | 77 | 3 2016.04.01_18:00:00_TAI 128.609283 137.836168 |
70 | 78 | |
71 | JSOC time strings can be converted to a naive ``datetime`` | |
72 | representation using the :meth:`drms.utils.to_datetime` utility function:: | |
73 | ||
74 | >>> t = drms.to_datetime(k.T_REC) # doctest: +REMOTE_DATA | |
75 | >>> t # doctest: +REMOTE_DATA | |
79 | JSOC time strings can be converted to a naive `~datetime.datetime` representation using :meth:`drms.utils.to_datetime`: | |
80 | ||
81 | .. code-block:: python | |
82 | ||
83 | >>> timestamps = drms.to_datetime(query.T_REC) # doctest: +REMOTE_DATA | |
84 | >>> timestamps # doctest: +REMOTE_DATA | |
76 | 85 | 0 2016-04-01 00:00:00 |
77 | 86 | 1 2016-04-01 06:00:00 |
78 | 87 | 2 2016-04-01 12:00:00 |
81 | 90 | |
82 | 91 | For most of the HMI and MDI data sets, the `TAI <https://en.wikipedia.org/wiki/International_Atomic_Time>`__ time standard is used which, in contrast to `UTC <https://en.wikipedia.org/wiki/Coordinated_Universal_Time>`__, does not make use of any leap seconds. |
83 | 92 | The TAI standard is currently not supported by the Python standard libraries. |
84 | If you need to convert timestamps between TAI and UTC, you can use the `Astropy <http://www.astropy.org/>`__ time module:: | |
93 | If you need to convert timestamps between TAI and UTC, you can use `Astropy <https://www.astropy.org/>`__: | |
94 | ||
95 | .. code-block:: python | |
85 | 96 | |
86 | 97 | >>> from astropy.time import Time |
87 | >>> ta = Time(t[0], format='datetime', scale='tai') # doctest: +REMOTE_DATA | |
88 | >>> ta # doctest: +REMOTE_DATA | |
98 | >>> start_time = Time(timestamps[0], format='datetime', scale='tai') # doctest: +REMOTE_DATA | |
99 | >>> start_time # doctest: +REMOTE_DATA | |
89 | 100 | <Time object: scale='tai' format='datetime' value=2016-04-01 00:00:00> |
90 | >>> ta.utc # doctest: +REMOTE_DATA | |
101 | >>> start_time.utc # doctest: +REMOTE_DATA | |
91 | 102 | <Time object: scale='utc' format='datetime' value=2016-03-31 23:59:24> |
92 | 103 | |
93 | The ``"hmi.v_45s"`` series has a data segment with the name ``"Dopplergram"``, which contains Dopplergrams for each record in the series, that are stored as `FITS <http://fits.gsfclient.nasa.gov/>`__ files. | |
94 | The location of the FITS files for the record set query in the example above, can be obtained by using the ``seg`` parameter of the :meth:`drms.client.Client.query` method:: | |
95 | ||
96 | >>> s = client.query('hmi.v_45s[2016.04.01_TAI/1d@6h]', seg='Dopplergram') # doctest: +REMOTE_DATA | |
97 | >>> s # doctest: +REMOTE_DATA | |
104 | The ``"hmi.v_45s"`` series has a data segment with the name ``"Dopplergram"``, which contains Dopplergrams for each record in the series, stored as `FITS <https://fits.gsfc.nasa.gov/>`__ files. | 
105 | The location of the FITS files for the record set query in the example above can be obtained by using the ``seg`` parameter of :meth:`drms.client.Client.query`: | 
106 | ||
107 | .. code-block:: python | |
108 | ||
109 | >>> query = client.query('hmi.v_45s[2016.04.01_TAI/1d@6h]', seg='Dopplergram') # doctest: +REMOTE_DATA | |
110 | >>> query # doctest: +REMOTE_DATA | |
98 | 111 | Dopplergram |
99 | 112 | 0 /SUM58/D803708321/S00008/Dopplergram.fits |
100 | 113 | 1 /SUM41/D803708361/S00008/Dopplergram.fits |
101 | 114 | 2 /SUM71/D803720859/S00008/Dopplergram.fits |
102 | 115 | 3 /SUM70/D803730119/S00008/Dopplergram.fits |
103 | 116 | |
104 | Note that the ``key`` and ``seg`` parameters can also be used together in one :meth:`drms.client.Client.query` call, i.e.:: | |
105 | ||
106 | >>> k, s = client.query('hmi.v_45s[2016.04.01_TAI/1d@6h]', | |
117 | Note that the ``key`` and ``seg`` parameters can also be used together in one :meth:`drms.client.Client.query` call: | |
118 | ||
119 | .. code-block:: python | |
120 | ||
121 | >>> keys, segments = client.query('hmi.v_45s[2016.04.01_TAI/1d@6h]', | |
107 | 122 | ... key='T_REC, DATAMEAN, OBS_VR', seg='Dopplergram') # doctest: +REMOTE_DATA |
108 | 123 | |
109 | 124 | The file paths listed above are the storage location on the JSOC server. |
110 | You can access these files, even if you do not have direct NFS access to the filesystem, by prepending the JSOC URL to segment file path:: | |
111 | ||
112 | >>> url = 'http://jsoc.stanford.edu' + s.Dopplergram[0] # doctest: +REMOTE_DATA | |
125 | You can access these files, even if you do not have direct NFS access to the filesystem, by prepending the JSOC URL to the segment file path: | 
126 | ||
127 | .. code-block:: python | |
128 | ||
129 | >>> url = 'http://jsoc.stanford.edu' + segments.Dopplergram[0] # doctest: +REMOTE_DATA | |
113 | 130 | >>> url # doctest: +REMOTE_DATA |
114 | 131 | 'http://jsoc.stanford.edu/SUM58/D803708321/S00008/Dopplergram.fits' |
115 | 132 | |
116 | 133 | >>> from astropy.io import fits |
117 | >>> a = fits.getdata(url) # doctest: +REMOTE_DATA | |
118 | >>> print(a.shape, a.dtype) # doctest: +REMOTE_DATA | |
134 | >>> data = fits.getdata(url) # doctest: +REMOTE_DATA | |
135 | >>> print(data.shape, data.dtype) # doctest: +REMOTE_DATA | |
119 | 136 | (4096, 4096) float32 |
120 | 137 | |
121 | Note that FITS files which are accessed in this way, do not contain anykeyword data in their headers. | |
138 | Note that FITS files which are accessed in this way, do not contain any keyword data in their headers. | |
122 | 139 | This is perfectly fine in many cases, because you can just use :meth:`drms.client.Client.query` to obtain the data of all required keywords. |
123 | 140 | If you need FITS files with headers that contain all the keyword data, you need to submit an export request to JSOC, which is described in the next section. |
124 | 141 | |
125 | Export requests can also be useful, if you want to download more than only one or two files (even without keyword headers), because you can then use the :meth:`drms.client.ExportRequest.download` method, which takes care of creating URLs, downloading the data and (if necessary) generating suitable local filenames. | |
142 | Export requests can also be useful if you want to download more than one or two files (even without keyword headers), because you can then use :meth:`drms.client.ExportRequest.download`, which takes care of creating URLs, downloading the data and (if necessary) generating suitable local filenames. | 
126 | 143 | |
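As a rough idea of what suitable local filenames can look like, here is a sketch that derives a filesystem-friendly name from a DRMS record string. This is only an assumption for illustration, not the naming scheme drms itself uses:

```python
import re

# Derive a filesystem-friendly filename from a DRMS record string.
# NOTE: this is a sketch, NOT the actual scheme used by drms.download().
record = "hmi.V_45s[2016.04.01_00:00:00_TAI][2]{Dopplergram}"
# Collapse brackets, braces and colons into dots, then append a suffix.
safe = re.sub(r"[\[\]{}:]+", ".", record).strip(".") + ".fits"
print(safe)  # hmi.V_45s.2016.04.01_00.00.00_TAI.2.Dopplergram.fits
```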
127 | 144 | Data export requests |
128 | 145 | -------------------- |
129 | ||
130 | 146 | Data export requests can be interactively built and submitted on the `JSOC Export Data <http://jsoc.stanford.edu/ajax/exportdata.html>`__ webpage, where you can also find more information about the different export options that are available. |
131 | Note that a registered email address is required to for submitting export requests. You can register your email address on the `JSOC email registration <http://jsoc.stanford.edu/ajax/register_email.html>`__ webpage. | |
147 | Note that a registered email address is required for submitting export requests. | 
148 | You can register your email address on the `JSOC email registration <http://jsoc.stanford.edu/ajax/register_email.html>`__ webpage. | |
132 | 149 | |
133 | 150 | It is advisable to have a closer look at the export webpage before submitting export requests using the ``drms`` library. |
134 | 151 | It is also possible to submit an export request on the webpage and then use the Python routines to query the request status and download files. |
136 | 153 | .. warning:: |
137 | 154 | Please replace the email below with your own registered email. |
138 | 155 | |
139 | First, we start again with importing the ``drms`` library and creating a `~drms.client.Client` instance:: | |
156 | .. code-block:: python | |
157 | ||
158 | >>> import os | |
159 | >>> email_address = os.environ["JSOC_EMAIL"] | |
160 | ||
161 | First, we start again with importing the ``drms`` library and creating a `~drms.client.Client` instance: | |
162 | ||
163 | .. code-block:: python | |
140 | 164 | |
141 | 165 | >>> import drms |
142 | >>> client = drms.Client(email='nabil.freij@gmail.com', verbose=True) # doctest: +REMOTE_DATA | |
166 | >>> client = drms.Client(email=email_address, verbose=True) # doctest: +REMOTE_DATA | |
143 | 167 | |
144 | 168 | In this case we also provide an email address (which needs to be already registered at JSOC) and turn on status messages by enabling the ``verbose`` flag. |
145 | 169 | |
146 | We now create a download directory for our downloads, in case it does not exist yet:: | |
170 | We now create a directory for our downloads, in case it does not exist yet: | 
171 | ||
172 | .. code-block:: python | |
147 | 173 | |
148 | 174 | >>> import os |
149 | 175 | >>> out_dir = 'downloads' |
156 | 182 | In the following examples we confine ourselves to the methods ``url_quick`` and ``url`` and the protocols ``as-is`` and ``fits``. |
157 | 183 | |
158 | 184 | url_quick / as-is |
159 | ~~~~~~~~~~~~~~~~~ | |
160 | ||
185 | ^^^^^^^^^^^^^^^^^ | |
161 | 186 | The most direct and quickest way of downloading files is the combination ``url_quick`` / ``as-is``. |
162 | 187 | This (in most cases) does not create an actual export request, where you would have to wait for it to finish, but rather compiles a list of files from your data export query, which can then be directly downloaded.
163 | 188 | This also means that this kind of export usually has no ``ExportID`` assigned to it. |
165 | 190 | |
166 | 191 | As an example, we now create an ``url_quick`` / ``as-is`` export request for the same record set that was used in the previous section. |
167 | 192 | For export requests, the segment name is specified using an additional field in the query string, surrounded by curly braces. |
168 | Note that :meth:`drms.client.Client.export` performs an ``url_quick`` / ``as-is`` export request by default, so you do not need to explicitly use ``method='url_quick'`` and ``protocol='as-is'`` in this case:: | |
169 | ||
170 | >>> r = client.export('hmi.v_45s[2016.04.01_TAI/1d@6h]{Dopplergram}') # doctest: +REMOTE_DATA | |
171 | >>> r # doctest: +REMOTE_DATA | |
193 | Note that :meth:`drms.client.Client.export` performs an ``url_quick`` / ``as-is`` export request by default, so you do not need to explicitly use ``method='url_quick'`` and ``protocol='as-is'`` in this case: | |
194 | ||
195 | .. code-block:: python | |
196 | ||
197 | >>> export_request = client.export('hmi.v_45s[2016.04.01_TAI/1d@6h]{Dopplergram}') # doctest: +REMOTE_DATA | |
198 | >>> export_request # doctest: +REMOTE_DATA | |
172 | 199 | <ExportRequest: id=None, status=0> |
173 | 200 | |
174 | >>> r.data.filename # doctest: +REMOTE_DATA | |
201 | >>> export_request.data.filename # doctest: +REMOTE_DATA | |
175 | 202 | 0 /SUM58/D803708321/S00008/Dopplergram.fits |
176 | 203 | 1 /SUM41/D803708361/S00008/Dopplergram.fits |
177 | 204 | 2 /SUM71/D803720859/S00008/Dopplergram.fits |
178 | 205 | 3 /SUM70/D803730119/S00008/Dopplergram.fits |
179 | 206 | Name: filename, dtype: object |
180 | 207 | |
181 | Download URLs can now be generated using the :attr:`drms.client.ExportRequest.urls` attribute:: | |
182 | ||
183 | >>> r.urls.url[0] # doctest: +REMOTE_DATA | |
208 | Download URLs can now be generated using the :attr:`drms.client.ExportRequest.urls` attribute: | |
209 | ||
210 | .. code-block:: python | |
211 | ||
212 | >>> export_request.urls.url[0] # doctest: +REMOTE_DATA | |
184 | 213 | 'http://jsoc.stanford.edu/SUM58/D803708321/S00008/Dopplergram.fits' |
185 | 214 | |
186 | Files can be downloaded using the :meth:`drms.client.ExportRequest.download` method. | |
187 | You can (optionally) select which file(s) you want to download, by using the ``index`` parameter of this method. The following, for example, only downloads the first file of the request:: | |
188 | ||
189 | >>> r.download(out_dir, 0) # doctest: +REMOTE_DATA | |
215 | Files can be downloaded using :meth:`drms.client.ExportRequest.download`. | |
216 | You can (optionally) select which file(s) you want to download by using the ``index`` parameter of this method. | 
217 | The following, for example, only downloads the first file of the request: | |
218 | ||
219 | .. code-block:: python | |
220 | ||
221 | >>> export_request.download(out_dir, 0) # doctest: +REMOTE_DATA | |
190 | 222 | Downloading file 1 of 1... |
191 | 223 | record: hmi.V_45s[2016.04.01_00:00:00_TAI][2]{Dopplergram} |
192 | 224 | filename: Dopplergram.fits |
197 | 229 | If you need keyword data added to the headers, you have to use the ``fits`` export protocol instead, which is described below. |
198 | 230 | |
199 | 231 | url / fits |
200 | ~~~~~~~~~~ | |
201 | ||
232 | ^^^^^^^^^^ | |
202 | 233 | Using the ``fits`` export protocol allows you to request FITS files that include all keyword data in their headers.
203 | 234 | Note that this protocol *does not convert* other file formats into the FITS format. |
204 | 235 | The only purpose of ``protocol='fits'`` is to add keyword data to the headers of segment files that are already stored using the FITS format.
209 | 240 | |
210 | 241 | In the following example, we use the ``hmi.sharp_720s`` series, which contains `Spaceweather HMI Active Region Patches <http://jsoc.stanford.edu/doc/data/hmi/sharp/sharp.htm>`__ (SHARPs), and download some data files from this series. |
211 | 242 | |
212 | First we have a look at the content of the series, by using :meth:`drms.client.Client.info` to get a `~drms.client.SeriesInfo` instance for this particular series:: | |
213 | ||
214 | >>> si = client.info('hmi.sharp_720s') # doctest: +REMOTE_DATA | |
215 | ||
216 | >>> si.note # doctest: +REMOTE_DATA | |
243 | First, we have a look at the content of the series by using :meth:`drms.client.Client.info` to get a `~drms.client.SeriesInfo` instance for this particular series: | 
244 | ||
245 | .. code-block:: python | |
246 | ||
247 | >>> series_info = client.info('hmi.sharp_720s') # doctest: +REMOTE_DATA | |
248 | >>> series_info.note # doctest: +REMOTE_DATA | |
217 | 249 | 'Spaceweather HMI Active Region Patch (SHARP): CCD coordinates' |
218 | ||
219 | >>> si.primekeys # doctest: +REMOTE_DATA | |
250 | >>> series_info.primekeys # doctest: +REMOTE_DATA | |
220 | 251 | ['HARPNUM', 'T_REC'] |
221 | 252 | |
222 | This series contains a total of 31 different data segments:: | |
223 | ||
224 | >>> len(si.segments) # doctest: +REMOTE_DATA | |
253 | This series contains a total of 31 different data segments: | |
254 | ||
255 | .. code-block:: python | |
256 | ||
257 | >>> len(series_info.segments) # doctest: +REMOTE_DATA | |
225 | 258 | 31 |
226 | ||
227 | >>> si.segments.index.values # doctest: +REMOTE_DATA | |
259 | >>> series_info.segments.index.values # doctest: +REMOTE_DATA | |
228 | 260 | array(['magnetogram', 'bitmap', 'Dopplergram', 'continuum', 'inclination', |
229 | 261 | 'azimuth', 'field', 'vlos_mag', 'dop_width', 'eta_0', 'damping', |
230 | 262 | 'src_continuum', 'src_grad', 'alpha_mag', 'chisq', 'conv_flag', |
234 | 266 | 'inclination_alpha_err', 'azimuth_alpha_err', 'disambig', |
235 | 267 | 'conf_disambig'], dtype=object) |
236 | 268 | |
237 | Here, we are only interested in magnetograms and continuum intensity maps :: | |
238 | ||
239 | >>> si.segments.loc[['continuum', 'magnetogram']] # doctest: +REMOTE_DATA | |
269 | Here, we are only interested in magnetograms and continuum intensity maps: | |
270 | ||
271 | .. code-block:: python | |
272 | ||
273 | >>> series_info.segments.loc[['continuum', 'magnetogram']] # doctest: +REMOTE_DATA | |
240 | 274 | type units protocol dims note |
241 | 275 | name |
242 | 276 | continuum int DN/s fits VARxVAR continuum intensity |
244 | 278 | |
245 | 279 | which are stored as FITS files with varying dimensions. |
246 | 280 | |
247 | If we now want to submit an export request for a magnetogram and an intensity map of HARP number 4864, recorded at midnight on November 30, 2014, we can use the following export query string:: | |
248 | ||
249 | >>> ds = 'hmi.sharp_720s[4864][2014.11.30_00:00_TAI]{continuum, magnetogram}' # doctest: +REMOTE_DATA | |
250 | ||
251 | In order to obtain FITS files that include keyword data in their headers, we then need to use ``protocol='fits'`` when submitting the request using :meth:`drms.client.Client.export`:: | |
252 | ||
253 | >>> r = client.export(ds, method='url', protocol='fits') # doctest: +REMOTE_DATA | |
254 | >>> r # doctest: +REMOTE_DATA | |
281 | If we now want to submit an export request for a magnetogram and an intensity map of HARP number 4864, recorded at midnight on November 30, 2014, we can use the following export query string: | |
282 | ||
283 | .. code-block:: python | |
284 | ||
285 | >>> query_string = 'hmi.sharp_720s[4864][2014.11.30_00:00_TAI]{continuum, magnetogram}' # doctest: +REMOTE_DATA | |
286 | ||
287 | In order to obtain FITS files that include keyword data in their headers, we then need to use ``protocol='fits'`` when submitting the request using :meth:`drms.client.Client.export`: | |
288 | ||
289 | .. code-block:: python | |
290 | ||
291 | >>> export_request = client.export(query_string, method='url', protocol='fits') # doctest: +REMOTE_DATA | |
292 | >>> export_request # doctest: +REMOTE_DATA | |
255 | 293 | <ExportRequest: id=JSOC_..., status=2> |
256 | 294 | |
257 | We now need to wait for the server to prepare the requested files:: | |
258 | ||
259 | >>> r.wait() # doctest: +REMOTE_DATA | |
295 | We now need to wait for the server to prepare the requested files: | |
296 | ||
297 | .. code-block:: python | |
298 | ||
299 | >>> export_request.wait() # doctest: +REMOTE_DATA | |
260 | 300 | Export request pending. [id=..., status=2] |
261 | 301 | Waiting for 5 seconds... |
262 | 302 | ... |
263 | 303 | |
264 | >>> r.status # doctest: +REMOTE_DATA | |
304 | >>> export_request.status # doctest: +REMOTE_DATA | |
265 | 305 | 0 |
266 | 306 | |
267 | 307 | Note that calling :meth:`drms.client.ExportRequest.wait` is optional. |
268 | 308 | It gives you some control over the waiting process, but it can usually be omitted, in which case :meth:`~drms.client.ExportRequest.wait` is called implicitly, for example when you try to download the requested files.
269 | 309 | |
270 | 310 | After the export request is finished, a unique request URL is created for you, which points to the location where all your requested files are stored. |
271 | You can use the :attr:`drms.client.ExportRequest.request_url` attribute to obtain this URL:: | |
272 | ||
273 | >>> r.request_url # doctest: +REMOTE_DATA | |
311 | You can use the :attr:`drms.client.ExportRequest.request_url` attribute to obtain this URL: | |
312 | ||
313 | .. code-block:: python | |
314 | ||
315 | >>> export_request.request_url # doctest: +REMOTE_DATA | |
274 | 316 | 'http://jsoc.stanford.edu/.../S00000' |
275 | 317 | |
276 | 318 | Note that this location is only temporary and that all files will be deleted after a couple of days. |
277 | 319 | |
278 | Downloading the data works exactly like in the previous example, by using the :meth:`drms.client.ExportRequest.download` method:: | |
279 | ||
280 | >>> r.download(out_dir) # doctest: +REMOTE_DATA | |
320 | Downloading the data works exactly like in the previous example, by using :meth:`drms.client.ExportRequest.download`: | |
321 | ||
322 | .. code-block:: python | |
323 | ||
324 | >>> export_request.download(out_dir) # doctest: +REMOTE_DATA | |
281 | 325 | Downloading file 1 of 2... |
282 | 326 | ... |
283 | 327 | Downloading file 2 of 2... |
284 | 328 | ... |
285 | 329 | |
286 | .. note:: | |
287 | If you want to access an existing export request that you have submitted earlier, or if you submitted an export request using the `JSOC Export Data <http://jsoc.stanford.edu/ajax/exportdata.html>`__ webpage and want to access it from Python, you can use the :meth:`drms.client.Client.export_from_id` method with the corresponding ``ExportID`` to create an `drms.client.ExportRequest` instance for this particular request. | |
330 | If you want to access an existing export request that you have submitted earlier, or if you submitted an export request using the `JSOC Export Data <http://jsoc.stanford.edu/ajax/exportdata.html>`__ webpage, | 
331 | you can use :meth:`drms.client.Client.export_from_id` with the corresponding ``ExportID`` to create a `drms.client.ExportRequest` instance for this particular request.
16 | 16 | # Must be done before any drms imports |
17 | 17 | __minimum_python_version__ = "3.7" |
18 | 18 | |
19 | ||
19 | 20 | class UnsupportedPythonError(Exception): |
20 | """Running on an unsupported version of Python.""" | |
21 | """ | |
22 | Running on an unsupported version of Python. | |
23 | """ | |
21 | 24 | |
22 | 25 | |
23 | if sys.version_info < tuple(int(val) for val in __minimum_python_version__.split('.')): | |
26 | if sys.version_info < tuple(int(val) for val in __minimum_python_version__.split(".")): | |
24 | 27 | # This has to be .format to keep backwards compatibly. |
25 | raise UnsupportedPythonError( | |
26 | "sunpy does not support Python < {}".format(__minimum_python_version__)) | |
28 | raise UnsupportedPythonError("sunpy does not support Python < {}".format(__minimum_python_version__)) | |
27 | 29 | |
28 | 30 | |
29 | 31 | def _get_bibtex(): |
30 | 32 | import textwrap |
31 | 33 | |
32 | 34 | # Set the bibtex entry to the article referenced in CITATION.rst |
33 | citation_file = os.path.join(os.path.dirname(__file__), 'CITATION.rst') | |
35 | citation_file = os.path.join(os.path.dirname(__file__), "CITATION.rst") | |
34 | 36 | |
35 | 37 | # Explicitly specify UTF-8 encoding in case the system's default encoding is problematic |
36 | with open(citation_file, encoding='utf-8') as citation: | |
38 | with open(citation_file, encoding="utf-8") as citation: | |
37 | 39 | # Extract the first bibtex block: |
38 | ref = citation.read().partition('.. code:: bibtex\n\n')[2] | |
39 | lines = ref.split('\n') | |
40 | ref = citation.read().partition(".. code:: bibtex\n\n")[2] | |
41 | lines = ref.split("\n") | |
40 | 42 | # Only read the lines which are indented |
41 | lines = lines[: [line.startswith(' ') for line in lines].index(False)] | |
42 | ref = textwrap.dedent('\n'.join(lines)) | |
43 | lines = lines[: [line.startswith(" ") for line in lines].index(False)] | |
44 | ref = textwrap.dedent("\n".join(lines)) | |
43 | 45 | return ref |
46 | ||
44 | 47 | |
45 | 48 | __citation__ = __bibtex__ = _get_bibtex() |
46 | 49 | |
47 | from .version import version as __version__ | |
48 | 50 | # DRMS imports to collapse the namespace |
49 | 51 | from .client import * |
50 | 52 | from .config import * |
51 | 53 | from .exceptions import * |
52 | 54 | from .json import * |
53 | 55 | from .utils import * |
56 | from .version import version as __version__ |
0 | 0 | # coding: utf-8 |
1 | 1 | # file generated by setuptools_scm |
2 | 2 | # don't change, don't track in version control |
3 | version = '0.6.2' | |
4 | version_tuple = (0, 6, 2) | |
3 | __version__ = version = '0.6.3' | |
4 | __version_tuple__ = version_tuple = (0, 6, 3) |
12 | 12 | from .json import HttpJsonClient |
13 | 13 | from .utils import _extract_series_name, _pd_to_numeric_coerce, _split_arg |
14 | 14 | |
15 | __all__ = ['SeriesInfo', 'ExportRequest', 'Client'] | |
15 | __all__ = ["SeriesInfo", "ExportRequest", "Client"] | |
16 | 16 | |
17 | 17 | |
18 | 18 | class SeriesInfo: |
50 | 50 | def __init__(self, d, name=None): |
51 | 51 | self._d = d |
52 | 52 | self.name = name |
53 | self.retention = self._d.get('retention') | |
54 | self.unitsize = self._d.get('unitsize') | |
55 | self.archive = self._d.get('archive') | |
56 | self.tapegroup = self._d.get('tapegroup') | |
57 | self.note = self._d.get('note') | |
58 | self.primekeys = self._d.get('primekeys') | |
59 | self.dbindex = self._d.get('dbindex') | |
60 | self.keywords = self._parse_keywords(d['keywords']) | |
61 | self.links = self._parse_links(d['links']) | |
62 | self.segments = self._parse_segments(d['segments']) | |
53 | self.retention = self._d.get("retention") | |
54 | self.unitsize = self._d.get("unitsize") | |
55 | self.archive = self._d.get("archive") | |
56 | self.tapegroup = self._d.get("tapegroup") | |
57 | self.note = self._d.get("note") | |
58 | self.primekeys = self._d.get("primekeys") | |
59 | self.dbindex = self._d.get("dbindex") | |
60 | self.keywords = self._parse_keywords(d["keywords"]) | |
61 | self.links = self._parse_links(d["links"]) | |
62 | self.segments = self._parse_segments(d["segments"]) | |
63 | 63 | |
64 | 64 | @staticmethod |
65 | 65 | def _parse_keywords(d): |
66 | keys = ['name', 'type', 'recscope', 'defval', 'units', 'note', 'linkinfo'] | |
66 | keys = ["name", "type", "recscope", "defval", "units", "note", "linkinfo"] | |
67 | 67 | res = [] |
68 | 68 | for di in d: |
69 | 69 | resi = [] |
73 | 73 | if not res: |
74 | 74 | res = None # workaround for older pandas versions |
75 | 75 | res = pd.DataFrame(res, columns=keys) |
76 | res.index = res.pop('name') | |
77 | res['is_time'] = res.type == 'time' | |
78 | res['is_integer'] = res.type == 'short' | |
79 | res['is_integer'] |= res.type == 'int' | |
80 | res['is_integer'] |= res.type == 'longlong' | |
81 | res['is_real'] = res.type == 'float' | |
82 | res['is_real'] |= res.type == 'double' | |
83 | res['is_numeric'] = res.is_integer | res.is_real | |
76 | res.index = res.pop("name") | |
77 | res["is_time"] = res.type == "time" | |
78 | res["is_integer"] = res.type == "short" | |
79 | res["is_integer"] |= res.type == "int" | |
80 | res["is_integer"] |= res.type == "longlong" | |
81 | res["is_real"] = res.type == "float" | |
82 | res["is_real"] |= res.type == "double" | |
83 | res["is_numeric"] = res.is_integer | res.is_real | |
84 | 84 | return res |
85 | 85 | |
86 | 86 | @staticmethod |
87 | 87 | def _parse_links(d): |
88 | keys = ['name', 'target', 'kind', 'note'] | |
88 | keys = ["name", "target", "kind", "note"] | |
89 | 89 | res = [] |
90 | 90 | for di in d: |
91 | 91 | resi = [] |
95 | 95 | if not res: |
96 | 96 | res = None # workaround for older pandas versions |
97 | 97 | res = pd.DataFrame(res, columns=keys) |
98 | res.index = res.pop('name') | |
98 | res.index = res.pop("name") | |
99 | 99 | return res |
100 | 100 | |
101 | 101 | @staticmethod |
102 | 102 | def _parse_segments(d): |
103 | keys = ['name', 'type', 'units', 'protocol', 'dims', 'note'] | |
103 | keys = ["name", "type", "units", "protocol", "dims", "note"] | |
104 | 104 | res = [] |
105 | 105 | for di in d: |
106 | 106 | resi = [] |
110 | 110 | if not res: |
111 | 111 | res = None # workaround for older pandas versions |
112 | 112 | res = pd.DataFrame(res, columns=keys) |
113 | res.index = res.pop('name') | |
113 | res.index = res.pop("name") | |
114 | 114 | return res |
115 | 115 | |
116 | 116 | def __repr__(self): |
117 | 117 | if self.name is None: |
118 | return '<SeriesInfo>' | |
118 | return "<SeriesInfo>" | |
119 | 119 | else: |
120 | return f'<SeriesInfo: {self.name}>' | |
120 | return f"<SeriesInfo: {self.name}>" | |
121 | 121 | |
122 | 122 | |
123 | 123 | class ExportRequest: |
146 | 146 | return cls(d, client) |
147 | 147 | |
148 | 148 | def __repr__(self): |
149 | idstr = str(None) if self._requestid is None else (f'{self._requestid}') | |
150 | return f'<ExportRequest: id={idstr}, status={int(self._status)}>' | |
149 | idstr = str(None) if self._requestid is None else (f"{self._requestid}") | |
150 | return f"<ExportRequest: id={idstr}, status={int(self._status)}>" | |
151 | 151 | |
152 | 152 | @staticmethod |
153 | 153 | def _parse_data(d): |
154 | keys = ['record', 'filename'] | |
154 | keys = ["record", "filename"] | |
155 | 155 | res = None if d is None else [(di.get(keys[0]), di.get(keys[1])) for di in d] |
156 | 156 | if not res: |
157 | 157 | res = None # workaround for older pandas versions |
163 | 163 | d = self._client._json.exp_status(self._requestid) |
164 | 164 | self._d = d |
165 | 165 | self._d_time = time.time() |
166 | self._status = int(self._d.get('status', self._status)) | |
167 | self._requestid = self._d.get('requestid', self._requestid) | |
166 | self._status = int(self._d.get("status", self._status)) | |
167 | self._requestid = self._d.get("requestid", self._requestid) | |
168 | 168 | if self._requestid is None: |
169 | 169 | # Apparently 'reqid' is used instead of 'requestid' for certain |
170 | 170 | # protocols like 'mpg' |
171 | self._requestid = self._d.get('reqid') | |
172 | if self._requestid == '': | |
171 | self._requestid = self._d.get("reqid") | |
172 | if self._requestid == "": | |
173 | 173 | # Use None if the requestid is empty (url_quick + as-is) |
174 | 174 | self._requestid = None |
175 | 175 | |
177 | 177 | if self._status in self._status_codes_ok_or_pending: |
178 | 178 | if self._status != self._status_code_notfound or notfound_ok: |
179 | 179 | return # request has not failed (yet) |
180 | msg = self._d.get('error') | |
180 | msg = self._d.get("error") | |
181 | 181 | if msg is None: |
182 | msg = 'DRMS export request failed.' | |
183 | msg += f' [status={int(self._status)}]' | |
182 | msg = "DRMS export request failed." | |
183 | msg += f" [status={int(self._status)}]" | |
184 | 184 | raise DrmsExportError(msg) |
185 | 185 | |
186 | 186 | def _generate_download_urls(self): |
191 | 191 | data_dir = self.dir |
192 | 192 | |
193 | 193 | # Clear first record name for movies, as it is not a DRMS record. |
194 | if self.protocol in ['mpg', 'mp4']: | |
195 | if res.record[0].startswith('movie'): | |
194 | if self.protocol in ["mpg", "mp4"]: | |
195 | if res.record[0].startswith("movie"): | |
196 | 196 | res.record[0] = None |
197 | 197 | |
198 | 198 | # tar exports provide only a single TAR file with full path |
199 | 199 | if self.tarfile is not None: |
200 | 200 | data_dir = None |
201 | res = pd.DataFrame([(None, self.tarfile)], columns=['record', 'filename']) | |
201 | res = pd.DataFrame([(None, self.tarfile)], columns=["record", "filename"]) | |
202 | 202 | |
203 | 203 | # If data_dir is None, the filename column should contain the full |
204 | 204 | # path of the file and we need to extract the basename part. If |
205 | 205 | # data_dir contains a directory, the filename column should contain |
206 | 206 | # only the basename and we need to join it with the directory. |
207 | 207 | if data_dir is None: |
208 | res.rename(columns={'filename': 'fpath'}, inplace=True) | |
209 | split_fpath = res.fpath.str.split('/') | |
210 | res['filename'] = [sfp[-1] for sfp in split_fpath] | |
208 | res.rename(columns={"filename": "fpath"}, inplace=True) | |
209 | split_fpath = res.fpath.str.split("/") | |
210 | res["filename"] = [sfp[-1] for sfp in split_fpath] | |
211 | 211 | else: |
212 | res['fpath'] = [f'{data_dir}/{filename}' for filename in res.filename] | 
213 | ||
214 | if self.method.startswith('url'): | |
212 | res["fpath"] = [f"{data_dir}/{filename}" for filename in res.filename] | 
213 | ||
214 | if self.method.startswith("url"): | |
215 | 215 | baseurl = self._client._server.http_download_baseurl |
216 | elif self.method.startswith('ftp'): | |
216 | elif self.method.startswith("ftp"): | |
217 | 217 | baseurl = self._client._server.ftp_download_baseurl |
218 | 218 | else: |
219 | raise RuntimeError(f'Download is not supported for export method {self.method}') | |
219 | raise RuntimeError(f"Download is not supported for export method {self.method}") | |
220 | 220 | |
221 | 221 | # Generate download URLs. |
222 | 222 | urls = [] |
223 | 223 | for fp in res.fpath: |
224 | while fp.startswith('/'): | |
224 | while fp.startswith("/"): | |
225 | 225 | fp = fp[1:] |
226 | 226 | urls.append(urljoin(baseurl, fp)) |
227 | res['url'] = urls | |
227 | res["url"] = urls | |
228 | 228 | |
229 | 229 | # Remove rows with missing files. |
230 | res = res[res.filename != 'NoDataFile'] | |
231 | del res['fpath'] | |
230 | res = res[res.filename != "NoDataFile"] | |
231 | del res["fpath"] | |
232 | 232 | return res |
233 | 233 | |
234 | 234 | @staticmethod |
239 | 239 | i = 1 |
240 | 240 | new_fname = fname |
241 | 241 | while os.path.exists(new_fname): |
242 | new_fname = f'{fname}.{int(i)}' | |
242 | new_fname = f"{fname}.{int(i)}" | |
243 | 243 | i += 1 |
244 | 244 | return new_fname |
245 | 245 | |
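The `_next_available_filename` helper above probes `.1`, `.2`, … suffixes until it finds a free name, so repeated downloads never overwrite each other. A self-contained sketch (the file names are hypothetical):

```python
import os
import tempfile

def next_available_filename(fname):
    # Mirrors the static helper above: append .1, .2, ... until free.
    i = 1
    new_fname = fname
    while os.path.exists(new_fname):
        new_fname = f"{fname}.{i}"
        i += 1
    return new_fname

with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "image.fits")
    open(target, "w").close()         # first download already on disk
    open(target + ".1", "w").close()  # so is the first fallback name
    result = next_available_filename(target)
print(result)  # ...image.fits.2
```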
262 | 262 | """ |
263 | 263 | (string) Export method. |
264 | 264 | """ |
265 | return self._d.get('method') | |
265 | return self._d.get("method") | |
266 | 266 | |
267 | 267 | @property |
268 | 268 | def protocol(self): |
269 | 269 | """ |
270 | 270 | (string) Export protocol. |
271 | 271 | """ |
272 | return self._d.get('protocol') | |
272 | return self._d.get("protocol") | |
273 | 273 | |
274 | 274 | @property |
275 | 275 | def dir(self): |
280 | 280 | self._raise_on_error() |
281 | 281 | else: |
282 | 282 | self.wait() |
283 | data_dir = self._d.get('dir') | |
283 | data_dir = self._d.get("dir") | |
284 | 284 | return data_dir if data_dir else None |
285 | 285 | |
286 | 286 | @property |
295 | 295 | self._raise_on_error() |
296 | 296 | else: |
297 | 297 | self.wait() |
298 | return self._parse_data(self._d.get('data')) | |
298 | return self._parse_data(self._d.get("data")) | |
299 | 299 | |
300 | 300 | @property |
301 | 301 | def tarfile(self): |
306 | 306 | self._raise_on_error() |
307 | 307 | else: |
308 | 308 | self.wait() |
309 | data_tarfile = self._d.get('tarfile') | |
309 | data_tarfile = self._d.get("tarfile") | |
310 | 310 | return data_tarfile if data_tarfile else None |
311 | 311 | |
312 | 312 | @property |
318 | 318 | self._raise_on_error() |
319 | 319 | else: |
320 | 320 | self.wait() |
321 | data_keywords = self._d.get('keywords') | |
321 | data_keywords = self._d.get("keywords") | |
322 | 322 | return data_keywords if data_keywords else None |
323 | 323 | |
324 | 324 | @property |
330 | 330 | http_baseurl = self._client._server.http_download_baseurl |
331 | 331 | if data_dir is None or http_baseurl is None: |
332 | 332 | return None |
333 | if data_dir.startswith('/'): | |
333 | if data_dir.startswith("/"): | |
334 | 334 | data_dir = data_dir[1:] |
335 | 335 | return urljoin(http_baseurl, data_dir) |
336 | 336 | |
462 | 462 | |
463 | 463 | while True: |
464 | 464 | if verbose: |
465 | idstr = str(None) if self._requestid is None else (f'{self._requestid}') | |
466 | print(f'Export request pending. [id={idstr}, status={self._status}]') | |
465 | idstr = str(None) if self._requestid is None else (f"{self._requestid}") | |
466 | print(f"Export request pending. [id={idstr}, status={self._status}]") | |
467 | 467 | |
468 | 468 | # Use the user-provided sleep value or the server's wait value. |
469 | 469 | # In case neither is available, wait for 5 seconds. |
470 | wait_secs = self._d.get('wait', 5) if sleep is None else sleep | |
470 | wait_secs = self._d.get("wait", 5) if sleep is None else sleep | |
471 | 471 | |
472 | 472 | # Consider the time that passed since the last status update. |
473 | 473 | wait_secs -= time.time() - self._d_time |
480 | 480 | return False |
481 | 481 | |
482 | 482 | if verbose: |
483 | print(f'Waiting for {int(round(wait_secs))} seconds...') | |
483 | print(f"Waiting for {int(round(wait_secs))} seconds...") | |
484 | 484 | time.sleep(wait_secs) |
485 | 485 | |
486 | 486 | if self.has_finished(): |
491 | 491 | if retries_notfound <= 0: |
492 | 492 | self._raise_on_error(notfound_ok=False) |
493 | 493 | if verbose: |
494 | print(f'Request not found on server, {retries_notfound} retries left.') | |
494 | print(f"Request not found on server, {retries_notfound} retries left.") | |
495 | 495 | retries_notfound -= 1 |
496 | 496 | |
497 | 497 | def download(self, directory, index=None, fname_from_rec=None, verbose=None): |
544 | 544 | """ |
545 | 545 | out_dir = os.path.abspath(directory) |
546 | 546 | if not os.path.isdir(out_dir): |
547 | raise OSError(f'Download directory {out_dir} does not exist') | |
547 | raise OSError(f"Download directory {out_dir} does not exist") | |
548 | 548 | |
549 | 549 | if np.isscalar(index): |
550 | 550 | index = [int(index)] |
559 | 559 | |
560 | 560 | if fname_from_rec is None: |
561 | 561 | # For 'url_quick', generate local filenames from record strings. |
562 | if self.method == 'url_quick': | |
562 | if self.method == "url_quick": | |
563 | 563 | fname_from_rec = True |
564 | 564 | |
565 | 565 | # self.urls contains the same records as self.data, except for the tar |
581 | 581 | |
582 | 582 | fpath = os.path.join(out_dir, filename) |
583 | 583 | fpath_new = self._next_available_filename(fpath) |
584 | fpath_tmp = self._next_available_filename(f'{fpath_new}.part') | |
584 | fpath_tmp = self._next_available_filename(f"{fpath_new}.part") | |
585 | 585 | if verbose: |
586 | print(f'Downloading file {int(i + 1)} of {int(ndata)}...') | |
587 | print(f' record: {di.record}') | |
588 | print(f' filename: {di.filename}') | |
586 | print(f"Downloading file {int(i + 1)} of {int(ndata)}...") | |
587 | print(f" record: {di.record}") | |
588 | print(f" filename: {di.filename}") | |
589 | 589 | try: |
590 | 590 | urlretrieve(di.url, fpath_tmp) |
591 | 591 | except (HTTPError, URLError): |
592 | 592 | fpath_new = None |
593 | 593 | if verbose: |
594 | print(' -> Error: Could not download file') | |
594 | print(" -> Error: Could not download file") | |
595 | 595 | else: |
596 | 596 | fpath_new = self._next_available_filename(fpath) |
597 | 597 | os.rename(fpath_tmp, fpath_new) |
598 | 598 | if verbose: |
599 | print(f' -> {os.path.relpath(fpath_new)}') | |
599 | print(f" -> {os.path.relpath(fpath_new)}") | |
600 | 600 | downloads.append(fpath_new) |
601 | 601 | |
602 | res = data[['record', 'url']].copy() | |
603 | res['download'] = downloads | |
602 | res = data[["record", "url"]].copy() | |
603 | res["download"] = downloads | |
604 | 604 | return res |
605 | 605 | |
606 | 606 | |
610 | 610 | |
611 | 611 | Parameters |
612 | 612 | ---------- |
613 | server : str or ServerConfig | |
613 | server : str or drms.config.ServerConfig | |
614 | 614 | Registered server ID or ServerConfig instance. |
615 | 615 | Defaults to JSOC. |
616 | 616 | email : str or None |
621 | 621 | Print debug output (disabled by default). |
622 | 622 | """ |
623 | 623 | |
624 | def __init__(self, server='jsoc', email=None, verbose=False, debug=False): | |
624 | def __init__(self, server="jsoc", email=None, verbose=False, debug=False): | |
625 | 625 | self._json = HttpJsonClient(server=server, debug=debug) |
626 | 626 | self._info_cache = {} |
627 | 627 | self.verbose = verbose # use property for conversion to bool
628 | 628 | self.email = email # use property for email validation |
629 | 629 | |
630 | 630 | def __repr__(self): |
631 | return f'<Client: {self._server.name}>' | |
631 | return f"<Client: {self._server.name}>" | |
632 | 632 | |
633 | 633 | def _convert_numeric_keywords(self, ds, kdf, skip_conversion=None): |
634 | 634 | si = self.info(ds) |
635 | 635 | int_keys = list(si.keywords[si.keywords.is_integer].index) |
636 | 636 | num_keys = list(si.keywords[si.keywords.is_numeric].index) |
637 | num_keys += ['*recnum*', '*sunum*', '*size*'] | |
637 | num_keys += ["*recnum*", "*sunum*", "*size*"] | |
638 | 638 | if skip_conversion is None: |
639 | 639 | skip_conversion = [] |
640 | 640 | elif isinstance(skip_conversion, str): |
645 | 645 | # pandas apparently does not support hexadecimal strings, so |
646 | 646 | # we need a special treatment for integer strings that start |
647 | 647 | # with '0x', like QUALITY. The following to_numeric call is |
648 | # still neccessary as the results are still Python objects. | |
648 | # still necessary as the results are still Python objects. | |
649 | 649 | if k in int_keys and kdf[k].dtype is np.dtype(object): |
650 | idx = kdf[k].str.startswith('0x') | |
650 | idx = kdf[k].str.startswith("0x") | |
651 | 651 | if idx.any(): |
652 | kdf.loc[idx, k] = kdf.loc[idx, k].map(lambda x: int(x, base=16)) | |
652 | k_idx = kdf.columns.get_loc(k) | |
653 | kdf[kdf.columns[k_idx]] = kdf[kdf.columns[k_idx]].apply(int, base=16) | |
653 | 654 | if k in num_keys: |
654 | 655 | kdf[k] = _pd_to_numeric_coerce(kdf[k]) |
655 | 656 | |
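The hex-keyword handling above can be sketched in isolation. `QUALITY` here is a hypothetical keyword column whose values serialize as `'0x...'` strings, which `pd.to_numeric` cannot parse directly:

```python
import pandas as pd

# Hypothetical keyword table with hex-string values.
kdf = pd.DataFrame({"QUALITY": ["0x0000", "0x4000"]}, dtype=object)

if kdf["QUALITY"].str.startswith("0x").any():
    # Positional column lookup, as in the FutureWarning-safe code above.
    k_idx = kdf.columns.get_loc("QUALITY")
    kdf[kdf.columns[k_idx]] = kdf[kdf.columns[k_idx]].apply(int, base=16)

# The to_numeric call is still needed to normalize the object dtype.
kdf["QUALITY"] = pd.to_numeric(kdf["QUALITY"])
print(kdf["QUALITY"].tolist())  # [0, 16384]
```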
659 | 660 | Raises a DrmsQueryError, using the json error message from d. |
660 | 661 | """ |
661 | 662 | if status is None: |
662 | status = d.get('status') | |
663 | msg = d.get('error') | |
663 | status = d.get("status") | |
664 | msg = d.get("error") | |
664 | 665 | if msg is None: |
665 | msg = 'DRMS Query failed.' | |
666 | msg += f' [status={status}]' | |
666 | msg = "DRMS Query failed." | |
667 | msg += f" [status={status}]" | |
667 | 668 | raise DrmsQueryError(msg) |
668 | 669 | |
669 | 670 | def _generate_filenamefmt(self, sname): |
679 | 680 | pkfmt_list = [] |
680 | 681 | for k in si.primekeys: |
681 | 682 | if si.keywords.loc[k].is_time: |
682 | pkfmt_list.append(f'{{{k}:A}}') | |
683 | pkfmt_list.append(f"{{{k}:A}}") | |
683 | 684 | else: |
684 | pkfmt_list.append(f'{{{k}}}') | |
685 | pkfmt_list.append(f"{{{k}}}") | |
685 | 686 | |
686 | 687 | if pkfmt_list: |
687 | return '{}.{}.{{segment}}'.format(si.name, '.'.join(pkfmt_list)) | |
688 | return "{}.{}.{{segment}}".format(si.name, ".".join(pkfmt_list)) | |
688 | 689 | else: |
689 | return str(si.name) + '.{recnum:%lld}.{segment}' | |
690 | return str(si.name) + ".{recnum:%lld}.{segment}" | |
690 | 691 | |
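The format-string construction above can be traced with made-up inputs: for a hypothetical series with a time primekey `T_REC` (formatted with `:A`) and an integer primekey `CAMERA`, it yields:

```python
# Sketch of the default filenamefmt built above (series and primekeys
# are example values, not taken from a real SeriesInfo).
name = "hmi.v_45s"
pkfmt_list = ["{T_REC:A}", "{CAMERA}"]
filenamefmt = "{}.{}.{{segment}}".format(name, ".".join(pkfmt_list))
print(filenamefmt)  # hmi.v_45s.{T_REC:A}.{CAMERA}.{segment}
```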
691 | 692 | # Some regular expressions used to parse export request queries. |
692 | _re_export_recset = re.compile(r'^\s*([\w\.]+)\s*(\[.*\])?\s*(?:\{([\w\s\.,]*)\})?\s*$') | |
693 | _re_export_recset_pkeys = re.compile(r'\[([^\[^\]]*)\]') | |
694 | _re_export_recset_slist = re.compile(r'[\s,]+') | |
693 | _re_export_recset = re.compile(r"^\s*([\w\.]+)\s*(\[.*\])?\s*(?:\{([\w\s\.,]*)\})?\s*$") | |
694 | _re_export_recset_pkeys = re.compile(r"\[([^\[^\]]*)\]") | |
695 | _re_export_recset_slist = re.compile(r"[\s,]+") | |
695 | 696 | |
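A quick illustration of `_re_export_recset` on a typical record-set query (the query string itself is just an example): group 1 is the series name, group 2 the bracketed primekey filters, group 3 the segment list.

```python
import re

# Same pattern as the class attribute above.
_re_export_recset = re.compile(r"^\s*([\w\.]+)\s*(\[.*\])?\s*(?:\{([\w\s\.,]*)\})?\s*$")
m = _re_export_recset.match("hmi.v_45s[2016.04.01_TAI/1d@6h]{Dopplergram}")
print(m.groups())  # ('hmi.v_45s', '[2016.04.01_TAI/1d@6h]', 'Dopplergram')
```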
696 | 697 | @staticmethod |
697 | 698 | def _parse_export_recset(rs): |
735 | 736 | # Cleanup time strings. |
736 | 737 | if si.keywords.loc[si.primekeys[i]].is_time: |
737 | 738 | v = pkeys[i] |
738 | v = v.replace('.', "").replace(':', "").replace('-', "") | |
739 | v = v.replace(".", "").replace(":", "").replace("-", "") | |
739 | 740 | pkeys[i] = v |
740 | 741 | |
741 | 742 | # Generate filename. |
742 | 743 | fname = si.name |
743 | 744 | if pkeys is not None: |
744 | 745 | pkeys = [k for k in pkeys if k.strip()] |
745 | pkeys_str = '.'.join(pkeys) | |
746 | pkeys_str = ".".join(pkeys) | |
746 | 747 | if pkeys_str: |
747 | fname += f'.{pkeys_str}' | |
748 | fname += f".{pkeys_str}" | |
748 | 749 | if segs is not None: |
749 | 750 | segs = [s for s in segs if s.strip()] |
750 | segs_str = '.'.join(segs) | |
751 | segs_str = ".".join(segs) | |
751 | 752 | if segs_str: |
752 | fname += f'.{segs_str}' | |
753 | fname += f".{segs_str}" | |
753 | 754 | |
754 | 755 | if old_fname is not None: |
755 | 756 | # Try to use the file extension of the original filename. |
756 | known_fname_extensions = ['.fits', '.txt', '.jpg', '.mpg', '.mp4', '.tar'] | |
757 | known_fname_extensions = [".fits", ".txt", ".jpg", ".mpg", ".mp4", ".tar"] | |
757 | 758 | for ext in known_fname_extensions: |
758 | 759 | if old_fname.endswith(ext): |
759 | 760 | return fname + ext |
761 | 762 | |
762 | 763 | # Export color table names, from (internal) series "jsoc.Color_Tables" |
763 | 764 | _export_color_table_names = [ |
764 | 'HMI_mag.lut', | |
765 | 'aia_131.lut', | |
766 | 'aia_1600.lut', | |
767 | 'aia_1700.lut', | |
768 | 'aia_171.lut', | |
769 | 'aia_193.lut', | |
770 | 'aia_211.lut', | |
771 | 'aia_304.lut', | |
772 | 'aia_335.lut', | |
773 | 'aia_4500.lut', | |
774 | 'aia_94.lut', | |
775 | 'aia_mixed', | |
776 | 'bb.sao', | |
777 | 'grey.sao', | |
778 | 'heat.sao', | |
765 | "HMI_mag.lut", | |
766 | "aia_131.lut", | |
767 | "aia_1600.lut", | |
768 | "aia_1700.lut", | |
769 | "aia_171.lut", | |
770 | "aia_193.lut", | |
771 | "aia_211.lut", | |
772 | "aia_304.lut", | |
773 | "aia_335.lut", | |
774 | "aia_4500.lut", | |
775 | "aia_94.lut", | |
776 | "aia_mixed", | |
777 | "bb.sao", | |
778 | "grey.sao", | |
779 | "heat.sao", | |
779 | 780 | ] |
780 | 781 | |
781 | 782 | # Export scaling types, from (internal) series "jsoc.Color_Tables" |
782 | _export_scaling_names = ['LOG', 'MINMAX', 'MINMAXGIVEN', 'SQRT', 'mag'] | |
783 | _export_scaling_names = ["LOG", "MINMAX", "MINMAXGIVEN", "SQRT", "mag"] | |
783 | 784 | |
784 | 785 | @staticmethod |
785 | 786 | def _validate_export_protocol_args(protocol_args): |
789 | 790 | if protocol_args is None: |
790 | 791 | return |
791 | 792 | |
792 | ct_key = 'ct' | |
793 | ct_key = "ct" | |
793 | 794 | ct = protocol_args.get(ct_key) |
794 | 795 | if ct is None: |
795 | ct_key = 'CT' | |
796 | ct_key = "CT" | |
796 | 797 | ct = protocol_args.get(ct_key) |
797 | 798 | if ct is not None: |
798 | 799 | ll = [s.lower() for s in Client._export_color_table_names] |
799 | 800 | try: |
800 | 801 | i = ll.index(ct.lower()) |
801 | 802 | except ValueError: |
802 | msg = f'{ct} is not a valid color table, ' | |
803 | msg += 'available color tables: {}'.format( | |
804 | ', '.join([str(s) for s in Client._export_color_table_names]) | |
803 | msg = f"{ct} is not a valid color table, " | |
804 | msg += "available color tables: {}".format( | |
805 | ", ".join([str(s) for s in Client._export_color_table_names]) | |
805 | 806 | ) |
806 | 807 | raise ValueError(msg) |
807 | 808 | protocol_args[ct_key] = Client._export_color_table_names[i] |
808 | 809 | |
809 | scaling = protocol_args.get('scaling') | |
810 | scaling = protocol_args.get("scaling") | |
810 | 811 | if scaling is not None: |
811 | 812 | ll = [s.lower() for s in Client._export_scaling_names] |
812 | 813 | try: |
813 | 814 | i = ll.index(scaling.lower()) |
814 | 815 | except ValueError: |
815 | msg = f'{scaling} is not a valid scaling type,' | |
816 | msg += 'available scaling types: {}'.format( | |
817 | ', '.join([str(s) for s in Client._export_scaling_names]) | |
818 | ) | |
816 | msg = f"{scaling} is not a valid scaling type, " | 
817 | msg += "available scaling types: {}".format(", ".join([str(s) for s in Client._export_scaling_names])) | |
819 | 818 | raise ValueError(msg) |
820 | protocol_args['scaling'] = Client._export_scaling_names[i] | |
819 | protocol_args["scaling"] = Client._export_scaling_names[i] | |
821 | 820 | |
822 | 821 | @property |
823 | 822 | def _server(self): |
847 | 846 | @email.setter |
848 | 847 | def email(self, value): |
849 | 848 | if value is not None and not self.check_email(value): |
850 | raise ValueError('Email address is invalid or not registered') | |
849 | raise ValueError("Email address is invalid or not registered") | |
851 | 850 | self._email = value |
852 | 851 | |
853 | 852 | @property |
884 | 883 | primekeys and a description of the selected series (see |
885 | 884 | parameter ``full``). |
886 | 885 | """ |
887 | if not self._server.check_supported('series'): | |
888 | raise DrmsOperationNotSupported('Server does not support series list access') | |
886 | if not self._server.check_supported("series"): | |
887 | raise DrmsOperationNotSupported("Server does not support series list access") | |
889 | 888 | if self._server.url_show_series_wrapper is None: |
890 | 889 | # No wrapper CGI available, use the regular version. |
891 | 890 | d = self._json.show_series(regex) |
892 | status = d.get('status') | |
891 | status = d.get("status") | |
893 | 892 | if status != 0: |
894 | 893 | self._raise_query_error(d) |
895 | 894 | if full: |
896 | keys = ('name', 'primekeys', 'note') | |
897 | if not d['names']: | |
895 | keys = ("name", "primekeys", "note") | |
896 | if not d["names"]: | |
898 | 897 | return pd.DataFrame(columns=keys) |
899 | recs = [(it['name'], _split_arg(it['primekeys']), it['note']) for it in d['names']] | |
898 | recs = [(it["name"], _split_arg(it["primekeys"]), it["note"]) for it in d["names"]] | |
900 | 899 | return pd.DataFrame(recs, columns=keys) |
901 | 900 | else: |
902 | if not d['names']: | |
901 | if not d["names"]: | |
903 | 902 | return [] |
904 | return [it['name'] for it in d['names']] | |
903 | return [it["name"] for it in d["names"]] | |
905 | 904 | else: |
906 | 905 | # Use show_series_wrapper instead of the regular version. |
907 | 906 | d = self._json.show_series_wrapper(regex, info=full) |
908 | 907 | if full: |
909 | keys = ('name', 'note') | |
910 | if not d['seriesList']: | |
908 | keys = ("name", "note") | |
909 | if not d["seriesList"]: | |
911 | 910 | return pd.DataFrame(columns=keys) |
912 | 911 | recs = [] |
913 | for it in d['seriesList']: | |
912 | for it in d["seriesList"]: | |
914 | 913 | name, info = tuple(it.items())[0] |
915 | note = info.get('description', "") | |
914 | note = info.get("description", "") | |
916 | 915 | recs.append((name, note)) |
917 | 916 | return pd.DataFrame(recs, columns=keys) |
918 | 917 | else: |
919 | return d['seriesList'] | |
918 | return d["seriesList"] | |
920 | 919 | |
921 | 920 | def info(self, ds): |
922 | 921 | """ |
933 | 932 | SeriesInfo instance containing information about the data |
934 | 933 | series. |
935 | 934 | """ |
936 | if not self._server.check_supported('info'): | |
937 | raise DrmsOperationNotSupported('Server does not support series info access') | |
935 | if not self._server.check_supported("info"): | |
936 | raise DrmsOperationNotSupported("Server does not support series info access") | |
938 | 937 | name = _extract_series_name(ds) |
939 | 938 | if name is not None: |
940 | 939 | name = name.lower() |
941 | 940 | if name in self._info_cache: |
942 | 941 | return self._info_cache[name] |
943 | 942 | d = self._json.series_struct(name) |
944 | status = d.get('status') | |
943 | status = d.get("status") | |
945 | 944 | if status != 0: |
946 | 945 | self._raise_query_error(d) |
947 | 946 | si = SeriesInfo(d, name=name) |
1047 | 1046 | Link query results. This DataFrame is only returned, |
1048 | 1047 | if link is not None. |
1049 | 1048 | """ |
1050 | if not self._server.check_supported('query'): | |
1051 | raise DrmsOperationNotSupported('Server does not support DRMS queries') | |
1049 | if not self._server.check_supported("query"): | |
1050 | raise DrmsOperationNotSupported("Server does not support DRMS queries") | |
1052 | 1051 | if pkeys: |
1053 | 1052 | pk = self.pkeys(ds) |
1054 | 1053 | key = _split_arg(key) if key is not None else [] |
1056 | 1055 | key = pk + key |
1057 | 1056 | |
1058 | 1057 | lres = self._json.rs_list(ds, key, seg, link, recinfo=rec_index, n=n) |
1059 | status = lres.get('status') | |
1058 | status = lres.get("status") | |
1060 | 1059 | if status != 0: |
1061 | 1060 | self._raise_query_error(lres) |
1062 | 1061 | |
1063 | 1062 | res = [] |
1064 | 1063 | if key is not None: |
1065 | if 'keywords' in lres: | |
1066 | names = [it['name'] for it in lres['keywords']] | |
1067 | values = [it['values'] for it in lres['keywords']] | |
1064 | if "keywords" in lres: | |
1065 | names = [it["name"] for it in lres["keywords"]] | |
1066 | values = [it["values"] for it in lres["keywords"]] | |
1068 | 1067 | res_key = pd.DataFrame.from_dict(OrderedDict(zip(names, values))) |
1069 | 1068 | else: |
1070 | 1069 | res_key = pd.DataFrame() |
1073 | 1072 | res.append(res_key) |
1074 | 1073 | |
1075 | 1074 | if seg is not None: |
1076 | if 'segments' in lres: | |
1077 | names = [it['name'] for it in lres['segments']] | |
1078 | values = [it['values'] for it in lres['segments']] | |
1075 | if "segments" in lres: | |
1076 | names = [it["name"] for it in lres["segments"]] | |
1077 | values = [it["values"] for it in lres["segments"]] | |
1079 | 1078 | res_seg = pd.DataFrame.from_dict(OrderedDict(zip(names, values))) |
1080 | 1079 | else: |
1081 | 1080 | res_seg = pd.DataFrame() |
1082 | 1081 | res.append(res_seg) |
1083 | 1082 | |
1084 | 1083 | if link is not None: |
1085 | if 'links' in lres: | |
1086 | names = [it['name'] for it in lres['links']] | |
1087 | values = [it['values'] for it in lres['links']] | |
1084 | if "links" in lres: | |
1085 | names = [it["name"] for it in lres["links"]] | |
1086 | values = [it["values"] for it in lres["links"]] | |
1088 | 1087 | res_link = pd.DataFrame.from_dict(OrderedDict(zip(names, values))) |
1089 | 1088 | else: |
1090 | 1089 | res_link = pd.DataFrame() |
1091 | 1090 | res.append(res_link) |
1092 | 1091 | |
1093 | 1092 | if rec_index: |
1094 | index = [it['name'] for it in lres['recinfo']] | |
1093 | index = [it["name"] for it in lres["recinfo"]] | |
1095 | 1094 | for r in res: |
1096 | 1095 | r.index = index |
1097 | 1096 | |
1121 | 1120 | True if the email address is valid and registered, False |
1122 | 1121 | otherwise. |
1123 | 1122 | """ |
1124 | if not self._server.check_supported('email'): | |
1125 | raise DrmsOperationNotSupported('Server does not support user emails') | |
1123 | if not self._server.check_supported("email"): | |
1124 | raise DrmsOperationNotSupported("Server does not support user emails") | |
1126 | 1125 | res = self._json.check_address(email) |
1127 | status = res.get('status') | |
1126 | status = res.get("status") | |
1128 | 1127 | return status is not None and int(status) == 2 |
1129 | 1128 | |
1130 | 1129 | def export( |
1131 | 1130 | self, |
1132 | 1131 | ds, |
1133 | method='url_quick', | |
1134 | protocol='as-is', | |
1132 | method="url_quick", | |
1133 | protocol="as-is", | |
1135 | 1134 | protocol_args=None, |
1136 | 1135 | filenamefmt=None, |
1137 | 1136 | n=None, |
1205 | 1204 | ------- |
1206 | 1205 | result : `ExportRequest` |
1207 | 1206 | """ |
1208 | if not self._server.check_supported('export'): | |
1209 | raise DrmsOperationNotSupported('Server does not support export requests') | |
1207 | if not self._server.check_supported("export"): | |
1208 | raise DrmsOperationNotSupported("Server does not support export requests") | |
1210 | 1209 | if email is None: |
1211 | 1210 | if self._email is None: |
1212 | raise ValueError('The email argument is required, when no default email address was set') | |
1211 | raise ValueError("The email argument is required when no default email address was set") | 
1213 | 1212 | email = self._email |
1214 | 1213 | |
1215 | 1214 | if filenamefmt is None: |
1218 | 1217 | elif filenamefmt is False: |
1219 | 1218 | filenamefmt = None |
1220 | 1219 | |
1221 | if protocol.lower() in ['jpg', 'mpg', 'mp4']: | |
1220 | if protocol.lower() in ["jpg", "mpg", "mp4"]: | |
1222 | 1221 | self._validate_export_protocol_args(protocol_args) |
1223 | 1222 | |
1224 | 1223 | d = self._json.exp_request( |
1247 | 1246 | ------- |
1248 | 1247 | result : `ExportRequest` |
1249 | 1248 | """ |
1250 | if not self._server.check_supported('export'): | |
1251 | raise DrmsOperationNotSupported('Server does not support export requests') | |
1249 | if not self._server.check_supported("export"): | |
1250 | raise DrmsOperationNotSupported("Server does not support export requests") | |
1252 | 1251 | return ExportRequest._create_from_id(requestid, client=self) |
0 | 0 | from urllib.parse import urljoin |
1 | 1 | |
2 | __all__ = ['ServerConfig', 'register_server'] | |
2 | __all__ = ["ServerConfig", "register_server"] | |
3 | 3 | |
4 | 4 | |
5 | 5 | class ServerConfig: |
38 | 38 | """ |
39 | 39 | |
40 | 40 | _valid_keys = [ |
41 | 'name', | |
42 | 'cgi_baseurl', | |
43 | 'cgi_show_series', | |
44 | 'cgi_jsoc_info', | |
45 | 'cgi_jsoc_fetch', | |
46 | 'cgi_check_address', | |
47 | 'cgi_show_series_wrapper', | |
48 | 'show_series_wrapper_dbhost', | |
49 | 'url_show_series', | |
50 | 'url_jsoc_info', | |
51 | 'url_jsoc_fetch', | |
52 | 'url_check_address', | |
53 | 'url_show_series_wrapper', | |
54 | 'encoding', | |
55 | 'http_download_baseurl', | |
56 | 'ftp_download_baseurl', | |
41 | "name", | |
42 | "cgi_baseurl", | |
43 | "cgi_show_series", | |
44 | "cgi_jsoc_info", | |
45 | "cgi_jsoc_fetch", | |
46 | "cgi_check_address", | |
47 | "cgi_show_series_wrapper", | |
48 | "show_series_wrapper_dbhost", | |
49 | "url_show_series", | |
50 | "url_jsoc_info", | |
51 | "url_jsoc_fetch", | |
52 | "url_check_address", | |
53 | "url_show_series_wrapper", | |
54 | "encoding", | |
55 | "http_download_baseurl", | |
56 | "ftp_download_baseurl", | |
57 | 57 | ] |
58 | 58 | |
59 | 59 | def __init__(self, config=None, **kwargs): |
62 | 62 | |
63 | 63 | for k in d: |
64 | 64 | if k not in self._valid_keys: |
65 | raise ValueError(f'Invalid server config key: {k}') | |
65 | raise ValueError(f"Invalid server config key: {k}") | |
66 | 66 | |
67 | if 'name' not in d: | |
67 | if "name" not in d: | |
68 | 68 | raise ValueError('Server config entry "name" is missing') |
69 | 69 | |
70 | 70 | # encoding defaults to latin1 |
71 | if 'encoding' not in d: | |
72 | d['encoding'] = 'latin1' | |
71 | if "encoding" not in d: | |
72 | d["encoding"] = "latin1" | |
73 | 73 | |
74 | 74 | # Generate URL entries from CGI entries, if cgi_baseurl exists and |
75 | 75 | # the specific URL entry is not already set. |
76 | if 'cgi_baseurl' in d: | |
77 | cgi_baseurl = d['cgi_baseurl'] | |
78 | cgi_keys = [k for k in self._valid_keys if k.startswith('cgi') and k != 'cgi_baseurl'] | |
76 | if "cgi_baseurl" in d: | |
77 | cgi_baseurl = d["cgi_baseurl"] | |
78 | cgi_keys = [k for k in self._valid_keys if k.startswith("cgi") and k != "cgi_baseurl"] | |
79 | 79 | for k in cgi_keys: |
80 | url_key = f'url{k[3:]}' | |
80 | url_key = f"url{k[3:]}" | |
81 | 81 | cgi_value = d.get(k) |
82 | 82 | if d.get(url_key) is None and cgi_value is not None: |
83 | 83 | d[url_key] = urljoin(cgi_baseurl, cgi_value) |
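The URL-from-CGI derivation above can be checked against the JSOC values registered later in this module: each `cgi_*` entry is joined onto `cgi_baseurl` to fill the matching `url_*` entry.

```python
from urllib.parse import urljoin

# JSOC values as registered below in this module.
cgi_baseurl = "http://jsoc.stanford.edu/cgi-bin/ajax/"
url_jsoc_info = urljoin(cgi_baseurl, "jsoc_info")
print(url_jsoc_info)  # http://jsoc.stanford.edu/cgi-bin/ajax/jsoc_info
```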
97 | 97 | def __setattr__(self, name, value): |
98 | 98 | if name in self._valid_keys: |
99 | 99 | if not isinstance(value, str): |
100 | raise ValueError(f'{name} config value must be a string') | |
100 | raise ValueError(f"{name} config value must be a string") | |
101 | 101 | self._d[name] = value |
102 | 102 | else: |
103 | 103 | object.__setattr__(self, name, value) |
112 | 112 | """ |
113 | 113 | Check if an operation is supported by the server. |
114 | 114 | """ |
115 | if op == 'series': | |
115 | if op == "series": | |
116 | 116 | return (self.cgi_show_series is not None) or (self.cgi_show_series_wrapper is not None) |
117 | elif op == 'info': | |
117 | elif op == "info": | |
118 | 118 | return self.cgi_jsoc_info is not None |
119 | elif op == 'query': | |
119 | elif op == "query": | |
120 | 120 | return self.cgi_jsoc_info is not None |
121 | elif op == 'email': | |
121 | elif op == "email": | |
122 | 122 | return self.cgi_check_address is not None |
123 | elif op == 'export': | |
123 | elif op == "export": | |
124 | 124 | return (self.cgi_jsoc_info is not None) and (self.cgi_jsoc_fetch is not None) |
125 | 125 | else: |
126 | raise ValueError(f'Unknown operation: {op!r}') | |
126 | raise ValueError(f"Unknown operation: {op!r}") | |
127 | 127 | |
128 | 128 | |
129 | 129 | def register_server(config): |
133 | 133 | global _server_configs |
134 | 134 | name = config.name.lower() |
135 | 135 | if name in _server_configs: |
136 | raise RuntimeError(f'ServerConfig {name} already registered') | |
136 | raise RuntimeError(f"ServerConfig {name} already registered") | |
137 | 137 | _server_configs[config.name.lower()] = config |
138 | 138 | |
139 | 139 | |
143 | 143 | # Register public JSOC DRMS server. |
144 | 144 | register_server( |
145 | 145 | ServerConfig( |
146 | name='JSOC', | |
147 | cgi_baseurl='http://jsoc.stanford.edu/cgi-bin/ajax/', | |
148 | cgi_show_series='show_series', | |
149 | cgi_jsoc_info='jsoc_info', | |
150 | cgi_jsoc_fetch='jsoc_fetch', | |
151 | cgi_check_address='checkAddress.sh', | |
152 | cgi_show_series_wrapper='showextseries', | |
153 | show_series_wrapper_dbhost='hmidb2', | |
154 | http_download_baseurl='http://jsoc.stanford.edu/', | |
155 | ftp_download_baseurl='ftp://pail.stanford.edu/export/', | |
146 | name="JSOC", | |
147 | cgi_baseurl="http://jsoc.stanford.edu/cgi-bin/ajax/", | |
148 | cgi_show_series="show_series", | |
149 | cgi_jsoc_info="jsoc_info", | |
150 | cgi_jsoc_fetch="jsoc_fetch", | |
151 | cgi_check_address="checkAddress.sh", | |
152 | cgi_show_series_wrapper="showextseries", | |
153 | show_series_wrapper_dbhost="hmidb2", | |
154 | http_download_baseurl="http://jsoc.stanford.edu/", | |
155 | ftp_download_baseurl="ftp://pail.stanford.edu/export/", | |
156 | 156 | ) |
157 | 157 | ) |
158 | 158 | |
159 | 159 | # Register KIS DRMS server. |
160 | 160 | register_server( |
161 | 161 | ServerConfig( |
162 | name='KIS', | |
163 | cgi_baseurl='http://drms.leibniz-kis.de/cgi-bin/', | |
164 | cgi_show_series='show_series', | |
165 | cgi_jsoc_info='jsoc_info', | |
162 | name="KIS", | |
163 | cgi_baseurl="http://drms.leibniz-kis.de/cgi-bin/", | |
164 | cgi_show_series="show_series", | |
165 | cgi_jsoc_info="jsoc_info", | |
166 | 166 | ) |
167 | 167 | ) |
0 | 0 | __all__ = [ |
1 | 'DrmsError', | |
2 | 'DrmsQueryError', | |
3 | 'DrmsExportError', | |
4 | 'DrmsOperationNotSupported', | |
1 | "DrmsError", | |
2 | "DrmsQueryError", | |
3 | "DrmsExportError", | |
4 | "DrmsOperationNotSupported", | |
5 | 5 | ] |
6 | 6 | |
7 | 7 |
0 | 0 | import json as _json |
1 | 1 | from urllib.parse import urlencode, quote_plus |
2 | from urllib.request import urlopen | |
2 | from urllib.request import HTTPError, urlopen | |
3 | 3 | |
4 | 4 | from .config import ServerConfig, _server_configs |
5 | 5 | from .utils import _split_arg |
6 | 6 | |
7 | __all__ = ['const', 'HttpJsonRequest', 'HttpJsonClient'] | |
7 | __all__ = ["const", "HttpJsonRequest", "HttpJsonClient"] | |
8 | 8 | |
9 | 9 | |
10 | 10 | class JsocInfoConstants: |
37 | 37 | = ``'*archive*'`` |
38 | 38 | """ |
39 | 39 | |
40 | all = '**ALL**' | |
41 | none = '**NONE**' | |
42 | recdir = '*recdir*' | |
43 | dirmtime = '*dirmtime*' | |
44 | logdir = '*logdir*' | |
45 | recnum = '*recnum*' | |
46 | sunum = '*sunum*' | |
47 | size = '*size*' | |
48 | online = '*online*' | |
49 | retain = '*retain*' | |
50 | archive = '*archive*' | |
40 | all = "**ALL**" | |
41 | none = "**NONE**" | |
42 | recdir = "*recdir*" | |
43 | dirmtime = "*dirmtime*" | |
44 | logdir = "*logdir*" | |
45 | recnum = "*recnum*" | |
46 | sunum = "*sunum*" | |
47 | size = "*size*" | |
48 | online = "*online*" | |
49 | retain = "*retain*" | |
50 | archive = "*archive*" | |
51 | 51 | |
52 | 52 | |
53 | 53 | const = JsocInfoConstants() |
62 | 62 | |
63 | 63 | def __init__(self, url, encoding): |
64 | 64 | self._encoding = encoding |
65 | self._http = urlopen(url) | |
65 | try: | |
66 | self._http = urlopen(url) | |
67 | except HTTPError as e: | |
68 | e.msg = f"Failed to open URL: {e.url} with {e.code} - {e.msg}" | |
69 | raise e | |
66 | 70 | self._data_str = None |
67 | 71 | self._data = None |
68 | 72 | |
69 | 73 | def __repr__(self): |
70 | return f'<HttpJsonRequest: {self.url}>' | |
74 | return f"<HttpJsonRequest: {self.url}>" | |
71 | 75 | |
72 | 76 | @property |
73 | 77 | def url(self): |
99 | 103 | Enable or disable debug mode (default is disabled). |
100 | 104 | """ |
101 | 105 | |
102 | def __init__(self, server='jsoc', debug=False): | |
106 | def __init__(self, server="jsoc", debug=False): | |
103 | 107 | if isinstance(server, ServerConfig): |
104 | 108 | self._server = server |
105 | 109 | else: |
107 | 111 | self.debug = debug |
108 | 112 | |
109 | 113 | def __repr__(self): |
110 | return f'<HttpJsonClient: {self._server.name}>' | |
114 | return f"<HttpJsonClient: {self._server.name}>" | |
111 | 115 | |
112 | 116 | def _json_request(self, url): |
113 | 117 | if self.debug: |
139 | 143 | ------- |
140 | 144 | result : dict |
141 | 145 | """ |
142 | query = '?' if ds_filter is not None else "" | |
146 | query = "?" if ds_filter is not None else "" | |
143 | 147 | if ds_filter is not None: |
144 | query += urlencode({'filter': ds_filter}) | |
148 | query += urlencode({"filter": ds_filter}) | |
145 | 149 | req = self._json_request(self._server.url_show_series + query) |
146 | 150 | return req.data |
147 | 151 | |
167 | 171 | ------- |
168 | 172 | result : dict |
169 | 173 | """ |
170 | query_args = {'dbhost': self._server.show_series_wrapper_dbhost} | |
174 | query_args = {"dbhost": self._server.show_series_wrapper_dbhost} | |
171 | 175 | if ds_filter is not None: |
172 | query_args['filter'] = ds_filter | |
176 | query_args["filter"] = ds_filter | |
173 | 177 | if info: |
174 | query_args['info'] = '1' | |
175 | query = f'?{urlencode(query_args)}' | |
178 | query_args["info"] = "1" | |
179 | query = f"?{urlencode(query_args)}" | |
176 | 180 | req = self._json_request(self._server.url_show_series_wrapper + query) |
177 | 181 | return req.data |
178 | 182 | |
242 | 246 | Dictionary containing the requested record set information. |
243 | 247 | """ |
244 | 248 | if key is None and seg is None and link is None: |
245 | raise ValueError('At least one key, seg or link must be specified') | |
246 | d = {'op': 'rs_list', 'ds': ds} | |
249 | raise ValueError("At least one key, seg or link must be specified") | |
250 | d = {"op": "rs_list", "ds": ds} | |
247 | 251 | if key is not None: |
248 | d['key'] = ','.join(_split_arg(key)) | |
252 | d["key"] = ",".join(_split_arg(key)) | |
249 | 253 | if seg is not None: |
250 | d['seg'] = ','.join(_split_arg(seg)) | |
254 | d["seg"] = ",".join(_split_arg(seg)) | |
251 | 255 | if link is not None: |
252 | d['link'] = ','.join(_split_arg(link)) | |
256 | d["link"] = ",".join(_split_arg(link)) | |
253 | 257 | if recinfo: |
254 | d['R'] = '1' | |
258 | d["R"] = "1" | |
255 | 259 | if n is not None: |
256 | d['n'] = f'{int(int(n))}' | |
260 | d["n"] = f"{int(int(n))}" | |
257 | 261 | if uid is not None: |
258 | d['userhandle'] = uid | |
259 | query = f'?{urlencode(d)}' | |
262 | d["userhandle"] = uid | |
263 | query = f"?{urlencode(d)}" | |
260 | 264 | req = self._json_request(self._server.url_jsoc_info + query) |
261 | 265 | return req.data |
262 | 266 | |
279 | 283 | - 4: Email address has neither been validated nor registered |
280 | 284 | - -2: Not a valid email address |
281 | 285 | """ |
282 | query = '?' + urlencode({'address': quote_plus(email), 'checkonly': '1'}) | |
286 | query = "?" + urlencode({"address": quote_plus(email), "checkonly": "1"}) | |
283 | 287 | req = self._json_request(self._server.url_check_address + query) |
284 | 288 | return req.data |
285 | 289 | |
334 | 338 | self, |
335 | 339 | ds, |
336 | 340 | notify, |
337 | method='url_quick', | |
338 | protocol='as-is', | |
341 | method="url_quick", | |
342 | protocol="as-is", | |
339 | 343 | protocol_args=None, |
340 | 344 | filenamefmt=None, |
341 | 345 | n=None, |
343 | 347 | requestor=None, |
344 | 348 | ): |
345 | 349 | method = method.lower() |
346 | method_list = ['url_quick', 'url', 'url-tar', 'ftp', 'ftp-tar'] | |
350 | method_list = ["url_quick", "url", "url-tar", "ftp", "ftp-tar"] | |
347 | 351 | if method not in method_list: |
348 | 352 | raise ValueError( |
349 | 'Method {} is not supported, valid methods are: {}'.format( | |
350 | method, ', '.join(str(s) for s in method_list) | |
353 | "Method {} is not supported, valid methods are: {}".format( | |
354 | method, ", ".join(str(s) for s in method_list) | |
351 | 355 | ) |
352 | 356 | ) |
353 | 357 | |
354 | 358 | protocol = protocol.lower() |
355 | img_protocol_list = ['jpg', 'mpg', 'mp4'] | |
356 | protocol_list = ['as-is', 'fits'] + img_protocol_list | |
359 | img_protocol_list = ["jpg", "mpg", "mp4"] | |
360 | protocol_list = ["as-is", "fits"] + img_protocol_list | |
357 | 361 | if protocol not in protocol_list: |
358 | 362 | raise ValueError( |
359 | 'Protocol {} is not supported, valid protocols are: {}'.format( | |
360 | protocol, ', '.join(str(s) for s in protocol_list) | |
363 | "Protocol {} is not supported, valid protocols are: {}".format( | |
364 | protocol, ", ".join(str(s) for s in protocol_list) | |
361 | 365 | ) |
362 | 366 | ) |
363 | 367 | |
364 | 368 | # method "url_quick" is meant to be used with "as-is", change method |
365 | 369 | # to "url" if protocol is not "as-is" |
366 | if method == 'url_quick' and protocol != 'as-is': | |
367 | method = 'url' | |
370 | if method == "url_quick" and protocol != "as-is": | |
371 | method = "url" | |
368 | 372 | |
369 | 373 | if protocol in img_protocol_list: |
370 | extra_keys = {'ct': 'grey.sao', 'scaling': 'MINMAX', 'size': 1} | |
374 | extra_keys = {"ct": "grey.sao", "scaling": "MINMAX", "size": 1} | |
371 | 375 | if protocol_args is not None: |
372 | 376 | for k, v in protocol_args.items(): |
373 | if k.lower() == 'ct': | |
374 | extra_keys['ct'] = v | |
375 | elif k == 'scaling': | |
377 | if k.lower() == "ct": | |
378 | extra_keys["ct"] = v | |
379 | elif k == "scaling": | |
376 | 380 | extra_keys[k] = v |
377 | elif k == 'size': | |
381 | elif k == "size": | |
378 | 382 | extra_keys[k] = int(v) |
379 | elif k in ['min', 'max']: | |
383 | elif k in ["min", "max"]: | |
380 | 384 | extra_keys[k] = float(v) |
381 | 385 | else: |
382 | raise ValueError(f'Unknown protocol argument: {k}') | |
383 | protocol += ',CT={ct},scaling={scaling},size={size}'.format(**extra_keys) | |
384 | if 'min' in extra_keys: | |
386 | raise ValueError(f"Unknown protocol argument: {k}") | |
387 | protocol += ",CT={ct},scaling={scaling},size={size}".format(**extra_keys) | |
388 | if "min" in extra_keys: | |
385 | 389 | protocol += f',min={extra_keys["min"]:g}' |
386 | if 'max' in extra_keys: | |
390 | if "max" in extra_keys: | |
387 | 391 | protocol += f',max={extra_keys["max"]:g}' |
388 | 392 | else: |
389 | 393 | if protocol_args is not None: |
390 | raise ValueError(f'protocol_args not supported for protocol {protocol}') | |
394 | raise ValueError(f"protocol_args not supported for protocol {protocol}") | |
391 | 395 | |
392 | 396 | d = { |
393 | 'op': 'exp_request', | |
394 | 'format': 'json', | |
395 | 'ds': ds, | |
396 | 'notify': notify, | |
397 | 'method': method, | |
398 | 'protocol': protocol, | |
397 | "op": "exp_request", | |
398 | "format": "json", | |
399 | "ds": ds, | |
400 | "notify": notify, | |
401 | "method": method, | |
402 | "protocol": protocol, | |
399 | 403 | } |
400 | 404 | |
401 | 405 | if filenamefmt is not None: |
402 | d['filenamefmt'] = filenamefmt | |
406 | d["filenamefmt"] = filenamefmt | |
403 | 407 | |
404 | 408 | n = int(n) if n is not None else 0 |
405 | d['process=n'] = f'{n}' | |
409 | d["process=n"] = f"{n}" | |
406 | 410 | if process is not None: |
407 | 411 | allowed_processes = [ |
408 | 'im_patch', | |
409 | 'resize', | |
410 | 'rebin', | |
411 | 'aia_scale_aialev1', | |
412 | 'aia_scale_orig', | |
413 | 'aia_scale_other', | |
414 | 'Maproj', | |
415 | 'HmiB2ptr', | |
412 | "im_patch", | |
413 | "resize", | |
414 | "rebin", | |
415 | "aia_scale_aialev1", | |
416 | "aia_scale_orig", | |
417 | "aia_scale_other", | |
418 | "Maproj", | |
419 | "HmiB2ptr", | |
416 | 420 | ] |
417 | 421 | process_strings = {} |
418 | 422 | for p, opts in process.items(): |
419 | 423 | if p not in allowed_processes: |
420 | raise ValueError(f'{p} is not one of the allowed processing options: {allowed_processes}') | |
421 | process_strings[p] = ','.join([f'{k}={v}' for k, v in opts.items()]) | |
422 | processes = '|'.join([f'{k},{v}' for k, v in process_strings.items()]) | |
423 | d['process=n'] = f'{d["process=n"]}|{processes}' | |
424 | raise ValueError(f"{p} is not one of the allowed processing options: {allowed_processes}") | |
425 | process_strings[p] = ",".join([f"{k}={v}" for k, v in opts.items()]) | |
426 | processes = "|".join([f"{k},{v}" for k, v in process_strings.items()]) | |
427 | d["process=n"] = f'{d["process=n"]}|{processes}' | |
424 | 428 | |
425 | 429 | if requestor is None: |
426 | d['requestor'] = notify.split('@')[0] | |
430 | d["requestor"] = notify.split("@")[0] | |
427 | 431 | elif requestor is not False: |
428 | d['requestor'] = requestor | |
429 | ||
430 | query = '?' + urlencode(d) | |
432 | d["requestor"] = requestor | |
433 | ||
434 | query = "?" + urlencode(d) | |
431 | 435 | return self._server.url_jsoc_fetch + query |
432 | 436 | |
433 | 437 | def exp_status(self, requestid): |
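The new try/except around `urlopen` in `HttpJsonRequest.__init__` rewrites the exception message before re-raising. The message-building part can be checked without any network access by constructing an `HTTPError` by hand; a sketch (the URL is only an example, and `fp` must be a real file-like object, otherwise `HTTPError` never sets the `.url` attribute the message reads):

```python
import io
from urllib.error import HTTPError

def augment_http_error(e):
    # Same message format as the except-branch added to
    # HttpJsonRequest.__init__ before the exception is re-raised.
    e.msg = f"Failed to open URL: {e.url} with {e.code} - {e.msg}"
    return e

# Simulate a 404 without touching the network.
err = HTTPError("http://jsoc.stanford.edu/cgi-bin/ajax/show_series",
                404, "Not Found", None, io.BytesIO(b""))
print(augment_http_error(err).msg)
# prints: Failed to open URL: http://jsoc.stanford.edu/cgi-bin/ajax/show_series with 404 - Not Found
```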
8 | 8 | |
9 | 9 | # Create a Client instance |
10 | 10 | client = drms.Client(args.server, email=args.email, verbose=args.verbose, debug=args.debug) |
11 | print(f'client: {client}') | |
11 | print(f"client: {client}") | |
12 | 12 | |
13 | 13 | |
14 | 14 | def parse_args(args): |
15 | 15 | import drms |
16 | 16 | |
17 | 17 | # Handle command line options |
18 | parser = argparse.ArgumentParser(description='drms, access HMI, AIA and MDI data with python') | |
19 | parser.add_argument('--debug', action='store_true', help='enable debug output') | |
18 | parser = argparse.ArgumentParser(description="drms, access HMI, AIA and MDI data with python") | |
19 | parser.add_argument("--debug", action="store_true", help="enable debug output") | |
20 | 20 | parser.add_argument( |
21 | '--version', action='version', version=f'drms v{drms.__version__}', help='show package version and exit', | |
21 | "--version", | |
22 | action="version", | |
23 | version=f"drms v{drms.__version__}", | |
24 | help="show package version and exit", | |
22 | 25 | ) |
23 | parser.add_argument('--email', help='email address for data export requests') | |
24 | parser.add_argument('--verbose', action='store_true', help='print export status messages to stdout') | |
25 | parser.add_argument('server', nargs='?', default='jsoc', help='DRMS server, default is JSOC') | |
26 | parser.add_argument("--email", help="email address for data export requests") | |
27 | parser.add_argument("--verbose", action="store_true", help="print export status messages to stdout") | |
28 | parser.add_argument("server", nargs="?", default="jsoc", help="DRMS server, default is JSOC") | |
26 | 29 | |
27 | 30 | args = parser.parse_args(args) |
28 | 31 | return args |
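The `parse_args` hunks above only reflow the `--version` option, but the parser they describe is easy to exercise offline. A sketch with `"0.6.3"` standing in for `drms.__version__` so there is no drms dependency:

```python
import argparse

def parse_args(argv):
    # Mirrors the CLI wiring from the diff; "0.6.3" is a stand-in
    # for drms.__version__ in this self-contained sketch.
    parser = argparse.ArgumentParser(description="drms, access HMI, AIA and MDI data with python")
    parser.add_argument("--debug", action="store_true", help="enable debug output")
    parser.add_argument(
        "--version",
        action="version",
        version="drms v0.6.3",
        help="show package version and exit",
    )
    parser.add_argument("--email", help="email address for data export requests")
    parser.add_argument("--verbose", action="store_true", help="print export status messages to stdout")
    parser.add_argument("server", nargs="?", default="jsoc", help="DRMS server, default is JSOC")
    return parser.parse_args(argv)

args = parse_args(["--verbose", "kis"])
print(args.server, args.verbose)  # prints: kis True
```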
6 | 6 | def test_client_init_defaults(): |
7 | 7 | c = drms.Client() |
8 | 8 | assert isinstance(c._server, ServerConfig) |
9 | assert c._server.name.lower() == 'jsoc' | |
9 | assert c._server.name.lower() == "jsoc" | |
10 | 10 | assert c.email is None |
11 | 11 | assert c.verbose is False |
12 | 12 | assert c.debug is False |
13 | 13 | |
14 | 14 | |
15 | @pytest.mark.parametrize('value', [True, False]) | |
15 | @pytest.mark.parametrize("value", [True, False]) | |
16 | 16 | def test_client_init_verbose(value): |
17 | 17 | c = drms.Client(verbose=value) |
18 | 18 | assert c.verbose is value |
19 | 19 | assert c.debug is False |
20 | 20 | |
21 | 21 | |
22 | @pytest.mark.parametrize('value', [True, False]) | |
22 | @pytest.mark.parametrize("value", [True, False]) | |
23 | 23 | def test_client_init_debug(value): |
24 | 24 | c = drms.Client(debug=value) |
25 | 25 | assert c.verbose is False |
26 | 26 | assert c.debug is value |
27 | 27 | |
28 | 28 | |
29 | @pytest.mark.parametrize('server_name', ['jsoc', 'kis']) | |
29 | @pytest.mark.parametrize("server_name", ["jsoc", "kis"]) | |
30 | 30 | def test_client_registered_servers(server_name): |
31 | 31 | c = drms.Client(server_name) |
32 | 32 | assert isinstance(c._server, ServerConfig) |
37 | 37 | |
38 | 38 | |
39 | 39 | def test_client_custom_config(): |
40 | cfg = ServerConfig(name='TEST') | |
40 | cfg = ServerConfig(name="TEST") | |
41 | 41 | c = drms.Client(server=cfg) |
42 | 42 | assert isinstance(c._server, ServerConfig) |
43 | assert c._server.name == 'TEST' | |
43 | assert c._server.name == "TEST" | |
44 | 44 | |
45 | 45 | |
46 | 46 | def test_repr(): |
47 | assert repr(drms.Client()) == '<Client: JSOC>' | |
48 | assert repr(drms.Client(server='kis')) == '<Client: KIS>' | |
47 | assert repr(drms.Client()) == "<Client: JSOC>" | |
48 | assert repr(drms.Client(server="kis")) == "<Client: KIS>" |
3 | 3 | |
4 | 4 | |
5 | 5 | def test_create_config_basic(): |
6 | cfg = ServerConfig(name='TEST') | |
6 | cfg = ServerConfig(name="TEST") | |
7 | 7 | valid_keys = ServerConfig._valid_keys |
8 | assert 'name' in valid_keys | |
9 | assert 'encoding' in valid_keys | |
8 | assert "name" in valid_keys | |
9 | assert "encoding" in valid_keys | |
10 | 10 | for k in valid_keys: |
11 | 11 | v = getattr(cfg, k) |
12 | if k == 'name': | |
13 | assert v == 'TEST' | |
14 | elif k == 'encoding': | |
15 | assert v == 'latin1' | |
12 | if k == "name": | |
13 | assert v == "TEST" | |
14 | elif k == "encoding": | |
15 | assert v == "latin1" | |
16 | 16 | else: |
17 | 17 | assert v is None |
18 | assert repr(cfg) == '<ServerConfig: TEST>' | |
18 | assert repr(cfg) == "<ServerConfig: TEST>" | |
19 | 19 | |
20 | 20 | |
21 | 21 | def test_create_config_missing_name(): |
24 | 24 | |
25 | 25 | |
26 | 26 | def test_copy_config(): |
27 | cfg = ServerConfig(name='TEST') | |
28 | assert cfg.name == 'TEST' | |
27 | cfg = ServerConfig(name="TEST") | |
28 | assert cfg.name == "TEST" | |
29 | 29 | |
30 | 30 | cfg2 = cfg.copy() |
31 | 31 | assert cfg2 is not cfg |
32 | assert cfg2.name == 'TEST' | |
32 | assert cfg2.name == "TEST" | |
33 | 33 | |
34 | cfg.name = 'MUH' | |
34 | cfg.name = "MUH" | |
35 | 35 | assert cfg.name != cfg2.name |
36 | 36 | |
37 | 37 | |
38 | 38 | def test_register_server(): |
39 | cfg = ServerConfig(name='TEST') | |
39 | cfg = ServerConfig(name="TEST") | |
40 | 40 | |
41 | assert 'test' not in _server_configs | |
41 | assert "test" not in _server_configs | |
42 | 42 | register_server(cfg) |
43 | assert 'test' in _server_configs | |
43 | assert "test" in _server_configs | |
44 | 44 | |
45 | del _server_configs['test'] | |
46 | assert 'test' not in _server_configs | |
45 | del _server_configs["test"] | |
46 | assert "test" not in _server_configs | |
47 | 47 | |
48 | 48 | |
49 | 49 | def test_register_server_existing(): |
50 | assert 'jsoc' in _server_configs | |
51 | cfg = ServerConfig(name='jsoc') | |
50 | assert "jsoc" in _server_configs | |
51 | cfg = ServerConfig(name="jsoc") | |
52 | 52 | with pytest.raises(RuntimeError): |
53 | 53 | register_server(cfg) |
54 | assert 'jsoc' in _server_configs | |
54 | assert "jsoc" in _server_configs | |
55 | 55 | |
56 | 56 | |
57 | 57 | def test_config_jsoc(): |
58 | assert 'jsoc' in _server_configs | |
58 | assert "jsoc" in _server_configs | |
59 | 59 | |
60 | cfg = _server_configs['jsoc'] | |
61 | assert cfg.name.lower() == 'jsoc' | |
60 | cfg = _server_configs["jsoc"] | |
61 | assert cfg.name.lower() == "jsoc" | |
62 | 62 | assert isinstance(cfg.encoding, str) |
63 | 63 | assert isinstance(cfg.cgi_show_series, str) |
64 | 64 | assert isinstance(cfg.cgi_jsoc_info, str) |
66 | 66 | assert isinstance(cfg.cgi_check_address, str) |
67 | 67 | assert isinstance(cfg.cgi_show_series_wrapper, str) |
68 | 68 | assert isinstance(cfg.show_series_wrapper_dbhost, str) |
69 | assert cfg.http_download_baseurl.startswith('http://') | |
70 | assert cfg.ftp_download_baseurl.startswith('ftp://') | |
69 | assert cfg.http_download_baseurl.startswith("http://") | |
70 | assert cfg.ftp_download_baseurl.startswith("ftp://") | |
71 | 71 | |
72 | 72 | baseurl = cfg.cgi_baseurl |
73 | assert baseurl.startswith('http://') | |
73 | assert baseurl.startswith("http://") | |
74 | 74 | assert cfg.url_show_series.startswith(baseurl) |
75 | 75 | assert cfg.url_jsoc_info.startswith(baseurl) |
76 | 76 | assert cfg.url_jsoc_fetch.startswith(baseurl) |
79 | 79 | |
80 | 80 | |
81 | 81 | def test_config_kis(): |
82 | assert 'kis' in _server_configs | |
83 | cfg = _server_configs['kis'] | |
82 | assert "kis" in _server_configs | |
83 | cfg = _server_configs["kis"] | |
84 | 84 | |
85 | assert cfg.name.lower() == 'kis' | |
85 | assert cfg.name.lower() == "kis" | |
86 | 86 | assert isinstance(cfg.encoding, str) |
87 | 87 | |
88 | 88 | assert isinstance(cfg.cgi_show_series, str) |
95 | 95 | assert cfg.ftp_download_baseurl is None |
96 | 96 | |
97 | 97 | baseurl = cfg.cgi_baseurl |
98 | assert baseurl.startswith('http://') | |
98 | assert baseurl.startswith("http://") | |
99 | 99 | assert cfg.url_show_series.startswith(baseurl) |
100 | 100 | assert cfg.url_jsoc_info.startswith(baseurl) |
101 | 101 | assert cfg.url_jsoc_fetch is None |
104 | 104 | |
105 | 105 | |
106 | 106 | @pytest.mark.parametrize( |
107 | 'server_name, operation, expected', | |
107 | "server_name, operation, expected", | |
108 | 108 | [ |
109 | ('jsoc', 'series', True), | |
110 | ('jsoc', 'info', True), | |
111 | ('jsoc', 'query', True), | |
112 | ('jsoc', 'email', True), | |
113 | ('jsoc', 'export', True), | |
114 | ('kis', 'series', True), | |
115 | ('kis', 'info', True), | |
116 | ('kis', 'query', True), | |
117 | ('kis', 'email', False), | |
118 | ('kis', 'export', False), | |
109 | ("jsoc", "series", True), | |
110 | ("jsoc", "info", True), | |
111 | ("jsoc", "query", True), | |
112 | ("jsoc", "email", True), | |
113 | ("jsoc", "export", True), | |
114 | ("kis", "series", True), | |
115 | ("kis", "info", True), | |
116 | ("kis", "query", True), | |
117 | ("kis", "email", False), | |
118 | ("kis", "export", False), | |
119 | 119 | ], |
120 | 120 | ) |
121 | 121 | def test_supported(server_name, operation, expected): |
124 | 124 | |
125 | 125 | |
126 | 126 | @pytest.mark.parametrize( |
127 | 'server_name, operation', [('jsoc', 'bar'), ('kis', 'foo')], | |
127 | "server_name, operation", | |
128 | [("jsoc", "bar"), ("kis", "foo")], | |
128 | 129 | ) |
129 | 130 | def test_supported_invalid_operation(server_name, operation): |
130 | 131 | cfg = _server_configs[server_name] |
134 | 135 | |
135 | 136 | def test_create_config_invalid_key(): |
136 | 137 | with pytest.raises(ValueError): |
137 | cfg = ServerConfig(foo='bar') | |
138 | cfg = ServerConfig(foo="bar") | |
138 | 139 | |
139 | 140 | |
140 | 141 | def test_getset_attr(): |
141 | cfg = ServerConfig(name='TEST') | |
142 | assert getattr(cfg, 'name') == 'TEST' | |
143 | assert getattr(cfg, '__dict__') == {'_d': {'encoding': 'latin1', 'name': 'TEST'}} | |
142 | cfg = ServerConfig(name="TEST") | |
143 | assert getattr(cfg, "name") == "TEST" | |
144 | assert getattr(cfg, "__dict__") == {"_d": {"encoding": "latin1", "name": "TEST"}} | |
144 | 145 | with pytest.raises(AttributeError): |
145 | getattr(cfg, 'foo') | |
146 | setattr(cfg, 'name', 'NewTest') | |
147 | assert getattr(cfg, 'name') == 'NewTest' | |
146 | getattr(cfg, "foo") | |
147 | setattr(cfg, "name", "NewTest") | |
148 | assert getattr(cfg, "name") == "NewTest" | |
148 | 149 | with pytest.raises(ValueError): |
149 | setattr(cfg, 'name', 123) | |
150 | setattr(cfg, '__sizeof__', 127) | |
151 | assert getattr(cfg, '__sizeof__') == 127 | |
150 | setattr(cfg, "name", 123) | |
151 | setattr(cfg, "__sizeof__", 127) | |
152 | assert getattr(cfg, "__sizeof__") == 127 | |
152 | 153 | |
153 | 154 | |
154 | 155 | def test_to_dict(): |
155 | cfg = ServerConfig(name='TEST') | |
156 | cfg = ServerConfig(name="TEST") | |
156 | 157 | _dict = cfg.to_dict() |
157 | 158 | assert isinstance(_dict, dict) |
158 | assert _dict == {'encoding': 'latin1', 'name': 'TEST'} | |
159 | assert _dict == {"encoding": "latin1", "name": "TEST"} | |
159 | 160 | |
160 | 161 | |
161 | 162 | def test_inbuilt_dir(): |
162 | cfg = ServerConfig(name='TEST') | |
163 | cfg = ServerConfig(name="TEST") | |
163 | 164 | valid_keys = ServerConfig._valid_keys |
164 | 165 | list_attr = dir(cfg) |
165 | 166 | assert isinstance(list_attr, list) |
3 | 3 | |
4 | 4 | |
5 | 5 | @pytest.mark.parametrize( |
6 | 'exception_class', [drms.DrmsError, drms.DrmsQueryError, drms.DrmsExportError, drms.DrmsOperationNotSupported], | |
6 | "exception_class", | |
7 | [drms.DrmsError, drms.DrmsQueryError, drms.DrmsExportError, drms.DrmsOperationNotSupported], | |
7 | 8 | ) |
8 | 9 | def test_exception_class(exception_class): |
9 | 10 | with pytest.raises(RuntimeError): |
5 | 5 | |
6 | 6 | |
7 | 7 | @pytest.mark.parametrize( |
8 | 'symbol', | |
8 | "symbol", | |
9 | 9 | [ |
10 | 'DrmsError', | |
11 | 'DrmsQueryError', | |
12 | 'DrmsExportError', | |
13 | 'DrmsOperationNotSupported', | |
14 | 'SeriesInfo', | |
15 | 'ExportRequest', | |
16 | 'Client', | |
17 | 'const', | |
18 | 'to_datetime', | |
10 | "DrmsError", | |
11 | "DrmsQueryError", | |
12 | "DrmsExportError", | |
13 | "DrmsOperationNotSupported", | |
14 | "SeriesInfo", | |
15 | "ExportRequest", | |
16 | "Client", | |
17 | "const", | |
18 | "to_datetime", | |
19 | 19 | ], |
20 | 20 | ) |
21 | 21 | def test_symbols(symbol): |
24 | 24 | |
25 | 25 | def test_version(): |
26 | 26 | assert isinstance(drms.__version__, str) |
27 | version = drms.__version__.split('+')[0] | |
27 | version = drms.__version__.split("+")[0] | |
28 | 28 | # Check to make sure it isn't empty |
29 | 29 | assert version |
30 | 30 | # To match the 0.6 in 0.6.dev3 or v0.6 |
31 | m = re.match(r'v*\d+\.\d+\.*', version) | |
31 | m = re.match(r"v*\d+\.\d+\.*", version) | |
32 | 32 | assert m is not None |
33 | 33 | |
34 | 34 | |
35 | 35 | def test_bibtex(): |
36 | 36 | assert isinstance(drms.__citation__, str) |
37 | m = re.match(r'.*Glogowski2019.*', drms.__citation__) | |
37 | m = re.match(r".*Glogowski2019.*", drms.__citation__) | |
38 | 38 | assert m is not None |
5 | 5 | def test_series_list_all(jsoc_client): |
6 | 6 | slist = jsoc_client.series() |
7 | 7 | assert isinstance(slist, list) |
8 | assert 'hmi.v_45s' in (s.lower() for s in slist) | |
9 | assert 'hmi.m_720s' in (s.lower() for s in slist) | |
10 | assert 'hmi.ic_720s' in (s.lower() for s in slist) | |
11 | assert 'aia.lev1' in (s.lower() for s in slist) | |
12 | assert 'aia.lev1_euv_12s' in (s.lower() for s in slist) | |
13 | assert 'mdi.fd_v' in (s.lower() for s in slist) | |
8 | assert "hmi.v_45s" in (s.lower() for s in slist) | |
9 | assert "hmi.m_720s" in (s.lower() for s in slist) | |
10 | assert "hmi.ic_720s" in (s.lower() for s in slist) | |
11 | assert "aia.lev1" in (s.lower() for s in slist) | |
12 | assert "aia.lev1_euv_12s" in (s.lower() for s in slist) | |
13 | assert "mdi.fd_v" in (s.lower() for s in slist) | |
14 | 14 | |
15 | 15 | |
16 | 16 | @pytest.mark.jsoc |
17 | 17 | @pytest.mark.remote_data |
18 | @pytest.mark.parametrize('schema', ['aia', 'hmi', 'mdi']) | |
18 | @pytest.mark.parametrize("schema", ["aia", "hmi", "mdi"]) | |
19 | 19 | def test_series_list_schemata(jsoc_client, schema): |
20 | regex = fr'{schema}\.' | |
20 | regex = rf"{schema}\." | |
21 | 21 | slist = jsoc_client.series(regex) |
22 | 22 | assert len(slist) > 0 |
23 | 23 | for sname in slist: |
24 | assert sname.startswith(f'{schema}.') | |
24 | assert sname.startswith(f"{schema}.") |
5 | 5 | |
6 | 6 | # Invalid email addresses used for testing |
7 | 7 | invalid_emails = [ |
8 | 'notregistered@example.com', | |
9 | 'not-valid', | |
8 | "notregistered@example.com", | |
9 | "not-valid", | |
10 | 10 | "", |
11 | 11 | ] |
12 | 12 | |
13 | 13 | |
14 | 14 | @pytest.mark.jsoc |
15 | 15 | @pytest.mark.remote_data |
16 | @pytest.mark.parametrize('email', invalid_emails) | |
16 | @pytest.mark.parametrize("email", invalid_emails) | |
17 | 17 | def test_email_invalid_check(email): |
18 | c = drms.Client('jsoc') | |
18 | c = drms.Client("jsoc") | |
19 | 19 | assert not c.check_email(email) |
20 | 20 | |
21 | 21 | |
22 | 22 | @pytest.mark.jsoc |
23 | 23 | @pytest.mark.remote_data |
24 | @pytest.mark.parametrize('email', invalid_emails) | |
24 | @pytest.mark.parametrize("email", invalid_emails) | |
25 | 25 | def test_email_invalid_set(email): |
26 | c = drms.Client('jsoc') | |
26 | c = drms.Client("jsoc") | |
27 | 27 | with pytest.raises(ValueError): |
28 | 28 | c.email = email |
29 | 29 | |
30 | 30 | |
31 | 31 | @pytest.mark.jsoc |
32 | 32 | @pytest.mark.remote_data |
33 | @pytest.mark.parametrize('email', invalid_emails) | |
33 | @pytest.mark.parametrize("email", invalid_emails) | |
34 | 34 | def test_email_invalid_init(email): |
35 | 35 | with pytest.raises(ValueError): |
36 | drms.Client('jsoc', email=email) | |
36 | drms.Client("jsoc", email=email) | |
37 | 37 | |
38 | 38 | |
39 | 39 | @pytest.mark.jsoc |
40 | 40 | @pytest.mark.export |
41 | 41 | @pytest.mark.remote_data |
42 | 42 | def test_email_cmdopt_check(email): |
43 | c = drms.Client('jsoc') | |
43 | c = drms.Client("jsoc") | |
44 | 44 | assert c.check_email(email) |
45 | 45 | |
46 | 46 | |
48 | 48 | @pytest.mark.export |
49 | 49 | @pytest.mark.remote_data |
50 | 50 | def test_email_cmdopt_set(email): |
51 | c = drms.Client('jsoc') | |
51 | c = drms.Client("jsoc") | |
52 | 52 | c.email = email |
53 | 53 | assert c.email == email |
54 | 54 | |
57 | 57 | @pytest.mark.export |
58 | 58 | @pytest.mark.remote_data |
59 | 59 | def test_email_cmdopt_init(email): |
60 | c = drms.Client('jsoc', email=email) | |
60 | c = drms.Client("jsoc", email=email) | |
61 | 61 | assert c.email == email |
62 | 62 | |
63 | 63 | |
64 | 64 | def test_query_invalid(): |
65 | cfg = ServerConfig(name='TEST') | |
65 | cfg = ServerConfig(name="TEST") | |
66 | 66 | with pytest.raises(DrmsOperationNotSupported): |
67 | c = drms.Client(server=cfg, email='user@example.com') | |
67 | c = drms.Client(server=cfg, email="user@example.com") |
5 | 5 | @pytest.mark.jsoc |
6 | 6 | @pytest.mark.export |
7 | 7 | @pytest.mark.remote_data |
8 | @pytest.mark.parametrize('method', ['url_quick', 'url']) | |
8 | @pytest.mark.parametrize("method", ["url_quick", "url"]) | |
9 | 9 | def test_export_asis_basic(jsoc_client_export, method): |
10 | 10 | r = jsoc_client_export.export( |
11 | 'hmi.v_avg120[2150]{mean,power}', protocol='as-is', method=method, requestor=False, | |
11 | "hmi.v_avg120[2150]{mean,power}", | |
12 | protocol="as-is", | |
13 | method=method, | |
14 | requestor=False, | |
12 | 15 | ) |
13 | 16 | |
14 | 17 | assert isinstance(r, drms.ExportRequest) |
15 | 18 | assert r.wait(timeout=60) |
16 | 19 | assert r.has_succeeded() |
17 | assert r.protocol == 'as-is' | |
20 | assert r.protocol == "as-is" | |
18 | 21 | assert len(r.urls) == 12 # 6 files per segment |
19 | 22 | |
20 | 23 | for record in r.urls.record: |
21 | 24 | record = record.lower() |
22 | assert record.startswith('hmi.v_avg120[2150]') | |
23 | assert record.endswith('{mean}') or record.endswith('{power}') | |
25 | assert record.startswith("hmi.v_avg120[2150]") | |
26 | assert record.endswith("{mean}") or record.endswith("{power}") | |
24 | 27 | |
25 | 28 | for filename in r.urls.filename: |
26 | assert filename.endswith('mean.fits') or filename.endswith('power.fits') | |
29 | assert filename.endswith("mean.fits") or filename.endswith("power.fits") | |
27 | 30 | |
28 | 31 | for url in r.urls.url: |
29 | assert url.endswith('mean.fits') or url.endswith('power.fits') | |
32 | assert url.endswith("mean.fits") or url.endswith("power.fits") | |
30 | 33 | |
31 | 34 | |
32 | 35 | @pytest.mark.jsoc |
34 | 37 | @pytest.mark.remote_data |
35 | 38 | def test_export_fits_basic(jsoc_client_export): |
36 | 39 | r = jsoc_client_export.export( |
37 | 'hmi.sharp_720s[4864][2014.11.30_00:00_TAI]{continuum, magnetogram}', | |
38 | protocol='fits', | |
39 | method='url', | |
40 | "hmi.sharp_720s[4864][2014.11.30_00:00_TAI]{continuum, magnetogram}", | |
41 | protocol="fits", | |
42 | method="url", | |
40 | 43 | requestor=False, |
41 | 44 | ) |
42 | 45 | |
43 | 46 | assert isinstance(r, drms.ExportRequest) |
44 | 47 | assert r.wait(timeout=60) |
45 | 48 | assert r.has_succeeded() |
46 | assert r.protocol == 'fits' | |
49 | assert r.protocol == "fits" | |
47 | 50 | assert len(r.urls) == 2 # 1 file per segment |
48 | 51 | |
49 | 52 | for record in r.urls.record: |
50 | 53 | record = record.lower() |
51 | assert record.startswith('hmi.sharp_720s[4864]') | |
52 | assert record.endswith('2014.11.30_00:00:00_tai]') | |
54 | assert record.startswith("hmi.sharp_720s[4864]") | |
55 | assert record.endswith("2014.11.30_00:00:00_tai]") | |
53 | 56 | |
54 | 57 | for filename in r.urls.filename: |
55 | assert filename.endswith('continuum.fits') or filename.endswith('magnetogram.fits') | |
58 | assert filename.endswith("continuum.fits") or filename.endswith("magnetogram.fits") | |
56 | 59 | |
57 | 60 | for url in r.urls.url: |
58 | assert url.endswith('continuum.fits') or url.endswith('magnetogram.fits') | |
61 | assert url.endswith("continuum.fits") or url.endswith("magnetogram.fits") | |
59 | 62 | |
60 | 63 | |
61 | 64 | @pytest.mark.jsoc |
66 | 69 | # NOTE: processing exports seem to fail silently on the server side if |
67 | 70 | # the correct names/arguments are not passed. Not clear how to check |
68 | 71 | # that this has not happened. |
69 | process = {'im_patch': { | |
70 | 't_ref': '2015-10-17T04:33:30.000', | |
71 | 't': 0, | |
72 | 'r': 0, | |
73 | 'c': 0, | |
74 | 'locunits': 'arcsec', | |
75 | 'boxunits': 'arcsec', | |
76 | 'x': -517.2, | |
77 | 'y': -246, | |
78 | 'width': 345.6, | |
79 | 'height': 345.6, | |
80 | }} | |
72 | process = { | |
73 | "im_patch": { | |
74 | "t_ref": "2015-10-17T04:33:30.000", | |
75 | "t": 0, | |
76 | "r": 0, | |
77 | "c": 0, | |
78 | "locunits": "arcsec", | |
79 | "boxunits": "arcsec", | |
80 | "x": -517.2, | |
81 | "y": -246, | |
82 | "width": 345.6, | |
83 | "height": 345.6, | |
84 | } | |
85 | } | |
81 | 86 | req = jsoc_client_export.export( |
82 | 'aia.lev1_euv_12s[2015-10-17T04:33:30.000/1m@12s][171]{image}', | |
83 | method='url', | |
84 | protocol='fits', | |
87 | "aia.lev1_euv_12s[2015-10-17T04:33:30.000/1m@12s][171]{image}", | |
88 | method="url", | |
89 | protocol="fits", | |
85 | 90 | process=process, |
86 | 91 | requestor=False, |
87 | 92 | ) |
89 | 94 | assert isinstance(req, drms.ExportRequest) |
90 | 95 | assert req.wait(timeout=60) |
91 | 96 | assert req.has_succeeded() |
92 | assert req.protocol == 'fits' | |
97 | assert req.protocol == "fits" | |
93 | 98 | |
94 | 99 | for record in req.urls.record: |
95 | 100 | record = record.lower() |
96 | assert record.startswith('aia.lev1_euv_12s_mod') | |
101 | assert record.startswith("aia.lev1_euv_12s_mod") | |
97 | 102 | |
98 | 103 | for filename in req.urls.filename: |
99 | assert filename.endswith('image.fits') | |
104 | assert filename.endswith("image.fits") | |
100 | 105 | |
101 | 106 | for url in req.urls.url: |
102 | assert url.endswith('image.fits') | |
107 | assert url.endswith("image.fits") | |
103 | 108 | |
104 | 109 | |
105 | 110 | @pytest.mark.jsoc |
111 | 116 | # the correct names/arguments are not passed. Not clear how to check |
112 | 117 | # that this has not happened. |
113 | 118 | req = jsoc_client_export.export( |
114 | 'hmi.M_720s[2020-10-17_22:12:00_TAI/24m]{magnetogram}', | |
115 | method='url', | |
116 | protocol='fits', | |
117 | process={'rebin': {'method': 'boxcar', 'scale': 0.25}}, | |
119 | "hmi.M_720s[2020-10-17_22:12:00_TAI/24m]{magnetogram}", | |
120 | method="url", | |
121 | protocol="fits", | |
122 | process={"rebin": {"method": "boxcar", "scale": 0.25}}, | |
118 | 123 | requestor=False, |
119 | 124 | ) |
120 | 125 | |
121 | 126 | assert isinstance(req, drms.ExportRequest) |
122 | 127 | assert req.wait(timeout=60) |
123 | 128 | assert req.has_succeeded() |
124 | assert req.protocol == 'fits' | |
129 | assert req.protocol == "fits" | |
125 | 130 | |
126 | 131 | for record in req.urls.record: |
127 | 132 | record = record.lower() |
128 | assert record.startswith('hmi.m_720s_mod') | |
133 | assert record.startswith("hmi.m_720s_mod") | |
129 | 134 | |
130 | 135 | for filename in req.urls.filename: |
131 | assert filename.endswith('magnetogram.fits') | |
136 | assert filename.endswith("magnetogram.fits") | |
132 | 137 | |
133 | 138 | for url in req.urls.url: |
134 | assert url.endswith('magnetogram.fits') | |
139 | assert url.endswith("magnetogram.fits") | |
135 | 140 | |
136 | 141 | |
137 | 142 | @pytest.mark.jsoc |
138 | 143 | @pytest.mark.export |
139 | 144 | @pytest.mark.remote_data |
140 | 145 | def test_export_invalid_process(jsoc_client_export): |
141 | with pytest.raises(ValueError, match='foobar is not one of the allowed processing options'): | |
146 | with pytest.raises(ValueError, match="foobar is not one of the allowed processing options"): | |
142 | 147 | jsoc_client_export.export( |
143 | 'aia.lev1_euv_12s[2015-10-17T04:33:30.000/1m@12s][171]{image}', | |
144 | process={'foobar': {}} | |
148 | "aia.lev1_euv_12s[2015-10-17T04:33:30.000/1m@12s][171]{image}", process={"foobar": {}} | |
145 | 149 | ) |
146 | 150 | |
147 | 151 | |
150 | 154 | @pytest.mark.remote_data |
151 | 155 | def test_export_email(jsoc_client): |
152 | 156 | with pytest.raises(ValueError): |
153 | jsoc_client.export('hmi.v_45s[2016.04.01_TAI/1d@6h]{Dopplergram}') | |
157 | jsoc_client.export("hmi.v_45s[2016.04.01_TAI/1d@6h]{Dopplergram}") |
3 | 3 | @pytest.mark.jsoc |
4 | 4 | @pytest.mark.remote_data |
5 | 5 | @pytest.mark.parametrize( |
6 | 'series, pkeys, segments', | |
6 | "series, pkeys, segments", | |
7 | 7 | [ |
8 | ('hmi.v_45s', ['T_REC', 'CAMERA'], ['Dopplergram']), | |
9 | ('hmi.m_720s', ['T_REC', 'CAMERA'], ['magnetogram']), | |
10 | ('hmi.v_avg120', ['CarrRot', 'CMLon'], ['mean', 'power', 'valid', 'Log']), | |
8 | ("hmi.v_45s", ["T_REC", "CAMERA"], ["Dopplergram"]), | |
9 | ("hmi.m_720s", ["T_REC", "CAMERA"], ["magnetogram"]), | |
10 | ("hmi.v_avg120", ["CarrRot", "CMLon"], ["mean", "power", "valid", "Log"]), | |
11 | 11 | ], |
12 | 12 | ) |
13 | 13 | def test_series_info_basic(jsoc_client, series, pkeys, segments): |
23 | 23 | @pytest.mark.jsoc |
24 | 24 | @pytest.mark.remote_data |
25 | 25 | @pytest.mark.parametrize( |
26 | 'series, pkeys', | |
26 | "series, pkeys", | |
27 | 27 | [ |
28 | ('hmi.v_45s', ['T_REC', 'CAMERA']), | |
29 | ('hmi.m_720s', ['T_REC', 'CAMERA']), | |
30 | ('hmi.v_avg120', ['CarrRot', 'CMLon']), | |
31 | ('aia.lev1', ['T_REC', 'FSN']), | |
32 | ('aia.lev1_euv_12s', ['T_REC', 'WAVELNTH']), | |
33 | ('aia.response', ['T_START', 'WAVE_STR']), | |
34 | ('iris.lev1', ['T_OBS', 'FSN']), | |
35 | ('mdi.fd_m_lev182', ['T_REC']), | |
28 | ("hmi.v_45s", ["T_REC", "CAMERA"]), | |
29 | ("hmi.m_720s", ["T_REC", "CAMERA"]), | |
30 | ("hmi.v_avg120", ["CarrRot", "CMLon"]), | |
31 | ("aia.lev1", ["T_REC", "FSN"]), | |
32 | ("aia.lev1_euv_12s", ["T_REC", "WAVELNTH"]), | |
33 | ("aia.response", ["T_START", "WAVE_STR"]), | |
34 | ("iris.lev1", ["T_OBS", "FSN"]), | |
35 | ("mdi.fd_m_lev182", ["T_REC"]), | |
36 | 36 | ], |
37 | 37 | ) |
38 | 38 | def test_series_primekeys(jsoc_client, series, pkeys): |
7 | 7 | @pytest.mark.jsoc |
8 | 8 | @pytest.mark.remote_data |
9 | 9 | def test_query_basic(jsoc_client): |
10 | keys, segs = jsoc_client.query('hmi.v_45s[2013.07.03_08:42_TAI/3m]', key='T_REC, CRLT_OBS', seg='Dopplergram') | |
10 | keys, segs = jsoc_client.query("hmi.v_45s[2013.07.03_08:42_TAI/3m]", key="T_REC, CRLT_OBS", seg="Dopplergram") | |
11 | 11 | assert len(keys) == 4 |
12 | for k in ['T_REC', 'CRLT_OBS']: | |
12 | for k in ["T_REC", "CRLT_OBS"]: | |
13 | 13 | assert k in keys.columns |
14 | 14 | assert len(segs) == 4 |
15 | assert 'Dopplergram' in segs.columns | |
15 | assert "Dopplergram" in segs.columns | |
16 | 16 | assert ((keys.CRLT_OBS - 3.14159).abs() < 0.0001).all() |
17 | 17 | |
18 | 18 | |
20 | 20 | @pytest.mark.remote_data |
21 | 21 | def test_query_allmissing(jsoc_client): |
22 | 22 | with pytest.raises(ValueError): |
23 | keys = jsoc_client.query('hmi.v_45s[2013.07.03_08:42_TAI/3m]') | |
23 | keys = jsoc_client.query("hmi.v_45s[2013.07.03_08:42_TAI/3m]") | |
24 | 24 | |
25 | 25 | |
26 | 26 | @pytest.mark.jsoc |
27 | 27 | @pytest.mark.remote_data |
28 | 28 | def test_query_key(jsoc_client): |
29 | keys = jsoc_client.query('hmi.v_45s[2013.07.03_08:42_TAI/3m]', key='T_REC, CRLT_OBS') | |
29 | keys = jsoc_client.query("hmi.v_45s[2013.07.03_08:42_TAI/3m]", key="T_REC, CRLT_OBS") | |
30 | 30 | assert len(keys) == 4 |
31 | for k in ['T_REC', 'CRLT_OBS']: | |
31 | for k in ["T_REC", "CRLT_OBS"]: | |
32 | 32 | assert k in keys.columns |
33 | 33 | assert ((keys.CRLT_OBS - 3.14159).abs() < 0.0001).all() |
34 | 34 | |
36 | 36 | @pytest.mark.jsoc |
37 | 37 | @pytest.mark.remote_data |
38 | 38 | def test_query_seg(jsoc_client): |
39 | segs = jsoc_client.query('hmi.v_45s[2013.07.03_08:42_TAI/3m]', seg='Dopplergram') | |
39 | segs = jsoc_client.query("hmi.v_45s[2013.07.03_08:42_TAI/3m]", seg="Dopplergram") | |
40 | 40 | assert len(segs) == 4 |
41 | assert 'Dopplergram' in segs.columns | |
41 | assert "Dopplergram" in segs.columns | |
42 | 42 | |
43 | 43 | |
44 | 44 | @pytest.mark.jsoc |
45 | 45 | @pytest.mark.remote_data |
46 | 46 | def test_query_link(jsoc_client): |
47 | links = jsoc_client.query('hmi.B_720s[2013.07.03_08:42_TAI/40m]', link='MDATA') | |
47 | links = jsoc_client.query("hmi.B_720s[2013.07.03_08:42_TAI/40m]", link="MDATA") | |
48 | 48 | assert len(links) == 3 |
49 | assert 'MDATA' in links.columns | |
49 | assert "MDATA" in links.columns | |
50 | 50 | |
51 | 51 | |
52 | 52 | @pytest.mark.jsoc |
53 | 53 | @pytest.mark.remote_data |
54 | 54 | def test_query_seg_key_link(jsoc_client): |
55 | keys, segs, links = jsoc_client.query('hmi.B_720s[2013.07.03_08:42_TAI/40m]', key='foo', link='bar', seg='baz') | |
55 | keys, segs, links = jsoc_client.query("hmi.B_720s[2013.07.03_08:42_TAI/40m]", key="foo", link="bar", seg="baz") | |
56 | 56 | assert len(keys) == 3 |
57 | assert (keys.foo == 'Invalid KeyLink').all() | |
57 | assert (keys.foo == "Invalid KeyLink").all() | |
58 | 58 | assert len(segs) == 3 |
59 | assert (segs.baz == 'InvalidSegName').all() | |
59 | assert (segs.baz == "InvalidSegName").all() | |
60 | 60 | assert len(links) == 3 |
61 | assert (links.bar == 'Invalid_Link').all() | |
61 | assert (links.bar == "Invalid_Link").all() | |
62 | 62 | |
63 | 63 | |
64 | 64 | @pytest.mark.jsoc |
65 | 65 | @pytest.mark.remote_data |
66 | 66 | def test_query_pkeys(jsoc_client): |
67 | keys = jsoc_client.query('hmi.v_45s[2013.07.03_08:42_TAI/3m]', pkeys=True) | |
67 | keys = jsoc_client.query("hmi.v_45s[2013.07.03_08:42_TAI/3m]", pkeys=True) | |
68 | 68 | pkeys = list(keys.columns.values) |
69 | assert pkeys == jsoc_client.pkeys('hmi.v_45s[2013.07.03_08:42_TAI/3m]') | |
69 | assert pkeys == jsoc_client.pkeys("hmi.v_45s[2013.07.03_08:42_TAI/3m]") | |
70 | 70 | assert len(keys) == 4 |
71 | 71 | |
72 | 72 | |
73 | 73 | @pytest.mark.jsoc |
74 | 74 | @pytest.mark.remote_data |
75 | 75 | def test_query_recindex(jsoc_client): |
76 | keys = jsoc_client.query('hmi.V_45s[2013.07.03_08:42_TAI/4m]', key='T_REC', rec_index=True) | |
76 | keys = jsoc_client.query("hmi.V_45s[2013.07.03_08:42_TAI/4m]", key="T_REC", rec_index=True) | |
77 | 77 | index_values = list(keys.index.values) |
78 | assert all(['hmi.V_45s' in s for s in index_values]) | |
78 | assert all(["hmi.V_45s" in s for s in index_values]) | |
79 | 79 | assert len(keys) == 5 |
80 | 80 | |
81 | 81 | |
82 | 82 | def test_query_invalid_server(): |
83 | cfg = ServerConfig(name='TEST') | |
83 | cfg = ServerConfig(name="TEST") | |
84 | 84 | c = drms.Client(server=cfg) |
85 | 85 | with pytest.raises(DrmsOperationNotSupported): |
86 | keys = c.query('hmi.v_45s[2013.07.03_08:42_TAI/3m]', pkeys=True) | |
86 | keys = c.query("hmi.v_45s[2013.07.03_08:42_TAI/3m]", pkeys=True) | |
87 | 87 | |
88 | 88 | |
89 | 89 | @pytest.mark.jsoc |
90 | 90 | @pytest.mark.remote_data |
91 | 91 | def test_query_invalid_series(jsoc_client): |
92 | 92 | with pytest.raises(DrmsQueryError): |
93 | keys = jsoc_client.query('foo', key='T_REC') | |
93 | keys = jsoc_client.query("foo", key="T_REC") | |
94 | ||
95 | ||
96 | @pytest.mark.remote_data | |
97 | def test_query_hexadecimal_strings(): | |
98 | # Exercise the part of client.py that deals with hexadecimal strings | |
99 | c = drms.Client() | |
100 | c.query("hmi.v_45s[2014.01.01_00:00:35_TAI-2014.01.01_01:00:35_TAI]", key="**ALL**") |
7 | 7 | def test_series_list_all(kis_client): |
8 | 8 | slist = kis_client.series() |
9 | 9 | assert isinstance(slist, list) |
10 | assert 'hmi.v_45s' in (s.lower() for s in slist) | |
11 | assert 'hmi.m_720s' in (s.lower() for s in slist) | |
12 | assert 'hmi.ic_720s' in (s.lower() for s in slist) | |
13 | assert 'mdi.fd_v' in (s.lower() for s in slist) | |
10 | assert "hmi.v_45s" in (s.lower() for s in slist) | |
11 | assert "hmi.m_720s" in (s.lower() for s in slist) | |
12 | assert "hmi.ic_720s" in (s.lower() for s in slist) | |
13 | assert "mdi.fd_v" in (s.lower() for s in slist) | |
14 | 14 | |
15 | 15 | |
16 | 16 | @pytest.mark.kis |
17 | 17 | @pytest.mark.remote_data |
18 | @pytest.mark.parametrize('schema', ['hmi', 'mdi']) | |
18 | @pytest.mark.parametrize("schema", ["hmi", "mdi"]) | |
19 | 19 | def test_series_list_schemata(kis_client, schema): |
20 | regex = fr'{schema}\.' | |
20 | regex = rf"{schema}\." | |
21 | 21 | slist = kis_client.series(regex) |
22 | 22 | assert len(slist) > 0 |
23 | 23 | for sname in slist: |
24 | assert sname.startswith(f'{schema}.') | |
24 | assert sname.startswith(f"{schema}.") | |
25 | 25 | |
26 | 26 | |
27 | 27 | @pytest.mark.kis |
28 | 28 | @pytest.mark.remote_data |
29 | 29 | @pytest.mark.parametrize( |
30 | 'series, pkeys, segments', | |
30 | "series, pkeys, segments", | |
31 | 31 | [ |
32 | ('hmi.v_45s', ['T_REC', 'CAMERA'], ['Dopplergram']), | |
33 | ('hmi.m_720s', ['T_REC', 'CAMERA'], ['magnetogram']), | |
34 | ('hmi.v_avg120', ['CarrRot', 'CMLon'], ['mean', 'power', 'valid', 'Log']), | |
32 | ("hmi.v_45s", ["T_REC", "CAMERA"], ["Dopplergram"]), | |
33 | ("hmi.m_720s", ["T_REC", "CAMERA"], ["magnetogram"]), | |
34 | ("hmi.v_avg120", ["CarrRot", "CMLon"], ["mean", "power", "valid", "Log"]), | |
35 | 35 | ], |
36 | 36 | ) |
37 | 37 | def test_series_info_basic(kis_client, series, pkeys, segments): |
47 | 47 | @pytest.mark.kis |
48 | 48 | @pytest.mark.remote_data |
49 | 49 | def test_query_basic(kis_client): |
50 | keys, segs = kis_client.query('hmi.v_45s[2013.07.03_08:42_TAI/3m]', key='T_REC, CRLT_OBS', seg='Dopplergram') | |
50 | keys, segs = kis_client.query("hmi.v_45s[2013.07.03_08:42_TAI/3m]", key="T_REC, CRLT_OBS", seg="Dopplergram") | |
51 | 51 | assert len(keys) == 4 |
52 | for k in ['T_REC', 'CRLT_OBS']: | |
52 | for k in ["T_REC", "CRLT_OBS"]: | |
53 | 53 | assert k in keys.columns |
54 | 54 | assert len(segs) == 4 |
55 | assert 'Dopplergram' in segs.columns | |
55 | assert "Dopplergram" in segs.columns | |
56 | 56 | assert ((keys.CRLT_OBS - 3.14159).abs() < 0.0001).all() |
57 | 57 | |
58 | 58 | |
60 | 60 | @pytest.mark.remote_data |
61 | 61 | def test_not_supported_email(kis_client): |
62 | 62 | with pytest.raises(drms.DrmsOperationNotSupported): |
63 | kis_client.email = 'name@example.com' | |
63 | kis_client.email = "name@example.com" | |
64 | 64 | |
65 | 65 | |
66 | 66 | @pytest.mark.kis |
67 | 67 | @pytest.mark.remote_data |
68 | 68 | def test_not_supported_export(kis_client): |
69 | 69 | with pytest.raises(drms.DrmsOperationNotSupported): |
70 | kis_client.export('hmi.v_45s[2010.05.01_TAI]') | |
70 | kis_client.export("hmi.v_45s[2010.05.01_TAI]") | |
71 | 71 | with pytest.raises(drms.DrmsOperationNotSupported): |
72 | kis_client.export_from_id('KIS_20120101_123') | |
72 | kis_client.export_from_id("KIS_20120101_123") |
10 | 10 | |
11 | 11 | |
12 | 12 | def test_debug(): |
13 | helper(['--debug'], 'debug', True) | |
14 | helper([], 'debug', False) | |
13 | helper(["--debug"], "debug", True) | |
14 | helper([], "debug", False) | |
15 | 15 | |
16 | 16 | |
17 | 17 | def test_verbose(): |
18 | helper(['--verbose'], 'verbose', True) | |
19 | helper([], 'verbose', False) | |
18 | helper(["--verbose"], "verbose", True) | |
19 | helper([], "verbose", False) | |
20 | 20 | |
21 | 21 | |
22 | 22 | def test_version(): |
23 | 23 | with pytest.raises(SystemExit): |
24 | helper(['--version'], 'version', True) | |
24 | helper(["--version"], "version", True) | |
25 | 25 | |
26 | 26 | |
27 | 27 | def test_server(): |
28 | helper(['fake_url_server'], 'server', 'fake_url_server') | |
29 | helper([], 'server', 'jsoc') | |
28 | helper(["fake_url_server"], "server", "fake_url_server") | |
29 | helper([], "server", "jsoc") | |
30 | 30 | |
31 | 31 | |
32 | 32 | def test_email(): |
33 | helper(['--email', 'fake@gmail.com'], 'email', 'fake@gmail.com') | |
34 | helper([], 'email', None) | |
33 | helper(["--email", "fake@gmail.com"], "email", "fake@gmail.com") | |
34 | helper([], "email", None) | |
35 | 35 | |
36 | 36 | |
37 | 37 | def test_main_empty(): |
38 | 38 | sys.argv = [ |
39 | 'drms', | |
39 | "drms", | |
40 | 40 | ] |
41 | 41 | main() |
7 | 7 | def test_parse_keywords(): |
8 | 8 | info = [ |
9 | 9 | { |
10 | 'recscope': 'variable', | |
11 | 'units': 'none', | |
12 | 'name': 'cparms_sg000', | |
13 | 'defval': 'compress Rice', | |
14 | 'note': '', | |
15 | 'type': 'string', | |
10 | "recscope": "variable", | |
11 | "units": "none", | |
12 | "name": "cparms_sg000", | |
13 | "defval": "compress Rice", | |
14 | "note": "", | |
15 | "type": "string", | |
16 | 16 | }, |
17 | 17 | { |
18 | 'recscope': 'variable', | |
19 | 'units': 'none', | |
20 | 'name': 'mean_bzero', | |
21 | 'defval': '0', | |
22 | 'note': '', | |
23 | 'type': 'double', | |
18 | "recscope": "variable", | |
19 | "units": "none", | |
20 | "name": "mean_bzero", | |
21 | "defval": "0", | |
22 | "note": "", | |
23 | "type": "double", | |
24 | 24 | }, |
25 | 25 | { |
26 | 'recscope': 'variable', | |
27 | 'units': 'none', | |
28 | 'name': 'mean_bscale', | |
29 | 'defval': '0.25', | |
30 | 'note': '', | |
31 | 'type': 'double', | |
26 | "recscope": "variable", | |
27 | "units": "none", | |
28 | "name": "mean_bscale", | |
29 | "defval": "0.25", | |
30 | "note": "", | |
31 | "type": "double", | |
32 | 32 | }, |
33 | 33 | { |
34 | 'recscope': 'variable', | |
35 | 'units': 'TAI', | |
36 | 'name': 'MidTime', | |
37 | 'defval': '-4712.01.01_11:59_TAI', | |
38 | 'note': 'Midpoint of averaging interval', | |
39 | 'type': 'time', | |
34 | "recscope": "variable", | |
35 | "units": "TAI", | |
36 | "name": "MidTime", | |
37 | "defval": "-4712.01.01_11:59_TAI", | |
38 | "note": "Midpoint of averaging interval", | |
39 | "type": "time", | |
40 | 40 | }, |
41 | 41 | ] |
42 | 42 | exp = OrderedDict( |
43 | 43 | [ |
44 | ('name', ['cparms_sg000', 'mean_bzero', 'mean_bscale', 'MidTime']), | |
45 | ('type', ['string', 'double', 'double', 'time']), | |
46 | ('recscope', ['variable', 'variable', 'variable', 'variable']), | |
47 | ('defval', ['compress Rice', '0', '0.25', '-4712.01.01_11:59_TAI']), | |
48 | ('units', ['none', 'none', 'none', 'TAI']), | |
49 | ('note', ['', '', '', 'Midpoint of averaging interval']), | |
50 | ('linkinfo', [None, None, None, None]), | |
51 | ('is_time', [False, False, False, True]), | |
52 | ('is_integer', [False, False, False, False]), | |
53 | ('is_real', [False, True, True, False]), | |
54 | ('is_numeric', [False, True, True, False]), | |
44 | ("name", ["cparms_sg000", "mean_bzero", "mean_bscale", "MidTime"]), | |
45 | ("type", ["string", "double", "double", "time"]), | |
46 | ("recscope", ["variable", "variable", "variable", "variable"]), | |
47 | ("defval", ["compress Rice", "0", "0.25", "-4712.01.01_11:59_TAI"]), | |
48 | ("units", ["none", "none", "none", "TAI"]), | |
49 | ("note", ["", "", "", "Midpoint of averaging interval"]), | |
50 | ("linkinfo", [None, None, None, None]), | |
51 | ("is_time", [False, False, False, True]), | |
52 | ("is_integer", [False, False, False, False]), | |
53 | ("is_real", [False, True, True, False]), | |
54 | ("is_numeric", [False, True, True, False]), | |
55 | 55 | ] |
56 | 56 | ) |
57 | 57 | |
58 | 58 | exp = pd.DataFrame(data=exp) |
59 | exp.index = exp.pop('name') | |
59 | exp.index = exp.pop("name") | |
60 | 60 | assert drms.SeriesInfo._parse_keywords(info).equals(exp) |
61 | 61 | |
62 | 62 | |
63 | 63 | def test_parse_links(): |
64 | 64 | links = [ |
65 | {'name': 'BHARP', 'kind': 'DYNAMIC', 'note': 'Bharp', 'target': 'hmi.Bharp_720s'}, | |
66 | {'name': 'MHARP', 'kind': 'DYNAMIC', 'note': 'Mharp', 'target': 'hmi.Mharp_720s'}, | |
65 | {"name": "BHARP", "kind": "DYNAMIC", "note": "Bharp", "target": "hmi.Bharp_720s"}, | |
66 | {"name": "MHARP", "kind": "DYNAMIC", "note": "Mharp", "target": "hmi.Mharp_720s"}, | |
67 | 67 | ] |
68 | 68 | exp = OrderedDict( |
69 | 69 | [ |
70 | ('name', ['BHARP', 'MHARP']), | |
71 | ('target', ['hmi.Bharp_720s', 'hmi.Mharp_720s']), | |
72 | ('kind', ['DYNAMIC', 'DYNAMIC']), | |
73 | ('note', ['Bharp', 'Mharp']), | |
70 | ("name", ["BHARP", "MHARP"]), | |
71 | ("target", ["hmi.Bharp_720s", "hmi.Mharp_720s"]), | |
72 | ("kind", ["DYNAMIC", "DYNAMIC"]), | |
73 | ("note", ["Bharp", "Mharp"]), | |
74 | 74 | ] |
75 | 75 | ) |
76 | 76 | exp = pd.DataFrame(data=exp) |
77 | exp.index = exp.pop('name') | |
77 | exp.index = exp.pop("name") | |
78 | 78 | assert drms.SeriesInfo._parse_links(links).equals(exp) |
79 | 79 | |
80 | 80 | |
81 | 81 | def test_parse_segments(): |
82 | 82 | segments = [ |
83 | 83 | { |
84 | 'type': 'int', | |
85 | 'dims': 'VARxVAR', | |
86 | 'units': 'Gauss', | |
87 | 'protocol': 'fits', | |
88 | 'note': 'magnetogram', | |
89 | 'name': 'magnetogram', | |
84 | "type": "int", | |
85 | "dims": "VARxVAR", | |
86 | "units": "Gauss", | |
87 | "protocol": "fits", | |
88 | "note": "magnetogram", | |
89 | "name": "magnetogram", | |
90 | 90 | }, |
91 | 91 | { |
92 | 'type': 'char', | |
93 | 'dims': 'VARxVAR', | |
94 | 'units': 'Enumerated', | |
95 | 'protocol': 'fits', | |
96 | 'note': 'Mask for the patch', | |
97 | 'name': 'bitmap', | |
92 | "type": "char", | |
93 | "dims": "VARxVAR", | |
94 | "units": "Enumerated", | |
95 | "protocol": "fits", | |
96 | "note": "Mask for the patch", | |
97 | "name": "bitmap", | |
98 | 98 | }, |
99 | 99 | { |
100 | 'type': 'int', | |
101 | 'dims': 'VARxVAR', | |
102 | 'units': 'm/s', | |
103 | 'protocol': 'fits', | |
104 | 'note': 'Dopplergram', | |
105 | 'name': 'Dopplergram', | |
100 | "type": "int", | |
101 | "dims": "VARxVAR", | |
102 | "units": "m/s", | |
103 | "protocol": "fits", | |
104 | "note": "Dopplergram", | |
105 | "name": "Dopplergram", | |
106 | 106 | }, |
107 | 107 | ] |
108 | 108 | exp = OrderedDict( |
109 | 109 | [ |
110 | ('name', ['magnetogram', 'bitmap', 'Dopplergram']), | |
111 | ('type', ['int', 'char', 'int']), | |
112 | ('units', ['Gauss', 'Enumerated', 'm/s']), | |
113 | ('protocol', ['fits', 'fits', 'fits']), | |
114 | ('dims', ['VARxVAR', 'VARxVAR', 'VARxVAR']), | |
115 | ('note', ['magnetogram', 'Mask for the patch', 'Dopplergram']), | |
110 | ("name", ["magnetogram", "bitmap", "Dopplergram"]), | |
111 | ("type", ["int", "char", "int"]), | |
112 | ("units", ["Gauss", "Enumerated", "m/s"]), | |
113 | ("protocol", ["fits", "fits", "fits"]), | |
114 | ("dims", ["VARxVAR", "VARxVAR", "VARxVAR"]), | |
115 | ("note", ["magnetogram", "Mask for the patch", "Dopplergram"]), | |
116 | 116 | ] |
117 | 117 | ) |
118 | 118 | |
119 | 119 | exp = pd.DataFrame(data=exp) |
120 | exp.index = exp.pop('name') | |
120 | exp.index = exp.pop("name") | |
121 | 121 | assert drms.SeriesInfo._parse_segments(segments).equals(exp) |
122 | 122 | |
123 | 123 | |
124 | 124 | def test_repr(): |
125 | 125 | info = { |
126 | 'primekeys': ['CarrRot', 'CMLon'], | |
127 | 'retention': 1800, | |
128 | 'tapegroup': 1, | |
129 | 'archive': 1, | |
130 | 'primekeysinfo': [], | |
131 | 'unitsize': 1, | |
132 | 'note': 'Temporal averages of HMI Vgrams over 1/3 CR', | |
133 | 'dbindex': [None], | |
134 | 'links': [], | |
135 | 'segments': [], | |
136 | 'keywords': [], | |
126 | "primekeys": ["CarrRot", "CMLon"], | |
127 | "retention": 1800, | |
128 | "tapegroup": 1, | |
129 | "archive": 1, | |
130 | "primekeysinfo": [], | |
131 | "unitsize": 1, | |
132 | "note": "Temporal averages of HMI Vgrams over 1/3 CR", | |
133 | "dbindex": [None], | |
134 | "links": [], | |
135 | "segments": [], | |
136 | "keywords": [], | |
137 | 137 | } |
138 | assert repr(drms.SeriesInfo(info)) == '<SeriesInfo>' | |
139 | assert repr(drms.SeriesInfo(info, 'hmi')) == '<SeriesInfo: hmi>' | |
138 | assert repr(drms.SeriesInfo(info)) == "<SeriesInfo>" | |
139 | assert repr(drms.SeriesInfo(info, "hmi")) == "<SeriesInfo: hmi>" |
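The test_info diff above feeds lists of per-keyword dicts (as returned by JSOC's series-info query) into `drms.SeriesInfo._parse_keywords`, which pivots them into columns before building a DataFrame. A stdlib-only sketch of that pivot step; `columns_from_records` is a hypothetical helper name for illustration, not drms API:

```python
from collections import OrderedDict

def columns_from_records(records, fields):
    """Pivot a list of per-record dicts into an OrderedDict of column lists,
    one list per requested field, preserving field order."""
    return OrderedDict((field, [rec.get(field) for rec in records]) for field in fields)

# Two keyword entries shaped like the test fixtures above.
info = [
    {"name": "mean_bzero", "type": "double", "units": "none"},
    {"name": "MidTime", "type": "time", "units": "TAI"},
]
cols = columns_from_records(info, ["name", "type", "units"])
print(cols["units"])  # ['none', 'TAI']
```

The real implementation wraps the resulting columns in a `pd.DataFrame` and moves the `name` column into the index, as `exp.index = exp.pop("name")` in the tests shows.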
4 | 4 | import drms |
5 | 5 | |
6 | 6 | data_tai = [ |
7 | ('2010.05.01_TAI', pd.Timestamp('2010-05-01 00:00:00')), | |
8 | ('2010.05.01_00:00_TAI', pd.Timestamp('2010-05-01 00:00:00')), | |
9 | ('2010.05.01_00:00:00_TAI', pd.Timestamp('2010-05-01 00:00:00')), | |
10 | ('2010.05.01_01:23:45_TAI', pd.Timestamp('2010-05-01 01:23:45')), | |
11 | ('2013.12.21_23:32_TAI', pd.Timestamp('2013-12-21 23:32:00')), | |
12 | ('2013.12.21_23:32:34_TAI', pd.Timestamp('2013-12-21 23:32:34')), | |
7 | ("2010.05.01_TAI", pd.Timestamp("2010-05-01 00:00:00")), | |
8 | ("2010.05.01_00:00_TAI", pd.Timestamp("2010-05-01 00:00:00")), | |
9 | ("2010.05.01_00:00:00_TAI", pd.Timestamp("2010-05-01 00:00:00")), | |
10 | ("2010.05.01_01:23:45_TAI", pd.Timestamp("2010-05-01 01:23:45")), | |
11 | ("2013.12.21_23:32_TAI", pd.Timestamp("2013-12-21 23:32:00")), | |
12 | ("2013.12.21_23:32:34_TAI", pd.Timestamp("2013-12-21 23:32:34")), | |
13 | 13 | ] |
14 | 14 | data_tai_in = [data[0] for data in data_tai] |
15 | 15 | data_tai_out = pd.Series([data[1] for data in data_tai]) |
16 | 16 | |
17 | 17 | |
18 | @pytest.mark.parametrize('time_string, expected', data_tai) | |
18 | @pytest.mark.parametrize("time_string, expected", data_tai) | |
19 | 19 | def test_tai_string(time_string, expected): |
20 | 20 | assert drms.to_datetime(time_string) == expected |
21 | 21 | |
22 | 22 | |
23 | 23 | @pytest.mark.parametrize( |
24 | 'time_string, expected', | |
24 | "time_string, expected", | |
25 | 25 | [ |
26 | ('2010-05-01T00:00Z', pd.Timestamp('2010-05-01 00:00:00')), | |
27 | ('2010-05-01T00:00:00Z', pd.Timestamp('2010-05-01 00:00:00')), | |
28 | ('2010-05-01T01:23:45Z', pd.Timestamp('2010-05-01 01:23:45')), | |
29 | ('2013-12-21T23:32Z', pd.Timestamp('2013-12-21 23:32:00')), | |
30 | ('2013-12-21T23:32:34Z', pd.Timestamp('2013-12-21 23:32:34')), | |
31 | ('2010-05-01 00:00Z', pd.Timestamp('2010-05-01 00:00:00')), | |
32 | ('2010-05-01 00:00:00Z', pd.Timestamp('2010-05-01 00:00:00')), | |
33 | ('2010-05-01 01:23:45Z', pd.Timestamp('2010-05-01 01:23:45')), | |
34 | ('2013-12-21 23:32Z', pd.Timestamp('2013-12-21 23:32:00')), | |
35 | ('2013-12-21 23:32:34Z', pd.Timestamp('2013-12-21 23:32:34')), | |
26 | ("2010-05-01T00:00Z", pd.Timestamp("2010-05-01 00:00:00")), | |
27 | ("2010-05-01T00:00:00Z", pd.Timestamp("2010-05-01 00:00:00")), | |
28 | ("2010-05-01T01:23:45Z", pd.Timestamp("2010-05-01 01:23:45")), | |
29 | ("2013-12-21T23:32Z", pd.Timestamp("2013-12-21 23:32:00")), | |
30 | ("2013-12-21T23:32:34Z", pd.Timestamp("2013-12-21 23:32:34")), | |
31 | ("2010-05-01 00:00Z", pd.Timestamp("2010-05-01 00:00:00")), | |
32 | ("2010-05-01 00:00:00Z", pd.Timestamp("2010-05-01 00:00:00")), | |
33 | ("2010-05-01 01:23:45Z", pd.Timestamp("2010-05-01 01:23:45")), | |
34 | ("2013-12-21 23:32Z", pd.Timestamp("2013-12-21 23:32:00")), | |
35 | ("2013-12-21 23:32:34Z", pd.Timestamp("2013-12-21 23:32:34")), | |
36 | 36 | ], |
37 | 37 | ) |
38 | 38 | def test_z_string(time_string, expected): |
39 | 39 | assert drms.to_datetime(time_string) == expected |
40 | 40 | |
41 | 41 | |
42 | @pytest.mark.xfail(reason='pandas does not support leap seconds') | |
42 | @pytest.mark.xfail(reason="pandas does not support leap seconds") | |
43 | 43 | @pytest.mark.parametrize( |
44 | 'time_string, expected', | |
44 | "time_string, expected", | |
45 | 45 | [ |
46 | ('2012-06-30T23:59:60Z', '2012-06-30 23:59:60'), | |
47 | ('2015-06-30T23:59:60Z', '2015-06-30 23:59:60'), | |
48 | ('2016-12-31T23:59:60Z', '2016-12-31 23:59:60'), | |
46 | ("2012-06-30T23:59:60Z", "2012-06-30 23:59:60"), | |
47 | ("2015-06-30T23:59:60Z", "2015-06-30 23:59:60"), | |
48 | ("2016-12-31T23:59:60Z", "2016-12-31 23:59:60"), | |
49 | 49 | ], |
50 | 50 | ) |
51 | 51 | def test_z_leap_string(time_string, expected): |
53 | 53 | |
54 | 54 | |
55 | 55 | @pytest.mark.parametrize( |
56 | 'time_string, expected', | |
56 | "time_string, expected", | |
57 | 57 | [ |
58 | ('2013.12.21_23:32:34_TAI', pd.Timestamp('2013-12-21 23:32:34')), | |
59 | ('2013.12.21_23:32:34_UTC', pd.Timestamp('2013-12-21 23:32:34')), | |
60 | ('2013.12.21_23:32:34Z', pd.Timestamp('2013-12-21 23:32:34')), | |
58 | ("2013.12.21_23:32:34_TAI", pd.Timestamp("2013-12-21 23:32:34")), | |
59 | ("2013.12.21_23:32:34_UTC", pd.Timestamp("2013-12-21 23:32:34")), | |
60 | ("2013.12.21_23:32:34Z", pd.Timestamp("2013-12-21 23:32:34")), | |
61 | 61 | ], |
62 | 62 | ) |
63 | 63 | def test_force_string(time_string, expected): |
65 | 65 | |
66 | 66 | |
67 | 67 | @pytest.mark.parametrize( |
68 | 'time_series, expected', | |
68 | "time_series, expected", | |
69 | 69 | [ |
70 | 70 | (data_tai_in, data_tai_out), |
71 | 71 | (pd.Series(data_tai_in), data_tai_out), |
78 | 78 | |
79 | 79 | |
80 | 80 | data_invalid = [ |
81 | ('2010.05.01_TAI', False), | |
82 | ('2010.05.01_00:00_TAI', False), | |
83 | ('', True), | |
84 | ('1600', True), | |
85 | ('foo', True), | |
86 | ('2013.12.21_23:32:34_TAI', False), | |
81 | ("2010.05.01_TAI", False), | |
82 | ("2010.05.01_00:00_TAI", False), | |
83 | ("", True), | |
84 | ("1600", True), | |
85 | ("foo", True), | |
86 | ("2013.12.21_23:32:34_TAI", False), | |
87 | 87 | ] |
88 | 88 | data_invalid_in = [data[0] for data in data_invalid] |
89 | 89 | data_invalid_out = pd.Series([data[1] for data in data_invalid]) |
90 | 90 | |
91 | 91 | |
92 | @pytest.mark.parametrize('time_string, expected', data_invalid) | |
92 | @pytest.mark.parametrize("time_string, expected", data_invalid) | |
93 | 93 | def test_corner_case(time_string, expected): |
94 | 94 | assert pd.isnull(drms.to_datetime(time_string)) == expected |
95 | 95 | assert isinstance(drms.to_datetime([]), pd.Series) |
97 | 97 | |
98 | 98 | |
99 | 99 | @pytest.mark.parametrize( |
100 | 'time_series, expected', | |
100 | "time_series, expected", | |
101 | 101 | [ |
102 | 102 | (data_invalid_in, data_invalid_out), |
103 | 103 | (pd.Series(data_invalid_in), data_invalid_out), |
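The parametrized TAI cases above all reduce to the same normalisation: drop the `_TAI` suffix, turn `_` into a space, and turn the first two `.` into `-` before parsing. A stdlib sketch of that pipeline (`parse_tai` is a hypothetical name; the real `drms.to_datetime` does this with pandas and returns `NaT` for unparsable input):

```python
from datetime import datetime

def parse_tai(tstr):
    """Normalise a DRMS-style TAI time string such as
    '2013.12.21_23:32:34_TAI' and parse it into a naive datetime."""
    s = tstr.replace("_TAI", "").replace("_", " ").replace(".", "-", 2)
    for fmt in ("%Y-%m-%d %H:%M:%S", "%Y-%m-%d %H:%M", "%Y-%m-%d"):
        try:
            return datetime.strptime(s, fmt)
        except ValueError:
            continue
    return None  # analogous to NaT from pandas' errors="coerce"

print(parse_tai("2013.12.21_23:32:34_TAI"))  # 2013-12-21 23:32:34
```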
4 | 4 | |
5 | 5 | |
6 | 6 | @pytest.mark.parametrize( |
7 | 'in_obj, expected', | |
7 | "in_obj, expected", | |
8 | 8 | [ |
9 | 9 | ("", []), |
10 | ('asd', ['asd']), | |
11 | ('aa,bb,cc', ['aa', 'bb', 'cc']), | |
12 | ('aa, bb, cc', ['aa', 'bb', 'cc']), | |
13 | (' aa,bb, cc, dd', ['aa', 'bb', 'cc', 'dd']), | |
14 | ('aa,\tbb,cc, dd ', ['aa', 'bb', 'cc', 'dd']), | |
15 | ('aa,\tbb,cc, dd ', ['aa', 'bb', 'cc', 'dd']), | |
10 | ("asd", ["asd"]), | |
11 | ("aa,bb,cc", ["aa", "bb", "cc"]), | |
12 | ("aa, bb, cc", ["aa", "bb", "cc"]), | |
13 | (" aa,bb, cc, dd", ["aa", "bb", "cc", "dd"]), | |
14 | ("aa,\tbb,cc, dd ", ["aa", "bb", "cc", "dd"]), | |
15 | ("aa,\tbb,cc, dd ", ["aa", "bb", "cc", "dd"]), | |
16 | 16 | ([], []), |
17 | (['a', 'b', 'c'], ['a', 'b', 'c']), | |
18 | (('a', 'b', 'c'), ['a', 'b', 'c']), | |
17 | (["a", "b", "c"], ["a", "b", "c"]), | |
18 | (("a", "b", "c"), ["a", "b", "c"]), | |
19 | 19 | ], |
20 | 20 | ) |
21 | 21 | def test_split_arg(in_obj, expected): |
26 | 26 | |
27 | 27 | |
28 | 28 | @pytest.mark.parametrize( |
29 | 'ds_string, expected', | |
29 | "ds_string, expected", | |
30 | 30 | [ |
31 | ('hmi.v_45s', 'hmi.v_45s'), | |
32 | ('hmi.v_45s[2010.05.01_TAI]', 'hmi.v_45s'), | |
33 | ('hmi.v_45s[2010.05.01_TAI/365d@1d]', 'hmi.v_45s'), | |
34 | ('hmi.v_45s[2010.05.01_TAI/365d@1d][?QUALITY>=0?]', 'hmi.v_45s'), | |
35 | ('hmi.v_45s[2010.05.01_TAI/1d@6h]{Dopplergram}', 'hmi.v_45s'), | |
31 | ("hmi.v_45s", "hmi.v_45s"), | |
32 | ("hmi.v_45s[2010.05.01_TAI]", "hmi.v_45s"), | |
33 | ("hmi.v_45s[2010.05.01_TAI/365d@1d]", "hmi.v_45s"), | |
34 | ("hmi.v_45s[2010.05.01_TAI/365d@1d][?QUALITY>=0?]", "hmi.v_45s"), | |
35 | ("hmi.v_45s[2010.05.01_TAI/1d@6h]{Dopplergram}", "hmi.v_45s"), | |
36 | 36 | ], |
37 | 37 | ) |
38 | 38 | def test_extract_series(ds_string, expected): |
40 | 40 | |
41 | 41 | |
42 | 42 | @pytest.mark.parametrize( |
43 | 'arg, exp', | |
43 | "arg, exp", | |
44 | 44 | [ |
45 | (pd.Series(['1.0', '2', -3]), pd.Series([1.0, 2.0, -3.0])), | |
46 | (pd.Series(['1.0', 'apple', -3]), pd.Series([1.0, float('nan'), -3.0])), | |
45 | (pd.Series(["1.0", "2", -3]), pd.Series([1.0, 2.0, -3.0])), | |
46 | (pd.Series(["1.0", "apple", -3]), pd.Series([1.0, float("nan"), -3.0])), | |
47 | 47 | ], |
48 | 48 | ) |
49 | 49 | def test_pd_to_numeric_coerce(arg, exp): |
51 | 51 | |
52 | 52 | |
53 | 53 | @pytest.mark.parametrize( |
54 | 'arg, exp', | |
54 | "arg, exp", | |
55 | 55 | [ |
56 | 56 | ( |
57 | pd.Series(['2016-04-01 00:00:00', '2016-04-01 06:00:00']), | |
58 | pd.Series([pd.Timestamp('2016-04-01 00:00:00'), pd.Timestamp('2016-04-01 06:00:00')]), | |
57 | pd.Series(["2016-04-01 00:00:00", "2016-04-01 06:00:00"]), | |
58 | pd.Series([pd.Timestamp("2016-04-01 00:00:00"), pd.Timestamp("2016-04-01 06:00:00")]), | |
59 | 59 | ) |
60 | 60 | ], |
61 | 61 | ) |
2 | 2 | import numpy as np |
3 | 3 | import pandas as pd |
4 | 4 | |
5 | __all__ = ['to_datetime'] | |
5 | __all__ = ["to_datetime"] | |
6 | 6 | |
7 | 7 | |
8 | 8 | def _pd_to_datetime_coerce(arg): |
9 | return pd.to_datetime(arg, errors='coerce') | |
9 | return pd.to_datetime(arg, errors="coerce") | |
10 | 10 | |
11 | 11 | |
12 | 12 | def _pd_to_numeric_coerce(arg): |
13 | return pd.to_numeric(arg, errors='coerce') | |
13 | return pd.to_numeric(arg, errors="coerce") | |
14 | 14 | |
15 | 15 | |
16 | 16 | def _split_arg(arg): |
18 | 18 | Split a comma-separated string into a list. |
19 | 19 | """ |
20 | 20 | if isinstance(arg, str): |
21 | arg = [it for it in re.split(r'[\s,]+', arg) if it] | |
21 | arg = [it for it in re.split(r"[\s,]+", arg) if it] | |
22 | 22 | return arg |
23 | 23 | |
24 | 24 | |
26 | 26 | """ |
27 | 27 | Extract series name from record set. |
28 | 28 | """ |
29 | m = re.match(r'^\s*([\w\.]+).*$', ds) | |
29 | m = re.match(r"^\s*([\w\.]+).*$", ds) | |
30 | 30 | return m.group(1) if m is not None else None |
31 | 31 | |
32 | 32 | |
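The series-name extraction above can be exercised standalone; this sketch inlines the same regex (the helper name `extract_series` here is illustrative, not the module's public API):

```python
import re

# Same pattern as above: capture the leading run of word characters
# and dots from a DRMS record-set query, ignoring the rest.
def extract_series(ds):
    m = re.match(r"^\s*([\w\.]+).*$", ds)
    return m.group(1) if m is not None else None

print(extract_series("hmi.v_45s[2010.05.01_TAI/1d@6h]{Dopplergram}"))  # hmi.v_45s
```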
64 | 64 | Pandas series or a single Timestamp object. |
65 | 65 | """ |
66 | 66 | s = pd.Series(tstr, dtype=object).astype(str) |
67 | if force or s.str.endswith('_TAI').any(): | |
68 | s = s.str.replace('_TAI', "") | |
69 | s = s.str.replace('_', ' ') | |
70 | s = s.str.replace('.', '-', regex=True, n=2) | |
67 | if force or s.str.endswith("_TAI").any(): | |
68 | s = s.str.replace("_TAI", "") | |
69 | s = s.str.replace("_", " ") | |
70 | s = s.str.replace(r"\.", "-", regex=True, n=2)  # escape the dot: with regex=True, "." would match any character | |
71 | 71 | res = _pd_to_datetime_coerce(s) |
72 | 72 | res = res.dt.tz_localize(None) |
73 | 73 | return res.iloc[0] if (len(res) == 1) and np.isscalar(tstr) else res |
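The TAI-string normalization above can be mirrored in a few lines (a sketch assuming only pandas; `regex=False` is passed here so the dot is replaced literally):

```python
import pandas as pd

# Normalize a DRMS TAI time string the way to_datetime() above does:
# drop the "_TAI" suffix, turn "_" into a space, and the first two
# "." separators into "-", then let pandas parse the result.
s = pd.Series(["2016.04.01_06:00:00_TAI"], dtype=object).astype(str)
s = s.str.replace("_TAI", "")
s = s.str.replace("_", " ")
s = s.str.replace(".", "-", n=2, regex=False)  # literal dot
res = pd.to_datetime(s, errors="coerce")
print(res.iloc[0])  # 2016-04-01 06:00:00
```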
8 | 8 | except Exception: |
9 | 9 | import warnings |
10 | 10 | |
11 | warnings.warn( | |
12 | f'could not determine {__name__.split(".")[0]} package version; this indicates a broken installation' | |
13 | ) | |
11 | warnings.warn(f'could not determine {__name__.split(".")[0]} package version; this indicates a broken installation') | |
14 | 12 | del warnings |
15 | 13 | |
16 | version = '0.0.0' | |
14 | version = "0.0.0" | |
17 | 15 | |
18 | 16 | |
19 | 17 | # We use LooseVersion to define major, minor, micro, but ignore any suffixes. |
36 | 34 | |
37 | 35 | del split_version # clean up namespace. |
38 | 36 | |
39 | release = 'dev' not in version | |
37 | release = "dev" not in version |
0 | 0 | Metadata-Version: 2.1 |
1 | 1 | Name: drms |
2 | Version: 0.6.2 | |
3 | Summary: "Access HMI, AIA and MDI data with Python from the public JSOC DRMS server" | |
2 | Version: 0.6.3 | |
3 | Summary: Access HMI, AIA and MDI data with Python from the Stanford JSOC DRMS |
4 | 4 | Home-page: https://sunpy.org |
5 | 5 | Author: The SunPy Community |
6 | 6 | Author-email: sunpy@googlegroups.com |
7 | 7 | License: BSD 2-Clause |
8 | Description: ==== | |
9 | drms | |
10 | ==== | |
11 | ||
12 | `Docs <https://docs.sunpy.org/projects/drms/>`__ | | |
13 | `Tutorial <https://docs.sunpy.org/projects/drms/en/latest/tutorial.html>`__ | | |
14 | `Github <https://github.com/sunpy/drms>`__ | | |
15 | `PyPI <https://pypi.python.org/pypi/drms>`__ | |
16 | ||
17 | |JOSS| |Zenodo| | |
18 | ||
19 | .. |JOSS| image:: https://joss.theoj.org/papers/10.21105/joss.01614/status.svg | |
20 | :target: https://doi.org/10.21105/joss.01614 | |
21 | .. |Zenodo| image:: https://zenodo.org/badge/58651845.svg | |
22 | :target: https://zenodo.org/badge/latestdoi/58651845 | |
23 | ||
24 | The ``drms`` module provides an easy-to-use interface for accessing HMI, AIA and MDI data with Python. | |
25 | It uses the publicly accessible `JSOC <http://jsoc.stanford.edu/>`__ DRMS server by default, but can also be used with local `NetDRMS <http://jsoc.stanford.edu/netdrms/>`__ sites. | |
26 | More information, including a detailed tutorial, is available in the `Documentation <https://docs.sunpy.org/projects/drms/>`__. | |
27 | ||
28 | Getting Help | |
29 | ------------ | |
30 | ||
31 | This is a SunPy-affiliated package. For more information or to ask questions | |
32 | about drms or SunPy, check out: | |
33 | ||
34 | - `drms Documentation`_ | |
35 | - `SunPy Matrix Channel`_ | |
36 | - `SunPy mailing list`_ | |
37 | ||
38 | .. _drms Documentation: https://docs.sunpy.org/projects/drms/en/latest/ | |
39 | .. _SunPy Matrix Channel: https://riot.im/app/#/room/#sunpy:matrix.org | |
40 | ||
41 | Contributing | |
42 | ------------ | |
43 | ||
44 | If you would like to get involved, start by joining the `SunPy mailing list`_ and check out the `Developers Guide`_ section of the SunPy docs. | |
45 | Stop by our chat room `#sunpy:matrix.org`_ if you have any questions. | |
46 | Help is always welcome so let us know what you like to work on, or check out the `issues page`_ for the list of known outstanding items. | |
47 | ||
48 | For more information on contributing to SunPy and to DRMS, please read our `Newcomers guide`_. | |
49 | ||
50 | .. _SunPy mailing list: https://groups.google.com/forum/#!forum/sunpy | |
51 | .. _Developers Guide: https://docs.sunpy.org/en/latest/dev_guide/index.html | |
52 | .. _`#sunpy:matrix.org`: https://app.element.io/#/room/#sunpy:openastronomy.org | |
53 | .. _issues page: https://github.com/sunpy/drms/issues | |
54 | .. _Newcomers guide: https://docs.sunpy.org/en/latest/dev_guide/newcomers.html | |
55 | ||
56 | Code of Conduct (CoC) | |
57 | --------------------- | |
58 | ||
59 | When you are interacting with the SunPy community you are asked to follow our `code of conduct`_. | |
60 | ||
61 | .. _code of conduct: https://docs.sunpy.org/en/latest/code_of_conduct.html | |
62 | ||
63 | Citation | |
64 | -------- | |
65 | ||
66 | If you use ``drms`` in your work, please consider citing our `paper`_. | |
67 | ||
68 | .. code :: bibtex | |
69 | ||
70 | @article{Glogowski2019, | |
71 | doi = {10.21105/joss.01614}, | |
72 | url = {https://doi.org/10.21105/joss.01614}, | |
73 | year = {2019}, | |
74 | publisher = {The Open Journal}, | |
75 | volume = {4}, | |
76 | number = {40}, | |
77 | pages = {1614}, | |
78 | author = {Kolja Glogowski and Monica G. Bobra and Nitin Choudhary and Arthur B. Amezcua and Stuart J. Mumford}, | |
79 | title = {drms: A Python package for accessing HMI and AIA data}, | |
80 | journal = {Journal of Open Source Software} | |
81 | } | |
82 | ||
83 | .. _paper: https://doi.org/10.21105/joss.01614 | |
84 | ||
85 | Acknowledgements | |
86 | ---------------- | |
87 | ||
88 | Kolja Glogowski has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement no. 307117. | |
89 | ||
90 | 8 | Keywords: solar physics,solar,science,data |
91 | 9 | Platform: any |
92 | 10 | Classifier: Development Status :: 5 - Production/Stable |
96 | 14 | Classifier: Operating System :: OS Independent |
97 | 15 | Classifier: Programming Language :: Python |
98 | 16 | Classifier: Programming Language :: Python :: 3 |
99 | Classifier: Programming Language :: Python :: 3.7 | |
100 | 17 | Classifier: Programming Language :: Python :: 3.8 |
101 | 18 | Classifier: Programming Language :: Python :: 3.9 |
19 | Classifier: Programming Language :: Python :: 3.10 | |
102 | 20 | Classifier: Topic :: Scientific/Engineering :: Astronomy |
103 | 21 | Classifier: Topic :: Scientific/Engineering :: Physics |
104 | 22 | Provides: drms |
105 | Requires-Python: >=3.7 | |
23 | Requires-Python: >=3.8 | |
106 | 24 | Description-Content-Type: text/x-rst |
107 | 25 | Provides-Extra: tests |
108 | 26 | Provides-Extra: docs |
109 | 27 | Provides-Extra: dev |
110 | 28 | Provides-Extra: all |
29 | License-File: LICENSE.rst | |
30 | ||
31 | ==== | |
32 | drms | |
33 | ==== | |
34 | ||
35 | `Docs <https://docs.sunpy.org/projects/drms/>`__ | | |
36 | `Tutorial <https://docs.sunpy.org/projects/drms/en/latest/tutorial.html>`__ | | |
37 | `Github <https://github.com/sunpy/drms>`__ | | |
38 | `PyPI <https://pypi.org/project/drms/>`__ | |
39 | ||
40 | |JOSS| |Zenodo| | |
41 | ||
42 | .. |JOSS| image:: https://joss.theoj.org/papers/10.21105/joss.01614/status.svg | |
43 | :target: https://doi.org/10.21105/joss.01614 | |
44 | .. |Zenodo| image:: https://zenodo.org/badge/58651845.svg | |
45 | :target: https://zenodo.org/badge/latestdoi/58651845 | |
46 | ||
47 | The ``drms`` module provides an easy-to-use interface for accessing HMI, AIA and MDI data with Python. | |
48 | It uses the publicly accessible `JSOC <http://jsoc.stanford.edu/>`__ DRMS server by default, but can also be used with local `NetDRMS <http://jsoc.stanford.edu/netdrms/>`__ sites. | |
49 | More information, including a detailed tutorial, is available in the `Documentation <https://docs.sunpy.org/projects/drms/>`__. | |
50 | ||
51 | Getting Help | |
52 | ------------ | |
53 | For more information or to ask questions about ``drms``, check out: | |
54 | ||
55 | - `drms Documentation <https://docs.sunpy.org/projects/drms/en/latest/>`__ | |
56 | ||
57 | .. _drms Documentation: https://docs.sunpy.org/projects/drms/en/latest/ | |
58 | ||
59 | Contributing | |
60 | ------------ | |
61 | If you would like to get involved, start by joining the `SunPy Chat`_ and read our `Newcomers' guide <https://docs.sunpy.org/en/latest/dev_guide/newcomers.html>`__. | |
62 | Help is always welcome so let us know what you would like to work on, or check out the `issues page <https://github.com/sunpy/drms/issues>`__ for a list of known outstanding items. |
63 | ||
64 | Citation | |
65 | -------- | |
66 | If you use ``drms`` in your work, please cite our `paper <https://doi.org/10.21105/joss.01614>`__. | |
67 | ||
68 | .. code :: bibtex | |
69 | ||
70 | @article{Glogowski2019, | |
71 | doi = {10.21105/joss.01614}, | |
72 | url = {https://doi.org/10.21105/joss.01614}, | |
73 | year = {2019}, | |
74 | publisher = {The Open Journal}, | |
75 | volume = {4}, | |
76 | number = {40}, | |
77 | pages = {1614}, | |
78 | author = {Kolja Glogowski and Monica G. Bobra and Nitin Choudhary and Arthur B. Amezcua and Stuart J. Mumford}, | |
79 | title = {drms: A Python package for accessing HMI and AIA data}, | |
80 | journal = {Journal of Open Source Software} | |
81 | } | |
82 | ||
83 | Code of Conduct (CoC) | |
84 | --------------------- | |
85 | ||
86 | When you are interacting with the SunPy community you are asked to follow our `code of conduct <https://docs.sunpy.org/en/latest/code_of_conduct.html>`__. | |
87 | ||
88 | Acknowledgements | |
89 | ---------------- | |
90 | Kolja Glogowski has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement no. 307117. | |
91 | ||
92 | .. _SunPy Chat: https://openastronomy.element.io/#/room/#sunpy:openastronomy.org |
20 | 20 | email = os.environ["JSOC_EMAIL"] |
21 | 21 | |
22 | 22 | # Download directory |
23 | out_dir = os.path.join('downloads') | |
23 | out_dir = os.path.join("downloads") | |
24 | 24 | |
25 | 25 | # Create download directory if it does not exist yet. |
26 | 26 | if not os.path.exists(out_dir): |
29 | 29 | ############################################################################### |
30 | 30 | # Construct the DRMS query string: "Series[timespan][wavelength]{data segments}" |
31 | 31 | |
32 | qstr = 'aia.lev1_euv_12s[2015-10-17T04:33:30.000/1m@12s][171]{image}' | |
33 | print(f'Data export query:\n {qstr}\n') | |
32 | qstr = "aia.lev1_euv_12s[2015-10-17T04:33:30.000/1m@12s][171]{image}" | |
33 | print(f"Data export query:\n {qstr}\n") | |
34 | 34 | |
35 | 35 | ############################################################################### |
36 | 36 | # Construct the dictionary specifying that we want to request a cutout. |
40 | 40 | # ``r`` controls the use of sub-pixel registration. |
41 | 41 | # ``c`` controls whether off-limb pixels are filled with NaNs. |
42 | 42 | # For additional details about ``im_patch``, see the `documentation <http://jsoc.stanford.edu/doxygen_html/group__im__patch.html>`_. |
43 | process = {'im_patch': { | |
44 | 't_ref': '2015-10-17T04:33:30.000', | |
45 | 't': 0, | |
46 | 'r': 0, | |
47 | 'c': 0, | |
48 | 'locunits': 'arcsec', | |
49 | 'boxunits': 'arcsec', | |
50 | 'x': -517.2, | |
51 | 'y': -246, | |
52 | 'width': 345.6, | |
53 | 'height': 345.6, | |
54 | }} | |
43 | process = { | |
44 | "im_patch": { | |
45 | "t_ref": "2015-10-17T04:33:30.000", | |
46 | "t": 0, | |
47 | "r": 0, | |
48 | "c": 0, | |
49 | "locunits": "arcsec", | |
50 | "boxunits": "arcsec", | |
51 | "x": -517.2, | |
52 | "y": -246, | |
53 | "width": 345.6, | |
54 | "height": 345.6, | |
55 | } | |
56 | } | |
55 | 57 | |
56 | 58 | # Submit export request using the 'fits' protocol |
57 | print('Submitting export request...') | |
59 | print("Submitting export request...") | |
58 | 60 | result = client.export( |
59 | 61 | qstr, |
60 | method='url', | |
61 | protocol='fits', | |
62 | method="url", | |
63 | protocol="fits", | |
62 | 64 | email=email, |
63 | 65 | process=process, |
64 | 66 | ) |
65 | 67 | |
66 | 68 | # Print request URL. |
67 | print(f'\nRequest URL: {result.request_url}') | |
68 | print(f'{int(len(result.urls))} file(s) available for download.\n') | |
69 | print(f"\nRequest URL: {result.request_url}") | |
70 | print(f"{int(len(result.urls))} file(s) available for download.\n") | |
69 | 71 | |
70 | 72 | # Download selected files. |
71 | 73 | result.wait() |
72 | 74 | result.download(out_dir) |
73 | print('Download finished.') | |
75 | print("Download finished.") | |
74 | 76 | print(f'\nDownload directory:\n "{os.path.abspath(out_dir)}"\n') |
25 | 25 | email = os.environ["JSOC_EMAIL"] |
26 | 26 | |
27 | 27 | # Download directory |
28 | out_dir = os.path.join('downloads') | |
28 | out_dir = os.path.join("downloads") | |
29 | 29 | |
30 | 30 | # Create download directory if it does not exist yet. |
31 | 31 | if not os.path.exists(out_dir): |
34 | 34 | ############################################################################### |
35 | 35 | # Construct the DRMS query string: "Series[harpnum][timespan]{data segments}" |
36 | 36 | |
37 | qstr = 'hmi.sharp_720s[7451][2020.09.27_00:00:00_TAI]{continuum, magnetogram, field}' | |
38 | print(f'Data export query:\n {qstr}\n') | |
37 | qstr = "hmi.sharp_720s[7451][2020.09.27_00:00:00_TAI]{continuum, magnetogram, field}" | |
38 | print(f"Data export query:\n {qstr}\n") | |
39 | 39 | |
40 | 40 | # Submit export request, defaults to method='url_quick' and protocol='as-is' |
41 | print('Submitting export request...') | |
41 | print("Submitting export request...") | |
42 | 42 | result = client.export(qstr, email=email) |
43 | print(f'{int(len(result.urls))} file(s) available for download.\n') | |
43 | print(f"{int(len(result.urls))} file(s) available for download.\n") | |
44 | 44 | |
45 | 45 | # Download selected files. |
46 | 46 | result.download(out_dir) |
47 | print('Download finished.') | |
47 | print("Download finished.") | |
48 | 48 | print(f'\nDownload directory:\n "{os.path.abspath(out_dir)}"\n') |
26 | 26 | |
27 | 27 | # Use 'as-is' instead of 'fits', if record keywords are not needed in the |
28 | 28 | # FITS header. This greatly reduces the server load! |
29 | export_protocol = 'fits' | |
29 | export_protocol = "fits" | |
30 | 30 | |
31 | 31 | # Download directory |
32 | out_dir = os.path.join('downloads') | |
32 | out_dir = os.path.join("downloads") | |
33 | 33 | |
34 | 34 | # Create download directory if it does not exist yet. |
35 | 35 | if not os.path.exists(out_dir): |
38 | 38 | ############################################################################### |
39 | 39 | # Construct the DRMS query string: "Series[harpnum][timespan]{data segments}" |
40 | 40 | |
41 | qstr = 'hmi.sharp_720s[4864][2014.11.30_00:00:00_TAI/1d@8h]{continuum, magnetogram, field}' | |
42 | print(f'Data export query:\n {qstr}\n') | |
41 | qstr = "hmi.sharp_720s[4864][2014.11.30_00:00:00_TAI/1d@8h]{continuum, magnetogram, field}" | |
42 | print(f"Data export query:\n {qstr}\n") | |
43 | 43 | |
44 | 44 | # Submit export request using the 'fits' protocol |
45 | print('Submitting export request...') | |
46 | result = client.export(qstr, method='url', protocol=export_protocol, email=email) | |
45 | print("Submitting export request...") | |
46 | result = client.export(qstr, method="url", protocol=export_protocol, email=email) | |
47 | 47 | |
48 | 48 | # Print request URL. |
49 | print(f'\nRequest URL: {result.request_url}') | |
50 | print(f'{int(len(result.urls))} file(s) available for download.\n') | |
49 | print(f"\nRequest URL: {result.request_url}") | |
50 | print(f"{int(len(result.urls))} file(s) available for download.\n") | |
51 | 51 | |
52 | 52 | # Download selected files. |
53 | 53 | result.download(out_dir) |
54 | print('Download finished.') | |
54 | print("Download finished.") | |
55 | 55 | print(f'\nDownload directory:\n "{os.path.abspath(out_dir)}"\n') |
19 | 19 | client = drms.Client(verbose=True) |
20 | 20 | |
21 | 21 | # Export request ID |
22 | request_id = 'JSOC_20201101_198' | |
22 | request_id = "JSOC_20201101_198" | |
23 | 23 | |
24 | 24 | # Querying the server using the entered RequestID. |
25 | print(f'Looking up export request {request_id}...') | |
25 | print(f"Looking up export request {request_id}...") | |
26 | 26 | result = client.export_from_id(request_id) |
27 | 27 | |
28 | 28 | # Print request URL and number of available files. |
29 | print(f'\nRequest URL: {result.request_url}') | |
30 | print(f'{int(len(result.urls))} file(s) available for download.\n') | |
29 | print(f"\nRequest URL: {result.request_url}") | |
30 | print(f"{int(len(result.urls))} file(s) available for download.\n") | |
31 | 31 | |
32 | 32 | # Create download directory if it does not exist yet. |
33 | out_dir = os.path.join('downloads', request_id) | |
33 | out_dir = os.path.join("downloads", request_id) | |
34 | 34 | if not os.path.exists(out_dir): |
35 | 35 | os.makedirs(out_dir) |
36 | 36 | |
37 | 37 | # Download all available files. |
38 | 38 | result.download(out_dir) |
39 | print('Download finished.') | |
40 | print(f'\nDownload directory:\n {os.path.abspath(out_dir)}\n') | |
39 | print("Download finished.") | |
40 | print(f"\nDownload directory:\n {os.path.abspath(out_dir)}\n") |
27 | 27 | |
28 | 28 | # Arguments for 'jpg' protocol |
29 | 29 | jpg_args = { |
30 | 'ct': 'aia_304.lut', # color table | |
31 | 'min': 4, # min value | |
32 | 'max': 800, # max value | |
33 | 'scaling': 'log', # color scaling | |
34 | 'size': 2, # binning (1 -> 4k, 2 -> 2k, 4 -> 1k) | |
30 | "ct": "aia_304.lut", # color table | |
31 | "min": 4, # min value | |
32 | "max": 800, # max value | |
33 | "scaling": "log", # color scaling | |
34 | "size": 2, # binning (1 -> 4k, 2 -> 2k, 4 -> 1k) | |
35 | 35 | } |
36 | 36 | |
37 | 37 | # Download directory |
38 | out_dir = 'downloads' | |
38 | out_dir = "downloads" | |
39 | 39 | |
40 | 40 | # Create download directory if it does not exist yet. |
41 | 41 | if not os.path.exists(out_dir): |
44 | 44 | ############################################################################### |
45 | 45 | # Construct the DRMS query string: "Series[timespan][wavelength]{data segments}" |
46 | 46 | |
47 | qstr = 'aia.lev1_euv_12s[2012-08-31T19:48:01Z][304]{image}' | |
48 | print(f'Data export query:\n {qstr}\n') | |
47 | qstr = "aia.lev1_euv_12s[2012-08-31T19:48:01Z][304]{image}" | |
48 | print(f"Data export query:\n {qstr}\n") | |
49 | 49 | |
50 | 50 | # Submit export request using the 'jpg' protocol with custom protocol_args |
51 | print('Submitting export request...') | |
52 | result = client.export(qstr, protocol='jpg', protocol_args=jpg_args, email=email) | |
51 | print("Submitting export request...") | |
52 | result = client.export(qstr, protocol="jpg", protocol_args=jpg_args, email=email) | |
53 | 53 | |
54 | 54 | # Print request URL. |
55 | print(f'\nRequest URL: {result.request_url}') | |
56 | print(f'{int(len(result.urls))} file(s) available for download.\n') | |
55 | print(f"\nRequest URL: {result.request_url}") | |
56 | print(f"{int(len(result.urls))} file(s) available for download.\n") | |
57 | 57 | |
58 | 58 | # Download selected files. |
59 | 59 | result.download(out_dir) |
60 | print('Download finished.') | |
61 | print(f'\nDownload directory:\n {os.path.abspath(out_dir)}\n') | |
60 | print("Download finished.") | |
61 | print(f"\nDownload directory:\n {os.path.abspath(out_dir)}\n") |
26 | 26 | email = os.environ["JSOC_EMAIL"] |
27 | 27 | |
28 | 28 | # Download directory |
29 | out_dir = 'downloads' | |
29 | out_dir = "downloads" | |
30 | 30 | |
31 | 31 | # Create download directory if it does not exist yet. |
32 | 32 | if not os.path.exists(out_dir): |
35 | 35 | ############################################################################### |
36 | 36 | # Construct the DRMS query string: "Series[timespan]{segment}" |
37 | 37 | |
38 | qstr = 'hmi.m_720s[2014.11.28_00:00:00_TAI/5d@1h]{magnetogram}' | |
39 | print(f'Data export query:\n {qstr}\n') | |
38 | qstr = "hmi.m_720s[2014.11.28_00:00:00_TAI/5d@1h]{magnetogram}" | |
39 | print(f"Data export query:\n {qstr}\n") | |
40 | 40 | |
41 | 41 | ############################################################################### |
42 | 42 | # Arguments for 'mp4' protocol |
43 | 43 | |
44 | 44 | mp4_args = { |
45 | 'ct': 'grey.sao', # color table | |
46 | 'min': -1500, # min value | |
47 | 'max': 1500, # max value | |
48 | 'scaling': 'mag', # color scaling | |
49 | 'size': 8, # binning (1 -> 4k, 2 -> 2k, 4 -> 1k, 8 -> 512) | |
45 | "ct": "grey.sao", # color table | |
46 | "min": -1500, # min value | |
47 | "max": 1500, # max value | |
48 | "scaling": "mag", # color scaling | |
49 | "size": 8, # binning (1 -> 4k, 2 -> 2k, 4 -> 1k, 8 -> 512) | |
50 | 50 | } |
51 | 51 | |
52 | 52 | # Submit export request using the 'mp4' protocol with custom protocol_args |
53 | print('Submitting export request...') | |
54 | result = client.export(qstr, protocol='mp4', protocol_args=mp4_args, email=email) | |
53 | print("Submitting export request...") | |
54 | result = client.export(qstr, protocol="mp4", protocol_args=mp4_args, email=email) | |
55 | 55 | result.wait(sleep=10) |
56 | 56 | |
57 | 57 | # Print request URL. |
58 | print(f'\nRequest URL: {result.request_url}') | |
59 | print(f'{len(result.urls)} file(s) available for download.\n') | |
58 | print(f"\nRequest URL: {result.request_url}") | |
59 | print(f"{len(result.urls)} file(s) available for download.\n") | |
60 | 60 | |
61 | 61 | # Download movie file only: index=0 |
62 | 62 | result.download(out_dir, 0) |
63 | print('Download finished.') | |
64 | print(f'\nDownload directory:\n {os.path.abspath(out_dir)}\n') | |
63 | print("Download finished.") | |
64 | print(f"\nDownload directory:\n {os.path.abspath(out_dir)}\n") |
24 | 24 | ############################################################################### |
25 | 25 | # Construct the DRMS query string: "Series[timespan][wavelength]" |
26 | 26 | |
27 | qstr = 'hmi.ic_720s[2015.01.01_00:00:00_TAI/10d@1d]{continuum}' | |
27 | qstr = "hmi.ic_720s[2015.01.01_00:00:00_TAI/10d@1d]{continuum}" | |
28 | 28 | |
29 | 29 | # Submit export request, defaults to method='url_quick' and protocol='as-is' |
30 | print(f'Data export query:\n {qstr}\n') | |
31 | print('Submitting export request...') | |
30 | print(f"Data export query:\n {qstr}\n") | |
31 | print("Submitting export request...") | |
32 | 32 | result = client.export(qstr, email=email) |
33 | print(f'len(r.urls) file(s) available for download.\n') | |
33 | print(f"{len(result.urls)} file(s) available for download.\n") | |
34 | 34 | |
35 | 35 | # Print download URLs. |
36 | for _, row in result.urls[['record', 'url']].iterrows(): | |
37 | print(f'REC: {row.record}') | |
38 | print(f'URL: {row.url}\n') | |
36 | for _, row in result.urls[["record", "url"]].iterrows(): | |
37 | print(f"REC: {row.record}") | |
38 | print(f"URL: {row.url}\n") |
28 | 28 | email = os.environ["JSOC_EMAIL"] |
29 | 29 | |
30 | 30 | # Download directory |
31 | out_dir = 'downloads' | |
31 | out_dir = "downloads" | |
32 | 32 | |
33 | 33 | # Create download directory if it does not exist yet. |
34 | 34 | if not os.path.exists(out_dir): |
37 | 37 | ############################################################################### |
38 | 38 | # Construct the DRMS query string: "Series[Carrington rotation][Carrington longitude]{data segments}" |
39 | 39 | |
40 | qstr = 'hmi.rdvflows_fd15_frame[2150][360]{Ux, Uy}' | |
41 | print(f'Data export query:\n {qstr}\n') | |
40 | qstr = "hmi.rdvflows_fd15_frame[2150][360]{Ux, Uy}" | |
41 | print(f"Data export query:\n {qstr}\n") | |
42 | 42 | |
43 | 43 | # Submit export request using the 'url-tar' method, protocol default: 'as-is' |
44 | print('Submitting export request...') | |
45 | result = client.export(qstr, method='url-tar', email=email) | |
44 | print("Submitting export request...") | |
45 | result = client.export(qstr, method="url-tar", email=email) | |
46 | 46 | |
47 | 47 | # Print request URL. |
48 | print(f'\nRequest URL: {result.request_url}') | |
49 | print(f'{len(result.urls)} file(s) available for download.\n') | |
48 | print(f"\nRequest URL: {result.request_url}") | |
49 | print(f"{len(result.urls)} file(s) available for download.\n") | |
50 | 50 | |
51 | 51 | # Download selected files. |
52 | 52 | dr = result.download(out_dir) |
53 | print('Download finished.') | |
53 | print("Download finished.") | |
54 | 54 | print(f'\nDownloaded file:\n "{dr.download[0]}"\n') |
14 | 14 | client = drms.Client() |
15 | 15 | |
16 | 16 | # Get all available HMI series |
17 | hmi_series = client.series(r'hmi\.', full=True) | |
17 | hmi_series = client.series(r"hmi\.", full=True) | |
18 | 18 | |
19 | 19 | # Print series names, prime-keys (pkeys) and notes |
20 | 20 | for series in hmi_series.index: |
21 | print('Series:', hmi_series.name[series]) | |
22 | print(' Notes:', (f'\n{8 * " "}').join(textwrap.wrap(hmi_series.note[series]))) | |
21 | print("Series:", hmi_series.name[series]) | |
22 | print(" Notes:", (f'\n{8 * " "}').join(textwrap.wrap(hmi_series.note[series]))) |
14 | 14 | client = drms.Client() |
15 | 15 | |
16 | 16 | # Query series info |
17 | series_info = client.info('hmi.v_45s') | |
17 | series_info = client.info("hmi.v_45s") | |
18 | 18 | |
19 | 19 | # Print keyword info |
20 | print(f'Listing keywords for {series_info.name}:\n') | |
20 | print(f"Listing keywords for {series_info.name}:\n") | |
21 | 21 | for keyword in sorted(series_info.keywords.index): |
22 | 22 | keyword_info = series_info.keywords.loc[keyword] |
23 | 23 | print(keyword) |
24 | print(f' type ....... {keyword_info.type} ') | |
25 | print(f' recscope ... {keyword_info.recscope} ') | |
26 | print(f' defval ..... {keyword_info.defval} ') | |
27 | print(f' units ...... {keyword_info.units} ') | |
28 | print(f' note ....... {keyword_info.note} ') | |
24 | print(f" type ....... {keyword_info.type} ") | |
25 | print(f" recscope ... {keyword_info.recscope} ") | |
26 | print(f" defval ..... {keyword_info.defval} ") | |
27 | print(f" units ...... {keyword_info.units} ") | |
28 | print(f" note ....... {keyword_info.note} ") |
19 | 19 | # list of all available keywords of a series. |
20 | 20 | |
21 | 21 | keys = [ |
22 | 'T_REC', | |
23 | 'T_OBS', | |
24 | 'DATAMIN', | |
25 | 'DATAMAX', | |
26 | 'DATAMEAN', | |
27 | 'DATARMS', | |
28 | 'DATASKEW', | |
29 | 'DATAKURT', | |
30 | 'QUALITY', | |
22 | "T_REC", | |
23 | "T_OBS", | |
24 | "DATAMIN", | |
25 | "DATAMAX", | |
26 | "DATAMEAN", | |
27 | "DATARMS", | |
28 | "DATASKEW", | |
29 | "DATAKURT", | |
30 | "QUALITY", | |
31 | 31 | ] |
32 | 32 | |
33 | 33 | ############################################################################### |
36 | 36 | # entries (like note) are missing for linked keywords, so we are using the |
37 | 37 | # entries from aia.lev1 in this case. |
38 | 38 | |
39 | print('Querying series info...') | |
40 | series_info = client.info('aia.lev1_euv_12s') | |
41 | series_info_lev1 = client.info('aia.lev1') | |
39 | print("Querying series info...") | |
40 | series_info = client.info("aia.lev1_euv_12s") | |
41 | series_info_lev1 = client.info("aia.lev1") | |
42 | 42 | for key in keys: |
43 | 43 | linkinfo = series_info.keywords.loc[key].linkinfo |
44 | if linkinfo is not None and linkinfo.startswith('lev1->'): | |
44 | if linkinfo is not None and linkinfo.startswith("lev1->"): | |
45 | 45 | note_str = series_info_lev1.keywords.loc[key].note |
46 | 46 | else: |
47 | 47 | note_str = series_info.keywords.loc[key].note |
48 | print(f'{key:>10} : {note_str}') | |
48 | print(f"{key:>10} : {note_str}") | |
49 | 49 | |
50 | 50 | ############################################################################### |
51 | 51 | # Construct the DRMS query string: "Series[timespan][wavelength]" |
52 | 52 | |
53 | qstr = 'aia.lev1_euv_12s[2014-01-01T00:00:01Z/365d@1d][335]' | |
53 | qstr = "aia.lev1_euv_12s[2014-01-01T00:00:01Z/365d@1d][335]" | |
54 | 54 | |
55 | 55 | # Get keyword values for the selected timespan and wavelength |
56 | print(f'Querying keyword data...\n -> {qstr}') | |
56 | print(f"Querying keyword data...\n -> {qstr}") | |
57 | 57 | result = client.query(qstr, key=keys) |
58 | print(f' -> {len(result)} lines retrieved.') | |
58 | print(f" -> {len(result)} lines retrieved.") | |
59 | 59 | |
60 | 60 | # Only use entries with QUALITY==0 |
61 | 61 | result = result[result.QUALITY == 0] |
62 | print(f' -> {len(result)} lines after QUALITY selection.') | |
62 | print(f" -> {len(result)} lines after QUALITY selection.") | |
63 | 63 | |
64 | 64 | # Convert T_REC strings to datetime and use it as index for the series |
65 | 65 | result.index = drms.to_datetime(result.T_REC) |
67 | 67 | ############################################################################### |
68 | 68 | # Create some simple plots |
69 | 69 | |
70 | ax = result[['DATAMIN', 'DATAMAX', 'DATAMEAN', 'DATARMS', 'DATASKEW']].plot(figsize=(8, 10), subplots=True) | |
71 | ax[0].set_title(qstr, fontsize='medium') | |
70 | ax = result[["DATAMIN", "DATAMAX", "DATAMEAN", "DATARMS", "DATASKEW"]].plot(figsize=(8, 10), subplots=True) | |
71 | ax[0].set_title(qstr, fontsize="medium") | |
72 | 72 | plt.tight_layout() |
73 | 73 | plt.show() |
17 | 17 | ############################################################################### |
18 | 18 | # Construct the DRMS query string: "Series[timespan]" |
19 | 19 | |
20 | qstr = f'hmi.ic_720s[2010.05.01_TAI-2016.04.01_TAI@6h]' | |
20 | qstr = "hmi.ic_720s[2010.05.01_TAI-2016.04.01_TAI@6h]" | |
21 | 21 | |
22 | 22 | # Send request to the DRMS server |
23 | print('Querying keyword data...\n -> {qstr}') | |
24 | result = client.query(qstr, key=['T_REC', 'DATAMEAN', 'DATARMS']) | |
25 | print(f' -> {int(len(result))} lines retrieved.') | |
23 | print(f"Querying keyword data...\n -> {qstr}") | |
24 | result = client.query(qstr, key=["T_REC", "DATAMEAN", "DATARMS"]) | |
25 | print(f" -> {int(len(result))} lines retrieved.") | |
26 | 26 | |
27 | 27 | ############################################################################### |
28 | 28 | # Now to plot the image. |
29 | 29 | |
30 | 30 | # Convert T_REC strings to datetime and use it as index for the series |
31 | result.index = drms.to_datetime(result.pop('T_REC')) | |
31 | result.index = drms.to_datetime(result.pop("T_REC")) | |
32 | 32 | |
33 | 33 | # Note: DATARMS contains the standard deviation, not the RMS! |
34 | 34 | t = result.index |
37 | 37 | |
38 | 38 | # Create plot |
39 | 39 | fig, ax = plt.subplots(1, 1, figsize=(15, 7)) |
40 | ax.set_title(qstr, fontsize='medium') | |
40 | ax.set_title(qstr, fontsize="medium") | |
41 | 41 | ax.fill_between( |
42 | t, avg + std, avg - std, edgecolor='none', facecolor='b', alpha=0.3, interpolate=True, | |
42 | t, | |
43 | avg + std, | |
44 | avg - std, | |
45 | edgecolor="none", | |
46 | facecolor="b", | |
47 | alpha=0.3, | |
48 | interpolate=True, | |
43 | 49 | ) |
44 | ax.plot(t, avg, color='b') | |
45 | ax.set_xlabel('Time') | |
46 | ax.set_ylabel('Disk-averaged continuum intensity [kDN/s]') | |
50 | ax.plot(t, avg, color="b") | |
51 | ax.set_xlabel("Time") | |
52 | ax.set_ylabel("Disk-averaged continuum intensity [kDN/s]") | |
47 | 53 | fig.tight_layout() |
48 | 54 | |
49 | 55 | plt.show() |
18 | 18 | ############################################################################### |
19 | 19 | # Construct the DRMS query string: "Series[timespan][wavelength]" |
20 | 20 | |
21 | qstr = 'hmi.v_sht_modes[2014.06.20_00:00:00_TAI]' | |
21 | qstr = "hmi.v_sht_modes[2014.06.20_00:00:00_TAI]" | |
22 | 22 | |
23 | 23 | # TODO: Add text here. |
24 | segname = 'm6' # 'm6', 'm18' or 'm36' | |
24 | segname = "m6" # 'm6', 'm18' or 'm36' | |
25 | 25 | |
26 | 26 | # Send request to the DRMS server |
27 | print(f'Querying keyword data...\n -> {qstr}') | |
28 | result, filenames = client.query(qstr, key=['T_START', 'T_STOP', 'LMIN', 'LMAX', 'NDT'], seg=segname) | |
29 | print(f' -> {len(result)} lines retrieved.') | |
27 | print(f"Querying keyword data...\n -> {qstr}") | |
28 | result, filenames = client.query(qstr, key=["T_START", "T_STOP", "LMIN", "LMAX", "NDT"], seg=segname) | |
29 | print(f" -> {len(result)} lines retrieved.") | |
30 | 30 | |
31 | 31 | # Use only the first line of the query result |
32 | 32 | result = result.iloc[0] |
33 | fname = f'http://jsoc.stanford.edu{filenames[segname][0]}' | |
33 | fname = f"http://jsoc.stanford.edu{filenames[segname][0]}" | |
34 | 34 | |
35 | 35 | # Read the data segment |
36 | print(f'Reading data from {fname}...') | |
36 | print(f"Reading data from {fname}...") | |
37 | 37 | a = np.genfromtxt(fname) |
38 | 38 | |
39 | 39 | # For column names, see appendix of Larson & Schou (2015SoPh..290.3221L) |
52 | 52 | # Plot, zoomed in on lower l
53 | 53 | fig, ax = plt.subplots(1, 1, figsize=(11, 7)) |
54 | 54 | ax.set_title( |
55 | f'Time = {result.T_START} ... {result.T_STOP}, L = {result.LMIN} ... {result.LMAX}, NDT = {result.NDT}', | |
56 | fontsize='medium', | |
55 | f"Time = {result.T_START} ... {result.T_STOP}, L = {result.LMIN} ... {result.LMAX}, NDT = {result.NDT}", | |
56 | fontsize="medium", | |
57 | 57 | ) |
58 | 58 | for ni in np.unique(n): |
59 | 59 | idx = n == ni |
60 | ax.plot(l[idx], nu[idx], 'b.-') | |
60 | ax.plot(l[idx], nu[idx], "b.-") | |
61 | 61 | ax.set_xlim(0, 120) |
62 | 62 | ax.set_ylim(0.8, 4.5) |
63 | ax.set_xlabel('Harmonic degree') | |
64 | ax.set_ylabel('Frequency [mHz]') | |
63 | ax.set_xlabel("Harmonic degree") | |
64 | ax.set_ylabel("Frequency [mHz]") | |
65 | 65 | fig.tight_layout() |
66 | 66 | |
67 | 67 | ############################################################################### |
69 | 69 | |
70 | 70 | fig, ax = plt.subplots(1, 1, figsize=(11, 7)) |
71 | 71 | ax.set_title( |
72 | f'Time = {result.T_START} ... {result.T_STOP}, L = {result.LMIN} ... {result.LMAX}, NDT = {result.NDT}', | |
73 | fontsize='medium', | |
72 | f"Time = {result.T_START} ... {result.T_STOP}, L = {result.LMIN} ... {result.LMAX}, NDT = {result.NDT}", | |
73 | fontsize="medium", | |
74 | 74 | ) |
75 | 75 | for ni in np.unique(n): |
76 | 76 | if ni <= 20: |
77 | 77 | idx = n == ni |
78 | ax.plot(l[idx], nu[idx], 'b.', ms=3) | |
78 | ax.plot(l[idx], nu[idx], "b.", ms=3) | |
79 | 79 | if ni < 10: |
80 | ax.plot(l[idx], nu[idx] + 1000 * snu[idx], 'g') | |
81 | ax.plot(l[idx], nu[idx] - 1000 * snu[idx], 'g') | |
80 | ax.plot(l[idx], nu[idx] + 1000 * snu[idx], "g") | |
81 | ax.plot(l[idx], nu[idx] - 1000 * snu[idx], "g") | |
82 | 82 | else: |
83 | ax.plot(l[idx], nu[idx] + 500 * snu[idx], 'r') | |
84 | ax.plot(l[idx], nu[idx] - 500 * snu[idx], 'r') | |
83 | ax.plot(l[idx], nu[idx] + 500 * snu[idx], "r") | |
84 | ax.plot(l[idx], nu[idx] - 500 * snu[idx], "r") | |
85 | 85 | ax.legend( |
86 | loc='upper right', | |
86 | loc="upper right", | |
87 | 87 | handles=[ |
88 | plt.Line2D([0], [0], color='r', label='500 sigma'), | |
89 | plt.Line2D([0], [0], color='g', label='1000 sigma'), | |
88 | plt.Line2D([0], [0], color="r", label="500 sigma"), | |
89 | plt.Line2D([0], [0], color="g", label="1000 sigma"), | |
90 | 90 | ], |
91 | 91 | ) |
92 | 92 | ax.set_xlim(-5, 305) |
93 | 93 | ax.set_ylim(0.8, 4.5) |
94 | ax.set_xlabel('Harmonic degree') | |
95 | ax.set_ylabel('Frequency [mHz]') | |
94 | ax.set_xlabel("Harmonic degree") | |
95 | ax.set_ylabel("Frequency [mHz]") | |
96 | 96 | fig.tight_layout() |
97 | 97 | |
98 | 98 | plt.show() |
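The `np.genfromtxt` call in this example can be exercised on an in-memory sample instead of a JSOC download. The column layout below is illustrative only; the real `m6`/`m18`/`m36` segment files have many more columns, documented in the appendix of Larson & Schou (2015SoPh..290.3221L):

```python
import io
import numpy as np

# Tiny stand-in for a mode-parameter segment; the column order
# (l, n, nu, sigma_nu) is a made-up simplification.
text = """\
20 0 1.234 0.001
21 0 1.250 0.001
20 1 1.800 0.002
"""
a = np.genfromtxt(io.StringIO(text))
l, n, nu, snu = a[:, 0], a[:, 1], a[:, 2], a[:, 3]
print(a.shape)
```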
18 | 18 | ############################################################################### |
19 | 19 | # Construct the DRMS query string: "Series[timespan]"
20 | 20 | |
21 | qstr = 'hmi.meanpf_720s[2010.05.01_TAI-2016.04.01_TAI@12h]' | |
21 | qstr = "hmi.meanpf_720s[2010.05.01_TAI-2016.04.01_TAI@12h]" | |
22 | 22 | |
23 | 23 | # Send request to the DRMS server |
24 | print('Querying keyword data...\n -> {qstr}') | |
25 | result = client.query(qstr, key=['T_REC', 'CAPN2', 'CAPS2']) | |
26 | print(f' -> {len(result)} lines retrieved.') | |
24 | print(f"Querying keyword data...\n -> {qstr}")
25 | result = client.query(qstr, key=["T_REC", "CAPN2", "CAPS2"]) | |
26 | print(f" -> {len(result)} lines retrieved.") | |
27 | 27 | |
28 | 28 | # Convert T_REC strings to datetime and use it as index for the series |
29 | result.index = drms.to_datetime(result.pop('T_REC')) | |
29 | result.index = drms.to_datetime(result.pop("T_REC")) | |
30 | 30 | |
31 | 31 | # Determine smallest timestep |
32 | 32 | dt = np.diff(result.index.to_pydatetime()).min() |
49 | 49 | |
50 | 50 | # Plot smoothed data |
51 | 51 | fig, ax = plt.subplots(1, 1, figsize=(15, 7)) |
52 | ax.set_title(qstr, fontsize='medium') | |
53 | ax.plot(t, n, 'b', alpha=0.5, label='North pole') | |
54 | ax.plot(t, s, 'g', alpha=0.5, label='South pole') | |
55 | ax.plot(t, mn, 'r', label='Moving average') | |
56 | ax.plot(t, ms, 'r', label="") | |
57 | ax.set_xlabel('Time') | |
58 | ax.set_ylabel('Mean radial field strength [G]') | |
52 | ax.set_title(qstr, fontsize="medium") | |
53 | ax.plot(t, n, "b", alpha=0.5, label="North pole") | |
54 | ax.plot(t, s, "g", alpha=0.5, label="South pole") | |
55 | ax.plot(t, mn, "r", label="Moving average") | |
56 | ax.plot(t, ms, "r", label="") | |
57 | ax.set_xlabel("Time") | |
58 | ax.set_ylabel("Mean radial field strength [G]") | |
59 | 59 | ax.legend() |
60 | 60 | fig.tight_layout() |
61 | 61 | |
62 | 62 | # Plot raw data |
63 | 63 | fig, ax = plt.subplots(1, 1, figsize=(15, 7)) |
64 | ax.set_title(qstr, fontsize='medium') | |
65 | ax.fill_between(t, mn - sn, mn + sn, edgecolor='none', facecolor='b', alpha=0.3, interpolate=True) | |
66 | ax.fill_between(t, ms - ss, ms + ss, edgecolor='none', facecolor='g', alpha=0.3, interpolate=True) | |
67 | ax.plot(t, mn, 'b', label='North pole') | |
68 | ax.plot(t, ms, 'g', label='South pole') | |
69 | ax.set_xlabel('Time') | |
70 | ax.set_ylabel('Mean radial field strength [G]') | |
64 | ax.set_title(qstr, fontsize="medium") | |
65 | ax.fill_between(t, mn - sn, mn + sn, edgecolor="none", facecolor="b", alpha=0.3, interpolate=True) | |
66 | ax.fill_between(t, ms - ss, ms + ss, edgecolor="none", facecolor="g", alpha=0.3, interpolate=True) | |
67 | ax.plot(t, mn, "b", label="North pole") | |
68 | ax.plot(t, ms, "g", label="South pole") | |
69 | ax.set_xlabel("Time") | |
70 | ax.set_ylabel("Mean radial field strength [G]") | |
71 | 71 | ax.legend() |
72 | 72 | fig.tight_layout() |
73 | 73 |
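The smoothing step elided from this hunk produces the "Moving average" curve in the plot legend. A generic centered moving average over a fixed number of 12-hour samples can be sketched with pandas; the data and window length below are made up, not taken from the original script:

```python
import numpy as np
import pandas as pd

# Synthetic noisy series at 12 h cadence, standing in for CAPN2/CAPS2.
idx = pd.date_range("2010-05-01", periods=100, freq="12h")
rng = np.random.default_rng(0)
s = pd.Series(
    np.sin(np.linspace(0, 6, 100)) + 0.1 * rng.standard_normal(100),
    index=idx,
)

# Centered moving average over 10 samples (~5 days); edges are NaN
# because a full window is required.
smoothed = s.rolling(10, center=True).mean()
print(int(smoothed.notna().sum()))
```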
19 | 19 | ############################################################################### |
20 | 20 | # Construct the DRMS query string: "Series[Carrington rotation]" |
21 | 21 | |
22 | qstr = 'hmi.synoptic_mr_720s[2150]' | |
22 | qstr = "hmi.synoptic_mr_720s[2150]" | |
23 | 23 | |
24 | 24 | # Send request to the DRMS server |
25 | print('Querying keyword data...\n -> {qstr}') | |
26 | segname = 'synopMr' | |
25 | print(f"Querying keyword data...\n -> {qstr}")
26 | segname = "synopMr" | |
27 | 27 | results, filenames = client.query(qstr, key=drms.const.all, seg=segname) |
28 | print(f' -> {len(results)} lines retrieved.') | |
28 | print(f" -> {len(results)} lines retrieved.") | |
29 | 29 | |
30 | 30 | # Use only the first line of the query result |
31 | 31 | results = results.iloc[0] |
32 | fname = f'http://jsoc.stanford.edu{filenames[segname][0]}' | |
32 | fname = f"http://jsoc.stanford.edu{filenames[segname][0]}" | |
33 | 33 | |
34 | 34 | # Read the data segment |
35 | 35 | # Note: HTTP downloads get cached in ~/.astropy/cache/downloads |
36 | print(f'Reading data from {fname}...') | |
36 | print(f"Reading data from {fname}...") | |
37 | 37 | a = fits.getdata(fname) |
38 | 38 | ny, nx = a.shape |
39 | 39 | |
63 | 63 | |
64 | 64 | # Create plot |
65 | 65 | fig, ax = plt.subplots(1, 1, figsize=(13.5, 6)) |
66 | ax.set_title(f'{qstr}, Time: {results.T_START} ... {results.T_STOP}', fontsize='medium') | |
66 | ax.set_title(f"{qstr}, Time: {results.T_START} ... {results.T_STOP}", fontsize="medium") | |
67 | 67 | ax.imshow( |
68 | a, vmin=-300, vmax=300, origin='lower', interpolation='nearest', cmap='gray', extent=extent, aspect=aspect, | |
68 | a, | |
69 | vmin=-300, | |
70 | vmax=300, | |
71 | origin="lower", | |
72 | interpolation="nearest", | |
73 | cmap="gray", | |
74 | extent=extent, | |
75 | aspect=aspect, | |
69 | 76 | ) |
70 | 77 | ax.invert_xaxis() |
71 | ax.set_xlabel('Carrington longitude') | |
72 | ax.set_ylabel('Sine latitude') | |
78 | ax.set_xlabel("Carrington longitude") | |
79 | ax.set_ylabel("Sine latitude") | |
73 | 80 | fig.tight_layout() |
74 | 81 | |
75 | 82 | plt.show() |
0 | 0 | [build-system] |
1 | 1 | requires = [ |
2 | "setuptools", | |
3 | "setuptools_scm", | |
4 | "wheel", | |
5 | "oldest-supported-numpy", | |
6 | ] | |
2 | "setuptools>=56,!=61.0.0", | |
3 | "setuptools_scm[toml]>=6.2", | |
4 | "wheel", | |
5 | "oldest-supported-numpy", | |
6 | ] | |
7 | 7 | build-backend = 'setuptools.build_meta' |
8 | ||
9 | [tool.black] | |
10 | line-length = 120 | |
11 | include = '\.pyi?$' | |
12 | exclude = ''' | |
13 | ( | |
14 | /( | |
15 | \.eggs | |
16 | | \.git | |
17 | | \.mypy_cache | |
18 | | \.tox | |
19 | | \.venv | |
20 | | _build | |
21 | | buck-out | |
22 | | build | |
23 | | dist | |
24 | | docs | |
25 | | .history | |
26 | )/ | |
27 | ) | |
28 | ''' | |
8 | 29 | |
9 | 30 | [ tool.gilesbot ] |
10 | 31 | [ tool.gilesbot.pull_requests ] |
0 | 0 | [metadata] |
1 | 1 | name = drms |
2 | 2 | provides = drms |
3 | description = "Access HMI, AIA and MDI data with Python from the public JSOC DRMS server" | |
3 | description = Access HMI, AIA and MDI data with Python from the Stanford JSOC DRMS
4 | 4 | long_description = file: README.rst |
5 | 5 | long_description_content_type = text/x-rst |
6 | 6 | author = The SunPy Community |
7 | 7 | author_email = sunpy@googlegroups.com |
8 | 8 | license = BSD 2-Clause |
9 | license_file = LICENSE.rst | |
9 | license_files = LICENSE.rst | |
10 | 10 | url = https://sunpy.org |
11 | 11 | edit_on_github = True |
12 | 12 | github_project = sunpy/drms |
20 | 20 | Operating System :: OS Independent |
21 | 21 | Programming Language :: Python |
22 | 22 | Programming Language :: Python :: 3 |
23 | Programming Language :: Python :: 3.7 | |
24 | 23 | Programming Language :: Python :: 3.8 |
25 | 24 | Programming Language :: Python :: 3.9 |
25 | Programming Language :: Python :: 3.10 | |
26 | 26 | Topic :: Scientific/Engineering :: Astronomy |
27 | 27 | Topic :: Scientific/Engineering :: Physics |
28 | 28 | |
29 | 29 | [options] |
30 | 30 | zip_safe = False |
31 | python_requires = >=3.7 | |
31 | python_requires = >=3.8 | |
32 | 32 | packages = find: |
33 | 33 | include_package_data = True |
34 | 34 | setup_requires = |
62 | 62 | norecursedirs = ".tox" "build" "docs[\/]_build" "docs[\/]generated" "*.egg-info" "examples" ".history" "paper" "drms[\/]_dev" |
63 | 63 | doctest_plus = enabled |
64 | 64 | doctest_optionflags = NORMALIZE_WHITESPACE FLOAT_CMP ELLIPSIS |
65 | addopts = --doctest-rst | |
65 | addopts = --doctest-rst -p no:unraisableexception -p no:threadexception | |
66 | 66 | markers = |
67 | 67 | remote_data: marks this test function as needing remote data. |
68 | 68 | jsoc: marks the test function as needing a connection to JSOC. |
70 | 70 | export: marks the test function as needing a JSOC registered email address. |
71 | 71 | remote_data_strict = True |
72 | 72 | junit_family = xunit2 |
73 | filterwarnings = | |
74 | error | |
75 | always::pytest.PytestConfigWarning | |
76 | ignore:numpy.ufunc size changed:RuntimeWarning | |
77 | ignore:numpy.ndarray size changed:RuntimeWarning | |
73 | 78 | |
74 | 79 | [pycodestyle] |
75 | 80 | max_line_length = 110 |
2 | 2 | from itertools import chain |
3 | 3 | |
4 | 4 | from setuptools import setup |
5 | from setuptools.config import read_configuration | |
5 | ||
6 | try: | |
7 | # Recommended for setuptools 61.0.0+ | |
8 | # (though may disappear in the future) | |
9 | from setuptools.config.setupcfg import read_configuration | |
10 | except ImportError: | |
11 | from setuptools.config import read_configuration | |
6 | 12 | |
7 | 13 | ################################################################################ |
8 | 14 | # Programmatically generate some extras combos. |
9 | 15 | ################################################################################ |
10 | extras = read_configuration('setup.cfg')['options']['extras_require'] | |
11 | ||
16 | extras = read_configuration("setup.cfg")["options"]["extras_require"] | |
12 | 17 | # Dev is everything |
13 | extras['dev'] = list(chain(*list(extras.values()))) | |
14 | ||
18 | extras["dev"] = list(chain(*list(extras.values()))) | |
15 | 19 | # All is everything but tests and docs |
16 | exclude_keys = ('tests', 'docs', 'dev') | |
20 | exclude_keys = ("tests", "docs", "dev") | |
17 | 21 | ex_extras = dict([i for i in list(extras.items()) if i[0] not in exclude_keys]) |
18 | 22 | # Concatenate all the values together for 'all' |
19 | extras['all'] = list(chain.from_iterable(list(ex_extras.values()))) | |
20 | ||
23 | extras["all"] = list(chain.from_iterable(list(ex_extras.values()))) | |
21 | 24 | setup( |
22 | extras_require=extras, use_scm_version={'write_to': os.path.join('drms', '_version.py')}, | |
25 | extras_require=extras, | |
26 | use_scm_version={"write_to": os.path.join("drms", "_version.py")}, | |
23 | 27 | ) |
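The extras-combination logic in this setup.py hunk can be exercised in isolation. A sketch with a hypothetical extras mapping (the real one is read from setup.cfg's `[options.extras_require]`, and these package names are illustrative, not drms's actual extras):

```python
from itertools import chain

# Made-up stand-in for the extras_require mapping.
extras = {
    "tests": ["pytest"],
    "docs": ["sphinx"],
    "cdf": ["requests"],
}

# 'dev' installs everything.
extras["dev"] = list(chain(*extras.values()))

# 'all' is everything except tests, docs and dev.
exclude_keys = ("tests", "docs", "dev")
ex_extras = {k: v for k, v in extras.items() if k not in exclude_keys}
extras["all"] = list(chain.from_iterable(ex_extras.values()))

print(extras["all"])
```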
0 | 0 | [tox] |
1 | 1 | envlist = |
2 | py{37,38,39}{,-online,-sunpy} | |
2 | py{38,39,310}{,-online,-sunpy} | |
3 | 3 | build_docs |
4 | 4 | codestyle |
5 | 5 | isolated_build = true |
6 | 6 | requires = |
7 | setuptools >= 30.3.0 | |
7 | setuptools >=56, !=61.0.0 | |
8 | 8 | pip >= 19.3.1 |
9 | 9 | tox-pypi-filter >= 0.12 |
10 | 10 | |
13 | 13 | # Run the tests in a temporary directory to make sure that we don't import |
14 | 14 | # drms from the source tree |
15 | 15 | changedir = .tmp/{envname} |
16 | # tox environments are constructed with so-called 'factors' (or terms) | |
17 | # separated by hyphens, e.g. test-devdeps-cov. Lines below starting with factor: | |
18 | # will only take effect if that factor is included in the environment name. To | |
19 | # see a list of example environments that can be run, along with a description, | |
20 | # run: | |
21 | # | |
22 | # tox -l -v | |
23 | # | |
24 | 16 | description = |
25 | 17 | run tests |
26 | 18 | online: that require remote data |
30 | 22 | COLUMNS = 180 |
31 | 23 | PYTEST_COMMAND = pytest -vvv -s -ra --pyargs drms --cov-report=xml --cov=drms --cov-config={toxinidir}/setup.cfg {toxinidir}/docs |
32 | 24 | build_docs,online: HOME = {envtmpdir} |
25 | JSOC_EMAIL = jsoc@sunpy.org | |
33 | 26 | passenv = |
34 | 27 | HTTP_PROXY |
35 | 28 | HTTPS_PROXY |
36 | 29 | NO_PROXY |
37 | 30 | CIRCLECI |
38 | 31 | deps = |
39 | # These are specific online extras we use to run the online tests. | |
40 | online: pytest-rerunfailures | |
41 | online: pytest-timeout | |
32 | pytest-timeout | |
33 | # These are specific extras we use to run the sunpy tests. | |
42 | 34 | sunpy: git+https://github.com/sunpy/sunpy |
43 | # These are specific extras we use to run the sunpy tests. | |
44 | 35 | sunpy: beautifulsoup4 |
45 | 36 | sunpy: pytest-mock |
46 | 37 | sunpy: python-dateutil |
47 | 38 | sunpy: scipy |
48 | 39 | sunpy: tqdm |
49 | 40 | sunpy: zeep |
50 | # The following indicates which extras_require from setup.cfg will be installed | |
51 | # dev is special in that it installs everything | |
52 | 41 | extras = |
53 | 42 | dev |
54 | 43 | commands = |
55 | sunpy: pytest -vvv -s -ra --pyargs sunpy.net.jsoc --remote-data=any {posargs} | |
44 | sunpy: pytest -vvv -s -ra --pyargs sunpy.net.jsoc --timeout=120 --remote-data=any {posargs} | |
56 | 45 | !online: {env:PYTEST_COMMAND} {posargs} |
57 | online: {env:PYTEST_COMMAND} --reruns 2 --reruns-delay 60 --timeout=300 --remote-data=any --email jsoc@cadair.com {posargs} | |
46 | online: {env:PYTEST_COMMAND} --timeout=120 --remote-data=any --email jsoc@sunpy.org {posargs} | |
58 | 47 | |
59 | 48 | [testenv:build_docs] |
60 | 49 | changedir = docs |