Imported Upstream version 0.8.2+ds
Bas Couwenberg
8 years ago
0 | GNU LESSER GENERAL PUBLIC LICENSE | |
1 | Version 2.1, February 1999 | |
2 | ||
3 | Copyright (C) 1991, 1999 Free Software Foundation, Inc. | |
4 | 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA | |
5 | Everyone is permitted to copy and distribute verbatim copies | |
6 | of this license document, but changing it is not allowed. | |
7 | ||
8 | [This is the first released version of the Lesser GPL. It also counts | |
9 | as the successor of the GNU Library Public License, version 2, hence | |
10 | the version number 2.1.] | |
11 | ||
12 | Preamble | |
13 | ||
14 | The licenses for most software are designed to take away your | |
15 | freedom to share and change it. By contrast, the GNU General Public | |
16 | Licenses are intended to guarantee your freedom to share and change | |
17 | free software--to make sure the software is free for all its users. | |
18 | ||
19 | This license, the Lesser General Public License, applies to some | |
20 | specially designated software packages--typically libraries--of the | |
21 | Free Software Foundation and other authors who decide to use it. You | |
22 | can use it too, but we suggest you first think carefully about whether | |
23 | this license or the ordinary General Public License is the better | |
24 | strategy to use in any particular case, based on the explanations below. | |
25 | ||
26 | When we speak of free software, we are referring to freedom of use, | |
27 | not price. Our General Public Licenses are designed to make sure that | |
28 | you have the freedom to distribute copies of free software (and charge | |
29 | for this service if you wish); that you receive source code or can get | |
30 | it if you want it; that you can change the software and use pieces of | |
31 | it in new free programs; and that you are informed that you can do | |
32 | these things. | |
33 | ||
34 | To protect your rights, we need to make restrictions that forbid | |
35 | distributors to deny you these rights or to ask you to surrender these | |
36 | rights. These restrictions translate to certain responsibilities for | |
37 | you if you distribute copies of the library or if you modify it. | |
38 | ||
39 | For example, if you distribute copies of the library, whether gratis | |
40 | or for a fee, you must give the recipients all the rights that we gave | |
41 | you. You must make sure that they, too, receive or can get the source | |
42 | code. If you link other code with the library, you must provide | |
43 | complete object files to the recipients, so that they can relink them | |
44 | with the library after making changes to the library and recompiling | |
45 | it. And you must show them these terms so they know their rights. | |
46 | ||
47 | We protect your rights with a two-step method: (1) we copyright the | |
48 | library, and (2) we offer you this license, which gives you legal | |
49 | permission to copy, distribute and/or modify the library. | |
50 | ||
51 | To protect each distributor, we want to make it very clear that | |
52 | there is no warranty for the free library. Also, if the library is | |
53 | modified by someone else and passed on, the recipients should know | |
54 | that what they have is not the original version, so that the original | |
55 | author's reputation will not be affected by problems that might be | |
56 | introduced by others. | |
57 | ||
58 | Finally, software patents pose a constant threat to the existence of | |
59 | any free program. We wish to make sure that a company cannot | |
60 | effectively restrict the users of a free program by obtaining a | |
61 | restrictive license from a patent holder. Therefore, we insist that | |
62 | any patent license obtained for a version of the library must be | |
63 | consistent with the full freedom of use specified in this license. | |
64 | ||
65 | Most GNU software, including some libraries, is covered by the | |
66 | ordinary GNU General Public License. This license, the GNU Lesser | |
67 | General Public License, applies to certain designated libraries, and | |
68 | is quite different from the ordinary General Public License. We use | |
69 | this license for certain libraries in order to permit linking those | |
70 | libraries into non-free programs. | |
71 | ||
72 | When a program is linked with a library, whether statically or using | |
73 | a shared library, the combination of the two is legally speaking a | |
74 | combined work, a derivative of the original library. The ordinary | |
75 | General Public License therefore permits such linking only if the | |
76 | entire combination fits its criteria of freedom. The Lesser General | |
77 | Public License permits more lax criteria for linking other code with | |
78 | the library. | |
79 | ||
80 | We call this license the "Lesser" General Public License because it | |
81 | does Less to protect the user's freedom than the ordinary General | |
82 | Public License. It also provides other free software developers Less | |
83 | of an advantage over competing non-free programs. These disadvantages | |
84 | are the reason we use the ordinary General Public License for many | |
85 | libraries. However, the Lesser license provides advantages in certain | |
86 | special circumstances. | |
87 | ||
88 | For example, on rare occasions, there may be a special need to | |
89 | encourage the widest possible use of a certain library, so that it becomes | |
90 | a de-facto standard. To achieve this, non-free programs must be | |
91 | allowed to use the library. A more frequent case is that a free | |
92 | library does the same job as widely used non-free libraries. In this | |
93 | case, there is little to gain by limiting the free library to free | |
94 | software only, so we use the Lesser General Public License. | |
95 | ||
96 | In other cases, permission to use a particular library in non-free | |
97 | programs enables a greater number of people to use a large body of | |
98 | free software. For example, permission to use the GNU C Library in | |
99 | non-free programs enables many more people to use the whole GNU | |
100 | operating system, as well as its variant, the GNU/Linux operating | |
101 | system. | |
102 | ||
103 | Although the Lesser General Public License is Less protective of the | |
104 | users' freedom, it does ensure that the user of a program that is | |
105 | linked with the Library has the freedom and the wherewithal to run | |
106 | that program using a modified version of the Library. | |
107 | ||
108 | The precise terms and conditions for copying, distribution and | |
109 | modification follow. Pay close attention to the difference between a | |
110 | "work based on the library" and a "work that uses the library". The | |
111 | former contains code derived from the library, whereas the latter must | |
112 | be combined with the library in order to run. | |
113 | ||
114 | GNU LESSER GENERAL PUBLIC LICENSE | |
115 | TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION | |
116 | ||
117 | 0. This License Agreement applies to any software library or other | |
118 | program which contains a notice placed by the copyright holder or | |
119 | other authorized party saying it may be distributed under the terms of | |
120 | this Lesser General Public License (also called "this License"). | |
121 | Each licensee is addressed as "you". | |
122 | ||
123 | A "library" means a collection of software functions and/or data | |
124 | prepared so as to be conveniently linked with application programs | |
125 | (which use some of those functions and data) to form executables. | |
126 | ||
127 | The "Library", below, refers to any such software library or work | |
128 | which has been distributed under these terms. A "work based on the | |
129 | Library" means either the Library or any derivative work under | |
130 | copyright law: that is to say, a work containing the Library or a | |
131 | portion of it, either verbatim or with modifications and/or translated | |
132 | straightforwardly into another language. (Hereinafter, translation is | |
133 | included without limitation in the term "modification".) | |
134 | ||
135 | "Source code" for a work means the preferred form of the work for | |
136 | making modifications to it. For a library, complete source code means | |
137 | all the source code for all modules it contains, plus any associated | |
138 | interface definition files, plus the scripts used to control compilation | |
139 | and installation of the library. | |
140 | ||
141 | Activities other than copying, distribution and modification are not | |
142 | covered by this License; they are outside its scope. The act of | |
143 | running a program using the Library is not restricted, and output from | |
144 | such a program is covered only if its contents constitute a work based | |
145 | on the Library (independent of the use of the Library in a tool for | |
146 | writing it). Whether that is true depends on what the Library does | |
147 | and what the program that uses the Library does. | |
148 | ||
149 | 1. You may copy and distribute verbatim copies of the Library's | |
150 | complete source code as you receive it, in any medium, provided that | |
151 | you conspicuously and appropriately publish on each copy an | |
152 | appropriate copyright notice and disclaimer of warranty; keep intact | |
153 | all the notices that refer to this License and to the absence of any | |
154 | warranty; and distribute a copy of this License along with the | |
155 | Library. | |
156 | ||
157 | You may charge a fee for the physical act of transferring a copy, | |
158 | and you may at your option offer warranty protection in exchange for a | |
159 | fee. | |
160 | ||
161 | 2. You may modify your copy or copies of the Library or any portion | |
162 | of it, thus forming a work based on the Library, and copy and | |
163 | distribute such modifications or work under the terms of Section 1 | |
164 | above, provided that you also meet all of these conditions: | |
165 | ||
166 | a) The modified work must itself be a software library. | |
167 | ||
168 | b) You must cause the files modified to carry prominent notices | |
169 | stating that you changed the files and the date of any change. | |
170 | ||
171 | c) You must cause the whole of the work to be licensed at no | |
172 | charge to all third parties under the terms of this License. | |
173 | ||
174 | d) If a facility in the modified Library refers to a function or a | |
175 | table of data to be supplied by an application program that uses | |
176 | the facility, other than as an argument passed when the facility | |
177 | is invoked, then you must make a good faith effort to ensure that, | |
178 | in the event an application does not supply such function or | |
179 | table, the facility still operates, and performs whatever part of | |
180 | its purpose remains meaningful. | |
181 | ||
182 | (For example, a function in a library to compute square roots has | |
183 | a purpose that is entirely well-defined independent of the | |
184 | application. Therefore, Subsection 2d requires that any | |
185 | application-supplied function or table used by this function must | |
186 | be optional: if the application does not supply it, the square | |
187 | root function must still compute square roots.) | |
188 | ||
189 | These requirements apply to the modified work as a whole. If | |
190 | identifiable sections of that work are not derived from the Library, | |
191 | and can be reasonably considered independent and separate works in | |
192 | themselves, then this License, and its terms, do not apply to those | |
193 | sections when you distribute them as separate works. But when you | |
194 | distribute the same sections as part of a whole which is a work based | |
195 | on the Library, the distribution of the whole must be on the terms of | |
196 | this License, whose permissions for other licensees extend to the | |
197 | entire whole, and thus to each and every part regardless of who wrote | |
198 | it. | |
199 | ||
200 | Thus, it is not the intent of this section to claim rights or contest | |
201 | your rights to work written entirely by you; rather, the intent is to | |
202 | exercise the right to control the distribution of derivative or | |
203 | collective works based on the Library. | |
204 | ||
205 | In addition, mere aggregation of another work not based on the Library | |
206 | with the Library (or with a work based on the Library) on a volume of | |
207 | a storage or distribution medium does not bring the other work under | |
208 | the scope of this License. | |
209 | ||
210 | 3. You may opt to apply the terms of the ordinary GNU General Public | |
211 | License instead of this License to a given copy of the Library. To do | |
212 | this, you must alter all the notices that refer to this License, so | |
213 | that they refer to the ordinary GNU General Public License, version 2, | |
214 | instead of to this License. (If a newer version than version 2 of the | |
215 | ordinary GNU General Public License has appeared, then you can specify | |
216 | that version instead if you wish.) Do not make any other change in | |
217 | these notices. | |
218 | ||
219 | Once this change is made in a given copy, it is irreversible for | |
220 | that copy, so the ordinary GNU General Public License applies to all | |
221 | subsequent copies and derivative works made from that copy. | |
222 | ||
223 | This option is useful when you wish to copy part of the code of | |
224 | the Library into a program that is not a library. | |
225 | ||
226 | 4. You may copy and distribute the Library (or a portion or | |
227 | derivative of it, under Section 2) in object code or executable form | |
228 | under the terms of Sections 1 and 2 above provided that you accompany | |
229 | it with the complete corresponding machine-readable source code, which | |
230 | must be distributed under the terms of Sections 1 and 2 above on a | |
231 | medium customarily used for software interchange. | |
232 | ||
233 | If distribution of object code is made by offering access to copy | |
234 | from a designated place, then offering equivalent access to copy the | |
235 | source code from the same place satisfies the requirement to | |
236 | distribute the source code, even though third parties are not | |
237 | compelled to copy the source along with the object code. | |
238 | ||
239 | 5. A program that contains no derivative of any portion of the | |
240 | Library, but is designed to work with the Library by being compiled or | |
241 | linked with it, is called a "work that uses the Library". Such a | |
242 | work, in isolation, is not a derivative work of the Library, and | |
243 | therefore falls outside the scope of this License. | |
244 | ||
245 | However, linking a "work that uses the Library" with the Library | |
246 | creates an executable that is a derivative of the Library (because it | |
247 | contains portions of the Library), rather than a "work that uses the | |
248 | library". The executable is therefore covered by this License. | |
249 | Section 6 states terms for distribution of such executables. | |
250 | ||
251 | When a "work that uses the Library" uses material from a header file | |
252 | that is part of the Library, the object code for the work may be a | |
253 | derivative work of the Library even though the source code is not. | |
254 | Whether this is true is especially significant if the work can be | |
255 | linked without the Library, or if the work is itself a library. The | |
256 | threshold for this to be true is not precisely defined by law. | |
257 | ||
258 | If such an object file uses only numerical parameters, data | |
259 | structure layouts and accessors, and small macros and small inline | |
260 | functions (ten lines or less in length), then the use of the object | |
261 | file is unrestricted, regardless of whether it is legally a derivative | |
262 | work. (Executables containing this object code plus portions of the | |
263 | Library will still fall under Section 6.) | |
264 | ||
265 | Otherwise, if the work is a derivative of the Library, you may | |
266 | distribute the object code for the work under the terms of Section 6. | |
267 | Any executables containing that work also fall under Section 6, | |
268 | whether or not they are linked directly with the Library itself. | |
269 | ||
270 | 6. As an exception to the Sections above, you may also combine or | |
271 | link a "work that uses the Library" with the Library to produce a | |
272 | work containing portions of the Library, and distribute that work | |
273 | under terms of your choice, provided that the terms permit | |
274 | modification of the work for the customer's own use and reverse | |
275 | engineering for debugging such modifications. | |
276 | ||
277 | You must give prominent notice with each copy of the work that the | |
278 | Library is used in it and that the Library and its use are covered by | |
279 | this License. You must supply a copy of this License. If the work | |
280 | during execution displays copyright notices, you must include the | |
281 | copyright notice for the Library among them, as well as a reference | |
282 | directing the user to the copy of this License. Also, you must do one | |
283 | of these things: | |
284 | ||
285 | a) Accompany the work with the complete corresponding | |
286 | machine-readable source code for the Library including whatever | |
287 | changes were used in the work (which must be distributed under | |
288 | Sections 1 and 2 above); and, if the work is an executable linked | |
289 | with the Library, with the complete machine-readable "work that | |
290 | uses the Library", as object code and/or source code, so that the | |
291 | user can modify the Library and then relink to produce a modified | |
292 | executable containing the modified Library. (It is understood | |
293 | that the user who changes the contents of definitions files in the | |
294 | Library will not necessarily be able to recompile the application | |
295 | to use the modified definitions.) | |
296 | ||
297 | b) Use a suitable shared library mechanism for linking with the | |
298 | Library. A suitable mechanism is one that (1) uses at run time a | |
299 | copy of the library already present on the user's computer system, | |
300 | rather than copying library functions into the executable, and (2) | |
301 | will operate properly with a modified version of the library, if | |
302 | the user installs one, as long as the modified version is | |
303 | interface-compatible with the version that the work was made with. | |
304 | ||
305 | c) Accompany the work with a written offer, valid for at | |
306 | least three years, to give the same user the materials | |
307 | specified in Subsection 6a, above, for a charge no more | |
308 | than the cost of performing this distribution. | |
309 | ||
310 | d) If distribution of the work is made by offering access to copy | |
311 | from a designated place, offer equivalent access to copy the above | |
312 | specified materials from the same place. | |
313 | ||
314 | e) Verify that the user has already received a copy of these | |
315 | materials or that you have already sent this user a copy. | |
316 | ||
317 | For an executable, the required form of the "work that uses the | |
318 | Library" must include any data and utility programs needed for | |
319 | reproducing the executable from it. However, as a special exception, | |
320 | the materials to be distributed need not include anything that is | |
321 | normally distributed (in either source or binary form) with the major | |
322 | components (compiler, kernel, and so on) of the operating system on | |
323 | which the executable runs, unless that component itself accompanies | |
324 | the executable. | |
325 | ||
326 | It may happen that this requirement contradicts the license | |
327 | restrictions of other proprietary libraries that do not normally | |
328 | accompany the operating system. Such a contradiction means you cannot | |
329 | use both them and the Library together in an executable that you | |
330 | distribute. | |
331 | ||
332 | 7. You may place library facilities that are a work based on the | |
333 | Library side-by-side in a single library together with other library | |
334 | facilities not covered by this License, and distribute such a combined | |
335 | library, provided that the separate distribution of the work based on | |
336 | the Library and of the other library facilities is otherwise | |
337 | permitted, and provided that you do these two things: | |
338 | ||
339 | a) Accompany the combined library with a copy of the same work | |
340 | based on the Library, uncombined with any other library | |
341 | facilities. This must be distributed under the terms of the | |
342 | Sections above. | |
343 | ||
344 | b) Give prominent notice with the combined library of the fact | |
345 | that part of it is a work based on the Library, and explaining | |
346 | where to find the accompanying uncombined form of the same work. | |
347 | ||
348 | 8. You may not copy, modify, sublicense, link with, or distribute | |
349 | the Library except as expressly provided under this License. Any | |
350 | attempt otherwise to copy, modify, sublicense, link with, or | |
351 | distribute the Library is void, and will automatically terminate your | |
352 | rights under this License. However, parties who have received copies, | |
353 | or rights, from you under this License will not have their licenses | |
354 | terminated so long as such parties remain in full compliance. | |
355 | ||
356 | 9. You are not required to accept this License, since you have not | |
357 | signed it. However, nothing else grants you permission to modify or | |
358 | distribute the Library or its derivative works. These actions are | |
359 | prohibited by law if you do not accept this License. Therefore, by | |
360 | modifying or distributing the Library (or any work based on the | |
361 | Library), you indicate your acceptance of this License to do so, and | |
362 | all its terms and conditions for copying, distributing or modifying | |
363 | the Library or works based on it. | |
364 | ||
365 | 10. Each time you redistribute the Library (or any work based on the | |
366 | Library), the recipient automatically receives a license from the | |
367 | original licensor to copy, distribute, link with or modify the Library | |
368 | subject to these terms and conditions. You may not impose any further | |
369 | restrictions on the recipients' exercise of the rights granted herein. | |
370 | You are not responsible for enforcing compliance by third parties with | |
371 | this License. | |
372 | ||
373 | 11. If, as a consequence of a court judgment or allegation of patent | |
374 | infringement or for any other reason (not limited to patent issues), | |
375 | conditions are imposed on you (whether by court order, agreement or | |
376 | otherwise) that contradict the conditions of this License, they do not | |
377 | excuse you from the conditions of this License. If you cannot | |
378 | distribute so as to satisfy simultaneously your obligations under this | |
379 | License and any other pertinent obligations, then as a consequence you | |
380 | may not distribute the Library at all. For example, if a patent | |
381 | license would not permit royalty-free redistribution of the Library by | |
382 | all those who receive copies directly or indirectly through you, then | |
383 | the only way you could satisfy both it and this License would be to | |
384 | refrain entirely from distribution of the Library. | |
385 | ||
386 | If any portion of this section is held invalid or unenforceable under any | |
387 | particular circumstance, the balance of the section is intended to apply, | |
388 | and the section as a whole is intended to apply in other circumstances. | |
389 | ||
390 | It is not the purpose of this section to induce you to infringe any | |
391 | patents or other property right claims or to contest validity of any | |
392 | such claims; this section has the sole purpose of protecting the | |
393 | integrity of the free software distribution system which is | |
394 | implemented by public license practices. Many people have made | |
395 | generous contributions to the wide range of software distributed | |
396 | through that system in reliance on consistent application of that | |
397 | system; it is up to the author/donor to decide if he or she is willing | |
398 | to distribute software through any other system and a licensee cannot | |
399 | impose that choice. | |
400 | ||
401 | This section is intended to make thoroughly clear what is believed to | |
402 | be a consequence of the rest of this License. | |
403 | ||
404 | 12. If the distribution and/or use of the Library is restricted in | |
405 | certain countries either by patents or by copyrighted interfaces, the | |
406 | original copyright holder who places the Library under this License may add | |
407 | an explicit geographical distribution limitation excluding those countries, | |
408 | so that distribution is permitted only in or among countries not thus | |
409 | excluded. In such case, this License incorporates the limitation as if | |
410 | written in the body of this License. | |
411 | ||
412 | 13. The Free Software Foundation may publish revised and/or new | |
413 | versions of the Lesser General Public License from time to time. | |
414 | Such new versions will be similar in spirit to the present version, | |
415 | but may differ in detail to address new problems or concerns. | |
416 | ||
417 | Each version is given a distinguishing version number. If the Library | |
418 | specifies a version number of this License which applies to it and | |
419 | "any later version", you have the option of following the terms and | |
420 | conditions either of that version or of any later version published by | |
421 | the Free Software Foundation. If the Library does not specify a | |
422 | license version number, you may choose any version ever published by | |
423 | the Free Software Foundation. | |
424 | ||
425 | 14. If you wish to incorporate parts of the Library into other free | |
426 | programs whose distribution conditions are incompatible with these, | |
427 | write to the author to ask for permission. For software which is | |
428 | copyrighted by the Free Software Foundation, write to the Free | |
429 | Software Foundation; we sometimes make exceptions for this. Our | |
430 | decision will be guided by the two goals of preserving the free status | |
431 | of all derivatives of our free software and of promoting the sharing | |
432 | and reuse of software generally. | |
433 | ||
434 | NO WARRANTY | |
435 | ||
436 | 15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO | |
437 | WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW. | |
438 | EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR | |
439 | OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY | |
440 | KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE | |
441 | IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR | |
442 | PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE | |
443 | LIBRARY IS WITH YOU. SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME | |
444 | THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. | |
445 | ||
446 | 16. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN | |
447 | WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY | |
448 | AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU | |
449 | FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR | |
450 | CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE | |
451 | LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING | |
452 | RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A | |
453 | FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF | |
454 | SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH | |
455 | DAMAGES. | |
456 | ||
457 | END OF TERMS AND CONDITIONS | |
458 | ||
459 | How to Apply These Terms to Your New Libraries | |
460 | ||
461 | If you develop a new library, and you want it to be of the greatest | |
462 | possible use to the public, we recommend making it free software that | |
463 | everyone can redistribute and change. You can do so by permitting | |
464 | redistribution under these terms (or, alternatively, under the terms of the | |
465 | ordinary General Public License). | |
466 | ||
467 | To apply these terms, attach the following notices to the library. It is | |
468 | safest to attach them to the start of each source file to most effectively | |
469 | convey the exclusion of warranty; and each file should have at least the | |
470 | "copyright" line and a pointer to where the full notice is found. | |
471 | ||
472 | <one line to give the library's name and a brief idea of what it does.> | |
473 | Copyright (C) <year> <name of author> | |
474 | ||
475 | This library is free software; you can redistribute it and/or | |
476 | modify it under the terms of the GNU Lesser General Public | |
477 | License as published by the Free Software Foundation; either | |
478 | version 2.1 of the License, or (at your option) any later version. | |
479 | ||
480 | This library is distributed in the hope that it will be useful, | |
481 | but WITHOUT ANY WARRANTY; without even the implied warranty of | |
482 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU | |
483 | Lesser General Public License for more details. | |
484 | ||
485 | You should have received a copy of the GNU Lesser General Public | |
486 | License along with this library; if not, write to the Free Software | |
487 | Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA | |
488 | ||
489 | Also add information on how to contact you by electronic and paper mail. | |
490 | ||
491 | You should also get your employer (if you work as a programmer) or your | |
492 | school, if any, to sign a "copyright disclaimer" for the library, if | |
493 | necessary. Here is a sample; alter the names: | |
494 | ||
495 | Yoyodyne, Inc., hereby disclaims all copyright interest in the | |
496 | library `Frob' (a library for tweaking knobs) written by James Random Hacker. | |
497 | ||
498 | <signature of Ty Coon>, 1 April 1990 | |
499 | Ty Coon, President of Vice | |
500 | ||
501 | That's all there is to it! | |
502 | ||
503 |
0 | include README.txt | |
1 | include MANIFEST.in | |
2 | include DEPENDENCIES.txt | |
3 | include FAQ.txt | |
4 | include LICENSE.txt | |
5 | recursive-include tests * | |
6 | recursive-include docs * |
0 | Metadata-Version: 1.1 | |
1 | Name: Rtree | |
2 | Version: 0.8.2 | |
3 | Summary: R-Tree spatial index for Python GIS | |
4 | Home-page: http://toblerity.github.com/rtree/ | |
5 | Author: Howard Butler | |
6 | Author-email: hobu@hobu.net | |
7 | License: LGPL | |
8 | Description: Rtree: Spatial indexing for Python | |
9 | ------------------------------------------------------------------------------ | |
10 | ||
11 | `Rtree`_ is a `ctypes`_ Python wrapper of `libspatialindex`_ that provides a | |
12 | number of advanced spatial indexing features for the spatially curious Python | |
13 | user. These features include: | |
14 | ||
15 | * Nearest neighbor search | |
16 | * Intersection search | |
17 | * Multi-dimensional indexes | |
18 | * Clustered indexes (store Python pickles directly with index entries) | |
19 | * Bulk loading | |
20 | * Deletion | |
21 | * Disk serialization | |
22 | * Custom storage implementation (to implement spatial indexing in ZODB, for example) | |
23 | ||
24 | Documentation and Website | |
25 | .............................................................................. | |
26 | ||
27 | http://toblerity.github.com/rtree/ | |
28 | ||
29 | Requirements | |
30 | .............................................................................. | |
31 | ||
32 | * `libspatialindex`_ 1.7.0+. | |
33 | ||
34 | Download | |
35 | .............................................................................. | |
36 | ||
37 | * PyPI http://pypi.python.org/pypi/Rtree/ | |
38 | * Windows binaries http://www.lfd.uci.edu/~gohlke/pythonlibs/#rtree | |
39 | ||
40 | Development | |
41 | .............................................................................. | |
42 | ||
43 | * https://github.com/Toblerity/Rtree | |
44 | ||
45 | .. _`R-trees`: http://en.wikipedia.org/wiki/R-tree | |
46 | .. _`ctypes`: http://docs.python.org/library/ctypes.html | |
47 | .. _`libspatialindex`: http://libspatialindex.github.com | |
48 | .. _`Rtree`: http://toblerity.github.com/rtree/ | |
49 | ||
50 | Keywords: gis spatial index r-tree | |
51 | Platform: UNKNOWN | |
52 | Classifier: Development Status :: 5 - Production/Stable | |
53 | Classifier: Intended Audience :: Developers | |
54 | Classifier: Intended Audience :: Science/Research | |
55 | Classifier: License :: OSI Approved :: GNU Library or Lesser General Public License (LGPL) | |
56 | Classifier: Operating System :: OS Independent | |
57 | Classifier: Programming Language :: C | |
58 | Classifier: Programming Language :: C++ | |
59 | Classifier: Programming Language :: Python | |
60 | Classifier: Topic :: Scientific/Engineering :: GIS | |
61 | Classifier: Topic :: Database |
0 | # Makefile for Sphinx documentation | |
1 | # | |
2 | ||
3 | # You can set these variables from the command line. | |
4 | SPHINXOPTS = | |
5 | SPHINXBUILD = sphinx-build | |
6 | PAPER = | |
7 | ||
8 | # Internal variables. | |
9 | PAPEROPT_a4 = -D latex_paper_size=a4 | |
10 | PAPEROPT_letter = -D latex_paper_size=letter | |
11 | ALLSPHINXOPTS = -d build/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source | |
12 | ||
13 | .PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex changes linkcheck doctest | |
14 | ||
15 | help: | |
16 | @echo "Please use \`make <target>' where <target> is one of" | |
17 | @echo " html to make standalone HTML files" | |
18 | @echo " dirhtml to make HTML files named index.html in directories" | |
19 | @echo " pickle to make pickle files" | |
20 | @echo " json to make JSON files" | |
21 | @echo " htmlhelp to make HTML files and a HTML help project" | |
22 | @echo " qthelp to make HTML files and a qthelp project" | |
23 | @echo " devhelp to make HTML files and a Devhelp project" | |
24 | @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" | |
25 | @echo " latex_paper_size to make LaTeX files and run them through pdflatex" | |
26 | @echo " changes to make an overview of all changed/added/deprecated items" | |
27 | @echo " linkcheck to check all external links for integrity" | |
28 | @echo " doctest to run all doctests embedded in the documentation (if enabled)" | |
29 | ||
30 | clean: | |
31 | -rm -rf build/* | |
32 | ||
33 | html: | |
34 | $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) build/html | |
35 | @echo | |
36 | @echo "Build finished. The HTML pages are in build/html." | |
37 | ||
38 | dirhtml: | |
39 | $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) build/dirhtml | |
40 | @echo | |
41 | @echo "Build finished. The HTML pages are in build/dirhtml." | |
42 | ||
43 | pickle: | |
44 | $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) build/pickle | |
45 | @echo | |
46 | @echo "Build finished; now you can process the pickle files." | |
47 | ||
48 | json: | |
49 | $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) build/json | |
50 | @echo | |
51 | @echo "Build finished; now you can process the JSON files." | |
52 | ||
53 | htmlhelp: | |
54 | $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) build/htmlhelp | |
55 | @echo | |
56 | @echo "Build finished; now you can run HTML Help Workshop with the" \ | |
57 | ".hhp project file in build/htmlhelp." | |
58 | ||
59 | qthelp: | |
60 | $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) build/qthelp | |
61 | @echo | |
62 | @echo "Build finished; now you can run "qcollectiongenerator" with the" \ | |
63 | ".qhcp project file in build/qthelp, like this:" | |
64 | @echo "# qcollectiongenerator build/qthelp/Rtree.qhcp" | |
65 | @echo "To view the help file:" | |
66 | @echo "# assistant -collectionFile build/qthelp/Rtree.qhc" | |
67 | ||
68 | devhelp: | |
69 | $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) build/devhelp | |
70 | @echo | |
71 | @echo "Build finished." | |
72 | @echo "To view the help file:" | |
73 | @echo "# mkdir -p $$HOME/.local/share/devhelp/Rtree" | |
74 | @echo "# ln -s build/devhelp $$HOME/.local/share/devhelp/Rtree" | |
75 | @echo "# devhelp" | |
76 | ||
77 | latex: | |
78 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) build/latex | |
79 | @echo | |
80 | @echo "Build finished; the LaTeX files are in build/latex." | |
81 | @echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \ | |
82 | "run these through (pdf)latex." | |
83 | ||
84 | latexpdf: latex | |
85 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) build/latex | |
86 | @echo "Running LaTeX files through pdflatex..." | |
87 | make -C build/latex all-pdf | |
88 | @echo "pdflatex finished; the PDF files are in build/latex." | |
89 | ||
90 | changes: | |
91 | $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) build/changes | |
92 | @echo | |
93 | @echo "The overview file is in build/changes." | |
94 | ||
95 | linkcheck: | |
96 | $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) build/linkcheck | |
97 | @echo | |
98 | @echo "Link check complete; look for any errors in the above output " \ | |
99 | "or in build/linkcheck/output.txt." | |
100 | ||
101 | doctest: | |
102 | $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) build/doctest | |
103 | @echo "Testing of doctests in the sources finished, look at the " \ | |
104 | "results in build/doctest/output.txt." | |
105 | ||
106 | pdf: | |
107 | $(SPHINXBUILD) -b pdf $(ALLSPHINXOPTS) build/pdf | |
108 | @echo | |
109 | @echo "Build finished; now you can process the PDF files." |
0 | Rtree: Spatial indexing for Python | |
1 | ------------------------------------------------------------------------------ | |
2 | ||
3 | `Rtree`_ is a `ctypes`_ Python wrapper of `libspatialindex`_ that provides a | |
4 | number of advanced spatial indexing features for the spatially curious Python | |
5 | user. These features include: | |
6 | ||
7 | * Nearest neighbor search | |
8 | * Intersection search | |
9 | * Multi-dimensional indexes | |
10 | * Clustered indexes (store Python pickles directly with index entries) | |
11 | * Bulk loading | |
12 | * Deletion | |
13 | * Disk serialization | |
14 | * Custom storage implementation (to implement spatial indexing in ZODB, for example) | |
15 | ||
16 | Documentation and Website | |
17 | .............................................................................. | |
18 | ||
19 | http://toblerity.github.com/rtree/ | |
20 | ||
21 | Requirements | |
22 | .............................................................................. | |
23 | ||
24 | * `libspatialindex`_ 1.7.0+. | |
25 | ||
26 | Download | |
27 | .............................................................................. | |
28 | ||
29 | * PyPI http://pypi.python.org/pypi/Rtree/ | |
30 | * Windows binaries http://www.lfd.uci.edu/~gohlke/pythonlibs/#rtree | |
31 | ||
32 | Development | |
33 | .............................................................................. | |
34 | ||
35 | * https://github.com/Toblerity/Rtree | |
36 | ||
37 | .. _`R-trees`: http://en.wikipedia.org/wiki/R-tree | |
38 | .. _`ctypes`: http://docs.python.org/library/ctypes.html | |
39 | .. _`libspatialindex`: http://libspatialindex.github.com | |
40 | .. _`Rtree`: http://toblerity.github.com/rtree/ |
0 | .. _changes: | |
1 | ||
2 | Changes | |
3 | .............................................................................. | |
4 | ||
5 | 0.8: 2014-07-17 | |
6 | =============== | |
7 | ||
8 | - Support for Python 3 added. | |
9 | ||
10 | 0.7.0: 2011-12-29 | |
11 | ================= | |
12 | ||
13 | - 0.7.0 relies on libspatialindex 1.7.1+. | |
14 | - int64_t's should be used for IDs instead of uint64_t (requires libspatialindex 1.7.1 C API changes) | |
15 | - Fix __version__ | |
16 | - More documentation at http://toblerity.github.com/rtree/ | |
17 | - Class documentation at http://toblerity.github.com/rtree/class.html | |
18 | - Tweaks for PyPy compatibility. Still not compatible yet, however. | |
19 | - Custom storage support by Mattias (requires libspatialindex 1.7.1) | |
20 | ||
21 | 0.6.0: 2010-04-13 | |
22 | ================= | |
23 | ||
24 | - 0.6.0 relies on libspatialindex 1.5.0+. | |
25 | - :py:meth:`~rtree.index.Index.intersection` and :py:meth:`~rtree.index.Index.nearest` methods return iterators over results instead of | |
26 | lists. | |
27 | - Number of results for :py:meth:`~rtree.index.Index.nearest` defaults to 1. | |
28 | - libsidx C library of 0.5.0 removed and included in libspatialindex | |
29 | - objects="raw" in :py:meth:`~rtree.index.Index.intersection` to return the object sent in (for speed). | |
30 | - :py:meth:`~rtree.index.Index.count` method to return the intersection count without the overhead | |
31 | of returning a list (thanks Leonard Norrgård). | |
32 | - Improved bulk loading performance | |
33 | - Supposedly no memory leaks :) | |
34 | - Many other performance tweaks (see docs). | |
35 | - Bulk loader supports interleaved coordinates | |
36 | - Leaf queries. You can return the box and ids of the leaf nodes of the index. | |
37 | Useful for visualization, etc. | |
38 | - Many more docstrings, sphinx docs, etc | |
39 | ||
40 | ||
41 | 0.5.0: 2009-08-XX | |
42 | ================= | |
43 | ||
44 | 0.5.0 was a complete refactoring to use libsidx - a C API for libspatialindex. | |
45 | The code is now ctypes over libsidx, and a number of new features are now | |
46 | available as a result of this refactoring. | |
47 | ||
48 | * ability to store pickles within the index (clustered index) | |
49 | * ability to use custom extension names for disk-based indexes | |
50 | * ability to modify many index parameters at instantiation time | |
51 | * storage of point data reduced by a factor of 4 | |
52 | * bulk loading of indexes at instantiation time | |
53 | * ability to quickly return the bounds of the entire index | |
54 | * ability to return the bounds of index entries | |
55 | * much better windows support | |
56 | * libspatialindex 1.4.0 required. | |
57 | ||
58 | 0.4.3: 2009-06-05 | |
59 | ================= | |
60 | - Fix reference counting leak #181 | |
61 | ||
62 | 0.4.2: 2009-05-25 | |
63 | ================= | |
64 | - Windows support | |
65 | ||
66 | 0.4.1: 2008-03-24 | |
67 | ================= | |
68 | ||
69 | - Eliminate uncounted references in add, delete, nearestNeighbor (#157). | |
70 | ||
71 | 0.4: 2008-01-24 | |
72 | =============== | |
73 | ||
74 | - Testing improvements. | |
75 | - Switch dependency to the single consolidated spatialindex library (1.3). | |
76 | ||
77 | 0.3: 26 November 2007 | |
78 | ===================== | |
79 | - Change to Python long integer identifiers (#126). | |
80 | - Allow deletion of objects from indexes. | |
81 | - Reraise index query errors as Python exceptions. | |
82 | - Improved persistence. | |
83 | ||
84 | 0.2: | |
85 | ================== | |
86 | - Link spatialindex system library. | |
87 | ||
88 | 0.1: 13 April 2007 | |
89 | ================== | |
90 | - Add disk storage option for indexes (#320). | |
91 | - Change license to LGPL. | |
92 | - Moved from Pleiades to GIS-Python repo. | |
93 | - Initial release. | |
94 |
0 | .. _class: | |
1 | ||
2 | Class Documentation | |
3 | ------------------------------------------------------------------------------ | |
4 | ||
5 | .. autoclass:: rtree.index.Index | |
6 | :members: __init__, insert, intersection, nearest, delete, bounds, count, close, dumps, loads, interleaved | |
7 | ||
8 | .. autoclass:: rtree.index.Property | |
9 | :members: | |
10 | ||
11 | .. autoclass:: rtree.index.Item | |
12 | :members: __init__, bbox, object⏎ |
0 | # -*- coding: utf-8 -*- | |
1 | # | |
2 | # Rtree documentation build configuration file, created by | |
3 | # sphinx-quickstart on Tue Aug 18 13:21:07 2009. | |
4 | # | |
5 | # This file is execfile()d with the current directory set to its containing dir. | |
6 | # | |
7 | # Note that not all possible configuration values are present in this | |
8 | # autogenerated file. | |
9 | # | |
10 | # All configuration values have a default; values that are commented out | |
11 | # serve to show the default. | |
12 | ||
13 | import sys, os | |
14 | sys.path.append('../../') | |
15 | ||
16 | import rtree | |
17 | ||
18 | # If extensions (or modules to document with autodoc) are in another directory, | |
19 | # add these directories to sys.path here. If the directory is relative to the | |
20 | # documentation root, use os.path.abspath to make it absolute, like shown here. | |
21 | #sys.path.append(os.path.abspath('.')) | |
22 | ||
23 | # -- General configuration ----------------------------------------------------- | |
24 | ||
25 | # Add any Sphinx extension module names here, as strings. They can be extensions | |
26 | # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. | |
27 | extensions = ['sphinx.ext.autodoc', 'sphinx.ext.doctest', 'sphinx.ext.intersphinx', 'sphinx.ext.todo', 'sphinx.ext.coverage', 'sphinx.ext.ifconfig'] | |
28 | ||
29 | # Add any paths that contain templates here, relative to this directory. | |
30 | templates_path = ['_templates'] | |
31 | ||
32 | # The suffix of source filenames. | |
33 | source_suffix = '.txt' | |
34 | ||
35 | # The encoding of source files. | |
36 | #source_encoding = 'utf-8-sig' | |
37 | ||
38 | # The master toctree document. | |
39 | master_doc = 'index' | |
40 | ||
41 | # General information about the project. | |
42 | project = u'Rtree' | |
43 | copyright = u'2011, Howard Butler, Brent Pedersen, Sean Gilles, and others.' | |
44 | ||
45 | # The version info for the project you're documenting, acts as replacement for | |
46 | # |version| and |release|, also used in various other places throughout the | |
47 | # built documents. | |
48 | # | |
49 | # The short X.Y version. | |
50 | version = rtree.__version__ | |
51 | # The full version, including alpha/beta/rc tags. | |
52 | release = rtree.__version__ | |
53 | ||
54 | # The language for content autogenerated by Sphinx. Refer to documentation | |
55 | # for a list of supported languages. | |
56 | #language = None | |
57 | ||
58 | # There are two options for replacing |today|: either, you set today to some | |
59 | # non-false value, then it is used: | |
60 | #today = '' | |
61 | # Else, today_fmt is used as the format for a strftime call. | |
62 | #today_fmt = '%B %d, %Y' | |
63 | ||
64 | # List of documents that shouldn't be included in the build. | |
65 | #unused_docs = [] | |
66 | ||
67 | # List of directories, relative to source directory, that shouldn't be searched | |
68 | # for source files. | |
69 | exclude_trees = [] | |
70 | ||
71 | # The reST default role (used for this markup: `text`) to use for all documents. | |
72 | #default_role = None | |
73 | ||
74 | # If true, '()' will be appended to :func: etc. cross-reference text. | |
75 | #add_function_parentheses = True | |
76 | ||
77 | # If true, the current module name will be prepended to all description | |
78 | # unit titles (such as .. function::). | |
79 | #add_module_names = True | |
80 | ||
81 | # If true, sectionauthor and moduleauthor directives will be shown in the | |
82 | # output. They are ignored by default. | |
83 | #show_authors = False | |
84 | ||
85 | # The name of the Pygments (syntax highlighting) style to use. | |
86 | pygments_style = 'sphinx' | |
87 | ||
88 | # A list of ignored prefixes for module index sorting. | |
89 | #modindex_common_prefix = [] | |
90 | ||
91 | ||
92 | # -- Options for HTML output --------------------------------------------------- | |
93 | ||
94 | # The theme to use for HTML and HTML Help pages. Major themes that come with | |
95 | # Sphinx are currently 'default' and 'sphinxdoc'. | |
96 | html_theme = 'nature' | |
97 | ||
98 | # Theme options are theme-specific and customize the look and feel of a theme | |
99 | # further. For a list of options available for each theme, see the | |
100 | # documentation. | |
101 | #html_theme_options = {} | |
102 | ||
103 | # Add any paths that contain custom themes here, relative to this directory. | |
104 | #html_theme_path = [] | |
105 | ||
106 | # The name for this set of Sphinx documents. If None, it defaults to | |
107 | # "<project> v<release> documentation". | |
108 | #html_title = None | |
109 | ||
110 | # A shorter title for the navigation bar. Default is the same as html_title. | |
111 | #html_short_title = None | |
112 | ||
113 | # The name of an image file (relative to this directory) to place at the top | |
114 | # of the sidebar. | |
115 | #html_logo = None | |
116 | ||
117 | # The name of an image file (within the static path) to use as favicon of the | |
118 | # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 | |
119 | # pixels large. | |
120 | #html_favicon = None | |
121 | ||
122 | # Add any paths that contain custom static files (such as style sheets) here, | |
123 | # relative to this directory. They are copied after the builtin static files, | |
124 | # so a file named "default.css" will overwrite the builtin "default.css". | |
125 | html_static_path = ['_static'] | |
126 | ||
127 | # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, | |
128 | # using the given strftime format. | |
129 | #html_last_updated_fmt = '%b %d, %Y' | |
130 | ||
131 | # If true, SmartyPants will be used to convert quotes and dashes to | |
132 | # typographically correct entities. | |
133 | #html_use_smartypants = True | |
134 | ||
135 | # Custom sidebar templates, maps document names to template names. | |
136 | #html_sidebars = {} | |
137 | ||
138 | # Additional templates that should be rendered to pages, maps page names to | |
139 | # template names. | |
140 | #html_additional_pages = {} | |
141 | ||
142 | # If false, no module index is generated. | |
143 | #html_use_modindex = True | |
144 | ||
145 | # If false, no index is generated. | |
146 | #html_use_index = True | |
147 | ||
148 | # If true, the index is split into individual pages for each letter. | |
149 | #html_split_index = False | |
150 | ||
151 | # If true, links to the reST sources are added to the pages. | |
152 | #html_show_sourcelink = True | |
153 | ||
154 | # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. | |
155 | #html_show_sphinx = True | |
156 | ||
157 | # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. | |
158 | #html_show_copyright = True | |
159 | ||
160 | # If true, an OpenSearch description file will be output, and all pages will | |
161 | # contain a <link> tag referring to it. The value of this option must be the | |
162 | # base URL from which the finished HTML is served. | |
163 | #html_use_opensearch = '' | |
164 | ||
165 | # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml"). | |
166 | #html_file_suffix = '' | |
167 | ||
168 | # Output file base name for HTML help builder. | |
169 | htmlhelp_basename = 'Rtreedoc' | |
170 | ||
171 | ||
172 | # -- Options for LaTeX output -------------------------------------------------- | |
173 | ||
174 | # The paper size ('letter' or 'a4'). | |
175 | #latex_paper_size = 'letter' | |
176 | ||
177 | # The font size ('10pt', '11pt' or '12pt'). | |
178 | #latex_font_size = '10pt' | |
179 | ||
180 | # Grouping the document tree into LaTeX files. List of tuples | |
181 | # (source start file, target name, title, author, documentclass [howto/manual]). | |
182 | latex_documents = [ | |
183 | ('index', 'Rtree.tex', u'Rtree Documentation', | |
184 | u'Sean Gilles', 'manual'), | |
185 | ] | |
186 | ||
187 | # The name of an image file (relative to this directory) to place at the top of | |
188 | # the title page. | |
189 | #latex_logo = None | |
190 | ||
191 | # For "manual" documents, if this is true, then toplevel headings are parts, | |
192 | # not chapters. | |
193 | #latex_use_parts = False | |
194 | ||
195 | # Additional stuff for the LaTeX preamble. | |
196 | #latex_preamble = '' | |
197 | ||
198 | # Documents to append as an appendix to all manuals. | |
199 | #latex_appendices = [] | |
200 | ||
201 | # If false, no module index is generated. | |
202 | #latex_use_modindex = True | |
203 | ||
204 | pdf_documents = [ | |
205 | ('index', u'Rtree', u'Rtree Documentation', u'The Rtree Team'), | |
206 | ] | |
207 | ||
208 | # A comma-separated list of custom stylesheets. Example: | |
209 | pdf_language = "en_US" | |
210 | pdf_fit_mode = "overflow" | |
211 | ||
212 | # Example configuration for intersphinx: refer to the Python standard library. | |
213 | intersphinx_mapping = {'http://docs.python.org/': None} | |
214 | ||
215 | # Example configuration for intersphinx: refer to the Python standard library. | |
216 | intersphinx_mapping = {'http://docs.python.org/': None} |
0 | .. _history: | |
1 | ||
2 | History of Rtree | |
3 | ------------------------------------------------------------------------------ | |
4 | ||
5 | `Rtree`_ was started by `Sean Gillies`_ as a port of the `libspatialindex`_ | |
6 | linkages that `QGIS`_ maintained to provide on-the-fly indexing support for | |
7 | GUI operations. A notable feature of `R-trees`_ is the ability to insert data | |
8 | into the structure without the need for a global partitioning bounds, and this | |
9 | drove Sean's adoption of this code. `Howard Butler`_ later picked up `Rtree`_ | |
10 | and added a number of features that `libspatialindex`_ provided including disk | |
11 | serialization and bulk loading by writing a C API for `libspatialindex`_ and | |
12 | re-writing `Rtree`_ as a `ctypes`_ wrapper to utilize this C API. `Brent | |
13 | Pedersen`_ came along and added features to support alternative coordinate | |
14 | ordering, augmentation of the pickle storage, and lots of documentation. | |
15 | Mattias (http://dr-code.org) added support for custom storage backends to | |
16 | support using `Rtree`_ as an indexing type in `ZODB`_. | |
17 | ||
18 | `Rtree`_ has gone through a number of iterations, and at | |
19 | 0.5.0, it was completely refactored to use a new internal architecture (ctypes | |
20 | + a C API over `libspatialindex`_). This refactoring has resulted in a number | |
21 | of new features and much more flexibility. See :ref:`changes` for more detail. | |
22 | ||
23 | .. note:: | |
24 | A significant bug in the 1.6.1+ `libspatialindex`_ C API was found where | |
25 | it was using unsigned integers for index entry IDs instead of signed | |
26 | integers. Because `Rtree`_ appeared to be the only significant user of the | |
27 | C API at this time, it was corrected immediately. You should update | |
28 | immediately and re-insert data into new indexes if this is an important | |
29 | consideration for your application. | |
30 | ||
31 | Rtree 0.5.0 included a C library that is now the C API for libspatialindex and | |
32 | is part of that source tree. The code bases are independent from each other | |
33 | and can now evolve separately. Rtree is pure Python as of 0.6.0+. | |
34 | ||
35 | ||
36 | .. _`Sean Gillies`: http://sgillies.net/blog/ | |
37 | .. _`Howard Butler`: http://hobu.biz | |
38 | .. _`Brent Pedersen`: http://hackmap.blogspot.com/ | |
39 | .. _`QGIS`: http://qgis.org | |
40 | ||
41 | ||
42 | .. _`ZODB`: http://www.zodb.org/ | |
43 | .. _`R-trees`: http://en.wikipedia.org/wiki/R-tree | |
44 | .. _`ctypes`: http://docs.python.org/library/ctypes.html | |
45 | .. _`libspatialindex`: http://libspatialindex.github.com | |
46 | .. _`Rtree`: http://rtree.github.com⏎ |
0 | .. _home: | |
1 | ||
2 | Rtree: Spatial indexing for Python | |
3 | ------------------------------------------------------------------------------ | |
4 | ||
5 | `Rtree`_ is a `ctypes`_ Python wrapper of `libspatialindex`_ that provides a | |
6 | number of advanced spatial indexing features for the spatially curious Python | |
7 | user. These features include: | |
8 | ||
9 | * Nearest neighbor search | |
10 | * Intersection search | |
11 | * Multi-dimensional indexes | |
12 | * Clustered indexes (store Python pickles directly with index entries) | |
13 | * Bulk loading | |
14 | * Deletion | |
15 | * Disk serialization | |
16 | * Custom storage implementation (to implement spatial indexing in ZODB, for example) | |
17 | ||
18 | Documentation | |
19 | .............................................................................. | |
20 | ||
21 | .. toctree:: | |
22 | :maxdepth: 2 | |
23 | ||
24 | install | |
25 | tutorial | |
26 | Mailing List <http://lists.gispython.org/mailman/listinfo/community> | |
27 | class | |
28 | changes | |
29 | performance | |
30 | examples | |
31 | history | |
32 | ||
33 | * :ref:`genindex` | |
34 | * :ref:`modindex` | |
35 | * :ref:`search` | |
36 | ||
37 | .. _`R-trees`: http://en.wikipedia.org/wiki/R-tree | |
38 | .. _`ctypes`: http://docs.python.org/library/ctypes.html | |
39 | .. _`libspatialindex`: http://libspatialindex.github.com | |
40 | .. _`Rtree`: http://toblerity.github.com/rtree/ |
0 | .. _installation: | |
1 | ||
2 | Installation | |
3 | ------------------------------------------------------------------------------ | |
4 | ||
5 | \*nix | |
6 | .............................................................................. | |
7 | ||
8 | First, download and install version 1.7.0 of the `libspatialindex`_ library from: | |
9 | ||
10 | http://libspatialindex.github.com | |
11 | ||
12 | The library is a GNU-style build, so it is a matter of:: | |
13 | ||
14 | $ ./configure; make; make install | |
15 | ||
16 | You may need to run the ``ldconfig`` command after installing the library to | |
17 | ensure that applications can find it at startup time. | |
18 | ||
19 | At this point you can get Rtree 0.7.0 via easy_install:: | |
20 | ||
21 | $ easy_install Rtree | |
22 | ||
23 | or by running the local setup.py:: | |
24 | ||
25 | $ python setup.py install | |
26 | ||
27 | You can build and test in place like:: | |
28 | ||
29 | $ python setup.py test | |
30 | ||
31 | Windows | |
32 | .............................................................................. | |
33 | ||
34 | The Windows DLLs of `libspatialindex`_ are pre-compiled in | |
35 | windows installers that are available from `PyPI`_. Installation on Windows | |
36 | is as easy as:: | |
37 | ||
38 | c:\python2x\scripts\easy_install.exe Rtree | |
39 | ||
40 | ||
41 | .. _`PyPI`: http://pypi.python.org/pypi/Rtree/ | |
42 | .. _`Rtree`: http://pypi.python.org/pypi/Rtree/ | |
43 | ||
44 | .. _`libspatialindex`: http://libspatialindex.github.com | |
45 |
0 | .. _performance: | |
1 | ||
2 | Performance | |
3 | ------------------------------------------------------------------------------ | |
4 | ||
5 | See the `tests/benchmarks.py`_ file for a comparison of various query methods | |
6 | and how much acceleration can be obtained from using Rtree. | |
7 | ||
8 | .. _tests/benchmarks.py: https://raw.github.com/Rtree/Rtree/master/tests/benchmarks.py | |
9 | ||
10 | There are a few simple things that will improve performance. | |
11 | ||
12 | Use stream loading | |
13 | .............................................................................. | |
14 | ||
15 | This will substantially (orders of magnitude in many cases) improve | |
16 | performance over :py:meth:`~rtree.index.Index.insert` by allowing the data to | |
17 | be pre-sorted | |
18 | ||
19 | :: | |
20 | ||
21 | >>> def generator_function(): | |
22 | ... for i, obj in enumerate(somedata): | |
23 | ... yield (i, (obj.xmin, obj.ymin, obj.xmax, obj.ymax), obj) | |
24 | >>> r = index.Index(generator_function()) | |
25 | ||
26 | After bulk loading the index, you can then insert additional records into | |
27 | the index using :py:meth:`~rtree.index.Index.insert` | |
28 | ||
29 | Override :py:data:`~rtree.index.Index.dumps` to use the highest pickle protocol | |
30 | ............................................................................... | |
31 | ||
32 | :: | |
33 | ||
34 | >>> import cPickle, rtree | |
35 | >>> class FastRtree(rtree.Rtree): | |
36 | ... def dumps(self, obj): | |
37 | ... return cPickle.dumps(obj, -1) | |
38 | >>> r = FastRtree() | |
39 | ||
40 | ||
41 | Use objects='raw' | |
42 | ............................................................................... | |
43 | ||
44 | In any :py:meth:`~rtree.index.Index.intersection` or | |
45 | :py:meth:`~rtree.index.Index.nearest` or query, use objects='raw' keyword | |
46 | argument :: | |
47 | ||
48 | >>> objs = r.intersection((xmin, ymin, xmax, ymax), objects="raw") | |
49 | ||
50 | ||
51 | Adjust index properties | |
52 | ............................................................................... | |
53 | ||
54 | Adjust :py:class:`rtree.index.Property` appropriate to your index. | |
55 | ||
56 | * Set your :py:data:`~rtree.index.Property.leaf_capacity` to a higher value | |
57 | than the default 100. 1000+ is fine for the default pagesize of 4096 in | |
58 | many cases. | |
59 | ||
60 | * Increase the :py:data:`~rtree.index.Property.fill_factor` to something | |
61 | near 0.9. Smaller fill factors mean more splitting, which means more | |
62 | nodes. This may be bad or good depending on your usage. | |
63 | ||
64 | Limit dimensionality to the amount you need | |
65 | ............................................................................... | |
66 | ||
67 | Don't use more dimensions than you actually need. If you only need 2, only use | |
68 | two. Otherwise, you will waste lots of storage and add that many more floating | |
69 | point comparisons for each query, search, and insert operation of the index. | |
70 | ||
71 | Use the correct query method | |
72 | ............................................................................... | |
73 | ||
74 | Use :py:meth:`~rtree.index.Index.count` if you only need a count and | |
75 | :py:meth:`~rtree.index.Index.intersection` if you only need the ids. | |
76 | Otherwise, lots of data may potentially be copied. | |
77 |
0 | .. _tutorial: | |
1 | ||
2 | Tutorial | |
3 | ------------------------------------------------------------------------------ | |
4 | ||
5 | This tutorial demonstrates how to take advantage of :ref:`Rtree <home>` for | |
6 | querying data that have a spatial component that can be modeled as bounding | |
7 | boxes. | |
8 | ||
9 | ||
10 | Creating an index | |
11 | .............................................................................. | |
12 | ||
13 | The following section describes the basic instantiation and usage of | |
14 | :ref:`Rtree <home>`. | |
15 | ||
16 | Import | |
17 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | |
18 | ||
19 | After :ref:`installing <installation>` :ref:`Rtree <home>`, you should be able to | |
20 | open up a Python prompt and issue the following:: | |
21 | ||
22 | >>> from rtree import index | |
23 | ||
24 | :py:mod:`rtree` is organized as a Python package with a couple of modules | |
25 | and two major classes - :py:class:`rtree.index.Index` and | |
26 | :py:class:`rtree.index.Property`. Users manipulate these classes to interact | |
27 | with the index. | |
28 | ||
29 | Construct an instance | |
30 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | |
31 | ||
32 | After importing the index module, construct an index with the default | |
33 | construction:: | |
34 | ||
35 | >>> idx = index.Index() | |
36 | ||
37 | .. note:: | |
38 | ||
39 | While the default construction is useful in many cases, if you want to | |
40 | manipulate how the index is constructed you will need pass in a | |
41 | :py:class:`rtree.index.Property` instance when creating the index. | |
42 | ||
43 | Create a bounding box | |
44 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | |
45 | ||
46 | After instantiating the index, create a bounding box that we can | |
47 | insert into the index:: | |
48 | ||
49 | >>> left, bottom, right, top = (0.0, 0.0, 1.0, 1.0) | |
50 | ||
51 | .. note:: | |
52 | ||
53 | The coordinate ordering for all functions are sensitive the the index's | |
54 | :py:attr:`~rtree.index.Index.interleaved` data member. If | |
55 | :py:attr:`~rtree.index.Index.interleaved` is False, the coordinates must | |
56 | be in the form [xmin, xmax, ymin, ymax, ..., ..., kmin, kmax]. If | |
57 | :py:attr:`~rtree.index.Index.interleaved` is True, the coordinates must be | |
58 | in the form [xmin, ymin, ..., kmin, xmax, ymax, ..., kmax]. | |
59 | ||
60 | Insert records into the index | |
61 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | |
62 | ||
63 | Insert an entry into the index:: | |
64 | ||
65 | >>> idx.insert(0, (left, bottom, right, top)) | |
66 | ||
67 | .. note:: | |
68 | ||
69 | Entries that are inserted into the index are not unique in either the | |
70 | sense of the `id` or of the bounding box that is inserted with index | |
71 | entries. If you need to maintain uniqueness, you need to manage that before | |
72 | inserting entries into the Rtree. | |
73 | ||
74 | .. note:: | |
75 | ||
76 | Inserting a point, i.e. where left == right && top == bottom, will | |
77 | essentially insert a single point entry into the index instead of copying | |
78 | extra coordinates and inserting them. There is no shortcut to explicitly | |
79 | insert a single point, however. | |
80 | ||
81 | Query the index | |
82 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | |
83 | ||
84 | There are three primary methods for querying the index. | |
85 | :py:meth:`rtree.index.Index.intersection` will return you index entries that | |
86 | *cross* or are *contained* within the given query window. | |
87 | :py:meth:`rtree.index.Index.intersection` | |
88 | ||
89 | Intersection | |
90 | ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ | |
91 | ||
92 | Given a query window, return ids that are contained within the window:: | |
93 | ||
94 | >>> list(idx.intersection((1.0, 1.0, 2.0, 2.0))) | |
95 | [0] | |
96 | ||
97 | Given a query window that is beyond the bounds of data we have in the | |
98 | index:: | |
99 | ||
100 | >>> list(idx.intersection((1.0000001, 1.0000001, 2.0, 2.0))) | |
101 | [] | |
102 | ||
103 | Nearest Neighbors | |
104 | ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ | |
105 | ||
106 | The following finds the 1 nearest item to the given bounds. If multiple items | |
107 | are of equal distance to the bounds, both are returned:: | |
108 | ||
109 | >>> idx.insert(1, (left, bottom, right, top)) | |
110 | >>> list(idx.nearest((1.0000001, 1.0000001, 2.0, 2.0), 1)) | |
111 | [0, 1] | |
112 | ||
113 | ||
114 | .. _clustered: | |
115 | ||
116 | Using Rtree as a cheapo spatial database | |
117 | .............................................................................. | |
118 | ||
119 | Rtree also supports inserting any object you can pickle into the index (called | |
120 | a clustered index in `libspatialindex`_ parlance). The following inserts the | |
121 | picklable object ``42`` into the index with the given id:: | |
122 | ||
123 | >>> index.insert(id=id, bounds=(left, bottom, right, top), obj=42) | |
124 | ||
125 | You can then return a list of objects by giving the ``objects=True`` flag | |
126 | to intersection:: | |
127 | ||
128 | >>> [n.object for n in idx.intersection((left, bottom, right, top), objects=True)] | |
129 | [None, None, 42] | |
130 | ||
131 | .. warning:: | |
132 | `libspatialindex`_'s clustered indexes were not designed to be a database. | |
133 | You get none of the data integrity protections that a database would | |
134 | purport to offer, but this behavior of :ref:`Rtree <home>` can be useful | |
135 | nonetheless. Consider yourself warned. Now go do cool things with it. | |
136 | ||
137 | Serializing your index to a file | |
138 | .............................................................................. | |
139 | ||
140 | One of :ref:`Rtree <home>`'s most useful properties is the ability to | |
141 | serialize Rtree indexes to disk. These include the clustered indexes | |
142 | described :ref:`here <clustered>`:: | |
143 | ||
144 | >>> file_idx = index.Rtree('rtree') | |
145 | >>> file_idx.insert(1, (left, bottom, right, top)) | |
146 | >>> file_idx.insert(2, (left - 1.0, bottom - 1.0, right + 1.0, top + 1.0)) | |
147 | >>> [n for n in file_idx.intersection((left, bottom, right, top))] | |
148 | [1, 2] | |
149 | ||
150 | .. note:: | |
151 | ||
152 | By default, if an index file with the given name `rtree` in the example | |
153 | above already exists on the file system, it will be opened in append mode | |
154 | and not be re-created. You can control this behavior with the | |
155 | :py:attr:`rtree.index.Property.overwrite` property of the index property | |
156 | that can be given to the :py:class:`rtree.index.Index` constructor. | |
157 | ||
158 | .. seealso:: | |
159 | ||
160 | :ref:`performance` describes some parameters you can tune to make | |
161 | file-based indexes run a bit faster. The choices you make for the | |
162 | parameters is entirely dependent on your usage. | |
163 | ||
164 | Modifying file names | |
165 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | |
166 | ||
167 | Rtree uses the extensions `dat` and `idx` by default for the two index files | |
168 | that are created when serializing index data to disk. These file extensions | |
169 | are controllable using the :py:attr:`rtree.index.Property.dat_extension` and | |
170 | :py:attr:`rtree.index.Property.idx_extension` index properties. | |
171 | ||
172 | :: | |
173 | ||
174 | >>> p = rtree.index.Property() | |
175 | >>> p.dat_extension = 'data' | |
176 | >>> p.idx_extension = 'index' | |
177 | >>> file_idx = index.Index('rtree', properties = p) | |
178 | ||
179 | 3D indexes | |
180 | .............................................................................. | |
181 | ||
182 | As of Rtree version 0.5.0, you can create 3D (actually kD) indexes. The | |
183 | following is a 3D index that is to be stored on disk. Persisted indexes are | |
184 | stored on disk using two files -- an index file (.idx) and a data (.dat) file. | |
185 | You can modify the extensions these files use by altering the properties of | |
186 | the index at instantiation time. The following creates a 3D index that is | |
187 | stored on disk as the files ``3d_index.data`` and ``3d_index.index``:: | |
188 | ||
189 | >>> from rtree import index | |
190 | >>> p = index.Property() | |
191 | >>> p.dimension = 3 | |
192 | >>> p.dat_extension = 'data' | |
193 | >>> p.idx_extension = 'index' | |
194 | >>> idx3d = index.Index('3d_index',properties=p) | |
195 | >>> idx3d.insert(1, (0, 0, 60, 60, 23.0, 42.0)) | |
196 | >>> idx3d.intersection( (-1, -1, 62, 62, 22, 43)) | |
197 | [1L] | |
198 | ||
199 | ZODB and Custom Storages | |
200 | .............................................................................. | |
201 | ||
202 | https://mail.zope.org/pipermail/zodb-dev/2010-June/013491.html contains a custom | |
203 | storage backend for `ZODB`_ | |
204 | ||
205 | .. _ZODB: http://www.zodb.org/ | |
206 | ||
207 | .. _`libspatialindex`: http://libspatialindex.github.com ⏎ |
0 | import atexit, os, re, sys | |
1 | import ctypes | |
2 | from ctypes.util import find_library | |
3 | ||
4 | import ctypes | |
5 | ||
6 | class RTreeError(Exception): | |
7 | "RTree exception, indicates a RTree-related error." | |
8 | pass | |
9 | ||
10 | def check_return(result, func, cargs): | |
11 | "Error checking for Error calls" | |
12 | if result != 0: | |
13 | msg = 'LASError in "%s": %s' % (func.__name__, rt.Error_GetLastErrorMsg() ) | |
14 | rt.Error_Reset() | |
15 | raise RTreeError(msg) | |
16 | return True | |
17 | ||
18 | def check_void(result, func, cargs): | |
19 | "Error checking for void* returns" | |
20 | if not bool(result): | |
21 | msg = 'Error in "%s": %s' % (func.__name__, rt.Error_GetLastErrorMsg() ) | |
22 | rt.Error_Reset() | |
23 | raise RTreeError(msg) | |
24 | return result | |
25 | ||
26 | def check_void_done(result, func, cargs): | |
27 | "Error checking for void* returns that might be empty with no error" | |
28 | if rt.Error_GetErrorCount(): | |
29 | msg = 'Error in "%s": %s' % (func.__name__, rt.Error_GetLastErrorMsg() ) | |
30 | rt.Error_Reset() | |
31 | raise RTreeError(msg) | |
32 | ||
33 | return result | |
34 | ||
35 | def check_value(result, func, cargs): | |
36 | "Error checking proper value returns" | |
37 | count = rt.Error_GetErrorCount() | |
38 | if count != 0: | |
39 | msg = 'Error in "%s": %s' % (func.__name__, rt.Error_GetLastErrorMsg() ) | |
40 | rt.Error_Reset() | |
41 | raise RTreeError(msg) | |
42 | return result | |
43 | ||
44 | def check_value_free(result, func, cargs): | |
45 | "Error checking proper value returns" | |
46 | count = rt.Error_GetErrorCount() | |
47 | if count != 0: | |
48 | msg = 'Error in "%s": %s' % (func.__name__, rt.Error_GetLastErrorMsg() ) | |
49 | rt.Error_Reset() | |
50 | raise RTreeError(msg) | |
51 | return result | |
52 | ||
53 | def free_returned_char_p(result, func, cargs): | |
54 | retvalue = ctypes.string_at(result) | |
55 | p = ctypes.cast(result, ctypes.POINTER(ctypes.c_void_p)) | |
56 | rt.Index_Free(p) | |
57 | return retvalue | |
58 | ||
59 | def free_error_msg_ptr(result, func, cargs): | |
60 | retvalue = ctypes.string_at(result) | |
61 | p = ctypes.cast(result, ctypes.POINTER(ctypes.c_void_p)) | |
62 | rt.Index_Free(p) | |
63 | return retvalue | |
64 | ||
65 | ||
66 | if os.name == 'nt': | |
67 | ||
68 | def _load_library(dllname, loadfunction, dllpaths=('', )): | |
69 | """Load a DLL via ctypes load function. Return None on failure. | |
70 | ||
71 | Try loading the DLL from the current package directory first, | |
72 | then from the Windows DLL search path. | |
73 | ||
74 | """ | |
75 | try: | |
76 | dllpaths = (os.path.abspath(os.path.dirname(__file__)), | |
77 | ) + dllpaths | |
78 | except NameError: | |
79 | pass # no __file__ attribute on PyPy and some frozen distributions | |
80 | for path in dllpaths: | |
81 | if path: | |
82 | # temporarily add the path to the PATH environment variable | |
83 | # so Windows can find additional DLL dependencies. | |
84 | try: | |
85 | oldenv = os.environ['PATH'] | |
86 | os.environ['PATH'] = path + ';' + oldenv | |
87 | except KeyError: | |
88 | oldenv = None | |
89 | try: | |
90 | return loadfunction(os.path.join(path, dllname)) | |
91 | except (WindowsError, OSError): | |
92 | pass | |
93 | finally: | |
94 | if path and oldenv is not None: | |
95 | os.environ['PATH'] = oldenv | |
96 | return None | |
97 | ||
98 | rt = _load_library('spatialindex_c.dll', ctypes.cdll.LoadLibrary) | |
99 | if not rt: | |
100 | raise OSError("could not find or load spatialindex_c.dll") | |
101 | ||
102 | elif os.name == 'posix': | |
103 | platform = os.uname()[0] | |
104 | lib_name = find_library('spatialindex_c') | |
105 | rt = ctypes.CDLL(lib_name) | |
106 | else: | |
107 | raise RTreeError('Unsupported OS "%s"' % os.name) | |
108 | ||
109 | rt.Error_GetLastErrorNum.restype = ctypes.c_int | |
110 | ||
111 | rt.Error_GetLastErrorMsg.argtypes = [] | |
112 | rt.Error_GetLastErrorMsg.restype = ctypes.POINTER(ctypes.c_char) | |
113 | rt.Error_GetLastErrorMsg.errcheck = free_error_msg_ptr | |
114 | ||
115 | rt.Error_GetLastErrorMethod.restype = ctypes.POINTER(ctypes.c_char) | |
116 | rt.Error_GetLastErrorMethod.errcheck = free_returned_char_p | |
117 | ||
118 | rt.Error_GetErrorCount.argtypes = [] | |
119 | rt.Error_GetErrorCount.restype=ctypes.c_int | |
120 | ||
121 | rt.Error_Reset.argtypes = [] | |
122 | rt.Error_Reset.restype = None | |
123 | ||
124 | rt.Index_Create.argtypes = [ctypes.c_void_p] | |
125 | rt.Index_Create.restype = ctypes.c_void_p | |
126 | rt.Index_Create.errcheck = check_void | |
127 | ||
128 | NEXTFUNC = ctypes.CFUNCTYPE(ctypes.c_int, | |
129 | ctypes.POINTER(ctypes.c_int64), | |
130 | ctypes.POINTER(ctypes.POINTER(ctypes.c_double)), | |
131 | ctypes.POINTER(ctypes.POINTER(ctypes.c_double)), | |
132 | ctypes.POINTER(ctypes.c_uint32), | |
133 | ctypes.POINTER(ctypes.POINTER(ctypes.c_ubyte)), | |
134 | ctypes.POINTER(ctypes.c_size_t)) | |
135 | ||
136 | rt.Index_CreateWithStream.argtypes = [ctypes.c_void_p, NEXTFUNC] | |
137 | rt.Index_CreateWithStream.restype = ctypes.c_void_p | |
138 | rt.Index_CreateWithStream.errcheck = check_void | |
139 | ||
140 | rt.Index_Destroy.argtypes = [ctypes.c_void_p] | |
141 | rt.Index_Destroy.restype = None | |
142 | rt.Index_Destroy.errcheck = check_void_done | |
143 | ||
144 | rt.Index_GetProperties.argtypes = [ctypes.c_void_p] | |
145 | rt.Index_GetProperties.restype = ctypes.c_void_p | |
146 | rt.Index_GetProperties.errcheck = check_void | |
147 | ||
148 | rt.Index_DeleteData.argtypes = [ctypes.c_void_p, | |
149 | ctypes.c_int64, | |
150 | ctypes.POINTER(ctypes.c_double), | |
151 | ctypes.POINTER(ctypes.c_double), | |
152 | ctypes.c_uint32] | |
153 | rt.Index_DeleteData.restype = ctypes.c_int | |
154 | rt.Index_DeleteData.errcheck = check_return | |
155 | ||
156 | rt.Index_InsertData.argtypes = [ctypes.c_void_p, | |
157 | ctypes.c_int64, | |
158 | ctypes.POINTER(ctypes.c_double), | |
159 | ctypes.POINTER(ctypes.c_double), | |
160 | ctypes.c_uint32, | |
161 | ctypes.POINTER(ctypes.c_ubyte), | |
162 | ctypes.c_uint32] | |
163 | rt.Index_InsertData.restype = ctypes.c_int | |
164 | rt.Index_InsertData.errcheck = check_return | |
165 | ||
166 | rt.Index_GetBounds.argtypes = [ ctypes.c_void_p, | |
167 | ctypes.POINTER(ctypes.POINTER(ctypes.c_double)), | |
168 | ctypes.POINTER(ctypes.POINTER(ctypes.c_double)), | |
169 | ctypes.POINTER(ctypes.c_uint32)] | |
170 | rt.Index_GetBounds.restype = ctypes.c_int | |
171 | rt.Index_GetBounds.errcheck = check_value | |
172 | ||
173 | rt.Index_IsValid.argtypes = [ctypes.c_void_p] | |
174 | rt.Index_IsValid.restype = ctypes.c_int | |
175 | rt.Index_IsValid.errcheck = check_value | |
176 | ||
177 | rt.Index_Intersects_obj.argtypes = [ctypes.c_void_p, | |
178 | ctypes.POINTER(ctypes.c_double), | |
179 | ctypes.POINTER(ctypes.c_double), | |
180 | ctypes.c_uint32, | |
181 | ctypes.POINTER(ctypes.POINTER(ctypes.c_void_p)), | |
182 | ctypes.POINTER(ctypes.c_uint64)] | |
183 | rt.Index_Intersects_obj.restype = ctypes.c_int | |
184 | rt.Index_Intersects_obj.errcheck = check_return | |
185 | ||
186 | rt.Index_Intersects_id.argtypes = [ctypes.c_void_p, | |
187 | ctypes.POINTER(ctypes.c_double), | |
188 | ctypes.POINTER(ctypes.c_double), | |
189 | ctypes.c_uint32, | |
190 | ctypes.POINTER(ctypes.POINTER(ctypes.c_int64)), | |
191 | ctypes.POINTER(ctypes.c_uint64)] | |
192 | rt.Index_Intersects_id.restype = ctypes.c_int | |
193 | rt.Index_Intersects_id.errcheck = check_return | |
194 | ||
195 | rt.Index_Intersects_count.argtypes = [ ctypes.c_void_p, | |
196 | ctypes.POINTER(ctypes.c_double), | |
197 | ctypes.POINTER(ctypes.c_double), | |
198 | ctypes.c_uint32, | |
199 | ctypes.POINTER(ctypes.c_uint64)] | |
200 | ||
201 | rt.Index_NearestNeighbors_obj.argtypes = [ ctypes.c_void_p, | |
202 | ctypes.POINTER(ctypes.c_double), | |
203 | ctypes.POINTER(ctypes.c_double), | |
204 | ctypes.c_uint32, | |
205 | ctypes.POINTER(ctypes.POINTER(ctypes.c_void_p)), | |
206 | ctypes.POINTER(ctypes.c_uint64)] | |
207 | rt.Index_NearestNeighbors_obj.restype = ctypes.c_int | |
208 | rt.Index_NearestNeighbors_obj.errcheck = check_return | |
209 | ||
210 | rt.Index_NearestNeighbors_id.argtypes = [ ctypes.c_void_p, | |
211 | ctypes.POINTER(ctypes.c_double), | |
212 | ctypes.POINTER(ctypes.c_double), | |
213 | ctypes.c_uint32, | |
214 | ctypes.POINTER(ctypes.POINTER(ctypes.c_int64)), | |
215 | ctypes.POINTER(ctypes.c_uint64)] | |
216 | rt.Index_NearestNeighbors_id.restype = ctypes.c_int | |
217 | rt.Index_NearestNeighbors_id.errcheck = check_return | |
218 | ||
219 | rt.Index_GetLeaves.argtypes = [ ctypes.c_void_p, | |
220 | ctypes.POINTER(ctypes.c_uint32), | |
221 | ctypes.POINTER(ctypes.POINTER(ctypes.c_uint32)), | |
222 | ctypes.POINTER(ctypes.POINTER(ctypes.c_int64)), | |
223 | ctypes.POINTER(ctypes.POINTER(ctypes.POINTER(ctypes.c_int64))), | |
224 | ctypes.POINTER(ctypes.POINTER(ctypes.POINTER(ctypes.c_double))), | |
225 | ctypes.POINTER(ctypes.POINTER(ctypes.POINTER(ctypes.c_double))), | |
226 | ctypes.POINTER(ctypes.c_uint32)] | |
227 | rt.Index_GetLeaves.restype = ctypes.c_int | |
228 | rt.Index_GetLeaves.errcheck = check_return | |
229 | ||
230 | rt.Index_DestroyObjResults.argtypes = [ctypes.POINTER(ctypes.POINTER(ctypes.c_void_p)), ctypes.c_uint32] | |
231 | rt.Index_DestroyObjResults.restype = None | |
232 | rt.Index_DestroyObjResults.errcheck = check_void_done | |
233 | ||
234 | rt.Index_ClearBuffer.argtypes = [ctypes.c_void_p] | |
235 | rt.Index_ClearBuffer.restype = None | |
236 | rt.Index_ClearBuffer.errcheck = check_void_done | |
237 | ||
238 | rt.Index_Free.argtypes = [ctypes.POINTER(ctypes.c_void_p)] | |
239 | rt.Index_Free.restype = None | |
240 | ||
241 | rt.IndexItem_Destroy.argtypes = [ctypes.c_void_p] | |
242 | rt.IndexItem_Destroy.restype = None | |
243 | rt.IndexItem_Destroy.errcheck = check_void_done | |
244 | ||
245 | rt.IndexItem_GetData.argtypes = [ ctypes.c_void_p, | |
246 | ctypes.POINTER(ctypes.POINTER(ctypes.c_ubyte)), | |
247 | ctypes.POINTER(ctypes.c_uint64)] | |
248 | rt.IndexItem_GetData.restype = ctypes.c_int | |
249 | rt.IndexItem_GetData.errcheck = check_value | |
250 | ||
251 | rt.IndexItem_GetBounds.argtypes = [ ctypes.c_void_p, | |
252 | ctypes.POINTER(ctypes.POINTER(ctypes.c_double)), | |
253 | ctypes.POINTER(ctypes.POINTER(ctypes.c_double)), | |
254 | ctypes.POINTER(ctypes.c_uint32)] | |
255 | rt.IndexItem_GetBounds.restype = ctypes.c_int | |
256 | rt.IndexItem_GetBounds.errcheck = check_value | |
257 | ||
258 | rt.IndexItem_GetID.argtypes = [ctypes.c_void_p] | |
259 | rt.IndexItem_GetID.restype = ctypes.c_int64 | |
260 | rt.IndexItem_GetID.errcheck = check_value | |
261 | ||
262 | rt.IndexProperty_Create.argtypes = [] | |
263 | rt.IndexProperty_Create.restype = ctypes.c_void_p | |
264 | rt.IndexProperty_Create.errcheck = check_void | |
265 | ||
266 | rt.IndexProperty_Destroy.argtypes = [ctypes.c_void_p] | |
267 | rt.IndexProperty_Destroy.restype = None | |
268 | rt.IndexProperty_Destroy.errcheck = check_void_done | |
269 | ||
270 | rt.IndexProperty_SetIndexType.argtypes = [ctypes.c_void_p, ctypes.c_int32] | |
271 | rt.IndexProperty_SetIndexType.restype = ctypes.c_int | |
272 | rt.IndexProperty_SetIndexType.errcheck = check_return | |
273 | ||
274 | rt.IndexProperty_GetIndexType.argtypes = [ctypes.c_void_p] | |
275 | rt.IndexProperty_GetIndexType.restype = ctypes.c_int | |
276 | rt.IndexProperty_GetIndexType.errcheck = check_value | |
277 | ||
278 | rt.IndexProperty_SetDimension.argtypes = [ctypes.c_void_p, ctypes.c_uint32] | |
279 | rt.IndexProperty_SetDimension.restype = ctypes.c_int | |
280 | rt.IndexProperty_SetDimension.errcheck = check_return | |
281 | ||
282 | rt.IndexProperty_GetDimension.argtypes = [ctypes.c_void_p] | |
283 | rt.IndexProperty_GetDimension.restype = ctypes.c_int | |
284 | rt.IndexProperty_GetDimension.errcheck = check_value | |
285 | ||
286 | rt.IndexProperty_SetIndexVariant.argtypes = [ctypes.c_void_p, ctypes.c_uint32] | |
287 | rt.IndexProperty_SetIndexVariant.restype = ctypes.c_int | |
288 | rt.IndexProperty_SetIndexVariant.errcheck = check_return | |
289 | ||
290 | rt.IndexProperty_GetIndexVariant.argtypes = [ctypes.c_void_p] | |
291 | rt.IndexProperty_GetIndexVariant.restype = ctypes.c_int | |
292 | rt.IndexProperty_GetIndexVariant.errcheck = check_value | |
293 | ||
294 | rt.IndexProperty_SetIndexStorage.argtypes = [ctypes.c_void_p, ctypes.c_uint32] | |
295 | rt.IndexProperty_SetIndexStorage.restype = ctypes.c_int | |
296 | rt.IndexProperty_SetIndexStorage.errcheck = check_return | |
297 | ||
298 | rt.IndexProperty_GetIndexStorage.argtypes = [ctypes.c_void_p] | |
299 | rt.IndexProperty_GetIndexStorage.restype = ctypes.c_int | |
300 | rt.IndexProperty_GetIndexStorage.errcheck = check_value | |
301 | ||
302 | rt.IndexProperty_SetIndexCapacity.argtypes = [ctypes.c_void_p, ctypes.c_uint32] | |
303 | rt.IndexProperty_SetIndexCapacity.restype = ctypes.c_int | |
304 | rt.IndexProperty_SetIndexCapacity.errcheck = check_return | |
305 | ||
306 | rt.IndexProperty_GetIndexCapacity.argtypes = [ctypes.c_void_p] | |
307 | rt.IndexProperty_GetIndexCapacity.restype = ctypes.c_int | |
308 | rt.IndexProperty_GetIndexCapacity.errcheck = check_value | |
309 | ||
310 | rt.IndexProperty_SetLeafCapacity.argtypes = [ctypes.c_void_p, ctypes.c_uint32] | |
311 | rt.IndexProperty_SetLeafCapacity.restype = ctypes.c_int | |
312 | rt.IndexProperty_SetLeafCapacity.errcheck = check_return | |
313 | ||
314 | rt.IndexProperty_GetLeafCapacity.argtypes = [ctypes.c_void_p] | |
315 | rt.IndexProperty_GetLeafCapacity.restype = ctypes.c_int | |
316 | rt.IndexProperty_GetLeafCapacity.errcheck = check_value | |
317 | ||
318 | rt.IndexProperty_SetPagesize.argtypes = [ctypes.c_void_p, ctypes.c_uint32] | |
319 | rt.IndexProperty_SetPagesize.restype = ctypes.c_int | |
320 | rt.IndexProperty_SetPagesize.errcheck = check_return | |
321 | ||
322 | rt.IndexProperty_GetPagesize.argtypes = [ctypes.c_void_p] | |
323 | rt.IndexProperty_GetPagesize.restype = ctypes.c_int | |
324 | rt.IndexProperty_GetPagesize.errcheck = check_value | |
325 | ||
326 | rt.IndexProperty_SetLeafPoolCapacity.argtypes = [ctypes.c_void_p, ctypes.c_uint32] | |
327 | rt.IndexProperty_SetLeafPoolCapacity.restype = ctypes.c_int | |
328 | rt.IndexProperty_SetLeafPoolCapacity.errcheck = check_return | |
329 | ||
330 | rt.IndexProperty_GetLeafPoolCapacity.argtypes = [ctypes.c_void_p] | |
331 | rt.IndexProperty_GetLeafPoolCapacity.restype = ctypes.c_int | |
332 | rt.IndexProperty_GetLeafPoolCapacity.errcheck = check_value | |
333 | ||
334 | rt.IndexProperty_SetIndexPoolCapacity.argtypes = [ctypes.c_void_p, ctypes.c_uint32] | |
335 | rt.IndexProperty_SetIndexPoolCapacity.restype = ctypes.c_int | |
336 | rt.IndexProperty_SetIndexPoolCapacity.errcheck = check_return | |
337 | ||
338 | rt.IndexProperty_GetIndexPoolCapacity.argtypes = [ctypes.c_void_p] | |
339 | rt.IndexProperty_GetIndexPoolCapacity.restype = ctypes.c_int | |
340 | rt.IndexProperty_GetIndexPoolCapacity.errcheck = check_value | |
341 | ||
342 | rt.IndexProperty_SetRegionPoolCapacity.argtypes = [ctypes.c_void_p, ctypes.c_uint32] | |
343 | rt.IndexProperty_SetRegionPoolCapacity.restype = ctypes.c_int | |
344 | rt.IndexProperty_SetRegionPoolCapacity.errcheck = check_return | |
345 | ||
346 | rt.IndexProperty_GetRegionPoolCapacity.argtypes = [ctypes.c_void_p] | |
347 | rt.IndexProperty_GetRegionPoolCapacity.restype = ctypes.c_int | |
348 | rt.IndexProperty_GetRegionPoolCapacity.errcheck = check_value | |
349 | ||
350 | rt.IndexProperty_SetPointPoolCapacity.argtypes = [ctypes.c_void_p, ctypes.c_uint32] | |
351 | rt.IndexProperty_SetPointPoolCapacity.restype = ctypes.c_int | |
352 | rt.IndexProperty_SetPointPoolCapacity.errcheck = check_return | |
353 | ||
354 | rt.IndexProperty_GetPointPoolCapacity.argtypes = [ctypes.c_void_p] | |
355 | rt.IndexProperty_GetPointPoolCapacity.restype = ctypes.c_int | |
356 | rt.IndexProperty_GetPointPoolCapacity.errcheck = check_value | |
357 | ||
358 | rt.IndexProperty_SetBufferingCapacity.argtypes = [ctypes.c_void_p, ctypes.c_uint32] | |
359 | rt.IndexProperty_SetBufferingCapacity.restype = ctypes.c_int | |
360 | rt.IndexProperty_SetBufferingCapacity.errcheck = check_return | |
361 | ||
362 | rt.IndexProperty_GetBufferingCapacity.argtypes = [ctypes.c_void_p] | |
363 | rt.IndexProperty_GetBufferingCapacity.restype = ctypes.c_int | |
364 | rt.IndexProperty_GetBufferingCapacity.errcheck = check_value | |
365 | ||
366 | rt.IndexProperty_SetEnsureTightMBRs.argtypes = [ctypes.c_void_p, ctypes.c_uint32] | |
367 | rt.IndexProperty_SetEnsureTightMBRs.restype = ctypes.c_int | |
368 | rt.IndexProperty_SetEnsureTightMBRs.errcheck = check_return | |
369 | ||
370 | rt.IndexProperty_GetEnsureTightMBRs.argtypes = [ctypes.c_void_p] | |
371 | rt.IndexProperty_GetEnsureTightMBRs.restype = ctypes.c_int | |
372 | rt.IndexProperty_GetEnsureTightMBRs.errcheck = check_value | |
373 | ||
374 | rt.IndexProperty_SetOverwrite.argtypes = [ctypes.c_void_p, ctypes.c_uint32] | |
375 | rt.IndexProperty_SetOverwrite.restype = ctypes.c_int | |
376 | rt.IndexProperty_SetOverwrite.errcheck = check_return | |
377 | ||
378 | rt.IndexProperty_GetOverwrite.argtypes = [ctypes.c_void_p] | |
379 | rt.IndexProperty_GetOverwrite.restype = ctypes.c_int | |
380 | rt.IndexProperty_GetOverwrite.errcheck = check_value | |
381 | ||
382 | rt.IndexProperty_SetNearMinimumOverlapFactor.argtypes = [ctypes.c_void_p, ctypes.c_uint32] | |
383 | rt.IndexProperty_SetNearMinimumOverlapFactor.restype = ctypes.c_int | |
384 | rt.IndexProperty_SetNearMinimumOverlapFactor.errcheck = check_return | |
385 | ||
386 | rt.IndexProperty_GetNearMinimumOverlapFactor.argtypes = [ctypes.c_void_p] | |
387 | rt.IndexProperty_GetNearMinimumOverlapFactor.restype = ctypes.c_int | |
388 | rt.IndexProperty_GetNearMinimumOverlapFactor.errcheck = check_value | |
389 | ||
390 | rt.IndexProperty_SetWriteThrough.argtypes = [ctypes.c_void_p, ctypes.c_uint32] | |
391 | rt.IndexProperty_SetWriteThrough.restype = ctypes.c_int | |
392 | rt.IndexProperty_SetWriteThrough.errcheck = check_return | |
393 | ||
394 | rt.IndexProperty_GetWriteThrough.argtypes = [ctypes.c_void_p] | |
395 | rt.IndexProperty_GetWriteThrough.restype = ctypes.c_int | |
396 | rt.IndexProperty_GetWriteThrough.errcheck = check_value | |
397 | ||
398 | rt.IndexProperty_SetFillFactor.argtypes = [ctypes.c_void_p, ctypes.c_double] | |
399 | rt.IndexProperty_SetFillFactor.restype = ctypes.c_int | |
400 | rt.IndexProperty_SetFillFactor.errcheck = check_return | |
401 | ||
402 | rt.IndexProperty_GetFillFactor.argtypes = [ctypes.c_void_p] | |
403 | rt.IndexProperty_GetFillFactor.restype = ctypes.c_double | |
404 | rt.IndexProperty_GetFillFactor.errcheck = check_value | |
405 | ||
406 | rt.IndexProperty_SetSplitDistributionFactor.argtypes = [ctypes.c_void_p, ctypes.c_double] | |
407 | rt.IndexProperty_SetSplitDistributionFactor.restype = ctypes.c_int | |
408 | rt.IndexProperty_SetSplitDistributionFactor.errcheck = check_return | |
409 | ||
410 | rt.IndexProperty_GetSplitDistributionFactor.argtypes = [ctypes.c_void_p] | |
411 | rt.IndexProperty_GetSplitDistributionFactor.restype = ctypes.c_double | |
412 | rt.IndexProperty_GetSplitDistributionFactor.errcheck = check_value | |
413 | ||
414 | rt.IndexProperty_SetTPRHorizon.argtypes = [ctypes.c_void_p, ctypes.c_double] | |
415 | rt.IndexProperty_SetTPRHorizon.restype = ctypes.c_int | |
416 | rt.IndexProperty_SetTPRHorizon.errcheck = check_return | |
417 | ||
418 | rt.IndexProperty_GetTPRHorizon.argtypes = [ctypes.c_void_p] | |
419 | rt.IndexProperty_GetTPRHorizon.restype = ctypes.c_double | |
420 | rt.IndexProperty_GetTPRHorizon.errcheck = check_value | |
421 | ||
422 | rt.IndexProperty_SetReinsertFactor.argtypes = [ctypes.c_void_p, ctypes.c_double] | |
423 | rt.IndexProperty_SetReinsertFactor.restype = ctypes.c_int | |
424 | rt.IndexProperty_SetReinsertFactor.errcheck = check_return | |
425 | ||
426 | rt.IndexProperty_GetReinsertFactor.argtypes = [ctypes.c_void_p] | |
427 | rt.IndexProperty_GetReinsertFactor.restype = ctypes.c_double | |
428 | rt.IndexProperty_GetReinsertFactor.errcheck = check_value | |
429 | ||
430 | rt.IndexProperty_SetFileName.argtypes = [ctypes.c_void_p, ctypes.c_char_p] | |
431 | rt.IndexProperty_SetFileName.restype = ctypes.c_int | |
432 | rt.IndexProperty_SetFileName.errcheck = check_return | |
433 | ||
434 | rt.IndexProperty_GetFileName.argtypes = [ctypes.c_void_p] | |
435 | rt.IndexProperty_GetFileName.errcheck = free_returned_char_p | |
436 | rt.IndexProperty_GetFileName.restype = ctypes.POINTER(ctypes.c_char) | |
437 | ||
438 | rt.IndexProperty_SetFileNameExtensionDat.argtypes = [ctypes.c_void_p, ctypes.c_char_p] | |
439 | rt.IndexProperty_SetFileNameExtensionDat.restype = ctypes.c_int | |
440 | rt.IndexProperty_SetFileNameExtensionDat.errcheck = check_return | |
441 | ||
442 | rt.IndexProperty_GetFileNameExtensionDat.argtypes = [ctypes.c_void_p] | |
443 | rt.IndexProperty_GetFileNameExtensionDat.errcheck = free_returned_char_p | |
444 | rt.IndexProperty_GetFileNameExtensionDat.restype = ctypes.POINTER(ctypes.c_char) | |
445 | ||
446 | rt.IndexProperty_SetFileNameExtensionIdx.argtypes = [ctypes.c_void_p, ctypes.c_char_p] | |
447 | rt.IndexProperty_SetFileNameExtensionIdx.restype = ctypes.c_int | |
448 | rt.IndexProperty_SetFileNameExtensionIdx.errcheck = check_return | |
449 | ||
450 | rt.IndexProperty_GetFileNameExtensionIdx.argtypes = [ctypes.c_void_p] | |
451 | rt.IndexProperty_GetFileNameExtensionIdx.errcheck = free_returned_char_p | |
452 | rt.IndexProperty_GetFileNameExtensionIdx.restype = ctypes.POINTER(ctypes.c_char) | |
453 | ||
454 | rt.IndexProperty_SetCustomStorageCallbacksSize.argtypes = [ctypes.c_void_p, ctypes.c_uint32] | |
455 | rt.IndexProperty_SetCustomStorageCallbacksSize.restype = ctypes.c_int | |
456 | rt.IndexProperty_SetCustomStorageCallbacksSize.errcheck = check_return | |
457 | ||
458 | rt.IndexProperty_GetCustomStorageCallbacksSize.argtypes = [ctypes.c_void_p] | |
459 | rt.IndexProperty_GetCustomStorageCallbacksSize.restype = ctypes.c_uint32 | |
460 | rt.IndexProperty_GetCustomStorageCallbacksSize.errcheck = check_value | |
461 | ||
462 | rt.IndexProperty_SetCustomStorageCallbacks.argtypes = [ctypes.c_void_p, ctypes.c_void_p] | |
463 | rt.IndexProperty_SetCustomStorageCallbacks.restype = ctypes.c_int | |
464 | rt.IndexProperty_SetCustomStorageCallbacks.errcheck = check_return | |
465 | ||
466 | rt.IndexProperty_GetCustomStorageCallbacks.argtypes = [ctypes.c_void_p] | |
467 | rt.IndexProperty_GetCustomStorageCallbacks.restype = ctypes.c_void_p | |
468 | rt.IndexProperty_GetCustomStorageCallbacks.errcheck = check_value | |
469 | ||
470 | rt.IndexProperty_SetIndexID.argtypes = [ctypes.c_void_p, ctypes.c_int64] | |
471 | rt.IndexProperty_SetIndexID.restype = ctypes.c_int | |
472 | rt.IndexProperty_SetIndexID.errcheck = check_return | |
473 | ||
474 | rt.IndexProperty_GetIndexID.argtypes = [ctypes.c_void_p] | |
475 | rt.IndexProperty_GetIndexID.restype = ctypes.c_int64 | |
476 | rt.IndexProperty_GetIndexID.errcheck = check_value | |
477 | ||
478 | rt.SIDX_NewBuffer.argtypes = [ctypes.c_uint] | |
479 | rt.SIDX_NewBuffer.restype = ctypes.c_void_p | |
480 | rt.SIDX_NewBuffer.errcheck = check_void | |
481 | ||
482 | rt.SIDX_DeleteBuffer.argtypes = [ctypes.c_void_p] | |
483 | rt.SIDX_DeleteBuffer.restype = None | |
484 | ||
485 | rt.SIDX_Version.argtypes = [] | |
486 | rt.SIDX_Version.restype = ctypes.POINTER(ctypes.c_char) | |
487 | rt.SIDX_Version.errcheck = free_returned_char_p |
0 | ||
1 | import os | |
2 | import os.path | |
3 | import pprint | |
4 | ||
5 | from . import core | |
6 | import ctypes | |
7 | try: | |
8 | import cPickle as pickle | |
9 | except ImportError: | |
10 | import pickle | |
11 | ||
12 | import sys | |
13 | if sys.version_info[0] == 2: | |
14 | range = xrange | |
15 | string_types = basestring | |
16 | elif sys.version_info[0] == 3: | |
17 | string_types = str | |
18 | ||
19 | RT_Memory = 0 | |
20 | RT_Disk = 1 | |
21 | RT_Custom = 2 | |
22 | ||
23 | RT_Linear = 0 | |
24 | RT_Quadratic = 1 | |
25 | RT_Star = 2 | |
26 | ||
27 | RT_RTree = 0 | |
28 | RT_MVRTree = 1 | |
29 | RT_TPRTree = 2 | |
30 | ||
31 | __c_api_version__ = core.rt.SIDX_Version() | |
32 | ||
33 | major_version, minor_version, patch_version = [ | |
34 | int(t) for t in __c_api_version__.decode('utf-8').split('.')] | |
35 | ||
36 | if (major_version < 2 and minor_version < 7): | |
37 | raise Exception("This version of Rtree requires libspatialindex 1.7.0 or greater") | |
38 | ||
39 | __all__ = ['Rtree', 'Index', 'Property'] | |
40 | ||
41 | def _get_bounds(handle, bounds_fn, interleaved): | |
42 | pp_mins = ctypes.pointer(ctypes.c_double()) | |
43 | pp_maxs = ctypes.pointer(ctypes.c_double()) | |
44 | dimension = ctypes.c_uint32(0) | |
45 | ||
46 | bounds_fn(handle, | |
47 | ctypes.byref(pp_mins), | |
48 | ctypes.byref(pp_maxs), | |
49 | ctypes.byref(dimension)) | |
50 | if (dimension.value == 0): return None | |
51 | ||
52 | mins = ctypes.cast(pp_mins,ctypes.POINTER(ctypes.c_double \ | |
53 | * dimension.value)) | |
54 | maxs = ctypes.cast(pp_maxs,ctypes.POINTER(ctypes.c_double \ | |
55 | * dimension.value)) | |
56 | ||
57 | results = [mins.contents[i] for i in range(dimension.value)] | |
58 | results += [maxs.contents[i] for i in range(dimension.value)] | |
59 | ||
60 | p_mins = ctypes.cast(mins,ctypes.POINTER(ctypes.c_double)) | |
61 | p_maxs = ctypes.cast(maxs,ctypes.POINTER(ctypes.c_double)) | |
62 | core.rt.Index_Free(ctypes.cast(p_mins, ctypes.POINTER(ctypes.c_void_p))) | |
63 | core.rt.Index_Free(ctypes.cast(p_maxs, ctypes.POINTER(ctypes.c_void_p))) | |
64 | if interleaved: # they want bbox order. | |
65 | return results | |
66 | return Index.deinterleave(results) | |
67 | ||
68 | def _get_data(handle): | |
69 | length = ctypes.c_uint64(0) | |
70 | d = ctypes.pointer(ctypes.c_uint8(0)) | |
71 | core.rt.IndexItem_GetData(handle, ctypes.byref(d), ctypes.byref(length)) | |
72 | c = ctypes.cast(d, ctypes.POINTER(ctypes.c_void_p)) | |
73 | if length.value == 0: | |
74 | core.rt.Index_Free(c) | |
75 | return None | |
76 | s = ctypes.string_at(d, length.value) | |
77 | core.rt.Index_Free(c) | |
78 | return s | |
79 | ||
80 | class Index(object): | |
81 | """An R-Tree, MVR-Tree, or TPR-Tree indexing object""" | |
82 | ||
83 | def __init__(self, *args, **kwargs): | |
84 | """Creates a new index | |
85 | ||
86 | :param filename: | |
87 | The first argument in the constructor is assumed to be a filename | |
88 | determining that a file-based storage for the index should be used. | |
89 | If the first argument is not of type basestring, it is then assumed | |
90 | to be an instance of ICustomStorage or derived class. | |
91 | If the first argument is neither of type basestring nor an instance | |
92 | of ICustomStorage, it is then assumed to be an input index item | |
93 | stream. | |
94 | ||
95 | :param stream: | |
96 | If the first argument in the constructor is not of type basestring, | |
97 | it is assumed to be an iterable stream of data that will raise a | |
98 | StopIteration. It must be in the form defined by the :attr:`interleaved` | |
99 | attribute of the index. The following example would assume | |
100 | :attr:`interleaved` is False:: | |
101 | ||
102 | (id, (minx, maxx, miny, maxy, minz, maxz, ..., ..., mink, maxk), object) | |
103 | ||
104 | The object can be None, but you must put a place holder of ``None`` there. | |
105 | ||
106 | :param storage: | |
107 | If the first argument in the constructor is an instance of ICustomStorage | |
108 | then the given custom storage is used. | |
109 | ||
110 | :param interleaved: True or False, defaults to True. | |
111 | This parameter determines the coordinate order for all methods that | |
112 | take in coordinates. | |
113 | ||
114 | :param properties: An :class:`index.Property` object | |
115 | This object sets both the creation and instantiation properties | |
116 | for the object and they are passed down into libspatialindex. | |
117 | A few properties are curried from instantiation parameters | |
118 | for you like ``pagesize`` and ``overwrite`` | |
119 | to ensure compatibility with previous versions of the library. All | |
120 | other properties must be set on the object. | |
121 | ||
122 | .. warning:: | |
123 | The coordinate ordering for all functions are sensitive the the | |
124 | index's :attr:`interleaved` data member. If :attr:`interleaved` | |
125 | is False, the coordinates must be in the form | |
126 | [xmin, xmax, ymin, ymax, ..., ..., kmin, kmax]. If :attr:`interleaved` | |
127 | is True, the coordinates must be in the form | |
128 | [xmin, ymin, ..., kmin, xmax, ymax, ..., kmax]. | |
129 | ||
130 | A basic example | |
131 | :: | |
132 | ||
133 | >>> from rtree import index | |
134 | >>> p = index.Property() | |
135 | ||
136 | >>> idx = index.Index(properties=p) | |
137 | >>> idx # doctest: +ELLIPSIS | |
138 | <rtree.index.Index object at 0x...> | |
139 | ||
140 | Insert an item into the index:: | |
141 | ||
142 | >>> idx.insert(4321, (34.3776829412, 26.7375853734, 49.3776829412, 41.7375853734), obj=42) | |
143 | ||
144 | Query:: | |
145 | ||
146 | >>> hits = idx.intersection((0, 0, 60, 60), objects=True) | |
147 | >>> for i in hits: | |
148 | ... if i.id == 4321: | |
149 | ... i.object | |
150 | ... i.bbox | |
151 | 42 | |
152 | [34.3776829412, 26.737585373400002, 49.3776829412, 41.737585373400002] | |
153 | ||
154 | ||
155 | Using custom serializers | |
156 | :: | |
157 | ||
158 | >>> import simplejson | |
159 | >>> class JSONIndex(index.Index): | |
160 | ... dumps = staticmethod(simplejson.dumps) | |
161 | ... loads = staticmethod(simplejson.loads) | |
162 | ||
163 | >>> json_idx = JSONIndex() | |
164 | >>> json_idx.insert(1, (0, 1, 0, 1), {"nums": [23, 45], "letters": "abcd"}) | |
165 | >>> list(json_idx.nearest((0, 0), 1, objects="raw")) | |
166 | [{'letters': 'abcd', 'nums': [23, 45]}] | |
167 | ||
168 | """ | |
169 | self.properties = kwargs.get('properties', Property()) | |
170 | ||
171 | # interleaved True gives 'bbox' order. | |
172 | self.interleaved = bool(kwargs.get('interleaved', True)) | |
173 | ||
174 | stream = None | |
175 | basename = None | |
176 | storage = None | |
177 | if args: | |
178 | if isinstance(args[0], string_types) or isinstance(args[0], bytes): | |
179 | # they sent in a filename | |
180 | basename = args[0] | |
181 | # they sent in a filename, stream | |
182 | if len(args) > 1: | |
183 | stream = args[1] | |
184 | elif isinstance(args[0], ICustomStorage): | |
185 | storage = args[0] | |
186 | # they sent in a storage, stream | |
187 | if len(args) > 1: | |
188 | stream = args[1] | |
189 | else: | |
190 | stream = args[0] | |
191 | ||
192 | ||
193 | if basename: | |
194 | self.properties.storage = RT_Disk | |
195 | self.properties.filename = basename | |
196 | ||
197 | # check we can read the file | |
198 | f = basename + "." + self.properties.idx_extension | |
199 | p = os.path.abspath(f) | |
200 | ||
201 | ||
202 | # assume if the file exists, we're not going to overwrite it | |
203 | # unless the user explicitly set the property to do so | |
204 | if os.path.exists(p): | |
205 | ||
206 | self.properties.overwrite = bool(kwargs.get('overwrite', False)) | |
207 | ||
208 | # assume we're fetching the first index_id. If the user | |
209 | # set it, we'll fetch that one. | |
210 | if not self.properties.overwrite: | |
211 | try: | |
212 | self.properties.index_id | |
213 | except core.RTreeError: | |
214 | self.properties.index_id=1 | |
215 | ||
216 | d = os.path.dirname(p) | |
217 | if not os.access(d, os.W_OK): | |
218 | message = "Unable to open file '%s' for index storage"%f | |
219 | raise IOError(message) | |
220 | elif storage: | |
221 | if (major_version < 2 and minor_version < 8): | |
222 | raise core.RTreeError("libspatialindex {0} does not support custom storage".format(__c_api_version__)) | |
223 | ||
224 | self.properties.storage = RT_Custom | |
225 | if storage.hasData: | |
226 | self.properties.overwrite = bool(kwargs.get('overwrite', False)) | |
227 | if not self.properties.overwrite: | |
228 | try: | |
229 | self.properties.index_id | |
230 | except core.RTreeError: | |
231 | self.properties.index_id=1 | |
232 | else: | |
233 | storage.clear() | |
234 | self.customstorage = storage | |
235 | storage.registerCallbacks( self.properties ) | |
236 | else: | |
237 | self.properties.storage = RT_Memory | |
238 | ||
239 | try: | |
240 | self.properties.pagesize = int(kwargs['pagesize']) | |
241 | except KeyError: | |
242 | pass | |
243 | ||
244 | if stream: | |
245 | self.handle = self._create_idx_from_stream(stream) | |
246 | else: | |
247 | self.handle = core.rt.Index_Create(self.properties.handle) | |
248 | self.owned = True | |
249 | ||
250 | def __del__(self): | |
251 | try: | |
252 | self.owned | |
253 | except AttributeError: | |
254 | # we were partially constructed. We're going to let it leak | |
255 | # in that case | |
256 | return | |
257 | if self.owned: | |
258 | if self.handle and core: | |
259 | try: | |
260 | core.rt | |
261 | except AttributeError: | |
262 | # uh, leak? We're owned, and have a handle | |
263 | # but for some reason the dll isn't active | |
264 | return | |
265 | ||
266 | core.rt.Index_Destroy(self.handle) | |
267 | self.owned = False | |
268 | self.handle = None | |
269 | ||
270 | def dumps(self, obj): | |
271 | return pickle.dumps(obj) | |
272 | ||
273 | def loads(self, string): | |
274 | return pickle.loads(string) | |
275 | ||
276 | def close(self): | |
277 | """Force a flush of the index to storage. Renders index | |
278 | inaccessible. | |
279 | """ | |
280 | if self.handle and core: | |
281 | core.rt.Index_Destroy(self.handle) | |
282 | self.handle = None | |
283 | self.owned = False | |
284 | else: | |
285 | raise IOError("Unclosable index") | |
286 | ||
287 | def get_coordinate_pointers(self, coordinates): | |
288 | ||
289 | try: | |
290 | iter(coordinates) | |
291 | except TypeError: | |
292 | raise TypeError('Bounds must be a sequence') | |
293 | dimension = self.properties.dimension | |
294 | ||
295 | mins = ctypes.c_double * dimension | |
296 | maxs = ctypes.c_double * dimension | |
297 | ||
298 | if not self.interleaved: | |
299 | coordinates = Index.interleave(coordinates) | |
300 | ||
301 | # it's a point make it into a bbox. [x, y] => [x, y, x, y] | |
302 | if len(coordinates) == dimension: | |
303 | coordinates += coordinates | |
304 | ||
305 | if len(coordinates) != dimension * 2: | |
306 | raise core.RTreeError("Coordinates must be in the form " | |
307 | "(minx, miny, maxx, maxy) or (x, y) for 2D indexes") | |
308 | ||
309 | # so here all coords are in the form: | |
310 | # [xmin, ymin, zmin, xmax, ymax, zmax] | |
311 | for i in range(dimension): | |
312 | if not coordinates[i] <= coordinates[i + dimension]: | |
313 | raise core.RTreeError("Coordinates must not have minimums more than maximums") | |
314 | ||
315 | p_mins = mins(*[ctypes.c_double(\ | |
316 | coordinates[i]) for i in range(dimension)]) | |
317 | p_maxs = maxs(*[ctypes.c_double(\ | |
318 | coordinates[i + dimension]) for i in range(dimension)]) | |
319 | ||
320 | return (p_mins, p_maxs) | |
321 | ||
322 | def _serialize(self, obj): | |
323 | serialized = self.dumps(obj) | |
324 | size = len(serialized) | |
325 | ||
326 | d = ctypes.create_string_buffer(serialized) | |
327 | #d.value = serialized | |
328 | p = ctypes.pointer(d) | |
329 | ||
330 | # return serialized to keep it alive for the pointer. | |
331 | return size, ctypes.cast(p, ctypes.POINTER(ctypes.c_uint8)), serialized | |
332 | ||
333 | def insert(self, id, coordinates, obj = None): | |
334 | """Inserts an item into the index with the given coordinates. | |
335 | ||
336 | :param id: long integer | |
337 | A long integer that is the identifier for this index entry. IDs | |
338 | need not be unique to be inserted into the index, and it is up | |
339 | to the user to ensure they are unique if this is a requirement. | |
340 | ||
341 | :param coordinates: sequence or array | |
342 | This may be an object that satisfies the numpy array | |
343 | protocol, providing the index's dimension * 2 coordinate | |
344 | pairs representing the `mink` and `maxk` coordinates in | |
345 | each dimension defining the bounds of the query window. | |
346 | ||
347 | :param obj: a pickleable object. If not None, this object will be | |
348 | stored in the index with the :attr:`id`. | |
349 | ||
350 | The following example inserts an entry into the index with id `4321`, | |
351 | and the object it stores with that id is the number `42`. The coordinate | |
352 | ordering in this instance is the default (interleaved=True) ordering:: | |
353 | ||
354 | >>> from rtree import index | |
355 | >>> idx = index.Index() | |
356 | >>> idx.insert(4321, (34.3776829412, 26.7375853734, 49.3776829412, 41.7375853734), obj=42) | |
357 | ||
358 | """ | |
359 | p_mins, p_maxs = self.get_coordinate_pointers(coordinates) | |
360 | data = ctypes.c_ubyte(0) | |
361 | size = 0 | |
362 | pyserialized = None | |
363 | if obj is not None: | |
364 | size, data, pyserialized = self._serialize(obj) | |
365 | core.rt.Index_InsertData(self.handle, id, p_mins, p_maxs, self.properties.dimension, data, size) | |
366 | add = insert | |
367 | ||
368 | def count(self, coordinates): | |
369 | """Return number of objects that intersect the given coordinates. | |
370 | ||
371 | :param coordinates: sequence or array | |
372 | This may be an object that satisfies the numpy array | |
373 | protocol, providing the index's dimension * 2 coordinate | |
374 | pairs representing the `mink` and `maxk` coordinates in | |
375 | each dimension defining the bounds of the query window. | |
376 | ||
377 | The following example queries the index for any objects any objects | |
378 | that were stored in the index intersect the bounds given in the coordinates:: | |
379 | ||
380 | >>> from rtree import index | |
381 | >>> idx = index.Index() | |
382 | >>> idx.insert(4321, (34.3776829412, 26.7375853734, 49.3776829412, 41.7375853734), obj=42) | |
383 | ||
384 | >>> idx.count((0, 0, 60, 60)) | |
385 | 1 | |
386 | ||
387 | """ | |
388 | p_mins, p_maxs = self.get_coordinate_pointers(coordinates) | |
389 | ||
390 | p_num_results = ctypes.c_uint64(0) | |
391 | ||
392 | ||
393 | core.rt.Index_Intersects_count( self.handle, | |
394 | p_mins, | |
395 | p_maxs, | |
396 | self.properties.dimension, | |
397 | ctypes.byref(p_num_results)) | |
398 | ||
399 | ||
400 | return p_num_results.value | |
401 | ||
402 | def intersection(self, coordinates, objects=False): | |
403 | """Return ids or objects in the index that intersect the given coordinates. | |
404 | ||
405 | :param coordinates: sequence or array | |
406 | This may be an object that satisfies the numpy array | |
407 | protocol, providing the index's dimension * 2 coordinate | |
408 | pairs representing the `mink` and `maxk` coordinates in | |
409 | each dimension defining the bounds of the query window. | |
410 | ||
411 | :param objects: True or False or 'raw' | |
412 | If True, the intersection method will return index objects that | |
413 | were pickled when they were stored with each index entry, as well | |
414 | as the id and bounds of the index entries. If 'raw', the objects | |
415 | will be returned without the :class:`rtree.index.Item` wrapper. | |
416 | ||
417 | The following example queries the index for any objects any objects | |
418 | that were stored in the index intersect the bounds given in the coordinates:: | |
419 | ||
420 | >>> from rtree import index | |
421 | >>> idx = index.Index() | |
422 | >>> idx.insert(4321, (34.3776829412, 26.7375853734, 49.3776829412, 41.7375853734), obj=42) | |
423 | ||
424 | >>> hits = list(idx.intersection((0, 0, 60, 60), objects=True)) | |
425 | >>> [(item.object, item.bbox) for item in hits if item.id == 4321] | |
426 | [(42, [34.3776829412, 26.737585373400002, 49.3776829412, 41.737585373400002])] | |
427 | ||
428 | If the :class:`rtree.index.Item` wrapper is not used, it is faster to | |
429 | request the 'raw' objects:: | |
430 | ||
431 | >>> list(idx.intersection((0, 0, 60, 60), objects="raw")) | |
432 | [42] | |
433 | ||
434 | ||
435 | """ | |
436 | ||
437 | if objects: return self._intersection_obj(coordinates, objects) | |
438 | ||
439 | p_mins, p_maxs = self.get_coordinate_pointers(coordinates) | |
440 | ||
441 | p_num_results = ctypes.c_uint64(0) | |
442 | ||
443 | it = ctypes.pointer(ctypes.c_int64()) | |
444 | ||
445 | core.rt.Index_Intersects_id( self.handle, | |
446 | p_mins, | |
447 | p_maxs, | |
448 | self.properties.dimension, | |
449 | ctypes.byref(it), | |
450 | ctypes.byref(p_num_results)) | |
451 | return self._get_ids(it, p_num_results.value) | |
452 | ||
453 | def _intersection_obj(self, coordinates, objects): | |
454 | ||
455 | p_mins, p_maxs = self.get_coordinate_pointers(coordinates) | |
456 | ||
457 | p_num_results = ctypes.c_uint64(0) | |
458 | ||
459 | it = ctypes.pointer(ctypes.c_void_p()) | |
460 | ||
461 | core.rt.Index_Intersects_obj( self.handle, | |
462 | p_mins, | |
463 | p_maxs, | |
464 | self.properties.dimension, | |
465 | ctypes.byref(it), | |
466 | ctypes.byref(p_num_results)) | |
467 | return self._get_objects(it, p_num_results.value, objects) | |
468 | ||
469 | def _get_objects(self, it, num_results, objects): | |
470 | # take the pointer, yield the result objects and free | |
471 | items = ctypes.cast(it, ctypes.POINTER(ctypes.POINTER(ctypes.c_void_p * num_results))) | |
472 | its = ctypes.cast(items, ctypes.POINTER(ctypes.POINTER(ctypes.c_void_p))) | |
473 | ||
474 | try: | |
475 | if objects != 'raw': | |
476 | for i in range(num_results): | |
477 | yield Item(self.loads, items[i]) | |
478 | else: | |
479 | for i in range(num_results): | |
480 | data = _get_data(items[i]) | |
481 | if data is None: | |
482 | yield data | |
483 | else: | |
484 | yield self.loads(data) | |
485 | ||
486 | core.rt.Index_DestroyObjResults(its, num_results) | |
487 | except: # need to catch all exceptions, not just rtree. | |
488 | core.rt.Index_DestroyObjResults(its, num_results) | |
489 | raise | |
490 | ||
491 | def _get_ids(self, it, num_results): | |
492 | # take the pointer, yield the results and free | |
493 | items = ctypes.cast(it, ctypes.POINTER(ctypes.c_int64 * num_results)) | |
494 | its = ctypes.cast(items, ctypes.POINTER(ctypes.c_void_p)) | |
495 | ||
496 | try: | |
497 | for i in range(num_results): | |
498 | yield items.contents[i] | |
499 | core.rt.Index_Free(its) | |
500 | except: | |
501 | core.rt.Index_Free(its) | |
502 | raise | |
503 | ||
504 | def _nearest_obj(self, coordinates, num_results, objects): | |
505 | ||
506 | p_mins, p_maxs = self.get_coordinate_pointers(coordinates) | |
507 | ||
508 | p_num_results = ctypes.pointer(ctypes.c_uint64(num_results)) | |
509 | ||
510 | it = ctypes.pointer(ctypes.c_void_p()) | |
511 | ||
512 | core.rt.Index_NearestNeighbors_obj( self.handle, | |
513 | p_mins, | |
514 | p_maxs, | |
515 | self.properties.dimension, | |
516 | ctypes.byref(it), | |
517 | p_num_results) | |
518 | ||
519 | return self._get_objects(it, p_num_results.contents.value, objects) | |
520 | ||
521 | def nearest(self, coordinates, num_results=1, objects=False): | |
522 | """Returns the ``k``-nearest objects to the given coordinates. | |
523 | ||
524 | :param coordinates: sequence or array | |
525 | This may be an object that satisfies the numpy array | |
526 | protocol, providing the index's dimension * 2 coordinate | |
527 | pairs representing the `mink` and `maxk` coordinates in | |
528 | each dimension defining the bounds of the query window. | |
529 | ||
530 | :param num_results: integer | |
531 | The number of results to return nearest to the given coordinates. | |
532 | If two index entries are equidistant, *both* are returned. | |
533 | This property means that :attr:`num_results` may return more | |
534 | items than specified | |
535 | ||
536 | :param objects: True / False / 'raw' | |
537 | If True, the nearest method will return index objects that | |
538 | were pickled when they were stored with each index entry, as | |
539 | well as the id and bounds of the index entries. | |
540 | If 'raw', it will return the object as entered into the database | |
541 | without the :class:`rtree.index.Item` wrapper. | |
542 | ||
543 | Example of finding the three items nearest to this one:: | |
544 | ||
545 | >>> from rtree import index | |
546 | >>> idx = index.Index() | |
547 | >>> idx.insert(4321, (34.37, 26.73, 49.37, 41.73), obj=42) | |
548 | >>> hits = idx.nearest((0, 0, 10, 10), 3, objects=True) | |
549 | """ | |
550 | if objects: return self._nearest_obj(coordinates, num_results, objects) | |
551 | p_mins, p_maxs = self.get_coordinate_pointers(coordinates) | |
552 | ||
553 | p_num_results = ctypes.pointer(ctypes.c_uint64(num_results)) | |
554 | ||
555 | it = ctypes.pointer(ctypes.c_int64()) | |
556 | ||
557 | core.rt.Index_NearestNeighbors_id( self.handle, | |
558 | p_mins, | |
559 | p_maxs, | |
560 | self.properties.dimension, | |
561 | ctypes.byref(it), | |
562 | p_num_results) | |
563 | ||
564 | return self._get_ids(it, p_num_results.contents.value) | |
565 | ||
566 | def get_bounds(self, coordinate_interleaved=None): | |
567 | """Returns the bounds of the index | |
568 | ||
569 | :param coordinate_interleaved: If True, the coordinates are turned | |
570 | in the form [xmin, ymin, ..., kmin, xmax, ymax, ..., kmax], | |
571 | otherwise they are returned as | |
572 | [xmin, xmax, ymin, ymax, ..., ..., kmin, kmax]. If not specified, | |
573 | the :attr:`interleaved` member of the index is used, which | |
574 | defaults to True. | |
575 | ||
576 | """ | |
577 | if coordinate_interleaved is None: | |
578 | coordinate_interleaved = self.interleaved | |
579 | return _get_bounds(self.handle, core.rt.Index_GetBounds, coordinate_interleaved) | |
580 | bounds = property(get_bounds) | |
581 | ||
582 | def delete(self, id, coordinates): | |
583 | """Deletes items from the index with the given ``'id'`` within the | |
584 | specified coordinates. | |
585 | ||
586 | :param id: long integer | |
587 | A long integer that is the identifier for this index entry. IDs | |
588 | need not be unique to be inserted into the index, and it is up | |
589 | to the user to ensure they are unique if this is a requirement. | |
590 | ||
591 | :param coordinates: sequence or array | |
592 | Dimension * 2 coordinate pairs, representing the min | |
593 | and max coordinates in each dimension of the item to be | |
594 | deleted from the index. Their ordering will depend on the | |
595 | index's :attr:`interleaved` data member. | |
596 | These are not the coordinates of a space containing the | |
597 | item, but those of the item itself. Together with the | |
598 | id parameter, they determine which item will be deleted. | |
599 | This may be an object that satisfies the numpy array protocol. | |
600 | ||
601 | Example:: | |
602 | ||
603 | >>> from rtree import index | |
604 | >>> idx = index.Index() | |
605 | >>> idx.delete(4321, (34.3776829412, 26.7375853734, 49.3776829412, 41.7375853734) ) | |
606 | ||
607 | """ | |
608 | p_mins, p_maxs = self.get_coordinate_pointers(coordinates) | |
609 | core.rt.Index_DeleteData(self.handle, id, p_mins, p_maxs, self.properties.dimension) | |
610 | ||
611 | def valid(self): | |
612 | return bool(core.rt.Index_IsValid(self.handle)) | |
613 | ||
614 | def clearBuffer(self): | |
615 | return core.rt.Index_ClearBuffer(self.handle) | |
616 | ||
617 | @classmethod | |
618 | def deinterleave(self, interleaved): | |
619 | """ | |
620 | [xmin, ymin, xmax, ymax] => [xmin, xmax, ymin, ymax] | |
621 | ||
622 | >>> Index.deinterleave([0, 10, 1, 11]) | |
623 | [0, 1, 10, 11] | |
624 | ||
625 | >>> Index.deinterleave([0, 1, 2, 10, 11, 12]) | |
626 | [0, 10, 1, 11, 2, 12] | |
627 | ||
628 | """ | |
629 | assert len(interleaved) % 2 == 0, ("must be a pairwise list") | |
630 | dimension = len(interleaved) // 2 | |
631 | di = [] | |
632 | for i in range(dimension): | |
633 | di.extend([interleaved[i], interleaved[i + dimension]]) | |
634 | return di | |
635 | ||
636 | @classmethod | |
637 | def interleave(self, deinterleaved): | |
638 | """ | |
639 | [xmin, xmax, ymin, ymax, zmin, zmax] => [xmin, ymin, zmin, xmax, ymax, zmax] | |
640 | ||
641 | >>> Index.interleave([0, 1, 10, 11]) | |
642 | [0, 10, 1, 11] | |
643 | ||
644 | >>> Index.interleave([0, 10, 1, 11, 2, 12]) | |
645 | [0, 1, 2, 10, 11, 12] | |
646 | ||
647 | >>> Index.interleave((-1, 1, 58, 62, 22, 24)) | |
648 | [-1, 58, 22, 1, 62, 24] | |
649 | ||
650 | """ | |
651 | assert len(deinterleaved) % 2 == 0, ("must be a pairwise list") | |
652 | dimension = len(deinterleaved) / 2 | |
653 | interleaved = [] | |
654 | for i in range(2): | |
655 | interleaved.extend([deinterleaved[i + j] \ | |
656 | for j in range(0, len(deinterleaved), 2)]) | |
657 | return interleaved | |
658 | ||
659 | def _create_idx_from_stream(self, stream): | |
660 | """This function is used to instantiate the index given an | |
661 | iterable stream of data. """ | |
662 | ||
663 | stream_iter = iter(stream) | |
664 | dimension = self.properties.dimension | |
665 | darray = ctypes.c_double * dimension | |
666 | mins = darray() | |
667 | maxs = darray() | |
668 | no_data = ctypes.cast(ctypes.pointer(ctypes.c_ubyte(0)), | |
669 | ctypes.POINTER(ctypes.c_ubyte)) | |
670 | ||
671 | def py_next_item(p_id, p_mins, p_maxs, p_dimension, p_data, p_length): | |
672 | """This function must fill pointers to individual entries that will | |
673 | be added to the index. The C API will actually call this function | |
674 | to fill out the pointers. If this function returns anything other | |
675 | than 0, it is assumed that the stream of data is done.""" | |
676 | ||
677 | try: | |
678 | p_id[0], coordinates, obj = next(stream_iter) | |
679 | except StopIteration: | |
680 | # we're done | |
681 | return -1 | |
682 | ||
683 | # set the id | |
684 | if self.interleaved: | |
685 | coordinates = Index.deinterleave(coordinates) | |
686 | ||
687 | # this code assumes the coords ar not interleaved. | |
688 | # xmin, xmax, ymin, ymax, zmin, zmax | |
689 | for i in range(dimension): | |
690 | mins[i] = coordinates[i*2] | |
691 | maxs[i] = coordinates[(i*2)+1] | |
692 | ||
693 | p_mins[0] = ctypes.cast(mins, ctypes.POINTER(ctypes.c_double)) | |
694 | p_maxs[0] = ctypes.cast(maxs, ctypes.POINTER(ctypes.c_double)) | |
695 | ||
696 | # set the dimension | |
697 | p_dimension[0] = dimension | |
698 | if obj is None: | |
699 | p_data[0] = no_data | |
700 | p_length[0] = 0 | |
701 | else: | |
702 | p_length[0], data, _ = self._serialize(obj) | |
703 | p_data[0] = ctypes.cast(data, ctypes.POINTER(ctypes.c_ubyte)) | |
704 | ||
705 | return 0 | |
706 | ||
707 | ||
708 | stream = core.NEXTFUNC(py_next_item) | |
709 | return core.rt.Index_CreateWithStream(self.properties.handle, stream) | |
710 | ||
711 | def leaves(self): | |
712 | leaf_node_count = ctypes.c_uint32() | |
713 | p_leafsizes = ctypes.pointer(ctypes.c_uint32()) | |
714 | p_leafids = ctypes.pointer(ctypes.c_int64()) | |
715 | pp_childids = ctypes.pointer(ctypes.pointer(ctypes.c_int64())) | |
716 | ||
717 | pp_mins = ctypes.pointer(ctypes.pointer(ctypes.c_double())) | |
718 | pp_maxs = ctypes.pointer(ctypes.pointer(ctypes.c_double())) | |
719 | dimension = ctypes.c_uint32(0) | |
720 | ||
721 | ||
722 | core.rt.Index_GetLeaves( self.handle, | |
723 | ctypes.byref(leaf_node_count), | |
724 | ctypes.byref(p_leafsizes), | |
725 | ctypes.byref(p_leafids), | |
726 | ctypes.byref(pp_childids), | |
727 | ctypes.byref(pp_mins), | |
728 | ctypes.byref(pp_maxs), | |
729 | ctypes.byref(dimension) | |
730 | ) | |
731 | ||
732 | output = [] | |
733 | ||
734 | count = leaf_node_count.value | |
735 | sizes = ctypes.cast(p_leafsizes, ctypes.POINTER(ctypes.c_uint32 * count)) | |
736 | ids = ctypes.cast(p_leafids, ctypes.POINTER(ctypes.c_int64 * count)) | |
737 | child = ctypes.cast(pp_childids, ctypes.POINTER(ctypes.POINTER(ctypes.c_int64) * count)) | |
738 | mins = ctypes.cast(pp_mins, ctypes.POINTER(ctypes.POINTER(ctypes.c_double) * count)) | |
739 | maxs = ctypes.cast(pp_maxs, ctypes.POINTER(ctypes.POINTER(ctypes.c_double) * count)) | |
740 | for i in range(count): | |
741 | p_child_ids = child.contents[i] | |
742 | ||
743 | id = ids.contents[i] | |
744 | size = sizes.contents[i] | |
745 | child_ids_array = ctypes.cast(p_child_ids, ctypes.POINTER(ctypes.c_int64 * size)) | |
746 | ||
747 | child_ids = [] | |
748 | for j in range(size): | |
749 | child_ids.append(child_ids_array.contents[j]) | |
750 | ||
751 | # free the child ids list | |
752 | core.rt.Index_Free(ctypes.cast(p_child_ids, ctypes.POINTER(ctypes.c_void_p))) | |
753 | ||
754 | p_mins = mins.contents[i] | |
755 | p_maxs = maxs.contents[i] | |
756 | ||
757 | p_mins = ctypes.cast(p_mins, ctypes.POINTER(ctypes.c_double * dimension.value)) | |
758 | p_maxs = ctypes.cast(p_maxs, ctypes.POINTER(ctypes.c_double * dimension.value)) | |
759 | ||
760 | bounds = [] | |
761 | bounds = [p_mins.contents[i] for i in range(dimension.value)] | |
762 | bounds += [p_maxs.contents[i] for i in range(dimension.value)] | |
763 | ||
764 | # free the bounds | |
765 | p_mins = ctypes.cast(p_mins,ctypes.POINTER(ctypes.c_double)) | |
766 | p_maxs = ctypes.cast(p_maxs,ctypes.POINTER(ctypes.c_double)) | |
767 | core.rt.Index_Free(ctypes.cast(p_mins, ctypes.POINTER(ctypes.c_void_p))) | |
768 | core.rt.Index_Free(ctypes.cast(p_maxs, ctypes.POINTER(ctypes.c_void_p))) | |
769 | ||
770 | output.append((id, child_ids, bounds)) | |
771 | ||
772 | return output | |
773 | ||
774 | # An alias to preserve backward compatibility | |
775 | Rtree = Index | |
776 | ||
777 | class Item(object): | |
778 | """A container for index entries""" | |
779 | __slots__ = ('handle', 'owned', 'id', 'object', 'bounds') | |
780 | def __init__(self, loads, handle, owned=False): | |
781 | """There should be no reason to instantiate these yourself. Items are | |
782 | created automatically when you call | |
783 | :meth:`rtree.index.Index.intersection` (or other index querying | |
784 | methods) with objects=True given the parameters of the function.""" | |
785 | ||
786 | if handle: | |
787 | self.handle = handle | |
788 | ||
789 | self.owned = owned | |
790 | ||
791 | self.id = core.rt.IndexItem_GetID(self.handle) | |
792 | ||
793 | self.object = None | |
794 | self.object = self.get_object(loads) | |
795 | self.bounds = _get_bounds(self.handle, core.rt.IndexItem_GetBounds, False) | |
796 | ||
797 | @property | |
798 | def bbox(self): | |
799 | """Returns the bounding box of the index entry""" | |
800 | return Index.interleave(self.bounds) | |
801 | ||
802 | def get_object(self, loads): | |
803 | # short circuit this so we only do it at construction time | |
804 | if self.object is not None: return self.object | |
805 | data = _get_data(self.handle) | |
806 | if data is None: return None | |
807 | return loads(data) | |
808 | ||
809 | class Property(object): | |
810 | """An index property object is a container that contains a number of | |
811 | settable index properties. Many of these properties must be set at | |
812 | index creation times, while others can be used to adjust performance | |
813 | or behavior.""" | |
814 | ||
815 | pkeys = ( | |
816 | 'buffering_capacity', 'custom_storage_callbacks', | |
817 | 'custom_storage_callbacks_size', 'dat_extension', 'dimension', | |
818 | 'filename', 'fill_factor', 'idx_extension', 'index_capacity', | |
819 | 'index_id', 'leaf_capacity', 'near_minimum_overlap_factor', | |
820 | 'overwrite', 'pagesize', 'point_pool_capacity', | |
821 | 'region_pool_capacity', 'reinsert_factor', | |
822 | 'split_distribution_factor', 'storage', 'tight_mbr', 'tpr_horizon', | |
823 | 'type', 'variant', 'writethrough' ) | |
824 | ||
825 | def __init__(self, handle=None, owned=True, **kwargs): | |
826 | if handle: | |
827 | self.handle = handle | |
828 | else: | |
829 | self.handle = core.rt.IndexProperty_Create() | |
830 | self.owned = owned | |
831 | for k, v in list(kwargs.items()): | |
832 | if v is not None: | |
833 | setattr(self, k, v) | |
834 | ||
835 | def __del__(self): | |
836 | if self.owned: | |
837 | if self.handle and core: | |
838 | try: | |
839 | core.rt | |
840 | except AttributeError: | |
841 | # uh, leak? We're owned, and have a handle | |
842 | # but for some reason the dll isn't active | |
843 | return | |
844 | core.rt.IndexProperty_Destroy(self.handle) | |
845 | ||
846 | def as_dict(self): | |
847 | d = {} | |
848 | for k in self.pkeys: | |
849 | try: | |
850 | v = getattr(self, k) | |
851 | except core.RTreeError: | |
852 | v = None | |
853 | d[k] = v | |
854 | return d | |
855 | ||
856 | def __repr__(self): | |
857 | return repr(self.as_dict()) | |
858 | ||
859 | def __str__(self): | |
860 | return pprint.pformat(self.as_dict()) | |
861 | ||
862 | def get_index_type(self): | |
863 | return core.rt.IndexProperty_GetIndexType(self.handle) | |
864 | def set_index_type(self, value): | |
865 | return core.rt.IndexProperty_SetIndexType(self.handle, value) | |
866 | ||
867 | type = property(get_index_type, set_index_type) | |
868 | """Index type. Valid index type values are | |
869 | :data:`RT_RTree`, :data:`RT_MVTree`, or :data:`RT_TPRTree`. Only | |
870 | RT_RTree (the default) is practically supported at this time.""" | |
871 | ||
872 | def get_variant(self): | |
873 | return core.rt.IndexProperty_GetIndexVariant(self.handle) | |
874 | def set_variant(self, value): | |
875 | return core.rt.IndexProperty_SetIndexVariant(self.handle, value) | |
876 | ||
877 | variant = property(get_variant, set_variant) | |
878 | """Index variant. Valid index variant values are | |
879 | :data:`RT_Linear`, :data:`RT_Quadratic`, and :data:`RT_Star`""" | |
880 | ||
881 | def get_dimension(self): | |
882 | return core.rt.IndexProperty_GetDimension(self.handle) | |
883 | def set_dimension(self, value): | |
884 | if (value <= 0): | |
885 | raise core.RTreeError("Negative or 0 dimensional indexes are not allowed") | |
886 | return core.rt.IndexProperty_SetDimension(self.handle, value) | |
887 | ||
888 | dimension = property(get_dimension, set_dimension) | |
889 | """Index dimension. Must be greater than 0, though a dimension of 1 might | |
890 | have undefined behavior.""" | |
891 | ||
892 | def get_storage(self): | |
893 | return core.rt.IndexProperty_GetIndexStorage(self.handle) | |
894 | def set_storage(self, value): | |
895 | return core.rt.IndexProperty_SetIndexStorage(self.handle, value) | |
896 | ||
897 | storage = property(get_storage, set_storage) | |
898 | """Index storage. One of :data:`RT_Disk`, :data:`RT_Memory` or :data:`RT_Custom`. | |
899 | If a filename is passed as the first parameter to :class:index.Index, :data:`RT_Disk` | |
900 | is assumed. If a CustomStorage instance is passed, :data:`RT_Custom` is assumed. | |
901 | Otherwise, :data:`RT_Memory` is the default.""" | |
902 | ||
903 | def get_pagesize(self): | |
904 | return core.rt.IndexProperty_GetPagesize(self.handle) | |
905 | def set_pagesize(self, value): | |
906 | if (value <= 0): | |
907 | raise core.RTreeError("Pagesize must be > 0") | |
908 | return core.rt.IndexProperty_SetPagesize(self.handle, value) | |
909 | ||
910 | pagesize = property(get_pagesize, set_pagesize) | |
911 | """The pagesize when disk storage is used. It is ideal to ensure that your | |
912 | index entries fit within a single page for best performance. """ | |
913 | ||
914 | def get_index_capacity(self): | |
915 | return core.rt.IndexProperty_GetIndexCapacity(self.handle) | |
916 | def set_index_capacity(self, value): | |
917 | if (value <= 0): | |
918 | raise core.RTreeError("index_capacity must be > 0") | |
919 | return core.rt.IndexProperty_SetIndexCapacity(self.handle, value) | |
920 | ||
921 | index_capacity = property(get_index_capacity, set_index_capacity) | |
922 | """Index capacity""" | |
923 | ||
924 | def get_leaf_capacity(self): | |
925 | return core.rt.IndexProperty_GetLeafCapacity(self.handle) | |
926 | def set_leaf_capacity(self, value): | |
927 | if (value <= 0): | |
928 | raise core.RTreeError("leaf_capacity must be > 0") | |
929 | return core.rt.IndexProperty_SetLeafCapacity(self.handle, value) | |
930 | ||
931 | leaf_capacity = property(get_leaf_capacity, set_leaf_capacity) | |
932 | """Leaf capacity""" | |
933 | ||
934 | def get_index_pool_capacity(self): | |
935 | return core.rt.IndexProperty_GetIndexPoolCapacity(self.handle) | |
936 | def set_index_pool_capacity(self, value): | |
937 | if (value <= 0): | |
938 | raise core.RTreeError("index_pool_capacity must be > 0") | |
939 | return core.rt.IndexProperty_SetIndexPoolCapacity(self.handle, value) | |
940 | ||
941 | index_pool_capacity = property(get_index_pool_capacity, set_index_pool_capacity) | |
942 | """Index pool capacity""" | |
943 | ||
944 | def get_point_pool_capacity(self): | |
945 | return core.rt.IndexProperty_GetPointPoolCapacity(self.handle) | |
946 | def set_point_pool_capacity(self, value): | |
947 | if (value <= 0): | |
948 | raise core.RTreeError("point_pool_capacity must be > 0") | |
949 | return core.rt.IndexProperty_SetPointPoolCapacity(self.handle, value) | |
950 | ||
951 | point_pool_capacity = property(get_point_pool_capacity, set_point_pool_capacity) | |
952 | """Point pool capacity""" | |
953 | ||
954 | def get_region_pool_capacity(self): | |
955 | return core.rt.IndexProperty_GetRegionPoolCapacity(self.handle) | |
956 | def set_region_pool_capacity(self, value): | |
957 | if (value <= 0): | |
958 | raise core.RTreeError("region_pool_capacity must be > 0") | |
959 | return core.rt.IndexProperty_SetRegionPoolCapacity(self.handle, value) | |
960 | ||
961 | region_pool_capacity = property(get_region_pool_capacity, set_region_pool_capacity) | |
962 | """Region pool capacity""" | |
963 | ||
964 | def get_buffering_capacity(self): | |
965 | return core.rt.IndexProperty_GetBufferingCapacity(self.handle) | |
966 | def set_buffering_capacity(self, value): | |
967 | if (value <= 0): | |
968 | raise core.RTreeError("buffering_capacity must be > 0") | |
969 | return core.rt.IndexProperty_SetBufferingCapacity(self.handle, value) | |
970 | ||
971 | buffering_capacity = property(get_buffering_capacity, set_buffering_capacity) | |
972 | """Buffering capacity""" | |
973 | ||
974 | def get_tight_mbr(self): | |
975 | return bool(core.rt.IndexProperty_GetEnsureTightMBRs(self.handle)) | |
976 | def set_tight_mbr(self, value): | |
977 | value = bool(value) | |
978 | return bool(core.rt.IndexProperty_SetEnsureTightMBRs(self.handle, value)) | |
979 | ||
980 | tight_mbr = property(get_tight_mbr, set_tight_mbr) | |
981 | """Uses tight bounding rectangles""" | |
982 | ||
983 | def get_overwrite(self): | |
984 | return bool(core.rt.IndexProperty_GetOverwrite(self.handle)) | |
985 | def set_overwrite(self, value): | |
986 | value = bool(value) | |
987 | return bool(core.rt.IndexProperty_SetOverwrite(self.handle, value)) | |
988 | ||
989 | overwrite = property(get_overwrite, set_overwrite) | |
990 | """Overwrite existing index files""" | |
991 | ||
992 | def get_near_minimum_overlap_factor(self): | |
993 | return core.rt.IndexProperty_GetNearMinimumOverlapFactor(self.handle) | |
994 | def set_near_minimum_overlap_factor(self, value): | |
995 | if (value <= 0): | |
996 | raise core.RTreeError("near_minimum_overlap_factor must be > 0") | |
997 | return core.rt.IndexProperty_SetNearMinimumOverlapFactor(self.handle, value) | |
998 | ||
999 | near_minimum_overlap_factor = property(get_near_minimum_overlap_factor, set_near_minimum_overlap_factor) | |
1000 | """Overlap factor for MVRTrees""" | |
1001 | ||
1002 | def get_writethrough(self): | |
1003 | return bool(core.rt.IndexProperty_GetWriteThrough(self.handle)) | |
1004 | def set_writethrough(self, value): | |
1005 | value = bool(value) | |
1006 | return bool(core.rt.IndexProperty_SetWriteThrough(self.handle, value)) | |
1007 | ||
1008 | writethrough = property(get_writethrough, set_writethrough) | |
1009 | """Write through caching""" | |
1010 | ||
1011 | def get_fill_factor(self): | |
1012 | return core.rt.IndexProperty_GetFillFactor(self.handle) | |
1013 | def set_fill_factor(self, value): | |
1014 | return core.rt.IndexProperty_SetFillFactor(self.handle, value) | |
1015 | ||
1016 | fill_factor = property(get_fill_factor, set_fill_factor) | |
1017 | """Index node fill factor before branching""" | |
1018 | ||
1019 | def get_split_distribution_factor(self): | |
1020 | return core.rt.IndexProperty_GetSplitDistributionFactor(self.handle) | |
1021 | def set_split_distribution_factor(self, value): | |
1022 | return core.rt.IndexProperty_SetSplitDistributionFactor(self.handle, value) | |
1023 | ||
1024 | split_distribution_factor = property(get_split_distribution_factor, set_split_distribution_factor) | |
1025 | """Split distribution factor""" | |
1026 | ||
1027 | def get_tpr_horizon(self): | |
1028 | return core.rt.IndexProperty_GetTPRHorizon(self.handle) | |
1029 | def set_tpr_horizon(self, value): | |
1030 | return core.rt.IndexProperty_SetTPRHorizon(self.handle, value) | |
1031 | ||
1032 | tpr_horizon = property(get_tpr_horizon, set_tpr_horizon) | |
1033 | """TPR horizon""" | |
1034 | ||
1035 | def get_reinsert_factor(self): | |
1036 | return core.rt.IndexProperty_GetReinsertFactor(self.handle) | |
1037 | def set_reinsert_factor(self, value): | |
1038 | return core.rt.IndexProperty_SetReinsertFactor(self.handle, value) | |
1039 | ||
1040 | reinsert_factor = property(get_reinsert_factor, set_reinsert_factor) | |
1041 | """Reinsert factor""" | |
1042 | ||
1043 | def get_filename(self): | |
1044 | return core.rt.IndexProperty_GetFileName(self.handle) | |
1045 | def set_filename(self, value): | |
1046 | v = value.encode('utf-8') | |
1047 | return core.rt.IndexProperty_SetFileName(self.handle, v) | |
1048 | ||
1049 | filename = property(get_filename, set_filename) | |
1050 | """Index filename for disk storage""" | |
1051 | ||
1052 | def get_dat_extension(self): | |
1053 | return core.rt.IndexProperty_GetFileNameExtensionDat(self.handle) | |
1054 | def set_dat_extension(self, value): | |
1055 | v = value.encode('utf-8') | |
1056 | return core.rt.IndexProperty_SetFileNameExtensionDat(self.handle, value) | |
1057 | ||
1058 | dat_extension = property(get_dat_extension, set_dat_extension) | |
1059 | """Extension for .dat file""" | |
1060 | ||
1061 | def get_idx_extension(self): | |
1062 | return core.rt.IndexProperty_GetFileNameExtensionIdx(self.handle) | |
1063 | def set_idx_extension(self, value): | |
1064 | v = value.encode('utf-8') | |
1065 | return core.rt.IndexProperty_SetFileNameExtensionIdx(self.handle, value) | |
1066 | ||
1067 | idx_extension = property(get_idx_extension, set_idx_extension) | |
1068 | """Extension for .idx file""" | |
1069 | ||
1070 | def get_custom_storage_callbacks_size(self): | |
1071 | return core.rt.IndexProperty_GetCustomStorageCallbacksSize(self.handle) | |
1072 | def set_custom_storage_callbacks_size(self, value): | |
1073 | return core.rt.IndexProperty_SetCustomStorageCallbacksSize(self.handle, value) | |
1074 | ||
1075 | custom_storage_callbacks_size = property(get_custom_storage_callbacks_size, set_custom_storage_callbacks_size) | |
1076 | """Size of callbacks for custom storage""" | |
1077 | ||
1078 | def get_custom_storage_callbacks(self): | |
1079 | return core.rt.IndexProperty_GetCustomStorageCallbacks(self.handle) | |
1080 | def set_custom_storage_callbacks(self, value): | |
1081 | return core.rt.IndexProperty_SetCustomStorageCallbacks(self.handle, value) | |
1082 | ||
1083 | custom_storage_callbacks = property(get_custom_storage_callbacks, set_custom_storage_callbacks) | |
1084 | """Callbacks for custom storage""" | |
1085 | ||
1086 | def get_index_id(self): | |
1087 | return core.rt.IndexProperty_GetIndexID(self.handle) | |
1088 | def set_index_id(self, value): | |
1089 | return core.rt.IndexProperty_SetIndexID(self.handle, value) | |
1090 | ||
1091 | index_id = property(get_index_id, set_index_id) | |
1092 | """First node index id""" | |
1093 | ||
1094 | ||
1095 | # custom storage implementation | |
1096 | ||
1097 | id_type = ctypes.c_int64 | |
1098 | ||
1099 | class CustomStorageCallbacks(ctypes.Structure): | |
1100 | # callback types | |
1101 | createCallbackType = ctypes.CFUNCTYPE( | |
1102 | None, ctypes.c_void_p, ctypes.POINTER(ctypes.c_int) | |
1103 | ) | |
1104 | destroyCallbackType = ctypes.CFUNCTYPE( | |
1105 | None, ctypes.c_void_p, ctypes.POINTER(ctypes.c_int) | |
1106 | ) | |
1107 | flushCallbackType = ctypes.CFUNCTYPE( | |
1108 | None, ctypes.c_void_p, ctypes.POINTER(ctypes.c_int) | |
1109 | ) | |
1110 | ||
1111 | loadCallbackType = ctypes.CFUNCTYPE( | |
1112 | None, ctypes.c_void_p, id_type, ctypes.POINTER(ctypes.c_uint32), | |
1113 | ctypes.POINTER(ctypes.POINTER(ctypes.c_uint8)), ctypes.POINTER(ctypes.c_int) | |
1114 | ) | |
1115 | storeCallbackType = ctypes.CFUNCTYPE( | |
1116 | None, ctypes.c_void_p, ctypes.POINTER(id_type), ctypes.c_uint32, | |
1117 | ctypes.POINTER(ctypes.c_uint8), ctypes.POINTER(ctypes.c_int) | |
1118 | ) | |
1119 | deleteCallbackType = ctypes.CFUNCTYPE( | |
1120 | None, ctypes.c_void_p, id_type, ctypes.POINTER(ctypes.c_int) | |
1121 | ) | |
1122 | ||
1123 | _fields_ = [ ('context', ctypes.c_void_p), | |
1124 | ('createCallback', createCallbackType), | |
1125 | ('destroyCallback', destroyCallbackType), | |
1126 | ('flushCallback', flushCallbackType), | |
1127 | ('loadCallback', loadCallbackType), | |
1128 | ('storeCallback', storeCallbackType), | |
1129 | ('deleteCallback', deleteCallbackType), | |
1130 | ] | |
1131 | ||
1132 | def __init__(self, context, createCallback, destroyCallback, flushCallback, loadCallback, storeCallback, deleteCallback): | |
1133 | ctypes.Structure.__init__( self, | |
1134 | ctypes.c_void_p( context ), | |
1135 | self.createCallbackType( createCallback ), | |
1136 | self.destroyCallbackType( destroyCallback ), | |
1137 | self.flushCallbackType ( flushCallback ), | |
1138 | self.loadCallbackType ( loadCallback ), | |
1139 | self.storeCallbackType ( storeCallback ), | |
1140 | self.deleteCallbackType( deleteCallback ), | |
1141 | ) | |
1142 | ||
1143 | class ICustomStorage(object): | |
1144 | # error codes | |
1145 | NoError = 0 | |
1146 | InvalidPageError = 1 | |
1147 | IllegalStateError = 2 | |
1148 | ||
1149 | # special pages | |
1150 | EmptyPage = -0x1 | |
1151 | NewPage = -0x1 | |
1152 | ||
1153 | def allocateBuffer(self, length): | |
1154 | return core.rt.SIDX_NewBuffer( length ) | |
1155 | ||
1156 | def registerCallbacks(self, properties): | |
1157 | raise NotImplementedError() | |
1158 | ||
1159 | def clear(self): | |
1160 | raise NotImplementedError() | |
1161 | ||
1162 | hasData = property( lambda self: False ) | |
1163 | ''' Override this property to allow for reloadable storages ''' | |
1164 | ||
1165 | ||
1166 | class CustomStorageBase(ICustomStorage): | |
1167 | """ Derive from this class to create your own storage manager with access | |
1168 | to the raw C buffers. | |
1169 | """ | |
1170 | ||
1171 | def registerCallbacks(self, properties): | |
1172 | callbacks = CustomStorageCallbacks( ctypes.c_void_p(), self.create, | |
1173 | self.destroy, self.flush, | |
1174 | self.loadByteArray, self.storeByteArray, | |
1175 | self.deleteByteArray ) | |
1176 | properties.custom_storage_callbacks_size = ctypes.sizeof( callbacks ) | |
1177 | self.callbacks = callbacks | |
1178 | properties.custom_storage_callbacks = ctypes.cast( ctypes.pointer(callbacks), ctypes.c_void_p ) | |
1179 | ||
1180 | # the user must override these callback functions | |
1181 | def create(self, context, returnError): | |
1182 | returnError.contents.value = self.IllegalStateError | |
1183 | raise NotImplementedError( "You must override this method." ) | |
1184 | ||
1185 | def destroy(self, context, returnError): | |
1186 | """ please override """ | |
1187 | returnError.contents.value = self.IllegalStateError | |
1188 | raise NotImplementedError( "You must override this method." ) | |
1189 | ||
1190 | def loadByteArray(self, context, page, resultLen, resultData, returnError): | |
1191 | """ please override """ | |
1192 | returnError.contents.value = self.IllegalStateError | |
1193 | raise NotImplementedError( "You must override this method." ) | |
1194 | ||
1195 | def storeByteArray(self, context, page, len, data, returnError): | |
1196 | """ please override """ | |
1197 | returnError.contents.value = self.IllegalStateError | |
1198 | raise NotImplementedError( "You must override this method." ) | |
1199 | ||
1200 | def deleteByteArray(self, context, page, returnError): | |
1201 | """ please override """ | |
1202 | returnError.contents.value = self.IllegalStateError | |
1203 | raise NotImplementedError( "You must override this method." ) | |
1204 | ||
1205 | def flush(self, context, returnError): | |
1206 | """ please override """ | |
1207 | returnError.contents.value = self.IllegalStateError | |
1208 | raise NotImplementedError( "You must override this method." ) | |
1209 | ||
1210 | ||
1211 | class CustomStorage(ICustomStorage): | |
1212 | """ Provides a useful default custom storage implementation which marshals | |
1213 | the buffers on the C side from/to python strings. | |
1214 | Derive from this class and override the necessary methods to provide | |
1215 | your own custom storage manager. | |
1216 | """ | |
1217 | ||
1218 | def registerCallbacks(self, properties): | |
1219 | callbacks = CustomStorageCallbacks( 0, self._create, self._destroy, self._flush, self._loadByteArray, | |
1220 | self._storeByteArray, self._deleteByteArray ) | |
1221 | properties.custom_storage_callbacks_size = ctypes.sizeof( callbacks ) | |
1222 | self.callbacks = callbacks | |
1223 | properties.custom_storage_callbacks = ctypes.cast( ctypes.pointer(callbacks), ctypes.c_void_p ) | |
1224 | ||
1225 | # these functions handle the C callbacks and massage the data, then delegate | |
1226 | # to the function without underscore below | |
1227 | def _create(self, context, returnError): | |
1228 | self.create( returnError ) | |
1229 | ||
1230 | def _destroy(self, context, returnError): | |
1231 | self.destroy( returnError ) | |
1232 | ||
1233 | def _flush(self, context, returnError): | |
1234 | self.flush( returnError ) | |
1235 | ||
1236 | def _loadByteArray(self, context, page, resultLen, resultData, returnError): | |
1237 | resultString = self.loadByteArray( page, returnError ) | |
1238 | if returnError.contents.value != self.NoError: | |
1239 | return | |
1240 | # Copy python string over into a buffer allocated on the C side. | |
1241 | # The buffer will later be freed by the C side. This prevents | |
1242 | # possible heap corruption issues as buffers allocated by ctypes | |
1243 | # and the c library might be allocated on different heaps. | |
1244 | # Freeing a buffer allocated on another heap might make the application | |
1245 | # crash. | |
1246 | count = len(resultString) | |
1247 | resultLen.contents.value = count | |
1248 | buffer = self.allocateBuffer( count ) | |
1249 | ctypes.memmove( buffer, ctypes.c_char_p(resultString), count ) | |
1250 | resultData[0] = ctypes.cast( buffer, ctypes.POINTER(ctypes.c_uint8) ) | |
1251 | ||
1252 | def _storeByteArray(self, context, page, len, data, returnError): | |
1253 | str = ctypes.string_at( data, len ) | |
1254 | newPageId = self.storeByteArray( page.contents.value, str, returnError ) | |
1255 | page.contents.value = newPageId | |
1256 | ||
1257 | def _deleteByteArray(self, context, page, returnError): | |
1258 | self.deleteByteArray( page, returnError ) | |
1259 | ||
1260 | ||
1261 | # the user must override these callback functions | |
1262 | def create(self, returnError): | |
1263 | """ Must be overriden. No return value. """ | |
1264 | returnError.contents.value = self.IllegalStateError | |
1265 | raise NotImplementedError( "You must override this method." ) | |
1266 | ||
1267 | def destroy(self, returnError): | |
1268 | """ Must be overriden. No return value. """ | |
1269 | returnError.contents.value = self.IllegalStateError | |
1270 | raise NotImplementedError( "You must override this method." ) | |
1271 | ||
1272 | def flush(self, returnError): | |
1273 | """ Must be overriden. No return value. """ | |
1274 | returnError.contents.value = self.IllegalStateError | |
1275 | raise NotImplementedError( "You must override this method." ) | |
1276 | ||
1277 | def loadByteArray(self, page, returnError): | |
1278 | """ Must be overriden. Must return a string with the loaded data. """ | |
1279 | returnError.contents.value = self.IllegalStateError | |
1280 | raise NotImplementedError( "You must override this method." ) | |
1281 | return '' | |
1282 | ||
1283 | def storeByteArray(self, page, data, returnError): | |
1284 | """ Must be overriden. Must return the new 64-bit page ID of the stored | |
1285 | data if a new page had to be created (i.e. page is not NewPage). | |
1286 | """ | |
1287 | returnError.contents.value = self.IllegalStateError | |
1288 | raise NotImplementedError( "You must override this method." ) | |
1289 | return 0 | |
1290 | ||
1291 | def deleteByteArray(self, page, returnError): | |
1292 | """ please override """ | |
1293 | returnError.contents.value = self.IllegalStateError | |
1294 | raise NotImplementedError( "You must override this method." ) |
0 | [build_ext] | |
1 | define = | |
2 | include-dirs = /usr/local/include:/usr/local/include/spatialindex | |
3 | library-dirs = /usr/local/lib | |
4 | ||
5 | [egg_info] | |
6 | tag_build = | |
7 | tag_date = 0 | |
8 | tag_svn_revision = 0 | |
9 |
0 | from glob import glob | |
1 | from setuptools import setup | |
2 | ||
3 | import rtree | |
4 | ||
5 | # Get text from README.txt | |
6 | readme_text = open('docs/source/README.txt', 'r').read() | |
7 | ||
8 | import os | |
9 | ||
10 | if os.name == 'nt': | |
11 | data_files=[('Lib/site-packages/rtree', | |
12 | [r'D:\libspatialindex\bin\spatialindex.dll', | |
13 | r'D:\libspatialindex\bin\spatialindex_c.dll',]),] | |
14 | else: | |
15 | data_files = None | |
16 | ||
17 | setup(name = 'Rtree', | |
18 | version = rtree.__version__, | |
19 | description = 'R-Tree spatial index for Python GIS', | |
20 | license = 'LGPL', | |
21 | keywords = 'gis spatial index r-tree', | |
22 | author = 'Sean Gillies', | |
23 | author_email = 'sean.gillies@gmail.com', | |
24 | maintainer = 'Howard Butler', | |
25 | maintainer_email = 'hobu@hobu.net', | |
26 | url = 'http://toblerity.github.com/rtree/', | |
27 | long_description = readme_text, | |
28 | packages = ['rtree'], | |
29 | install_requires = ['setuptools'], | |
30 | test_suite = 'tests.test_suite', | |
31 | data_files = data_files, | |
32 | zip_safe = False, | |
33 | classifiers = [ | |
34 | 'Development Status :: 5 - Production/Stable', | |
35 | 'Intended Audience :: Developers', | |
36 | 'Intended Audience :: Science/Research', | |
37 | 'License :: OSI Approved :: GNU Library or Lesser General Public License (LGPL)', | |
38 | 'Operating System :: OS Independent', | |
39 | 'Programming Language :: C', | |
40 | 'Programming Language :: C++', | |
41 | 'Programming Language :: Python', | |
42 | 'Topic :: Scientific/Engineering :: GIS', | |
43 | 'Topic :: Database', | |
44 | ], | |
45 | ) | |
46 |
0 | Bounding Box Checking | |
1 | ===================== | |
2 | ||
3 | See http://trac.gispython.org/projects/PCL/ticket/127. | |
4 | ||
5 | Adding with bogus bounds | |
6 | ------------------------ | |
7 | ||
8 | >>> import rtree | |
9 | >>> index = rtree.Rtree() | |
10 | >>> index.add(1, (0.0, 0.0, -1.0, 1.0)) | |
11 | Traceback (most recent call last): | |
12 | ... | |
13 | RTreeError: Coordinates must not have minimums more than maximums | |
14 | ||
15 | >>> index.intersection((0.0, 0.0, -1.0, 1.0)) | |
16 | Traceback (most recent call last): | |
17 | ... | |
18 | RTreeError: Coordinates must not have minimums more than maximums | |
19 | ||
20 | Adding with invalid bounds argument should raise an exception | |
21 | ||
22 | >>> index.add(1, 1) | |
23 | Traceback (most recent call last): | |
24 | ... | |
25 | TypeError: Bounds must be a sequence |
0 | # hobu's latest results on his 2006-era machine | |
1 | ||
2 | # Stream load: | |
3 | # 293710.04 usec/pass | |
4 | # | |
5 | # One-at-a-time load: | |
6 | # 527883.95 usec/pass | |
7 | # | |
8 | # | |
9 | # 30000 points | |
10 | # Query box: (1240000, 1010000, 1400000, 1390000) | |
11 | # | |
12 | # | |
13 | # Brute Force: | |
14 | # 46 hits | |
15 | # 13533.60 usec/pass | |
16 | # | |
17 | # Memory-based Rtree Intersection: | |
18 | # 46 hits | |
19 | # 7516.19 usec/pass | |
20 | # | |
21 | # Disk-based Rtree Intersection: | |
22 | # 46 hits | |
23 | # 7543.00 usec/pass | |
24 | # | |
25 | # Disk-based Rtree Intersection without Item() wrapper (objects='raw'): | |
26 | # 46 raw hits | |
27 | # 347.60 usec/pass | |
28 | ||
29 | import random | |
30 | import timeit | |
31 | ||
32 | try: | |
33 | import pkg_resources | |
34 | pkg_resources.require('Rtree') | |
35 | except: | |
36 | pass | |
37 | ||
38 | from rtree import Rtree as _Rtree | |
39 | ||
40 | TEST_TIMES = 20 | |
41 | ||
42 | # a very basic Geometry | |
43 | class Point(object): | |
44 | def __init__(self, x, y): | |
45 | self.x = x | |
46 | self.y = y | |
47 | ||
48 | # Scatter points randomly in a 1x1 box | |
49 | # | |
50 | ||
51 | class Rtree(_Rtree): | |
52 | pickle_protocol = -1 | |
53 | ||
54 | bounds = (0, 0, 6000000, 6000000) | |
55 | count = 30000 | |
56 | points = [] | |
57 | ||
58 | insert_object = None | |
59 | insert_object = {'a': list(range(100)), 'b': 10, 'c': object(), 'd': dict(x=1), 'e': Point(2, 3)} | |
60 | ||
61 | index = Rtree() | |
62 | disk_index = Rtree('test', overwrite=1) | |
63 | ||
64 | coordinates = [] | |
65 | for i in range(count): | |
66 | x = random.randrange(bounds[0], bounds[2]) + random.random() | |
67 | y = random.randrange(bounds[1], bounds[3]) + random.random() | |
68 | point = Point(x, y) | |
69 | points.append(point) | |
70 | ||
71 | index.add(i, (x, y), insert_object) | |
72 | disk_index.add(i, (x, y), insert_object) | |
73 | coordinates.append((i, (x, y, x, y), insert_object)) | |
74 | ||
75 | s =""" | |
76 | bulk = Rtree(coordinates[:2000]) | |
77 | """ | |
78 | t = timeit.Timer(stmt=s, setup='from __main__ import coordinates, Rtree, insert_object') | |
79 | print("\nStream load:") | |
80 | print("%.2f usec/pass" % (1000000 * t.timeit(number=TEST_TIMES)/TEST_TIMES)) | |
81 | ||
82 | s =""" | |
83 | idx = Rtree() | |
84 | i = 0 | |
85 | for point in points[:2000]: | |
86 | idx.add(i, (point.x, point.y), insert_object) | |
87 | i+=1 | |
88 | """ | |
89 | t = timeit.Timer(stmt=s, setup='from __main__ import points, Rtree, insert_object') | |
90 | print("\nOne-at-a-time load:") | |
91 | print("%.2f usec/pass\n\n" % (1000000 * t.timeit(number=TEST_TIMES)/TEST_TIMES)) | |
92 | ||
93 | ||
94 | bbox = (1240000, 1010000, 1400000, 1390000) | |
95 | print(count, "points") | |
96 | print("Query box: ", bbox) | |
97 | print("") | |
98 | ||
99 | # Brute force all points within a 0.1x0.1 box | |
100 | s = """ | |
101 | hits = [p for p in points if p.x >= bbox[0] and p.x <= bbox[2] and p.y >= bbox[1] and p.y <= bbox[3]] | |
102 | """ | |
103 | t = timeit.Timer(stmt=s, setup='from __main__ import points, bbox') | |
104 | print("\nBrute Force:") | |
105 | print(len([p for p in points if p.x >= bbox[0] and p.x <= bbox[2] and p.y >= bbox[1] and p.y <= bbox[3]]), "hits") | |
106 | print("%.2f usec/pass" % (1000000 * t.timeit(number=TEST_TIMES)/TEST_TIMES)) | |
107 | ||
108 | # 0.1x0.1 box using intersection | |
109 | ||
110 | if insert_object is None: | |
111 | s = """ | |
112 | hits = [points[id] for id in index.intersection(bbox)] | |
113 | """ | |
114 | else: | |
115 | s = """ | |
116 | hits = [p.object for p in index.intersection(bbox, objects=insert_object)] | |
117 | """ | |
118 | ||
119 | t = timeit.Timer(stmt=s, setup='from __main__ import points, index, bbox, insert_object') | |
120 | print("\nMemory-based Rtree Intersection:") | |
121 | print(len([points[id] for id in index.intersection(bbox)]), "hits") | |
122 | print("%.2f usec/pass" % (1000000 * t.timeit(number=100)/100)) | |
123 | ||
124 | ||
125 | # run same test on disk_index. | |
126 | s = s.replace("index.", "disk_index.") | |
127 | ||
128 | t = timeit.Timer(stmt=s, setup='from __main__ import points, disk_index, bbox, insert_object') | |
129 | print("\nDisk-based Rtree Intersection:") | |
130 | hits = list(disk_index.intersection(bbox)) | |
131 | print(len(hits), "hits") | |
132 | print("%.2f usec/pass" % (1000000 * t.timeit(number=TEST_TIMES)/TEST_TIMES)) | |
133 | ||
134 | ||
135 | if insert_object: | |
136 | s = """ | |
137 | hits = disk_index.intersection(bbox, objects="raw") | |
138 | """ | |
139 | t = timeit.Timer(stmt=s, setup='from __main__ import points, disk_index, bbox, insert_object') | |
140 | print("\nDisk-based Rtree Intersection without Item() wrapper (objects='raw'):") | |
141 | result = list(disk_index.intersection(bbox, objects="raw")) | |
142 | print(len(result), "raw hits") | |
143 | print("%.2f usec/pass" % (1000000 * t.timeit(number=TEST_TIMES)/TEST_TIMES)) | |
144 | assert 'a' in result[0], result[0] | |
145 | ||
146 | import os | |
147 | try: | |
148 | os.remove('test.dat') | |
149 | os.remove('test.idx') | |
150 | except: | |
151 | pass |
0 | 34.3776829412 26.7375853734 49.3776829412 41.7375853734 | |
1 | -51.7912278527 56.5716384064 -36.7912278527 71.5716384064 | |
2 | -132.417278478 -96.7177218184 -117.417278478 -81.7177218184 | |
3 | 19.9788779448 -53.1068061438 34.9788779448 -38.1068061438 | |
4 | 50.9432853241 53.830194296 65.9432853241 68.830194296 | |
5 | 114.777310066 -42.0534139041 129.777310066 -27.0534139041 | |
6 | -80.5201136918 -60.5173650142 -65.5201136918 -45.5173650142 | |
7 | -109.709042971 -88.8853631128 -94.7090429709 -73.8853631128 | |
8 | 163.797701593 49.0535662325 178.797701593 64.0535662325 | |
9 | 119.52474488 -47.8047995045 134.52474488 -32.8047995045 | |
10 | -49.6358346107 25.7591536504 -34.6358346107 40.7591536504 | |
11 | 43.1951329802 -61.7003551556 58.1951329802 -46.7003551556 | |
12 | 5.07182469992 -32.9621617938 20.0718246999 -17.9621617938 | |
13 | 157.392784956 -59.9967638674 172.392784956 -44.9967638674 | |
14 | 169.761387556 77.3118040104 184.761387556 92.3118040104 | |
15 | -90.9030625259 23.7969275036 -75.9030625259 38.7969275036 | |
16 | 13.3161023563 35.5651016032 28.3161023563 50.5651016032 | |
17 | -71.4124633746 -27.8098115487 -56.4124633746 -12.8098115487 | |
18 | -101.490578923 40.5161619529 -86.4905789231 55.5161619529 | |
19 | -22.5493804457 -9.48190527182 -7.54938044566 5.51809472818 | |
20 | 22.7819453953 81.6043699778 37.7819453953 96.6043699778 | |
21 | 163.851232856 52.6576397095 178.851232856 67.6576397095 | |
22 | 8.7520267341 -82.9532179134 23.7520267341 -67.9532179134 | |
23 | -25.1295517688 -52.9753074372 -10.1295517688 -37.9753074372 | |
24 | 125.380855923 53.093317371 140.380855923 68.093317371 | |
25 | -79.9963004315 -8.58901526761 -64.9963004315 6.41098473239 | |
26 | -3.49476632412 -93.5592177527 11.5052336759 -78.5592177527 | |
27 | 5.12311663372 38.9766284779 20.1231166337 53.9766284779 | |
28 | -126.802193031 72.7620993955 -111.802193031 87.7620993955 | |
29 | 144.816733092 33.8296664631 159.816733092 48.8296664631 | |
30 | -124.187243051 30.4856075292 -109.187243051 45.4856075292 | |
31 | 63.8011147852 -64.8232471563 78.8011147852 -49.8232471563 | |
32 | 125.091625278 10.0243913301 140.091625278 25.0243913301 | |
33 | -79.6265618345 37.4238531184 -64.6265618345 52.4238531184 | |
34 | 84.0917344559 -61.9889564492 99.0917344559 -46.9889564492 | |
35 | 44.1303873224 36.9948838398 59.1303873224 51.9948838398 | |
36 | 57.579189376 -44.3308895399 72.579189376 -29.3308895399 | |
37 | -135.915887605 -68.4604833795 -120.915887605 -53.4604833795 | |
38 | -52.5931165731 -83.132095062 -37.5931165731 -68.132095062 | |
39 | -3.66134703734 -24.6160151663 11.3386529627 -9.61601516627 | |
40 | 50.9138603775 6.66349450637 65.9138603775 21.6634945064 | |
41 | -59.0308862561 -28.7050068456 -44.0308862561 -13.7050068456 | |
42 | 51.6601755093 -32.4794848001 66.6601755093 -17.4794848001 | |
43 | -174.739939684 35.8453347176 -159.739939684 50.8453347176 | |
44 | -107.905359545 -33.9905804035 -92.9053595447 -18.9905804035 | |
45 | -43.8298865873 -38.8139629115 -28.8298865873 -23.8139629115 | |
46 | -186.673789279 15.8707951216 -171.673789279 30.8707951216 | |
47 | 13.0878151873 18.9267257542 28.0878151873 33.9267257542 | |
48 | -19.7764534411 -15.1648038653 -4.7764534411 -0.16480386529 | |
49 | -136.725385806 -62.3357813894 -121.725385806 -47.3357813894 | |
50 | 56.3180682679 27.7748493606 71.3180682679 42.7748493606 | |
51 | -117.234207271 -95.984091959 -102.234207271 -80.984091959 | |
52 | -112.676334783 69.8614225716 -97.6763347829 84.8614225716 | |
53 | 63.4481415226 49.5185084111 78.4481415226 64.5185084111 | |
54 | -164.583933393 -24.3224792074 -149.583933393 -9.32247920738 | |
55 | 29.8740632141 -94.4036564677 44.8740632141 -79.4036564677 | |
56 | 111.222002785 27.3091348937 126.222002785 42.3091348937 | |
57 | 153.388416036 -51.7982686059 168.388416036 -36.7982686059 | |
58 | 101.187835391 -79.2096166175 116.187835391 -64.2096166175 | |
59 | 88.5716895369 -0.592196575665 103.571689537 14.4078034243 | |
60 | 121.697565289 -20.4740930579 136.697565289 -5.47409305786 | |
61 | -57.6430699458 32.6596016791 -42.6430699458 47.6596016791 | |
62 | -51.9988160106 -16.5263906642 -36.9988160106 -1.52639066423 | |
63 | -128.45654531 40.0833021378 -113.45654531 55.0833021378 | |
64 | 104.084274855 1.04302798395 119.084274855 16.0430279839 | |
65 | -65.3078063084 52.8659272125 -50.3078063084 67.8659272125 | |
66 | -185.575231871 0.603830128936 -170.575231871 15.6038301289 | |
67 | -99.670852574 63.077063843 -84.670852574 78.077063843 | |
68 | -97.5397037499 24.1544066414 -82.5397037499 39.1544066414 | |
69 | 17.1213365558 80.8998469932 32.1213365558 95.8998469932 | |
70 | -66.0514693697 -67.879371904 -51.0514693697 -52.879371904 | |
71 | -165.624597131 -28.2121530482 -150.624597131 -13.2121530482 | |
72 | -153.938620771 -22.5333324395 -138.938620771 -7.5333324395 | |
73 | 108.059653776 -30.1015722619 123.059653776 -15.1015722619 | |
74 | 66.3357992327 33.4460170804 81.3357992327 48.4460170804 | |
75 | 122.051245261 62.1986667929 137.051245261 77.1986667929 | |
76 | -9.14331797752 -4.94220638202 5.85668202248 10.057793618 | |
77 | -6.21767716831 -37.4474638489 8.78232283169 -22.4474638489 | |
78 | -10.2422235441 -36.7771789022 4.75777645591 -21.7771789022 | |
79 | 151.39952872 5.78259379576 166.39952872 20.7825937958 | |
80 | 53.0412866301 27.1060539476 68.0412866301 42.1060539476 | |
81 | -179.969415049 -86.9431323167 -164.969415049 -71.9431323167 | |
82 | -122.143517094 52.4812451482 -107.143517094 67.4812451482 | |
83 | 126.651232891 -71.3593917404 141.651232891 -56.3593917404 | |
84 | 35.5628371672 -44.4833782826 50.5628371672 -29.4833782826 | |
85 | 106.338230585 74.4980976394 121.338230585 89.4980976394 | |
86 | 2.49246106376 64.4571886404 17.4924610638 79.4571886404 | |
87 | 26.9239556956 74.8154250821 41.9239556956 89.8154250821 | |
88 | -145.467051901 -23.3901235678 -130.467051901 -8.39012356782 | |
89 | -31.1747618493 -78.3450857919 -16.1747618493 -63.3450857919 | |
90 | -45.6363494594 41.8549865381 -30.6363494594 56.8549865381 | |
91 | -139.598628861 -76.0620586165 -124.598628861 -61.0620586165 | |
92 | 75.3893757582 -96.3227872859 90.3893757582 -81.3227872859 | |
93 | 66.4127845964 -29.3758752649 81.4127845964 -14.3758752649 | |
94 | 71.002709831 5.93248532466 86.002709831 20.9324853247 | |
95 | -166.73585749 -91.958750292 -151.73585749 -76.958750292 | |
96 | -122.966652056 -44.5184865975 -107.966652056 -29.5184865975 | |
97 | -114.787601823 -21.1179486167 -99.7876018227 -6.11794861667 | |
98 | -37.7449906403 -70.1494304858 -22.7449906403 -55.1494304858 | |
99 | 70.2802523802 34.6578320934 85.2802523802 49.6578320934 |
0 | -77.6266074937 17.9253077286 -74.6266074937 20.9253077286 | |
1 | 146.760813507 -66.1176158519 149.760813507 -63.1176158519 | |
2 | -61.5952714867 1.53336501911 -58.5952714867 4.53336501911 | |
3 | -97.6541571808 78.9279172851 -94.6541571808 81.9279172851 | |
4 | -26.9653607563 -48.4712157725 -23.9653607563 -45.4712157725 | |
5 | -143.552516091 14.3494488115 -140.552516091 17.3494488115 | |
6 | -80.4613341911 17.1488336406 -77.4613341911 20.1488336406 | |
7 | -170.539134443 -8.03564691796 -167.539134443 -5.03564691796 | |
8 | 41.1324604695 53.1528891157 44.1324604695 56.1528891157 | |
9 | -16.4280335397 15.7994413301 -13.4280335397 18.7994413301 | |
10 | -13.0608137513 79.8825849424 -10.0608137513 82.8825849424 | |
11 | 11.0220907685 -25.1820010025 14.0220907685 -22.1820010025 | |
12 | -10.853938973 -83.415598855 -7.85393897295 -80.415598855 | |
13 | 154.64572196 -36.9887910088 157.64572196 -33.9887910088 | |
14 | 18.573136694 21.4354048786 21.573136694 24.4354048786 | |
15 | -44.6555011074 -71.282412391 -41.6555011074 -68.282412391 | |
16 | -145.411701186 4.4541144677 -142.411701186 7.4541144677 | |
17 | 5.40282526442 -35.1352567283 8.40282526442 -32.1352567283 | |
18 | 62.5944808962 -43.6191170071 65.5944808962 -40.6191170071 | |
19 | -146.213229942 13.3263433101 -143.213229942 16.3263433101 | |
20 | -77.4449126588 -22.6182449882 -74.4449126588 -19.6182449882 | |
21 | -106.789240681 78.8103222748 -103.789240681 81.8103222748 | |
22 | -3.73652421112 -19.5291285896 -0.736524211117 -16.5291285896 | |
23 | -58.0281342568 32.8106143002 -55.0281342568 35.8106143002 | |
24 | -13.8033575384 -50.2089822292 -10.8033575384 -47.2089822292 | |
25 | -172.843024283 16.5468581097 -169.843024283 19.5468581097 | |
26 | -172.107799874 -11.7749825659 -169.107799874 -8.7749825659 | |
27 | -73.0310329326 54.460547423 -70.0310329326 57.460547423 | |
28 | -21.5697127876 72.3233077645 -18.5697127876 75.3233077645 | |
29 | -146.309829213 50.2891374472 -143.309829213 53.2891374472 | |
30 | -26.1978262524 74.0518428004 -23.1978262524 77.0518428004 | |
31 | -19.4804067324 -56.2863939382 -16.4804067324 -53.2863939382 | |
32 | 164.888440702 6.83914583504 167.888440702 9.83914583504 | |
33 | -20.4710471678 60.8436455137 -17.4710471678 63.8436455137 | |
34 | -162.20464081 14.3482977242 -159.20464081 17.3482977242 | |
35 | -46.823944655 -57.0836083862 -43.823944655 -54.0836083862 | |
36 | -116.933648325 -74.2067851587 -113.933648325 -71.2067851587 | |
37 | 14.9470581084 -3.10178427836 17.9470581084 -0.101784278362 | |
38 | -174.718634431 -42.9059514464 -171.718634431 -39.9059514464 | |
39 | -39.4478339796 -21.9917960894 -36.4478339796 -18.9917960894 | |
40 | 115.730938802 21.2135753799 118.730938802 24.2135753799 | |
41 | 10.8416658737 72.0678680529 13.8416658737 75.0678680529 | |
42 | -95.9535577321 -49.1590716919 -92.9535577321 -46.1590716919 | |
43 | -125.459235693 19.2496047235 -122.459235693 22.2496047235 | |
44 | -132.545232353 -58.25552454 -129.545232353 -55.25552454 | |
45 | -67.5395639913 -70.2635622306 -64.5395639913 -67.2635622306 | |
46 | 114.897795224 81.6755123276 117.897795224 84.6755123276 | |
47 | -45.8275691135 51.4851475055 -42.8275691135 54.4851475055 | |
48 | 67.7315695721 81.6891131584 70.7315695721 84.6891131584 | |
49 | -143.646124789 65.8539305596 -140.646124789 68.8539305596 | |
50 | 39.600377171 -65.278784271 42.600377171 -62.278784271 | |
51 | -50.4241198338 -61.0588571002 -47.4241198338 -58.0588571002 | |
52 | 29.6470115345 -69.1121010466 32.6470115345 -66.1121010466 | |
53 | 74.7933751695 -87.0504609911 77.7933751695 -84.0504609911 | |
54 | -44.6451547594 -21.6787016415 -41.6451547594 -18.6787016415 | |
55 | -125.896784285 57.6216177466 -122.896784285 60.6216177466 | |
56 | -177.918010191 39.075981359 -174.918010191 42.075981359 | |
57 | 149.458654065 -63.1555370915 152.458654065 -60.1555370915 | |
58 | -93.8541244608 14.6920424922 -90.8541244608 17.6920424922 | |
59 | 103.015148455 -82.0537507881 106.015148455 -79.0537507881 | |
60 | -14.1875994263 5.92016732751 -11.1875994263 8.92016732751 | |
61 | -10.5260324823 -66.999980844 -7.5260324823 -63.999980844 | |
62 | -77.4049966201 76.698477819 -74.4049966201 79.698477819 | |
63 | 163.365893138 36.3937967838 166.365893138 39.3937967838 | |
64 | -77.6637113634 -20.3921897679 -74.6637113634 -17.3921897679 | |
65 | -118.209984451 -89.6757056733 -115.209984451 -86.6757056733 | |
66 | 24.5096630884 -39.4951326405 27.5096630884 -36.4951326405 | |
67 | 104.683305708 -50.5163367082 107.683305708 -47.5163367082 | |
68 | 89.9633794652 -49.8790576673 92.9633794652 -46.8790576673 | |
69 | 74.1792004231 76.939779241 77.1792004231 79.939779241 | |
70 | 159.611093819 24.3012006505 162.611093819 27.3012006505 | |
71 | -33.9960825337 48.0848879862 -30.9960825337 51.0848879862 | |
72 | -74.0378541877 -74.4126488941 -71.0378541877 -71.4126488941 | |
73 | 92.6624431726 55.1115098398 95.6624431726 58.1115098398 | |
74 | -115.093677605 27.8478080505 -112.093677605 30.8478080505 | |
75 | -170.037980591 58.2298099844 -167.037980591 61.2298099844 | |
76 | 166.197199218 -38.4613177937 169.197199218 -35.4613177937 | |
77 | 63.6008145168 60.8908437143 66.6008145168 63.8908437143 | |
78 | 41.6381666956 74.698625008 44.6381666956 77.698625008 | |
79 | 30.4199356009 24.6821280736 33.4199356009 27.6821280736 | |
80 | -160.657901861 46.5236688914 -157.657901861 49.5236688914 | |
81 | 124.039804763 -3.75214084639 127.039804763 -0.752140846393 | |
82 | -98.4364817072 -34.640932721 -95.4364817072 -31.640932721 | |
83 | 85.2576310296 52.0416775746 88.2576310296 55.0416775746 | |
84 | -135.299373946 -39.8575058091 -132.299373946 -36.8575058091 | |
85 | -81.1726623037 -38.2018616886 -78.1726623037 -35.2018616886 | |
86 | 86.1432448082 -81.4944583964 89.1432448082 -78.4944583964 | |
87 | -12.7133836326 12.0678158492 -9.71338363261 15.0678158492 | |
88 | 65.0162301938 -67.6995457631 68.0162301938 -64.6995457631 | |
89 | 169.200931012 32.4585152701 172.200931012 35.4585152701 | |
90 | -105.391368296 42.7902931996 -102.391368296 45.7902931996 | |
91 | -139.704228408 24.0433792599 -136.704228408 27.0433792599 | |
92 | -153.800381092 16.5046872988 -150.800381092 19.5046872988 | |
93 | -97.0657162703 27.2524937158 -94.0657162703 30.2524937158 | |
94 | -1.60098774744 -14.9988726034 1.39901225256 -11.9988726034 | |
95 | -81.0423533346 51.1588554456 -78.0423533346 54.1588554456 | |
96 | 157.601695863 -75.891662644 160.601695863 -72.891662644 | |
97 | 10.433405189 86.8920650943 13.433405189 89.8920650943 | |
98 | 113.813941489 -24.5868189503 116.813941489 -21.5868189503 | |
99 | 47.0050784943 -52.865903321 50.0050784943 -49.865903321 |
0 | import os.path | |
1 | ||
2 | boxes15 = [] | |
3 | f = open(os.path.join(os.path.dirname(__file__), 'boxes_15x15.data'), 'r') | |
4 | for line in f.readlines(): | |
5 | if not line: | |
6 | break | |
7 | [left, bottom, right, top] = [float(x) for x in line.split()] | |
8 | boxes15.append((left, bottom, right, top)) | |
9 | ||
10 | boxes3 = [] | |
11 | f = open(os.path.join(os.path.dirname(__file__), 'boxes_3x3.data'), 'r') | |
12 | for line in f.readlines(): | |
13 | if not line: | |
14 | break | |
15 | [left, bottom, right, top] = [float(x) for x in line.split()] | |
16 | boxes3.append((left, bottom, right, top)) | |
17 | ||
18 | points = [] | |
19 | f = open(os.path.join(os.path.dirname(__file__), 'point_clusters.data'), 'r') | |
20 | for line in f.readlines(): | |
21 | if not line: | |
22 | break | |
23 | [left, bottom] = [float(x) for x in line.split()] | |
24 | points.append((left, bottom)) | |
25 | ||
26 | def draw_data(filename): | |
27 | from PIL import Image, ImageDraw | |
28 | im = Image.new('RGB', (1440, 720)) | |
29 | d = ImageDraw.Draw(im) | |
30 | for box in boxes15: | |
31 | coords = [4.0*(box[0]+180), 4.0*(box[1]+90), 4.0*(box[2]+180), 4.0*(box[3]+90)] | |
32 | d.rectangle(coords, outline='red') | |
33 | for box in boxes3: | |
34 | coords = [4.0*(box[0]+180), 4.0*(box[1]+90), 4.0*(box[2]+180), 4.0*(box[3]+90)] | |
35 | d.rectangle(coords, outline='blue') | |
36 | ||
37 | im.save(filename) | |
38 |
0 | .. _index_test: | |
1 | ||
2 | Examples | |
3 | .............................................................................. | |
4 | ||
5 | >>> from rtree import index | |
6 | >>> from rtree.index import Rtree | |
7 | ||
8 | Ensure libspatialindex version is >= 1.7.0 | |
9 | ||
10 | >>> index.__c_api_version__.split('.')[1] >= 7 | |
11 | True | |
12 | ||
13 | Make an instance, index stored in memory | |
14 | ||
15 | >>> p = index.Property() | |
16 | ||
17 | >>> idx = index.Index(properties=p) | |
18 | >>> idx | |
19 | <rtree.index.Index object at 0x...> | |
20 | ||
21 | Add 100 largish boxes randomly distributed over the domain | |
22 | ||
23 | >>> for i, coords in enumerate(boxes15): | |
24 | ... idx.add(i, coords) | |
25 | ||
26 | >>> 0 in idx.intersection((0, 0, 60, 60)) | |
27 | True | |
28 | >>> hits = list(idx.intersection((0, 0, 60, 60))) | |
29 | >>> len(hits) | |
30 | 10 | |
31 | >>> hits | |
32 | [0, 4, 16, 27, 35, 40, 47, 50, 76, 80] | |
33 | ||
34 | Insert an object into the index that can be pickled | |
35 | ||
36 | >>> idx.insert(4321, (34.3776829412, 26.7375853734, 49.3776829412, 41.7375853734), obj=42) | |
37 | ||
38 | Fetch our straggler that contains a pickled object | |
39 | >>> hits = idx.intersection((0, 0, 60, 60), objects=True) | |
40 | >>> for i in hits: | |
41 | ... if i.id == 4321: | |
42 | ... i.object | |
43 | ... ['%.10f' % t for t in i.bbox] | |
44 | 42 | |
45 | ['34.3776829412', '26.7375853734', '49.3776829412', '41.7375853734'] | |
46 | ||
47 | ||
48 | Find the three items nearest to this one | |
49 | >>> hits = list(idx.nearest((0,0,10,10), 3)) | |
50 | >>> hits | |
51 | [76, 48, 19] | |
52 | >>> len(hits) | |
53 | 3 | |
54 | ||
55 | ||
56 | Default order is [xmin, ymin, xmax, ymax] | |
57 | >>> ['%.10f' % t for t in idx.bounds] | |
58 | ['-186.6737892790', '-96.7177218184', '184.7613875560', '96.6043699778'] | |
59 | ||
60 | To get in order [xmin, xmax, ymin, ymax (... for n-d indexes)] use the kwarg: | |
61 | >>> ['%.10f' % t for t in idx.get_bounds(coordinate_interleaved=False)] | |
62 | ['-186.6737892790', '184.7613875560', '-96.7177218184', '96.6043699778'] | |
63 | ||
64 | Delete index members | |
65 | ||
66 | >>> for i, coords in enumerate(boxes15): | |
67 | ... idx.delete(i, coords) | |
68 | ||
69 | Delete our straggler too | |
70 | >>> idx.delete(4321, (34.3776829412, 26.7375853734, 49.3776829412, 41.7375853734) ) | |
71 | ||
72 | Check that we have deleted stuff | |
73 | ||
74 | >>> hits = 0 | |
75 | >>> hits = list(idx.intersection((0, 0, 60, 60))) | |
76 | >>> len(hits) | |
77 | 0 | |
78 | ||
79 | Check that nearest returns *all* of the items that are nearby | |
80 | ||
81 | >>> idx2 = Rtree() | |
82 | >>> idx2 | |
83 | <rtree.index.Index object at 0x...> | |
84 | ||
85 | >>> locs = [(14, 10, 14, 10), | |
86 | ... (16, 10, 16, 10)] | |
87 | ||
88 | >>> for i, (minx, miny, maxx, maxy) in enumerate(locs): | |
89 | ... idx2.add(i, (minx, miny, maxx, maxy)) | |
90 | ||
91 | >>> sorted(idx2.nearest((15, 10, 15, 10), 1)) | |
92 | [0, 1] | |
93 | ||
94 | ||
95 | Check that nearest returns *all* of the items that are nearby (with objects) | |
96 | >>> idx2 = Rtree() | |
97 | >>> idx2 | |
98 | <rtree.index.Index object at 0x...> | |
99 | ||
100 | >>> locs = [(14, 10, 14, 10), | |
101 | ... (16, 10, 16, 10)] | |
102 | ||
103 | >>> for i, (minx, miny, maxx, maxy) in enumerate(locs): | |
104 | ... idx2.add(i, (minx, miny, maxx, maxy), obj={'a': 42}) | |
105 | ||
106 | >>> sorted([(i.id, i.object) for i in idx2.nearest((15, 10, 15, 10), 1, objects=True)]) | |
107 | [(0, {'a': 42}), (1, {'a': 42})] | |
108 | ||
109 | ||
110 | >>> idx2 = Rtree() | |
111 | >>> idx2 | |
112 | <rtree.index.Index object at 0x...> | |
113 | ||
114 | >>> locs = [(2, 4), (6, 8), (10, 12), (11, 13), (15, 17), (13, 20)] | |
115 | ||
116 | >>> for i, (start, stop) in enumerate(locs): | |
117 | ... idx2.add(i, (start, 1, stop, 1)) | |
118 | ||
119 | >>> sorted(idx2.nearest((13, 0, 20, 2), 1)) | |
120 | [3, 4, 5] | |
121 | ||
122 | Default page size 4096 | |
123 | ||
124 | >>> idx3 = Rtree("defaultidx") | |
125 | >>> for i, coords in enumerate(boxes15): | |
126 | ... idx3.add(i, coords) | |
127 | >>> hits = list(idx3.intersection((0, 0, 60, 60))) | |
128 | >>> len(hits) | |
129 | 10 | |
130 | ||
131 | Make sure to delete the index or the file is not flushed and it | |
132 | will be invalid | |
133 | ||
134 | >>> del idx3 | |
135 | ||
136 | Page size 3 | |
137 | ||
138 | >>> idx4 = Rtree("pagesize3", pagesize=3) | |
139 | >>> for i, coords in enumerate(boxes15): | |
140 | ... idx4.add(i, coords) | |
141 | >>> hits = list(idx4.intersection((0, 0, 60, 60))) | |
142 | >>> len(hits) | |
143 | 10 | |
144 | ||
145 | >>> idx4.close() | |
146 | >>> del idx4 | |
147 | ||
148 | Test invalid name | |
149 | ||
150 | >>> inv = Rtree("bogus/foo") | |
151 | Traceback (most recent call last): | |
152 | ... | |
153 | IOError: Unable to open file 'bogus/foo.idx' for index storage | |
154 | ||
155 | Load a persisted index | |
156 | ||
157 | >>> import shutil | |
158 | >>> _ = shutil.copy("defaultidx.dat", "testing.dat") | |
159 | >>> _ = shutil.copy("defaultidx.idx", "testing.idx") | |
160 | ||
161 | # >>> import pdb;pdb.set_trace() | |
162 | ||
163 | >>> idx = Rtree("testing") | |
164 | >>> hits = list(idx.intersection((0, 0, 60, 60))) | |
165 | >>> len(hits) | |
166 | 10 | |
167 | ||
168 | Make a 3D index | |
169 | >>> p = index.Property() | |
170 | >>> p.dimension = 3 | |
171 | ||
172 | ||
173 | with interleaved=False, the order of input and output is: | |
174 | (xmin, xmax, ymin, ymax, zmin, zmax) | |
175 | ||
176 | >>> idx3d = index.Index(properties=p, interleaved=False) | |
177 | >>> idx3d | |
178 | <rtree.index.Index object at 0x...> | |
179 | ||
180 | >>> idx3d.insert(1, (0, 0, 60, 60, 22, 22.0)) | |
181 | ||
182 | >>> 1 in idx3d.intersection((-1, 1, 58, 62, 22, 24)) | |
183 | True | |
184 | ||
185 | ||
186 | Make a 4D index | |
187 | >>> p = index.Property() | |
188 | >>> p.dimension = 4 | |
189 | ||
190 | ||
191 | with interleaved=False, the order of input and output is: (xmin, xmax, ymin, ymax, zmin, zmax, kmin, kmax) | |
192 | ||
193 | >>> idx4d = index.Index(properties=p, interleaved=False) | |
194 | >>> idx4d | |
195 | <rtree.index.Index object at 0x...> | |
196 | ||
197 | >>> idx4d.insert(1, (0, 0, 60, 60, 22, 22.0, 128, 142)) | |
198 | ||
199 | >>> 1 in idx4d.intersection((-1, 1, 58, 62, 22, 24, 120, 150)) | |
200 | True | |
201 | ||
202 | Check that we can make an index with custom filename extensions | |
203 | ||
204 | >>> p = index.Property() | |
205 | >>> p.dat_extension = 'data' | |
206 | >>> p.idx_extension = 'index' | |
207 | ||
208 | >>> idx_cust = Rtree('custom', properties=p) | |
209 | >>> for i, coords in enumerate(boxes15): | |
210 | ... idx_cust.add(i, coords) | |
211 | >>> hits = list(idx_cust.intersection((0, 0, 60, 60))) | |
212 | >>> len(hits) | |
213 | 10 | |
214 | ||
215 | >>> del idx_cust | |
216 | ||
217 | Reopen the index | |
218 | >>> p2 = index.Property() | |
219 | >>> p2.dat_extension = 'data' | |
220 | >>> p2.idx_extension = 'index' | |
221 | ||
222 | >>> idx_cust2 = Rtree('custom', properties=p2) | |
223 | >>> hits = list(idx_cust2.intersection((0, 0, 60, 60))) | |
224 | >>> len(hits) | |
225 | 10 | |
226 | ||
227 | >>> del idx_cust2 | |
228 | ||
229 | Adding the same id twice does not overwrite existing data | |
230 | ||
231 | >>> r = Rtree() | |
232 | >>> r.add(1, (2, 2)) | |
233 | >>> r.add(1, (3, 3)) | |
234 | >>> list(r.intersection((0, 0, 5, 5))) | |
235 | [1, 1] | |
236 | ||
237 | A stream of data need that needs to be an iterator that will raise a | |
238 | StopIteration. The order depends on the interleaved kwarg sent to the | |
239 | constructor. | |
240 | ||
241 | The object can be None, but you must put a place holder of 'None' there. | |
242 | ||
243 | >>> p = index.Property() | |
244 | >>> def data_gen(interleaved=True): | |
245 | ... for i, (minx, miny, maxx, maxy) in enumerate(boxes15): | |
246 | ... if interleaved: | |
247 | ... yield (i, (minx, miny, maxx, maxy), 42) | |
248 | ... else: | |
249 | ... yield (i, (minx, maxx, miny, maxy), 42) | |
250 | ||
251 | >>> strm_idx = index.Rtree(data_gen(), properties = p) | |
252 | ||
253 | >>> hits = list(strm_idx.intersection((0, 0, 60, 60))) | |
254 | ||
255 | >>> len(hits) | |
256 | 10 | |
257 | ||
258 | ||
259 | >>> sorted(hits) | |
260 | [0, 4, 16, 27, 35, 40, 47, 50, 76, 80] | |
261 | ||
262 | >>> hits = list(strm_idx.intersection((0, 0, 60, 60), objects=True)) | |
263 | >>> len(hits) | |
264 | 10 | |
265 | ||
266 | >>> hits[0].object | |
267 | 42 | |
268 | ||
269 | Try streaming against a persisted index without interleaving. | |
270 | >>> strm_idx = index.Rtree('streamed', data_gen(interleaved=False), properties = p, interleaved=False) | |
271 | ||
272 | Note the arguments to intersection must be xmin, xmax, ymin, ymax for interleaved=False | |
273 | >>> hits = list(strm_idx.intersection((0, 60, 0, 60))) | |
274 | >>> len(hits) | |
275 | 10 | |
276 | ||
277 | >>> sorted(hits) | |
278 | [0, 4, 16, 27, 35, 40, 47, 50, 76, 80] | |
279 | ||
280 | >>> hits = list(strm_idx.intersection((0, 60, 0, 60), objects=True)) | |
281 | >>> len(hits) | |
282 | 10 | |
283 | ||
284 | >>> hits[0].object | |
285 | 42 | |
286 | ||
287 | >>> hits = list(strm_idx.intersection((0, 60, 0, 60), objects='raw')) | |
288 | >>> hits[0] | |
289 | 42 | |
290 | >>> len(hits) | |
291 | 10 | |
292 | ||
293 | >>> strm_idx.count((0, 60, 0, 60)) | |
294 | 10L | |
295 | ||
296 | >>> del strm_idx | |
297 | ||
298 | >>> p = index.Property() | |
299 | >>> p.leaf_capacity = 100 | |
300 | >>> p.fill_factor = 0.5 | |
301 | >>> p.index_capacity = 10 | |
302 | >>> p.near_minimum_overlap_factor = 7 | |
303 | >>> idx = index.Index(data_gen(interleaved=False), properties = p, interleaved=False) | |
304 | ||
305 | >>> leaves = idx.leaves() | |
306 | ||
307 | >>> del idx |
0 | Bounding Box Checking | |
1 | ===================== | |
2 | ||
3 | See http://trac.gispython.org/projects/PCL/ticket/127. | |
4 | ||
5 | Adding with bogus bounds | |
6 | ------------------------ | |
7 | ||
8 | >>> import rtree | |
9 | >>> index = rtree.Rtree() | |
10 | >>> index.add(1, (0.0, 0.0, -1.0, 1.0)) | |
11 | Traceback (most recent call last): | |
12 | ... | |
13 | RTreeError: Coordinates must not have minimums more than maximums | |
14 | ||
15 | >>> index.intersection((0.0, 0.0, -1.0, 1.0)) | |
16 | Traceback (most recent call last): | |
17 | ... | |
18 | RTreeError: Coordinates must not have minimums more than maximums | |
19 | ||
20 | Adding with invalid bounds argument should raise an exception | |
21 | ||
22 | >>> index.add(1, 1) | |
23 | Traceback (most recent call last): | |
24 | ... | |
25 | TypeError: Bounds must be a sequence |
0 | .. _index_test: | |
1 | ||
2 | Examples | |
3 | .............................................................................. | |
4 | ||
5 | >>> from rtree import index | |
6 | >>> from rtree.index import Rtree | |
7 | ||
8 | Ensure libspatialindex version is >= 1.7.0 | |
9 | ||
10 | >>> index.__c_api_version__.split('.')[1] >= 7 | |
11 | True | |
12 | ||
13 | Make an instance, index stored in memory | |
14 | ||
15 | >>> p = index.Property() | |
16 | ||
17 | >>> idx = index.Index(properties=p) | |
18 | >>> idx | |
19 | <rtree.index.Index object at 0x...> | |
20 | ||
21 | Add 100 largish boxes randomly distributed over the domain | |
22 | ||
23 | >>> for i, coords in enumerate(boxes15): | |
24 | ... idx.add(i, coords) | |
25 | ||
26 | >>> 0 in idx.intersection((0, 0, 60, 60)) | |
27 | True | |
28 | >>> hits = list(idx.intersection((0, 0, 60, 60))) | |
29 | >>> len(hits) | |
30 | 10 | |
31 | >>> hits | |
32 | [0, 4, 16, 27, 35, 40, 47, 50, 76, 80] | |
33 | ||
34 | Insert an object into the index that can be pickled | |
35 | ||
36 | >>> idx.insert(4321, (34.3776829412, 26.7375853734, 49.3776829412, 41.7375853734), obj=42) | |
37 | ||
38 | Fetch our straggler that contains a pickled object | |
39 | >>> hits = idx.intersection((0, 0, 60, 60), objects=True) | |
40 | >>> for i in hits: | |
41 | ... if i.id == 4321: | |
42 | ... i.object | |
43 | ... ['%.10f' % t for t in i.bbox] | |
44 | 42 | |
45 | ['34.3776829412', '26.7375853734', '49.3776829412', '41.7375853734'] | |
46 | ||
47 | ||
48 | Find the three items nearest to this one | |
49 | >>> hits = list(idx.nearest((0,0,10,10), 3)) | |
50 | >>> hits | |
51 | [76, 48, 19] | |
52 | >>> len(hits) | |
53 | 3 | |
54 | ||
55 | ||
56 | Default order is [xmin, ymin, xmax, ymax] | |
57 | >>> ['%.10f' % t for t in idx.bounds] | |
58 | ['-186.6737892790', '-96.7177218184', '184.7613875560', '96.6043699778'] | |
59 | ||
60 | To get in order [xmin, xmax, ymin, ymax (... for n-d indexes)] use the kwarg: | |
61 | >>> ['%.10f' % t for t in idx.get_bounds(coordinate_interleaved=False)] | |
62 | ['-186.6737892790', '184.7613875560', '-96.7177218184', '96.6043699778'] | |
63 | ||
64 | Delete index members | |
65 | ||
66 | >>> for i, coords in enumerate(boxes15): | |
67 | ... idx.delete(i, coords) | |
68 | ||
69 | Delete our straggler too | |
70 | >>> idx.delete(4321, (34.3776829412, 26.7375853734, 49.3776829412, 41.7375853734) ) | |
71 | ||
72 | Check that we have deleted stuff | |
73 | ||
74 | >>> hits = 0 | |
75 | >>> hits = list(idx.intersection((0, 0, 60, 60))) | |
76 | >>> len(hits) | |
77 | 0 | |
78 | ||
79 | Check that nearest returns *all* of the items that are nearby | |
80 | ||
81 | >>> idx2 = Rtree() | |
82 | >>> idx2 | |
83 | <rtree.index.Index object at 0x...> | |
84 | ||
85 | >>> locs = [(14, 10, 14, 10), | |
86 | ... (16, 10, 16, 10)] | |
87 | ||
88 | >>> for i, (minx, miny, maxx, maxy) in enumerate(locs): | |
89 | ... idx2.add(i, (minx, miny, maxx, maxy)) | |
90 | ||
91 | >>> sorted(idx2.nearest((15, 10, 15, 10), 1)) | |
92 | [0, 1] | |
93 | ||
94 | ||
95 | Check that nearest returns *all* of the items that are nearby (with objects) | |
96 | >>> idx2 = Rtree() | |
97 | >>> idx2 | |
98 | <rtree.index.Index object at 0x...> | |
99 | ||
100 | >>> locs = [(14, 10, 14, 10), | |
101 | ... (16, 10, 16, 10)] | |
102 | ||
103 | >>> for i, (minx, miny, maxx, maxy) in enumerate(locs): | |
104 | ... idx2.add(i, (minx, miny, maxx, maxy), obj={'a': 42}) | |
105 | ||
106 | >>> sorted([(i.id, i.object) for i in idx2.nearest((15, 10, 15, 10), 1, objects=True)]) | |
107 | [(0, {'a': 42}), (1, {'a': 42})] | |
108 | ||
109 | ||
110 | >>> idx2 = Rtree() | |
111 | >>> idx2 | |
112 | <rtree.index.Index object at 0x...> | |
113 | ||
114 | >>> locs = [(2, 4), (6, 8), (10, 12), (11, 13), (15, 17), (13, 20)] | |
115 | ||
116 | >>> for i, (start, stop) in enumerate(locs): | |
117 | ... idx2.add(i, (start, 1, stop, 1)) | |
118 | ||
119 | >>> sorted(idx2.nearest((13, 0, 20, 2), 1)) | |
120 | [3, 4, 5] | |
121 | ||
122 | Default page size 4096 | |
123 | ||
124 | >>> idx3 = Rtree("defaultidx") | |
125 | >>> for i, coords in enumerate(boxes15): | |
126 | ... idx3.add(i, coords) | |
127 | >>> hits = list(idx3.intersection((0, 0, 60, 60))) | |
128 | >>> len(hits) | |
129 | 10 | |
130 | ||
131 | Make sure to delete the index or the file is not flushed and it | |
132 | will be invalid | |
133 | ||
134 | >>> del idx3 | |
135 | ||
136 | Page size 3 | |
137 | ||
138 | >>> idx4 = Rtree("pagesize3", pagesize=3) | |
139 | >>> for i, coords in enumerate(boxes15): | |
140 | ... idx4.add(i, coords) | |
141 | >>> hits = list(idx4.intersection((0, 0, 60, 60))) | |
142 | >>> len(hits) | |
143 | 10 | |
144 | ||
145 | >>> idx4.close() | |
146 | >>> del idx4 | |
147 | ||
148 | Test invalid name | |
149 | ||
150 | >>> inv = Rtree("bogus/foo") | |
151 | Traceback (most recent call last): | |
152 | ... | |
153 | IOError: Unable to open file 'bogus/foo.idx' for index storage | |
154 | ||
155 | Load a persisted index | |
156 | ||
157 | >>> import shutil | |
158 | >>> _ = shutil.copy("defaultidx.dat", "testing.dat") | |
159 | >>> _ = shutil.copy("defaultidx.idx", "testing.idx") | |
160 | ||
161 | # >>> import pdb;pdb.set_trace() | |
162 | ||
163 | >>> idx = Rtree("testing") | |
164 | >>> hits = list(idx.intersection((0, 0, 60, 60))) | |
165 | >>> len(hits) | |
166 | 10 | |
167 | ||
168 | Make a 3D index | |
169 | >>> p = index.Property() | |
170 | >>> p.dimension = 3 | |
171 | ||
172 | ||
173 | with interleaved=False, the order of input and output is: | |
174 | (xmin, xmax, ymin, ymax, zmin, zmax) | |
175 | ||
176 | >>> idx3d = index.Index(properties=p, interleaved=False) | |
177 | >>> idx3d | |
178 | <rtree.index.Index object at 0x...> | |
179 | ||
180 | >>> idx3d.insert(1, (0, 0, 60, 60, 22, 22.0)) | |
181 | ||
182 | >>> 1 in idx3d.intersection((-1, 1, 58, 62, 22, 24)) | |
183 | True | |
184 | ||
185 | ||
186 | Make a 4D index | |
187 | >>> p = index.Property() | |
188 | >>> p.dimension = 4 | |
189 | ||
190 | ||
191 | with interleaved=False, the order of input and output is: (xmin, xmax, ymin, ymax, zmin, zmax, kmin, kmax) | |
192 | ||
193 | >>> idx4d = index.Index(properties=p, interleaved=False) | |
194 | >>> idx4d | |
195 | <rtree.index.Index object at 0x...> | |
196 | ||
197 | >>> idx4d.insert(1, (0, 0, 60, 60, 22, 22.0, 128, 142)) | |
198 | ||
199 | >>> 1 in idx4d.intersection((-1, 1, 58, 62, 22, 24, 120, 150)) | |
200 | True | |
201 | ||
202 | Check that we can make an index with custom filename extensions | |
203 | ||
204 | >>> p = index.Property() | |
205 | >>> p.dat_extension = 'data' | |
206 | >>> p.idx_extension = 'index' | |
207 | ||
208 | >>> idx_cust = Rtree('custom', properties=p) | |
209 | >>> for i, coords in enumerate(boxes15): | |
210 | ... idx_cust.add(i, coords) | |
211 | >>> hits = list(idx_cust.intersection((0, 0, 60, 60))) | |
212 | >>> len(hits) | |
213 | 10 | |
214 | ||
215 | >>> del idx_cust | |
216 | ||
217 | Reopen the index | |
218 | >>> p2 = index.Property() | |
219 | >>> p2.dat_extension = 'data' | |
220 | >>> p2.idx_extension = 'index' | |
221 | ||
222 | >>> idx_cust2 = Rtree('custom', properties=p2) | |
223 | >>> hits = list(idx_cust2.intersection((0, 0, 60, 60))) | |
224 | >>> len(hits) | |
225 | 10 | |
226 | ||
227 | >>> del idx_cust2 | |
228 | ||
229 | Adding the same id twice does not overwrite existing data | |
230 | ||
231 | >>> r = Rtree() | |
232 | >>> r.add(1, (2, 2)) | |
233 | >>> r.add(1, (3, 3)) | |
234 | >>> list(r.intersection((0, 0, 5, 5))) | |
235 | [1, 1] | |
236 | ||
237 | A stream of data need that needs to be an iterator that will raise a | |
238 | StopIteration. The order depends on the interleaved kwarg sent to the | |
239 | constructor. | |
240 | ||
241 | The object can be None, but you must put a place holder of 'None' there. | |
242 | ||
243 | >>> p = index.Property() | |
244 | >>> def data_gen(interleaved=True): | |
245 | ... for i, (minx, miny, maxx, maxy) in enumerate(boxes15): | |
246 | ... if interleaved: | |
247 | ... yield (i, (minx, miny, maxx, maxy), 42) | |
248 | ... else: | |
249 | ... yield (i, (minx, maxx, miny, maxy), 42) | |
250 | ||
251 | >>> strm_idx = index.Rtree(data_gen(), properties = p) | |
252 | ||
253 | >>> hits = list(strm_idx.intersection((0, 0, 60, 60))) | |
254 | ||
255 | >>> len(hits) | |
256 | 10 | |
257 | ||
258 | ||
259 | >>> sorted(hits) | |
260 | [0, 4, 16, 27, 35, 40, 47, 50, 76, 80] | |
261 | ||
262 | >>> hits = list(strm_idx.intersection((0, 0, 60, 60), objects=True)) | |
263 | >>> len(hits) | |
264 | 10 | |
265 | ||
266 | >>> hits[0].object | |
267 | 42 | |
268 | ||
269 | Try streaming against a persisted index without interleaving. | |
270 | >>> strm_idx = index.Rtree('streamed', data_gen(interleaved=False), properties = p, interleaved=False) | |
271 | ||
272 | Note the arguments to intersection must be xmin, xmax, ymin, ymax for interleaved=False | |
273 | >>> hits = list(strm_idx.intersection((0, 60, 0, 60))) | |
274 | >>> len(hits) | |
275 | 10 | |
276 | ||
277 | >>> sorted(hits) | |
278 | [0, 4, 16, 27, 35, 40, 47, 50, 76, 80] | |
279 | ||
280 | >>> hits = list(strm_idx.intersection((0, 60, 0, 60), objects=True)) | |
281 | >>> len(hits) | |
282 | 10 | |
283 | ||
284 | >>> hits[0].object | |
285 | 42 | |
286 | ||
287 | >>> hits = list(strm_idx.intersection((0, 60, 0, 60), objects='raw')) | |
288 | >>> hits[0] | |
289 | 42 | |
290 | >>> len(hits) | |
291 | 10 | |
292 | ||
293 | >>> strm_idx.count((0, 60, 0, 60)) | |
294 | 10L | |
295 | ||
296 | >>> del strm_idx | |
297 | ||
298 | >>> p = index.Property() | |
299 | >>> p.leaf_capacity = 100 | |
300 | >>> p.fill_factor = 0.5 | |
301 | >>> p.index_capacity = 10 | |
302 | >>> p.near_minimum_overlap_factor = 7 | |
303 | >>> idx = index.Index(data_gen(interleaved=False), properties = p, interleaved=False) | |
304 | ||
305 | >>> leaves = idx.leaves() | |
306 | ||
307 | >>> del idx |
0 | Testing rtree properties | |
1 | ========================== | |
2 | ||
3 | Make a simple properties object | |
4 | ||
5 | >>> from rtree import index | |
6 | >>> p = index.Property() | |
7 | ||
8 | Test as_dict() | |
9 | ||
10 | >>> d = p.as_dict() | |
11 | >>> d['index_id'] is None | |
12 | True | |
13 | ||
14 | Test creation from kwargs and eval() of its repr() | |
15 | ||
16 | >>> q = index.Property(**d) | |
17 | >>> eval(repr(q))['index_id'] is None | |
18 | True | |
19 | ||
20 | Test pretty printed string | |
21 | ||
22 | >>> print q | |
23 | {'buffering_capacity': 10, | |
24 | 'custom_storage_callbacks': None, | |
25 | 'custom_storage_callbacks_size': 0L, | |
26 | 'dat_extension': 'dat', | |
27 | 'dimension': 2, | |
28 | 'filename': '', | |
29 | 'fill_factor': 0..., | |
30 | 'idx_extension': 'idx', | |
31 | 'index_capacity': 100, | |
32 | 'index_id': None, | |
33 | 'leaf_capacity': 100, | |
34 | 'near_minimum_overlap_factor': 32, | |
35 | 'overwrite': True, | |
36 | 'pagesize': 4096, | |
37 | 'point_pool_capacity': 500, | |
38 | 'region_pool_capacity': 1000, | |
39 | 'reinsert_factor': 0..., | |
40 | 'split_distribution_factor': 0..., | |
41 | 'storage': 1, | |
42 | 'tight_mbr': True, | |
43 | 'tpr_horizon': 20.0, | |
44 | 'type': 0, | |
45 | 'variant': 2, | |
46 | 'writethrough': False} | |
47 | ||
48 | Test property setting | |
49 | ||
50 | >>> p = index.Property() | |
51 | >>> p.type = 0 | |
52 | >>> p.type | |
53 | 0 | |
54 | ||
55 | >>> p.type = 2 | |
56 | >>> p.type | |
57 | 2 | |
58 | ||
59 | >>> p.type = 6 | |
60 | Traceback (most recent call last): | |
61 | ... | |
62 | RTreeError: LASError in "IndexProperty_SetIndexType": Inputted value is not a valid index type | |
63 | ||
64 | >>> p.dimension = 3 | |
65 | >>> p.dimension | |
66 | 3 | |
67 | ||
68 | >>> p.dimension = 2 | |
69 | >>> p.dimension | |
70 | 2 | |
71 | ||
72 | >>> p.dimension = -2 | |
73 | Traceback (most recent call last): | |
74 | ... | |
75 | RTreeError: Negative or 0 dimensional indexes are not allowed | |
76 | ||
77 | >>> p.variant = 0 | |
78 | >>> p.variant | |
79 | 0 | |
80 | ||
81 | >>> p.variant = 6 | |
82 | Traceback (most recent call last): | |
83 | ... | |
84 | RTreeError: LASError in "IndexProperty_SetIndexVariant": Inputted value is not a valid index variant | |
85 | ||
86 | >>> p.storage = 0 | |
87 | >>> p.storage | |
88 | 0 | |
89 | ||
90 | >>> p.storage = 1 | |
91 | >>> p.storage | |
92 | 1 | |
93 | ||
94 | >>> p.storage = 3 | |
95 | Traceback (most recent call last): | |
96 | ... | |
97 | RTreeError: LASError in "IndexProperty_SetIndexStorage": Inputted value is not a valid index storage type | |
98 | ||
99 | >>> p.index_capacity | |
100 | 100 | |
101 | ||
102 | >>> p.index_capacity = 300 | |
103 | >>> p.index_capacity | |
104 | 300 | |
105 | ||
106 | >>> p.index_capacity = -4321 | |
107 | Traceback (most recent call last): | |
108 | ... | |
109 | RTreeError: index_capacity must be > 0 | |
110 | ||
111 | >>> p.pagesize | |
112 | 4096 | |
113 | ||
114 | >>> p.pagesize = 8192 | |
115 | >>> p.pagesize | |
116 | 8192 | |
117 | ||
118 | >>> p.pagesize = -4321 | |
119 | Traceback (most recent call last): | |
120 | ... | |
121 | RTreeError: Pagesize must be > 0 | |
122 | ||
123 | >>> p.leaf_capacity | |
124 | 100 | |
125 | ||
126 | >>> p.leaf_capacity = 1000 | |
127 | >>> p.leaf_capacity | |
128 | 1000 | |
129 | >>> p.leaf_capacity = -4321 | |
130 | Traceback (most recent call last): | |
131 | ... | |
132 | RTreeError: leaf_capacity must be > 0 | |
133 | ||
134 | >>> p.index_pool_capacity | |
135 | 100 | |
136 | ||
137 | >>> p.index_pool_capacity = 1500 | |
138 | >>> p.index_pool_capacity = -4321 | |
139 | Traceback (most recent call last): | |
140 | ... | |
141 | RTreeError: index_pool_capacity must be > 0 | |
142 | ||
143 | >>> p.point_pool_capacity | |
144 | 500 | |
145 | ||
146 | >>> p.point_pool_capacity = 1500 | |
147 | >>> p.point_pool_capacity = -4321 | |
148 | Traceback (most recent call last): | |
149 | ... | |
150 | RTreeError: point_pool_capacity must be > 0 | |
151 | ||
152 | >>> p.region_pool_capacity | |
153 | 1000 | |
154 | ||
155 | >>> p.region_pool_capacity = 1500 | |
156 | >>> p.region_pool_capacity | |
157 | 1500 | |
158 | >>> p.region_pool_capacity = -4321 | |
159 | Traceback (most recent call last): | |
160 | ... | |
161 | RTreeError: region_pool_capacity must be > 0 | |
162 | ||
163 | >>> p.buffering_capacity | |
164 | 10 | |
165 | ||
166 | >>> p.buffering_capacity = 100 | |
167 | >>> p.buffering_capacity = -4321 | |
168 | Traceback (most recent call last): | |
169 | ... | |
170 | RTreeError: buffering_capacity must be > 0 | |
171 | ||
172 | >>> p.tight_mbr | |
173 | True | |
174 | ||
175 | >>> p.tight_mbr = 100 | |
176 | >>> p.tight_mbr | |
177 | True | |
178 | ||
179 | >>> p.tight_mbr = False | |
180 | >>> p.tight_mbr | |
181 | False | |
182 | ||
183 | >>> p.overwrite | |
184 | True | |
185 | ||
186 | >>> p.overwrite = 100 | |
187 | >>> p.overwrite | |
188 | True | |
189 | ||
190 | >>> p.overwrite = False | |
191 | >>> p.overwrite | |
192 | False | |
193 | ||
194 | >>> p.near_minimum_overlap_factor | |
195 | 32 | |
196 | ||
197 | >>> p.near_minimum_overlap_factor = 100 | |
198 | >>> p.near_minimum_overlap_factor = -4321 | |
199 | Traceback (most recent call last): | |
200 | ... | |
201 | RTreeError: near_minimum_overlap_factor must be > 0 | |
202 | ||
203 | >>> p.writethrough | |
204 | False | |
205 | ||
206 | >>> p.writethrough = 100 | |
207 | >>> p.writethrough | |
208 | True | |
209 | ||
210 | >>> p.writethrough = False | |
211 | >>> p.writethrough | |
212 | False | |
213 | ||
214 | >>> '%.2f' % p.fill_factor | |
215 | '0.70' | |
216 | ||
217 | >>> p.fill_factor = 0.99 | |
218 | >>> '%.2f' % p.fill_factor | |
219 | '0.99' | |
220 | ||
221 | >>> '%.2f' % p.split_distribution_factor | |
222 | '0.40' | |
223 | ||
224 | >>> p.tpr_horizon | |
225 | 20.0 | |
226 | ||
227 | >>> '%.2f' % p.reinsert_factor | |
228 | '0.30' | |
229 | ||
230 | >>> p.filename | |
231 | '' | |
232 | ||
233 | >>> p.filename = 'testing123testing' | |
234 | >>> p.filename | |
235 | 'testing123testing' | |
236 | ||
237 | >>> p.dat_extension | |
238 | 'dat' | |
239 | ||
240 | >>> p.dat_extension = 'data' | |
241 | >>> p.dat_extension | |
242 | 'data' | |
243 | ||
244 | >>> p.idx_extension | |
245 | 'idx' | |
246 | >>> p.idx_extension = 'index' | |
247 | >>> p.idx_extension | |
248 | 'index' | |
249 | ||
250 | >>> p.index_id | |
251 | Traceback (most recent call last): | |
252 | ... | |
253 | RTreeError: Error in "IndexProperty_GetIndexID": Property IndexIdentifier was empty | |
254 | >>> p.index_id = -420 | |
255 | >>> int(p.index_id) | |
256 | -420 |
0 | ||
1 | Shows how to create custom storage backend. | |
2 | ||
3 | Derive your custom storage for rtree.index.CustomStorage and override the methods | |
4 | shown in this example. | |
5 | You can also derive from rtree.index.CustomStorageBase to get at the raw C buffers | |
6 | if you need the extra speed and want to avoid translating from/to python strings. | |
7 | ||
8 | The essential methods are the load/store/deleteByteArray. The rtree library calls | |
9 | them whenever it needs to access the data in any way. | |
10 | ||
11 | Example storage which maps the page (ids) to the page data. | |
12 | ||
13 | >>> from rtree.index import Rtree, CustomStorage, Property | |
14 | ||
15 | >>> class DictStorage(CustomStorage): | |
16 | ... """ A simple storage which saves the pages in a python dictionary """ | |
17 | ... def __init__(self): | |
18 | ... CustomStorage.__init__( self ) | |
19 | ... self.clear() | |
20 | ... | |
21 | ... def create(self, returnError): | |
22 | ... """ Called when the storage is created on the C side """ | |
23 | ... | |
24 | ... def destroy(self, returnError): | |
25 | ... """ Called when the storage is destroyed on the C side """ | |
26 | ... | |
27 | ... def clear(self): | |
28 | ... """ Clear all our data """ | |
29 | ... self.dict = {} | |
30 | ... | |
31 | ... def loadByteArray(self, page, returnError): | |
32 | ... """ Returns the data for page or returns an error """ | |
33 | ... try: | |
34 | ... return self.dict[page] | |
35 | ... except KeyError: | |
36 | ... returnError.contents.value = self.InvalidPageError | |
37 | ... | |
38 | ... def storeByteArray(self, page, data, returnError): | |
39 | ... """ Stores the data for page """ | |
40 | ... if page == self.NewPage: | |
41 | ... newPageId = len(self.dict) | |
42 | ... self.dict[newPageId] = data | |
43 | ... return newPageId | |
44 | ... else: | |
45 | ... if page not in self.dict: | |
46 | ... returnError.value = self.InvalidPageError | |
47 | ... return 0 | |
48 | ... self.dict[page] = data | |
49 | ... return page | |
50 | ... | |
51 | ... def deleteByteArray(self, page, returnError): | |
52 | ... """ Deletes a page """ | |
53 | ... try: | |
54 | ... del self.dict[page] | |
55 | ... except KeyError: | |
56 | ... returnError.contents.value = self.InvalidPageError | |
57 | ... | |
58 | ... hasData = property( lambda self: bool(self.dict) ) | |
59 | ... """ Returns true if we contains some data """ | |
60 | ||
61 | ||
62 | Now let's test drive our custom storage. | |
63 | ||
64 | First let's define the basic properties we will use for all rtrees: | |
65 | ||
66 | >>> settings = Property() | |
67 | >>> settings.writethrough = True | |
68 | >>> settings.buffering_capacity = 1 | |
69 | ||
70 | Notice that there is a small in-memory buffer by default. We effectively disable | |
71 | it here so our storage directly receives any load/store/delete calls. | |
72 | This is not necessary in general and can hamper performance; we just use it here | |
73 | for illustrative and testing purposes. | |
74 | ||
75 | Let's start with a basic test: | |
76 | ||
77 | Create the storage and hook it up with a new rtree: | |
78 | ||
79 | >>> storage = DictStorage() | |
80 | >>> r = Rtree( storage, properties = settings ) | |
81 | ||
82 | Interestingly enough, if we take a look at the contents of our storage now, we | |
83 | can see the Rtree has already written two pages to it. This is for header and | |
84 | index. | |
85 | ||
86 | >>> state1 = storage.dict.copy() | |
87 | >>> list(state1.keys()) | |
88 | [0, 1] | |
89 | ||
90 | Let's add an item: | |
91 | ||
92 | >>> r.add(123, (0, 0, 1, 1)) | |
93 | ||
94 | Make sure the data in the storage before and after the addition of the new item | |
95 | is different: | |
96 | ||
97 | >>> state2 = storage.dict.copy() | |
98 | >>> state1 != state2 | |
99 | True | |
100 | ||
101 | Now perform a few queries and assure the tree is still valid: | |
102 | ||
103 | >>> item = list(r.nearest((0, 0), 1, objects=True))[0] | |
104 | >>> int(item.id) | |
105 | 123 | |
106 | >>> r.valid() | |
107 | True | |
108 | ||
109 | Check if the stored data is a byte string | |
110 | ||
111 | >>> isinstance(list(storage.dict.values())[0], bytes) | |
112 | True | |
113 | ||
114 | Delete an item | |
115 | ||
116 | >>> r.delete(123, (0, 0, 1, 1)) | |
117 | >>> r.valid() | |
118 | True | |
119 | ||
120 | Just for reference show how to flush the internal buffers (e.g. when | |
121 | properties.buffer_capacity is > 1) | |
122 | ||
123 | >>> r.clearBuffer() | |
124 | >>> r.valid() | |
125 | True | |
126 | ||
127 | Let's get rid of the tree, we're done with it | |
128 | ||
129 | >>> del r | |
130 | ||
131 | Show how to empty the storage | |
132 | ||
133 | >>> storage.clear() | |
134 | >>> storage.hasData | |
135 | False | |
136 | >>> del storage | |
137 | ||
138 | ||
139 | Ok, let's create another small test. This time we'll test reopening our custom | |
140 | storage. This is useful for persistent storages. | |
141 | ||
142 | First create a storage and put some data into it: | |
143 | ||
144 | >>> storage = DictStorage() | |
145 | >>> r1 = Rtree( storage, properties = settings, overwrite = True ) | |
146 | >>> r1.add(555, (2, 2)) | |
147 | >>> del r1 | |
148 | >>> storage.hasData | |
149 | True | |
150 | ||
151 | Then reopen the storage with a new tree and see if the data is still there | |
152 | ||
153 | >>> r2 = Rtree( storage, properties = settings, overwrite = False ) | |
154 | >>> r2.count( (0,0,10,10) ) == 1 | |
155 | True | |
156 | >>> del r2 |
0 | ||
1 | make sure a file-based index is overwriteable. | |
2 | ||
3 | >>> from rtree.index import Rtree | |
4 | >>> r = Rtree('overwriteme') | |
5 | >>> del r | |
6 | >>> r = Rtree('overwriteme', overwrite=True) | |
7 | ||
8 | ||
9 | the default serializer is pickle, can use any by overriding dumps, loads | |
10 | ||
11 | >>> r = Rtree() | |
12 | >>> some_data = {"a": 22, "b": [1, "ccc"]} | |
13 | >>> try: | |
14 | ... import simplejson | |
15 | ... r.dumps = simplejson.dumps | |
16 | ... r.loads = simplejson.loads | |
17 | ... r.add(0, (0, 0, 1, 1), some_data) | |
18 | ... list(r.nearest((0, 0), 1, objects="raw"))[0] == some_data | |
19 | ... except ImportError: | |
20 | ... # "no import, failed" | |
21 | ... True | |
22 | True | |
23 | ||
24 | ||
25 | >>> r = Rtree() | |
26 | >>> r.add(123, (0, 0, 1, 1)) | |
27 | >>> item = list(r.nearest((0, 0), 1, objects=True))[0] | |
28 | >>> item.id | |
29 | 123 | |
30 | ||
31 | >>> r.valid() | |
32 | True | |
33 | ||
34 | test UTF-8 filenames | |
35 | ||
36 | >>> f = u'gilename\u4500abc' | |
37 | ||
38 | >>> r = Rtree(f) | |
39 | >>> r.insert(4321, (34.3776829412, 26.7375853734, 49.3776829412, 41.7375853734), obj=42) | |
40 | ||
41 | >>> del r |
0 | >>> from rtree import core | |
1 | >>> del core.rt | |
2 | >>> files = ['defaultidx.dat','defaultidx.idx', | |
3 | ... 'pagesize3.dat','pagesize3.idx', | |
4 | ... 'testing.dat','testing.idx', | |
5 | ... 'custom.data','custom.index', | |
6 | ... 'streamed.idx','streamed.dat', | |
7 | ... 'gilename䔀abc.dat','gilename䔀abc.idx', | |
8 | ... 'overwriteme.idx', 'overwriteme.dat'] | |
9 | >>> import os | |
10 | >>> import time | |
11 | >>> for f in files: | |
12 | ... try: | |
13 | ... os.remove(f) | |
14 | ... except OSError: | |
15 | ... time.sleep(0.1) | |
16 | ... os.remove(f) | |
17 |
0 | -99.6892107034 39.8916853627 | |
1 | -119.929145785 54.6975115102 | |
2 | -87.9404916748 46.7461477222 | |
3 | -103.34859903 27.3051127003 | |
4 | -103.647596618 53.8807507917 | |
5 | -91.8927833592 35.7753372219 | |
6 | -89.1652931686 36.1142561824 | |
7 | -110.126101209 23.1717348361 | |
8 | -80.0045033941 38.1454752139 | |
9 | -106.64011996 46.0721918659 | |
10 | -112.18238306 50.0387811785 | |
11 | -115.52666575 49.8014009131 | |
12 | -84.7813828591 26.1819403837 | |
13 | -106.036014203 20.4490174512 | |
14 | -101.757539351 41.9304244741 | |
15 | -95.8296753559 56.7690801746 | |
16 | -92.2014788887 22.7196420456 | |
17 | -92.4694650257 36.0372197509 | |
18 | -95.9693778032 24.7743480763 | |
19 | -116.892131043 58.3921666824 | |
20 | -81.7116145102 43.0997450632 | |
21 | -80.4274533745 57.264509789 | |
22 | -100.141449392 28.7945317045 | |
23 | -83.938309905 24.4706430566 | |
24 | -116.825766821 54.4155675643 | |
25 | -109.041900666 48.4070794953 | |
26 | -104.948562232 46.755055732 | |
27 | -93.3182267273 25.7153440898 | |
28 | -105.692452228 34.1668233688 | |
29 | -90.0421361623 44.3240654845 | |
30 | -104.539878551 21.6407826234 | |
31 | -101.237553395 41.1370129094 | |
32 | -86.6978722357 42.3737781109 | |
33 | -85.6212971271 54.3973766798 | |
34 | -84.7087435581 30.9745681929 | |
35 | -83.5560972991 27.2395110389 | |
36 | -112.224540205 34.0412265793 | |
37 | -112.217509712 21.0944835429 | |
38 | -92.3342976535 23.0815733884 | |
39 | -87.7231811048 42.0253658091 | |
40 | -83.7711891214 35.846982396 | |
41 | -114.489815147 39.9102600009 | |
42 | -110.230199031 55.6397960984 | |
43 | -89.7312460371 40.195281519 | |
44 | -114.839230733 30.9793913832 | |
45 | -91.8211592397 44.5025580473 | |
46 | -80.6740618977 27.4621539695 | |
47 | -89.0016046553 25.614226888 | |
48 | -109.529957004 35.76933736 | |
49 | -116.308211506 51.619356631 | |
50 | -108.039179673 48.8786346853 | |
51 | -114.688504417 38.6402036876 | |
52 | -101.453481649 50.5354288275 | |
53 | -89.2420289071 38.439732812 | |
54 | -86.7313281606 27.3379253528 | |
55 | -88.3419470249 31.4306493133 | |
56 | -81.3513268328 34.3657607419 | |
57 | -107.41532032 59.6742008238 | |
58 | -115.004114428 49.2723700185 | |
59 | -89.6973489591 22.8852207708 | |
60 | -83.8397837025 39.9841988713 | |
61 | -97.0186983466 22.0471178349 | |
62 | -106.424157679 32.895362106 | |
63 | -95.1848371325 51.5043201437 | |
64 | -86.3616752194 21.9548312528 | |
65 | -105.991425461 22.5286573131 | |
66 | -80.5668902367 42.9792375503 | |
67 | -112.783634301 40.0776845685 | |
68 | -82.5477767834 28.0613388183 | |
69 | -91.0515968771 23.1905128055 | |
70 | -110.149810556 47.1166200287 | |
71 | -83.1182736485 25.2447131214 | |
72 | -84.5076122639 44.5323462279 | |
73 | -105.648654014 20.1862355065 | |
74 | -110.475101921 31.8228885917 | |
75 | -99.8038044984 31.7219492613 | |
76 | -99.2715379504 55.995839678 | |
77 | -88.2132334567 43.2039556608 | |
78 | -106.08196914 48.2430863629 | |
79 | -112.208186359 53.1787640688 | |
80 | -95.5574424354 37.5192150379 | |
81 | -100.198900483 45.5786586794 | |
82 | -102.750565726 39.2778615954 | |
83 | -99.7750925736 39.5666262605 | |
84 | -101.473586825 57.4200835182 | |
85 | -80.299431091 32.2200518727 | |
86 | -93.1827073287 29.7365078678 | |
87 | -105.079941409 23.6106407608 | |
88 | -108.660717273 50.6341222608 | |
89 | -109.720823995 59.8332354648 | |
90 | -99.0121042978 48.4492204324 | |
91 | -116.279364519 56.5768009089 | |
92 | -94.2288025492 55.4285838669 | |
93 | -115.549238258 24.072445234 | |
94 | -99.157995656 30.0012169357 | |
95 | -97.0117440317 40.2556293841 | |
96 | -113.839305491 43.173923535 | |
97 | -80.4396012705 43.3278307342 | |
98 | -80.4410933026 31.5999201013 | |
99 | -91.4651975999 41.2493114192 | |
100 | 4.60579495227 49.8831194463 | |
101 | 3.20672814175 51.3609018809 | |
102 | 3.36845381787 51.2257002999 | |
103 | 5.07383783754 50.1890984029 | |
104 | 3.47062525041 54.4034196606 | |
105 | 9.42944118113 49.3310802203 | |
106 | 9.65029800292 52.3849774631 | |
107 | 3.15390911197 53.4356410134 | |
108 | 2.1641889441 46.3160467484 | |
109 | 2.76615096474 51.9333052166 | |
110 | 5.92610984385 52.3525895826 | |
111 | 8.27672843388 47.3971697671 | |
112 | 8.09018831965 45.0103660536 | |
113 | 0.295422443428 45.7047614237 | |
114 | 7.08973842847 50.6442234879 | |
115 | 9.17602294413 48.5340026698 | |
116 | 4.15118930812 51.8952792855 | |
117 | 2.28117868048 47.1178084718 | |
118 | 1.58374188707 52.2081804176 | |
119 | 2.53872504734 45.0849317176 | |
120 | 3.28711441668 45.432312898 | |
121 | 0.758226492991 53.2607340635 | |
122 | 3.25985095142 45.2573310672 | |
123 | 8.00601068476 47.0850066017 | |
124 | 1.28830154784 52.4991772013 | |
125 | 1.66996778024 51.6367913296 | |
126 | 3.9815072575 52.3482628229 | |
127 | 5.88977607375 53.9423490524 | |
128 | 8.96337945228 51.0139312609 | |
129 | 5.24818348639 46.134455909 | |
130 | 5.61289802982 54.5954320121 | |
131 | 3.62429244309 52.3359637309 | |
132 | 4.21198096829 50.0855475321 | |
133 | 7.94160546253 45.4384582083 | |
134 | 0.858870058152 47.360936282 | |
135 | 0.268199044279 54.6205941141 | |
136 | 8.8517468866 47.0177541617 | |
137 | 3.82634390799 49.6129475929 | |
138 | 7.18121039316 50.9102230118 | |
139 | 6.33987705575 48.5869965232 | |
140 | 4.27748859389 46.8445771091 | |
141 | 8.16102273984 52.0542397504 | |
142 | 8.90629765654 46.8267304009 | |
143 | 8.50418457135 54.2788836483 | |
144 | 2.83298541659 54.4015795236 | |
145 | 5.05727814519 46.8966220104 | |
146 | 5.42722723233 49.2522002876 | |
147 | 3.74779199012 47.3024407957 | |
148 | 5.5540513786 48.0000667884 | |
149 | 4.22046721694 48.4813710125 | |
150 | 7.77423935705 54.7851361942 | |
151 | 1.69100522731 49.1479444596 | |
152 | 9.88831364423 46.6307793199 | |
153 | 0.557972287014 49.2862367442 | |
154 | 3.833420502 54.5873993944 | |
155 | 6.46076039362 48.8972713871 | |
156 | 2.63955752349 54.232602505 | |
157 | 5.8885693494 45.5366348259 | |
158 | 4.43826955483 46.7887436394 | |
159 | 2.17158704423 48.0543749425 | |
160 | 7.72137464151 45.4366964436 | |
161 | 6.17791100255 50.2728073458 | |
162 | 8.76313855735 48.3295373887 | |
163 | 8.55934310911 45.7877488932 | |
164 | 5.44000186467 49.4093319122 | |
165 | 5.79751516319 54.5349162318 | |
166 | 8.1429940015 48.3476815956 | |
167 | 0.835864306517 49.0796610468 | |
168 | 0.696120353103 46.4340363637 | |
169 | 7.18918891741 52.2180907225 | |
170 | 2.52640501319 48.1556192069 | |
171 | 3.83156688721 48.4500486034 | |
172 | 9.22538921 45.0064651048 | |
173 | 3.14537149103 51.1287933121 | |
174 | 1.83951586739 50.9044177555 | |
175 | 2.71188429211 45.8851076637 | |
176 | 9.71187132506 47.3613845231 | |
177 | 4.88830479883 46.8400164971 | |
178 | 2.73227645129 52.5950604826 | |
179 | 0.0139182499152 49.8607853636 | |
180 | 2.72072453562 47.8977681874 | |
181 | 9.36967823942 46.9493204587 | |
182 | 4.10066520654 49.1711325962 | |
183 | 3.31840270526 52.9602125993 | |
184 | 2.4887190236 54.9710679569 | |
185 | 7.37847486987 52.8425110492 | |
186 | 1.0003078803 51.0441658531 | |
187 | 9.24208567999 54.4966902592 | |
188 | 3.29499693052 50.4276526188 | |
189 | 0.0878474024123 49.9046355191 | |
190 | 0.0235438079302 53.0495224168 | |
191 | 5.69863091011 46.2782685579 | |
192 | 8.32974835131 46.6179198084 | |
193 | 4.42226544448 50.304272252 | |
194 | 8.89982687198 51.7079435384 | |
195 | 7.32317795781 48.7452952961 | |
196 | 0.922452349264 47.7145307105 | |
197 | 4.83907582793 52.185114017 | |
198 | 4.30389764594 47.9144865159 | |
199 | 7.63593976114 47.452190841 |
0 | Testing rtree properties | |
1 | ========================== | |
2 | ||
3 | Make a simple properties object | |
4 | ||
5 | >>> from rtree import index | |
6 | >>> p = index.Property() | |
7 | ||
8 | Test as_dict() | |
9 | ||
10 | >>> d = p.as_dict() | |
11 | >>> d['index_id'] is None | |
12 | True | |
13 | ||
14 | Test creation from kwargs and eval() of its repr() | |
15 | ||
16 | >>> q = index.Property(**d) | |
17 | >>> eval(repr(q))['index_id'] is None | |
18 | True | |
19 | ||
20 | Test pretty printed string | |
21 | ||
22 | >>> print q | |
23 | {'buffering_capacity': 10, | |
24 | 'custom_storage_callbacks': None, | |
25 | 'custom_storage_callbacks_size': 0L, | |
26 | 'dat_extension': 'dat', | |
27 | 'dimension': 2, | |
28 | 'filename': '', | |
29 | 'fill_factor': 0..., | |
30 | 'idx_extension': 'idx', | |
31 | 'index_capacity': 100, | |
32 | 'index_id': None, | |
33 | 'leaf_capacity': 100, | |
34 | 'near_minimum_overlap_factor': 32, | |
35 | 'overwrite': True, | |
36 | 'pagesize': 4096, | |
37 | 'point_pool_capacity': 500, | |
38 | 'region_pool_capacity': 1000, | |
39 | 'reinsert_factor': 0..., | |
40 | 'split_distribution_factor': 0..., | |
41 | 'storage': 1, | |
42 | 'tight_mbr': True, | |
43 | 'tpr_horizon': 20.0, | |
44 | 'type': 0, | |
45 | 'variant': 2, | |
46 | 'writethrough': False} | |
47 | ||
48 | Test property setting | |
49 | ||
50 | >>> p = index.Property() | |
51 | >>> p.type = 0 | |
52 | >>> p.type | |
53 | 0 | |
54 | ||
55 | >>> p.type = 2 | |
56 | >>> p.type | |
57 | 2 | |
58 | ||
59 | >>> p.type = 6 | |
60 | Traceback (most recent call last): | |
61 | ... | |
62 | RTreeError: LASError in "IndexProperty_SetIndexType": Inputted value is not a valid index type | |
63 | ||
64 | >>> p.dimension = 3 | |
65 | >>> p.dimension | |
66 | 3 | |
67 | ||
68 | >>> p.dimension = 2 | |
69 | >>> p.dimension | |
70 | 2 | |
71 | ||
72 | >>> p.dimension = -2 | |
73 | Traceback (most recent call last): | |
74 | ... | |
75 | RTreeError: Negative or 0 dimensional indexes are not allowed | |
76 | ||
77 | >>> p.variant = 0 | |
78 | >>> p.variant | |
79 | 0 | |
80 | ||
81 | >>> p.variant = 6 | |
82 | Traceback (most recent call last): | |
83 | ... | |
84 | RTreeError: LASError in "IndexProperty_SetIndexVariant": Inputted value is not a valid index variant | |
85 | ||
86 | >>> p.storage = 0 | |
87 | >>> p.storage | |
88 | 0 | |
89 | ||
90 | >>> p.storage = 1 | |
91 | >>> p.storage | |
92 | 1 | |
93 | ||
94 | >>> p.storage = 3 | |
95 | Traceback (most recent call last): | |
96 | ... | |
97 | RTreeError: LASError in "IndexProperty_SetIndexStorage": Inputted value is not a valid index storage type | |
98 | ||
99 | >>> p.index_capacity | |
100 | 100 | |
101 | ||
102 | >>> p.index_capacity = 300 | |
103 | >>> p.index_capacity | |
104 | 300 | |
105 | ||
106 | >>> p.index_capacity = -4321 | |
107 | Traceback (most recent call last): | |
108 | ... | |
109 | RTreeError: index_capacity must be > 0 | |
110 | ||
111 | >>> p.pagesize | |
112 | 4096 | |
113 | ||
114 | >>> p.pagesize = 8192 | |
115 | >>> p.pagesize | |
116 | 8192 | |
117 | ||
118 | >>> p.pagesize = -4321 | |
119 | Traceback (most recent call last): | |
120 | ... | |
121 | RTreeError: Pagesize must be > 0 | |
122 | ||
123 | >>> p.leaf_capacity | |
124 | 100 | |
125 | ||
126 | >>> p.leaf_capacity = 1000 | |
127 | >>> p.leaf_capacity | |
128 | 1000 | |
129 | >>> p.leaf_capacity = -4321 | |
130 | Traceback (most recent call last): | |
131 | ... | |
132 | RTreeError: leaf_capacity must be > 0 | |
133 | ||
134 | >>> p.index_pool_capacity | |
135 | 100 | |
136 | ||
137 | >>> p.index_pool_capacity = 1500 | |
138 | >>> p.index_pool_capacity = -4321 | |
139 | Traceback (most recent call last): | |
140 | ... | |
141 | RTreeError: index_pool_capacity must be > 0 | |
142 | ||
143 | >>> p.point_pool_capacity | |
144 | 500 | |
145 | ||
146 | >>> p.point_pool_capacity = 1500 | |
147 | >>> p.point_pool_capacity = -4321 | |
148 | Traceback (most recent call last): | |
149 | ... | |
150 | RTreeError: point_pool_capacity must be > 0 | |
151 | ||
152 | >>> p.region_pool_capacity | |
153 | 1000 | |
154 | ||
155 | >>> p.region_pool_capacity = 1500 | |
156 | >>> p.region_pool_capacity | |
157 | 1500 | |
158 | >>> p.region_pool_capacity = -4321 | |
159 | Traceback (most recent call last): | |
160 | ... | |
161 | RTreeError: region_pool_capacity must be > 0 | |
162 | ||
163 | >>> p.buffering_capacity | |
164 | 10 | |
165 | ||
166 | >>> p.buffering_capacity = 100 | |
167 | >>> p.buffering_capacity = -4321 | |
168 | Traceback (most recent call last): | |
169 | ... | |
170 | RTreeError: buffering_capacity must be > 0 | |
171 | ||
172 | >>> p.tight_mbr | |
173 | True | |
174 | ||
175 | >>> p.tight_mbr = 100 | |
176 | >>> p.tight_mbr | |
177 | True | |
178 | ||
179 | >>> p.tight_mbr = False | |
180 | >>> p.tight_mbr | |
181 | False | |
182 | ||
183 | >>> p.overwrite | |
184 | True | |
185 | ||
186 | >>> p.overwrite = 100 | |
187 | >>> p.overwrite | |
188 | True | |
189 | ||
190 | >>> p.overwrite = False | |
191 | >>> p.overwrite | |
192 | False | |
193 | ||
194 | >>> p.near_minimum_overlap_factor | |
195 | 32 | |
196 | ||
197 | >>> p.near_minimum_overlap_factor = 100 | |
198 | >>> p.near_minimum_overlap_factor = -4321 | |
199 | Traceback (most recent call last): | |
200 | ... | |
201 | RTreeError: near_minimum_overlap_factor must be > 0 | |
202 | ||
203 | >>> p.writethrough | |
204 | False | |
205 | ||
206 | >>> p.writethrough = 100 | |
207 | >>> p.writethrough | |
208 | True | |
209 | ||
210 | >>> p.writethrough = False | |
211 | >>> p.writethrough | |
212 | False | |
213 | ||
214 | >>> '%.2f' % p.fill_factor | |
215 | '0.70' | |
216 | ||
217 | >>> p.fill_factor = 0.99 | |
218 | >>> '%.2f' % p.fill_factor | |
219 | '0.99' | |
220 | ||
221 | >>> '%.2f' % p.split_distribution_factor | |
222 | '0.40' | |
223 | ||
224 | >>> p.tpr_horizon | |
225 | 20.0 | |
226 | ||
227 | >>> '%.2f' % p.reinsert_factor | |
228 | '0.30' | |
229 | ||
230 | >>> p.filename | |
231 | '' | |
232 | ||
233 | >>> p.filename = 'testing123testing' | |
234 | >>> p.filename | |
235 | 'testing123testing' | |
236 | ||
237 | >>> p.dat_extension | |
238 | 'dat' | |
239 | ||
240 | >>> p.dat_extension = 'data' | |
241 | >>> p.dat_extension | |
242 | 'data' | |
243 | ||
244 | >>> p.idx_extension | |
245 | 'idx' | |
246 | >>> p.idx_extension = 'index' | |
247 | >>> p.idx_extension | |
248 | 'index' | |
249 | ||
250 | >>> p.index_id | |
251 | Traceback (most recent call last): | |
252 | ... | |
253 | RTreeError: Error in "IndexProperty_GetIndexID": Property IndexIdentifier was empty | |
254 | >>> p.index_id = -420 | |
255 | >>> int(p.index_id) | |
256 | -420 |
0 | #!/bin/sh | |
1 | valgrind --tool=memcheck --leak-check=yes --suppressions=/home/sean/Projects/valgrind-python.supp python test_doctests.py | |
2 |
0 | ||
1 | Shows how to create custom storage backend. | |
2 | ||
3 | Derive your custom storage for rtree.index.CustomStorage and override the methods | |
4 | shown in this example. | |
5 | You can also derive from rtree.index.CustomStorageBase to get at the raw C buffers | |
6 | if you need the extra speed and want to avoid translating from/to python strings. | |
7 | ||
8 | The essential methods are the load/store/deleteByteArray. The rtree library calls | |
9 | them whenever it needs to access the data in any way. | |
10 | ||
11 | Example storage which maps the page (ids) to the page data. | |
12 | ||
13 | >>> from rtree.index import Rtree, CustomStorage, Property | |
14 | ||
15 | >>> class DictStorage(CustomStorage): | |
16 | ... """ A simple storage which saves the pages in a python dictionary """ | |
17 | ... def __init__(self): | |
18 | ... CustomStorage.__init__( self ) | |
19 | ... self.clear() | |
20 | ... | |
21 | ... def create(self, returnError): | |
22 | ... """ Called when the storage is created on the C side """ | |
23 | ... | |
24 | ... def destroy(self, returnError): | |
25 | ... """ Called when the storage is destroyed on the C side """ | |
26 | ... | |
27 | ... def clear(self): | |
28 | ... """ Clear all our data """ | |
29 | ... self.dict = {} | |
30 | ... | |
31 | ... def loadByteArray(self, page, returnError): | |
32 | ... """ Returns the data for page or returns an error """ | |
33 | ... try: | |
34 | ... return self.dict[page] | |
35 | ... except KeyError: | |
36 | ... returnError.contents.value = self.InvalidPageError | |
37 | ... | |
38 | ... def storeByteArray(self, page, data, returnError): | |
39 | ... """ Stores the data for page """ | |
40 | ... if page == self.NewPage: | |
41 | ... newPageId = len(self.dict) | |
42 | ... self.dict[newPageId] = data | |
43 | ... return newPageId | |
44 | ... else: | |
45 | ... if page not in self.dict: | |
46 | ... returnError.value = self.InvalidPageError | |
47 | ... return 0 | |
48 | ... self.dict[page] = data | |
49 | ... return page | |
50 | ... | |
51 | ... def deleteByteArray(self, page, returnError): | |
52 | ... """ Deletes a page """ | |
53 | ... try: | |
54 | ... del self.dict[page] | |
55 | ... except KeyError: | |
56 | ... returnError.contents.value = self.InvalidPageError | |
57 | ... | |
58 | ... hasData = property( lambda self: bool(self.dict) ) | |
59 | ... """ Returns true if we contains some data """ | |
60 | ||
61 | ||
62 | Now let's test drive our custom storage. | |
63 | ||
64 | First let's define the basic properties we will use for all rtrees: | |
65 | ||
66 | >>> settings = Property() | |
67 | >>> settings.writethrough = True | |
68 | >>> settings.buffering_capacity = 1 | |
69 | ||
70 | Notice that there is a small in-memory buffer by default. We effectively disable | |
71 | it here so our storage directly receives any load/store/delete calls. | |
72 | This is not necessary in general and can hamper performance; we just use it here | |
73 | for illustrative and testing purposes. | |
74 | ||
75 | Let's start with a basic test: | |
76 | ||
77 | Create the storage and hook it up with a new rtree: | |
78 | ||
79 | >>> storage = DictStorage() | |
80 | >>> r = Rtree( storage, properties = settings ) | |
81 | ||
82 | Interestingly enough, if we take a look at the contents of our storage now, we | |
83 | can see the Rtree has already written two pages to it. This is for header and | |
84 | index. | |
85 | ||
86 | >>> state1 = storage.dict.copy() | |
87 | >>> list(state1.keys()) | |
88 | [0, 1] | |
89 | ||
90 | Let's add an item: | |
91 | ||
92 | >>> r.add(123, (0, 0, 1, 1)) | |
93 | ||
94 | Make sure the data in the storage before and after the addition of the new item | |
95 | is different: | |
96 | ||
97 | >>> state2 = storage.dict.copy() | |
98 | >>> state1 != state2 | |
99 | True | |
100 | ||
101 | Now perform a few queries and assure the tree is still valid: | |
102 | ||
103 | >>> item = list(r.nearest((0, 0), 1, objects=True))[0] | |
104 | >>> int(item.id) | |
105 | 123 | |
106 | >>> r.valid() | |
107 | True | |
108 | ||
109 | Check if the stored data is a byte string | |
110 | ||
111 | >>> isinstance(list(storage.dict.values())[0], bytes) | |
112 | True | |
113 | ||
114 | Delete an item | |
115 | ||
116 | >>> r.delete(123, (0, 0, 1, 1)) | |
117 | >>> r.valid() | |
118 | True | |
119 | ||
120 | Just for reference show how to flush the internal buffers (e.g. when | |
121 | properties.buffer_capacity is > 1) | |
122 | ||
123 | >>> r.clearBuffer() | |
124 | >>> r.valid() | |
125 | True | |
126 | ||
127 | Let's get rid of the tree, we're done with it | |
128 | ||
129 | >>> del r | |
130 | ||
131 | Show how to empty the storage | |
132 | ||
133 | >>> storage.clear() | |
134 | >>> storage.hasData | |
135 | False | |
136 | >>> del storage | |
137 | ||
138 | ||
139 | Ok, let's create another small test. This time we'll test reopening our custom | |
140 | storage. This is useful for persistent storages. | |
141 | ||
142 | First create a storage and put some data into it: | |
143 | ||
144 | >>> storage = DictStorage() | |
145 | >>> r1 = Rtree( storage, properties = settings, overwrite = True ) | |
146 | >>> r1.add(555, (2, 2)) | |
147 | >>> del r1 | |
148 | >>> storage.hasData | |
149 | True | |
150 | ||
151 | Then reopen the storage with a new tree and see if the data is still there | |
152 | ||
153 | >>> r2 = Rtree( storage, properties = settings, overwrite = False ) | |
154 | >>> r2.count( (0,0,10,10) ) == 1 | |
155 | True | |
156 | >>> del r2 |
0 | import doctest | |
1 | import unittest | |
2 | import glob | |
3 | import os | |
4 | ||
5 | #from zope.testing import doctest | |
6 | from rtree.index import major_version, minor_version, patch_version | |
7 | ||
8 | from .data import boxes15, boxes3, points | |
9 | ||
10 | optionflags = (doctest.REPORT_ONLY_FIRST_FAILURE | | |
11 | doctest.NORMALIZE_WHITESPACE | | |
12 | doctest.ELLIPSIS) | |
13 | ||
14 | def list_doctests(): | |
15 | # Skip the custom storage test unless we have libspatialindex 1.8+. | |
16 | return [filename | |
17 | for filename | |
18 | in glob.glob(os.path.join(os.path.dirname(__file__), '*.txt')) | |
19 | if not ( | |
20 | filename.endswith('customStorage.txt') | |
21 | and major_version < 2 and minor_version < 8)] | |
22 | ||
23 | def open_file(filename, mode='r'): | |
24 | """Helper function to open files from within the tests package.""" | |
25 | return open(os.path.join(os.path.dirname(__file__), filename), mode) | |
26 | ||
27 | def setUp(test): | |
28 | test.globs.update(dict( | |
29 | open_file = open_file, | |
30 | boxes15=boxes15, | |
31 | boxes3=boxes3, | |
32 | points=points | |
33 | )) | |
34 | ||
35 | def test_suite(): | |
36 | return unittest.TestSuite( | |
37 | [doctest.DocFileSuite(os.path.basename(filename), | |
38 | optionflags=optionflags, | |
39 | setUp=setUp) | |
40 | for filename | |
41 | in sorted(list_doctests())]) | |
42 | ||
43 | if __name__ == "__main__": | |
44 | runner = unittest.TextTestRunner() | |
45 | runner.run(test_suite()) |
0 | from rtree import index | |
1 | ||
2 | from .data import boxes15 | |
3 | ||
4 | def boxes15_stream(interleaved=True): | |
5 | for i, (minx, miny, maxx, maxy) in enumerate(boxes15): | |
6 | if interleaved: | |
7 | yield (i, (minx, miny, maxx, maxy), 42) | |
8 | else: | |
9 | yield (i, (minx, maxx, miny, maxy), 42) | |
10 | ||
11 | ||
12 | def test_rtree_constructor_stream_input(): | |
13 | p = index.Property() | |
14 | sindex = index.Rtree(boxes15_stream(), properties=p) | |
15 | ||
16 | bounds = (0, 0, 60, 60) | |
17 | hits = list(sindex.intersection(bounds)) | |
18 | assert sorted(hits) == [0, 4, 16, 27, 35, 40, 47, 50, 76, 80] |
0 | ||
1 | make sure a file-based index is overwriteable. | |
2 | ||
3 | >>> from rtree.index import Rtree | |
4 | >>> r = Rtree('overwriteme') | |
5 | >>> del r | |
6 | >>> r = Rtree('overwriteme', overwrite=True) | |
7 | ||
8 | ||
9 | the default serializer is pickle, can use any by overriding dumps, loads | |
10 | ||
11 | >>> r = Rtree() | |
12 | >>> some_data = {"a": 22, "b": [1, "ccc"]} | |
13 | >>> try: | |
14 | ... import simplejson | |
15 | ... r.dumps = simplejson.dumps | |
16 | ... r.loads = simplejson.loads | |
17 | ... r.add(0, (0, 0, 1, 1), some_data) | |
18 | ... list(r.nearest((0, 0), 1, objects="raw"))[0] == some_data | |
19 | ... except ImportError: | |
20 | ... # "no import, failed" | |
21 | ... True | |
22 | True | |
23 | ||
24 | ||
25 | >>> r = Rtree() | |
26 | >>> r.add(123, (0, 0, 1, 1)) | |
27 | >>> item = list(r.nearest((0, 0), 1, objects=True))[0] | |
28 | >>> item.id | |
29 | 123 | |
30 | ||
31 | >>> r.valid() | |
32 | True | |
33 | ||
34 | test UTF-8 filenames | |
35 | ||
36 | >>> f = u'gilename\u4500abc' | |
37 | ||
38 | >>> r = Rtree(f) | |
39 | >>> r.insert(4321, (34.3776829412, 26.7375853734, 49.3776829412, 41.7375853734), obj=42) | |
40 | ||
41 | >>> del r |
0 | >>> from rtree import core | |
1 | >>> del core.rt | |
2 | >>> files = ['defaultidx.dat','defaultidx.idx', | |
3 | ... 'pagesize3.dat','pagesize3.idx', | |
4 | ... 'testing.dat','testing.idx', | |
5 | ... 'custom.data','custom.index', | |
6 | ... 'streamed.idx','streamed.dat', | |
7 | ... 'gilename䔀abc.dat','gilename䔀abc.idx', | |
8 | ... 'overwriteme.idx', 'overwriteme.dat'] | |
9 | >>> import os | |
10 | >>> import time | |
11 | >>> for f in files: | |
12 | ... try: | |
13 | ... os.remove(f) | |
14 | ... except OSError: | |
15 | ... time.sleep(0.1) | |
16 | ... os.remove(f) | |
17 |